Nice video! I had this on in the background, so forgive me if you mentioned this, but there is a really nice way to invalidate many "proofs" of the Collatz Conjecture. The trick is to see if the argument still holds when 3 is replaced with 5. If the proof still works, then the argument is definitely invalid, since the 5k+1 version does not always converge to 1. For example, starting with 13 will eventually return to 13. In general, checking whether a proof is "too strong" is often an excellent sanity check that an approach might be fruitful. I'm not sure if this applies to your method, but it's a good idea to keep in mind.
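To make that concrete, here's a tiny Python sketch (my own, nothing rigorous) that follows 13 under the 5n+1 rule and watches it cycle:

```python
# Quick sanity check: follow 13 under the "5n+1" variant and watch it come back to 13.
def step_5n1(n):
    # halve if even, otherwise apply 5n+1
    return n // 2 if n % 2 == 0 else 5 * n + 1

n, seen = 13, []
while n not in seen:
    seen.append(n)
    n = step_5n1(n)
print(seen, "-> back to", n)
# prints [13, 66, 33, 166, 83, 416, 208, 104, 52, 26] -> back to 13
```

So any argument that works just as well with 5 in place of 3 has to be wrong somewhere.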
This wasn't mentioned but I love this! I'll for sure mention it if I make a future video (and remember to mention it xD) Thank you for bringing it up :D Edit: I plan to make listenable content btw. Math videos to chill, game, sleep or do chores to haha. Absolutely feel free to have it as background noise - it's why I cut out almost all the silences.
I met a 65 yo guy who never studied maths past middle school (he did not go to high school) and he is convinced he solved Collatz and the Riemann hypothesis. I took a look at his work, it is just a load of basic arithmetic and he does not prove anything lol
The sad reality is that most "proofs" of this conjecture boil down to similar things. I want to encourage a love of math and an exploration thereof, but arrogance is a delicious poison with a bitter antidote. Worse yet, some are too far gone to even entertain the possibility that they *could be* wrong.
So I was at a bar... really I was, and the guy near me was going on about non-particle physics - he solved dark matter by supposing that his stuff covered things that weren't particles. I'm not even sure what to say to that
@@dercooney The funny thing about that crowd is that dark matter isn't even a problem which can be solved by theorists. There are plenty of working models of dark matter, plenty of dark matter candidates. The problem is in choosing a model that represents the universe, not coming up with a new one. Developments in this field will come from experiment.
In theory, for each number, there are 3 possibilities. 1: it collapses to 1-2-4; every positive integer we've ever checked does this. 2: its sequence eventually loops back to itself, creating a ring. Negative integers actually do this. It would imply a second root to the structure, and if you have that root, you can use any member of that loop to generate infinitely many numbers that fall into this root by multiplying with powers of 2. 3: some number runs off to infinity. In this case we can find infinitely many numbers that run off to infinity by applying the conjecture's rule to the number repeatedly, as well as by multiplying any number in this sequence by every power of 2.

This means that either the conjecture is true, or there are infinitely many counterexamples, we just haven't found them yet. There may also be non-trivial extensions to counterexamples, but that requires using Collatz in reverse to look for odd numbers that evolve into the counterexamples, which is non-trivial itself, and involves scanning numbers 1 less than the product of a counterexample and powers of 2 for divisibility by 3. The trivial extensions will always be greater than the counterexample; the non-trivial extensions may not be, meaning a number may climb up into a loop.

This gives some structure to the problem: if we can disprove that 2 and 3 are possible, we prove the Collatz conjecture. If we can show that some sequence is in group 2 or 3, we disprove the conjecture. If we can prove that every number except 1 eventually descends below itself, we prove the conjecture, because both a loop and a runaway to infinity require a lowest value of their sequence. This paper shows that proving you can invert Collatz to find the full set of the natural numbers is extremely non-trivial, but so is, it seems, every pathway to proof. Good to know you had fun working on this. That's the most important thing.
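To illustrate that "Collatz in reverse" scanning, here's a minimal Python sketch (my own illustration, just the idea in code):

```python
# The odd predecessors of an odd number c are the odd values x = (c*2^k - 1)/3,
# i.e. we scan c*2^k - 1 for divisibility by 3.
def odd_predecessors(c, max_k=10):
    preds = []
    for k in range(1, max_k + 1):
        m = c * 2**k - 1
        if m % 3 == 0 and (m // 3) % 2 == 1:
            preds.append(m // 3)   # 3x + 1 = c * 2^k, so x evolves into c
    return preds

print(odd_predecessors(5))   # e.g. 13 shows up, since 3*13 + 1 = 40 = 5 * 2^3
print(odd_predecessors(7))   # 9, 37, 149, ... each reach 7 after one 3n+1 step and some halvings
```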
Wow this is relatable. I first heard of this problem and got into it around the age of 14 - 15. I immediately found it really really interesting and wanted to learn more. For me though, I had an entirely different approach: from the start I always tried to prove that every number n eventually reaches a number lower than itself (except one, of course). This would also prove the conjecture. I tried a lot of things and to this day I am actually surprised how far I reached into the conjecture without any outside help. For example I got a great understanding of the conditions for a loop, or why exactly numbers that shoot up to infinity would have to be so incredibly rare. Over the years, every time I learned something new in math or heard of a new concept, I threw it at the problem. I really learned a ton of stuff just from doing basic research and also first got familiar with research papers. Overall it really helped my understanding of math concepts and, as you already said, gave me motivation to pursue mathematics further.
This is part of why I hate it so much when people call it a waste of time or when they discourage others from looking at the problem. It's so interesting and rich with examples. And fun! The only snag is that many people that do look at the problem also let their ego get the better of them, then they claim victory when they really shouldn't.
Collatz wasn't that problem for me, mine was integer factorisation. It led me to learn more about optimisation and computer programming when I was convinced my approach would work but that the computer was too slow, and then inevitably I would go back to the maths to continue looking into it deeper when I realised my program was going to take forever for large numbers.
This is a good video; nice work. I read about the Collatz conjecture when I was about 13, and have thought about it, put it down, and come back to it, many times in the 35 years since then. Along the way, I went to graduate school a couple of times, and picked up a couple of degrees in mathematics. Whenever I learned more math, I returned to the problem, and figured out a little bit more. At the beginning, I worked independently, not as a matter of principle, but because I was a child, and had no idea about mathematical literature, much less a clue how to find papers, or the ability to read and understand them. Eventually, I did look at some of the seminal work by Terras, Crandall, Thwaites, Lagarias, and others. Meanwhile, I've continued to develop my own approach, which diverges a bit from the well worn paths. That said, I don't flatter myself that I'm doing anything truly original. I do it for the fun; I do it for the exercise. Over the years, I've also made a hobby of reading attempts by would-be provers, helping them cast their ideas into more standard language and notation, and pointing out their errors in reasoning. From that experience, I can confirm what you've said are the most common approaches to the problem. I laughed at the part where you said, "That's crazy! Who would think of that?" I agree that it's not a waste of time. Having this problem in the back of my mind has motivated so much of my study of number theory, which has been richly rewarding. I ended up with a PhD, and a career as a professor! I have learned more than I imagined I would about p-adic numbers, probability theory, multivariable calculus, computer programming, and data analysis, by applying those tools to this problem. Currently, I am working with a collaborator on a paper that I think will be ready for publication early next year. It's not about the conjecture itself, but it is Collatz-adjacent, and inspired by the conjecture. For me, the most fruitful and interesting path has been the exploration of loops other than the famous 1,4,2 loop. By extending the domain of the Collatz function to all rational numbers with odd denominators (the localization of the integers at 2, if you want to be formal about it), we see infinitely many loops, which display all kinds of unexpected behaviors. Classifying and investigating this menagerie of rational loops has provided me with years of entertainment. Anyway, thanks again for the video. If you'd like to compare notes in more detail, let me know; I enjoy corresponding with fellow enthusiasts.
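For anyone curious what those rational loops look like, here's a toy Python example (my own sketch, not the commenter's actual work): run the Collatz map on fractions with odd denominator, where "even" means the numerator is even.

```python
from fractions import Fraction

def collatz_rational(x):
    # halve if the numerator is even, otherwise apply 3x+1; the denominator stays odd
    return x / 2 if x.numerator % 2 == 0 else 3 * x + 1

x = Fraction(1, 5)
orbit = [x]
for _ in range(4):
    x = collatz_rational(x)
    orbit.append(x)
print(orbit)
# the orbit is 1/5 -> 8/5 -> 4/5 -> 2/5 -> 1/5, a loop that never meets the integers
```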
Amazing stuff. You're clearly very intelligent and have great pedagogical value. Often showing what doesn't work is just as valuable as showing what does
My favourite formulation of the conjecture is to prove or disprove whether the function f(p) = odd-factor(3p+1) is eventually decreasing. That is, for all odd p > 1, you can get a k > 0 such that f^k(p) < p. This function f is a collapsing of the Collatz iteration function to a definite domain and range of only odd numbers, allowing the proof to ignore even factors entirely. That said, the odd-factor function is what's holding us up, as mathematicians still don't have the right insights to understand how it interleaves with other functions. As for how to derive this, the idea is to realize that all even iterations of the Collatz iteration function can be collapsed into getting the odd factor before redoing the 3p+1 step. Now, if f eventually decreases for all values at or below p, then it must necessarily reach a minimum by the well-ordering principle (here that would be 1), otherwise we would have an infinite descent situation with the sequence of every time f eventually decreases. This also inspires the p > 1 bit, as we need to stop the descent somewhere, namely at the lower bound of 1.
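To make that concrete, here's a minimal Python sketch (my own, assuming I've stated the formulation correctly): f(p) is the odd part of 3p+1, and we look for the smallest k with f^k(p) < p.

```python
def odd_part(n):
    while n % 2 == 0:
        n //= 2
    return n

def f(p):
    return odd_part(3 * p + 1)

def steps_to_drop(p):
    # smallest k > 0 with f^k(p) < p (this terminates for every odd p tried so far)
    q, k = p, 0
    while q >= p:
        q, k = f(q), k + 1
    return k

for p in [3, 7, 27, 31]:
    print(p, "drops below itself after", steps_to_drop(p), "odd steps")
```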
It is actually quite straightforward to understand why 1, 5, 21, 85 etc. are the "odd factors" of n in D_0, if you think about how those numbers are written in base 4. Spoiler: 1 = 1, 5 = 11, 21 = 111, 85 = 1111 etc., and if you multiply those numbers by three you get 3, 33, 333, 3333 etc., and if you then add 1 to that new number (remember we are in base 4), you will by definition end up at 10, 100, 1000, 10000 etc., which are all clean powers of 4. Also, if you multiply any number in base 4 by 4, you are simply appending a 0 to the end of the original number. If you also add 1, the result is instead that you append a 1 to the original number. This is the reason why all those numbers are linked by that recursive formula a_n = 4a_{n-1} + 1. I almost forgot to say thank you for the video!
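A tiny Python check of that base-4 picture (my own sketch):

```python
# a_n = 4*a_{n-1} + 1 gives 1, 5, 21, 85, ... which are 1, 11, 111, 1111 in base 4,
# and 3*a_n + 1 is always a clean power of 4.
def to_base4(n):
    digits = ""
    while n:
        digits = str(n % 4) + digits
        n //= 4
    return digits or "0"

a = 1
for _ in range(5):
    print(a, "=", to_base4(a), "base 4;", "3a+1 =", 3 * a + 1, "=", to_base4(3 * a + 1), "base 4")
    a = 4 * a + 1
```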
Changing bases is something I see very often but never thought to do myself ^^ thank you for this, I agree that it's much clearer in base 4 why the members of D_0 are that way ^^ Wonderful insight ^^
I remember when I started my take on the Collatz Conjecture, and I don't think I've seen anyone else try and do this before. Instead I went and tried to find properties of a similar family of problems, in the form {a*x/2+b, c*(x+1)/2+d} for the mod 2 case. I didn't go too far with it, but found that most other coefficients are somewhere between easy and trivial to find out if they tend to infinity or have loops.
This is one thing I've been curious about and have made some brute force attempts at too. One very trivial fact is that for any number which can be reached, its products with ALL powers of 2 are also on the tree. The other interesting fact is that this tree cannot be represented in 2 dimensions cleanly, as many nodes overlap in strange (but predictable) ways. For example, the numbers 4 and 5 can be shown to overlap on this tree.

Start by drawing it out like a binary tree from the root node 1, where going down and to the right means multiplying by 2, and going down and to the left means subtracting 1 and dividing by 3. You can draw this tree down to 4, and then when it splits off, it recursively contains itself if you go to the left, as it returns to a new copy of the root node of the tree, 1. But, if you keep going down and right from the tree to 16, you can again go to the left, which arrives at 5. The strange feature is that if you continue the tree after going left of 4, so go from 4 to 1 to 2 back to 4 again, this new 4 will overlap in the same location as the 5 connected to the 16 (you need to draw all nodes with equal spacing on the tree, don't avoid overlapping nodes, this is actually an intentional feature).

So the question is, what does it mean for two numbers to overlap? Why 4 and 5? Well, the simple answer is actually just the binary representation of 1/3, which is 0.010101... Since the whole idea of moving down and to the right is multiplying by 2, that means it's equivalent to a binary shift to the left. 4 is the result of cutting off the repeating digits after the first 1, then shifting all the bits to the left, but 5 is the result of chopping off only after the first TWO 1s, then shifting the bits to the left by the same amount, which is why they are 100 and 101 in binary respectively. In fact, 10000 (16) and 10101 (21) also overlap. This also holds for going down further layers to the left on the tree, but it gets MUCH more complex to represent. But, ultimately every single number on the tree is just the result of bit shifting 1/3 plus some other chunk of digits on the head of the number. That doesn't make it easy to solve of course, but visualizing it this way, I feel, makes it make more sense what the rules actually do to the numbers as you navigate the tree.

(Btw, the overlapping of numbers also has some unique effects. For example, if you navigate up the tree to the right from 4 and 5 - yes, even though 4 is even and technically should only have a path to the left - then you can see that to get from 4 to 16, it's the result of multiplying by 3 and adding 4, whereas getting from 5 to 16 is multiplying by 3 and adding only 1, like the actual rule should be. This extends into a further pattern to show that even overlapping values on the tree can navigate to parents of other values in that place, and the amount you add after multiplying by 3 will ALWAYS be a power of 2 related to how far you had to navigate down the tree before arriving at the overlap. If you instead look at 2, you can show that it connects to 8 by multiplying by 3 and adding 2, you can show that 8 connects to 32 by multiplying by 3 and adding 8, and you can connect 16 to 64 by multiplying by 3 and adding 16, which is why 16 and 21 overlap. In addition to 16 and 21, 20 also overlaps in the same place, because 4 and 5 overlap, and going down to the right on the tree from 5 puts 20 in the same spot, which is 10100, again showing that it's just 1/3 in binary with chopped-off repeating digits.
Because you only had to navigate 2 spaces to the right from 5 to get to 20 however, to connect it back to 64 you only need to multiply by 3 and add 4, instead of 16 like for connecting 16. It’s all connected, it’s all binary thirds.)
@joshuaiosevich3727 I'm well aware that brute force doesn't work, but what I'm more talking about is determining if there are any details related to the patterns I've stated that can be abused to show that any sequence of starting digits must be possible to make. Because of how the pattern as I've stated works, you ONLY need to consider the leading digits that aren't a repeating sequence of 1 and 0, OR trailing 0s. If you have a number like say 1001101010100 (in binary of course) then you only need to consider the 10011 part at the start, as of course the trailing 0s can be removed easily, but ALSO I can determine that the 01010101 can be excluded, as it's just a trailing 1/3 that's been shifted to the left a bunch, meaning, based on the pattern I've established, it is trivial to show that - as long as the leading digits do - it connects somewhere on the tree, as far as I understand. This only leaves the beginning digits, 10011, as far as I can tell. Maybe I'm making a mistake somewhere, but it even further reduces the search pool than just chopping off trailing 0s, if it's all correct.
I'm still watching.... (short pause) But I started with the same mindset. I heard about the conjecture and basically refused to read up on what people had tried so far, wanting to figure it out all on my own. I worked now and then on it over a period of 4 years. Sometimes every day a bit, sometimes on a Saturday up to 6 hours. Sometimes a few weeks break. I looked at the usual formula, thought probabilistically first ofc, calculated backwards, moved the index, thought about tree-arguments and counting arguments, looked at things with modulo arithmetic, trying to reduce a bit that way what one needs to prove. I looked at the general parametrized formula that describes the whole global structure, I thought about whether using the rational numbers could help me. I looked at the binary automaton, that {0,1}-sequence ... But basically whenever I thought I had a proof, slept over it, thought more about it, I noticed I had made a thinking mistake. I went through that 3 or 4 times. I tried all these things, before I realized other people have done the same things already. 😗 I was wondering whether p-adic numbers, continued fractions or harmonic analysis could be the next tool, but that would have been like starting from zero, needing to learn new techniques, not really having a clear idea whether it could get me somewhere then. So 1 or 2 years ago I felt I needed some new input, finally read some research papers about the Collatz Conjecture, also one that summed up what has been found out so far by mathematicians, and that was something I should have done way earlier (!). I should have maybe worked on the conjecture for a few weeks just by myself, but then really looked at what professional mathematicians have done already. To waste less time.
edit: So now, whenever I read that someone claims to have proven the Collatz Conjecture, I immediately think that with a 99% chance the proof is wrong. For it is very easy to make thinking mistakes there.
edit again: And yes, working on the conjecture was fun. But well, likely the primary thing most people will find out is that it is a really hard problem. ☺ One needs to rule out a cycle, and rule out that there is a starting value that makes the sequence rise asymptotically to infinity. And if you believe you have shown that this can't happen, your argument is most likely flawed. Sleep over it, think again ... before you declare victory. That describes my experience. 😉 The problem with the conjecture is that it insinuates it must be true, but there are similar functions that behave almost like that, creating almost a tree, but nevertheless having unexpected exceptions.
What a wonderful and well thought out comment! I said "for reasons removed from this paper,..." somewhere in the video, saying that I find "proofs" out there unconvincing and I'll likely elaborate on this in the future. Sadly I don't think I should say much more here yet - but I do plan to make up to two more Collatz videos. In the meantime, though, I'm going to branch out (pun intended) and cover topics unrelated to the conjecture first :) Thank you for watching and thank you for this excellent contribution to the conversation!
23:48 If there is one number that fails (or multiple), there would indeed be infinitely more, because we can multiply by 2 as many times as we want and all of them reach that number.
@@michaelihc2452 indeed, my head was stuck on the "odd factors" and I was thinking that one counter example would not necessarily give us access to infinitely many other counter examples which have other odd factors. Others have pointed out that we can actually generate new odd factors in some situations, and this intuition is wrong (which is ironic, considering that my point was that intuition is unreliable and here I am, putting a silly claim forth because I fell prey to my intuition). Either way, indeed, you're correct and this was an oversight. I'll try to do better in the future!
I was with your method until 2.2 in your paper, where I then began to try to find a function that would skip from any odd number to the next multiple of 4 in the sequence. If you have all odd numbers described as (2^y)n - 1, with n being an odd number and y being an integer > 0, then 2y was the number of steps in the sequence to the next multiple of 4. I used this to eventually discover that every 2 steps in the sequence multiplied n by 3 and decreased y by 1. This eventually led to the equation (3^y)n - 1 to get one step past the next multiple of 4 in the sequence. This is where I am now. I might be able to do more after I learn calculus in or before my senior year of high school.
@@soulsand4287 nice! Be cautious if you're relying on this video's exact equations. Because of the casual context (and frankly because I made a mistake), there are exponents where there shouldn't be exponents on the 3's in the denominators of D_k. I possibly made other mistakes. This is just meant to show the general idea of what someone might try, and how they can scam themselves into thinking the conjecture is proven or that it is true, when we definitely didn't achieve that in this video. The conjecture is rich with examples and many little questions that one can figure out (and even prove) along the way, making it an excellent tool for motivating one's exploration of some branches of maths ^^ If you have any specific questions or the like, feel free to reach out, I'll reply if I have a moment (and the energy) ^^
I remember trying this problem and getting far enough to say that "if there's a counter-example that infinitely increases in size, it can't be repeating." Basically, if steps of the Collatz conjecture were treated like digits in a number, then an infinitely increasing counter-example would have to be irrational.
I'm not sure I understand what you mean. I'll do my best to answer what I think you're asking, though. It can be shown that if the odd factor of x is b and the odd factor of 3b+1 is c, and b
@@MathIguess Just typed up a 2 part explanation and YT seemed to delete part 1 because it got too long... hold on I think I'm gonna need 4 parts after this.
@@MathIguess So let's start by assigning each number a letter. 1 is A. 2 is B. 3 C, 4 D, 5 E, 6 F, and then 7 is A again, 8 is B again, and so on. D is defined as an even number that is +1 from a multiple of 3, and as such all odd numbers (A, C, and E) will, after undergoing a step, become a D. An F will divide into another F or a C. A D can divide into a B or an E. A B can divide into a D or an A. Fs and Cs are irrelevant since they'll just eventually degrade into a D and can't ever return to an F or C. Part 1 of 4
With the chains of logic, you can work out that Ds can act as the inputs and outputs for functions that are then chained. There are 3 possible functions. D-E-D, D-B-D, and D-B-A-D. I call these the E function, B function, and A function respectively. And as the last thing we need to do, we need to introduce numbers back onto the letters. D1 is the first positive D, 4. D2 is 10, and so on. Assume the same was done to the other letters. Part 2 of 4
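Here's a rough Python sketch of that labelling as I understand it (my own code, just to make the chains concrete): the letters are residues mod 6, with F standing for the multiples of 6.

```python
LETTERS = {1: "A", 2: "B", 3: "C", 4: "D", 5: "E", 0: "F"}

def letter(n):
    return LETTERS[n % 6]

def collatz(n):
    return n // 2 if n % 2 == 0 else 3 * n + 1

# Collect which letters each letter can step to, over a range of numbers.
transitions = {}
for n in range(1, 10000):
    transitions.setdefault(letter(n), set()).add(letter(collatz(n)))
print(transitions)
# Every odd class (A, C, E) steps straight to D, D splits into B or E,
# B splits into A or D, and F feeds into F or C -- matching the chains described above.
```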
I remember trying to solve the Collatz conjecture back in high school, and I find it remarkable how many of these steps are similar to how I approached it. I remember in particular arriving at the formula for expressing any number given all the sub-sequence lengths, and thinking I was making big progress. I used the extended version of the Collatz conjecture, which is defined for any fraction, so that the formula would always be defined. The goal in that case was to show somehow that the set of all fractions generated from this would contain the odd integers (excluding 1, -1, -5, -17); I soon realized this was no easy task. I also came up with a general formula for expressing any loops; in that case the goal was to show that the resulting fractions a/b generated by such a sequence were never an integer, or equivalently, that a and b are always coprime. This again was no trivial task.
I hope you had fun along the way and I'm glad you didn't reach that point and then call it a victory. Many people don't realise that there is more work to be done after reaching a similar point in their attempted proofs.
I felt so called out in that first paragraph in the intro. 😂😂😂 It is still fun to get that satisfaction of deriving some methods/equations on your own, even though they’ve already been done before.
@@crazydog1750 absolutely ^^ I plan to call out my audience much harder in the next video on the topic xD probably something like "you specifically can't prove this conjecture. Here's why." Or something unhinged like that. Problem is that I'd be treading a line between meme and clickbait, which I don't wanna do. I'm still thinking about the title etc.
@@MathIguess Actually though I should be frustrated with you because this video made me go back and look through my old "proof" attempts of this stupid problem thinking "Hm... what if I did this?" 😂🤣🤣 It's just so tantalizing until you've dumped a hundred hours into it and realize that you're never gonna get anywhere. But slowly that wears off and the next time you think about it you're like "What if I take another look at it?" and the cycle repeats.
@@MathIguess But can it really be a waste of time if you're developing a more intimate understanding of the beautiful relationships between these wonderful symbols we call numbers? 🤔😅
In the intro, you could just as easily make the "if n is odd" output half of the expected size on purpose: the regular output... is even, so every time you apply the "odd" rule you (by default) immediately apply the "even" rule!
This is an equivalent formulation of the problem indeed, which I believe would lead us down the same path if applied. Still, you're correct, it does "cut to the chase" a bit better in some sense ^^
Such a useful video. It will convince a lot of young mathematicians to not waste their time on this dangerous problem. Seriously guys, just stop with it and focus on real math. Just let the real experts work on it because you will never make it with the level that you have now. If you want to work on it, then get good before because otherwise you guys will just become crazy.
@@PictooMath I mean.. I still encourage this exploration for fun, but any serious attempt at a proof should indeed be left to experts, as I'll show in a future video ^^ Thank you for the kind words ^^
Well I mean it can be fun to mess around with this kind of problem, I don’t think most people who do so really think they will solve the problem but it’s still fun to mess around with. What’s the harm
The approach I took when I looked into it a while ago was to investigate the conditions for the SMALLEST number which doesn't go to 4,2,1, in terms of forms an+b (starting from 2n+1, as any even number trivially goes to a smaller number). This allowed me to substitute both n=2k and n=2k+1 and step each forward. In many cases, this allowed ruling out specific conditions which trivially became smaller after a certain number of steps, and adding additional conditions to check when the result had odd a and even b.

Example: 2n+1
Try n = 2k: 4k+1 -> 12k+4 -> 6k+2 -> 3k+1 (< 4k+1 for k>0 - contradiction)
Try n = 2k+1: 4k+3 -> 12k+10 -> 6k+5 (always odd) -> 18k+16 -> 9k+8 (could be odd or even depending on k, > 4k+3, add 4n+3 to the list of conditions to check)

Unfortunately, in some cases both conditions were equally valid, leading to an exponentially growing list of conditions to test. I did manage to prove a lot of existing results about the smallest number which doesn't go to 1, but I didn't really get anywhere on the question of whether it exists
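Here's a rough Python sketch of the bookkeeping I mean (my own toy code, just to illustrate the branching):

```python
# Represent a family of starting values as a*k + b and push it through Collatz steps,
# splitting on k = 2m / k = 2m + 1 whenever the parity of a*k + b is ambiguous.
def step(a, b):
    """One Collatz step applied to the whole family a*k + b, if its parity is fixed."""
    if a % 2 == 0 and b % 2 == 0:
        return a // 2, b // 2          # always even: halve
    if a % 2 == 0 and b % 2 == 1:
        return 3 * a, 3 * b + 1        # always odd: 3n + 1
    return None                        # a odd: parity depends on k, must split

def split(a, b):
    """Substitute k = 2m and k = 2m + 1 to get two sub-families."""
    return (2 * a, b), (2 * a, a + b)

# Reproduce the worked example: start from 2n + 1 with n = 2k, i.e. 4k + 1.
family = (4, 1)
trace = [family]
while step(*family) is not None:
    family = step(*family)
    trace.append(family)
print(trace)                 # [(4, 1), (12, 4), (6, 2), (3, 1)] -- and 3k + 1 < 4k + 1 for k > 0

# Then the n = 2k + 1 branch, i.e. 4k + 3.
family = (4, 3)
trace = [family]
while step(*family) is not None:
    family = step(*family)
    trace.append(family)
print(trace)                 # [(4, 3), (12, 10), (6, 5), (18, 16), (9, 8)]
print(split(*family))        # 9k + 8 has ambiguous parity: split into 18m + 8 and 18m + 17
```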
@@thinker-12 I heard about a person that went down this path in great detail as part of their master's thesis. I say this to say... nice, you did something cool! ^^
I was more into the route about the expanding phase. It is basically like considering x·2^n - 1 (e.g. 79 = 5·2^4 - 1): what it does is take 2n steps to convert that into x·3^n - 1 (i.e. 404 = 5·3^4 - 1), due to repeated (3n+1)/2 operations. This somehow seems special, as the even number x·2^n, being 1 more than x·2^n - 1, takes the route down to x instead. x·2^n - 2 becomes x·2^(n-1) - 1, and takes 2n-2 more steps to turn into x·3^(n-1) - 1. x·2^n - 3 becomes 3x·2^(n-1) - 4, and 2 more steps give 3x·2^(n-3) - 1 if there are enough factors of 2. x·2^n + 1 becomes 3x·2^(n-1) + 2, and a step later 3x·2^(n-2) + 1.

And it seems magical when you consider that there are many consecutive ranges of numbers that take exactly the same number of steps to reach 1. (A006667) Notice that if x·3^n - 1 gets one factor of 2 fewer than x·3^(n-1) - 1, they will get onto the same path, thus having the same number of steps.
7×2^3-1 (55) → (6 steps) → 7×3^3-1 (188) → 94 (8 steps in total)
7×2^3-2 (54) → 27 → (4 steps) → 7×3^2-1 (62) → 31 → 94 (8 steps in total)

It feels like something that happens simultaneously between consecutive numbers, converging until they become the same. Especially for numbers that are close to powers of 2, like (1028, 1029, 1030, 1032, 1034, 1035, 1036, 1037, 1042, 1043, 1047), which all take the same number of steps to reach 1, as they have enough factors of 2 to eventually converge. (i.e. 1028 and 1029 meet at 772, and 1030 is at 773. 772 and 773 meet at 580. And all these numbers meet at 94 after 19 steps.) Then it boils down to finding out how the numbers that eventually diverge behave differently. (e.g. While 27 itself in the above example looks weird, it "somehow connects" with 55 via itself doubled, 54.) It seems like some numbers have larger gravity to bring more nearby numbers together.
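If anyone wants to check that expanding phase numerically, here's a quick Python sketch (my own, just a sanity check):

```python
# x*2^n - 1 should take exactly 2n Collatz steps to reach x*3^n - 1.
def collatz(v):
    return v // 2 if v % 2 == 0 else 3 * v + 1

def check(x, n):
    v = x * 2**n - 1
    for _ in range(2 * n):
        v = collatz(v)
    return v == x * 3**n - 1

print(check(5, 4))                                                      # True: 79 reaches 404 in 8 steps
print(all(check(x, n) for x in range(1, 50, 2) for n in range(1, 10)))  # True for all small odd x and n
```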
After playing around with it for a while years ago, my intuition was that if there was going to be a proof it would be a proof by contradiction. The extra challenging thing with that, though, is that there are 2 possibilities for a counter example, and they are very unlikely to have an easy way to connect them. So it's entirely possible that it can be shown that there can't be any other loops, while it's not possible to show that there is no sequence that grows endlessly. And what's worse, it could also end up being a conditional, where if there were to be another closed loop there can't be endless growth, or the other way around, but it can't be shown that both are false at the same time. That's why I don't think I'll ever think about this too seriously. It just doesn't seem like it would be worth the effort for me. No matter how fun it is.
@@Vearru perfectly reasonable, the conjecture is a virtually endless wealth of examples to practise, but one can only interpret the same painting so many times before wanting to look at a new painting, much like the conjecture eventually grows stale for those that dwell on it too long when the goal isn't to actually solve it, but to understand it better.
@wibbuffey Proof by "shows up, drops a bunch of shenanigans, refuses to elaborate, leaves" is a classic. What a Chad move. If your grad school professor would find this compelling, he would also not find it surprising, and it'd have been "case closed" decades ago. The step that lost me is where pi is congruent to 1. The zeta function when evaluated at 2 is indeed pi^2/6 (search "Basel problem" for more details), but that pi isn't some arbitrary variable or something. It's pi. As in 3.14... After that step, for unexplained reasons, we explore 1+1/4 before evaluating only 5 with the Collatz function, then we call it a day. Maybe I'm slow but I don't understand what was done here or how it proves anything. Must be because I'm not a professor yet.
@@MathIguess My comment was more a joke but I'm very confused on the reasoning. I don't see how pi is congruent to 1...? Whipping out the zeta function out of nowhere sans explanation to solve the Collatz conjecture and then making a series of incomprehensible claims is absurd
I'm a little bit confused why the heavily nested expression has powers of 3 in the denominators instead of just 3's. I think the nested structure would implicitly multiply the 3's to some 3^k when you multiply all the terms.
@@dennismuller1141 you're correct. I made a mistake while copying partly from handwritten work and partly from a previous section of the same paper. In a nutshell, you're correct and it's a mistake. The expression remains absurdly complicated, though, which is why I still feel that any claims of having proved the conjecture would be a claim of a method for solving a crazy equation and it still shows how wild the conjecture is (but unfortunately in an exaggerated way because of a mistake).
An interesting idea that I had about this problem was to convert the numbers into their binary representation. By doing so you can reformulate the problem: you repeatedly multiply the number by 3 and then add a 1 at the position of the last 1 of the binary string. So for example if I take 14 (1110) it becomes 14 × 3 + 2 (101010 + 10) = 44. It is easily provable that these problems are equivalent. With this, the only thing one needs to show to prove the conjecture is that the trailing zeros of the binary string grow faster than the string itself, resulting in the zeros catching up until only one 1 is left and we have our power of 2. I haven't proven it (but think I could), but I assume that the string's length "speed" approaches 1.5 characters per iteration as the number of iterations tends to infinity. However, to show that the speed of the zeros is larger than that is way harder, and I only had some trials of possible ways, by looking at the behavior of clumps of 1s in the binary strings when applying the function. They started to shrink and leave patterns that could lead to the reason why the speed of zeros is >1.5, but this is just my imagination...
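For concreteness, here's a tiny Python sketch of that reformulation as I understand it (my own code): instead of dividing out 2s, keep the trailing zeros and add 2^(position of the lowest set bit).

```python
def binary_step(v):
    return 3 * v + (v & -v)      # v & -v isolates the lowest set bit

def odd_part(v):
    while v % 2 == 0:
        v //= 2
    return v

v = 14                           # 1110 in binary
for _ in range(8):
    print(bin(v), "odd part", odd_part(v))
    v = binary_step(v)
# The odd parts follow the ordinary Collatz trajectory of 14 (7, 11, 17, 13, 5, 1, ...),
# and the string ends in more and more zeros; reaching a single 1 followed by zeros
# is the same as reaching a power of 2.
```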
I've seen this one a few times too, but it's usually from those with a computer science perspective. Would you say you're more of a computer scientist than a mathematician? (Of course, one can be both and the lines are arbitrary and blurry if we get theoretical enough!)
Me personally I would've streamlined the definitions of Dn slightly by setting D0 to be the set of all numbers where the odd factor is only 1. That way, you could avoid doing duplicate work between what was your D0 and general Dn's (basically, your D0 is my D0 and D1). It also makes sense intuitively that n is the number of odd iterations done thus far. You would then be able to do the working out on the D1 case just after proving that D0 is the set of powers of 2.
It's as easy to prove as it is to disprove. I side with Paul Erdős on this one: our current mathematics isn't advanced enough for this problem. Or at least I believe it was him who said that.
Please correct me if I'm wrong, but it seems to me that, in the formula for D_k at the bottom of 2.2.2, the denominators should be all exactly 3, as opposed to powers of 3.
@@juandesalgado another pointed this out and I think you're correct - I was partly copying from handwritten work where brackets were expanded (hence the exponents) and partly working from the previous parts of the paper itself and I think I confused myself x_x
There are lots of sensible variations of the Collatz problem. In the basic version, is it true that you will always end up in some cycle (so the sequence will not become unbounded), possibly not the 1-4-2-1 cycle? If so, is this also true if we replace the 3k+1 by ak+1, where a is any odd number, and if so, is there a bound on how long such a cycle can get, expressed as a function of a?
@@tommyrjensen I honestly don't know; even the work I've done that doesn't appear in this paper doesn't look into that. In all my explorations, I always focused on this specific version only. I think that Terence Tao's work on this conjecture sort of answers what you asked, but not for other versions. It's definitely something to look at ^^
@@MathIguess Tao did do something. So far as I remember, he proved a statement about something being true for almost all numbers, not something definite. I could also be wrong here. What I am trying to say is that there are things one can try to prove that are formally easier than the full conjecture, like whether you can definitely eliminate sequences that go unbounded, or have very long cycles. EDIT: the short version is that Tao proved, roughly speaking, that almost all numbers 'almost satisfy' the Collatz conjecture.
I liked it - seems like there is definitely some halting problem logic that you got into in there haha. IE basically impossible to prove if a recursive program like you formulated will ever terminate.
I might try to show that no four numbers in a row, e.g. 28, 29, 30, 31, combine before reaching 4,2,1. It's very complicated so far, but I'm getting closer to showing this. Also, using the Collatz conjecture and another dumb result I made from it, I managed to prove that (2^a((q+1)/2) - 1)/q is always an integer for a(x) = A002326 on OEIS, meaning that if there are infinitely many primes in a(x) from composite q, there are infinitely many composite Mersenne numbers with prime exponent.
I think this is missing a handful of obvious things that everyone tries: Firstly, you only looked for length 1 cycles of odd factors, and found no solutions other than the 1-2-4 cycle (for positive integers), but a natural next idea is to look for cycles of longer lengths. For example, say I want to find cycles of length 2, then I get the equation: 3((3x + 1)/2^i) + 1 = 2^j x This equation needs x to be odd and (3x + 1)/2^i to be odd, but it's actually easy to see that these restrictions are the same as requiring that i and j both not be 0 (or more cleanly, i >= 1 and j >= 1). Also we require x > 0, i.e. x >= 1. Solving for x, we get x = (2^i + 3) / (2^(i + j) - 9). x > 0 implies 2^(i + j) - 9 > 0. We see that (2^i + 3) / (2^(i + j) - 9) = x >= 1 implies that 2^i + 3 >= 2^(i + j) - 9 >= 2^(i + 1) - 9 by j >= 1. Rearranging, 12 >= 2^i, hence 1
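For anyone who wants to see that worked through, here's a small Python brute force of the same 2-cycle equation (my own sketch, allowing negative x as well):

```python
# x = (2^i + 3) / (2^(i+j) - 9), with x and y = (3x+1)/2^i both required to be odd.
solutions = []
for i in range(1, 8):
    for j in range(1, 8):
        den = 2**(i + j) - 9
        if den == 0 or (2**i + 3) % den != 0:
            continue
        x = (2**i + 3) // den
        y, r = divmod(3 * x + 1, 2**i)
        if r == 0 and x % 2 == 1 and y % 2 == 1:
            solutions.append((i, j, x, y))
print(solutions)
# Only (2, 2, 1, 1) survives with x >= 1 (the 1 -> 4 -> 2 -> 1 loop); the other hits
# are negative, namely x = -1 and the (-5, -7) odd pair from the negative cycle.
```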
I think that there's a different comment that I've previously said I want to pin, but this comment takes the cake. I really appreciate this insight, and you make excellent points throughout. Thank you for taking the time to do so, because I can link people to this comment when their proofs go down one of these paths I didn't explore in the video. I have seen a few "proofs" of this conjecture but never explored the paths you mentioned here in much detail myself, so I wasn't as confident towards those as I was with the main one in the video, if that makes sense. Still, you are absolutely correct, I should have included this. I am not yet able to pin comments, but I'd like to pin this one when I can (assuming I remember to come back and actually do it). I will note that my next (likely my final) video about the conjecture will indirectly address all of these paths, but I wish I directly addressed them in this video. Thank you again for such an excellent contribution! ^^
It seems like it should be doable for each reasonably small k to (separately) show that there are cycles of length k (other than the one that is known). I forget what form the constraints are in for that? Oh, isn't there something like, if you take x mod 2^k, then the result of that will tell you what type of steps the first k steps will be when starting at x? And so, for each of the 2^k possible results of x mod 2^k, you get a degree 1 equation in x which an x would have to satisfy if it started a loop, and then if you calculate these out, you will find in each case that has been tried so far, that the single solution is either a fraction or is negative, or is not equal to the chosen thing mod 2^k? I imagine better techniques than this have been used to put very large lower bounds on what the smallest cycle other than the known one could be, if there are any.
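A quick Python sketch (mine, just illustrative) of the "x mod 2^k determines the first k step types" fact:

```python
# Two starts that agree mod 2^k follow the same odd/even pattern for their first k steps.
import random

def pattern(x, k):
    bits = []
    for _ in range(k):
        bits.append(x % 2)                       # 1 = "odd step", 0 = "even step"
        x = 3 * x + 1 if x % 2 else x // 2
    return bits

k = 6
for _ in range(5):
    x = random.randrange(1, 10**6)
    y = x + random.randrange(1, 100) * 2**k      # same residue mod 2^k
    assert pattern(x, k) == pattern(y, k)
print("first", k, "step types agreed on every pair tested")
```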
This is for sure something I wouldn't have arrived at in undergrad. Indeed, though, I've come across information that suggested that they were able to put bounds on cycles, much like what you're talking about. That, to me, is the beauty of this conjecture. There's always more to explore. More examples where techniques can be applied. It makes me sad how often people call this a waste of time. Fun is never a waste of time :D
Just realized I made a typo in my first sentence, where I meant to say “that there are *no* cycles of length k (other than the one that is known)”. I guess maybe the rest of the comment made what I meant to say clear? Hm, are you sure you were unlikely to come up with it at that time? I noticed it, iirc, when I looked at how the sequence works on 2^N + x , vs how it works on x (first looked at x=1), when both are expressed in binary. And then going from that to looking at how it acts on C * 2^N + x . I imagine that you probably had a decent chance of trying this same idea and it was just by chance that you didn’t try this? I mean obviously I don’t know you well enough to be confident in saying that, but still.
@drdca8263 Back then, I only worked on the problem as stated, in base 10 with only natural numbers and the like. And it felt like I was making progress (I actually made a mistake which I only noticed this year when revisiting what I did back then). Due to that mistake making it look like I could continue making progress, I didn't explore other avenues such as using binary or even something as simple as comparing 3n+1 to like 5n+1 or the like. It's nice of you to believe I'd get there eventually though xD
Hmm, I'll double check it to make sure. It's *very* easy to make a mistake with how complicated this equation gets, so I wouldn't be surprised. Thank you for mentioning it, I'll get back to you once I've checked ^^
I think that to prove this sheiz one would need to show that any sequence whose length tends to infinity also tends to match the expected proportions of powers of 2 and 3 on average; this automatically leads to a contradiction, meaning no sequence can tend to infinite length.
@@MathIguess good guess, lol, I did indeed look into binary. Since any binary number that is divisible by 2 ends with a 0, division by 2 is as simple as just ignoring the trailing 0s. I've noticed that if we do ignore the trailing 0s, any non-trivial number starts and ends with a 1. And thus I became interested in the digits between these two 1s: the patterns, the count, and how the entire number evolves after applying a single Collatz step. (I define a Collatz step as 3n+1 followed by removing all trailing 0s.) I really didn't care about the actual value of the number, just the patterns of 1s and 0s, and whether a number gains or loses a digit after one step. My thinking was, if I can prove that any number with n digits will have less than n digits after an arbitrary amount of Collatz steps, then the conjecture is true. Suffice to say I did not succeed and gave up lmao. I am mainly a programmer though, not a mathematician, so my terminology may be a bit wacky :P
I also tried the Collatz conjecture as an undergrad and failed badly. I did try a few more paths that you didn't mention, but in the end, I didn't get anywhere. I feel you misspoke when you said that if there's 1 counterexample, we don't know if there are infinitely many. You could easily show that there are, but the proportion of counterexamples compared to non-counterexamples seems to tend to 0 as N goes to infinity. I know because I tried that approach and failed... Maybe you meant that only an alternate loop counts as a counterexample and not a starting position that leads to infinity or a loop? I don't know...
Indeed, I dropped the ball on that part. I was thinking too much in terms of odd factors and wanted to say something about the length of a "non-trivial" loop if it exists, that it intuitively should have infinite members (and therefore not be a loop at all) but I ended up fumbling that thought and spoke instead about how many counter examples exist rather than how many unique odd factors of counter examples exist in some sense. I'll perhaps pin a comment with this correction in the future, but it just goes to show that my intuition can be awful xD and since that was the point I wanted to make, I think the harm done is minimal at least xD The alternate approaches you mentioned here aren't ones I explored, but it does sound interesting ^^
How does a Diophantine equation have "more unknown variables than equations" 😵💫? Is it not actually just an equation with all integer coefficients and all integer valued variables?
@@tommyrjensen yes, I'm not sure if I should have phrased it differently, but we're used to solving systems of equations, and when the variables are real-valued, having more unknowns than equations immediately tells us that we have infinitely many solutions; but for Diophantine (systems of) equations, we might still have unique solutions despite having more variables than equations in the system. Does that make sense?
@@MathIguess So you do mean "system of equations"; that helps to understand what you wrote in 4.1. Notwithstanding that Diophantine is supposed to refer to equations with all integer-valued variables. Also, the equation x²+y²=0 has more than one variable, and yet a unique solution for real numbers. Maybe you are thinking of restricting to linear equations only?
I went a little bit further in a different direction with my proof, but it also failed. I expected it to fail, so I enjoyed the process more than just getting disappointed.
Not a proof, but a pretty good heuristic argument (to me) comes from the binary representation of n. Any power of two will collapse immediately to a 4-2-1 cycle (because all the trailing bits are zero.) About half of the non-trivial cases reduce n by a power of two, the other half will yield 3n+1, which is always even, so after two steps n will be at most (3n+1)/2, and this does not occur often enough for _random_ n as input to blow up. We can drive the probability of n increasing down further by analyzing longer sequences of steps (obviously, in the limit, we see the conjecture: all numbers seem to tend to a 4-2-1 cycle, but this is begging the question.) Edit: to make a theorem of it, all n = k·2^p trivially collapse in collatz(k) + trailingzerobits(n) steps, because we just divide out the factors of two first. So, if k does not blow up, then k·2^p for any p will also not blow up. Since apparently all small integers k satisfy the conjecture, we have all of them _and_ all of their power-of-two multiples, which is a massive family of numbers which are all guaranteed to terminate.
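Here's a quick numerical companion to that heuristic (my own sketch, emphatically not a proof):

```python
# Using the "shortcut" map n -> n/2 or (3n+1)/2, each step is a halving or a roughly-3/2
# multiplication with about equal frequency, so the average log-growth per step should
# hover near 0.5*ln(3/4) < 0, i.e. trajectories drift downward on average.
import math, random

def shortcut(n):
    return n // 2 if n % 2 == 0 else (3 * n + 1) // 2

drifts = []
for _ in range(200):
    n0 = random.randrange(10**17, 10**18) | 1   # a large odd start
    n, steps = n0, 120
    for _ in range(steps):
        n = shortcut(n)
    drifts.append(math.log(n / n0) / steps)
print(sum(drifts) / len(drifts), "vs", 0.5 * math.log(3 / 4))
# both come out around -0.14
```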
@@yoavshati yes, this was an oversight, I was thinking in terms of odd factors when working on that section but almost trivially there would be infinitely many counter examples if we have one counter example. I should have been more clear on that.
Something tells me we would have to find a proper pattern in prime numbers to prove this conjecture ngl. Primes are so annoying to work with in math sometimes, like you can have a number with many factors, but then you add 1 and suddenly it's a prime. Also I guess it's safe to say that any proof must have something like "for any given N where N is ... "
@lool8421 Someone on reddit said something along the lines of being able to show that any counter examples to the conjecture would have to be prime numbers, in terms of what I called "odd factors" in this paper. Unfortunately, they refused to elaborate and left like a Chad so I'll never know what they meant. Seems relevant to what you're saying though.
Great idea presenting a paper this way. I had trouble publishing and this is nice to see. I didn't see the Collatz conjecture itself defined for some time though; does it presume the sequence terminates? Here is a proof for you, and I see you ("[spoiler]: everyone") were really close.
1. Prove numbers 2 and 1 diverge. They do; for example, 2^n and 1^n will only get further apart.
2. Prove that odd numbers are in the category of 1^n. They never converge exactly with the number 2, so are approximately value 1 like an irrational.
3. Prove the Collatz-conjecture format of 3n+1 oscillates sometimes between even and odd. Multiples of 3 do, and any fixed additive won't alter this behavior.
4. Show any rational convergence of two diverging directions is a line. This exists because 2 and 1 diverge, so any line between them must grow larger to infinite length. This line is what you have in "3n" for any n.
This is proof that all counting numbers will oscillate between even and odd, and the Collatz conjecture for any finite value terminates, but the formula (non-distinct, approx. 3x+b) is infinite. You're right, pushing yourself to try difficult problems independently is wonderful.
Not sure if you missed it, but I said that f is a function from the natural numbers to the natural numbers. I assumed that laypeople and undergraduates would typically take this to be {1,2,3,...} but indeed, it is clearer to say "the natural numbers excluding zero" or the like, since some mathematicians use the same symbol to denote the set {0,1,2,3,...}. Still, -1 is not on the table regardless of how you interpret \mathbb{N}.
Pretty sure there's a simple proof that no ever-increasing sequence can exist, either monotonic or not. In binary, a non-zero positive finite integer is a span of bits bounded on the left and on the right by a left-most '1' bit and a right-most '1' bit. (They can both be the same bit.) Instead of dividing out factors of 2, let '0' bits accumulate to the right as you perform "*3+1". Every time you times-3-add-1, you accumulate at least one '0' bit on the right, and possibly more. Don't remove them, just turn the next iteration into "times-3-add-1-at-the-current-right-most-1-position". So, start with 11 decimal = 1011 binary. 11*3+1=34 decimal, 100010 binary. Let's number the bit positions: Position [0] is the units bit position, position [1] is the two's bit position. Every bit position number is the power of 2 that it represents. From 100010, we don't divide out 2; on the next iteration, we just inject the "add-1" into bit [1] instead of bit [0]. Let's say that bit [R] is the right-most '1' bit (and [L] is the left-most), we change the formula to "times-3-add-[R]" (meaning "times-3-add-2**R") (times-3 doesn't change the position of the right-most '1'). So from 100010 we get 11 * 100010 + 10 = 1101000 (all in binary). Now R=3. Again, we don't divide out any factors of 2, we just let the zeros accumulate and update R. 11 * 1101000 + 1000 = 101000000, etc. In this new statement of the problem, we stop when we get a single 1 bit and all zeros to left and right. A single 1 with all zeros to the right means a simple power of 2, and if we did remove all of those factors, we'd be left with just 1. Now in this new view, every time we times-3, the left-most 1 moves further left. But also, every time we add-[R] to a number with a right-most 1 in [R], the right-most 1 also moves left. The left-most 1 always moves at least one position left, but never more than 2. The right-most 1 also moves at least one position left, but it's not capped at 2 positions, [R] can move all the way to [L] in a single iteration. e.g. 5 decimal = 101 binary. 11*101 = 1111 ([L] moved left 1 position), but 1111+1 = 10000 ([L] moved another position, but [R] moved 4 positions in a single step - and now [R] = [L]). And this sequence stops: We have a number with a single 1 and all zeros left and right. If we divide out all the factors of 2, we get 1. (5->16->8->4->2->1.) For an ever-increasing sequence, L must move left at least as fast as R indefinitely, and that isn't possible. [R] will always "catch up" with [L] over the long term. To understand why, consider a number with just two 1's, at [L] and [R], but far apart. Like 100001. "Times-3" affects both 1's the same: 100001 * 11 = 1100011. But "add-[R]" only affects [R]: 1100011 + 1 = 1100100. [L] and [R] started 5 bits apart, now they're only 4. "Add-[R]" is like a carry-in from the right; there was no carry-in to the 11 at the left. Depending on the bits in between [L] and [R], there *can* be a carry-in at [L], but not always. But there's *always* a carry-in at [R]. The original [R] 1 "grows" faster than the original [L] 1, and so [R] must catch up to [L] eventually. A non-looping non-terminating sequence is impossible. The *only* possible failure of the conjecture *must* be a looping sequence.
@@kevinjohnston8399 intuitively, I'd gladly grant .. all of that.. but rigorously... I'd be forced to insist that you somehow prove that there does not exist any string of 0's and 1's such that [L] somehow reliably matches or even exceeds [R] in terms of their "speed". You motivated this using examples, but how can we be sure that 001...1 doesn't have this property, regardless of what's in the ... part?
@@MathIguess Well first of all: Thanks for reading it! Second, I actually thought, "I'm sure this is true, but it surely must be a widely known thing", so I didn't include all the details. I did more details at the time, to convince myself. So give me a little time to write it up "all pretty like", and I'll be back soon.
While I can't yet prove it, I assert there is only one loop that contains positive numbers: 4, 2, 1... If you extend it to include the negative numbers, there are more loops, including one that contains -1, one that contains -5, and one that contains -17. We can say there is no reason to believe there are any other loops. But simply proving there aren't any other loops is insufficient to prove the conjecture; you also need to prove that there is no number that increases to infinity.
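A rough Python sketch (mine) that finds those negative loops by brute force:

```python
# Hunt for cycles among the negative integers by iterating until a value repeats,
# recording each cycle as a sorted tuple so duplicates collapse.
def collatz(n):
    return n // 2 if n % 2 == 0 else 3 * n + 1

def find_cycle(n):
    seen = []
    while n not in seen:
        seen.append(n)
        n = collatz(n)
    return tuple(sorted(seen[seen.index(n):], reverse=True))

cycles = {find_cycle(n) for n in range(-1, -40, -2)}
for c in sorted(cycles, reverse=True):
    print(c)
# Three distinct negative cycles show up: the one through -1, the one through -5,
# and the long one through -17.
```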
It’s fun to fiddle with when you have time to burn, but it’s unlikely to tie in to other math very well, even if you could find a proof. Personally, I suspect a proof (if it exists) might be found by looking into convergence relations between the 2-adics and 3-adics.
What I have always wondered is that everyone tries to prove it in base 10. But I always felt like it has to be a binary problem. I don't claim to have solved it. I ain't capable of that. I just want an honest answer and to write down my thoughts somewhere, just so I stop thinking about it.

For me it always seemed binary, since it looks like a tree where the new branches start from an odd number and the branches grow by multiplying by 2. The main loop is even the base 2 chain. That's why I always found it to be a binary problem. As a programmer, I see that division by 2 is just a bit shift in base 2, as dividing by 10 is in base 10. So representing the numbers in base 2 won't change the number, only the positions. So division becomes trivial. The multiplication by 3 is a little different. But you can break it up into 2 steps: a bit shift to the left to get a multiplication by 2, plus adding the original number itself onto it. Don't forget the +1.

What's interesting is that all numbers that end up in the 2^n branch have the pattern of alternating 1 and 0. (Reminds me of how computers represent some kinds of numbers with complementary numbers for positive and negative, but that doesn't matter.) What you can do now is create a graph. A graph to represent that multiplication by 2, plus the original number itself, plus 1. The states of the graph would be the next digit of the next following number, while the edges would be whether the current digit of the original number is a 1 or a 0. If you do that, you will find that there are only 2 ways to create an alternating pattern of 1 and 0. You can also see that one of those 2 ways needs certain things to apply. To sum up, one of those 2 ways causes the number to grow, the other one causes it to shrink.

It SEEMS that every finite binary sequence of 1 and 0 will end up, after a finite number of steps, in an alternating pattern of 1 and 0, since you have one spot in a sequence of 0 and 1 where there is a digit shift either from 1 to 0 or from 0 to 1, which causes the pattern to start to form after enough repetition. Taking that, it seems the only numbers that won't fall into the 2^n branch after a finite amount of steps are 0, since there is no spot in the sequence of 0s where there is a shift to a 1 as a digit. Also an infinite amount of 1s wouldn't end in the 2^n branch, for the same reason, since there is no alternating spot in that infinite sequence. Every other finite number has an alternating spot after the last 1 digit was written, since infinite amounts of 0 are ahead.

I have written this down in the hope that maybe someone reads and understands my gibberish, with lots of spelling and grammar mistakes, and can tell me what I am missing. Idk, I just felt like I should write it down, since I am sick of myself. I never really feel able to share my thoughts with others in real life, and now was the time for it.
@@timetr4veler949 this is definitely an interesting lens to view the conjecture through. I'm glad you were able to share your thoughts here ^^ I think that you'll find that *proving* (to the extent that mathematicians demand proof) that the pattern persists for all finite strings of binary digits would be a very tricky endeavour. Indeed, it's easy to convince ourselves that the conjecture is true, but the proof has escaped us for a very long time. I'm glad that people are considering things I'd never have thought to try though ^^
@@MathIguess Look up a theorem by Crandall. The current state is that any loop other than 1-4-2 would need to contain at least 1e11 (or so) elements, based on the fact that (very cleverly adapted) brute force methods (David Barina is at the front of that, currently) have shown that all numbers up to ~2e21 go to 1. Notice that numbers reach 1 relatively quickly: up to 1e20, no number takes more than 2500 iterations.
Why can't you show that if we find one counter example you can't find more? I guess it depends on how you define "example" (do you mean an odd number or a set of odd numbers?), but if we have an odd number, it can be written as 3m+k where k is 0, 1 or 2, i.e. the number is 0, 1 or 2 mod 3. If it's 2 mod 3, then we can multiply it by 2, subtract 1, and get a number divisible by 3 (2(3m+2)-1 = 6m+3 -> 2m+1, which is an odd number). We can generate an infinite amount of odd numbers by multiplying our number by 4, since if a number n is 2 mod 3 then 4n is also 2 mod 3.

If instead our number is 1 mod 3, we can multiply it by 2 to get a number that is 2 mod 3, thus generating infinitely many numbers again by the same process (that our new number that is 2 mod 3 is even doesn't matter, since [2(3m+2)-1]/3 = 2m+1 is odd for all m). If the odd number is 0 mod 3, then it can be written as 3m where m is odd. Apply Collatz to it and get 9m+1, which is 1 mod 3. This number is even, but if you divide it by 2 you get a number that is 2 mod 3, thus allowing you to generate an infinite number of odd numbers.

So since we can generate an infinite number of odd numbers from any odd (or even) number that is 2 mod 3, and since we can get to a number that is 2 mod 3 from any odd number that is 1 mod 3 by multiplying by 2, or from 0 mod 3 by multiplying by 3, adding 1 and dividing by 2, there must be an infinite number of odd numbers connected to every odd number. This doesn't prove Collatz, but I don't see how it doesn't show that if some odd number is disconnected from 1 via the Collatz function, there must be an infinite number of odd numbers disconnected from 1. Am I missing something?
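(A small illustrative Python sketch of the preimage construction described above, not from the thread; the helper name odd_preimages is made up.)

```python
def odd_preimages(n, how_many=5):
    # For n that is 2 mod 3, the numbers (n*2**k - 1)//3 with k odd are odd
    # integers that reach n after one 3x+1 step and k halvings, so each one
    # is a genuine Collatz preimage of n.  The comment's "multiply by 4"
    # trick corresponds to stepping k up by 2.
    assert n % 3 == 2
    out, k = [], 1
    while len(out) < how_many:
        out.append((n * 2**k - 1) // 3)
        k += 2
    return out

print(odd_preimages(5))   # [3, 13, 53, 213, 853]; each is odd and lands on 5
```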
You're correct, I tripped over my thoughts a bit by that stage of the exploration and I was indeed thinking in terms of odd numbers and odd factors, rather than counter examples as one would generally define them (of which we can trivially find infinitely many since if k is a counter-example, so is 2k). I was trying to say that the odd factors might just hop between a few different numbers and form a loop other than 1-4-2 and there might not be infinitely many odd numbers in the set of odd factors of counter examples. It is as you say, though, we can indeed generate more counter example odd factors using those methods. I was just wrong on that part. Thankfully, I was wrong in a section where my point was that my intuition is unreliable xD I appreciate this comment ^^
@@MathIguess I want to add that I fully agree with your somewhat controversial take that time spent on Collatz is not wasted. It seems most people warn others to stay away from these famous open problems, but like you I've learned a lot from attempting them. I agree with the warning that after a certain point you might get stuck and bang your head against a problem that is beyond your abilities and as such your time would be better spent looking at other problems, but that is after the first few hours/days. Spending a week or two to really get a feel for the nature of what is tricky with Collatz or Goldbach or other famous problems is both good practice of one's mathematical skills and a lot of fun. For me, understanding the difficulty of predicting position in the binary carry sequence gave me a fundamental appreciation of why Collatz is a tricky function. Likewise, understanding the difficulty of the union of partial cyclical groups mod p made me realize why Goldbach is so easy to prove for individual even numbers but so hard to prove for all even numbers (greater than 2).
I'm not a mathematician, I'm a low-level programmer, so rather than thinking of binary and logic in terms of numbers and math, I see numbers and math in terms of binary and logic. The Collatz conjecture looks very different in binary. In order to optimize the calculation I defined your "extract odd factor" as "strip trailing zeroes", which can be coded as 2 machine instructions (tzcnt; shr;) rather than a loop. The Collatz cycle is then "strip trailing zeros", 3n+1, repeat (since stripping trailing zeroes always results in an odd number, and 3n+1 on odd n is always even).

If you then print out the sequence generated in binary (I found 0,1 notation made it harder to see, so I used -,* with blanks for leading zeroes instead; I also only printed the sequence after the "strip trailing zeros" step), you start to notice patterns. The bit-length of the numbers only really grows if there's a long run of 1's at the right end (I call this "the origin", but I mean the least significant bit). For each 1 in that position the total sequence can grow by (on average) log2(3)-1 bits, while those bits get consumed. Runs of 1's anywhere else in the number have no influence. This is because 3n+1 grows the sequence by log2(3) bits, and the minimum that the "strip trailing zeroes" step will strip is 1 bit.

A run of 1's next to the origin is equivalent to a 2^n-1, and all numbers of that type result in a 2-shorter run of 1's with a 10...01 at the ends, e.g. 31 (0b11111) * 3 = 93 (0b1011101) or 63 (0b111111) * 3 = 189 (0b10111101). Adding 1 to these moves the final 1 inward, leaving a 1-shorter run of 1's with a single zero after it, which gets shifted back to the origin. The single zero is the important part for the growth of the number. Any interruption to the run of 1's breaks the process, even just a single zero bit. The later 111-runs fizz off noise from both ends, the origin runs fizz from one end, and the gap between them widens... There are also some slow-growth patterns, runs of 1's that sit just 1 extra bit from the origin, but these barely grow (I think it's log2(3)-1.5 bits per cycle) and they consume their 1's at twice the rate, so I don't think they matter much. Bits beyond any run of 1's next to the origin just don't matter as far as growth is concerned, and runs next to the origin get consumed.

The proof, or disproof, therefore lies in the length and frequency at which these runs of 1's get produced. The length of a run of 1's that can be naturally generated by 3n+1 seems to be limited to about 8 (just from memory I'd say the upper bound is 10 bits of 1's ... should be easy enough to get exhaustive data, or rigorously prove it's impossible to get more, though), and they must be generated in the first log2(3)·n bits in order to generate any net growth (with (log2(3)-1)·n 0-bits between the n 1's and the origin), and at a frequency that is high enough to counteract the general attrition of bits between them. With these things said, it seems to me that the absence of counterexamples in the range that has already been tested (I think it's beyond 64 bits?) is sufficient proof: if there were any sets of origin-adjacent bits that could reliably and repeatedly generate runs of 1's frequently enough to cause infinite growth, they'd already have been found.
As I said, I'm not a mathematician, so I don't know what constitutes "proof" in this context, or how to express a mathematical proof in a way that mathematicians would accept, so let me know if there's something you don't understand, or that, as a mathematician, seems unsatisfactory, and I'll try to elaborate.
@treelibrarian7618 the most honest response I can give to this is to say I'm not able to follow what you mean. Despite having made the Collatz coral animation thingies in python (with Manim), I don't know too much about programming at this level. I can say though that as mathematicians, we can check as many examples as you'd like, it still isn't proof. The standard is extremely high. Aside from this, sadly, I don't think I understood well enough to comment on the mathematical rigor or value of the rest of what you've said here. I hope that someone else comes across it and can more confidently weigh in.
@@MathIguess The basis of this potential "proof" is showing that nothing matters as far as growth/shrinkage of the number (I think of it in terms of number of bits, but it's actually scale of value) beyond the first 20 bits or so (values modulo 2²⁰). The main thing that would need to be proved is that it is impossible for the continuous series of 1-bits required for growth to arise naturally from the multiply-by-3 operation beyond a certain size. There is only one precursor to 1111111111, and that is 0101010101, and that in turn has only one precursor (hint: it's a couple of smaller groups that interact if they're a certain distance apart), and that precursor is too big for multiple groups to get together to generate longer strings of 1's. I might be wrong about the limit being 10 bits; I saw one example of 11 bits appearing later in the number (not next to the origin), but I've not seen more than 10 arise at the origin from the Collatz sequence, where there are no lower-order bits to feed an extra bit into the start except the +1 part.

But as I said, I'm no mathematician, so I don't know how to phrase these concepts in ways that would impress mathematicians. I was rather hoping you'd poke holes in my idea that would help me develop it. As you rightly point out, proof by lack of counterexample is not proof, or even the start of proof, until the domain for potential counterexamples has been exhausted. This is an attempt to show that the domain for counterexamples is finite, and has already been explored (and then mathematicians would still have to find some way to show why). This is more like an attempt to show that it is mechanically impossible for the Collatz sequence to expand infinitely.

Your inability to follow may be due in part to my inability to express clearly, but I feel it's also indicative of what I suspect: that the proof of Collatz lies outside the standard domain of understanding of mathematicians, which is why I presented this idea. If you like, I can post the code to generate the binary Collatz sequences I mentioned, wherein the patterns become quite visible, and you can play and explore how the bit-sequences work. I did post an example of the output from the program, but youtube seems to have lost it. I wrote it in C, but if you think you'll have trouble working out how to compile it then I can re-write it in python.
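(Since the C program itself isn't shown in the thread, here is a guess at a minimal Python equivalent of the kind of printout described above, added for illustration only; the -/* notation follows the earlier comment, and the function name is made up.)

```python
def binary_collatz_rows(n, rows=30):
    # Prints the odd numbers reached by "strip trailing zeros, then 3n+1",
    # in binary, using '*' for 1 and '-' for 0 as described above.
    width = 40
    for _ in range(rows):
        while n % 2 == 0:          # strip trailing zeros (the tzcnt; shr step)
            n //= 2
        row = bin(n)[2:].replace('1', '*').replace('0', '-')
        print(row.rjust(width))
        if n == 1:
            break
        n = 3 * n + 1

binary_collatz_rows(27)
```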
@treelibrarian7618 you've definitely clarified a few things here. Someone once mentioned that I should be careful with the kinds of numbers that a particular language can handle. I'd want to know what size integers C can handle, and whether this is relevant to what you're up to.
Your second-to-last sentence is a classic trap in number theory proofs: "It seems to me that the absence of counterexamples in the range that's been tested so far is sufficient proof." The most famous example of this turning out to be false is Pólya's conjecture, which held for every value anyone had checked until a counterexample was eventually found; its smallest counterexample turns out to be 906,150,257.
Taha M. Muhammad / USA Kurd Iraq. I solved:
0- Pi Length
1- Euler Perfect Box in 2 ways
2- Collatz Sequence in 2 ways
3- Fermat's Last Theorem on 5 pages
4- Fermat's General Case on 8 pages
5- Cubic Root by Hand for real numbers.
A- Unfortunately, there has been no evaluation of my articles at the American Math Society (AMS), and I am a member. AMS blocked my email and my phone!
B- arXiv is not allowing me to submit my victory under the excuse that I need endorsement from a professor. I cannot find a professor who will give an endorsement. Therefore, I am under discrimination by the math societies. Otherwise, why have my victories remained hidden from the math world societies?
@@tahamuhammad5962 That's why you'll get blocked, friend. You come across as a crazy person by starting the conversation this way. I'm some random youtuber from [NOT AMERICA] that likes mathematics. I'm not an authority and, for all you know, not even an expert or anything. Just some random person. This is like yelling about it at a bus stop at another passenger that's waiting for the bus.
@@Trizzer89 I included the "corals" (I assume this is the tree picture you're talking about) purely because they look cool and relate to the conjecture. The paper itself doesn't go into that at all. Additionally, if the set D from the video is equal to the set of all positive integers, it follows that there are no loops other than (...4,2,1,4,2,1,...). I did include a bit where I showed that a loop of numbers all having the same odd factor is forced to be this loop, but I did not go further down that path. With that said, others indeed pointed out that what you suggested is another common path and it would have been better if I had included it in the video. I agree with this. I don't think I would call any attempted proof "dumb", though, that bit seems harsh.
This is a case where a non-constructive proof would be very unsatisfying if it's found. At least for me. (I'd accept it but it's like knowing a puzzle *can* be built rather than seeing it built.)
@@berlinisvictorious two major things: Luck, and help from others. For views, the title and thumbnail do the heavy lifting. For engagement, the intro hook and content/pace are very important. I made a promise and tried my best to keep it by saying exactly what the video is about in the first 15ish seconds. The engagement on this one is higher than I expected, meaning that most people that see it would probably feel like it wasn't clickbait, which is important to me. I want to make content that's worth watching :) But the summary: This particular topic has a huge market and not a lot of competition if you make a good thumbnail and title. (No offense to others but some of the titles and thumbnails aren't gonna stand out - things like "my proof for Collatz" etc are a dime a dozen.)
@@berlinisvictorious additionally, I had 200 subs when I had zero videos because I used to have like 50 videos on this channel but erased everything when I came out as trans. I think that those 200 had to help in some way.
Here is my "proof" I thought up years ago. ( Basically it's just showing that statistically the function moves downwards, not upwards, over time) All even numbers are divisible by 2. So, for every odd number encountered you get exactly 1 collatz function. For every use of the collatz function you get an even number so it's immediately followed by a minimum of one division by two. This means you can effectively write the collatz function as 1/2(3N+1) Which simplifies to 3/2N+1/2, or 1.5N+0.5 That's the TRUE collatz function, because the division by 2 is guaranteed. Now, assuming the numberline has a regular distribution of even and odd numbers (I'd say that's fairly safe) then no matter what your starting number the distribution of even and odd numbers is effectively equal, (exactly equal 50% of the time, and preferring odd by one 25% of the time and preferring even by one 25% of the time, with the amount of deviation from 50% that single number matters quickly becoming trivial the further from 0 we move) Therefore, for every time you run the collatz function of 1.5N+0.5 You have a 50/50 chance of it bringing you to another even number. Now, let's stop there for a second. If you had a 50/50 chance of it simply performing the collatz 1.5N+0.5 again, OR floor(N/1.5) Then the average movement of the function would be 0, and no matter what your starting number you would just end up in a loop. But instead we are dividing by 2 So you have a floor of moving upward by 1.5*N+0.5 50% of the time. And the other 50% of the time you are moving downward by 0.25N-0.75 Taking just these two 50/50 probability outcomes we see: 0.75N+0.75 - 0.25 But thats not really the case at all, in fact, because every time you divide by 2 you have a 50% chance of doing it again, so you don't really spend 50% of your time moving down 0.25N+.25 You spend 25% moving down by 0.25N-0.75 So you spend 25% of your time moving down 0.62.5%-0.875 But that's not right either. You spend 0.5, and 0.25 as stated but that next 0.25 also splits So you spend 12.5% of your time moving down 0.62.5%-0.125 And you spend 12.5% of your time moving down 0.8125-0.625 Taking these and the next iteration the average of this conjecture is that the motion of the process is downward. Thus, given enough time we can be assured the function will eventually reach the loop
A problem is that there isn’t any uniform probability distribution on the natural numbers, due to sigma-additivity. Also, showing that it happens with probability 0 is not immediately the same as showing that it never happens, so you’d have to address those somehow.
Ironic, I put out a video on the Collatz as a proof just recently! I seem to have gotten further than you did, though maybe that further progress was incorrect! You'll have to tell me. We have very similar lines of thinking, though you set yourself up for failure by thinking of this stuff the way you laid it out. It's way too rigorously formatted and you're not thinking about the modular relationships that are forming. Or maybe you are; that D-infinity set is impossible for me to comprehend, other than understanding what it's meant to represent. Here's the thing I proved in my video (I think I proved!): no such loop exists, period. We can algebraically solve every type of "movement" (2^k from one odd integer to the next is the same as all others in the list) and create a set or list of those solutions grouped together. From this fact, I think???? I say it's true but it's more of "it makes sense to me 🥺", that implies the conjecture is true. If it doesn't imply the conjecture is true, it implies it's unsolvable, or false. (Implies here being the non-formal, subtle "implies".)
@@MathIguess Solution to CC is trivial. CC is an iq-test (and once you instantly solve it, you understand this world is built on lies) for others, who cant do it, it is designed to be a conformity program. Eventually they all will say: "you solved it? take your meds" (not a joke, exact words every time). I´ll repeat here too - this 6th grader solution is not allowed by BigDemia. And btw, your failed logic s****, you think inside the dot. LOL
Taha M. Muhammad/ USA Kurd Iraq I solved: 0- Pi Length 1- Euler Perfect Box 2- Collatz Sequence in 2 ways 3- Fermat’s Last Theorem on 5 pages 4- Fermat’s General Case on 8 pages 5- Cubic Root by Hand for real number. Unfortunately, no evaluation to my articles At American Math Society (AMS) and I am a member. AMS also blocked my email and my phone! Why?
Theoretically, the conjecture could be proven by contradiction for any given number. Like if I have the number 10000000, I divide it by 2, and if I know that 5000000 will end with 4-2-1, I can say that any number that eventually comes down to any even multiple of 10000000 or 5000000 will end in 4-2-1. But I think my idea is basically meaningless.
For this to work, we would have to know that the conjecture is already true. Sadly in number theory, proofs by contradiction often add an extra step as compared to direct proofs, from what I've seen. Could still be fun to explore though ^^
@MathIguess It was just my spontaneous thought, no analysis. Now that I think about it, it could be a good algorithm to brute force the conjecture. Like if you know some 350000001 numbers will fall into 4-2-1, and the next number comes down to something smaller than our initial number, then we know that it will fall too. It could make the search for that number easier, but besides that it's useless.
i solved, one other guy solved it, and solutions are being burried. I cant even mention c-word here. it is so simple, and i dont get it how is it possible to miss it. I guess the negative suggestions "it is impossible" work really well, locks up all intuition. "Problem" was designed to drive ppl mad. 6th grader can solve it.
nice solution comes from known properties. but if im not mistaken, i found 4 ways to prove it. other ways are just not that classical solution. It all happen, because i watched 90 min of Terence Tao and he all he talked about was nonsense and was not even close being on the right track (none of the 4 ways). All he did do was some whining: "math is not ready yet". Haha.
@@filipnagy3535 I disagree, I think that many of us already have a "why", and that's because it's fun or we're curious. These are perfectly valid reasons to try to solve a puzzle/riddle/problem ^^
How to prove the Collatz Conjecture:
Use the secret technique of every professor, the pro move: "The proof is left as an exercise to the reader".
🗿
Or "The proof is trivial"
The classic: I ran out of space on my napkin (or a variant of this)
The proof is by magic🗿
@@MathIguess Fermat moment: "I have a marvelous proof that this universe is too small to contain"
@@MathIguess It's the margins, people. It's the margins it won't fit into.
Nice video! I had this on in the background, so forgive me if you mentioned this, but there is a really nice way to invalidate many "proofs" of the Collatz Conjecture. The trick is to see if the argument still holds when 3 is replaced with 5. If the proof still works, then the argument is definitely invalid, since the 5k+1 version does not always converge to 1. For example, starting with 13 will eventually return to 13.
In general, checking whether a proof is "too strong" is often an excellent sanity check that an approach might be fruitful. I'm not sure if this applies to your method, but it's a good idea to keep in mind.
This wasn't mentioned but I love this! I'll for sure mention it if I make a future video (and remember to mention it xD)
Thank you for bringing it up :D
Edit: I plan to make listenable content btw. Math videos to chill, game, sleep or do chores to haha. Absolutely feel free to have it as background noise - it's why I cut out almost all the silences.
I always do that when I'm solving math problems in general, although it generally comes with a gut feeling
That's excellent! I'm going to start searching for such sanity checks in my own work!
Their method also shouldn't be able to settle Collatz-like problems where you have a list of congruences mod some n and a rule for each one.
I met a 65 yo Guy who never studied maths past middle school (he did not go to high school) and he is conviced he solved collatz and Riemann hypothesis. I took a look at his work, it is just a load of basic arithmetic and he does not prove anything lol
The sad reality is that most "proofs" of this conjecture boil down to similar things. I want to encourage a love of math and an exploration thereof, but arrogance is a delicious poison with a bitter antidote. Worse yet, some are too far gone to even entertain the possibility that they *could be* wrong.
@MathIguess totally agree with that
so i was at a bar...
really i was, and the guy near me was going on about non particle physics - he solved dark matter by supposing that his stuff covered things that weren't particles. i'm not even sure what to say to that
@@dercooney Buy him a drink and ask for details 👍 Maybe you can correct him and get your name added to his paper 😀
@@dercooneyThe funny thing about that crowd is that dark matter isn't even a problem which can be solved by theorists. There are plenty of working models of dark matter, plenty of dark matter candidates. The problem is in choosing a model that represents the universe, not coming up with a new one. Developments in this field will come from experiment.
"Has no one solved this conjecture so far? tsk, everyone must be stupid, except me."
I feel like Disproving Divergence and Disproving Cycles are just wildly different problems, and the conjecture is just an awkward marriage of both.
In theory, for each number, there is 3 possibilities.
1, it collapses to 1-2-4, every positive integer we've ever checked does this.
2, in its sequence it eventually loops back to itself, creating a ring. Negative integers actually do this. It would imply a second root to the structure, and if you have that root, you can use any member of that loop to generate infinite numbers that fall into this root by multiplying with powers of 2.
3, some number runs off to infinity. In this case we can find infinite numbers that run off to infinity by applying the conjecture to the number repeatedly, as well as by multiplying any number in this sequence by every power of 2.
This means that either the conjecture is true, or there are infinitely many counter examples, we just haven't found them yet. There may also be non-trivial extension to counter examples, but that requires using collatz in reverse to look for odd numbers that evolve into the counter examples, which is non-trivial itself, and involves scanning numbers 1 greater than the product of a counter example and powers of 2 for divisibility by 3. The trivial extensions will always be greater than the counter example, the non-trivial extension may not be, meaning a number may climb up into a loop. This gives some structure to the problem... to prove the collatz conjecture, if we can disprove 2 and 3 are possible, we prove the conjecture. If we can show that some sequence is in group 2 or 3 we disprove the conjecture. If we can prove that every number except 1 eventually descends below itself, we prove te conjecture, cause both a loop, and a runaway to infinity require a lowest value of their sequence. This paper shows that proving you can invert collatz to find the full set of the natural numbers is extremely non-trivial, but so is, it seems, every pathway to proof. Good to know you had fun working on this. That's the most important thing.
Wow this is relatable.
I first heard of this problem and got into it around the age of 14 - 15. I immediately found it really really interesting and wanted to learn more.
For me though, I had an entirely different approach: from the start I always tried to prove that every number n eventually reaches a number lower than itself (except 1, of course). This would also prove the conjecture.
I tried a lot of things and to this day I am actually surprised how far I reached into the conjecture without any outside help. For example I got a great understanding of the conditions for a loop or why exactly numbers that shoot up to infinity would have to be so incredibly rare.
Over the years, every time I learned something new in math or heard of a new concept, I threw it at the problem. I really learned a ton of stuff just from doing basic research, and I also first got familiar with research papers that way.
Overall it really helped my understanding of math concepts and as you already said, gave me motivation to pursue mathematics further.
This is part of why I hate it so much when people call it a waste of time or when they discourage others from looking at the problem. It's so interesting and rich with examples. And fun!
The only snag is that many people that do look at the problem also let their ego get the better of them, then they claim victory when they really shouldn't.
Collatz wasn't that problem for me, mine was integer factorisation. It led me to learn more about optimisation and computer programming when I was convinced my approach would work but that the computer was too slow, and then inevitably I would go back to the maths to continue looking into it deeper when I realised my program was going to take forever for large numbers.
I've had the same idea!
This is a good video; nice work. I read about the Collatz conjecture when I was about 13, and have thought about it, put it down, and come back to it, many times in the 35 years since then. Along the way, I went to graduate school a couple of times, and picked up a couple of degrees in mathematics. Whenever I learned more math, I returned to the problem, and figured out a little bit more.
At the beginning, I worked independently, not as a matter of principle, but because I was a child, and had no idea about mathematical literature, much less a clue how to find papers, or the ability to read and understand them. Eventually, I did look at some of the seminal work by Terras, Crandall, Thwaites, Lagarias, and others. Meanwhile, I've continued to develop my own approach, which diverges a bit from the well worn paths. That said, I don't flatter myself that I'm doing anything truly original. I do it for the fun; I do it for the exercise.
Over the years, I've also made a hobby of reading attempts by would-be provers, helping them cast their ideas into more standard language and notation, and pointing out their errors in reasoning. From that experience, I can confirm what you've said are the most common approaches to the problem. I laughed at the part where you said, "That's crazy! Who would think of that?"
I agree that it's not a waste of time. Having this problem in the back of my mind has motivated so much of my study of number theory, which has been richly rewarding. I ended up with a PhD, and a career as a professor! I have learned more than I imagined I would about p-adic numbers, probability theory, multivariable calculus, computer programming, and data analysis, by applying those tools to this problem. Currently, I am working with a collaborator on a paper that I think will be ready for publication early next year. It's not about the conjecture itself, but it is Collatz-adjacent, and inspired by the conjecture.
For me, the most fruitful and interesting path has been the exploration of loops other than the famous 1,4,2 loop. By extending the domain of the Collatz function to all rational numbers with odd denominators (the localization of the integers at 2, if you want to be formal about it), we see infinitely many loops, which display all kinds of unexpected behaviors. Classifying and investigating this menagerie of rational loops has provided me with years of entertainment.
Anyway, thanks again for the video. If you'd like to compare notes in more detail, let me know; I enjoy corresponding with fellow enthusiasts.
Amazing stuff. You're clearly very intelligent and have great pedagogical skill. Often, showing what doesn't work is just as valuable as showing what does.
My favourite formulation of the conjecture is to prove or disprove whether the function f(p) = odd-factor(3p+1) is eventually decreasing. That is, for all odd p > 1, you can get a k > 0 such that f^k(p) < p. This function f is a collapsing of the Collatz iteration function to a definite domain and range of only odd numbers, allowing the proof to ignore even factors entirely. That said, the odd-factor function is what's holding us up, as mathematicians still don't have the right insights to understand how it interleaves with other functions.
As for how to derive this, the idea is to realize that all even iterations of the Collatz iteration function can be collapsed into getting the odd factor before redoing the 3p+1 step. Now, if f eventually decreases for all values at or below p, then it must necessarily reach a minimum by the well-ordering principle (here that would be 1); otherwise we would have an infinite descent situation with the sequence formed by the values where f eventually decreases. This also inspires the p > 1 bit, as we need to stop the descent somewhere, namely at the lower bound of 1.
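(A minimal Python sketch of the formulation above, added for illustration; the names odd_part, f and steps_to_drop are made up here.)

```python
def odd_part(n):
    # Divide out all factors of 2.
    while n % 2 == 0:
        n //= 2
    return n

def f(p):
    # The map on odd numbers described above: odd factor of 3p + 1.
    return odd_part(3 * p + 1)

def steps_to_drop(p, limit=10**4):
    # Smallest k with f^k(p) < p, or None if not found within `limit`
    # iterations; this is the "eventually decreasing" property.
    q = p
    for k in range(1, limit + 1):
        q = f(q)
        if q < p:
            return k
    return None

for p in range(3, 30, 2):
    print(p, steps_to_drop(p))
```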
Yeah, that's what I thought of, too, at first. I couldn't get beyond that point, though.
It is actually quite straightforward to understand why 1, 5, 21, 85 etc. are the "odd factors" of the n in D_0, if you think about how those numbers are written in base 4.
Spoiler: 1=1, 5=11, 21=111, 85=1111 etc and if you multiply those numbers by three you get 3, 33, 333, 3333 etc, and if you then add 1 to that new number (remember we are in base 4), you will by definition end up at 10, 100, 1000, 10000 etc, which are all clean powers of 4.
Also, if you multiply any number in base 4 by 4, you are simply appending a 0 to the end of the original number. If you also add 1, the result is instead that you append a 1 to the original number. This is the reason why all those numbers are linked by that recursive formula a_n = 4a_n-1 + 1.
I almost forgot to say thank you for the video!
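(A few illustrative lines of Python showing the base-4 pattern described in the comment above; the helper name is made up.)

```python
def to_base4(n):
    digits = []
    while n:
        digits.append(str(n % 4))
        n //= 4
    return ''.join(reversed(digits)) or '0'

# The odd factors 1, 5, 21, 85, ... via the recursion a_n = 4*a_{n-1} + 1,
# together with 3*a + 1, which lands on a clean power of 4 each time.
a = 1
for _ in range(5):
    print(a, '=', to_base4(a), 'in base 4;  3*a + 1 =', 3 * a + 1,
          '=', to_base4(3 * a + 1), 'in base 4')
    a = 4 * a + 1
```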
Changing bases is something I see very often but never thought to do myself ^^ thank you for this, I agree that it's much clearer in base 4 why the members of D_0 are that way ^^
Wonderful insight ^^
I remember when I started my take on the Collatz Conjecture, and I don't think I've seen anyone else try to do this before. Instead I went and tried to find properties of a similar family of problems, of the form {a*x/2+b, c*(x+1)/2+d} for the mod 2 case. I didn't go too far with it, but I found that for most other coefficients it is somewhere between easy and trivial to find out whether they tend to infinity or have loops.
This is one thing I’ve been curious about and have made some brute force attempts at too.
One very trivial fact is that for any number which can be reached, that number multiplied by every power of 2 is also on it. The other interesting fact is that this tree cannot be represented in 2 dimensions cleanly, as many nodes overlap in strange (but predictable) ways.
For example, the numbers 4 and 5 can be shown to overlap on this tree. Start by drawing it out like a binary tree from the root node 1, where going down and to the right means multiplying by 2, and going down and to the left means subtracting 1 and dividing by 3. You can draw this tree down to 4, and then when it splits off, it recursively contains itself if you go to the left, as it returns to a new copy of the root node of the tree, 1. But if you keep going down and right from the tree to 16, you can again go to the left, which arrives at 5.
The strange feature is that if you continue the tree after going left of 4, so go from 4 to 1 to 2 back to 4 again, this new 4 will overlap in the same location as the 5 connected to the 16 (you need to draw all nodes with equal spacing on the tree, don’t avoid overlapping nodes, this is actually an intentional feature).
So the question is, what does it mean for two numbers to overlap? Why 4 and 5? Well, the simple answer is actually just the binary representation of 1/3, which is 0.010101… since the whole idea of moving down and to the right is multiplying by 2, that means it’s equivalent to a binary shift to the left. 4 is the result of cutting off the repeating digits after the first 1, then shifting all the bits to the left, but 5 is the result of chopping off only after the first TWO 1s, then shifting the bits to the left by the same amount, which is why they are each 100 and 101 in binary respectively.
In fact, 10000 (16) and 10101 (21) also overlap. This also holds for going down further layers to the left on the tree, but it gets MUCH more complex to represent. But, ultimately every single number on the tree is just the result of bit shifting 1/3 plus some other chunk of digits on the head of the number. That doesn’t make it easy to solve of course, but visualizing it this way I feel makes it make more sense what the rules actually do to the numbers as you navigate the tree.
(Btw, the overlapping of numbers also has some unique effects. For example, if you navigate up the tree to the right from 4 and 5- yes, even though 4 is even and technically should only have a path to the left- then you can see that to get from 4 to 16, it’s the result of multiplying by 3 and adding 4, whereas getting from 5 to 16 is multiplying by 3 and adding only 1, like the actual rule should be. This extends into a further pattern to show that even overlapping values on the tree can navigate to parents of other values in the place, and the amount you add after multiplying by 3 will ALWAYS be a power of 2 related to how far you had to navigate down the tree before arriving at the overlap. If you instead look at 2, you can show that it connects to 8 by multiplying by 3 and adding 2, you can show that 8 connects to 32 by multiplying by 3 and adding 8, and you can connect 16 to 64 by multiplying by 3 and adding 16, which is why 16 and 21 overlap. In addition to 16 and 21, 20 also overlaps in the same place, because 4 and 5 overlap, and going down to the right on the tree from 5 puts 20 in the same spot, which is 10100, again showing that it’s just 1/3 in binary with chopped off repeating digits. Because you only had to navigate 2 spaces to the right from 5 to get to 20 however, to connect it back to 64 you only need to multiply by 3 and add 4, instead of 16 like for connecting 16. It’s all connected, it’s all binary thirds.)
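(An illustrative Python sketch of the reverse tree described above, with down-right as doubling and down-left as (n-1)/3; the 4 -> 1 back-edge is deliberately skipped here so the listing doesn't loop. This is not the commenter's code, just a way to play with the structure.)

```python
def children(n):
    # Reverse Collatz: every n has the child 2*n ("down-right"); if (n - 1)/3
    # is an odd integer greater than 1 (i.e. n is 4 mod 6), that is the
    # "down-left" child.
    kids = [2 * n]
    if n % 6 == 4 and (n - 1) // 3 > 1:
        kids.append((n - 1) // 3)
    return kids

# Grow a few levels of the tree from the root 1; 5 first appears below 16.
level = [1]
for depth in range(6):
    print(depth, level)
    level = [c for n in level for c in children(n)]
```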
If you think the conjecture is true, then brute force will never work, for obvious reasons.
@joshuaiosevich3727 I’m well aware that brute force doesn’t work, but what I’m more talking about is determining if there’s any details related to the patterns I’ve stated that can be abused to show that any sequence of starting digits must be possible to make.
Because of how the pattern as I’ve stated works, you ONLY need to consider the leading digits that aren’t a repeating sequence of 1 and 0, OR trailing 0s.
If you have a number like, say, 1001101010100 (in binary of course), then you only need to consider the 10011 part at the start. Of course the trailing 0s can be removed easily, but ALSO I can determine that the 01010101 can be excluded, as it's just a trailing 1/3 that's been shifted to the left a bunch, meaning based on the pattern I've established, it is trivial to show that (as long as the leading digits do) it connects somewhere on the tree, as far as I understand. This only leaves the beginning digits, 10011, as far as I can tell.
Maybe I’m making a mistake somewhere, but it even further reduces the search pool than just chopping off trailing 0s if it’s all correct.
I'm still watching.... (short pause) But I started with the same mindset. I heard about the conjecture and basically refused to read up on what people had tried so far, wanting to figure it out all on my own. I worked now and then on it over a period of 4 years. Sometimes every day a bit, sometimes on a Saturday up to 6 hours. Sometimes a few weeks break.
I looked at the usual formula, thought probabilistically first ofc, calculated backwards, moved the index, thought about tree-arguments and counting arguments, looked at things with modulo-arithmetics, trying to reduce a bit that way what one needs to prove.
I looked at the general parametrized formula that describes the whole global structure, and I thought about whether using the rational numbers could help me. I looked at the binary automaton, that {0,1}-sequence ...
But basically, whenever I thought I had a proof, slept on it, and thought more about it, I noticed I had made a thinking mistake. I went through that 3 or 4 times.
I tried all these things, before I realized other people have done the same things already. 😗
I was wondering whether p-adic numbers, continued fractions or harmonic analysis could be the next tool, but that would have been like starting from zero, needing to learn new techniques, not really having a clear idea whether it could get me somewhere then.
So 1 or 2 years ago I felt I needed some new input, and finally read some research papers about the Collatz conjecture, including one that summed up what has been found out so far by mathematicians, and that was something I should have done way earlier (!).
I should have maybe worked on the conjecture for a few weeks just by myself, but then really looked at what professional mathematicians have done already. To waste less time.
edit: So now, whenever I read that someone claims to have proven the Collatz-Conjecture, I immediately think that with a 99% chance the proof is wrong. For it is very easy to make thinking mistakes there.
edit again: And yes, working on the conjecture was fun. But well, likely the primary thing most people will find out is that it is really a hard problem. ☺
One needs to rule out a cycle and rule out that there is a starting value that makes the sequence rise asymptotically to infinity. And if you believe you have shown that this can't happen, your argument is most likely flawed. Sleep on it, think again... before you declare victory. That describes my experience. 😉
The problem with the conjecture is that it insinuates it must be true, but there are similar functions that behave almost like it, almost creating a tree, and yet they have unexpected exceptions.
What a wonderful and well thought out comment!
I said "for reasons removed from this paper,..." somewhere in the video, saying that I find "proofs" out there unconvincing and I'll likely elaborate on this in the future. Sadly I don't think I should say much more here yet - but I do plan to make up to two more Collatz videos.
In the meantime, though, I'm going to branch out (pun intended) and cover topics unrelated to the conjecture first :)
Thank you for watching and thank you for this excellent contribution to the conversation!
23:48 If there is one number that fails (or multiple), there would indeed be infinitely more, because we can multiply by 2 as many times as we want and all of those reach that number.
@@michaelihc2452 Indeed, my head was stuck on the "odd factors" and I was thinking that one counter example would not necessarily give us access to infinitely many other counter examples which have other odd factors. Others have pointed out that we actually can generate new odd factors in some situations, and this intuition is wrong (which is ironic, considering that my point was that intuition is unreliable, and here I am, putting a silly claim forth because I fell prey to my intuition).
Either way, indeed, you're correct and this was an oversight. I'll try to do better in the future!
I was with your method until 2.2 in your paper, where I then began to try to find a function that would skip from any odd number to the next multiple of 4 in the sequence. If you let all odd numbers be described as (2^y)n - 1, with n being an odd number and y being an integer > 0, then 2y is the number of steps in the sequence to the next multiple of 4. I used this to eventually discover that every 2 steps in the sequence multiply n by 3 and decrease y by 1. This eventually led to the expression (3^y)n - 1 for the number one step past the next multiple of 4 in the sequence. This is where I am now. I might be able to do more after I learn calculus in or before my senior year of high school.
@@soulsand4287 nice! Be cautious if you're relying on this video's exact equations. Because of the casual context (and frankly because I made a mistake), there are exponents where there shouldn't be exponents on the 3's in the denominators of D_k. I possibly made other mistakes. This is just meant to show the general idea of what someone might try, and how they can scam themself into thinking the conjecture is proven or that it is true when we definitely didn't achieve that in this video.
The conjecture is rich with examples and many little questions that one can figure out (and even prove) along the way, making it an excellent tool for motivating one's exploration of some branches of maths ^^
If you have any specific questions or the like, feel free to reach out, I'll reply if I have a moment (and the energy) ^^
I remember trying this problem and getting far enough to say that "if there's a counter-example that infinitely increases in size it can't be repeating." Basically, if steps of the collatz conjecture were treated like digits in a number, then an infinitely increasing counter example would have to be irrational.
Just to check, this one has been proven before, right?
I'm not sure I understand what you mean. I'll do my best to answer what I think you're asking, though.
It can be shown that if the odd factor of x is b and the odd factor of 3b+1 is c, and b
@@MathIguess Just typed up a 2 part explanation and YT seemed to delete part 1 because it got too long... hold on I think I'm gonna need 4 parts after this.
@@MathIguess So let's start by assigning each number a letter. 1 is A. 2 is B. 3 C, 4 D, 5 E, 6 F, and then 7 is A again, 8 is B again, and so on.
D is defined as an even number that is +1 from a multiple of 3, and as such all odd numbers (A, C, and E) will, after undergoing a step, become a D.
An F will divide into another F or a C.
A D can divide into a B or an E.
A B can divide into a D or an A.
Fs and Cs are irrelevant since they'll just eventually degrade into a D and can't ever return to an F or C.
Part 1 of 4
With the chains of logic, you can work out that Ds can act as the inputs and outputs for functions that are then chained. There are 3 possible functions.
D-E-D, D-B-D, and D-B-A-D.
I call these the E function, B function, and A function respectively.
And as the last thing we need to do, we need to introduce numbers back onto the letters. D1 is the first positive D, 4. D2 is 10, and so on. Assume the same was done to the other letters.
Part 2 of 4
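(A small Python sketch, added for illustration, that empirically checks the letter transitions claimed in the two parts above over a range of numbers; the names are made up.)

```python
from collections import defaultdict

LETTERS = {1: 'A', 2: 'B', 3: 'C', 4: 'D', 5: 'E', 0: 'F'}

def letter(n):
    return LETTERS[n % 6]

def step(n):
    return n // 2 if n % 2 == 0 else 3 * n + 1

# Check the claims: odd classes (A, C, E) always go to D, F goes to F or C,
# D divides into B or E, and B divides into D or A.
seen = defaultdict(set)
for n in range(2, 10000):
    seen[letter(n)].add(letter(step(n)))
for k in sorted(seen):
    print(k, '->', sorted(seen[k]))
```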
I remember trying to solve the Collatz conjecture back in high school, and I find it remarkable how many of these steps are similar to how I approached it. I remember in particular arriving at the formula for expressing any number given all the sub-sequence lengths, and thinking I was making big progress. I used the extended version of the Collatz conjecture, which is defined for any fraction, so that the formula would always be defined. The goal in that case was to show, somehow, that the set of all fractions generated from this would contain the odd integers (excluding 1, -1, -5, -17); I soon realized this was no easy task. I also came up with a general formula for expressing any loops, in which case the goal was to show that the resulting fractions a/b generated by such a sequence were never an integer, or equivalently, that a and b are always coprime. This again was no trivial task.
I hope you had fun along the way and I'm glad you didn't reach that point and then call it a victory. Many people don't realise that there is more work to be done after reaching a similar point in their attempted proofs.
I felt so called out in that first paragraph in the intro. 😂😂😂
It is still fun to get that satisfaction of deriving some methods/equations on your own, even though they’ve already been done before.
@@crazydog1750 Absolutely ^^ I plan to call out my audience much harder in the next video on the topic xD probably something like "you specifically can't prove this conjecture. Here's why." Or something unhinged like that.
Problem is that I'd be treading a line between meme and clickbait which I don't wanna do. I'm still thinking about the title etc.
@@MathIguess "You (Yes, YOU) Will Never Be Right About This." I'm sure that title could have a decent CTR.
@@MathIguess Actually though I should be frustrated with you because this video made me go back and look through my old "proof" attempts of this stupid problem thinking "Hm... what if I did this?" 😂🤣🤣
It's just so tantalizing until you've dumped a hundred hours into it and realize that you're never gonna get anywhere. But slowly that wears off and the next time you think about it you're like "What if I take another look at it?" and the cycle repeats.
@@MathIguess But can it really be a waste of time if you're developing a more intimate understanding of the beautiful relationships between these wonderful symbols we call numbers? 🤔😅
@crazydog1750 guess not xD
In the intro, you could just as easily make the "if n is odd" output half of the expected size on purpose: the regular output... is even, so every time you apply the "odd" rule you (by default) immediately apply the "even" rule!
This is an equivalent formulation of the problem indeed, which I believe would lead us down the same path if applied. Still, you're correct, it does "cut to the chase" a bit better in some sense ^^
Such a useful video. It will convince a lot of young mathematicians to not waste their time on this dangerous problem. Seriously guys, just stop with it and focus on real math. Just let the real experts work on it because you will never make it with the level that you have now. If you want to work on it, then get good before because otherwise you guys will just become crazy.
@@PictooMath I mean.. I still encourage this exploration for fun, but any serious attempt at a proof should indeed be left to experts, as I'll show in a future video ^^
Thank you for the kind words ^^
well, you would disagree with other comments... there is a dude who said working on this problem skyrocketed his career
Well I mean, it can be fun to mess around with this kind of problem. I don't think most people who do so really think they will solve it, but it's still fun to mess around with. What's the harm?
You make math sound like it's dangerous.
The approach I took when I looked into it a while ago was by trying to investigate the conditions for the SMALLEST number which doesn't go to 4,2,1 in terms of an+b (starting from 2n+1 (as any even number trivially goes to a smaller number)).
This allowed me to substitute both n=2k and n=2k+1 and step each forward. In many cases, this allowed ruling out specific conditions which trivially became smaller after a certain number of steps, and adding additional conditions to check whenever the resulting form had an odd coefficient a and an even constant b (so that its parity depends on k).
Example: 2n+1
Try n = 2k
4k+1 -> 12k+4 -> 6k+2 -> 3k+1 (< 4k+1 (k>0) - Contradiction)
Try n = 2k+1
4k+3 -> 12k+10 -> 6k+5 (always odd) -> 18k+16 -> 9k+8 (could be odd or even depending on k, and > 4k+3, so add 4k+3 to the list of conditions to check)
Unfortunately, in some cases both conditions were equally valid, leading to an exponentially growing list of conditions to test.
I did manage to prove a lot of the existing results about the smallest number which doesn't go to 1, but I didn't really get anywhere on the question of whether it exists.
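(A rough, illustrative Python sketch of the condition-splitting search described above, not the commenter's actual method; the closure test ignores k = 0, which corresponds to a small, explicitly checkable number, and the depth cap stands in for the exponentially growing list of open conditions.)

```python
def explore(start, cur, depth=0, max_depth=10):
    # start and cur are pairs (a, b) standing for the numbers a*k + b, k >= 0,
    # read over the same variable k.  `start` is the candidate "smallest number
    # that doesn't reach 1"; `cur` is where its trajectory currently sits.
    (sa, sb), (ca, cb) = start, cur
    if ca < sa and cb <= sb:
        print(f"{sa}k+{sb}: trajectory reaches {ca}k+{cb}, smaller for k >= 1: ruled out")
        return
    if depth == max_depth:
        print(f"{sa}k+{sb}: still open at {ca}k+{cb}, needs further splitting")
        return
    if ca % 2 == 1:                       # parity of ca*k + cb depends on k: split k
        explore((2 * sa, sb), (2 * ca, cb), depth + 1, max_depth)
        explore((2 * sa, sa + sb), (2 * ca, ca + cb), depth + 1, max_depth)
    elif cb % 2 == 0:                     # even for every k: halve
        explore(start, (ca // 2, cb // 2), depth + 1, max_depth)
    else:                                 # odd for every k: apply 3n + 1
        explore(start, (3 * ca, 3 * cb + 1), depth + 1, max_depth)

explore((2, 1), (2, 1))   # start from the odd numbers 2k + 1
```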
@@thinker-12 I heard about a person that went down this path in great detail as part of their master's thesis. I say this to say... nice, you did something cool! ^^
Ooh I tried that one too. But the extra conditions seem to go on forever!!
A good try nonetheless.
I was more into the route about the expanding phase.
It is basically like considering x·2^n - 1 (e.g. 79 = 5·2^4 - 1): it takes 2n steps to convert that into x·3^n - 1 (i.e. 404 = 5·3^4 - 1), due to repeated (3m+1)/2 operations.
This somehow seems special, as the even number x·2^n, being 1 more than x·2^n - 1, takes the route down to x instead.
x·2^n - 2 becomes x·2^(n-1) - 1, and takes 2n-2 more steps to turn into x·3^(n-1) - 1.
x·2^n - 3 becomes 3x·2^(n-1) - 4, and 2 more steps give 3x·2^(n-3) - 1, if there are enough factors of 2.
x·2^n + 1 becomes 3x·2^(n-1) + 2, and one more step gives 3x·2^(n-2) + 1.
And seems magical by considering that there are many consecutive ranges of numbers that take exactly the same number of steps to reach 1. (A006667)
Notice if x 3^n - 1 gets 1 less 2 factor than x 3^(n - 1) - 1, they will get into the same path, thus having the same number of steps.
7×2^3-1 (55) → (6 steps) → 7×3^3-1 (188) → 94 (8 steps in total)
7×2^3-2 (54) → 27 → (4 steps) → 7×3^2-1 (62) → 31 → 94 (8 steps in total)
It feels like something happens simultaneously between consecutive numbers, converging until they become the same.
Especially for numbers that are close to powers of 2, like (1028, 1029, 1030, 1032, 1034, 1035, 1036, 1037, 1042, 1043, 1047): they all take the same number of steps to reach 1, as they have enough factors of 2 to eventually converge.
(i.e. 1028 and 1029 meet at 772, and 1030 is at 773. 772 and 773 meet at 580. And all these numbers meet at 94 after 19 steps.)
Then it boils down to finding out whether the numbers that eventually diverge behave differently. (e.g. While 27 itself in the above example looks weird, it "somehow connects" with 55 via its double, 54.)
It seems like some numbers have larger gravity to bring more nearby numbers together.
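(A quick illustrative check, in Python, of the x·2^n - 1 to x·3^n - 1 pattern described above; the function names are made up.)

```python
def collatz_step(n):
    return n // 2 if n % 2 == 0 else 3 * n + 1

def check_identity(x, n):
    # Claim from the comment above: x*2^n - 1 reaches x*3^n - 1 after exactly
    # 2n Collatz steps (n rounds of the (3m+1)/2 pattern).
    m = x * 2**n - 1
    for _ in range(2 * n):
        m = collatz_step(m)
    return m == x * 3**n - 1

print(check_identity(5, 4))     # 79 -> ... -> 404 in 8 steps: True
print(all(check_identity(x, n) for x in range(1, 50, 2) for n in range(1, 10)))
```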
After playing around with it for a while years ago my intuition was that if there was going to be a proof it would be a proof by contradiction.
The extra challenging thing with that, though, is that there are 2 possibilities for a counter example, and they are very unlikely to have an easy way to connect them. So it's entirely possible that it can be shown that there can't be any other loops, but not that there can't be a sequence that grows endlessly. And what's worse, it could also end up being a conditional, where if there were another closed loop then there can't be endless growth, or the other way around, but it can't be shown that both are false at the same time.
That’s why I don’t think I’ll ever think about this too seriously. It just doesn’t seem like it would be worth the effort for me. No matter how fun it is.
@@Vearru perfectly reasonable, the conjecture is a virtually endless wealth of examples to practise, but one can only interpret the same painting so many times before wanting to look at a new painting, much like the conjecture eventually grows stale for those that dwell on it too long when the goal isn't to actually solve it, but to understand it better.
The fact that everyone (including me) passed through exactly the same steps
Collatz conjecture
ζ(2)=π^2 /6
3ζ(2)/2=π^2 /4
=π^2 /2^2
π≡1
1/4+1
1/4+4/4
5/4
//5 is an odd number//
3×5+1=16
16=2^3→1
All natural numbers
Collatz conjecture
reduces to 1.
what does this mean
@
Please show it to the graduate school professor. Then it will help you understand.
@wibbuffey
Proof by "shows up, drops a bunch of shenanigans, refuses to elaborate, leaves" is a classic. What a Chad move.
If your grad school professor would find this compelling, he would also not find it surprising, and it'd have been "case closed" decades ago.
The step that lost me is where pi is congruent to 1. The zeta function when evaluated at 2 is indeed pi^2/6 (search "Basel problem" for more details), but that pi isn't some arbitrary variable or something. It's pi. As in 3.14...
After that step, for unexplained reasons, we explore 1+1/4 before evaluating only 5 with the Collatz function, then we call it a day. Maybe I'm slow but I don't understand what was done here or how it proves anything. Must be because I'm not a professor yet.
Comes across as satire if I'm charitable and embarrassing if I'm not.
I intend no offense.
@@MathIguess My comment was more a joke but I'm very confused on the reasoning. I don't see how pi is congruent to 1...? Whipping out the zeta function out of nowhere sans explanation to solve the Collatz conjecture and then making a series of incomprehensible claims is absurd
I'm a little bit confused why the heavily nested expression has powers of 3 in the denominators instead of just 3's. I think the nested structure would implicitly multiply the 3's to some 3^k when you multiply all the terms.
@@dennismuller1141 you're correct. I made a mistake while copying partly from handwritten work and partly from a previous section of the same paper.
In a nutshell, you're correct and it's a mistake.
The expression remains absurdly complicated, though, which is why I still feel that any claims of having proved the conjecture would be a claim of a method for solving a crazy equation and it still shows how wild the conjecture is (but unfortunately in an exaggerated way because of a mistake).
@@MathIguess thanks for the clarification!
An interesting idea that I had about this problem was to convert the numbers into their binary representation. By doing so you can reformulate the problem so that you repeatedly multiply the number by 3 and then add a 1 at the position of the last 1 of the binary string. So for example if I take 14 (1110), it becomes 14 × 3 + 2 (101010 + 10) = 44. It is easily provable that these problems are similar.
With this, the only thing one would need to show to prove the conjecture is that the trailing zeros of the binary string grow faster than the string itself, resulting in the zeros catching up until only one 1 is left and we have our power of 2.
I haven't proven it (but think I could), but I assume that the string's length "speed" approaches 1.5 characters / iteration for lim(iterations->infinity).
However, showing that the speed of the zeros is larger than that is way harder, and I only had some trial ideas for possible ways, by looking at the behavior of clumps of 1s in the binary strings when applying the function. They started to shrink and leave patterns that could lead to the reason why the speed of the zeros is >1.5, but this is just my imagination...
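(An illustrative Python sketch that measures the two "speeds" discussed above along a single orbit; it demonstrates the bookkeeping only and proves nothing, and the function name is made up.)

```python
def growth_stats(n, iterations=10000):
    # For the map "3n+1, then strip trailing zeros", track the total number of
    # bits the 3n+1 step adds (about log2(3), roughly 1.585, per step) versus
    # the total number of trailing zero bits stripped afterwards.
    while n % 2 == 0:
        n //= 2
    added = stripped = 0
    for _ in range(iterations):
        if n == 1:
            break
        m = 3 * n + 1
        added += m.bit_length() - n.bit_length()
        z = (m & -m).bit_length() - 1        # count of trailing zero bits
        stripped += z
        n = m >> z
    return added, stripped

print(growth_stats(27))            # bits added vs bits stripped along 27's orbit
print(growth_stats(2**64 + 1))
```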
I've seen this one a few times too, but it's usually from those with a computer science perspective. Would you say you're more of a computer scientist than a mathematician? (Of course, one can be both and the lines are arbitrary and blurry if we get theoretical enough!)
@@MathIguess I guess rather a CS guy, I really enjoyed math in school but then went into informatics which I'm currently studying as well
@@pandorairq I wish you all the best with this!
Me personally I would've streamlined the definitions of Dn slightly by setting D0 to be the set of all numbers where the odd factor is only 1. That way, you could avoid doing duplicate work between what was your D0 and general Dn's (basically, your D0 is my D0 and D1). It also makes sense intuitively that n is the number of odd iterations done thus far. You would then be able to do the working out on the D1 case just after proving that D0 is the set of powers of 2.
@@minamagdy4126 this seems like a decent improvement to me, I appreciate the insight ^^
It's as easy to prove as it's easy to disprove.
I side with Paul Erdős on this one: our current mathematics isn't advanced enough for this problem.
Or at least I believe it was him who said that.
Please correct me if I'm wrong, but it seems to me that, in the formula for D_k at the bottom of 2.2.2, the denominators should be all exactly 3, as opposed to powers of 3.
@@juandesalgado another pointed this out and I think you're correct - I was partly copying from handwritten work where brackets were expanded (hence the exponents) and partly working from the previous parts of the paper itself and I think I confused myself x_x
There are lots of sensible variations of the Collatz problem. In the basic version, is it true that you will always end up in some cycle (so the sequence will not become unbounded), possibly not the 1-4-2-1 cycle? If so, is this also true if we replace the 3k+1 by ak+1, where a is any odd number, and if so, is there a bound on how long such a cycle can get, expressed as a function of a?
@@tommyrjensen I honestly don't know, even the work I've done that doesn't appear in this paper doesn't look into that. In all my explorations, I always focused on this specific version only.
I think that Terrence Tao's work on this conjecture sort of answers what you asked, but not for other versions. It's definitely something to look at ^^
@@MathIguess Tao did do something. So far as I remember, he proved a statement about something being true for almost all numbers, not something definite. I could also be wrong here. What I am trying to say is that there are things one can try to prove that are formally easier than the full conjecture, like if you can definitely eliminate sequences that go unbounded, or have very long cycles. EDIT: the short version states something weaker, that Tao proved roughly speaking that almost all numbers 'almost satisfy' the Collatz conjecture.
@@MathIguess Wiki says that a certain generalization of the problem is provably undecidable, so there's a non 0 chance Collatz itself is _very_ hard.
I liked it - seems like there's definitely some halting problem logic that you got into there haha. I.e. it's basically impossible to prove whether a recursive program like the one you formulated will ever terminate.
I might try to show that no four numbers in a row, e.g. 28, 29, 30, 31, combine before reaching 4, 2, 1. It's very complicated so far, but I'm getting closer to showing this. Also, using the Collatz conjecture and another dumb result I made from it, I managed to prove (2^a((q+1)/2) -1)/q is always an integer for a(x) = A002326 on OEIS, meaning that if there are infinitely many primes in a(x) from composite q, there are infinitely many composite Mersenne numbers with prime power.
I think this is missing a handful of obvious things that everyone tries:
Firstly, you only looked for length 1 cycles of odd factors, and found no solutions other than the 1-2-4 cycle (for positive integers), but a natural next idea is to look for cycles of longer lengths. For example, say I want to find cycles of length 2, then I get the equation:
3((3x + 1)/2^i) + 1 = 2^j x
This equation needs x to be odd and (3x + 1)/2^i to be odd, but it's actually easy to see that these restrictions are the same as requiring that i and j both not be 0 (or more cleanly, i >= 1 and j >= 1). Also we require x > 0, i.e. x >= 1.
Solving for x, we get x = (2^i + 3) / (2^(i + j) - 9). x > 0 implies 2^(i + j) - 9 > 0. We see that (2^i + 3) / (2^(i + j) - 9) = x >= 1 implies that 2^i + 3 >= 2^(i + j) - 9 >= 2^(i + 1) - 9 by j >= 1. Rearranging, 12 >= 2^i, hence 1 <= i <= 3. Checking the finitely many remaining (i, j) pairs shows the only solution with x >= 1 is x = 1 with i = j = 2, which is just the known 1-4-2 cycle again, so there are no new cycles of length 2.
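The remaining cases are also easy to machine-check; here's a quick sketch (made-up function name, just enumerating the allowed i and j):

def length_two_cycles(max_i=3, max_j=20):
    # solve 3*((3x + 1)/2^i) + 1 = 2^j * x for odd x >= 1 with i, j >= 1
    hits = []
    for i in range(1, max_i + 1):
        for j in range(1, max_j + 1):
            den = 2 ** (i + j) - 9
            if den <= 0:
                continue
            num = 2 ** i + 3
            if num % den:
                continue
            x = num // den
            if x % 2 == 1 and (3 * x + 1) % 2 ** i == 0 and ((3 * x + 1) // 2 ** i) % 2 == 1:
                hits.append((x, i, j))
    return hits

print(length_two_cycles())   # only (1, 2, 2) shows up: the 1 -> 4 -> 2 -> 1 cycle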
I think that there's a different comment that I've previously said I want to pin, but this comment takes the cake. I really appreciate this insight, and you make excellent points throughout.
Thank you for taking the time to do so, because I can link people to this comment when their proofs go down one of these paths I didn't explore in the video. I have seen a few "proofs" of this conjecture but never explored the paths you mentioned here in much detail myself, so I wasn't as confident towards those as I was with the main one in the video, if that makes sense. Still, you are absolutely correct, I should have included this.
I am not yet able to pin comments, but I'd like to pin this one when I can (assuming I remember to come back and actually do it).
I will note that my next (likely my final) video about the conjecture will indirectly address all of these paths, but I wish I directly addressed them in this video. Thank you again for such an excellent contribution! ^^
Cool paper, cooler channel
... not to mention the cool and insightful comments section. People have some very interesting approaches.
It seems like it should be doable for each reasonably small k to (separately) show that there are cycles of length k (other than the one that is known).
I forget what form the constraints are in for that?
Oh, isn’t there something like, if you take x mod 2^k , then the result of that will tell you what type of steps the first k steps will be when starting at x?
And so, for each of the 2^k possible results of x mod 2^k , you get a degree 1 equation in x which a x would have to satisfy if it started a loop,
and then if you calculate these out, you will find in each case that has been tried so far, that the single solution is either a fraction or is negative, or is not equal to the chosen thing mod 2^k ?
I imagine better techniques than this have been used to put very large lower bounds on what the smallest cycle other than the known one could be, if there are any.
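Roughly, the calculation I have in mind looks like this (a sketch in Python with exact fractions; the function name is made up, and "steps" means the plain 3n+1 / n/2 steps):

from fractions import Fraction

def cycle_members(k):
    # positive integers that start a Collatz cycle of exactly k steps
    found = []
    for r in range(2 ** k):
        a, b = Fraction(1), Fraction(0)   # the composed map is x -> a*x + b
        v = r                             # r mod 2^k forces the first k parities
        for _ in range(k):
            if v % 2:                     # odd step: x -> 3x + 1
                a, b = 3 * a, 3 * b + 1
                v = 3 * v + 1
            else:                         # even step: x -> x / 2
                a, b = a / 2, b / 2
                v //= 2
        if a == 1:
            continue                      # cannot happen for k >= 1, but be safe
        x = b / (1 - a)                   # fixed point of the composed map
        if x > 0 and x.denominator == 1 and x.numerator % 2 ** k == r:
            found.append(int(x))
    return sorted(set(found))

for k in range(1, 13):
    print(k, cycle_members(k))   # only members of the known 1-4-2 cycle show up for small k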
This is for sure something I wouldn't have arrived at in undergrad. Indeed, though, I've come across information that suggested that they were able to put bounds on cycles, much like what you're talking about.
That, to me, is the beauty of this conjecture. There's always more to explore. More examples where techniques can be applied. It makes me sad how often people call this a waste of time.
Fun is never a waste of time :D
Just realized I made a typo in my first sentence, where I meant to say “that there are *no* cycles of length k (other than the one that is known)”. I guess maybe the rest of the comment made what I meant to say clear?
Hm, are you sure you were unlikely to come up with it at that time?
I noticed it, iirc, when I looked at how the sequence works on 2^N + x , vs how it works on x (first looked at x=1), when both are expressed in binary. And then going from that to looking at how it acts on C * 2^N + x .
I imagine that you probably had a decent chance of trying this same idea and it was just by chance that you didn’t try this? I mean obviously I don’t know you well enough to be confident in saying that, but still.
@drdca8263 Back then, I only worked on the problem as stated, in base 10 with only natural numbers and the like. And it felt like I was making progress (I actually made a mistake which I only noticed this year when revisiting what I did back then).
Due to that mistake making it look like I could continue making progress, I didn't explore other avenues such as using binary or even something as simple as comparing 3n+1 to like 5n+1 or the like.
It's nice of you to believe I'd get there eventually though xD
13:06 I think there's a mistake in this formula. It should either have a plain 3 (no exponent) in each denominator, or something different from what's shown here
Hmm, I'll double check it to make sure. It's *very* easy to make a mistake with how complicated this equation gets, so I wouldn't be surprised. Thank you for mentioning it, I'll get back to you once I've checked ^^
I think that to prove this, one would need to show that any sequence whose length tends to infinity also tends toward the expected average distribution of powers of 2 and 3; this automatically leads to a contradiction, meaning no sequence can tend to infinite length.
Oh yeah the collatz conjecture? I proved that. It's just too long to write in the margins.
@@tankfire20 society if notepads and books had wider margins moment
I'm glad to say that I actually went down a completely different route lol
Did you look at it in binary or another base? ^^
@@MathIguess good guess, lol
I did indeed look into binary. Since any binary number that is divisible by 2 ends with a 0, division by 2 is as simple as just ignoring the trailing 0s. I've noticed that if we do ignore the trailing 0s, any non-trivial number starts and ends with a 1. And thus I became interested in the digits between these two 1s: the patterns, the count, and how the entire number evolves after applying a single collatz step. (I define a collatz step as 3n+1 followed by removing all trailing 0s.) I really didn't care about the actual value of the number, just the patterns of 1s and 0s, and whether a number gains or loses a digit after one step.
My thinking was, if I can prove that any number with n digits will eventually have fewer than n digits after enough collatz steps, then the conjecture is true. Suffice to say I did not succeed and gave up lmao
I am mainly a programmer though, not a mathematician, so my terminology may be a bit wacky :P
I also tried the collatz conjecture as an undergrad and failed badly. I did try a few more paths that you didn't mention, but in the end, I didn't go anywhere.
I feel you misspoke when you said that if there's 1 counterexample, we don't know if there are infinitely many. You could easily show that there are, but the proportion of counterexamples compared to non-counterexamples seems to tend to 0 as N goes to infinity. I know because I tried that approach and failed... Maybe you meant that only an alternate loop counts as a counterexample and not a starting position that leads to infinity or into a loop? I don't know...
Indeed, I dropped the ball on that part. I was thinking too much in terms of odd factors and wanted to say something about the length of a "non-trivial" loop if it exists: that it intuitively should have infinitely many members (and therefore not be a loop at all). But I ended up fumbling that thought and spoke instead about how many counterexamples exist, rather than how many unique odd factors of counterexamples exist, in some sense.
I'll perhaps pin a comment with this correction in the future, but it just goes to show that my intuition can be awful xD and since that was the point I wanted to make, I think the harm done is minimal at least xD
The alternate approaches you mentioned here aren't ones I explored, but it does sound interesting ^^
How does a Diophantine equation have "more unknown variables than equations" 😵💫? Is it not actually just an equation with all integer coefficients and all integer valued variables?
@@tommyrjensen yes
I'm not sure if I should have phrased it differently, but we're used to solving systems of equations, and when the variables are real-valued, having more unknowns than equations typically signals infinitely many solutions. For Diophantine (systems of) equations, though, we might still have unique solutions despite having more variables than equations in the system.
Does that make sense?
@@MathIguess So you do mean "system of equations", that helps to understand what you wrote in 4.1. Notwithstanding that Diophantine is suppose to refer to equations with all integer-valued variables. Also the equation x²+y²=0 has more than one variable, and yet a unique solution for real numbers. Maybe you think of restricting to linear equations only?
I went a little bit further in a different direction with my proof, but it also failed. I expected it to fail, so I enjoyed the process more than just getting disappointed.
This resonates with me, the exploration is more important than the results sometimes ^^ especially if the goal is fun or learning
Not a proof, but a pretty good heuristic argument (to me) comes from the binary representation of n. Any power of two will collapse immediately to a 4-2-1 cycle (because all the trailing bits are zero.) About half of the non-trivial cases divide n by a power of two, the other half will yield 3n+1, which is always even, so after two steps n will be at most (3n+1)/2, and this does not occur often enough for _random_ n as input to blow up. We can drive the probability of n increasing down further by analyzing longer sequences of steps (obviously, in the limit, we see the conjecture: all numbers seem to tend to a 4-2-1 cycle, but this is begging the question.)
Edit: to make a theorem of it, all n = k times a power of two trivially collapse in collatz(k) + trailingzerobits(n) steps, because we just divide out the factors of two first. So, if k does not blow up, then k*2^p for any p will also not blow up. Since apparently all small integers k satisfy the conjecture, we have all of them _and_ all of their products with powers of two, which is a massive family of numbers that are all guaranteed to terminate.
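That last claim is easy to sanity-check (a tiny sketch with a made-up helper, counting plain 3n+1 / n/2 steps):

def steps_to_one(n):
    # number of Collatz steps (3n+1 for odd, n/2 for even) until n hits 1
    count = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        count += 1
    return count

# if k reaches 1, then k * 2^p reaches 1 in exactly p extra halvings
for k, p in [(27, 5), (97, 10), (871, 3)]:
    assert steps_to_one(k * 2 ** p) == steps_to_one(k) + p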
If n is a counterexample then n2^k are obviously also counter examples, but I don't know about odd numbers
@@yoavshati yes, this was an oversight, I was thinking in terms of odd factors when working on that section but almost trivially there would be infinitely many counter examples if we have one counter example. I should have been more clear on that.
If the counterexample isn't divisible by 3 there are infinitely many n2^k that are 1 mod 3 so (n2^k - 1)/3 is an odd counterexample, I'm pretty sure
Agreed. I'd pin this comment if I could, but there's a verification process required for stuff like that, which I haven't put time aside for yet.
something tells me we would have to find a proper pattern in prime numbers to prove this conjecture
ngl, primes are so annoying to work with in math sometimes, like you can have a number with many factors but then you add 1 and suddenly it's a prime
also i guess it's safe to say that any proof must have something like "for any given N where N is ... "
@lool8421 Someone on reddit said something along the lines of being able to show that any counter examples to the conjecture would have to be prime numbers, in terms of what I called "odd factors" in this paper. Unfortunately, they refused to elaborate and left like a Chad so I'll never know what they meant.
Seems relevant to what you're saying though.
Great idea presenting a paper this way. I had trouble publishing and this is nice to see. I didn't see the Collatz conjecture itself defined for some time though; does it presume the sequence terminates? Here is a proof for you, and I see you ("[spoiler]: everyone") were really close.
1. Prove numbers 2 and 1 diverge. They do, since, for example, 2^n and 1^n only get further apart.
2. Prove that odd numbers are in the category of 1^n. They never converge exactly with the number 2, so are approximately of value 1, like an irrational.
3. Prove the Collatz-conjecture format of 3n+1 sometimes oscillates between even and odd. Multiples of 3 do, and any fixed additive won't alter this behavior.
4. Show any rational convergence of two diverging directions is a line. This exists because 2 and 1 diverge, so any line between them must grow to infinite length.
This line is what you have in "3n" for any n. This is proof that all counting numbers will oscillate between even and odd, and that the Collatz conjecture for any finite value terminates, but the formula (non-distinct, approx. 3x+b) is infinite. You're right, pushing yourself to try difficult problems independently is wonderful.
3n + 1 doesn't work from every starting point if you vaguely define it. For example: 0 doesn't go to 1. -1 goes to -1 but not to 1.
Not sure if you missed it, but I said that f is a function from the natural numbers to the natural numbers.
I assumed that laypeople and undergraduates would typically take this to be {1,2,3,...} but indeed, it is clearer to say "the natural numbers excluding zero" or the like, since some mathematicians use the same symbol to denote the set {0,1,2,3,...}.
Still, -1 is not on the table regardless of how you interpret \mathbb{N}.
Pretty sure there's a simple proof that no ever-increasing sequence can exist, either monotonic or not. In binary, a non-zero positive finite integer is a span of bits bounded on the left and on the right by a left-most '1' bit and a right-most '1' bit. (They can both be the same bit.) Instead of dividing out factors of 2, let '0' bits accumulate to the right as you perform "*3+1". Every time you times-3-add-1, you accumulate at least one '0' bit on the right, and possibly more. Don't remove them, just turn the next iteration into "times-3-add-1-at-the-current-right-most-1-position".
So, start with 11 decimal = 1011 binary. 11*3+1=34 decimal, 100010 binary. Let's number the bit positions: Position [0] is the units bit position, position [1] is the two's bit position. Every bit position number is the power of 2 that it represents. From 100010, we don't divide out 2; on the next iteration, we just inject the "add-1" into bit [1] instead of bit [0]. Let's say that bit [R] is the right-most '1' bit (and [L] is the left-most), we change the formula to "times-3-add-[R]" (meaning "times-3-add-2**R") (times-3 doesn't change the position of the right-most '1'). So from 100010 we get 11 * 100010 + 10 = 1101000 (all in binary). Now R=3. Again, we don't divide out any factors of 2, we just let the zeros accumulate and update R. 11 * 1101000 + 1000 = 101000000, etc.
In this new statement of the problem, we stop when we get a single 1 bit and all zeros to left and right. A single 1 with all zeros to the right means a simple power of 2, and if we did remove all of those factors, we'd be left with just 1.
Now in this new view, every time we times-3, the left-most 1 moves further left. But also, every time we add-[R] to a number with a right-most 1 in [R], the right-most 1 also moves left. The left-most 1 always moves at least one position left, but never more than 2. The right-most 1 also moves at least one position left, but it's not capped at 2 positions, [R] can move all the way to [L] in a single iteration. e.g. 5 decimal = 101 binary. 11*101 = 1111 ([L] moved left 1 position), but 1111+1 = 10000 ([L] moved another position, but [R] moved 4 positions in a single step - and now [R] = [L]). And this sequence stops: We have a number with a single 1 and all zeros left and right. If we divide out all the factors of 2, we get 1. (5->16->8->4->2->1.)
For an ever-increasing sequence, L must move left at least as fast as R indefinitely, and that isn't possible. [R] will always "catch up" with [L] over the long term. To understand why, consider a number with just two 1's, at [L] and [R], but far apart. Like 100001. "Times-3" affects both 1's the same: 100001 * 11 = 1100011. But "add-[R]" only affects [R]: 1100011 + 1 = 1100100. [L] and [R] started 5 bits apart, now they're only 4. "Add-[R]" is like a carry-in from the right; there was no carry-in to the 11 at the left. Depending on the bits in between [L] and [R], there *can* be a carry-in at [L], but not always. But there's *always* a carry-in at [R]. The original [R] 1 "grows" faster than the original [L] 1, and so [R] must catch up to [L] eventually.
A non-looping non-terminating sequence is impossible. The *only* possible failure of the conjecture *must* be a looping sequence.
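If you want to watch [L] and [R] for yourself, here's a throwaway sketch (Python, made-up name) of the reformulated step n -> 3n + 2^[R]; it just prints the two positions and their gap:

def lr_trace(n, max_steps=60):
    # iterate n -> 3n + 2^[R] (never dividing out 2s) and watch the 1-bit positions
    for _ in range(max_steps):
        r = (n & -n).bit_length() - 1     # position of the right-most 1 bit
        l = n.bit_length() - 1            # position of the left-most 1 bit
        print(f"{n:b}   L={l} R={r} gap={l - r}")
        if l == r:                        # a single 1 bit: a pure power of 2
            break
        n = 3 * n + (1 << r)

lr_trace(11)   # reproduces 1011 -> 100010 -> 1101000 -> 101000000 -> ...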
@@kevinjohnston8399 intuitively, I'd gladly grant .. all of that.. but rigorously... I'd be forced to insist that you somehow prove that there does not exist any string of 0's and 1's such that [L] somehow reliably matches or even exceeds [R] in terms of their "speed". You motivated this using examples, but how can we be sure that 001...1 doesn't have this property, regardless of what's in the ... part?
@@MathIguess Well first of all: Thanks for reading it! Second, I actually thought, "I'm sure this is true, but it surely must be a widely known thing", so I didn't include all the details. I did more details at the time, to convince myself. So give me a little time to write it up "all pretty like", and I'll be back soon.
While I can’t yet prove it, I assert there is only one (positive number) loop that contains positive numbers, 4, 2, 1… If you extended it to include the negative numbers, there are two more loops, one that includes -1 and one that includes -5. We can say there is no reason to believe there are any other loops.
But simply proving there aren’t any other loops is insufficient to prove the conjecture; you also need to prove that there is no number whose sequence increases to infinity.
It’s fun to fiddle with when you have time to burn, but it’s unlikely to tie in to other math very well, even if you could find a proof. Personally, I suspect a proof (if it exists) might be found by looking into convergence relations between the 2-adics and 3-adics.
What I have always wondered about is that everyone tries to prove it in base 10, but I always felt like it has to be a binary problem. I don't claim to have solved it, I'm not capable of that. I just want an honest answer and to write down my thoughts somewhere, just so I stop thinking about it.
To me it always seemed binary, since it looks like a tree where the new branches start from an odd number and the branches grow by multiplying by 2. The main loop is even just the base-2 chain. That's why I always found it to be a binary problem. As a programmer, I see division by 2 in base 2 as just a digit shift, the same way dividing by 10 is in base 10. So representing the numbers in base 2 won't change the number, only the positions of its digits, and division becomes trivial. The multiplication by 3 is a little different, but you can break it up into 2 steps: a bit shift to the left (a multiplication by 2) plus adding the original number onto it. Don't forget the +1. What's interesting is that all numbers that end up directly in the 2^n branch have an alternating pattern of 1s and 0s. (Reminds me of how computers represent some kinds of numbers with complementary representations for positive and negative, but that doesn't matter.) What you can do now is create a graph to represent that "multiply by 2, add the original number, add 1" operation. The states of the graph would be the next digit of the resulting number, while the edges would be whether the current digit of the original number is a 1 or a 0. If you do that, you will find that there are only 2 ways to create an alternating pattern of 1s and 0s. You can also see that one of those 2 ways needs certain conditions to apply. To sum up, one of those 2 ways causes the number to grow, the other causes it to shrink. It SEEMS that every finite binary sequence of 1s and 0s will end up, after finitely many steps, in an alternating pattern of 1s and 0s, since you have a spot in the sequence where there is a digit change, either from 1 to 0 or from 0 to 1, which causes the pattern to start forming after enough repetition. Taking that, it seems the only things that won't fall into the 2^n branch after a finite number of steps are 0, since there is no spot in an all-0 sequence where the digit changes to a 1, and an infinite string of 1s, which wouldn't end up in the 2^n branch for the same reason, since there is no alternating spot in that infinite sequence. Every other finite number has an alternating spot after its last 1 digit, since infinitely many 0s lie ahead of it.
I have written this down in the hope that maybe someone reads and understands my gibberish, with its spelling and grammar mistakes, and can tell me what I am missing. Idk, I just felt like I should write it down, since I am sick of myself. I never really feel able to share my thoughts with others in real life, and now was the time for it.
@@timetr4veler949 this is definitely an interesting lens to view the conjecture through. I'm glad you were able to share your thoughts here ^^
I think that you'll find that *proving* (to the extent that mathematicians demand proof) that the pattern persists for all finite strings of binary digits would be a very tricky endeavour. Indeed, it's easy to convince ourselves that the conjecture is true, but the proof has escaped us for a very long time.
I'm glad that people are considering things I'd never have thought to try though ^^
Genuine question, why use a twitter clone as your social instead of, you know, the actual twitter? 😅
@@ΓιάννηςΤσίντζας I'm transgender.
23:57 There is actually a proven lower bound on loop length in the hundreds of billions already.
That's awesome! I bet they had to use some really tricky techniques to get there.
@@MathIguess Look up a theorem by Crandall. The current state is that there need to be at least 1e11 (or so) elements in the loop, based on the fact that (very cleverly adapted) brute force methods (David Barina is at the front of that, currently) have shown that all numbers up to ~2E21 go to 1. Notice that numbers reach 1 relatively quickly: up to 1e20 no number takes more than 2500 iterations.
@tinnderbox3410 I'll check it out, thank you!
By the way, the Collatz tree fits well in the hyperbolic plane
Why can’t you show that if we find one counterexample you can find more? I guess it depends on how you define "example" (do you mean an odd number or a set of odd numbers?), but if we have an odd number it can be written as 3m + k where k is 0, 1 or 2, i.e. the number is 0, 1 or 2 mod 3. If it’s 2 mod 3 then we can multiply it by 2, subtract 1, and get a number divisible by 3 (2(3m+2)-1 = 6m+3 -> 2m+1, which is an odd number). We can generate an infinite amount of odd numbers by multiplying our number by 4, since if a number n is 2 mod 3 then 4n is also 2 mod 3. If instead our number is 1 mod 3, we can multiply it by 2 to get a number that is 2 mod 3, thus generating infinitely many numbers again by the same process (that our new number that is 2 mod 3 is even doesn’t matter, since [2(3m+2)-1]/3 = 2m+1 is odd for all m).
If the odd number is 0 mod 3 then it can be written as 3m where m is odd. Apply Collatz to it and get 9m+1, which is 1 mod 3. This number is even, but if you divide by 2 you get a number that is 2 mod 3, thus allowing you to generate an infinite number of odd numbers.
So since we can generate an infinite number of odd numbers from any odd (or even) number that is 2 mod 3, and since we can get to a number that is 2 mod 3 from any odd number that is 1 mod 3 by multiplying by 2, or from 0 mod 3 by multiplying by 3, adding 1 and dividing by 2, there must be an infinite number of odd numbers connected to every odd number. This doesn’t prove Collatz, but I don’t see how it doesn’t show that if some odd number is disconnected from 1 via the Collatz function, there must be an infinite number of odd numbers disconnected from 1.
Am I missing something?
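For what it's worth, here's the little generator my argument corresponds to (a rough sketch, function name made up): given n, it lists the odd numbers m with 3m + 1 = n*2^k, i.e. odd starting points that land on n after one odd step. If n really were a counterexample, every m in the list would be one too.

def odd_preimages(n, max_k=20):
    # odd m such that 3m + 1 = n * 2^k for some k >= 1
    result = []
    for k in range(1, max_k + 1):
        t = n << k                    # n * 2^k
        if t % 3 == 1 and ((t - 1) // 3) % 2 == 1:
            result.append((t - 1) // 3)
    return result

print(odd_preimages(13))   # e.g. 17 (since 3*17 + 1 = 52 = 13 * 4), then 69, 277, ...

If n is divisible by 3 the list comes back empty, which is why the 0 mod 3 case needs the extra Collatz step first.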
You're correct, I tripped over my thoughts a bit by that stage of the exploration and I was indeed thinking in terms of odd numbers and odd factors, rather than counter examples as one would generally define them (of which we can trivially find infinitely many since if k is a counter-example, so is 2k). I was trying to say that the odd factors might just hop between a few different numbers and form a loop other than 1-4-2 and there might not be infinitely many odd numbers in the set of odd factors of counter examples.
It is as you say, though, we can indeed generate more counter example odd factors using those methods. I was just wrong on that part.
Thankfully, I was wrong in a section where my point was that my intuition is unreliable xD
I appreciate this comment ^^
@@MathIguess I want to add that I fully agree with your somewhat controversial take that time spent on Collatz is not wasted. It seems most people warn others to stay away from these famous open problems, but like you I've learned a lot from attempting them. I agree with the warning that after a certain point you might get stuck and bang your head against a problem that is beyond your abilities and as such your time would be better spent looking at other problems, but that is after the first few hours/days.
Spending a week or two to really get a feel for the nature of what is tricky with Collatz or Goldbach or other famous problems is both good practice of one's mathematical skills and a lot of fun. For me, understanding the difficulty of predicting position in the binary carry sequence gave me a fundamental appreciation of why Collatz is a tricky function. Likewise, understanding the difficulty of the union of partial cyclical groups mod p made me realize why Goldbach is so easy to prove for individual even numbers but so hard to prove for all even numbers (greater than 2).
I'm not a mathematician, I'm a low-level programmer, so rather than thinking of binary and logic in terms of numbers and math I see numbers and math in terms of binary and logic.
The collatz conjecture looks very different in binary.
In order to optimize the calculation I defined your "extract odd factor" as "strip trailing zeroes", which can be coded as 2 machine instructions (tzcnt;shr;) rather than a loop. The collatz cycle is then "strip trailing zeros", 3n+1, repeat (since strip trailing zeroes always results in odd, and 3n+1 on odd n is always even).
If you then print out the sequence generated in binary (I found 0,1 notation made it harder to see so I used -,* with blank for leading zeroes instead, I also only printed the sequence after the "strip trailing zeros") you start to notice patterns.
The bit-length of the numbers only really grows if there's a long sequence of 1's at the right end (I call this "the origin", but I mean the Least Significant Bit). For each 1 in that run the total sequence can grow by (on average) log2(3)-1 bits, while the bits get consumed. Runs of 1's anywhere else in the number have no influence. This is because 3n+1 grows the sequence by log2(3) bits, and the minimum that the "strip trailing zeroes" will strip is 1 bit. A sequence of 1's next to the origin is equivalent to a 2^n-1, and all numbers of that type result in a 2-shorter run of 1's with a 10...01 at the ends, e.g. 31 (0b11111) * 3 = 93 (0b1011101) or 63 (0b111111) * 3 = 189 (0b10111101). Adding 1 to these sequences moves the final 1 in, leaving a 1-shorter run of 1's with a single zero after it, which gets shifted back to the origin. The single zero is the important part for the growth of the number. Any interruption to the sequence of 1's breaks the process, even just a single zero bit. The later 111's fizz off noise from both ends, the origin 111's fizz from one end, the gap between them widens...
There are also some slow-growth patterns that are sequences of 111's just 1 extra bit away from the origin, but these barely grow (I think it's log2(3)-1.5 bits per cycle) and they consume their 111's at twice the rate, so I don't think they matter much.
Bits beyond any run of 1's next to the origin just don't matter as far as growth is concerned, and sequences next to the origin get consumed. The Proof, or disproof, therefore, lies in the length and frequency at which these sequences of 1's get produced. The length of a sequence of 1's that can be naturally generated by 3n+1 seems to be limited to about 8 (just from memory I'd say upper bound is 10 bits of 1's ... should be easy enough to get exhaustive data, or rigorously prove it's impossible to be more, though) and they must be generated in the first log2(3)·n bits in order to generate any net growth (with (log2(3)-1)·n 0-bits between the n 1's and the origin), and at a frequency that is high enough to counteract the general attrition of bits between them.
With these things said it seems to me that the absence of counterexamples in the range that has already been tested (I think it's beyond 64 bits?) is sufficient proof: if there were any sets of origin-adjacent bits that could reliably and repeatedly generate sequences of 111's frequently enough to cause infinite growth, they'd already have been found.
As I said, I'm not a mathematician so I don't know what constitutes "proof" in this context, or how to express a mathematical proof in a way that mathematicians would accept, so let me know if there's something you don't understand, or as a mathematician seems unsatisfactory, and I'll try to elaborate.
here is an example of the binary collatz sequence (copy-paste to a monospace text editor to regain alignment):
$ ./collatz 0xffffefffff
starting point: 1099510579199
###################-####################
#-##################--###################
#---################-##-##################
##-#-###############--#--#################
#-#----#############-#-###-################
####--#-############----##--###############
#-##-##---##########-#--#--##-##############
#---#---#-#-########-###-###-#--#############
##--##-#-----#######--##--#-####-############
#--##--###---#-#####-##--##---###--###########
###--##-#-#-#---####---##--#-#-#-##-##########
#-#-##--#######-#-##-#-#--##-------#--#########
#------##-######-----######--#------###-########
##----#-#--####-#---#-####-#-##----#-##--#######
#--#---#####-##-###-#---###-----#--#----##-######
##-##-#-####--#--#-###-#-#-#----##-##--#-#--#####
#-#--#----##-#-###---#-#######--#-#---#-#####-####
####-##--#-#----#-#-#---#####-#-####-#---####--###
#-###---#-####---######-#-####----##-###-#-##-##-##
#---#-#-#---##-#-#-#####----##-#--#-#--##----#--#--#
##--######-#-#------###-#--#--###-#####--#---##-###
#--##-####-#####----#-#-###-###-##--###-#-##-#-#--##
###-#--###--###-#--#-----##--##---##-##-----######-#
#-#-####-#-##-#-###-##---#--##--#-#-#---#---#-#####
#-----###----#----##---#--###--#-######--##-#---####
##---#-#-#---##--#--#--###-#-##---####-##--###-#-###
#--#--######-#--#-##-###-##-----#-#-###---##-##----##
##-###-####-####---#--##---#---#-----#-#-#-#---#--#-#
#-#--##--###--##-#--###--#--##--##----########--###
#####--##-#-##--####-#-#-###--##--#--#-######-##-##
#-###-##-#-----##-###------#-##--#-###---#####--#--#
#---##---###---#-#--#-#----#----##---#-#-#-###-#-###
##-#--#-#-#-#--####-####---##--#--#-#-------##----##
#--###-#########-###--##-#-#--#-##-####-----#--#--#-#
###-##--########--#-##--######---#--##-#----##-###
#-##---##-######-##----##-####-#--###--###--#-#--##
#----#-#-#--#####---#--#-#--##-####-#-##-#-#-#####-#
##---#######-###-#--##-#####-#--###----#------####
#--#-#-######--#-####-#--###-####-#-#---##----#-###
###-----####-##---##-####-##--##-#####-#--#--#---##
#-#-#---#-###---#-#-#--###---##-#--###-###-##-##-#-#
######-#---#-#--#######-#-#-#--####-##--##--#--#
#-####-###--#####-#####-########-###---##--#-###
#---###--#-##-####--####--#######--#-#-#--##---##
##-#-#-##---#--##-##-##-##-#####-#-#######--#-#-#
#-#-------#--###-#--#--#--#--####----#####-##
####------###-#-###-##-##-###-##-#--#-####--#
#-##-#----#-##----##--#--#--##---####---##-##
#----###--#----#--#--#-##-###--#-#-##-#-#-#--#
##--#-#-#-##---##-###---#--#-##------########
#--##--------#-#-#--#-#--###----#----#-#######
###--#-------######-#####-#-#---##--#---######
#-#-#-##-----#-#####--###-#####-#--#-##-#-#####
#--------#---#---###-##-##--###-####---#----####
##-------##--##-#-##--#---##-##--##-#--##--#-###
#--#-----#--##-#-----#-##-#-#---##--####--##---##
##-##----###--###---#----#####-#--##-##-##--#-#-#
#-#---#--#-#-##-#-#--##--#-###-####-#--#---##
####--###------######--##---##--##-###-##-#-#
#-##-##-#-#----#-####-##--#-#--##-#--##--#
#---#---#####--#---###---#-#####--####--##
##--##-#-###-#-##-#-#-#-#---###-##-##-##-#
#--##-#----##-----#########-#-##--#--#--#
###--###--#--#---#-########-----#-##-###
#-#-##-#-#-##-##-#---######-#---#---#--##
#-----#------#---###-#-####-###--##--###-#
##----##-----##-#-##----###--#-##--##-##
#--#--#--#---#-#-----#--#-#-##----##-#--#
##-##-##-##--####----###------#--#--####
#-#--#--#---##-##-#--#-#-#-----##-###-###
####-##-##-#-#---###-######---#-#--##--##
#-###--#---#####-#-##--####-#--#####--##-#
#---#-#-##-#-####-----##-##-####-###-##-#
##-#-----#----##-#---#-#--#--###--##--#
#--###----##--#--###--####-###-#-##--##
###-#-#--#--#-###-#-##-###--##-----##-#
#-#-#####-###---##----#--#-##--#---#-#
#-----####--#-#-#--#---###----#-##-#
##---#-##-#-######-##-#-#-#--#---#
#--#-#---#----#####---#######-##-#
##-####--##--#-###-#-#-######--#
#-#--##-##--##---##------####-##
#####-#---##--#-#--#----#-###--#
#-###-###-#--#-####-##--#---#-##
#---##--#-####---###---#-##-#---#
##-#--##---##-#-#-#-#-#----###-#
#--####--#-#--###########--#-##
###-##-#-#####-#########-##---#
#-##--#----####--########---#-#
#----#-##--#-##-##-######-#-#
##--#----##---#--#--#####
#--#-##--#--#--##-###-####
###----#-##-###-#--##--###
#-#-#--#---#--#-####--##-##
######-##--###---##-##-#--#
#-#####---##-#-#-#-#---####
#---###-#-#--#########-#-###
##-#-#-######-########----##
#-#------#####--######-#--#-#
####----#-###-##-####-####
#-##-#--#---##--#--###--###
#----###-##-#--#-###-#-##-##
##--#-##---####---##----#--#
#--##----#-#-##-#-#--#---###
###--#--#------#####-##-#-##
#-#-#-##-##----#-####--#----#
#-------#---#--#---##-#-##--#
##------##--##-##-#-#-----##
#--#----#--##-#---#####---#-#
##-##---###--###-#-###-#-#
#-#---#-#-#-##-##----##
####-#-------#---#--#-#
#-##-###------##--###
#---#--#-#----#--##-##
##--##-####---###-#--#
#--##-#--##-#-#-#-####
###--####-#--------###
#-#-##-##-###------#-##
#-----#--#--#-#----#---#
##----##-##-####---##-#
#--#--#-#--#--##-#-#-#
##-##-####-###-#
#-#--#--###--##
####-###-#-##-#
#-###--##----#
#---#-##--#--#
##-#----#-###
#--###--#---##
###-#-#-##-#-#
(trimmed for youtube not able to take the full length)
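If anyone wants to generate this kind of printout without compiling C, here's a rough Python equivalent of what I described (not my original program, just a sketch; '#' is a 1 bit, '-' is a 0 bit, and only the stripped odd values are printed):

def print_binary_collatz(n):
    while n > 1:
        n >>= (n & -n).bit_length() - 1     # strip trailing zeros (the tzcnt; shr part)
        print(format(n, 'b').replace('1', '#').replace('0', '-'))
        if n == 1:
            break
        n = 3 * n + 1                       # 3n+1 on the odd value

print_binary_collatz(0xffffefffff)          # should give the same sort of picture as above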
@treelibrarian7618 the most honest response I can give to this is to say I'm not able to follow what you mean. Despite having made the Collatz coral animation thingies in python (with Manim), I don't know too much about programming at this level. I can say though that as mathematicians, we can check as many examples as you'd like, it still isn't proof. The standard is extremely high. Aside from this, sadly, I don't think I understood well enough to comment on the mathematical rigor or value of the rest of what you've said here. I hope that someone else comes across it and can more confidently weigh in.
@@MathIguess the basis of this potential "proof" is showing that nothing matters as far as growth/shrinkage of the number (I think of it in terms of number of bits, but it's actually scale of value) beyond the first 20 bits or so (values modulo 2²⁰).
The main thing that would need to be proved is that it is impossible for the continuous series of 1-bits required for growth to arise naturally from the multiply-by-3 operation beyond a certain size. There is only one precursor to 1111111111, and that is 0101010101, and that in turn has only one precursor (hint: its a couple of smaller groups that interact if they're a certain distance apart) and that precursor is too big for multiple groups to get together to generate longer strings of 1's. I might be wrong about the limit being 10 bits, I saw one example of 11 bits appearing later in the number (not next to the origin) but I've not seen more than 10 arise at the origin from the collatz sequence, where there are no lower-order bits to feed an extra bit into the start except the +1 part.
but as I said, I'm no mathematician, so I don't know how to phrase these concepts in ways that would impress mathematicians. I was rather hoping you'd poke holes in my idea that would help me develop it.
as you rightly point out proof by lack of counterexample is not proof, or even the start of proof, until the domain for potential counterexamples has been exhausted. This is an attempt to show that the domain for counterexamples is finite, and has already been explored: and then mathematicians would still have to find some way to show why. This is more like an attempt to show that it is mechanically impossible for the collatz sequence to expand infinitely.
Your inability to follow may be due in part to my inability to express clearly, but I feel it's also indicative of what I suspect: that the proof of collatz lies outside the standard domain of understanding of mathematicians, which is why I presented this idea. If you like I can post the code to generate the binary collatz sequences I mentioned, wherein the patterns become quite visible, and you can play and explore how the bit-sequences work. I did post an example of the output from the program, but youtube seems to have lost it. I wrote it in C, but if you think you'll have trouble working out how to compile it then I can re-write it in python.
@treelibrarian7618 you've definitely clarified a few things here. Someone once mentioned that I should be careful with the kinds of numbers that a particular language can handle. I'd want to know what size integers C can handle, and whether this is relevant to what you're up to.
Your second-to-last sentence is a classic trap in number theory proofs: "It seems to me that the absence of counterexamples in the range that's been tested so far is sufficient proof."
The most famous example of this turning out to be false is Polya's conjecture.
I reject your proof... and substitute my own!
Savage.
Taha M. Muhammad/ USA Kurd Iraq
I solved:
0- Pi Length
1- Euler Perfect Box in 2 ways
2- Collatz Sequence in 2 ways
3- Fermat’s Last Theorem on 5 pages
4- Fermat’s General Case on 8 pages
5- Cubic Root by Hand for real number.
A- Unfortunately, there has been no evaluation of my articles
at the American Mathematical Society (AMS), and I am a member. AMS blocked my email and my phone!
B- arXiv is not allowing me to submit my victory, under the excuse that I need an endorsement from a professor. I cannot find a professor who will give an endorsement. Therefore, I am under discrimination by the math societies. Otherwise, why do my victories remain hidden from the world's math societies?
@@tahamuhammad5962 that's why you'll get blocked, friend. You come across as a crazy person by starting the conversation this way. I'm some random youtuber from [NOT AMERICA] that likes mathematics. I'm not an authority and for all you know, not even an expert or anything. Just some random person.
This is the same as yelling about this at a bus stop at another passenger who's waiting for the bus.
The tree picture is the dumbest way to try and prove it. Just try to prove whether you can get a loop of size 2, size 3, and extrapolate
@@Trizzer89 I included the "corals" (I assume this is the tree picture you're talking about) purely because they look cool and relate to the conjecture. The paper itself doesn't go into that at all. Additionally, if the set D from the video is equal to the set of all positive integers, it follows that there are no loops other than (...4,2,1,4,2,1,...). I did include a bit where I showed that a loop of numbers all having the same odd factor is forced to be this loop, but I did not go further down that path.
With that said, others indeed pointed out that what you suggested is another common path and it would have been better if I had included it in the video. I agree with this.
I don't think I would call any attempted proof "dumb", though, that bit seems harsh.
21:20 I mean, technically the proof doesn't *have* to be constructive, but...
This is a case where a non-constructive proof would be very unsatisfying if it's found. At least for me. (I'd accept it but it's like knowing a puzzle *can* be built rather than seeing it built.)
❤❤❤
Is it proven that 4,2,1 is the only loop?
@@Ckrishthofpher not to my knowledge
it's not proven, it's conjectured to be true.
Excuse me, but for research purposes, how did you manage to get 19k views on a 13-day-old video?
@@berlinisvictorious two major things: Luck, and help from others.
For views, the title and thumbnail do the heavy lifting. For engagement, the intro hook and content/pace are very important. I made a promise and tried my best to keep it by saying exactly what the video is about in the first 15ish seconds.
The engagement on this one is higher than I expected, meaning that most people that see it would probably feel like it wasn't clickbait, which is important to me. I want to make content that's worth watching :)
But the summary: This particular topic has a huge market and not a lot of competition if you make a good thumbnail and title. (No offense to others but some of the titles and thumbnails aren't gonna stand out - things like "my proof for Collatz" etc are a dime a dozen.)
@@berlinisvictorious additionally, I had 200 subs when I had zero videos because I used to have like 50 videos on this channel but erased everything when I came out as trans. I think that those 200 had to help in some way.
Also just wanna reiterate the luck part. That was a huge part of this.
Here is my "proof" I thought up years ago.
( Basically it's just showing that statistically the function moves downwards, not upwards, over time)
All even numbers are divisible by 2.
So, for every odd number encountered, you get exactly one application of the 3N+1 step.
For every use of the collatz function you get an even number so it's immediately followed by a minimum of one division by two.
This means you can effectively write the collatz function as
1/2(3N+1)
Which simplifies to 3/2N+1/2, or 1.5N+0.5
That's the TRUE collatz function, because the division by 2 is guaranteed.
Now, assuming the number line has a regular distribution of even and odd numbers (I'd say that's fairly safe), then no matter what your starting number, the distribution of even and odd numbers you encounter is effectively equal (exactly equal 50% of the time, preferring odd by one 25% of the time, and preferring even by one 25% of the time, with the deviation that single number makes from 50% quickly becoming trivial the further from 0 we move).
Therefore, for every time you run the collatz function of 1.5N+0.5
You have a 50/50 chance of it bringing you to another even number.
Now, let's stop there for a second.
If you had a 50/50 chance of it simply performing the collatz 1.5N+0.5 again, OR floor(N/1.5)
Then the average movement of the function would be 0, and no matter what your starting number you would just end up in a loop.
But instead we are dividing by 2
So 50% of the time you move up to 1.5N+0.5, and that's as high as the guaranteed step can take you.
And the other 50% of the time you are moving downward by at least 0.25N-0.25 (landing at 0.75N+0.25 or lower).
Taking just these two 50/50 probability outcomes we see:
0.75N+0.75 - 0.25
But that's not really the case at all, in fact, because every time you divide by 2 you have a 50% chance of getting to do it again, so you don't really spend 50% of your time moving down by just 0.25N-0.25.
You spend 25% of your time moving down by 0.25N-0.25 (exactly two divisions),
and 25% of your time moving down by at least 0.625N-0.125.
But that's not right either.
You spend 50% and 25% as stated, but that next 25% also splits.
So you spend 12.5% of your time moving down by 0.625N-0.125 (exactly three divisions),
and you spend 12.5% of your time moving down by at least 0.8125N-0.0625 (four or more divisions).
Taking these and the following iterations into account, the average motion of the process is downward.
Thus, given enough time we can be assured the function will eventually reach the loop
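If you want to see that drift numerically, here's a rough sketch (Python, names made up, and of course sampling random odd N is only the averaging heuristic, not a proof): it tracks the log2 change of the odd-to-odd step N -> (3N+1)/2^a.

import random
from math import log2

def average_log2_drift(trials=200_000, bits=64):
    # average log2-change of one odd-to-odd step N -> (3N+1) / 2^a, for random odd N
    total = 0.0
    for _ in range(trials):
        n = random.getrandbits(bits) | 1          # random odd N
        m = 3 * n + 1
        a = (m & -m).bit_length() - 1             # how many times 2 divides 3N+1
        total += log2(m) - a - log2(n)
    return total / trials

print(average_log2_drift())   # comes out near log2(3) - 2, about -0.415

In other words, the typical multiplier per odd step is about 3/4, which is the downward tendency described above.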
A problem is that there isn’t any uniform probability distribution on the natural numbers, due to sigma-additivity. Also, showing that it happens with probability 0 is not immediately the same as showing that it never happens, so you’d have to address those somehow.
Ironic, I put out a video with my own attempted proof of the Collatz conjecture just recently!
I seem to have gotten further than you did, though maybe that further progress was incorrect! You'll have to tell me.
We have very similar lines of thinking, though you set yourself up for failure by thinking of this stuff the way you laid it out. It's way too rigorously formatted and you're not thinking about the modular relationships that are forming. Or maybe you are; that D-infinity set is impossible for me to comprehend, other than understanding what it's meant to represent.
Here are the things I proved in my video (I think I proved!):
No such loop exists, period.
We can algebraically solve every type of "movement" (2^k from one odd integer to the next is the same as all others in the list) and create a set or list of those solutions grouped together.
From this fact, I think???? I say it's true, but it's more of an "it makes sense to me 🥺", it follows that the conjecture is true. If it doesn't imply the conjecture is true, it implies it's unsolvable, or false.
(Implies here being the non-formal, subtle kind of implies)
@@elunedssong8909 I'll take a look when I have a chance and get back to you ^^
just do it backwards; for any n it will eventually hit 2^k
@@DeadJDona new proof just dropped lol
@MathIguess that's the proof, ofc it's new
@DeadJDona burden of proof reversal is a classic technique haha
@@MathIguess it's proof, not burden
@@MathIguess Solution to CC is trivial. CC is an iq-test (and once you instantly solve it, you understand this world is built on lies) for others, who can't do it, it is designed to be a conformity program. Eventually they all will say: "you solved it? take your meds" (not a joke, exact words every time). I'll repeat here too - this 6th grader solution is not allowed by BigDemia. And btw, your failed logic s****, you think inside the dot. LOL
Taha M. Muhammad/ USA Kurd Iraq
I solved:
0- Pi Length
1- Euler Perfect Box
2- Collatz Sequence in 2 ways
3- Fermat’s Last Theorem on 5 pages
4- Fermat’s General Case on 8 pages
5- Cubic Root by Hand for real number.
Unfortunately, there has been no evaluation of my articles
at the American Mathematical Society (AMS), and I am a member. AMS also blocked my email and my phone!
Why?
@@tahamuhammad5962 no clue, I'm not even American
Following
Theoretically, the conjecture could be proven by contradiction for any given number. Like if I have the number 10000000, I divide it by 2, and if I know that 5000000 will end with 4-2-1, I can say that any number that eventually comes down to any even multiple of 10000000 or 5000000 will end on 4-2-1. But I think my idea is basically meaningless.
For this to work, we would have to know that the conjecture is already true. Sadly in number theory, proofs by contradiction often add an extra step as compared to direct proofs, from what I've seen.
Could still be fun to explore though ^^
@MathIguess it was just a spontaneous thought, no analysis. Now that I think about it, it could be a good algorithm to brute force the conjecture: if you know some 350000001 numbers will all fall into 4-2-1, then for the next number, as soon as its sequence drops below our initial number we know it will fall too. It could make checking that number easier, but besides that it's useless.
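Something like this toy sketch of the early-exit idea (made-up name, and nowhere near as clever as the real sieved searches mentioned elsewhere in the thread):

def verify_up_to(limit):
    # check 1..limit, abandoning each trajectory as soon as it drops below its start,
    # since every smaller start has already been verified
    for n in range(2, limit + 1):
        x = n
        while x >= n:
            x = 3 * x + 1 if x % 2 else x // 2
            # if n were a counterexample, this inner loop would never end
    return True

print(verify_up_to(1_000_000))   # True, after checking a million starting values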
I solved it, one other guy solved it, and the solutions are being buried. I can't even mention the c-word here. It is so simple, and I don't get how it is possible to miss it. I guess the negative suggestions ("it is impossible") work really well and lock up all intuition. The "problem" was designed to drive people mad. A 6th grader can solve it.
The nice solution comes from known properties, but if I'm not mistaken, I found 4 ways to prove it. The other ways are just not that classical solution. It all happened because I watched 90 min of Terence Tao, and all he talked about was nonsense, not even close to being on the right track (none of the 4 ways). All he did was some whining: "math is not ready yet". Haha.
Once you have the solution it is fairly simple. The method is very unusual though, I had never used it before.
Then send us your solution and we will evaluate it
when it comes to proving collatz conjecture, instead of asking how, we should ask why
@@filipnagy3535 I disagree, I think that many of us already have a "why", and that's because it's fun or we're curious. These are perfectly valid reasons to try to solve a puzzle/riddle/problem ^^