Hi everyone! I’m one of the authors of the paper being discussed. Dr. Bazett, thank you so much for the fantastic video! It’s as clear and thorough an explanation as we could’ve hoped for, and I really appreciate how quickly you put it together. Just a small note: the author of the hypergraph paper’s surname is Hollom. P.S. We are not changing the title to "Debunking the bunkbed conjecture"!
A preprint showing a counterexample to the conjecture was posted on the arXiv in October 2024 by Nikita Gladkov, Igor Pak and Alexander Zimin.[2] they did this already
This is not how it works, Wikipedia editors won't change tenses without citing a reliable source, it is not an instant process and you have to wait for a reliable source to be found and create the citation, mass edits are disruptive and it is better to crowdsource it to ensure consensus.
@@DrTrefor I do. Nothing is worse than putting years into research and hearing in a side note on a conference that this method had been tried and failed a decade ago, and nobody bothered to tell someone.
"Show your working" was drilled into me by my secondary school maths teacher. (Who was able to retire on a high note, after her whole sixth form class passed A-level Maths a year early and Further Maths the following year with no grade worse than a B.) It's good advice in an examination, because even if you make a mistake you can still pick up partial marks, if the examiner can see you are doing the right operations but with the wrong values (e.g. because you missed a carry, or looked up the wrong value in your book of tables -- yes, I'm that old; but you could just as easily press the wrong key on a calculator or computer). But even in an academic paper, a bit of "I already tried this and it didn't work" might prove useful to others working in the same field. Sir Isaac Newton was able to see further than others by standing on the shoulders of giants, but there can sometimes be a decent view from atop a pile of failed experiments!
I just explained the problem to a nine-year-old. He paused for a while, gave it some thought, and then replied: "It's obvious, you just take a hypergraph and replace each triangle with 1204 spokes connected to a center, the probability is about 10^(-4000) greater. I wonder why it took them so long to figure out"
As Albert Einstein definitely once said, "If you can't explain it to a six year old and then let them disprove a standing conjecture from the 1980's, widely thought to be true, you don't understand it yourself"
The funny thing is that "counterexample with 7222 vertices" seems large, but it's actually rather small and often counterexamples are more in the region of "we found something to the order of 10^5,000,000". :D
There are 2^26,075,031 possible graphs of 7222 vertices. It is extremely rare counterexample for the given conjecture and one that is impossible to find through naïve brute force search alone
@@georgplazI'm not a mathematician, but from my experience, I would often find counter examples when working on graph theory theorems within graphs that contain at most 10 vertices. I've only had 2 situations in which I had to start randomly generating larger graphs to find counter examples and even then they had at most 50 vertices. 7000 is a lot.
funny story, they went back and checked that neural network they were using was very similar to the counterexample and this is all a lie I just made it up
It's a wonderful thing about mathematics that you can have something that everyone looks at and thinks, "This seems like it should be true. It's gotta be!" And then it's not.
Yes and no. Sure everyone says it’s true but there’s a very strange way where it IS false but it’s in hyper complex multidimensional system, that no longer resembles the original conjecture then it might as well be true
I have to admit this is over my head... I look at this, and don't see any magnitudes involved, it's probability. That it takes such a large counterexample makes me think that there has to be a simplification there, a reason. a wagon wheel topology means one point approaching a supergraph ('smeared" to me) number of vertices. Where things without magnitude tend to break is at infinity, like a wagon wheel with an infinite number of vertices bunk-bedded with another wheel of infinite vertices, you get an indeterminate comparison of infinities. If I have 3 vertices on each spoke (left, right, hub), there's a 1/3 chance of a vertex decaying goes to the hub, not changing the probability (still infinity). In 2/3 of the cases, a 3 vertex point goes to 2. You could compare that case with a graph with points that has a more even distribution of vertexes. Why am I thinking significant digits (e.g. 0,1,2,3, infinity for a wagon wheel, or (0,1,2,3,4,...)
@@alandouglas2789you’re wrong. The conjecture was that for any graph it’s true. There’s a graph for which it’s not true therefore it’s not true for any graph. Simple as that. There are limits you can put on the array that will make it so it is true, but knowing it isn’t always true is helpful, because complicated graphs are still graphs, and if this was true it would apply equally to graphs you find confusing and graphs you find perfectly reasonable.
I just love how concise the name of the paper and especially the abstract are. It's just "we did the thing - here you go". In my area (NLP), abstracts are usually very long, so this is nice to see. Also, I would usually advise against talking about papers that haven't passed peer review, but the nice thing about a proof from a counterexample like this is that you can just take the counterexample and verify it yourself.
the idea behind why the 1024 spokes works is really cool. In order for the center to not connect to a corner, but for the corners to connect to each other, every single red line would need to be intact, without any of the blue lines being intact, which is exceedingly unlikely. this makes the likelihood of the red line connecting to a bunch of other points really large relative to the likelihood that the red line stays intact on it's own. This makes the probabilities different, because in pretty much any case where a point on the red line doesn't connect to it's neighbor over a blue line, it almost certainly also doesn't connect to it's neighbor over a red line. The setup of the graph basically approximates making the probability of red disappearing higher than blue disappearing, because the cases where both lines disappear or both stay don't really assist either side in raising it's probability.
Really happy to see this! I saw this paper posted on r/mathematics on reddit, and I i tried to read through the paper, but it was difficult for me to really understand it. I did take graph theory in college, but yea, getting through the paper was tough, so it's awesome to have it explained. Thanks Dr Bazett!
We really don't know! Because their approach was to keep adding more and more spokes until you got a counterexample, it is very plausible that a smaller counterexample exists without such a brute force trick.
If the hypergraph is optimised, I don't know if finding a more optimal graph is feasible. If there is still a way to optimise the hypergraph, then maybe there's a smaller graph?
@@janzwendelaar907 Well, the hypergraph is optimized for making the smallest number of vertices in the original hypergraph. But given that you have to insert orders of magnitude more extra vertices than there were originally means that there is massive potential for finding a better solution with a hypergraph that is instead optimized for having to insert less extra vertices.
My conjuncture is that this is the smallest graph. I let Igor Pak disprove my conjuncture. When he does, I'll come up with another conjuncture for the smallest graph. At the end of this cycle we will find the true smallest graph.
And now I wonder how much likely you can reach the top bunk than the bottom. I wonder if you can do better than 10^(-4000). It seems unlikely but it might be the case.
This is actually my first ever "breaking" math video (the paper came out on friday and I was like omg I have to make a video) but I actually hope to do more. Finding the stories that are both interesting AND accessible to a UA-cam audience isn't always the easiest, but I think it is important.
Yes more math news videos would be amazing! Perhaps especially for people who once did math in university but now are doing something else, and aren't part of academic circles anymore. A great way to stay excited about math :)
Graph theory is one of the few branches of modern maths that have conjectures explainable to a layman. Some other are elementary number theory and geometric problems in optimization (like sphere packing, moving sofa) Still, most current conjectures require years of studying maths to get at least some understanding of their statements Not to mention highly specialized fields such as algebraic geometry or deep projects like Langlands program
very small (in case that there are people that don't know how to interpret the negative power, here is it written) 1/(10^6509) or 1/100000...(6500 times)...000 very small
How do you compute (or even estimate) a probability that small on a computer? I wouldn't have thought that numerical methods like monte-carlo would be good enough to get error bounds that small in a reasonable computation time.
Your restriction to the case of a single "post" is somewhat misleading. The bunkbed conjecture is known to be true if the number of posts is 1 or 2. Indeed, in the counterexample provided in the paper, the number of posts is 3.
Ya that's fair. I was trying to search for the simplest example to animate for UA-cam where we could actually explicitly compute the probabilities and see our example satisfied the conjecture. So please treat that example only as helpful for illustrating the concept, not as the right level of complexity for actually finding a counterexample.
This just furthers my position that 3 is the weirdest number. Everything gets crazy when you get to 3. Random walks from 2 to 3 dimensions, the number of platonic solids in 3 dimensions, k-satisfiability goes from easy at k=2 to np-hard at k=3 and then all higher k's being reducible to 3-sat, orbits only working in 3 dimensions, and now bunk beds becoming weird with 3 or more posts...
@@sykes1024Yes! I've long thought this since learning about the complexity differences between 2-SAT and 3-SAT in 2005. Then there's also 2-body vs 3-body, I seem to remember something about tiling in 2 vs 3 dimensions, and so on. I wonder if somebody keeps a list of these (and related) threshold effects.
Indeed. I've learned a lot about Collatz conjecture by optimizing a program that finds counter-examples! On a side note, I've even independently discovered a theorem about GCD of 2 arbitrary Mersenne numbers, by studying how Stein's algorithm works. After doing some research, I've found a Math StackExchange post proving the identity holds for bases other than `2` (`b^n - 1` instead of `2^n -1`)
This realm of uncertainty, this need for the null hypothesis, is why we still refer to 3x+1 as "collatz conjecture" not "theorem". Do not count your farm birds before they start creating graphs on the surface of their shell
If we have a v v' "post", then the probabilities are the same as if we can reach one, we can reach both. If v/v' can only be reached by a node x/x', and we have a post x/x', then the probabilities are the same as the probability that vx and v'x' has fade is the same. If we miss lot of post, this means that to reach v' you need to reach one post, AND to have a path u post + post v'. While u v doesn't have this constraint. If we have lot of post, then we can "jump" other/under missing links. At any point of the graph, the probability to get to v/v' without any should be the same as the decay is the same for the bottom/top graph. Adding post should only increase the probability we can reach the top graph (+ the probability to jump under/over a missing link, but both is true for top/bottom graph). I don't see how the conjecture can be wrong with the given issue... Just hopping the paper didn't had floating point computations errors xD.
I'm picturing a situation where you have likely breaks (single links) and unlikely breaks (1000 connections). Arranged so you're likely to need to take posts in order to get around breaks; three posts for an up-down-up sequence but no 4th post to bring you back down. And then a ton of fiddly detail to make it work right.
@@Neckhawker The idea is you are forced to take posts due to the gaps. And with an moderate sized odd number of posts to take, that helps the prime destination. Clearly not much, but enough apparently.
@@nickdumas2495 You are making an assumption which isn't true as the decay is random. From the last post, you have the same probability to be able to reach the final destination from the top and the bottom.
Exactly my thinking. For every potential path that goes past/through at least one post, the propability a path exists from the last post-having vertice should be the same on top (v') or bottom (v) (as there is no reason for it to be different, top and bottom have the same probabilities) - and if one end of that post can be reached, both can, so either both options exist until this point or none. So the probability to reach v should be the probability to reach v' plus the probability that the only possible path does not go past any post
You are a great teacher! I struggle with math and you made me understand this conjecture. I got a little lost with the hypergraphs, but I got the gist of it. Thanks so much for making me not feel dumb.
what bugs me about this explanation is that it does not give me the kind of understanding that makes my intuition shift even in the slightest way. i have no idea whether this implies that it is anywhere near possible to have a graph where the h -> v´ case is twice as likely as the h -> v case, for example. it seems like it would have been helpful to explain what makes the hypergraph work as a counterexample. like this its just "believe me, this thing somehow does the thing we are trying to understand" - and then another step where the details are unclear (i can guess that the number of spokes leads to a greater similarity to an actual triangle, but why that kind of similarity is even helpful stays unclear to me)...
yeah I kinda agree. like the entire video was about finding a counter example... and then he just says "trust me bro, this is definitely a counter example". I kinda wish an attempt at all was made to explain it
@@JavedAlam24 i am not interested in understanding why "the bunkbed conjecture is wrong", so the "disproof"-thing does not exactly deal with my issue. i am interested in understanding why "this graph has a greater likelyhood for h -> v´ than for h -> v".
@@davejacob5208 Sometimes (often, arguably) getting an intuition for why something works/doesn't work in mathematics is far harder than showing the thing itself, and doing so may not have been in the scope of the video. But if you're THAT curious, I imagine you can read the original paper to see why adding those spokes lowers the relative probability.
This is one of those rare examples where I can honestly say it does not seem the conjecture is intuitive. Glad to hear it was proven to be incorrect! Great job to the authors! 👍🏻
The way they choose to use the hypergraph to attack the problem is really interesting. From a human perspective it makes a lot of sense that if you replace a solid triangle with a lot of "closely packed" edges you can expect a similar result, but you really have to think carefully as to why this method had a chance. The way I see it, every triangle on each floor is connected to one post and two non-post vertices, and the fact that helps this hypergraph beat the generalized conjecture is that severing one triangle means two non-post vertices disconnect for the price of disconnecting one post, so what they needed is something that copies this behavior. So each triangle is replaced with N spokes connecting N extra vertices to the bed post, and connected in a single file between themselves, and the outer extras each connect to one of the non-post vertices in this triangle we're replacing. I'll call blue edges spokes, red edges arcs. So, take the leftmost vertex. If its spoke and arc connections both decay, then this vertex is both disconnected from the post and from the rest of the vertices in the ring. If the first arc stays connected, but the second arc and the two first spokes decay (less likely because we require more decays), then again the leftmost vertex is disconnected from both. Even less likely, if the first 3 spokes and the arc from vert 2 to 3 are all lost, we get the same result. Each of these possibilities is not very likely, and diminishingly so, but we just need one of these to occur, so the total probability is pretty substantial. Have these occurrences in both extreme vertices, and the new spokes-ring manage to do the job of a triangle.
@@DrTrefor Umm, Wikipedia article says that "the probabilities assigned to the posts can be arbitrary. A random subgraph of the bunkbed graph is then formed by independently deleting each edge based on the assigned probability." And I kinda assumed that the posts have non-zero probability of being deleted. I haven't read the paper, or re-watched the video (yet). But surely the posts are subject to percolation, aka. deletion?
@@nikonyrh If it can be arbitrary for the posts, then even if you force it to be nonzero for the posts, then we can just set the chance of removal for the posts to be like 10^‐10000000000, so that it wouldn't change the result if a counterexample like this exists
@@diribigal Ahaa yes, so the interesting part is that "P(u v')" can become GREATER than (not just equal to) "P(u v)", thus being a counterexample (duh). Thanks! I suppose counterexamples become larger or impossible, as the probability of deleting posts increases.
I don't know if that is even possible. If there were a intuitive explanation, then probably would not have been a conjecture for truthfullness in 5he first place
If I were to have a guess, I would say that it probably hinges on 3 factors (though I don't know why they come together in this hyper-graph with the 3 posts, but not the "hyper-edge replacement structure" at the bottom of the screen): A: There is a super tiny probability that you can go all the way along the red edges in the "hyper-edge replacement structure" to another white vertex of the (hyper-)graph. B: If you can just go along _enough_ of the red edges, then you're basically guaranteed to have _some_ blue edge along which you can go to the "hub" vertex (which is green and has a post in the (hyper-) graph. C: (Assuming that none of the red edges goes through the whole way.) The longer the red edge is that you came from, the shorter of a maximum red edge you can have on the same level that stretches back from the opposite white vertex of the hyper-edge; While there isn't the same restriction for neither the red edge stretching back from the opposite white vertex, nor other blue edges coming out through the other pair of hyper-edges on the same post. But: D: If you can do this to swap the probability on the bottom hyper-edge then it would preclude you from doing the same swap on the right hyper-edge. E: If that was all there was to flipping the probability, then that wouldn't explain why you can't do it with only the "hyper-edge replacement structure"; nor why it works for a hyper-graph like this where the number of red edges is *even.*
@@Pystro Basically, I think your first three points come together to the observation that the probability that there's a path in the bottom bunk that contains none of the green vertices is very small. The paths that do contain green vertices also can be slightly modified into paths through the top bunk. The question is then, what gives the top bunk paths the miniscule advantage that they have?
Informative video, there's just one detail that I think is missing: the vertices they're trying to connect in Hollum's counterexample are the top and bottommost.
I don't know why I got this video on my feed, but I watched the whole thing, the explanation was clear, understandable, interesting and it was fun, so I'll subscribe!
I don’t get why P(uv) would ever be more or equally likely than P(uv’), there’s 2/4 edges that connect u to v after the post, where as there’s 3/4 edges connecting u to v’, so when removing 50% of the edges, there’s a higher chance that v gets disconnected than v’.
I wish you would explain a bit more why replacing the hyper edges with that structure leads to the result. Something about the number of edges leaving a big possibility of there still being a connected path?
My guess is that if you replace the hyper-edges with an infinite number of spokes then the maths reduces to the same behaviour as the hyper edges, so the more spokes you add the more closely it matches. If this is right then I'm thinking that the existence of a counter-example for hyper edges kind of proves that there will be a counter example for 'normal' edges, you just need to keep ramping up the number of spokes. Er, maybe...?!
@@Aaron628318the hyper edges dissapear all at once though, right? But these spoke things as it tends to infinity have a really really really high chance of having a path so doesn't this mean that they behave oppositely?
This is really cool! Im deathly curious how they arrived at replacing the hyperedges with wheels with 1204 spokes. I bet it has some property m or something, but i really like the idea they kept plugging in numbers to see if they were getting closer like tuning a radio.
In the search for aperiodic tilings of the plane, one of the first examples required > 20,000 tiles. Now it's down to one tile (the hat / spectre). Looking forward to seeing the size of known counterexamples for the bunkbed conjecture shrink ... though apparently it's been proven that you need > 8 vertices ... maybe > 30?
As soon as the hypergraph disproved the Alternative Bunkbed Conjecture, the proper conjecture's days were numbered. A hypergraph is basically a normal graph where the number of vertices in each triangle has gone to infinity, so obviously there must be 'some inflection point where there are enough vertices in a normal graph of similar structure' that yields the same result. It's kind of like reverse integration, going from infinite parts to finite parts.
14:00 Saying "what if they are all wrong" is a contradiction because "all such conjectures are wrong" is itself such a conjecture so it must be wrong but when you are saying it you assume it's not wrong.
Incorrect. It isn't self- referencing. If the conjecture is "all such conjectures are wrong," and 'such conjectures' refers to conjectures commonly thought of as true without having been proven yet, the conjecture itself would not be included, since it isn't something commonly thought of as true. (Which is obvious since people still believe that at least some of 'such conjectures' to be true, therefore they can't believe that they are all false.)
Question about the problem statement with more than one “post”: are you allowed to jump back down and back up again between the levels of the graph? Or are you restricted to staying at the base in the first case, and making at most 1 jump in the second case?
You have to be allowed to jump up and back down. If you couldn't, then the bunkbed with a "bed" of a single line segment (and two posts, for 4 edges in total) would be a counterexample.
11:00 this feels like the opposite of a generalization, since this should be eqivalent to a 3-cyclic connection, connecting those 3 points, with a normal graph
@@Emma-i9x In a 3-cycle one (or two) of the edges can fail independently, whereas the hyperedge is all-or-nothing, which naturally affects the probabilities
With the initial conceptualization of edge percolations you gave, now I'm wondering about an alternate Graph setup where, rather than decaying instantly, each edge has a scalar value, which one might consider its "strength." Each percolation could instead simply reduce the strength, but the edge is still present until that strength reaches zero. I wonder what kinds of explorations you could do with _that._ There's ways you could tweak it further; like constraining what values the scalar can have, having different categories of traveler that require a certain value to cross an edge, or even using the scalar as a _probability_ that a traveler could cross the edge, between 1 and 0. I expect real mathetaticians have already thought of this, of course; I'd be interested in looking into what they've found.
The bunkbed conjecture seems counterintuitive to me. Does it presume there isn't a pole at u,u' ? Wouldn't a pole at u,u' mean that u and u' are functionally the same point, thus both sides have identical probabilities, and thus by adding any other pole anywhere we increase the u v' probability? Wouldn't this also be the same for any pole that is at x,x' where x is only connected to v through u?
The poles are arbitrary; you can put them, or not put them, in any places you want. (Or even give them a random probability of being there.) I agree with you that any pole that _is_ there makes the points functionally equal, and so just makes the probabilities closer to equal. That reasoning suggests that the conjecture is simply true, though. 🤔
Have weaker forms of the conjecture been proven such as "if there's only one post" or "if you can only travel along a post once"? A conjecture I'll toss is that if at least 1 post exists along every possible path from u to v, the probability of reaching v or v1 will always be equal. Edit: I suddenly realize I've just been assuming the posts themselves never fade. As a followup, I think that if posts have a chance to fade, and there is at least 1 post along every possible path, the chance to reach v would always be higher, never equal.
According to wikipedia, the conjecture has been proven for "specific types of graphs, such as wheels, complete graphs, complete bipartite graphs and graphs with a local symmetry" I don't believe only traveling each post once would make a difference, since if you travelled a post twice you would have a loop in your path which you could just omit to satisfy the requirement. (Not quite sure tho, if I'm missing sth please tell me)
@@qedsoku849 Oh, so there can be multiple posts but you can use at most one? [Edit: I guess you have to use exactly one cause otherwise you never get to the top bunk]
@@qedsoku849 Adding the inability to traverse two posts in a path would make the conjecture simply false. The counterexample to that version of the conjecture is a "bed" of a single line segment, with non-fading posts connecting in both of the possible places. Let's say that single line segment's chance to fade is 50%, we then have: ...A 50% chance to being able to get from u to v (the chance that the one line segment hasn't faded) ...A 75% chance of being able to get from u to v' (the chance that at least one of the two possible paths haven't faded)
This feels calculus-esque where you add more and more subvertices and more and more edges, and the "limit" of all the connections (if you could take such a limit, despite there being no sense of "length" with graphs as far as I know) becomes a triangle in terms of solidness from the connections, like a web that could catch you. Thinking about why this counterexample works, I notice that the edges on each side of the bunk are extremely "sturdy" due to all the supporting "triangles" (instead tons of connecting edges. Each vertex only have on edge to another, but we use a triangle-like shape via subdivisions to allow for something close to connecting edges). Since they're sturdy, losing any of them won't affect the probability much, so it seems like removing a connection shouldn't really make it that much harder to get from u to v' than to v. But the fact that it flips, not just close to equal, is surprising. Maybe the sturdiness means that the upwards connection (supports can fit in easier with the triangles. There's less supports than horizontal connections, and if the sturdiness obtained by using both triangles from above and below is worth it, then it might be easier to go across by swapping top to bottom top to bottom than by staying just on the bottom. And since the spokes count is again low, it may be more likely that you have to stay on the stop due to every odd number of spokes you may have to stay on the top (in 1 spoke, you would want to go up and you can't do down again in some graphs, so staying up (at v') is easier than finding a way back.) I wonder if this will have applications to carpentry and sturdiness of designs? I wonder if this graph may be part of a larger series that causes the probability of the graph to increase to some limit, and they just found the first counter example where the inequality crosses but it could cross even more.
As a hobbyist game developer with a mathematical background I found this interesting as this idea of the probability of finding a link between paths comes up all the time when using procedural generation for mazes and you often have to find random seeds for the random number generator that will/won't let you make a fully connected maze.
@@ilprincipe8094That's an oversimplification. The independence of CH from ZFC really means that those axioms don't suffice to prove whether CH holds or not, because there are some models of ZFC where it holds and others where it doesn't. That neither proves nor disproves CH (or its negation), it only tells us that ZFC can't really tell because it doesn't sufficiently delimit the kinds of cardinals that can exist.
@@ashifarman4813 Hahahah, it's not really about being smart, it's mostly hard work 😅 You can learn quite a bit about the basics of mathematical logic and set theort from UA-cam and Wikipedia. Beyond that, A Friendly Introduction to Mathematical Logic by Christopher C. Leary and Lars Kristiansen and Herbert Enderton's Elements of Set Theory are both great!
Without an explanation of _how_ this counterexample managed to evade the intuitive understanding, I'm suspecting that the "tiny margin" by which the probabilities evaded the inequality is just rounding error. There's no way for humans to verify, with the graph being that large.
That for a very enjoyable explanation Dr. Bazett. If this passes review, I guess the bunkbed conjecture can be added to the growing selection of "true-ish" conjectures where the first counter example is huge and not accessible to brute force search. An overview of some of these might make a great series perhaps called "Why Mathematicians can't trust their gut."
Good video to threaten the kids with if they're arguing about who gets the top bed. Either they quieten down immediately or they're asleep after five mintues.
I feel like the specific subtype of the problem, whether either an edge or its corresponding edge must be removed, is pushing the probabilities a lot to be closer together. A bad layout on your starting plane raises the chances of there being a good path on the other plane. Even more so if you, instead, selected n random edges across both networks and removed those.
Im not much of a mathematician, but to my dev brain this sounds like a serialization problem. The process of "how can we represent complex data as a atring of 0s and 1s", is typically a matter of flattening high-dimensional data on a lower-dimensional space. For example a (0-indexed) 10x10 2d array can become a 1d array where the index of (x,y) is located at 10x+y. Hopefully my intuition correctly that the solution in this video was essentially to "encode" the hypergraph as a lower-dimensiinal graph in such a way that it preserved key characteristics.
Really insane!! But it seems almost accessible for a "normal person" with a computer to check just that specific counterexample. Oh, maybe if we go from 1204 to, say, 12040, the difference in probabilities will grow? Anyway, this triangle subgraphs are key. Once they are solved, the rest can be probably brut-forced. Or not, whatever, we'll see. Very exciting and inspiring. I am not a mathematician, so I don't care that even if I can solve it (unlikely), it is going to be reinventing the wheel. Thanks a lot for the film!
4:45 - no... that does'nt seem true... the same way you can think that's easier to walk in the bottom graph you now have the top one to "cut away" and connect more things. but then all of your connections and the things you jumped to are definetly conected before you realize if it's pair can be reached. so it only makes things worse. but trully, because I can flatten this graph and just get another bipartition to call one bottom and one top, it creates a simmetry that you shouldn't be able to prove either way is better than another.
Could that tiny difference not be attributed to machine error? It would need a supercomputer for sure to validate it. I love the idea of transposing the points within the hypergraph. I feel like this could be tested using a physical electrical analog...hmmm
I don't understand how you can make a hyper bunkbed. Wouldn't the spokes necessarily have to connect to two points on one side and one on the other and thus not be symmetrical?
This is awesome, I love this video. If I may ask a couple questions: 1. Is there any probability that the posts will fade away, or are they considered immovable? 2. Is there a possible breakdown of the probabilities in the counterexample as there was in the original basic example (e.g. what is P(u->v) and how does it compare to P(u->w) and P(w'->v'))?
I think the initial example you gave was too obviously gonna follow the conjecture. If you had put the post in another spot that wasn’t located at the symmetrical v node, then it’s not too clear why the probability is still greater.
Percolation theory has many practical applications. But was the bunkbed conjecture of any practical interest or importance, or just something that gained attention (among graph theorists) because someone gave it a cool name? If there are practical applications, it still seems to be a very good approximation, which might usually be good enough.
What are the probabilities if you replace each hyperedge of the hypergraph with just a triangle? Then with a version of the special object with just one new vertex inserted? Then two? I'd be interested to see how that value changes as the number of new vertices in the object goes up.
I think it's a mistake to calculate specific numbers. I was able to show in my head that the hypergraph counterexample 'works', but I did that by ignoring a lot of possibilities where the probabilities would certainly be equal, and looking only at the cases where the probabilities would be different. After ignoring all the equal cases, I found one configuration where only the u->v path could be connected, and two paths where only the u->v' path could be connected. I'm working on the hypergraph-reduced-to-triangle-graph version, but it's obviously harder.
Okay, I'm stopping here, bogged down in in the details; but I see an imbalance that makes it pretty likely that the hypergraph map is still a counterexample if you convert all the hyperedges to simple triangles, but keep the "each pair of edges between top and bottom has one edge in the pair fade and the other not" rule. To be clear, I don't think it's possible for the Bunkbed Conjecture to ever fail while keeping the probabilities completely independent.
I am sorry for asking this question, but when I started uni, I was really passionate about all kinds of math. But now at when I have finished university and started working as a real engineer (and basically not using any math right now), I feel like I can only appreciate complicated math, which is used to solve real practical problems. So, why would anyone care about these bunk-bed graphs and then randomly removing some lines in the graph and then checking is you can go between four points? Seem like very arbitrary with 1 billion equally arbitrary combinations.
You used informal language to explain the intuition behind why it really seems that the conjecture "should be" true. I was hoping for another explanation like that at the end of the video, like explaining where that intuition goes wrong, what adding in those 1204x6 nodes is really doing, etc. I guess it's still early days. Just being able to understand the problem and even comprehending the counterexample is a bit of a popular mathematics miracle.
the hyper link is basically a structure that holds infinite paths from one spoke to the rest on it, in this example its a trinagle, by adding the structure below you simulate the trinagle just in low resolution, if they added more then 1204 of this structures the margin would probably increase. 6 times is the number of new spokes for each given spoke
And this isn't just rounding errors or approximations? Floating point arithmetic only goes so far, and that is a lot of calculations with a very small difference in probability.
You probably need to explain/demonstrate why the number of possible graphs increases so rapidly with increasing numbers of nodes and edges. It is much faster than most people intuitively expect.
Seen from 6:53 It would be more unequal if going from U to V can be done going through V ' when needed. As this is not the case, I wouldn't put my money on the bunkbed conjecture. (edit) That's not fully correct, but there must be a class of two level (1 - 2) solutions available to V ' that are missing for V, while it is still not sure 1 - 2 - 1 solutions are allowed or needed.
I guess the trick is that you're not necessarily just going to an intermediate node on the bottom bunk, going up to the top, then going to v'. You could go up, down, then back up (repeat as often as you like). Not sure why that would make a difference, and as a software engineer I'm definitely aware that accurately computing such a small number is tricky and could easily have went wrong somewhere.
So is the idea that these spokes are essentially emulating the probabilities of the connections on the hypergraph within a small enough margin that it maintains effectively the same probabilities?
if the difference is 10^(-40) magnitude, does the paper account for errors in computation or did they calculate exactly first mathematically then to give a concrete value they used the computer and it gave 10^(-40)?
My understanding is once the counterexample was found, the computer CAN be fairly precise at computing our the relative probabilities even though the difference is tiny. However in the earlier stage of running machine learning algorithms to try and discover the counterexample, they were doing monte carlo simulations and those were much much less imprecise to find an example like this even if they could have dealt with the huge size (which they couldn't)
At 7:00 I think I can already see where the conjecture falls apart: if you add more posts, doesn't that create potential for detours? 8:00 I'm not sure why you would need thousands of pages for this detour effect to start being relevant to the conjecture, so maybe I'm wrong
Sorry but I think you need to explain how they went from the alternative to the regular conjecture? Constructions like the spokes are fairly common in graph theory. I don't see how it was transitioned to the regular conjecture and how that also does not reject the regular conjucture for hypergraphs?
It's great that we live in a world where all mankind's other problems have been solved, so that our most brilliant minds, just the cream of our academic elite, can set themselves against conundrums such as this. I feel like now that we finally know, everything has changed, and I can't wait for all the rich rewards this newfound knowledge will no doubt bring to humanity, and the world.
It's probably my background in OR, but I wonder... the graph counterexample, as presented, does not seem like a graph that would arise in most applied circumstances. I'm not debating the results of the paper, or saying it's not important... but we should also consider how useful something is in most cases, as opposed to finding a single counterexample and saying that the conjecture is 'wrong', in the sense of 'useless'. I mention my background because of the Simplex vs. Elliptical algorithms for solving linear programs. The Simplex algorithm, by complexity theory, is not in P, whereas the Elliptical algorithm is. However, the cases that push Simplex out of P are (AFAIK) solely degenerate and never show up in practical circumstances. Given that under those circumstances, the Simplex algorithm is far more computationally efficient than the elliptical algorithm, the Simplex algorithm is the one used. And yes, I realize that I'm talking about computation as opposed to truth, which is what the paper shows (specifically, the falsity of the conjecture), but we shouldn't as a result discard the bunkbed conjecture, as it's probably true in very many graphs, and would be helpful to determine other things about those graphs where it's true. I guess what I'm asking for is for an algorithm to determine whether or not the bunkbed conjecture is true for a given graph. Which does lead back to computational efficiency.
(From viewing up to the problem statement) I have an intuition for why the conjecture might be false. It "feels like" for any path from u to v', there should be another path from u to v involving fewer bedposts, so getting from u to v "should be" easier, but that's not necessarily the case. Each (loop-free) path that stays entirely in the lower "bunk" can visit u only once, whereas a path that transitions from the bottom bunk to the top bunk can loop back through u'. Therefore in a graph with exactly one bedpost, there can be more possibilities for a path from u to v' than for a path from u to v, specifically because the former set contains paths from u to u' to v', whereas the latter set does not contain paths from u to (either u or u') to v. That's pretty far from a counterexample, but at least it gives me a reason why my "feels like" argument is wrong and how it could in theory be easier to go from u to v'.
@@Tzizenorec That's true, but I'm not sure I understand the relevance -- it's not true in the case where there is only one bedpost, which is the setup I'm considering at the end of message, because a loop-free path can't use that bedpost twice.
@@zygoloid Oh, I thought you weren't relying on any actual "loop". The answer to any solution involving an actual loop is: "You can remove the loop without changing anything". Anytime you find yourself crossing over your own path, it means that the detour you just took could have been skipped and you still would have wound up in the same place.
@@zygoloid But why are you concerned about "can't use that bedpost twice"? You would only _want_ to use that bedpost twice if you were trying to make a loop.
Is the example at 5:12 calculated given that the post is always there? I've never studied graph theory, but I find it strange that the final probability doesn't depend on P(w -> w') at all. Is that just a given for this particular calculation (if, say, we are doing a single run of a monte carlo simulation, and for this run, there is a post at w -> w'), or does it exist there in *all* these calculations as a given?
You: The conjecture is just so intuitive. (Presents the conjecture.) Me: Whoa, wait. Why would you expect that? Sure, I could see graphs where that inequality would hold, but I'd expect others where it's violated. (Finishes watching the video & the counterexample presentation.) Yeah, I think there are much smaller counterexamples. Your example of why the conjecture should be expected has a single bedpost and so your u to v' probability is a product of probabilities, but if you have several bedposts, you're going to have a sum of products. If the path from u to v is fragile, a lot of bedposts can give you more flexibility in circumventing the fragility on your way from u to v'.
i just burst out laughing when he said how many posts the counter example has and now my coworker is asking why I’m laughing, I’m fucking panicking I have no fuckin clue where to even begin explaining why I’m dying laughing rn XD
I'm not sure I understand the conjecture correctly, because it seems to be obviously wrong. Let's say we have a triangle, and again, probability of one edge fading away is 1/2. Then if we choose two points, u and v, then with 1/2 probability the edge between them stays, and with another 1/2 probability it fades away, and for them to stay connected the other two edges have to stay there, which happens with 1/4 probability. So, total probability is 1/2 + 1/2*1/4 = 5/8, same as your answer. If we have two triangles, with all three “posts” connecting top and bottom triangles, then for u and v' to NOT be connected, we need AT LEAST that edges u-v and u'-v' BOTH disappear (otherwise we'd have at least one of the paths u-u'-v' or u-v-v'), which happens with probability 1/4. So, probability of u and v' staying connected is AT LEAST 3/4, which is more than 5/8.
1. Posts are optional; you don't have to have a post in every possible place. 2. Both trips are done on the second graph (the full bunkbed). So the connection from u to v can use the posts, just like the connection from u to v' uses the posts.
@@Tzizenorec 1) I understand that I don't have to; I choose to. 2) O...K, but it wasn't spelled out very well. It seems, if we have only one post, the probability of u and v being connected on the full graph is the same as them being connected on one half. Still, kinda weird to calculate that probability before even mentioning what a bunk bed is.
So frustrating! I almost proved it. I tried replacing the vertices in question with 100 spoke graphs, then 200 spokes, 300, all the way to 1200 and then I gave up. Only FOUR MORE!
I wonder if they could have done all the boolean math on the GPU by representing the values as binary and using masking. A super cluster of CPUs seems overkill when you can run millions of simple boolean operations in parallel on an average graphics card.
I'm still sorta baffled how the bunkbed conjecture could be false. (Granted, I haven't read the paper yet.) The bottom and top graphs are in a sense probabilistically "the same", right? So the probabilistic distance u-v should be the same as u'-v'. So it seems like switching from one graph to the other can never gain you any ground, because the new vertex you switch to must be equally far from its target as the vertex you switched from was from its target. Is there a rule I've missed, or something? Regardless, I'm inclined to wait and see if peer review uncovers anything; good luck.
Hi everyone! I’m one of the authors of the paper being discussed.
Dr. Bazett, thank you so much for the fantastic video! It’s as clear and thorough an explanation as we could’ve hoped for, and I really appreciate how quickly you put it together.
Just a small note: the author of the hypergraph paper’s surname is Hollom.
P.S. We are not changing the title to "Debunking the bunkbed conjecture"!
Oh thank you that’s so cool! Loved the paper:)
Too bad about the paper name... Ask ChatGPT for help with paper names next time.
You should. 😂
What about "it's just a bed now :("?
> P.S. We are not changing the title to "Debunking the bunkbed conjecture"!
Ooof. It's okay, you're allowed to be wrong about some things.
The fact that this paper isn't titled "Debunking the bunkbed conjecture" is such a huge miss
ya that's a fail:D
The bunkbed conjecture has been "debunked"
would also make interesting video titles for this
@@DrTrefor You can still change the title 👀
@@csilval18peer review should reject JUST for that reason!
@@DrTreforChange the titleeeeeeee!!!!!
Wikipedia editors on their way to change "is" to "was" for the bunkbed conjecture
Not without a disclaimer, at least until the paper is peer reviewed.
@@tomhejda6450Rest assured they have the edit ready and are just waiting to click on the Submit button the moment peer reviews come out.
A preprint showing a counterexample to the conjecture was posted on the arXiv in October 2024 by Nikita Gladkov, Igor Pak and Alexander Zimin.[2]
they did this already
Misery was
This is not how it works, Wikipedia editors won't change tenses without citing a reliable source, it is not an instant process and you have to wait for a reliable source to be found and create the citation, mass edits are disruptive and it is better to crowdsource it to ensure consensus.
Putting failed attempts into papers is really an honorable deed.
Ya gotta respect that
@@DrTrefor I do. Nothing is worse than putting years into research and hearing in a side note on a conference that this method had been tried and failed a decade ago, and nobody bothered to tell someone.
Man if I started publishing my papers I would be the most honorable person in the world.
Definitely, failed experiments are important.
"Show your working" was drilled into me by my secondary school maths teacher. (Who was able to retire on a high note, after her whole sixth form class passed A-level Maths a year early and Further Maths the following year with no grade worse than a B.)
It's good advice in an examination, because even if you make a mistake you can still pick up partial marks, if the examiner can see you are doing the right operations but with the wrong values (e.g. because you missed a carry, or looked up the wrong value in your book of tables -- yes, I'm that old; but you could just as easily press the wrong key on a calculator or computer).
But even in an academic paper, a bit of "I already tried this and it didn't work" might prove useful to others working in the same field.
Sir Isaac Newton was able to see further than others by standing on the shoulders of giants, but there can sometimes be a decent view from atop a pile of failed experiments!
I just explained the problem to a nine-year-old. He paused for a while, gave it some thought, and then replied: "It's obvious, you just take a hypergraph and replace each triangle with 1204 spokes connected to a center, the probability is about 10^(-4000) greater. I wonder why it took them so long to figure out"
That’s so crazy, my nine year old said the same thing.
As Albert Einstein definitely once said, "If you can't explain it to a six year old and then let them disprove a standing conjecture from the 1980's, widely thought to be true, you don't understand it yourself"
"... and the young man's name: Albert Einstein"
Guys I'm not sure this is true.
replied*
The funny thing is that "counterexample with 7222 vertices" seems large, but it's actually rather small and often counterexamples are more in the region of "we found something to the order of 10^5,000,000". :D
Or ask a graph theorist to continue the sequence 1,3,...😂
There are 2^26,075,031 possible graphs of 7222 vertices. It is extremely rare counterexample for the given conjecture and one that is impossible to find through naïve brute force search alone
@@mouadlouahi9985impossible with that attitude for sure!
@@juro17 stop right there buddy
@@georgplazI'm not a mathematician, but from my experience, I would often find counter examples when working on graph theory theorems within graphs that contain at most 10 vertices. I've only had 2 situations in which I had to start randomly generating larger graphs to find counter examples and even then they had at most 50 vertices. 7000 is a lot.
Using neural networks to solve graph theory problems is so funny. The graph was defeated by another graph.
funny story, they went back and checked that neural network they were using was very similar to the counterexample and this is all a lie I just made it up
it's the superior graph!!!
I love the irony!
If I've learned anything about CS, it's that everything is a graph problem if you try hard enough.
@@radadadadee You are what you compute.
It's a wonderful thing about mathematics that you can have something that everyone looks at and thinks, "This seems like it should be true. It's gotta be!" And then it's not.
Yes and no. Sure everyone says it’s true but there’s a very strange way where it IS false but it’s in hyper complex multidimensional system, that no longer resembles the original conjecture
then it might as well be true
@@alandouglas2789If the assertion is that it happens every time, then no it's not true, nor it might as well be.
I have to admit this is over my head... I look at this, and don't see any magnitudes involved, it's probability. That it takes such a large counterexample makes me think that there has to be a simplification there, a reason. a wagon wheel topology means one point approaching a supergraph ('smeared" to me) number of vertices. Where things without magnitude tend to break is at infinity, like a wagon wheel with an infinite number of vertices bunk-bedded with another wheel of infinite vertices, you get an indeterminate comparison of infinities.
If I have 3 vertices on each spoke (left, right, hub), there's a 1/3 chance of a vertex decaying goes to the hub, not changing the probability (still infinity). In 2/3 of the cases, a 3 vertex point goes to 2. You could compare that case with a graph with points that has a more even distribution of vertexes. Why am I thinking significant digits (e.g. 0,1,2,3, infinity for a wagon wheel, or (0,1,2,3,4,...)
@@alandouglas2789you’re wrong. The conjecture was that for any graph it’s true. There’s a graph for which it’s not true therefore it’s not true for any graph.
Simple as that. There are limits you can put on the array that will make it so it is true, but knowing it isn’t always true is helpful, because complicated graphs are still graphs, and if this was true it would apply equally to graphs you find confusing and graphs you find perfectly reasonable.
I just love how concise the name of the paper and especially the abstract are. It's just "we did the thing - here you go". In my area (NLP), abstracts are usually very long, so this is nice to see.
Also, I would usually advise against talking about papers that haven't passed peer review, but the nice thing about a proof from a counterexample like this is that you can just take the counterexample and verify it yourself.
Banger "just make every triangle into its own graph with 1000+spokes" angle from these guys
lol ya loved this:D
Does that even converge for infinite spokes?
@@iwersonsch5131Didn't think I'd find the SM64 ABC microcelebrity (nanocelebrity?) Iwer Sonsch here
@@Kebabrulle4869it’s a common name
@@iwersonsch5131 what converges to what
the idea behind why the 1024 spokes works is really cool. In order for the center to not connect to a corner, but for the corners to connect to each other, every single red line would need to be intact, without any of the blue lines being intact, which is exceedingly unlikely. this makes the likelihood of the red line connecting to a bunch of other points really large relative to the likelihood that the red line stays intact on it's own.
This makes the probabilities different, because in pretty much any case where a point on the red line doesn't connect to it's neighbor over a blue line, it almost certainly also doesn't connect to it's neighbor over a red line. The setup of the graph basically approximates making the probability of red disappearing higher than blue disappearing, because the cases where both lines disappear or both stay don't really assist either side in raising it's probability.
Really happy to see this! I saw this paper posted on r/mathematics on reddit, and I i tried to read through the paper, but it was difficult for me to really understand it. I did take graph theory in college, but yea, getting through the paper was tough, so it's awesome to have it explained. Thanks Dr Bazett!
Ha, you are like exactly my targeted audience for this video:D
@@DrTreforwhat about me? I just graduated high school. Yeah I barely understand anything here but I got the jist. Thank you sir!
Now I wonder what the smallest graph is which disproves the conjecture
We really don't know! Because their approach was to keep adding more and more spokes until you got a counterexample, it is very plausible that a smaller counterexample exists without such a brute force trick.
If the hypergraph is optimised, I don't know if finding a more optimal graph is feasible.
If there is still a way to optimise the hypergraph, then maybe there's a smaller graph?
@@janzwendelaar907 Well, the hypergraph is optimized for making the smallest number of vertices in the original hypergraph. But given that you have to insert orders of magnitude more extra vertices than there were originally means that there is massive potential for finding a better solution with a hypergraph that is instead optimized for having to insert less extra vertices.
My conjuncture is that this is the smallest graph. I let Igor Pak disprove my conjuncture. When he does, I'll come up with another conjuncture for the smallest graph. At the end of this cycle we will find the true smallest graph.
And now I wonder how much likely you can reach the top bunk than the bottom. I wonder if you can do better than 10^(-4000). It seems unlikely but it might be the case.
I really like when advances in mathematics have exposure. It would be cool to see more videos like this.
This is actually my first ever "breaking" math video (the paper came out on friday and I was like omg I have to make a video) but I actually hope to do more. Finding the stories that are both interesting AND accessible to a UA-cam audience isn't always the easiest, but I think it is important.
Yes more math news videos would be amazing! Perhaps especially for people who once did math in university but now are doing something else, and aren't part of academic circles anymore. A great way to stay excited about math :)
Graph theory is one of the few branches of modern maths that have conjectures explainable to a layman. Some other are elementary number theory and geometric problems in optimization (like sphere packing, moving sofa)
Still, most current conjectures require years of studying maths to get at least some understanding of their statements
Not to mention highly specialized fields such as algebraic geometry or deep projects like Langlands program
Now it's just the bed conjecture
The secret to math turned out to be that "cool S" that everyone used to doodle in class.
the "Stussy."
The difference in probability they found is 10^(-6509) by the way!
very small (in case that there are people that don't know how to interpret the negative power, here is it written)
1/(10^6509) or 1/100000...(6500 times)...000
very small
How do you compute (or even estimate) a probability that small on a computer? I wouldn't have thought that numerical methods like monte-carlo would be good enough to get error bounds that small in a reasonable computation time.
@@QuantumHistorian
Im also interested in that. maybe the conjecture is true after all
....AKA "effectively zero".
@@Rumpael is the reviewer of these kind of counterexample papers gonna have to validate the result? (not only reviewing for the formatting, etc)
Your restriction to the case of a single "post" is somewhat misleading. The bunkbed conjecture is known to be true if the number of posts is 1 or 2. Indeed, in the counterexample provided in the paper, the number of posts is 3.
Ya that's fair. I was trying to search for the simplest example to animate for UA-cam where we could actually explicitly compute the probabilities and see our example satisfied the conjecture. So please treat that example only as helpful for illustrating the concept, not as the right level of complexity for actually finding a counterexample.
Interesting. I was guessing the counterexample would have a ton of posts like >=20
This just furthers my position that 3 is the weirdest number. Everything gets crazy when you get to 3. Random walks from 2 to 3 dimensions, the number of platonic solids in 3 dimensions, k-satisfiability goes from easy at k=2 to np-hard at k=3 and then all higher k's being reducible to 3-sat, orbits only working in 3 dimensions, and now bunk beds becoming weird with 3 or more posts...
@@sykes1024Yes! I've long thought this since learning about the complexity differences between 2-SAT and 3-SAT in 2005. Then there's also 2-body vs 3-body, I seem to remember something about tiling in 2 vs 3 dimensions, and so on. I wonder if somebody keeps a list of these (and related) threshold effects.
@sykes1024 TREE(3)...
Trying to disprove a Conjecture can help you understand them if you fail too
ya absolutely definitely worth it for gaining intuition
Indeed. I've learned a lot about Collatz conjecture by optimizing a program that finds counter-examples!
On a side note, I've even independently discovered a theorem about GCD of 2 arbitrary Mersenne numbers, by studying how Stein's algorithm works. After doing some research, I've found a Math StackExchange post proving the identity holds for bases other than `2` (`b^n - 1` instead of `2^n -1`)
This realm of uncertainty, this need for the null hypothesis, is why we still refer to 3x+1 as "collatz conjecture" not "theorem".
Do not count your farm birds before they start creating graphs on the surface of their shell
If we have a v v' "post", then the probabilities are the same as if we can reach one, we can reach both.
If v/v' can only be reached by a node x/x', and we have a post x/x', then the probabilities are the same as the probability that vx and v'x' has fade is the same.
If we miss lot of post, this means that to reach v' you need to reach one post, AND to have a path u post + post v'. While u v doesn't have this constraint.
If we have lot of post, then we can "jump" other/under missing links. At any point of the graph, the probability to get to v/v' without any should be the same as the decay is the same for the bottom/top graph.
Adding post should only increase the probability we can reach the top graph (+ the probability to jump under/over a missing link, but both is true for top/bottom graph).
I don't see how the conjecture can be wrong with the given issue...
Just hopping the paper didn't had floating point computations errors xD.
I'm picturing a situation where you have likely breaks (single links) and unlikely breaks (1000 connections). Arranged so you're likely to need to take posts in order to get around breaks; three posts for an up-down-up sequence but no 4th post to bring you back down. And then a ton of fiddly detail to make it work right.
@@nickdumas2495 I do not think your reasoning works as you are not forced to take the post. At one post your are both top and down.
@@Neckhawker The idea is you are forced to take posts due to the gaps. And with an moderate sized odd number of posts to take, that helps the prime destination. Clearly not much, but enough apparently.
@@nickdumas2495 You are making an assumption which isn't true as the decay is random. From the last post, you have the same probability to be able to reach the final destination from the top and the bottom.
Exactly my thinking.
For every potential path that goes past/through at least one post, the propability a path exists from the last post-having vertice should be the same on top (v') or bottom (v) (as there is no reason for it to be different, top and bottom have the same probabilities) - and if one end of that post can be reached, both can, so either both options exist until this point or none.
So the probability to reach v should be the probability to reach v' plus the probability that the only possible path does not go past any post
You are a great teacher! I struggle with math and you made me understand this conjecture. I got a little lost with the hypergraphs, but I got the gist of it. Thanks so much for making me not feel dumb.
what bugs me about this explanation is that it does not give me the kind of understanding that makes my intuition shift even in the slightest way.
i have no idea whether this implies that it is anywhere near possible to have a graph where the h -> v´ case is twice as likely as the h -> v case, for example.
it seems like it would have been helpful to explain what makes the hypergraph work as a counterexample. like this its just "believe me, this thing somehow does the thing we are trying to understand" - and then another step where the details are unclear (i can guess that the number of spokes leads to a greater similarity to an actual triangle, but why that kind of similarity is even helpful stays unclear to me)...
I mean, I'm not sure if an intuitive understanding is easily explained. This sounds like it might be inherently unintuitive.
That's how you disprove though, you just need to give one counterexample. Disproofs usually aren't great for intuition
yeah I kinda agree. like the entire video was about finding a counter example... and then he just says "trust me bro, this is definitely a counter example".
I kinda wish an attempt at all was made to explain it
@@JavedAlam24 i am not interested in understanding why "the bunkbed conjecture is wrong", so the "disproof"-thing does not exactly deal with my issue.
i am interested in understanding why "this graph has a greater likelyhood for h -> v´ than for h -> v".
@@davejacob5208 Sometimes (often, arguably) getting an intuition for why something works/doesn't work in mathematics is far harder than showing the thing itself, and doing so may not have been in the scope of the video. But if you're THAT curious, I imagine you can read the original paper to see why adding those spokes lowers the relative probability.
This is one of those rare examples where I can honestly say it does not seem the conjecture is intuitive. Glad to hear it was proven to be incorrect! Great job to the authors! 👍🏻
The way they choose to use the hypergraph to attack the problem is really interesting.
From a human perspective it makes a lot of sense that if you replace a solid triangle with a lot of "closely packed" edges you can expect a similar result, but you really have to think carefully as to why this method had a chance.
The way I see it, every triangle on each floor is connected to one post and two non-post vertices, and the fact that helps this hypergraph beat the generalized conjecture is that severing one triangle means two non-post vertices disconnect for the price of disconnecting one post, so what they needed is something that copies this behavior.
So each triangle is replaced with N spokes connecting N extra vertices to the bed post, and connected in a single file between themselves, and the outer extras each connect to one of the non-post vertices in this triangle we're replacing.
I'll call blue edges spokes, red edges arcs.
So, take the leftmost vertex. If its spoke and arc connections both decay, then this vertex is both disconnected from the post and from the rest of the vertices in the ring.
If the first arc stays connected, but the second arc and the two first spokes decay (less likely because we require more decays), then again the leftmost vertex is disconnected from both.
Even less likely, if the first 3 spokes and the arc from vert 2 to 3 are all lost, we get the same result.
Each of these possibilities is not very likely, and diminishingly so, but we just need one of these to occur, so the total probability is pretty substantial.
Have these occurrences in both extreme vertices, and the new spokes-ring manage to do the job of a triangle.
I’m sure it’s mentioned in the more formal definition of this problem, but the “posts” are not subject to bond percolation, right?
That's right. Take some subset of the vertices and fix posts there THEN do bond percolation on the bunks.
@@DrTrefor Umm, Wikipedia article says that "the probabilities assigned to the posts can be arbitrary. A random subgraph of the bunkbed graph is then formed by independently deleting each edge based on the assigned probability." And I kinda assumed that the posts have non-zero probability of being deleted. I haven't read the paper, or re-watched the video (yet). But surely the posts are subject to percolation, aka. deletion?
@@nikonyrh If it can be arbitrary for the posts, then even if you force it to be nonzero for the posts, then we can just set the chance of removal for the posts to be like 10^‐10000000000, so that it wouldn't change the result if a counterexample like this exists
@@diribigal Ahaa yes, so the interesting part is that "P(u v')" can become GREATER than (not just equal to) "P(u v)", thus being a counterexample (duh). Thanks!
I suppose counterexamples become larger or impossible, as the probability of deleting posts increases.
I wish you were able to elaborate a bit on the intuition. Basically show why there is a small probability in reverse.
I don't know if that is even possible. If there were a intuitive explanation, then probably would not have been a conjecture for truthfullness in 5he first place
If I were to have a guess, I would say that it probably hinges on 3 factors (though I don't know why they come together in this hyper-graph with the 3 posts, but not the "hyper-edge replacement structure" at the bottom of the screen):
A: There is a super tiny probability that you can go all the way along the red edges in the "hyper-edge replacement structure" to another white vertex of the (hyper-)graph.
B: If you can just go along _enough_ of the red edges, then you're basically guaranteed to have _some_ blue edge along which you can go to the "hub" vertex (which is green and has a post in the (hyper-) graph.
C: (Assuming that none of the red edges goes through the whole way.) The longer the red edge is that you came from, the shorter of a maximum red edge you can have on the same level that stretches back from the opposite white vertex of the hyper-edge; While there isn't the same restriction for neither the red edge stretching back from the opposite white vertex, nor other blue edges coming out through the other pair of hyper-edges on the same post.
But:
D: If you can do this to swap the probability on the bottom hyper-edge then it would preclude you from doing the same swap on the right hyper-edge.
E: If that was all there was to flipping the probability, then that wouldn't explain why you can't do it with only the "hyper-edge replacement structure"; nor why it works for a hyper-graph like this where the number of red edges is *even.*
@@Pystro Basically, I think your first three points come together to the observation that the probability that there's a path in the bottom bunk that contains none of the green vertices is very small. The paths that do contain green vertices also can be slightly modified into paths through the top bunk. The question is then, what gives the top bunk paths the miniscule advantage that they have?
Informative video, there's just one detail that I think is missing: the vertices they're trying to connect in Hollum's counterexample are the top and bottommost.
I don't know why I got this video on my feed, but I watched the whole thing, the explanation was clear, understandable, interesting and it was fun, so I'll subscribe!
Welcome!!
great explanation, I appreciate that you sacrificed the right amount of detail/generality to convey the key aspects of the problem :)
I don’t get why P(uv) would ever be more or equally likely than P(uv’), there’s 2/4 edges that connect u to v after the post, where as there’s 3/4 edges connecting u to v’, so when removing 50% of the edges, there’s a higher chance that v gets disconnected than v’.
I wish you would explain a bit more why replacing the hyper edges with that structure leads to the result. Something about the number of edges leaving a big possibility of there still being a connected path?
My guess is that if you replace the hyper-edges with an infinite number of spokes then the maths reduces to the same behaviour as the hyper edges, so the more spokes you add the more closely it matches. If this is right then I'm thinking that the existence of a counter-example for hyper edges kind of proves that there will be a counter example for 'normal' edges, you just need to keep ramping up the number of spokes. Er, maybe...?!
@@Aaron628318the hyper edges dissapear all at once though, right? But these spoke things as it tends to infinity have a really really really high chance of having a path so doesn't this mean that they behave oppositely?
This is really cool! Im deathly curious how they arrived at replacing the hyperedges with wheels with 1204 spokes. I bet it has some property m or something, but i really like the idea they kept plugging in numbers to see if they were getting closer like tuning a radio.
In the search for aperiodic tilings of the plane, one of the first examples required > 20,000 tiles. Now it's down to one tile (the hat / spectre). Looking forward to seeing the size of known counterexamples for the bunkbed conjecture shrink ... though apparently it's been proven that you need > 8 vertices ... maybe > 30?
super high quality broski, deserve more clout
Igor Pak! Gosh dang, he has been putting out so much stuff recently. An absolute genius of a guy
As soon as the hypergraph disproved the Alternative Bunkbed Conjecture, the proper conjecture's days were numbered. A hypergraph is basically a normal graph where the number of vertices in each triangle has gone to infinity, so obviously there must be 'some inflection point where there are enough vertices in a normal graph of similar structure' that yields the same result. It's kind of like reverse integration, going from infinite parts to finite parts.
"What if they're all wrong" really sounds like the hacker ethos. "A system unbreakable? You'll see as I'll try and break it."
Low probability, large payoff.
@@trucid2always bet on Hakari
All these years, they've been asking if we could go to v-prime, but nobody was asking if we should. Mathematicians will be the end uf us all.
14:00 Saying "what if they are all wrong" is a contradiction because "all such conjectures are wrong" is itself such a conjecture so it must be wrong but when you are saying it you assume it's not wrong.
The what makes it a question, not a statement
Incorrect. It isn't self- referencing. If the conjecture is "all such conjectures are wrong," and 'such conjectures' refers to conjectures commonly thought of as true without having been proven yet, the conjecture itself would not be included, since it isn't something commonly thought of as true. (Which is obvious since people still believe that at least some of 'such conjectures' to be true, therefore they can't believe that they are all false.)
We get it, you understood Inception.
Question about the problem statement with more than one “post”: are you allowed to jump back down and back up again between the levels of the graph? Or are you restricted to staying at the base in the first case, and making at most 1 jump in the second case?
You have to be allowed to jump up and back down. If you couldn't, then the bunkbed with a "bed" of a single line segment (and two posts, for 4 edges in total) would be a counterexample.
11:00 this feels like the opposite of a generalization, since this should be eqivalent to a 3-cyclic connection, connecting those 3 points, with a normal graph
you missunderstand, the hyper link holds infinite paths between one point to the other in the structure, triangle in our example.
@@MrDragonorp what's the difference?
@@Emma-i9x In a 3-cycle one (or two) of the edges can fail independently, whereas the hyperedge is all-or-nothing, which naturally affects the probabilities
With the initial conceptualization of edge percolations you gave, now I'm wondering about an alternate Graph setup where, rather than decaying instantly, each edge has a scalar value, which one might consider its "strength." Each percolation could instead simply reduce the strength, but the edge is still present until that strength reaches zero. I wonder what kinds of explorations you could do with _that._
There's ways you could tweak it further; like constraining what values the scalar can have, having different categories of traveler that require a certain value to cross an edge, or even using the scalar as a _probability_ that a traveler could cross the edge, between 1 and 0.
I expect real mathetaticians have already thought of this, of course; I'd be interested in looking into what they've found.
More math content like this!!
Its really chill listening to some math that isnt a lecture LOL
Idk why.
The bunkbed conjecture seems counterintuitive to me. Does it presume there isn't a pole at u,u' ? Wouldn't a pole at u,u' mean that u and u' are functionally the same point, thus both sides have identical probabilities, and thus by adding any other pole anywhere we increase the u v' probability? Wouldn't this also be the same for any pole that is at x,x' where x is only connected to v through u?
The poles are arbitrary; you can put them, or not put them, in any places you want. (Or even give them a random probability of being there.)
I agree with you that any pole that _is_ there makes the points functionally equal, and so just makes the probabilities closer to equal. That reasoning suggests that the conjecture is simply true, though. 🤔
Have weaker forms of the conjecture been proven such as "if there's only one post" or "if you can only travel along a post once"? A conjecture I'll toss is that if at least 1 post exists along every possible path from u to v, the probability of reaching v or v1 will always be equal. Edit: I suddenly realize I've just been assuming the posts themselves never fade. As a followup, I think that if posts have a chance to fade, and there is at least 1 post along every possible path, the chance to reach v would always be higher, never equal.
According to wikipedia, the conjecture has been proven for "specific types of graphs, such as wheels, complete graphs, complete bipartite graphs and graphs with a local symmetry"
I don't believe only traveling each post once would make a difference, since if you travelled a post twice you would have a loop in your path which you could just omit to satisfy the requirement.
(Not quite sure tho, if I'm missing sth please tell me)
@@cimadev I meant the inability to go down through a second post, admittedly I worded it poorly.
@@qedsoku849 Oh, so there can be multiple posts but you can use at most one?
[Edit: I guess you have to use exactly one cause otherwise you never get to the top bunk]
@@qedsoku849 Adding the inability to traverse two posts in a path would make the conjecture simply false. The counterexample to that version of the conjecture is a "bed" of a single line segment, with non-fading posts connecting in both of the possible places.
Let's say that single line segment's chance to fade is 50%, we then have:
...A 50% chance to being able to get from u to v (the chance that the one line segment hasn't faded)
...A 75% chance of being able to get from u to v' (the chance that at least one of the two possible paths haven't faded)
@@Tzizenorec Ah, I see. Thanks for the counterexample.
This feels calculus-esque where you add more and more subvertices and more and more edges, and the "limit" of all the connections (if you could take such a limit, despite there being no sense of "length" with graphs as far as I know) becomes a triangle in terms of solidness from the connections, like a web that could catch you.
Thinking about why this counterexample works, I notice that the edges on each side of the bunk are extremely "sturdy" due to all the supporting "triangles" (instead tons of connecting edges. Each vertex only have on edge to another, but we use a triangle-like shape via subdivisions to allow for something close to connecting edges). Since they're sturdy, losing any of them won't affect the probability much, so it seems like removing a connection shouldn't really make it that much harder to get from u to v' than to v. But the fact that it flips, not just close to equal, is surprising. Maybe the sturdiness means that the upwards connection (supports can fit in easier with the triangles. There's less supports than horizontal connections, and if the sturdiness obtained by using both triangles from above and below is worth it, then it might be easier to go across by swapping top to bottom top to bottom than by staying just on the bottom. And since the spokes count is again low, it may be more likely that you have to stay on the stop due to every odd number of spokes you may have to stay on the top (in 1 spoke, you would want to go up and you can't do down again in some graphs, so staying up (at v') is easier than finding a way back.)
I wonder if this will have applications to carpentry and sturdiness of designs?
I wonder if this graph may be part of a larger series that causes the probability of the graph to increase to some limit, and they just found the first counter example where the inequality crosses but it could cross even more.
As a hobbyist game developer with a mathematical background I found this interesting as this idea of the probability of finding a link between paths comes up all the time when using procedural generation for mazes and you often have to find random seeds for the random number generator that will/won't let you make a fully connected maze.
0:14 If the conjecture “has been disproven to be false” then that means it has been proven to be true.
🤓
it "has been disproven: to be false"
Not true actually, if a conjecture is not false, then the conjecture does not have to be true, see Continuum hypothesis for example
@@ilprincipe8094That's an oversimplification. The independence of CH from ZFC really means that those axioms don't suffice to prove whether CH holds or not, because there are some models of ZFC where it holds and others where it doesn't. That neither proves nor disproves CH (or its negation), it only tells us that ZFC can't really tell because it doesn't sufficiently delimit the kinds of cardinals that can exist.
@@ashifarman4813 Hahahah, it's not really about being smart, it's mostly hard work 😅 You can learn quite a bit about the basics of mathematical logic and set theort from UA-cam and Wikipedia. Beyond that, A Friendly Introduction to Mathematical Logic by Christopher C. Leary and Lars Kristiansen and Herbert Enderton's Elements of Set Theory are both great!
Bunkbed still feels true for most examples, so the question now is: what is the smallest adjustment we can do to this conjecture to make it true?
Aliens in 2024: Take us to your Graph Theory Specialist
Without an explanation of _how_ this counterexample managed to evade the intuitive understanding, I'm suspecting that the "tiny margin" by which the probabilities evaded the inequality is just rounding error.
There's no way for humans to verify, with the graph being that large.
I put this in my watch later and only decided to watch it when I saw the title finally changed😂
That for a very enjoyable explanation Dr. Bazett. If this passes review, I guess the bunkbed conjecture can be added to the growing selection of "true-ish" conjectures where the first counter example is huge and not accessible to brute force search. An overview of some of these might make a great series perhaps called "Why Mathematicians can't trust their gut."
Good video to threaten the kids with if they're arguing about who gets the top bed. Either they quieten down immediately or they're asleep after five mintues.
Does anyone have a not-super-technical explanation of why the logic that makes this conjecture seem “obvious” is wrong?
I feel like the specific subtype of the problem, whether either an edge or its corresponding edge must be removed, is pushing the probabilities a lot to be closer together. A bad layout on your starting plane raises the chances of there being a good path on the other plane. Even more so if you, instead, selected n random edges across both networks and removed those.
Im not much of a mathematician, but to my dev brain this sounds like a serialization problem. The process of "how can we represent complex data as a atring of 0s and 1s", is typically a matter of flattening high-dimensional data on a lower-dimensional space. For example a (0-indexed) 10x10 2d array can become a 1d array where the index of (x,y) is located at 10x+y.
Hopefully my intuition correctly that the solution in this video was essentially to "encode" the hypergraph as a lower-dimensiinal graph in such a way that it preserved key characteristics.
To me (not a mathematician) it seems kind of that uv
Really insane!! But it seems almost accessible for a "normal person" with a computer to check just that specific counterexample. Oh, maybe if we go from 1204 to, say, 12040, the difference in probabilities will grow? Anyway, this triangle subgraphs are key. Once they are solved, the rest can be probably brut-forced. Or not, whatever, we'll see. Very exciting and inspiring. I am not a mathematician, so I don't care that even if I can solve it (unlikely), it is going to be reinventing the wheel. Thanks a lot for the film!
4:45 - no... that does'nt seem true...
the same way you can think that's easier to walk in the bottom graph you now have the top one to "cut away" and connect more things.
but then all of your connections and the things you jumped to are definetly conected before you realize if it's pair can be reached.
so it only makes things worse.
but trully, because I can flatten this graph and just get another bipartition to call one bottom and one top, it creates a simmetry that you shouldn't be able to prove either way is better than another.
Could that tiny difference not be attributed to machine error? It would need a supercomputer for sure to validate it. I love the idea of transposing the points within the hypergraph. I feel like this could be tested using a physical electrical analog...hmmm
Well explained! Thanks
I don't understand how you can make a hyper bunkbed. Wouldn't the spokes necessarily have to connect to two points on one side and one on the other and thus not be symmetrical?
In Hollum's example along each "bunk" the edges are 3-edges but the posts between the bunks are 2-edges.
@@DrTrefor Aha
@@SeeAndDreamify en.wikipedia.org/wiki/Adjacency_matrix
@@SeeAndDreamify en.wikipedia.org/wiki/Incidence_matrix
is there any kind of combinatorial intuition why the probability in the cointerexmaple is higher for u->v‘? or do we just have to accept it 🙄
This is awesome, I love this video. If I may ask a couple questions:
1. Is there any probability that the posts will fade away, or are they considered immovable?
2. Is there a possible breakdown of the probabilities in the counterexample as there was in the original basic example (e.g. what is P(u->v) and how does it compare to P(u->w) and P(w'->v'))?
What about increasing the number of triangles from 1202 to something larger? Does the gap between the probabilities increase further?
Now I’m curious what the smallest counterexample to the bunkbed conjecture is
I think the initial example you gave was too obviously gonna follow the conjecture. If you had put the post in another spot that wasn’t located at the symmetrical v node, then it’s not too clear why the probability is still greater.
Percolation theory has many practical applications. But was the bunkbed conjecture of any practical interest or importance, or just something that gained attention (among graph theorists) because someone gave it a cool name? If there are practical applications, it still seems to be a very good approximation, which might usually be good enough.
What are the probabilities if you replace each hyperedge of the hypergraph with just a triangle? Then with a version of the special object with just one new vertex inserted? Then two? I'd be interested to see how that value changes as the number of new vertices in the object goes up.
I think it's a mistake to calculate specific numbers. I was able to show in my head that the hypergraph counterexample 'works', but I did that by ignoring a lot of possibilities where the probabilities would certainly be equal, and looking only at the cases where the probabilities would be different. After ignoring all the equal cases, I found one configuration where only the u->v path could be connected, and two paths where only the u->v' path could be connected.
I'm working on the hypergraph-reduced-to-triangle-graph version, but it's obviously harder.
Okay, I'm stopping here, bogged down in in the details; but I see an imbalance that makes it pretty likely that the hypergraph map is still a counterexample if you convert all the hyperedges to simple triangles, but keep the "each pair of edges between top and bottom has one edge in the pair fade and the other not" rule.
To be clear, I don't think it's possible for the Bunkbed Conjecture to ever fail while keeping the probabilities completely independent.
I am sorry for asking this question, but when I started uni, I was really passionate about all kinds of math. But now at when I have finished university and started working as a real engineer (and basically not using any math right now), I feel like I can only appreciate complicated math, which is used to solve real practical problems. So, why would anyone care about these bunk-bed graphs and then randomly removing some lines in the graph and then checking is you can go between four points? Seem like very arbitrary with 1 billion equally arbitrary combinations.
Graph theory is very useful in a number of areas, including in the optimization of computation algorithms or system design.
@@XGD5layer Yes of course, but why would anyone care about probabalistic bunkbed graphs?
@@CrazyMineCuber It's all about random walks, which is the context of all this.
Same here. That's why you and I are engineers and not mathematicians.
You used informal language to explain the intuition behind why it really seems that the conjecture "should be" true. I was hoping for another explanation like that at the end of the video, like explaining where that intuition goes wrong, what adding in those 1204x6 nodes is really doing, etc.
I guess it's still early days. Just being able to understand the problem and even comprehending the counterexample is a bit of a popular mathematics miracle.
the hyper link is basically a structure that holds infinite paths from one spoke to the rest on it, in this example its a trinagle, by adding the structure below you simulate the trinagle just in low resolution, if they added more then 1204 of this structures the margin would probably increase. 6 times is the number of new spokes for each given spoke
I’ve never heard of this conjecture, but pretty cool 🤔
And this isn't just rounding errors or approximations? Floating point arithmetic only goes so far, and that is a lot of calculations with a very small difference in probability.
Nope, the inequality was proven completely formally
@@DrTrefor wow!
This title really feels like it's one or two steps away from being a perfect tongue-twister.
You probably need to explain/demonstrate why the number of possible graphs increases so rapidly with increasing numbers of nodes and edges. It is much faster than most people intuitively expect.
Seen from 6:53 It would be more unequal if going from U to V can be done going through V ' when needed. As this is not the case, I wouldn't put my money on the bunkbed conjecture.
(edit) That's not fully correct, but there must be a class of two level (1 - 2) solutions available to V ' that are missing for V, while it is still not sure 1 - 2 - 1 solutions are allowed or needed.
I guess the trick is that you're not necessarily just going to an intermediate node on the bottom bunk, going up to the top, then going to v'. You could go up, down, then back up (repeat as often as you like). Not sure why that would make a difference, and as a software engineer I'm definitely aware that accurately computing such a small number is tricky and could easily have went wrong somewhere.
So is the idea that these spokes are essentially emulating the probabilities of the connections on the hypergraph within a small enough margin that it maintains effectively the same probabilities?
ya I think that's a fair summary of the intuition here
if the difference is 10^(-40) magnitude, does the paper account for errors in computation or did they calculate exactly first mathematically then to give a concrete value they used the computer and it gave 10^(-40)?
My understanding is once the counterexample was found, the computer CAN be fairly precise at computing our the relative probabilities even though the difference is tiny. However in the earlier stage of running machine learning algorithms to try and discover the counterexample, they were doing monte carlo simulations and those were much much less imprecise to find an example like this even if they could have dealt with the huge size (which they couldn't)
It's 10^(-4000)
@@jesusthroughmary mb. but that makes it even more concerning.
@@akirakato1293 I think that's why he made the caveat about peer review
At 7:00 I think I can already see where the conjecture falls apart: if you add more posts, doesn't that create potential for detours?
8:00 I'm not sure why you would need thousands of pages for this detour effect to start being relevant to the conjecture, so maybe I'm wrong
ok but now I want to see the monster graph I know it will be an inscrutable mess of lines but I still want to
Sorry but I think you need to explain how they went from the alternative to the regular conjecture? Constructions like the spokes are fairly common in graph theory. I don't see how it was transitioned to the regular conjecture and how that also does not reject the regular conjucture for hypergraphs?
It's great that we live in a world where all mankind's other problems have been solved, so that our most brilliant minds, just the cream of our academic elite, can set themselves against conundrums such as this. I feel like now that we finally know, everything has changed, and I can't wait for all the rich rewards this newfound knowledge will no doubt bring to humanity, and the world.
It's probably my background in OR, but I wonder... the graph counterexample, as presented, does not seem like a graph that would arise in most applied circumstances. I'm not debating the results of the paper, or saying it's not important... but we should also consider how useful something is in most cases, as opposed to finding a single counterexample and saying that the conjecture is 'wrong', in the sense of 'useless'.
I mention my background because of the Simplex vs. Elliptical algorithms for solving linear programs. The Simplex algorithm, by complexity theory, is not in P, whereas the Elliptical algorithm is. However, the cases that push Simplex out of P are (AFAIK) solely degenerate and never show up in practical circumstances. Given that under those circumstances, the Simplex algorithm is far more computationally efficient than the elliptical algorithm, the Simplex algorithm is the one used.
And yes, I realize that I'm talking about computation as opposed to truth, which is what the paper shows (specifically, the falsity of the conjecture), but we shouldn't as a result discard the bunkbed conjecture, as it's probably true in very many graphs, and would be helpful to determine other things about those graphs where it's true. I guess what I'm asking for is for an algorithm to determine whether or not the bunkbed conjecture is true for a given graph. Which does lead back to computational efficiency.
(From viewing up to the problem statement) I have an intuition for why the conjecture might be false. It "feels like" for any path from u to v', there should be another path from u to v involving fewer bedposts, so getting from u to v "should be" easier, but that's not necessarily the case. Each (loop-free) path that stays entirely in the lower "bunk" can visit u only once, whereas a path that transitions from the bottom bunk to the top bunk can loop back through u'. Therefore in a graph with exactly one bedpost, there can be more possibilities for a path from u to v' than for a path from u to v, specifically because the former set contains paths from u to u' to v', whereas the latter set does not contain paths from u to (either u or u') to v.
That's pretty far from a counterexample, but at least it gives me a reason why my "feels like" argument is wrong and how it could in theory be easier to go from u to v'.
Paths from u to v don't have to stay in the bottom bunk. They can visit the top bunk, then go back to the bottom.
@@Tzizenorec That's true, but I'm not sure I understand the relevance -- it's not true in the case where there is only one bedpost, which is the setup I'm considering at the end of message, because a loop-free path can't use that bedpost twice.
@@zygoloid Oh, I thought you weren't relying on any actual "loop". The answer to any solution involving an actual loop is: "You can remove the loop without changing anything". Anytime you find yourself crossing over your own path, it means that the detour you just took could have been skipped and you still would have wound up in the same place.
@@Tzizenorec Right, I know, that's why I'm only considering loop-free paths.
@@zygoloid But why are you concerned about "can't use that bedpost twice"? You would only _want_ to use that bedpost twice if you were trying to make a loop.
This is actually insane!
right?!?
Is the example at 5:12 calculated given that the post is always there? I've never studied graph theory, but I find it strange that the final probability doesn't depend on P(w -> w') at all. Is that just a given for this particular calculation (if, say, we are doing a single run of a monte carlo simulation, and for this run, there is a post at w -> w'), or does it exist there in *all* these calculations as a given?
The posts don't decay, IIUC.
You: The conjecture is just so intuitive. (Presents the conjecture.)
Me: Whoa, wait. Why would you expect that? Sure, I could see graphs where that inequality would hold, but I'd expect others where it's violated. (Finishes watching the video & the counterexample presentation.) Yeah, I think there are much smaller counterexamples. Your example of why the conjecture should be expected has a single bedpost and so your u to v' probability is a product of probabilities, but if you have several bedposts, you're going to have a sum of products. If the path from u to v is fragile, a lot of bedposts can give you more flexibility in circumventing the fragility on your way from u to v'.
i just burst out laughing when he said how many posts the counter example has and now my coworker is asking why I’m laughing, I’m fucking panicking I have no fuckin clue where to even begin explaining why I’m dying laughing rn XD
So now that it is debunked we're left with just the Bed conjecture?
If the probability is so minimally off, is it possibly a precision error? What if they're really just equivalent?
My favorite math trope is "but does it pass the big numbers test?"
I'm not sure I understand the conjecture correctly, because it seems to be obviously wrong.
Let's say we have a triangle, and again, probability of one edge fading away is 1/2. Then if we choose two points, u and v, then with 1/2 probability the edge between them stays, and with another 1/2 probability it fades away, and for them to stay connected the other two edges have to stay there, which happens with 1/4 probability. So, total probability is 1/2 + 1/2*1/4 = 5/8, same as your answer.
If we have two triangles, with all three “posts” connecting top and bottom triangles, then for u and v' to NOT be connected, we need AT LEAST that edges u-v and u'-v' BOTH disappear (otherwise we'd have at least one of the paths u-u'-v' or u-v-v'), which happens with probability 1/4. So, probability of u and v' staying connected is AT LEAST 3/4, which is more than 5/8.
1. Posts are optional; you don't have to have a post in every possible place.
2. Both trips are done on the second graph (the full bunkbed). So the connection from u to v can use the posts, just like the connection from u to v' uses the posts.
@@Tzizenorec
1) I understand that I don't have to; I choose to.
2) O...K, but it wasn't spelled out very well. It seems, if we have only one post, the probability of u and v being connected on the full graph is the same as them being connected on one half. Still, kinda weird to calculate that probability before even mentioning what a bunk bed is.
So frustrating! I almost proved it. I tried replacing the vertices in question with 100 spoke graphs, then 200 spokes, 300, all the way to 1200 and then I gave up. Only FOUR MORE!
I love how I watched the whole video even though I have no idea what's even being said
Mathematicians beating me up because I put two lines on two triangles.
lovely video btj
I wonder if they could have done all the boolean math on the GPU by representing the values as binary and using masking. A super cluster of CPUs seems overkill when you can run millions of simple boolean operations in parallel on an average graphics card.
I'm still sorta baffled how the bunkbed conjecture could be false. (Granted, I haven't read the paper yet.) The bottom and top graphs are in a sense probabilistically "the same", right? So the probabilistic distance u-v should be the same as u'-v'. So it seems like switching from one graph to the other can never gain you any ground, because the new vertex you switch to must be equally far from its target as the vertex you switched from was from its target. Is there a rule I've missed, or something? Regardless, I'm inclined to wait and see if peer review uncovers anything; good luck.