Hey Space Timers! We hope you enjoy the new logo!!! There will be a new intro coming in the next few weeks, but that needs a little more polish before it's ready. We also hope you enjoy the new merch store, which has 15% off everything for the month of December if you use the code PBS. Check it out here: crowdmade.com/collections/pbsspacetime
the best thing about this channel is that I have a PhD in quantum chemistry and this guy is taking all my knowledge on the topic, turning it inside out, and giving me a deeper understanding of it 20 years later. it feels glorious.
I was married to a quantum chemist for 20 years (Sarah J. Edwards, she wrote TagDock). She passed away unexpectedly earlier this year, and I watch these and miss her and our conversations every night. I keep hoping conservation of information exists so she'll be around forever and I am comforted that she still exists, just not in my time reference any more. Grief and science are an odd pairing.
We are living in the 2nd plane of existence from the 31 planes. Humans live in 4 different planes with 4 different great oceans. Devas (gods) and Brahmas live in higher planes of existence in mount Meru, and the earth is a part of that mountain.
I've been watching for five years, and this one was the hardest to grasp of all of them. Not because it was presented poorly, but because maybe I reached my brain's limit to understand :)
This explanation was an odd blend of rigorous and analogous... I've understood all of this for several years now, but I had trouble finding a clear path through the analogies. I think there are easier, more intuitive ways to explain it, but they don't _quite_ pinpoint that Many Worlds more naturally explains the Born rule. This is really Sean Carroll's argument, and he explains it in a way that I find much more intuitive... it's just also a bit less precise in its implications. I do think there's value in this kind of rigor, even within the framework of popular science communication, but I can't say for sure whether it's effective at conveying the information to someone who doesn't yet understand it, since I had that revelation, personally, well before this episode. You say it's one of the hardest to grasp... does that imply that you feel like you _did_ grasp it in the end? If so, then it was a successful explanation for at least one person. Still, I'd suggest watching one of Sean Carroll's lectures on the subject. He did a whole bunch of them while doing the publishing tour for his previous book, Something Deeply Hidden (and the book itself, of course, makes the same points in the same ways, it just goes a little deeper). The easiest to find is probably his lecture at the Royal Institution a few years ago, though it's not his clearest, best presentation of it (probably owing to the time constraints).
I don't think it is super hard to understand, but any simple version of it would be more of a metaphor than an accurate description, and this channel leans toward accuracy. My best summary (I'm no expert; others add or correct as needed) is that many worlds matches what we already know about quantum physics, we can't falsify it, but the math is straightforward. Other interpretations can use the same math, but they don't explain why the other realities don't exist. I think we intuitively want there to be one reality, and that may be the shortcoming here: we as humans want it to match our intuition, but doing so requires some other effect or mechanism that we don't yet understand. That doesn't mean such a mechanism doesn't exist, but the argument is that many worlds is the simpler explanation. An Occam's razor of sorts.
Yeah they've blasted through all the 'easy' stuff, I'm honestly excited to see how they'll present the truly mind-shredding topics to the general public.
It was hard to grasp because they barely explained it at all. The crux of the math is that (a) the state vector's length has to be 1 (which is why 1/√2 appeared out of nowhere when |T⟩ was broken into |T1⟩ and |T2⟩), and (b) length addition follows the usual Euclidean rules (I mean, special relativity as a theory has its own peculiar way to "add" lengths, and this is some abstract space rather than normal space, so we can't just assume the usual rules apply). I'm sure these are very basic quantum mechanics facts, but I can't immediately see why, because I'm a layman, and they didn't explain.
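For anyone who wants to poke at point (a) numerically, here's a tiny numpy sketch (my own toy version of the video's splitting trick, not anything from the episode): start with √(1/3)|H⟩ + √(2/3)|T⟩ and rewrite |T⟩ as an equal superposition of two orthogonal substates, which is exactly where the 1/√2 comes from.

```python
import numpy as np

# Toy model: treat |H>, |T1>, |T2> as an orthonormal basis.
H, T1, T2 = np.eye(3)
T = (T1 + T2) / np.sqrt(2)        # the 1/sqrt(2) keeps |T> at unit length

psi = np.sqrt(1/3) * H + np.sqrt(2/3) * T
print(np.linalg.norm(psi))        # 1.0 -- the state vector stays length 1
print(psi)                        # all three coefficients equal sqrt(1/3)

# Branch counting: 1 of 3 equally weighted branches is heads, 2 of 3 are
# tails, matching the Born-rule probabilities 1/3 and 2/3.
```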
I love listening to this channel even when I don’t understand all the concepts. Over time I pick up bits and pieces that help me to understand future episodes just a little bit more. This one had some great concepts that I understood, but when we start talking actual equations, my brain shuts itself off lol Great work as always. Love this program
God I love how this channel can actually go into the math of physics instead of just using spoken simplifications. All the math in the quantum mechanics videos is one of the reasons I managed to get an A in Quantum Mechanics 101 at university.
@@PrincipalSkinner3190 Same here, as a software engineer. I know what sigma, delta, and that fork symbol are, but some of those pipes don't seem to indicate a limit. It would be nice to have the math made readable in Python.
My older brother has a daughter who is a bit of a physics nerd (she's 11). He asked me, the one who actually studied this stuff, if I could explain superposition to her. I said, "Yes, I can, but neither she nor I will understand what I'm saying."
"I may be capable of explaining it to you, or I may not be capable of explaining it to you. Until I attempt to explain it to you and you observe my explanation, neither of us knows if I can, therefore I am in a superposition of simultaneously able and unable to explain to you until you attempt to listen to me"
😂 Exactly why I am more of a GR guy. QM needs to figure out how measurement works before I try understanding the religious nature of the interpretations, since many are not testable.
I like how I either understand this as something that's obvious, or I don't understand it at all, and I appear to be in a quantum superposition of the two and I can't know yet which one I'm in. 🤔
The problem with superposition might be solved by a better understanding of quark mechanics. I feel like our understanding of quarks is muddied by our misunderstanding of superposition, which I suspect is a fallacy.
Me too, I'm wondering whether this is the kind of episode that the folks who liked reading the proof that 1+1=2 would enjoy. Handy that someone put the effort in to actually put the proof on paper since it's the foundation of an entire branch of study, but a lot of people intuitively see it as a "well, yeah" statement without grasping the level of thinking that went into proving it's actually the case. It's hard for me to tell though as I'm usually in the other camp and playing with tensor fields and wavefunctions in my head. Maybe Feynman has finally infiltrated my brain 😬
This made me think of the Monty Hall problem. Information provided after choosing one of the doors partially collapses the wave function of where the prize is. And suddenly the two unchosen doors can be grouped into a single stack with a higher probability that the prize exists there.
@@antm9771 Many Worlds and QBism are distinct: in QBism there is a single world and the wavefunction encodes subjective beliefs about it, whereas in Many Worlds the branching wavefunction exists objectively and observers within it have "indexical uncertainty" that is treated in a Bayesian way using the principle of indifference.
The thought experiment and the imagined philosophical argument between Newton and Mach over Newton's bucket was the hardest problem I've ever encountered. I took a university-level physics course more than a decade ago, but I still get nightmares about a final exam on Newton's bucket. I know it has something to do with either absolute motion or mopping.
This is a really great explanation! I'm still trying to wrap my head around the argument, but I appreciate the willingness to tackle complex topics, the logical way it's laid out, and the fancy animations behind it all. If I'm understanding the logic right, the proof goes something like this: the Schrodinger equation demands a vector space and vectors of constant length -> state vectors must have magnitude one -> indifference means that states with the same coefficient must be equally likely -> a state with any coefficients can be rewritten as a sum of substates with equal coefficients -> the math works out to squares of coefficients = probability. I think I get it...?
"states with any coefficient can be represented as a sum of substates with equal coefficients" - Doesn't it seem too convenient that the state with the larger coefficient is allowed to be broken up into substates but the one with the smaller coefficient isn't, just to make 3 states with the same coefficient? Both states correspond to one macro observable. What makes one worthy of being broken up but not the other, except the motivation to force emergence of the born rule from many worlds? Not a rhetorical question. Genuinely wondering if I have missed something.
@kanishkchaturvedi1745 The idea is that you can always arrive at equal coefficients based on a common denominator. Say the probability spread was 3/5 heads and 2/5 tails. These can be broken into 3 head worlds and 2 tail worlds out of a potential 5, each with a coefficient of 1/√5 (so the squares still sum to 1). We only break apart one side in Matt's example because that's all we need to do. He could just as easily have broken one side into 2 branches and the other into 4, all with a coefficient of 1/√6, and the math wouldn't care.
@@umbrascitor2079 This is fine for showing that the Born rule is a good mathematical tool for a physicist to calculate a probability from the Schrodinger equation. But it does nothing at all to explain how the branching of the wave function would physically put you in a tails-flip branch twice as often as a heads-flip branch. The card explanation reveals the problem: the only reason it worked in that case was that there were more cards in one stack than the other to begin with. The analogy for many worlds would be that there were more quantum states in the tails world than the heads world to begin with. Exactly twice as many, in fact.
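For what it's worth, here's the common-denominator bookkeeping from a couple of comments up in runnable form (my own sketch; it shows the accounting works, though as noted above it doesn't by itself settle the physical question):

```python
from fractions import Fraction
from math import lcm, sqrt

# Rewrite unequal Born probabilities as counts of equally weighted
# branches over a common denominator, then just count branches.
probs = {"heads": Fraction(3, 5), "tails": Fraction(2, 5)}

denom = lcm(*(p.denominator for p in probs.values()))
branches = {k: int(p * denom) for k, p in probs.items()}  # 3 and 2 branches
coeff = 1 / sqrt(denom)                                   # each: 1/sqrt(5)

for outcome, n in branches.items():
    # n equally weighted branches recover the original probability n/denom
    print(outcome, n, "branches ->", n * coeff**2)        # 0.6 and 0.4
```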
The way I've come to interpret it, observation simply IS entanglement. Spooky action at a distance, the quantum state being undefined until an observation is made: it all falls out from that. The reason we can never observe conflicting quantum state observations is that the very act of observation entangles the states together.
Absolutely. When you observe a particle to see whether it has property P or not-P, your brain and the particle will become an entangled superposition of the following two states: 1. The particle had property P and your brain contains the memory of having seen it as having property P, and 2. The particle had property not-P and your brain contains the memory of having seen it as having property not-P. And the "wave function collapse" that other quantum interpretations talk about, is simply the transition of going from an un-entangled state (prior to making the measurement) to the entangled state (post measurement).
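Here's a minimal numpy sketch of that measurement-as-entanglement picture (my own toy model, assuming a 2-level particle and a 2-level "memory"): a CNOT-style interaction copies the particle's state into the observer, turning the product state into exactly the two correlated branches described above.

```python
import numpy as np

ket0, ket1 = np.eye(2)
particle = (ket0 + ket1) / np.sqrt(2)   # superposition of P and not-P
memory = ket0                           # observer's memory starts blank

before = np.kron(particle, memory)      # un-entangled product state

# Model the measurement as a CNOT: flip the memory iff the particle is |1>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
after = CNOT @ before

print(after)  # (|00> + |11>)/sqrt(2): the "saw P" and "saw not-P" branches,
              # with particle and memory now perfectly correlated
```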
@pbsspacetime, I LOVE this show. It's among the best on the internet. Dr. Matt is great, the topics are great, the coverage is great, and the discussion is great. I'm never disappointed to see a new episode come out. ☺️😄🥰
Great video! It answers a question that I had for years. A small followup question: at 15:00, where you are splitting the wave function into T1 and T2, you are assuming that they are orthogonal, and this becomes the underlying reason for the squaring of the coefficients. Where does this orthogonality come from? Naively I would've assumed that |T> is split into 0.5 |T> and 0.5 |T>.
You bring up a good point. I think the proof is flawed in that the fact that the states add as orthogonal vectors is itself a consequence of Born's rule.
Yeah, I don’t understand because tails isn’t orthogonal to tails. It’s obviously orthogonal to heads. The whole point of superposition is that 2 or more different states exist together. It doesn’t make sense to talk about the superposition of the same state. I think it’s not a satisfactory solution but an ad hoc attempt to get proper probabilities from MWI.
When you dealt the cards in sets of 3 and then swapped one, I was very worried you were going to invoke the Monty Hall problem. 😂 So glad you didn't. Thank you for keeping quantum mechanics simple and easy to understand.
The Monty Hall problem is easily explained when you look at it like this: in order for the showrunner to pick a door that is wrong, he MUST know what's behind each of the doors. So at the exact moment he opens a false door, information is being shared. That information, in and of itself, gives the higher odds at the third door, because he knew and has now told you part of it.
@@gorgit Also, the trick to the Monty Hall problem isn't that he knows which door is wrong; it's that his eliminating the wrong door changes the set structure. Intuitively, each door has a 1-in-(set size) chance of being right, which is correct, but by revealing a door after you pick, he is binding the 2 unselected doors into a set. People intuitively think of it as a bare selection between 2 doors, and thus a 1-in-2 chance, but because the revealed door is selection-entangled you are actually picking between your 1 door and the pair of other doors. So your original door still has 1-in-3 odds while the other doors collectively have 2-in-3 odds.
@@Merennulli Yes, but that's all because he KNOWS which door is wrong. That means information is being communicated by opening the wrong door. You are just going into the details of what information is actually being communicated.
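For anyone who doubts the 1/3 vs 2/3 split, here's a quick Monte Carlo sketch (mine, not from the video) you can run:

```python
import random

def play(switch: bool, doors: int = 3) -> bool:
    prize = random.randrange(doors)
    pick = random.randrange(doors)
    # Host opens a door that is neither your pick nor the prize.
    opened = random.choice([d for d in range(doors) if d not in (pick, prize)])
    if switch:
        pick = next(d for d in range(doors) if d not in (pick, opened))
    return pick == prize

trials = 100_000
print("stay:  ", sum(play(False) for _ in range(trials)) / trials)  # ~0.333
print("switch:", sum(play(True) for _ in range(trials)) / trials)   # ~0.667
```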
David, you're a poet. I have never seen a physicist describe the universe so eloquently and poetically as you do. Thank you for these videos. Keep them coming.
I follow a decent number of science channels, but am no scientist myself. That said, so far the most compelling quantum interpretation I've found is superdeterminism. The fundamental question of why the wave function results are what they are hinges on the answers to two points: locality and measurement independence. And so far, I've found very little compelling reasoning for why our measurement doesn't alter the state; in fact, as little or less than for why quantum physics would be non-local. What is certain is that it can't be both local and independent of measurement. But consider: we measure speed by altering speed, bouncing something of a known speed off the object we're trying to measure. We measure energy by funnelling it into a controlled process that consumes some. And the more complex the measurements, the more invasive they tend to be.
Agreed. You find the same future-dependent input in superdeterminism and, in classical physics, in the principle of least action. No necessity of measurement independence.
Y'all should do a video on the Wigner's Friend thought experiment to really finish up the many worlds interpretation explanation. One might question whether those branches are real given that we can't interact with them, but the point is that notions of what is real are observer dependent! An external observer can interfere our branch with a branch we aren't entangled with, and so end up with different (relative to us) observed probabilities for the same experiment.
What does it even mean to say "what is real are observer dependent"? This is an abuse of philosophical terminology. The term "realism" refers to an objective reality that is independent of the observer. "Observer-dependent reality" is nonsensical.
I love the PBS vids, Matt! Just one remark: the outro music is loud and sudden. I so enjoy getting into a state of musing about the deep things you explain, and the final music breaks that state with the heavy drums and horn sections.
It's interesting that the framework for this thought experiment and Einstein's framework for GR both require the observer to lack information about the exterior world in order to induce a symmetry (e.g. not knowing whether you're in an accelerating rocket or in a gravity field). I don't think I can fully wrap my head around this observation, but I do think it speaks to something about the information content of these systems, and that may be tied to Mach's principle.
An inability to measure a difference is another definition of a symmetry. It shouldn't be surprising that we have gained so much knowledge by considering the innate symmetries of nature with mathematical rigor. In combination with Noether's theorem, symmetries are also conservation laws. Thus, situations which we cannot discern are the source of all deep physical principles.
An accelerating rocket is a 'gravity' field. There is no difference between the earth accelerating you or a rocketship accelerating you. One is accelerating you in a curve, the other a more or less straight line. The laws of physics apply equally in all frames of reference. It doesn't matter if the frame is vertical, horizontal or curved. This is what Einstein and his flat Earth followers don't understand.
Great episode! That said, I'm not convinced at all by the splitting-state argument. To me it looks like a trick to force the final explanation of the Born rule.
It indeed is. See the paper _"Epistemic Separability and Everettian Branches: A Critique of Sebens and Carroll"_ by Richard Dawid and Simon Friederich, a critique of the paper this video is based upon. It demonstrates that this method's derivation of the Born rule requires arbitrary assumptions that are clearly chosen just for the purpose of deriving the Born rule and would not be chosen as assumptions otherwise.
I like how they even incorporated a sense of being "in the wave function" at 9:37-9:38, where you can be either in a world where the text bubble says "degress of belief" or a world where it says "degrees of belief".
This video was a miss for me. It reminded me of philosophy texts that meander on one topic for dozens of pages, using progressively deeper details and options, then ultimately conclude that no new knowledge was gained.
Hi PBS. If memory serves, in his PhD thesis Everett constructed his many-worlds theory so that its results are mathematically equivalent to the other (then-accepted) forms of quantum mechanics. Thus, whether you use it to solve a specific problem or not is largely a matter of taste, not a matter of its correctness, because of its mathematical equivalence to the other interpretations of QM. Ed Salpeter suggested that you should adopt the QM interpretation that helps you take the next step in your physics research. [For example, I guess Everett thought his many-worlds interpretation of QM would lead to unification of QM & GR, which it obviously has not, so far.]
I think it's because classical physics has the principle of least action. This is in contrast to quantum mechanics, where every path is possible. I think taking measurements applies a gauge (redundancy/symmetries/group) to the quantum system. Any measurement or gauge respects some symmetries, for example translation in space or time. The symmetries result in conservation laws, following the more general principle of least action, by Noether's theorem. Feynman's path integral formulation of quantum mechanics reduces to classical mechanics with the addition of the principle of least action. A single trajectory or state is only observed because it is fixed by the large stationary action and doesn't cancel out. The many worlds idea is possibly an artifact of there being redundant symmetries such as time invariance. The "guiding equation" of particles in pilot-wave theory is the stationary action, which also has locality.
When summing all possible paths, the principle of stationary/least action is what you get as a consequence, for free; it's not like you add it in explicitly. Classical mechanics, with its least-action principle, comes from this, from the quantum world where we sum over all paths.
Excellent video as always. I really wish Sean Carroll would see this episode. He also does the Mindscape podcast, highly recommended if you're into quantum mechanics and the deeper questions in physics.
I'm pretty sure this episode is based on Carroll's paper about deriving the Born rule in many worlds. Edit: it's not. It's based on the paper linked in the description.
It's based on Carroll's paper which is incredibly unconvincing and the circular nature of it is already obvious in this video. Notice at 15:12 if you apply the "principle of indifference" (what Sean Carroll actually called the "epistemic separability principle," i.e. ESP) you would get the wrong answer, so you have to intentionally rearrange the mathematics in order for it to get you the right answer. This is because Carroll does not rely on the ESP but a derivative he calls ESP-QM, which adds on a postulate that the probability one should assign to being in a certain branch depends on the reduced density matrix that describes the combined system of the agent and the detector... but density matrices give you Born rule probabilities, i.e. ESP-QM implicitly assumes the Born rule. Carroll's paper assumes the Born rule in order to derive the Born rule. I recommend Richard Dawid and Simon Friederich's paper _"Epistemic Separability and Everettian Branches: A Critique of Sebens and Carroll"_ which goes over this, ultimately showing that an assumption of ESP does not entail ESP-QM without additional assumptions, and these assumptions are arbitrarily chosen specifically to reproduce the Born rule. James Hartle's paper _"What Do We Learn by Deriving Born's Rule"_ also points out that you cannot derive the Born rule without additional assumptions. If the Born rule is not assumed as a postulate, you need some other postulate from which it can be derived. Carroll's ESP-QM really is of the same type as the "quantum non-equilibrium" postulate in Bohmian mechanics, that is to say, yes, you can derive the Born rule from such a postulate, but it is clear the postulate is chosen _specifically to derive the Born rule,_ and thus the postulate is just as arbitrary as just assuming the Born rule as its own postulate.
@@amihart9269 I think you’ve misunderstood a few things here. The Principle of Indifference (POI) and ESP are two different ideas. The video skips through some of the details that are shown in the doc linked in the description, but the basic idea is that you can show that the POI is valid when branches have equal amplitudes, and that it is invalid when they do not. So when you encounter a wavefunction with unequal amplitudes, you must re-express it in terms of equal amplitude terms to use POI and get the correct probabilities. The argument for when POI is valid and when it is not does *not* require knowing the Born Rule in advance, so this is not circular.
@@skewlkid521 You say they are different but fail to show how. And yes, you have to break them up in a specific way precisely to get the Born rule. The validity of the Born rule is a starting assumption, and arbitrary postulates are added to reach that conclusion, but those postulates have no justification other than the fact that you can derive the Born rule with them. If we discard the Born rule, why should we believe anything about your mathematical operation to break up the wave function?
This was quality content. It's been a long time since I watched one of these PBS vids because I thought they'd turned a bit clickbaity, but this vid was right on the money.
Mathematical physicist here. I'm curious what you think about this; I've been looking at this problem for a while now. As far as I can tell, the argument you've presented here was first championed by Sean Carroll. What it seems to suggest is that superpositions decohere into the environment in different ways depending on the amplitudes of the basis states in the superposition, which naturally implies that naïve branch counting is wrong. Those two substates of tails you constructed at 14:24 would correspond to two different ways the state could decohere into the environment such that the environment measured tails. That's fascinating. However, the big issue I have with this argument is Carroll's hand-wave for states with irrational amplitudes. He does a proof for coefficients from √ℤ, up to phase differences, but then argues by density that the same holds for any complex-valued coefficients. What I am suspicious of is this: if we had a large energy superposition, each of those amplitudes would vary with time. So the time evolution changes how many ways any given eigenstate decoheres into the environment as time goes on. This seems nontrivial to me because the number of substates we'd need per eigenstate wouldn't be continuous with respect to small perturbations in the amplitudes. The time evolution would mess up the Hilbert space something fierce; we wouldn't know, by the uncertainty principle, when a measurement would be made, so we'd have no way to be certain how to pull the trick you use at 14:24. So I am skeptical that the total wave function satisfies the Schrödinger equation throughout the entire decoherence process.
You're not looking at the experiment through the eyes of a quantum observer. While the outcome of an experiment can appear to be along a continuum logically, the uncertainty principle ensures that many states along that continuum share a common outcome. No measurement apparatus, including our eyes and our memories, can discern an infinite number of measurements. Therefore, the number of outcome states is always an integer at some level.
Thanks. I've had a long-standing intuitive sense that MWI doesn't handle irrational numbers consistently, and it's nice to know which resources to turn to (Carroll's paper) if I want to develop a more rigorous belief.
@@davidhand9721 Let's say the wavefunction for the coin is (2^(1/4)/a)|H> + (√(√2−1)/a)|T>, for some number a chosen so that the squared coefficients sum to 1. How would many worlds tell us the probabilities of obtaining each measurement?
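Putting numbers on that (my own sketch, with the normalization constant a worked out): the Born probabilities come out irrational, so no finite split into equally weighted branches reproduces them exactly; as discussed above, Carroll handles such cases with a density/limit argument over rational approximations.

```python
from math import sqrt

# Normalize |psi> = (2**(1/4)/a)|H> + (sqrt(sqrt(2)-1)/a)|T>:
# squared coefficients are sqrt(2)/a**2 and (sqrt(2)-1)/a**2,
# so a**2 = 2*sqrt(2) - 1.
a = sqrt(2 * sqrt(2) - 1)
cH = 2 ** 0.25 / a
cT = sqrt(sqrt(2) - 1) / a

print(cH**2 + cT**2)  # 1.0 -- properly normalized
print(cH**2, cT**2)   # ~0.774 and ~0.226: irrational Born probabilities
```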
It's nice to see that you are paying due attention to the many-worlds interpretation. Born's rule indeed beautifully aligns with the fact that subsets (bundles) of branches after a quantum event have different cardinalities. The probability of ending up in a rare bundle may be infinitesimally small. But if you find yourself there, consider what happened to your clones in more probable bundles. Perhaps this extraordinary bundle is the only favorable one for you?
The moment you spoke of 1/3 compared to 2/3, I was like "but that's just a macro state made out of many small branches with the same weight." I'm glad I finished the video :P Anyway, the only interpretation that makes any sense to me is the many worlds one.
For me the many worlds interpretation is by far the most intuitive, and it seems to match exactly what all the weird properties and maths of quantum mechanics are telling us.
I fully disagree that the MWI derives the Born rule. The argument used in this video simply assumed the Born rule from the start in an indirect way. The main issue is the fact that, prior to considering the importance of the coefficients, you allow yourself to have different partitions of possible states/outcomes. You can have a partition of H and T, or H and T_1 and T_2, or some other partition of possible states. Since there are different partitions with different numbers of outcomes, you can't apply the indifference principle to unambiguously assign probabilities. What you need to do to apply the indifference principle is to decide the "correct" partitioning of states such that there is an unambiguous number of possible states and thus unambiguous (equally distributed) probabilities. The thing is, the argument in this video doesn't _derive_ the "correct" partitioning, it _assumes_ that the "correct" partitioning is the one where all of the states have coefficients with equal magnitudes. This assumption is just the Born rule in disguise: states are only equally probable when their coefficient magnitudes are equal. Nothing in the argument nor in MWI derives this or says why this has to be the case; it's just an assumption. Why is the "correct" partitioning to apply the indifference principle the partition with equal coefficient magnitudes? This argument does nothing to address this fundamental question, it only assumes what it was trying to "derive" in the first place.
I think it would have been helpful in this explanation to explicitly state that this is because the Everett Interpretation necessarily has *symmetry* built into it, whereas other interpretations don't and merely assert the existence of probabilities instead of having them arise naturally.
@@d95mback I mean symmetry, and for exactly the reason you stated. My only issue with the video is that this symmetry is implicitly stated amongst a lot of talk about vectors and equations and so on. Not that any of it is wrong, it certainly isn't, but it's easy for a viewer's eyes to start glazing over. It might have been better at one point to just say something like "Forget about the weird equations that only some people can read/understand; this is fundamentally an issue of symmetry" and explain why in a more easily digestible manner. There is a symmetry in MWI between the number of outcomes which are possible and the number of outcomes which happen. The outcomes are not symmetric individually, obviously; but they do, by definition, respect a symmetry as a whole. This is not true for CI, for instance, because it requires that only one outcome happens regardless of the number of possible outcomes. It requires that the symmetry is broken, but without any justification for what causes the breaking.
I have an issue with the card hypothetical: it assumes no knowledge of the joker's starting position. In reality, rather than in the math, that's important. You can say it's shuffled, or goes out of the observer's view, to put it into a quantum state. But if there were an outside observer that "reads" the joker the entire time, the odds could be worked out. Laplace's demon.
Isn't any non-quantum interaction counted as an observation of the particle? If so, wouldn't there be an uncountable number of "worlds" generated at any point with an uncountable number of different possibilities? If the entire universe is constantly entangled wouldn't that undermine relativity? Would it undermine the word entanglement if an infinite number of them happened to all particles all the time? I love your channel btw, this is what PBS should do. Thank you so much for your work!
I don't think it would have anything to do with relativity. Information can't travel ftl in MWI. Also MWI is both local and deterministic so problems with entanglement don't really appear in it. All of the quantum weirdness is stuffed into the MW part of MWI.
@@AwfulnewsFM MWI is only "local" if you redefine "locality" to be something entirely different. People who claim MWI is "local" remind me of compatibilists who come into the "free will" vs "determinism" debate and just say, "hey, why don't we just change the definition of 'free will' so the two are compatible?" I mean, sure, you can do that, but it doesn't really end the debate but just kind of misses the entire point of the debate in the first place. The same is true of silly MWI claims that it is "local." MWI is only "local" in an extremely intangible and unverifiable metaphysical sense of locality in terms of the multiverse, but nobody who cares about locality actually cares about locality as some vague philosophical principle. Most of physics was nonlocal up until Einstein. Newtonian gravity is treated as instantaneous and the speed of light was also treated as instantaneous. The desire for a local theory arose very specifically in the 20th century with the development of Einstein's General Relativity where not only is the theory entirely local and these speeds are actually finite, but locality is baked deeply into the theory, i.e.if you could travel at superluminal speeds it would lead to nonsense predictions like distances contracting to imaginary values. MWI claims about it being "local" entirely miss the point of the locality debate. Locality isn't desirable as some vague abstract principle, but very specifically due to a desire to one day make QM compatible with GR. That means the only locality that matters is locality in the GR sense. Any other form of "locality" is irrelevant and there is no reason to care about it. MWI is not local in terms of GR as it is only "local" in terms of its multiverse but still contains _apparent_ nonlocality when a single "branch" is considered in isolation. See the paper _"An Elementary Proof That Everett's Quantum Multiverse Is Nonlocal: Bell-Locality and Branch-Symmetry in the Many-Worlds Interpretation"_ by Aurélien Drezet. Also see the paper _"Einstein, incompleteness, and the epistemic view of quantum states"_ by Harrigan and Spekkens which provides a no-go theorem showing it is not even possible to have a purely ontological interpretation of quantum mechanics that is local. MWI is only "local" in a vague sense which is only defined in terms of itself, which misses the entire point of why people actually care about locality and debate whether or not it is real.
@@amihart9269 You made some good points here, but there's an important clarification needed: all versions of (unmodified) QM are "weakly" non-local, because the light cone structure (which defines causality and temporal order in both special and general relativity) still defines how signals (particles, anything with four-momentum) are transmitted in a quantum world. It's not strong non-locality. This clarification is important, because many people still confuse quantum non-locality (non-separability) with strong non-locality (which exists only in science fiction...). But you're right, of course, that MW is indeed weakly non-local, like the other interpretations. (Bohmian mechanics has a more serious problem with relativity, because it breaks Lorentz invariance.)
Very good episode! I love the topics of entanglement and QM interpretations. I often think about how quantum strangeness could be like the allegory of the cave. If everything has some form of quantized state at a fundamental level, could it be that the macroscopic universe is an emergent phenomenon (sort of like a program running on a computer) from which we can only see shadows of the "hardware" that determines those states? After all, any problem can _look_ probabilistic if you can't access all the information, and as an emergent system of information, our universe would have no influence on or access to information from the system giving rise to it, at least beyond what the quirks of this parent system would give away (i.e. all the weird quantum effects that can't be fully understood without solving the measurement problem). Not claiming anything here, just sharing my musings, I guess. Forgive me if this reeks of Tegmark, because I'm a fan hahaha.
The observed violation of the Bell inequalities is most often understood as a proof against the "hidden variable" scenario you've described, wherein there is some deeper system to which we are denied access that would make perfect sense of the quantum world. At least under Copenhagen, no hidden system obeying locality can account for quantum oddity. If you can tolerate non-locality, I guess that's fine, but Many Worlds offers the most sensible and elegant explanation: the hidden variable is the observer, who is just as quantum as the system they observe.
@@davidhand9721 Why, if I may ask, is locality so universally agreed upon as an essential characteristic for a successful model of QM? What is it about a local universe that is so much more attractive to physicists than the idea of a non-local universe?
It's a basic philosophical assumption about how the world works, i.e. so far back in the chain of assumptions that physicists really don't want to ditch it too readily. From any experiment we can ask whether the result is understood or whether we should go back and change our major basic assumptions, and this is a very major one, like causality. The results of any experiment give rise to a debate about which part of the holistic set of theories and observations we can change. Usually major philosophical assumptions are taken as bedrock, from which we infer that something is wrong in observation or theory instead (although we haven't found that, so we're still looking; didn't it take a while to accept that an unseen planet wasn't causing the anomalies in Mercury's orbit, and that we instead needed a whole new theory of gravity?).
@@davidhand9721 Someone should've told John Bell that; I'm sure he would've been interested to know that "Bell inequalities" disprove "hidden variables"... lol. What you say is just an old wives' tale. Bell's theorem has no relevance at all to ruling out hidden variables as such, which is why John Bell himself was a major advocate of hidden variable theories and worked on developing them (see his paper "On the impossible pilot wave"). All Bell's theorem shows is that if there are hidden variables, they would have to be nonlocal (setting aside superdeterminism). Both MWI and Copenhagen are equally overcomplicated interpretations that rely on an arbitrary and unfounded assumption: that the statistics used in quantum mechanics differ fundamentally from the statistics used in every other field and every other science, i.e. that a particle having a 50% chance to be in one state and a 50% chance of being in another is not due to an epistemic restriction relative to the observer, but that it somehow actually exists in both states simultaneously. This assumption is entirely unfounded but is beloved by the mystics who want to believe in grand multiverses or collapsing wave functions.
Personally, I think the simplest and most intuitive way to think about quantum mechanics is not that there is some underlying hidden information we can't access that prevents us from predicting the outcome exactly, but that the probabilistic nature of QM is instead caused by precisely all the things we _can_ access.

From such a point of view, it is not correct to interpret even classical mechanics as causally deterministic, because it is only causally deterministic in an ideal sense where you can fully isolate a system from its environment, which is not possible in reality. In reality, there are always some deviations, some _error,_ between the idealized predictions of classical mechanics and the actual measurement results. Typically, we think of the idealized mathematical laws as describing the system "in perfect isolation," but you could instead consider the idealized system as what Einstein called an "ensemble." An ensemble is not a system in isolation; rather, an ensemble is formed when you repeat an experiment with the same initial setup an infinite number of times. As you continue to repeat the experiment and average out the results, the errors should cancel out, and you'll _converge_ towards the idealized mathematical law.

If you think about classical mechanics this way, then there is no fundamental distinction between probability in QM and in classical mechanics. The wave function represents an ensemble, and the Born rule gives you the probability distribution that the system will converge towards if you repeat the experiment an infinite number of times. If you only carry out the experiment once or a small number of times, you get deviation between the idealized prediction and the actual outcome, but this deviation is not caused by some underlying brand new phenomenon we haven't discovered; it is caused precisely by all the phenomena we already know about: the entire universal context in which the experiment is conducted. You can't account for this even in principle, so there is no way you'll be able to predict the outcome precisely, but the same is true in classical mechanics.

You have to get rid of the notion that idealized mathematical laws have any relationship to things "in perfect isolation," because things in perfect isolation can't exist, and instead consider how these idealized mathematical laws are actually produced if we can never isolate anything. Usually we can isolate things _enough_ that the error is small enough to let us pretend some truly isolated system exists where that error doesn't, but when you get down to the atomic scale this mental trick doesn't work; you have to actually acknowledge that there is always error and that it stems from deviations from the ideal due to the fundamental inability to isolate systems from their environment.

If you think about it that way, then the "measurement problem" is solved because there is no measurement problem in the first place. The measurement problem arises from a belief that particles in a sense "spread out" and take all possible paths described by the wave function, yet we only ever measure a particle taking a single path, so you have to explain how the particle seems to come back together into a single point whenever you make a measurement. Copenhagen claims things "spread out" as a postulate and then needs another postulate to explain how things "collapse" back together again.
MWI denies that things come back together but maintains the postulate that things "spread out," so it's forced to declare that there's a grand multiverse composed of an infinite number of parallel universes. But the view I'm presenting here, from Blokhintsev, denies that particles "spread out" in the first place, so there is no need to bring them back together, and thus there is no "measurement problem." Particles only ever take a single path, but the path does not have a singular essential cause; it is instead caused by its universal context. All the _particulars_ of its context are knowable in principle, but they work together to overdetermine the outcome of the experiment, so you would need to know all the _particulars_ in order to predict the outcome, and it is not possible to know the entire universal context simultaneously, so you will always have some error and deviation. A "correct" interpretation of QM from this point of view would thus be one that is epistemic but also contains no hidden variables, because there is no essential cause that the error could be reduced to, in the same sense that you do not need to add hidden variables to classical mechanics to explain deviations that can be attributed to error. The error arises for epistemic reasons, due to universal context which the observer did not (and could not) fully account for.
To solve the measurement problem, we theorize that the whole universe permanently copies and splits itself for every quantum state within the universe that gets observed. And every one of these "new" variations of the universe is doing the same for all possible variations. We'd better not ask where all the energy these universes contain is coming from. That is not exactly what sails under my flag of Occam's razor.
The measurement problem is a pseudoproblem. It arises from the measurement postulate: the Copenhagen interpretation claims as a postulate that particles "spread out" into all possible states, but if they do this, why don't we ever measure particles in all possible states? Copenhagen fixes this by adding on the measurement postulate to bring everything back together, to "collapse" the "spread out" state into a determinate state. MWI tries to "solve" the measurement problem by deleting the measurement postulate, by saying there is no "collapse," but then it's left with reality itself spreading out into a grand multiverse which we can't even observe.

Einstein correctly pointed out a century ago that all this confusion comes from the postulate that particles really do "spread out" into all possible states in the first place. If you drop this postulate, there is no need for a measurement postulate either, because nothing ever "spreads out" that needs to be "collapsed," and you don't get grand multiverses because, again, nothing spreads out at all. The percentages derived from the Born rule are interpreted as a sort of "best guess" at what path the particle took, not as the particle literally spreading out into all possible paths; it only ever takes one path. Einstein gave a lot of great arguments for why this "spreading out" postulate is absurd and why nothing in quantum mechanics provides a convincing enough reason to adopt it. His biggest problem with it was that it inherently requires you to reject the existence of an objective reality independent of the observer. He gave the example of atomic decay: as time passes, quantum mechanics predicts an increasing probability that the atom has decayed, but it never hits one hundred percent. This means that if you interpret the wave function as ontologically real, you have to believe the atom really never decays until you look at it, and so you cannot even say the atom has a real state independent of the observer. (Eugene Wigner also pointed out, in his "Wigner's friend" thought experiment, that under certain conditions two observers would come to experience entirely different realities, and thus you could not say there is an objective reality independent of the observer from a purely psi-ontological standpoint.)

Believe it or not, Schrodinger also initially put forward his "cat" thought experiment as a way to criticize Copenhagen, pointing out that if individual particles can "smear out," then there's no reason such a particle could not trigger a chain of events that would affect macroscopic objects, like a cat, and then you'd have to claim the cat is "smeared out" as well, which is an absurdity, and so it should be rejected. Yet people these days repeat Schrodinger's thought experiment as if he were arguing _in favor of_ believing "smeared out" cats exist, which is the opposite of what he was saying. Even John Bell is often misrepresented as saying he "proved hidden variables don't exist" and therefore objects really don't have real states until you look at them and are fundamentally indeterminate. Bell did not believe this, and he also criticized Copenhagen in his paper "Against Measurement" because its measurement postulate makes it incapable of scaling the theory up to large systems.
While Bell did not take on Einstein's approach, he became infatuated with Bohm's "pilot wave theory" and even contributed to it in his paper "On the impossible pilot wave."
One of the very few episodes of ST where I lost the plot early on and never got it back 😂. Really commenting though to ask if Matt responding to comments is ever coming back?
Yeah, you know, I've noticed a trend of interactivity on YouTube decreasing in some niches/genres, science communication being one of them. The Royal Institution has seemingly stopped doing Q&A sessions after lectures, or at least has stopped recording and publishing them on their channel... SpaceTime doesn't seem to do comment responses much, if at all, anymore... I wonder if engagement with those kinds of content is just too low according to analytics and metrics... 🤔
So what happens when the wavefunction is continuous rather than discrete? That situation would be rather like a particle being detected at any of an infinite number of places on a screen after having been diffracted through a single slit rather than asking whether it went left or right in the dual slit experiment. Although one can assign a probability for detection in any small area defined on the screen, the probability of detection at any given point will be zero.
The wavefunction is continuous; we observe a smear on the screen after passing through observed slits. I don't think what you're stating is necessarily more than a Zeno's-paradox type thing. We can do continuous probability, can we not?
Well you can’t really measure accurately down to a point because of the Heisenberg uncertainty principle. The uncertainty in position times the uncertainty in momentum needs to be greater than some constant, and a point position would have zero uncertainty. So you can never find yourself in a world that has zero probability to be measured (which kind of makes sense when put that way)
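To put a number on that, here's a small Python sketch (my own, assuming a Gaussian wavepacket purely for illustration): the Born rule gives a probability *density*, so a single point carries zero probability while any finite detector region carries a finite one.

```python
import numpy as np

dx = 0.001
x = np.arange(-10, 10, dx)
psi = np.pi ** -0.25 * np.exp(-x**2 / 2)   # normalized Gaussian wavepacket

density = np.abs(psi) ** 2                 # Born-rule probability density
print(density.sum() * dx)                  # ~1.0: total probability

window = (x > 0.5) & (x < 0.6)             # a small detector region
print(density[window].sum() * dx)          # small but nonzero probability
```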
@@CTimmerman not sure what you mean by "proven". The Planck length is a very small length indeed, but it's just what you get when combining the Planck units. The smallest possible length could be much smaller than that.
I actually didn't hear any assertion of a distinction between "creating and distributing worlds over probabilities" and "creating and distributing corpuscles over probabilities." I'd be very surprised if y'all ever made an episode on that fine a question, but I feel like it is necessary.
Does there need to be a difference? I think the intended difference is what comes after, like how the corpuscles disappear compared to the many worlds that persist.
@@n0tthemessiah No, it is not, as Everett's interpretation fails to explain why we find ourselves in one branch over another if all outcomes occur. It is just an empirical fact that only one outcome occurs, and if both X and Y are possible, and X occurs, why did we measure X and not Y? While I do not endorse pilot wave, Copenhagen, or MWI, pilot wave at least has a specific reason for why X would be measured instead of Y, that it depends on the particle position.
I really love the direction PBS Space Time is taking, often considering interpretations on top of established, verified quantum theory. It's no simple "shut up and calculate" from PBS. 😎
This seems like one of those things that really is easier to get a handle on if you know the math. A formidable proposition. As someone who went applied as soon as he could, having recognised his pure-math ability was severely limited, this quantum stuff is a cruel trick indeed.
I don't see how this supports the Many Worlds hypothesis at all. Adding the "Principle of Indifference" is what does it. So yes, adding another convenient premise to a theory can help it explain another facet of quantum dynamics. That is true for any theory: add more postulates, explain more. The "trick" here is convincing people that the "Principle of Indifference" is somehow "true" rather than the base speculation it is. It's a version of "either it happens or it doesn't, so everything has a 50/50 chance of being true." Aliens killed Kennedy? 50/50 is a reasonable view if you don't know anything (which most people don't). The absence of knowledge of likely outcomes should in no way generate a view that things are 50/50. In my admittedly not-so-humble opinion :)
I'm very new to this, so sorry if my question seems stupid. If my coin were weighted in such a way that it had exactly a √2/10 chance of showing heads and a 1 - (√2/10) chance of showing tails, wouldn't that lead to an uncountably infinite number of branches, thus making the coefficient of each separate branch 0?
Good question. I don't think that many worlds has a definite answer to this. I also think that there are issues when the number of possible outcomes is infinite.
I think there is one more thing to consider for many worlds: I think it is also branching in the "past" direction (and if we are there, we just experience it as "forward" as usual). So, theoretically, even the past keeps getting fuzzily "rewritten" in the big "buzz" of the universal wave function that we exist in a small sliver of (just like the future is a weird fuzzy mass of "infinitely" many waves whose interference creates the patterns we are made of and are observing).
My PhD is in probability, and what you have here is very good and well thought out. As a thought experiment to emphasize what I think you're alluding to, I'd ask those interested one question: suppose I define X to be the sum of two fair 6-sided dice. What's the sample space of X? (a) If you say {2,3,4,5,6,7,8,9,10,11,12}, then you're defining your "outcomes" in such a way that they're not equally likely, and a non-trivial pmf is needed to describe your random variable. (b) But if you say {{1,1},{1,2},{1,3},...,{6,5},{6,6}} (remembering that the sample space need not contain the possible realizations of X; rather, X is a function from the sample space to the real line, in this case X({a,b}) = a+b), then you've defined your sample space such that each outcome IS equally likely, and then even non-trivial events like {X=4} can be measured probabilistically simply by using the axioms of probability. It's super interesting how the rigorous parts of probability theory get interpreted on a combinatoric level in, say, an intro probability course.
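Here's option (b) in code (my own quick illustration, using ordered pairs for the 36 equally likely outcomes): enumerate the sample space, define X on it, and measure an event like {X=4} by counting.

```python
from itertools import product
from fractions import Fraction

# The 36 equally likely ordered pairs form the sample space.
omega = list(product(range(1, 7), repeat=2))
X = lambda outcome: sum(outcome)            # X is a function on the sample space

p_X_equals_4 = Fraction(sum(1 for w in omega if X(w) == 4), len(omega))
print(p_X_equals_4)   # 1/12, from the outcomes (1,3), (2,2), (3,1)
```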
Can I just say that I absolutely love the thought experiment in the beginning. I like that quantum physics thought experiments are almost never experiments that could pass ethics reviews
I'm pretty sure everyone has a gut-feeling preference based on some combination of what hand-waviness they'll most easily overlook (and all of the interpretations have hand-waviness) and what is most emotionally appealing to them. It makes sense to me that a lot of physicists like this interpretation, because, as they like to say, "it's just taking the math [that we know so far]" and taking it literally! Physicists are immersed in the math, so they prefer the interpretation that takes the math literally, and that handwaves everything else (kidding. Sorta.) It's fine to have a gut-level preference, but given that there is absolutely no way to test this interpretation compared to most others (not even in principle, I don't think), scientists like Sean Carroll should know better than to be out arguing in favor of a particular interpretation. It's much more scientific to say that we don't know, and to keep working on further developing the theory in ways that ARE testable. It seems far more likely to me that QM is an approximation and/or that there's something deeper or more subtle going on that we don't understand yet, than that any of our current problematic interpretations is correct. What Sean Carroll is engaging in is philosophy, not science, but given that he's not rigorously trained as a philosopher, he's probably not very good at it. This is one of the things I love so much about Roger Penrose. He comes up with some pretty crazy ideas, but they are all testable via experiment. And in cases where they are testable in principle but our equipment isn't good enough to test them yet, he refrains from saying how likely his theory is to be RIGHT, and just works on developing the theory.
@@amihart9269 There are many papers (and even more when just doing a quick check) that make claims of ways to test MWI, but even if that weren't the case, my statement was about the ability to prove something else, so this comment still doesn't hold up. Nice try though.
@@ReivecS There are no ways to test MWI. You are appealing to vague amorphous papers which don't exist. Your comment was also about disproof and not about proof, and proof only exists in mathematics not in experimentation which can only show evidence with varying levels of confidence, and the Nobel prize cannot be awarded based on mathematical proof. Nice try though.
Love the video but honestly I'm a bit lost for this one. Maybe I'm overestimating how complicated this is, maybe I'm having a slow day, idk. Regardless, love the channel!
A few thoughts:
1) All we detect are particles. Waves are inferred from the detection of many particles.
2) Perhaps the manifestation of a particle is caused by a discontinuity in the wavefunction (it hits a screen, for example). Where the particle lands is governed by the Born rule. There are no hidden variables; it's just the rule.
3) Then there is no "measurement." It is just the universe doing what it does, and infrequently we label it a measurement.
4) There is only one universe, and it exists whether we look at it or not.
5) Many Worlds and the Copenhagen interpretation are both just silly.
The lack of intuition of this phenomenon in physics IS the problem. You’re operating on your macro scale view of the material world and while that is valid, it’s wholly incomplete. These interpretations can have real world implications such as the development of quantum computers. So yeah they’re not just silly, they’re actually important to what you consider relevant.
The distinction being made in 1) is between the wavefunction and the particles of the standard model, also pointing out that the wave nature needs multiple particle detections to be discerned. This is a pretty standard viewpoint. Although energy is required to detect particles, you also need a particle (or group of particles). That is, energy doesn't exist independent of all particles (or groups of particles). If you can detect energy without referencing any standard-model particle (or group of particles), I think the rest of us would like to know how. Your last sentence is just an ad hominem logical fallacy. @@schmetterling4477
Hey Matt, Is the principle of indifference somehow equivalent to the ergodic hypothesis? It sounds exactly like distributing the probability across all microstates equally, and then counting the number of microstates in our macrostate of interest. Thanks!
They're related in that both of these theoretical areas draw on probability theory. It is insightful to pull a bit on why each does: statistical physics and quantum physics both use the language of probabilities because we (users of the theory) have some necessary ignorance about what is going on. In SM this is because knowing the microstate of an Avogadro number of particles is impractical; in QM it is because a closed system, as described by the Schrodinger equation, is fundamentally inaccessible to external degrees of freedom (i.e. the closed system is no longer closed when we force a measurement device to interact with it). I think that starts to get at some of what your question intuits. As an aside, you might find some interesting follow-up on those thoughts by 1) reading a bit about the Quantum Bayesian (QBist) interpretation of quantum theory, and 2) reading about the Maxwell's Demon thought experiment from statmech, and its analogs in the quantum context. There are some reasonably accessible articles around that can get you started on the latter (Quanta magazine has some nice articles about Maxwell's Demon, for instance). Connect those dots and you'll have understood a great deal about your excellent question!
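To make the microstate-counting side of that connection concrete, here's a minimal sketch (the four-coin system and its labels are invented purely for illustration): assign every microstate equal probability, per the principle of indifference, then count how many microstates land in each macrostate.

```python
# Principle of indifference, statistical-mechanics style: every microstate is
# equally likely; macrostate probabilities come from counting microstates.
from itertools import product
from collections import Counter

n = 4  # four coin flips standing in for four two-state "particles"
microstates = list(product("HT", repeat=n))  # all 2^n equally likely microstates

# Macrostate = total number of heads, analogous to a coarse-grained observable.
macro_counts = Counter(state.count("H") for state in microstates)

for heads, count in sorted(macro_counts.items()):
    print(f"{heads} heads: {count}/{len(microstates)} = {count / len(microstates):.4f}")
```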
General relativity and quantum mechanics will never be combined until we realize that they take place at different moments in time. Because causality has a speed limit (c) every point in space where you observe it from will be the closest to the present moment. When we look out into the universe, we see the past which is made of particles (GR). When we try to look at smaller and smaller sizes and distances, we are actually looking closer and closer to the present moment (QM). The wave property of particles appears when we start looking into the future of that particle. It is a probability wave because the future is probabilistic. Wave function collapse happens when we bring a particle into the present/past. GR is making measurements in the predictable past. QM is trying to make measurements of the probabilistic future.
"The wave property of particles appears when we start looking into the future of that particle" Isn't that notion inherently completely contrary to general relativity?
Instead of observing the system from outside the box, observe as if you are the box. You will see that time is only real because of the presence of the mind ("the potential of the beginning and end is observed through its own deterioration"; time is infinite).
Assuming a closed system, and entanglement not persisting forever, can you apply the Gittins index, rather than the principle of indifference, to particles in superposition?
Thank you ... This is your best episode to date ... That the Born rule is just an axiom is something I could never accept at uni, despite my lecturer's assurances ... But how Born came up with this mystical rule without the many worlds theory is still a mystery to me (future video topic, maybe 🤞?)
I'm pretty sure he just tried it and it worked: squaring the amplitude produced the right predictions. So he basically derived it by throwing darts and testing the results.
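For anyone who wants to check that arithmetic, here's a quick numerical sanity check (a toy sketch, not a derivation; the three-outcome system and the seed are arbitrary): normalized amplitudes, squared magnitudes that sum to one, and sampled frequencies that match them.

```python
# Born-rule bookkeeping: squared magnitudes of a normalized state's amplitudes
# behave exactly like a probability distribution.
import numpy as np

rng = np.random.default_rng(0)
amps = rng.normal(size=3) + 1j * rng.normal(size=3)  # arbitrary complex amplitudes
amps /= np.linalg.norm(amps)                         # normalize the state vector

probs = np.abs(amps) ** 2
print("probabilities:", probs, "sum:", probs.sum())  # sums to 1 by construction

# Sampling outcomes with these weights reproduces the squared amplitudes.
samples = rng.choice(len(probs), size=100_000, p=probs)
print("empirical:", np.bincount(samples) / len(samples))
```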
Question: When we split the basis state with the higher coefficient so that our coefficients are all the same, do we use any official mathematical method to do that? For example, the Gram-Schmidt procedure to make the two bases orthogonal?
I think the world evolves according to the Schrodinger equation, but we can't see superposition because we are macroscopic beings, and there are too many particles interacting, causing the macroscopic world to approach non-superposition behavior as the number of interactions explodes with each new particle involved.
Brilliant video, thanks for sharing! Penrose has his own view too about when and where quantum decoherence happens, and he links it with quantum gravity; the Objective Reduction theory definitely deserves its own full video, I guess! :)
@@ThomasEmilioVilla Yes, Penrose is an absolute master of relativity. The strange thing is that quantum theory most likely derives directly from relativity, as well, so one should think that relativists should be able to get a quick fix on it, but most of them are guessing just as badly as the average crowd.
The question I’ve always had about the many worlds interpretation is, what if the coefficients aren’t the square roots of rationals? What if you have something like α=sqrt(π/4) and β=sqrt(1-π/4)? The argument of splitting up the worlds doesn’t seem to work in this case. Or is that not possible for some reason?
I second this, the principle of indifference requires a finite number of "cards" (here: multiverses) at each split. Unless we propose a mechanism that changes the number of cards between splits that is a common multiple of both splits without changing probabilities, it also means that the number of multiverses since the big bang is a finite constant. If that's true (and irrational probabilities are just approximations of a rational reality), I'd be curious what that number could be. And at some point in the future we will run out of multiverses to accurately approximate probabilities using distinct multiverses.
That would "just" mean that the eternal wave function has uncountable infinite granularity (i.e. uncountable infinite number of branches) and you are either in one of the uncountably infinitely many "α branches", or one of the uncountably infinitely many "β branches".
This is why I hate it when they describe Many Worlds as "branching". There is only one world, but states within it lose coherence with other states. Thanks to the uncertainty principle, there can't be a continuum of outcomes forever; they must get pooled into a finite number of distinguishable outcomes. Even when outcomes appear to be continuously distributed in reality, there are, in fact, a finite integer number of results that any measurement device (e.g. your eyes) can distinguish.
@@db7213 yeah, like, I get that the many worlds interpretation seems like the most parsimonious interpretation of quantum theory, as in, it is the one which gels the easiest with it's formalism and etc. But what it says about the world is just about as un-parsimonious as anything could ever be, specially if we go down this uncountably infinitely path. Another thing you could say about reality, outside the formalism of quantum theory, would be that quantum theory is just not complete enough to offer a great interpretation for the measurement problem. To me, that just sounds quite a bit less out there, you know?
@@user-sl6gn1ss8p So we have a way to accurately predict and describe what happens at the quantum level, but you choose to ignore it and instead claim that it's not "complete enough", even though there is nothing that's missing about it. What theory would then be "complete enough"?
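On the finite-branch worry raised above, here's a purely arithmetic sketch (all numbers invented for illustration): with N equal-weight branches, an irrational weight like π/4 can only ever be approximated by some whole count k out of N.

```python
# Best rational approximation k/N of an irrational probability, for growing N.
from math import pi

target = pi / 4
for n in (10, 100, 10_000, 1_000_000):
    k = round(target * n)  # best whole number of "pi/4-outcome" branches
    print(f"N={n:>9}: k/N = {k / n:.10f}, error = {abs(k / n - target):.2e}")
```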
Yes, but how can those many worlds interact with each other? For example, given the double-slit experiment, how can an electron interact with its other-world counterpart? They are in different worlds, after all.
The Schrodinger equation applies to the big wave function and changes the amplitude at each point/"world" depending on its difference from its neighboring points ("worlds"), so nearby points in the wave function influence each other.
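A minimal sketch of that neighbor coupling, assuming a free particle on a discretized line with hbar = m = 1 (a crude explicit Euler step for illustration, not a production integrator): each grid point's update depends only on its immediate neighbors.

```python
# Discretized 1D Schrodinger evolution: i dpsi/dt = -(1/2) d2psi/dx2,
# with the second derivative built from each point's two neighbors.
import numpy as np

n, dx, dt = 200, 0.1, 1e-4
x = np.arange(n) * dx
psi = np.exp(-((x - 10) ** 2)) * np.exp(1j * 2 * x)  # Gaussian wave packet
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

for _ in range(200):
    laplacian = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
    psi = psi + 1j * 0.5 * dt * laplacian  # neighbors drive each point's change

print("norm after evolution:", np.sum(np.abs(psi) ** 2) * dx)  # stays ~1
```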
At what point has someone made a compelling case for Pilot Wave? If the comparative inelegance alone doesn't turn you over to Many Worlds, then you still have to deal with non-locality and other contradictions with relativity. Many Worlds works with fewer axioms, never fails, and isn't ridiculous like Copenhagen. In an unbiased evaluation, Pilot Wave should be the inferior interpretation.
@@davidhand9721 Copenhagen is far less ridiculous than the Many Worlds interpretation. I'd buy the Boltzmann brain or simulation theory before Many Worlds. The idea that every quantum event essentially results in two distinct universes, and macroscopically an infinite number of universes, is one of the most ridiculous scientific theories.
Yep. Many Worlds is the null hypothesis. All evidence ever discovered supports it without fail, because it is the simplest model devised to fit all available evidence. The other real _theories_ of quantum mechanics "interpretations" each make verifiable predictions. For example, a Hidden Variables theory like Pilot Wave Theory predicts that there exist hidden variables. The discovery or verification of one of them would prove it correct and eliminate the null hypothesis. Spontaneous Collapse theories predict that from time to time, a wave function of a particle will spontaneously collapse into a point function with no external stimulus. If we ever observe such a collapse (and can eliminate all extrinsic causes to a significant degree of certainty), it will prove that theory and eliminate the null hypothesis. Until one of those other theories has its predictions verified to 6 sigma significance, the null hypothesis _should_ remain the default and be what is taught in schools. It isn't. Because humans. Mostly because Bohr was apparently quite the force of personality, and, more cynically, because it's fun to watch students' faces scrunch up in confusion as we teach them the same magical gobbledygook of paradox and contradiction that we were taught. For a century now. Go humans!
I have never seen a compelling argument for Copenhagen, Many Worlds, or Pilot Wave. All of them rely on arbitrary postulates that Einstein had already eviscerated.
My challenges to this: 1. Why do we need to square the amplitude? It seems we're still taking it as an axiom that probability equals squared amplitude. 2. If you say we're entangled with what we're observing, which is why we can't see superpositions: how and when does the entanglement happen? In a double slit experiment, why aren't we entangled with the photon that's propagating as a wave, until we try to measure the photon's position? It's still not clear to me that MWI explains anything; it just seems to add more complications. (E.g. conservation of energy: if measuring the outcome of a heads/tails experiment splits one world into two worlds, does the energy of each world get cut in half? Then why don't we notice the energy of the universe dropping exponentially over time? Alternatively, does each world have the same energy after the split, so the total energy across the multiverse is now double what it was? If yes, where did all the extra energy come from?)
It's not an axiom. You can derive the structure of quantum mechanics from a set of axioms that are basically the same as Kolmogorov's axioms for probability theory. It just happens that, algebraically, probability theory is not the only solution that satisfies those axioms. Another solution is standard quantum mechanics with complex numbers, and a third is quantum mechanics based on quaternions. So what are Kolmogorov's axioms really saying? They are describing the logical requirements for ensemble theories. Unitarity, in particular, is just the requirement that all the dynamic systems we throw into an ensemble are still there at the end when we evaluate (measure) the outcomes. That's a highly abstract axiom. Real physical measurements are not unitary because of detector efficiency and noise. The reason why we can't observe superposition is absolutely trivial: it doesn't exist in nature. Superposition is a property of the ensemble theory. WE put it in there with the assumption that all systems in the ensemble are statistically independent. If you were to perform very high precision experiments on actual quantum systems you would see this break down, because first-order correlations in actual repeat experiments on the same source/detector pair are not zero... they are merely small enough to be ignored for systems that are given enough time to "thermalize". In other words... quantum mechanics is a theory that makes some subtle assumptions about the world that are just not so... but people can't tell the difference between the reality of it and the math used to describe the reality. Telling the difference is only possible if you spend some time in an atomic, nuclear, or high-energy physics lab at the bench or accelerator. Only then will you start thinking about the difference between what you are doing and the math that you are using to model what you are doing.
1. The squaring is necessary since it's not just a single system in superposition, but both the receiving system (the detector) and the transmitting system (the laser). So if the laser is either A or B, and the detector is either X or Y, then when the two systems interact, you multiply them together (squaring). Analogy: there is never one coin being flipped, but two coins, so the final outcome has probability .5². Look up the Transactional Interpretation of Quantum Mechanics. 2. Some have proposed that entanglement is what causes time. As in, without entanglement occurring, no time would pass.
@@adamsmith7885 Time is a classical quantity, even in quantum mechanics. A single system is also never in superposition. Only the quantum mechanical ensemble can be in superposition. If you want a probability out of coins, then you have to flip an infinite number of them. :-)
I dunno anything about physics or math, but maybe the reason the many worlds interpretation doesn't have to rely on an axiom and is self-proving, while the others aren't, is that the concept of probability itself relies on the assumption that the world could be what it isn't, i.e., that something could exist outside of it. The split into different versions of reality naturally follows from this. Whether it's a mathematical tool or reflective of reality is another question, as is whether there's even a difference between the two possibilities.
Question: how does many worlds deal with the time dimension? A lot of the examples you come across when people explain many worlds contain strictly sequential events: experiment, measurement, observation. Time is implicit here. What are some more exotic possibilities?
From what I understand, time in many worlds is Einsteinian and deterministic: all possibilities exist already 'somewhere out there', and it's the observer that follows one 'predestined path' of events. Otherwise we'd have to assume that with every ongoing event reality splits into an infinite number of possible options, and that such a process is ongoing constantly. Personally I find both options equally stupid :)
Mathematically, time in QM is an external parameter; it just advances linearly and states evolve with it, everything is some function F(t). There is no time operator, no time quantization, etc. The Many Worlds Interpretation usually doesn't add anything new to this picture. Quantum Field Theory uses the math of Special Relativity, flat 4D spacetime, where different observers may have somewhat differently inclined time axes and use the Lorentz transformation to convert from one frame of reference to another. Interestingly though, and not many people think about it, time becomes a real puzzle in MWI: if we're inside some branch of a wave function, and the wave function is all there is, and its amplitude is not observable from inside, and the Schrodinger equation only changes amplitudes, this would mean that from inside our world we should not be able to see any changes at all, so how the heck we see time passing is a riddle.
Hence my question, really. I feel the whole premise of us observing and measuring the outcomes of an experiment implies some sort of sequential order by default. So really it is at the centre of the entire concept, and yet I haven't seen or heard much about it. Admittedly I'm not an expert, but I find it odd. @@thedeemon
Could it be that these are all artefacts of the limitations of our consciousness? We don't really understand consciousness very well, so perhaps a more advanced form of it could deal with seeing all possible worlds simultaneously. We are unable to observe all possible outcomes simultaneously, so we only do one at a time. Similarly, we may have invented the concept of time to help us see logic in the whole process. @@thedeemon
When observing the effects of radioactive isotope decay, the cat acts as a detector. The most important thing is that the wave function, the isotope superposition of 'after decay/before decay', is only a MATHEMATICAL TOOL for calculating the probability of seeing (at the moment of observation) the cat alive or dead. There is no such thing in nature as an isotope that is both "decayed" and "undecayed". If Wigner's Friend performs the first measurement after one half-life (and the cat is alive), then before the second measurement (after another half-life) he estimates the probability of seeing the cat alive at 1/2. Wigner takes his measurement only after two half-lives, and for him the probability of seeing the cat alive is 1/4. The two observers therefore estimate their probabilities differently, and their superpositions differently, i.e. their wave functions "are different" (at different stages of evolution); but if we have N copies of the setup with both observers, the statistical predictions of both will be correct (in 1/4 of the N setups the cat will be alive).
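A quick frequency check of the numbers above (a toy simulation; the sample size and seed are arbitrary): about half the isotopes survive one half-life, and about a quarter survive two, matching both observers' predictions.

```python
# Simulate survival through two half-lives; each half-life is a fair coin flip.
import random

random.seed(1)
N = 100_000
alive_after_1 = sum(random.random() < 0.5 for _ in range(N))
alive_after_2 = sum(random.random() < 0.5 for _ in range(alive_after_1))

print("P(alive after one half-life) ~", alive_after_1 / N)   # ~0.5
print("P(alive after two half-lives) ~", alive_after_2 / N)  # ~0.25
```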
For perhaps the second or third time, I’ve just watched a PBSST episode and was able to follow along the whole way without suffering from any induced headscratchers.
Because it's already debunked. It tries to present itself as the "inevitable" and "obvious" consequence of a single premise, yet the premise presented in the video is actually immediately abandoned in the original paper it references when it tries to derive the Born rule for an updated version of the premise which is incredibly arbitrary and clearly chosen just to derive the Born rule, meaning most of the video is an irrelevant red herring. See the paper _"Epistemic Separability and Everettian Branches: A Critique of Sebens and Carroll."_
Can anyone explain to me why we abandon principles like energy and mass conservation in the universe for a hypothesis in which, every single Planck time interval, gazillions of whole new universes have to pop into existence and look as if they have been there for billions of years???
MWI seems to pose no problem with conservation. It is just QM, and QM does not violate conservation. But I think we need an explanation for it, because it seems contradictory.
@@nmarbletoe8210 Thanks. I still think, yes, a universe can start out of quantum fluctuations, but generating a 13.8-billion-year-old universe at each quantum event...?
@@Pit.Gutzmann It does seem extraordinary. But it seems to be required even without Many Worlds. In Feynman's Sum over Histories, all possible paths combine to form the probability wave function. Each path is like a world in the Many Worlds. Since information requires some material carrier, perhaps this means the many worlds are material as well...?
Very interesting argument presented; it feels like the first time the MWI has had any weight to it beyond your standard interpretation hypothesis. The Born rule always felt like a hack: well, we can't use imaginary numbers, so let's just take the squares to get a real number, and voila, it works. This is the first time I've seen an actual possible explanation of why this works so well. How do we build from here on out, though, assuming we're onto something with this?
I never thought the justification of the Born rule was "to avoid complex values". If you have a bounded observable, some self-adjoint operator, it will be diagonalizable, and so you can take an eigenbasis and split whatever state into a sum of orthogonal eigenstates of the observable. And the Born rule just says that the probabilities of each observation are proportional to the squared norms of these components? Which, like, well, if the different lengths are to correspond to the probabilities in some way, and probabilities sum to one, then, well, the squared lengths of orthogonal vectors sum to the squared length of the sum of those vectors (which is 1 if we assume the state vector is normalized), so it all works out? So, like, it's a consequence of using the l^2 norm over the l^1 norm, and the l^2 norm is the natural thing to use, and also, like, rotations.
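The Pythagorean step in that argument is easy to check numerically; here's a tiny sketch with invented orthogonal components whose squared lengths sum to the squared length of the total state.

```python
# For orthogonal components, squared lengths add up to the squared length of
# the sum, which is why they can play the role of probabilities.
import numpy as np

v1 = np.array([1.0, 0.0, 0.0]) * 0.6                 # one component, length 0.6
v2 = np.array([0.0, 1.0, 1.0]) / np.sqrt(2) * 0.8    # orthogonal component, length 0.8
state = v1 + v2

print(np.dot(v1, v2))                                # 0.0: orthogonal
print(np.linalg.norm(v1)**2, np.linalg.norm(v2)**2)  # 0.36 and 0.64
print(np.linalg.norm(state)**2)                      # 1.0 = 0.36 + 0.64
```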
To make the quantum coin and the standard coin appear the same, we would take a picture of the standard coin flipping in the air, and that would count as the "check when flipped": the act of continuing to rotate while being observed is the superposition. Once it's on the back of your hand, the decision is already made, which is the way to measure the "headsness or tailsness" of the standard coin at 50/50. But midair, it could be either, depending on the angle you observe it from, and it will change after your observation too; still, you can figure out what it was in a single moment from a certain angle. Of course, your taking of the picture can also change the outcome if someone else takes a follow-up picture, because you have disturbed the system if your detector isn't completely passive.
Why are scientists ignoring the 'Mandela effect' phenomenon? It's real, can't be argued otherwise, and is an amazing sign of the multiverse at work.
Cool logo, congrats!
Please stop playing the xylophone when you're speaking
While you're at it, can you fix the bg music? It's too distracting from Matt's story. Or remove it completely.
@@DaleSteel your comment is the definition of the Dunning-Kruger effect at its finest 😆🤣
YES another person who enjoys having their brain expanded. It's awesome isn't it?
you trash chemists have no real understanding of these topics. you just compute
What's the most fascinating thing about quantum chemistry that you think a layperson ought to know?
@@prdoyle seconded
Good lord, this was one of the most mind-bending episodes and I've been following this channel for years.
I think I'm having a stroke now
I have a serious headache now.
Yes
@@smlanka4u are you a Hindu? 😊
@@barefootalien thanks
Yeah they've blasted through all the 'easy' stuff, I'm honestly excited to see how they'll present the truly mind-shredding topics to the general public.
It was hard to grasp because they barely explained it at all. The crux of the math is that (a) the total vector amplitude has to be 1 (which is why 1/√2 appeared out of nowhere when |T⟩ was broken into |T1⟩ and |T2⟩), and (b) length addition follows the usual Euclidean rules (I mean, special relativity as a theory has its own peculiar way to "add" lengths, and this is some abstract space rather than normal space, so we can't just assume the usual rules apply). I'm sure these are very basic quantum mechanics facts, but I can't immediately see why, because I'm a layman, and they didn't explain.
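For point (a), here's the arithmetic spelled out (a sketch reconstructing the episode's 1/3-vs-2/3 example by assumption): the 1/√2 is exactly the factor that keeps the total state normalized when |T⟩ is split in two.

```python
# Splitting |T> (amplitude sqrt(2/3)) into two orthogonal substates |T1>, |T2>
# without changing the physics forces each substate to carry a_T / sqrt(2).
import numpy as np

a_T = np.sqrt(2 / 3)              # tails amplitude before splitting
a_T1 = a_T2 = a_T / np.sqrt(2)    # equal split over two orthogonal substates

print(a_T1**2 + a_T2**2, a_T**2)  # both 0.666...: the norm is preserved
print(a_T1**2, 1 / 3)             # each substate now weighs 1/3, same as heads
```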
I love listening to this channel even when I don’t understand all the concepts. Over time I pick up bits and pieces that help me to understand future episodes just a little bit more.
This one had some great concepts that I understood, but when we start talking actual equations, my brain shuts itself off lol
Great work as always. Love this program
God I love how this channel can actually go into the math of physics instead of just using spoken simplifications. All the math in the quantum mechanics videos is one of the reasons I managed to get an A in Quantum Mechanics 101 in University.
As someone who specializes in literature, as soon as it goes into deep mathematics, I'm lost.
@@PrincipalSkinner3190 Same here as software engineer. I know what sigma, delta, and that fork symbol are, but some of those pipes don't seem to indicate a limit. It would be nice to make the math readable with Python.
My older brother has a daughter who is a bit of a physics nerd, she's 11, he asked me, the one who actually studied this stuff, if I could explain superposition to her. I said "yes, I can, but neither she nor I will understand what I'm saying"
"I may be capable of explaining it to you, or I may not be capable of explaining it to you. Until I attempt to explain it to you and you observe my explanation, neither of us knows if I can, therefore I am in a superposition of simultaneously able and unable to explain to you until you attempt to listen to me"
@@foxbruner There's a world in which I can, and a world in which I can't, shall we figure out which we're in?
@@shipwreck9146 Better. Thumbs up.
😂
Exactly why I am more of a GR guy. QM needs to figure out how measurement works before I try understanding the religious nature of the interpretations, since many are not testable.
When you add the integers 2 and 3 what "happens" before it collapses to become 5? I find the question of the measurement problem silly.
I like how I either understand this as something that's obvious, or I don't understand it at all, and I appear to be in a quantum superposition of the two and I can't know yet which one I'm in. 🤔
That is: until someone reads your comment and thinks "Yepp. This guy has got it!" 😁
The problem with superposition might be solved by more understanding of the quark mechanics.
I feel like our understanding of quarks is muddied because of our misunderstanding of superposition because it's a fallacy.
Me too, I'm wondering whether this is the kind of episode that the folks who liked reading the proof that 1+1=2 would enjoy.
Handy that someone put the effort in to actually put the proof on paper since it's the foundation of an entire branch of study, but a lot of people intuitively see it as a "well, yeah" statement without grasping the level of thinking that went into proving it's actually the case.
It's hard for me to tell though as I'm usually in the other camp and playing with tensor fields and wavefunctions in my head. Maybe Feynman has finally infiltrated my brain 😬
@@thedeadmoneyallstars Just shut up and calculate!
You can however split the worlds where you understand and those where you don't into stacks... It's easy sailing from there.
9:36 right about here, I moved from the world where the comma is inside the quote mark, to the world where the comma is outside the quote mark.
Wow!! Well spotted!!
This made me think of the Monty Hall problem. Information provided after choosing one of the doors partially collapses the wave function of where the prize is. And suddenly the two unchosen doors can be grouped into a single stack with a higher probability of the prize being there.
For a moment I thought he may make this exact comparison in the video.
Sound like Qbism to me
Information carries its effect into the future. :) Does anyone know in advance what will happen?
@@antm9771 Many Worlds and QBism are distinct; in QBism there is a single world and the wavefunction encodes subjective beliefs about it whereas in Many Worlds the branching wavefunction exists objectively and observers within it have "indexical uncertainty" that is treated in a Bayesian way using the principle of indifference
@@kisskaspeik5220 go kick Putin in the balls and then ask that question again.
My favorite science-based YouTube channel with one of the best science communicators. I love this channel!
The thought experiment and fictional philosophical argument between Newton's bucket and Mach was the hardest problem I've ever encountered.
I took a university-level physics course more than a decade ago, but I still get nightmares about a final exam on Newton's bucket.
I know it has something to do with either absolute motion or mopping.
Water molecules move relative to each other.
This is a really great explanation! I'm still trying to wrap my head around the argument, but I appreciate the willingness to tackle complex topics, the logical way it's laid out, and the fancy animations behind it all.
If I'm understanding the logic right, the proof goes something like this: the Schrodinger equation demands a vector space and state vectors of constant length -> state vectors must have magnitude one -> indifference means that states with the same coefficient must be equally likely -> a state with any coefficient can be rewritten as a sum of substates, turning the overall state into a sum of substates with equal coefficients -> the math works out to squares of coefficients = probability.
I think I get it....?
You did better than me, that's for sure.
"states with any coefficient can be represented as a sum of substates with equal coefficients" - Doesn't it seem too convenient that the state with the larger coefficient is allowed to be broken up into substates but the one with the smaller coefficient isn't, just to make 3 states with the same coefficient? Both states correspond to one macro observable. What makes one worthy of being broken up but not the other, except the motivation to force emergence of the born rule from many worlds? Not a rhetorical question. Genuinely wondering if I have missed something.
@@kanishkchaturvedi1745 That's one of the things that's tripping me up too, honestly
@kanishkchaturvedi1745 The idea is that you can always arrive at a consistent coefficient based on a common denominator. Like, let's say that the probability spread was 3/5 heads and 2/5 tails. These can be broken into 3 head worlds and 2 tail worlds out of a potential 5, all with a coefficient of 1/√5.
So we only break apart one side in Matt's example because that's all we need to do. He could just as easily have broken one side into 2 branches and the other into 4, all with a coefficient of 1/√6, and the math wouldn't care.
@@umbrascitor2079 This is fine for showing that the Born rule is a good mathematical tool for a physicist to calculate a probability from the Schrodinger equation. But it does nothing at all to explain how the branching of the wave function would physically put you in a tails-flip branch twice as often as a heads-flip branch. The card explanation reveals the problem. The only reason it worked in that case was because there were more cards in one stack than the other to begin with. The analogy for many worlds would be that there were more quantum states in the tails world than in the heads world to begin with. Exactly twice as many, in fact.
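For readers following this exchange, here's the common-denominator bookkeeping from a few replies up in code form (a sketch using the invented 3/5 and 2/5 example weights):

```python
# Amplitudes sqrt(3/5) and sqrt(2/5), split into 5 equal-weight sub-branches:
# 3 "heads" branches and 2 "tails" branches, each with amplitude 1/sqrt(5).
import numpy as np

a_heads, a_tails = np.sqrt(3 / 5), np.sqrt(2 / 5)
branch_amp = 1 / np.sqrt(5)           # every sub-branch carries this amplitude

print(3 * branch_amp**2, a_heads**2)  # 0.6 == 0.6
print(2 * branch_amp**2, a_tails**2)  # 0.4 == 0.4
# Counting equal branches (3 of 5 vs 2 of 5) reproduces the Born-rule weights.
```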
The way I've come to interpret it, observation simply IS entanglement. Spooky action at a distance, the quantum state being undefined until an observation is made: it all falls out from that. The reason we can never observe conflicting quantum state observations is because that very observation is entangling the states together.
Absolutely. When you observe a particle to see whether it has property P or not-P, your brain and the particle will become an entangled superposition of the following two states:
1. The particle had property P and your brain contains the memory of having seen it as having property P, and
2. The particle had property not-P and your brain contains the memory of having seen it as having property not-P.
And the "wave function collapse" that other quantum interpretations talk about, is simply the transition of going from an un-entangled state (prior to making the measurement) to the entangled state (post measurement).
@pbsspacetime, I LOVE this show. It's among the best on the internet. Dr. Matt is great, the topics are great, the coverage is great, and the discussion is great. I'm never disappointed to see a new episode come out. ☺️😄🥰
Great video! It answers a question that I had for years.
A small follow-up question: at 15:00, where you split the wave function into T1 and T2, you assume that they are orthogonal, and this becomes the underlying reason for the squaring of the coefficients. Where does this orthogonality come from? Naively I would've assumed that |T> is split into 0.5|T> and 0.5|T>.
You bring up a good point. I think the proof is flawed in that the states adding as orthogonal vectors is itself a consequence of the Born rule.
Yeah, I don't understand, because tails isn't orthogonal to tails. It's obviously orthogonal to heads. The whole point of superposition is that two or more different states exist together. It doesn't make sense to talk about a superposition of the same state. I think it's not a satisfactory solution but an ad hoc attempt to get proper probabilities from MWI.
When you dealt the cards in sets of 3 and then swapped one, I was very worried you were going to invoke the Monty Hall problem. 😂 So glad you didn't. Thank you for keeping quantum mechanics simple and easy to understand.
The Monty Hall problem is easily explained when looking at it like this: in order for the showrunner to pick a door that is wrong, he MUST know what's behind each of the doors. So at the exact moment he opens a false door, information is being shared. That information in and of itself gives the higher odds on the third door, because he knew and has now told you part of it.
@@gorgit Of course the Monty Hall problem is easily explained. The inversion joke wouldn't work if it was difficult.
@@gorgit Also, the trick to Monty Hall problem isn't that he knows which door is wrong, it's that him eliminating the wrong door changes the set structure. Intuitively, each door has a 1-in-set size chance of being right, which is correct, but by revealing a door after you pick, he is binding the 2 unselected doors into a set. People intuitively think of it as bare selection between 2 doors, and thus a 1-in-2 chance, but because the revealed door is selection entangled you are actually picking between your 1 door and the pair of other doors. So your original door is still 1-in-3 odds while the other doors collectively are 2-in-3 odds.
@@Merennulli I thought you were joking about QM being easy compared to the Monty Hall problem.
@@Merennulli Yes, but that's all because he KNOWS which door is wrong. That means information is being communicated by opening the wrong door. You are just going into the details of what information is actually being communicated.
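Since the thread drifted to Monty Hall, a quick simulation of the standard rules (the host always opens a losing, unchosen door) confirms the 1/3-vs-2/3 split described above:

```python
# Monte Carlo Monty Hall: compare staying with the first pick vs switching.
import random

random.seed(0)
trials = 100_000
stay_wins = switch_wins = 0
for _ in range(trials):
    prize, choice = random.randrange(3), random.randrange(3)
    opened = next(d for d in range(3) if d != choice and d != prize)     # host's door
    switched = next(d for d in range(3) if d != choice and d != opened)  # the other door
    stay_wins += (choice == prize)
    switch_wins += (switched == prize)

print("stay:", stay_wins / trials, "switch:", switch_wins / trials)  # ~1/3 vs ~2/3
```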
The bundle idea reminds me of Feynman's path integral formulation.
This episode is nuts!!! Crazy. One of the best episodes in a while, and that's saying something!
David, you're a poet.
I have never seen a physicist describe the universe so eloquently and poetically as you do.
Thank you for these videos.
Keep them coming.
Thanks, but the physicist in these videos is named Matt.
Fantastic job of explaining a complex concept! This channel is usually this way, but, to me, this one stands out even more. Thanks!
I follow a decent number of science channels, but am no scientist myself. That said, so far the most compelling quantum theory I've found is superdeterminism... The fundamental question of why the wave function results are how they are hinges on the answers to two points: locality and measurement independence. And so far, I've found very little compelling reasoning for why our measurement doesn't alter the state; in fact, as little or less than for why quantum physics would be non-local. What is certain is that it can't be both local and independent of measurement... but we measure speed by altering speed, bouncing something of a known speed off of the object we try to measure. We measure energy by funnelling it into a controlled process that consumes some. And the more complex the measurements, the more invasive they tend to be.
Agreed. You find the same future-dependent input in superdeterminism and, in classical physics, in the principle of least action. No necessity of measurement independence.
Y'all should do a video on the Wigner's Friend thought experiment to really finish up the many worlds interpretation explanation. One might question whether those branches are real given that we can't interact with them, but the point is that notions of what is real are observer-dependent! An external observer can interfere our branch with a branch we aren't entangled with, and so end up with different (relative to us) observed probabilities for the same experiment.
They already did that episode.
y'all should play the banjo more
What does it even mean to say that "what is real is observer-dependent"? This is an abuse of philosophical terminology. The term "realism" refers to an objective reality that is independent of the observer. "Observer-dependent reality" is nonsensical.
This video took a really complex and tricky topic and explained in a way I genuinely understood. Thanks!!
I love the PBS vids, Matt! Just one remark: the outro music is loud and sudden. I so enjoy getting into a state of musing about the deep things you explain, and the final music breaks that state with the heavy drums and horn sections.
I absolutely love these videos. Please please please do not ever stop! 😊
There's no reason to stop because no conclusion is ever reached. This ensures the jobs in some realities are secure.
Finally, a channel for those who've covered the basics and are wanting more. Thank you PBS
It's interesting that the framework for this thought experiment and Einstein's one for GR both require the observer to not know information about the exterior world to induce a symmetry (e.g. not knowing if you're in an accelerating rocket or in a gravity field). I don't think I can wrap my head around this observation, but I do think that it speaks to something about the information of the systems and that may be tied to the Mach principle.
getting rid of unnecessary data is a great way to better understand your situation - it's just good advice, not just in physics.
An inability to measure a difference is another definition of a symmetry. It shouldn't be surprising that we have gained so much knowledge by considering the innate symmetries of nature with mathematical rigor. In combination with Noether's theorem, symmetries are also conservation laws. Thus, situations which we cannot discern are the source of all deep physical principles.
An accelerating rocket is a 'gravity' field. There is no difference between the earth accelerating you or a rocketship accelerating you. One is accelerating you in a curve, the other a more or less straight line.
The laws of physics apply equally in all frames of reference. It doesn't matter if the frame is vertical, horizontal or curved. This is what Einstein and his flat Earth followers don't understand.
Great episode; anyway, I'm not convinced at all by the splitting-state argument. To me it looks like a trick to force the final explanation of the Born rule.
It indeed is, see the paper _"Epistemic Separability and Everettian Branches: A Critique of Sebens and Carroll"_ by Richard Dawid and Simon Friederich, which is the paper this video is based upon, which precisely demonstrates that the derivation of the Born rule from this method requires arbitrary assumptions that are clearly chosen just for the purpose of deriving the Born rule and would not be chosen as assumptions otherwise.
@@amihart9269 Thank you! I'll read the paper
*_hits bong_*
"Okay, so hear me out."
But seriously, this video is amazing. I love physics so much.
I like how they even incorporated a sense of being "in the wave function" at 9:37-9:38, where you can be either in a world where the text bubble says 'degrees of belief,' or in a world where it says 'degrees of belief',
I saw that too!
This video was a miss for me. It reminded me of philosophy texts that meander on one topic for dozens of pages, using progressively deeper details and options, then ultimately conclude with no new knowledge gained.
As a long time Sean Carroll enjoyer, this was a great episode!
I always look forward to new pbs spacetime videos
oh man, this is one thing Sean Carroll really skimmed over in "something deeply hidden". I've been wondering about it ever since, thank you!
Sean Carroll is simply pulling your leg with MWI to get book sales up. Forget about him. There is nothing he can teach you about quantum mechanics.
Hi PBS. If memory serves, in his PhD thesis Everett constructed his many-worlds theory so that its results are mathematically equivalent to other accepted (at that time) forms of quantum mechanics. Thus, whether you use it to solve a specific problem or not, is largely a matter of taste, not a matter of its correctness, because of its mathematical equivalence to the other interpretations of QM. Ed Salpeter suggested that you should adopt the QM interpretation that helps you take the next step in your physics research. [For example, I guess Everett thought his many-worlds interpretation of QM would lead to unification of QM & GR, which it obviously has not, so far.]
I think it's because classical physics has the principle of least action, in contrast to quantum mechanics, where every path is possible. I think taking measurements applies a gauge (redundancy/symmetries/group) to the quantum system. Any measurement or gauge respects some symmetries, for example translation in space or time. By Noether's theorem, the symmetries result in conservation laws, in keeping with the more general principle of least action.
Feynman’s path integral formulation of quantum mechanics reduces to classical mechanics with the addition of the principle of least action. A single trajectory or state is only observed because it is fixed by the large stationary action and doesn’t cancel out.
The many worlds idea is possibly an artifact of there being redundant symmetries such as time invariance. The “guiding equation” of particles in pilot-wave theory is the stationary action that also has locality.
I thought the principle of least action applies to both quantum and classical theories
When summing all possible paths, the principle of stationary/least action is what you get as a consequence, for free, it's not like you add it in explicitly. Classical mechanics with its least action principle comes from this, from the quantum world where we sum over all paths.
@@thedeemon This!
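A toy stationary-phase sketch in the spirit of these comments (every number here is invented for illustration): over a one-parameter family of paths with action S(c) = S_cl + k·c², the wildly oscillating contributions far from the classical path (c = 0) cancel, and the sum collapses to the stationary-phase value sqrt(π·hbar/k).

```python
# Sum exp(i*S/hbar) over a one-parameter family of paths; compare the result
# with the stationary-phase prediction for the magnitude, sqrt(pi*hbar/k).
import numpy as np

k, S_cl = 1.0, 5.0
cs, dc = np.linspace(-50, 50, 2_000_001, retstep=True)

for hbar in (1.0, 0.1, 0.01):
    total = np.exp(1j * (S_cl + k * cs**2) / hbar).sum() * dc
    print(f"hbar={hbar}: |sum over paths| = {abs(total):.4f}, "
          f"stationary-phase value = {np.sqrt(np.pi * hbar / k):.4f}")
```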
Excellent video as always.
I really wish Sean Carroll would see this episode.
He also does the Mindscape podcast, highly recommended if you're into quantum mechanics and the deeper questions in physics.
I'm pretty sure this episode is based on Carroll's paper about deriving the Born rule in many worlds.
Edit: it's not. It's based on the paper linked in the description.
@@narfwhals7843 the paper in the description is based on Carroll’s work (it says so at the end), so you were right
It's based on Carroll's paper, which is incredibly unconvincing, and the circular nature of it is already obvious in this video. Notice at 15:12 that if you apply the "principle of indifference" (what Sean Carroll actually called the "epistemic separability principle," i.e. ESP) you would get the wrong answer, so you have to intentionally rearrange the mathematics in order for it to give you the right answer.
This is because Carroll does not rely on the ESP but a derivative he calls ESP-QM, which adds on a postulate that the probability one should assign to being in a certain branch depends on the reduced density matrix that describes the combined system of the agent and the detector... but density matrices give you Born rule probabilities, i.e. ESP-QM implicitly assumes the Born rule. Carroll's paper assumes the Born rule in order to derive the Born rule.
I recommend Richard Dawid and Simon Friederich's paper _"Epistemic Separability and Everettian Branches: A Critique of Sebens and Carroll"_ which goes over this, ultimately showing that an assumption of ESP does not entail ESP-QM without additional assumptions, and these assumptions are arbitrarily chosen specifically to reproduce the Born rule.
James Hartle's paper _"What Do We Learn by Deriving Born's Rule"_ also points out that you cannot derive the Born rule without additional assumptions. If the Born rule is not assumed as a postulate, you need some other postulate from which it can be derived. Carroll's ESP-QM really is of the same type as the "quantum non-equilibrium" postulate in Bohmian mechanics, that is to say, yes, you can derive the Born rule from such a postulate, but it is clear the postulate is chosen _specifically to derive the Born rule,_ and thus the postulate is just as arbitrary as just assuming the Born rule as its own postulate.
@@amihart9269 I think you’ve misunderstood a few things here. The Principle of Indifference (POI) and ESP are two different ideas. The video skips through some of the details that are shown in the doc linked in the description, but the basic idea is that you can show that the POI is valid when branches have equal amplitudes, and that it is invalid when they do not. So when you encounter a wavefunction with unequal amplitudes, you must re-express it in terms of equal amplitude terms to use POI and get the correct probabilities. The argument for when POI is valid and when it is not does *not* require knowing the Born Rule in advance, so this is not circular.
@@skewlkid521 You say they are different but fail to show how they are different, and yes, you have to break them up in a specific way precisely to get the Born rule. The validity of the Born rule is a starting assumption, and arbitrary postulates are added to get to this conclusion, but those postulates have no justification beyond the fact that you can derive the Born rule with them. If we discard the Born rule, why should we believe anything about your mathematical operation for breaking up the wave function?
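For anyone wanting to see the ESP-QM point concretely, here's a bare-bones partial-trace computation (toy state, invented amplitudes) showing that reading probabilities off the reduced density matrix's diagonal already encodes the Born weights.

```python
# System-detector state alpha|0>|d0> + beta|1>|d1>; trace out the detector and
# the reduced density matrix's diagonal is |alpha|^2, |beta|^2 by construction.
import numpy as np

alpha, beta = np.sqrt(1 / 3), np.sqrt(2 / 3)
psi = np.array([alpha, 0, 0, beta])          # basis: |00>, |01>, |10>, |11>

rho = np.outer(psi, psi.conj())              # full density matrix
rho = rho.reshape(2, 2, 2, 2)                # indices: [sys, det, sys', det']
rho_sys = np.trace(rho, axis1=1, axis2=3)    # partial trace over the detector

print(np.real(np.diag(rho_sys)))             # [1/3, 2/3]: Born weights appear
```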
This was quality content. It's been a long time since I watched one of these PBS vids because I thought they'd turned a bit clickbaity, but this vid was right on the money.
Please never stop teaching us all more about the universe around us and thank you for all you do.
Mathematical physicist here. I'm curious what you think about this. I've been looking at this problem for a while now.
As far as I can tell, the argument you've presented here was first championed by Sean Carroll. What it seems to suggest is that superpositions decohere into the environment in different ways depending on the amplitudes of the basis states in the superposition, which naturally implies that naïve branch counting is wrong. Those two substates of tails you constructed at 14:24 would correspond to two different ways the state could decohere into the environment so that the environment measured tails.
That's fascinating; however, the big issue I have with this argument is Carroll's hand-wave for states with irrational amplitudes. He does a proof for coefficients from √ℤ, up to phase differences. But then he argues by density that the same holds for any complex-valued coefficients. What I am suspicious of is this: if we had a large energy superposition, each of those amplitudes would vary with time. So the time evolution changes how many ways any given eigenstate decoheres into the environment as time goes on.
This seems nontrivial to me because the number of substates we'd need per eigenstate wouldn't be continuous with respect to small perturbations in the amplitudes. The time evolution would mess up the Hilbert space something fierce; we wouldn't know, by the uncertainty principle, when a measurement would be made, so we'd have no way to be certain how to pull the trick you use at 14:24. So, I am skeptical if the total wave function satisfies the Schrödinger equation throughout the entire decoherence process.
You're not looking at the experiment through the eyes of a quantum observer. While the outcome of an experiment can appear to be along a continuum logically, the uncertainty principle ensures that many states along that continuum share a common outcome. No measurement apparatus, including our eyes and our memories, can discern an infinite number of measurements. Therefore, the number of outcome states is always an integer at some level.
Thanks. I've had a long-standing intuitive sense that MWI doesn't handle irrational numbers consistently, and it's nice to know which resources to turn to (Carroll's paper) if I want to develop a more rigorous belief.
@@davidhand9721 Let's say the wavefunction for the coin is 2^(-1/4)|H> + √(1-1/√2)|T>, whose coefficients square to 1/√2 and 1-1/√2, which sum to 1. How would many worlds tell us the probabilities of obtaining each measurement?
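With the corrected coefficients above, the arithmetic looks like this (a sketch; the point of the question stands, since no finite count of equal-weight branches reproduces these irrational weights exactly):

```python
# Squared magnitudes for the irrational-amplitude coin state.
import numpy as np

a_H = 2 ** (-1 / 4)                 # amplitude of |H>
a_T = np.sqrt(1 - 1 / np.sqrt(2))   # amplitude of |T>

p_H, p_T = a_H**2, a_T**2
print(p_H, p_T, p_H + p_T)          # ~0.7071, ~0.2929, 1.0
# Equal-branch counting can only approximate 1/sqrt(2), never hit it exactly.
```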
It's nice to see that you are paying due attention to the many-worlds interpretation. Born's rule indeed beautifully aligns with the fact that subsets (bundles) of branches after a quantum event have different cardinalities. The probability of ending up in a rare bundle may be infinitesimally small. But if you find yourself there, consider what happened to your clones in more probable bundles. Perhaps this extraordinary bundle is the only favorable one for you?
Man, the Jason Bourne movies keep getting more complicated.
This is a tough topic, but so well done. Thank you.
The moment you spoke of 1/3 compared to 2/3, I was like "but that's just a macro state made out of many small branches with the same weight". I'm glad I finished the video :P
Anyway, the only interpretation that makes any sense to me is the many worlds one.
For me the many worlds interpretation is by far the most intuitive, and it seems to match exactly what all the weird properties and math of quantum mechanics are telling us.
I fully disagree that the MWI derives the Born rule. The argument used in this video simply assumed the Born rule from the start in an indirect way.
The main issue is the fact that, prior to considering the importance of the coefficients, you allow yourself to have different partitions of possible states/outcomes. You can have a partition of H and T, or H and T_1 and T_2, or some other partition of possible states. Since there are different partitions with different numbers of outcomes, you can't apply the indifference principle to unambiguously assign probabilities.
What you need to do to apply the indifference principle is to decide the "correct" partitioning of states such that there is an unambiguous number of possible states and thus unambiguous (equally distributed) probabilities.
The thing is, the argument in this video doesn't _derive_ the "correct" partitioning, it _assumes_ that the "correct" partitioning is the one where all of the states have coefficients with equal magnitudes. This assumption is just the Born rule in disguise: states are only equally probable when their coefficient magnitudes are equal.
Nothing in the argument nor in MWI derives this or says why this has to be the case; it's just an assumption. Why is the "correct" partitioning to apply the indifference principle the partition with equal coefficient magnitudes? This argument does nothing to address this fundamental question, it only assumes what it was trying to "derive" in the first place.
Ok but what about the transition from countable cards to uncountable probabilities?
Who says the probabilities are uncountable by definition?
I think it would have been helpful in this explanation to explicitly state that this is because the Everett Interpretation necessarily has *symmetry* built into it, whereas other interpretations don't and merely assert the existence of probabilities instead of having them arise naturally.
What symmetry are you referring to?
Perhaps you mean linearity. The reason is that the Schrodinger equation is linear so any operation applies equally to all terms of it.
@@d95mback I mean symmetry and for exactly or the reason you stated. My only issue with the video is that this symmetry is implicitly stated amongst a lot of talk about vectors and equations and so on. Not that any of it is wrong, it certainly isn't, but that it's easy for a viewer's eyes to start glazing over. It might have been better at one point to just say something like "Forget about the weird equations that only some people can read/understand, this is fundamentally an issue with symmetry" and explain why in a more easily digestible manner.
There is a symmetry in MWI between the number of outcomes which are possible and the number of outcomes which happen. The outcomes are not symmetric individually, obviously; but they do, by definition, respect a symmetry as a whole.
This is not true for CI, for instance, because it requires that only one outcome happens regardless of the number of possible outcomes. It requires that symmetry is broken but without any justification for what causes the breaking.
I have an issue with the card hypothetical. It assumes no knowledge of the joker's starting position. In reality, rather than in math, that's important. You can say it's shuffled, or goes out of the observer's view, to go into a quantum state. But if there is an outside observer that "reads" the joker the entire time, the odds could be worked out.
Laplace's demon
Isn't any non-quantum interaction counted as an observation of the particle? If so, wouldn't there be an uncountable number of "worlds" generated at any point with an uncountable number of different possibilities? If the entire universe is constantly entangled wouldn't that undermine relativity? Would it undermine the word entanglement if an infinite number of them happened to all particles all the time? I love your channel btw, this is what PBS should do. Thank you so much for your work!
Yes. The MWI is ridiculously extravagant.
what do you mean by a non-quantum interaction? every interaction is at some point quantum if you zoom in enough, no?
I don't think it would have anything to do with relativity. Information can't travel ftl in MWI.
Also MWI is both local and deterministic so problems with entanglement don't really appear in it. All of the quantum weirdness is stuffed into the MW part of MWI.
@@AwfulnewsFM MWI is only "local" if you redefine "locality" to be something entirely different. People who claim MWI is "local" remind me of compatibilists who come into the "free will" vs "determinism" debate and just say, "hey, why don't we just change the definition of 'free will' so the two are compatible?" I mean, sure, you can do that, but it doesn't really end the debate but just kind of misses the entire point of the debate in the first place. The same is true of silly MWI claims that it is "local."
MWI is only "local" in an extremely intangible and unverifiable metaphysical sense of locality in terms of the multiverse, but nobody who cares about locality actually cares about locality as some vague philosophical principle. Most of physics was nonlocal up until Einstein. Newtonian gravity is treated as instantaneous and the speed of light was also treated as instantaneous. The desire for a local theory arose very specifically in the 20th century with the development of Einstein's General Relativity where not only is the theory entirely local and these speeds are actually finite, but locality is baked deeply into the theory, i.e.if you could travel at superluminal speeds it would lead to nonsense predictions like distances contracting to imaginary values.
MWI claims about it being "local" entirely miss the point of the locality debate. Locality isn't desirable as some vague abstract principle, but very specifically due to a desire to one day make QM compatible with GR. That means the only locality that matters is locality in the GR sense. Any other form of "locality" is irrelevant and there is no reason to care about it. MWI is not local in terms of GR as it is only "local" in terms of its multiverse but still contains _apparent_ nonlocality when a single "branch" is considered in isolation.
See the paper _"An Elementary Proof That Everett's Quantum Multiverse Is Nonlocal: Bell-Locality and Branch-Symmetry in the Many-Worlds Interpretation"_ by Aurélien Drezet.
Also see the paper _"Einstein, incompleteness, and the epistemic view of quantum states"_ by Harrigan and Spekkens which provides a no-go theorem showing it is not even possible to have a purely ontological interpretation of quantum mechanics that is local.
MWI is only "local" in a vague sense which is only defined in terms of itself, which misses the entire point of why people actually care about locality and debate whether or not it is real.
@@amihart9269 You made some good points here, but there's an important clarification needed:
All versions of (unmodified) QM are "weakly" nonlocal, because the light-cone structure (which defines causality and temporal order in both special and general relativity) still constrains how signals (particles, anything with four-momentum) are transmitted in a quantum world.
It's not strong nonlocality.
This clarification is important, because many people still confuse quantum nonlocality (non-separability) with strong nonlocality (which exists only in science fiction...).
But you're right, of course, that MW is indeed weakly nonlocal, like the other interpretations.
(Bohmian mechanics has a more serious problem with relativity, because it breaks Lorentz invariance.)
Very good episode! I love the topics of entanglement and QM interpretations.
I often think about how quantum strangeness could be like the allegory of the cave. If everything has some form of quantized state at a fundamental level, could it be that the macroscopic universe is an emergent phenomenon (sort of like a program running on a computer) from which we can only see shadows of the "hardware" that determines those states? After all, any problem can _look_ probabilistic if you can't access all the information, and as an emergent system of information, our universe would have no influence or access to information from the system giving rise to it, at least beyond what the quirks of this parent system would give away (ie: all the weird quantum effects that can't be fully understood without solving the measurement problem).
Not claiming anything here, just sharing my musings, I guess. Forgive me if this reeks of Tegmark, because I'm a fan hahaha.
The observed violation of the Bell inequalities is most often understood as a proof against the "hidden variable" scenario you've described, wherein there is some deeper system to which we are denied access that would make perfect sense of the quantum world. At least under Copenhagen, no hidden system obeying locality can account for quantum oddity. If you can tolerate non-locality, I guess that's fine, but Many Worlds offers the most sensible and elegant explanation: the hidden variable is the observer, who is just as quantum as the system they observe.
@@davidhand9721 Why, if I may ask, is locality so universally agreed upon as an essential characteristic for a successful model of QM? What is it about a local universe that is so much more attractive to physicists than the idea of a non-local universe?
It's a base philosophical assumption about how the world works, i.e. one so far back in the chain of assumptions that physicists really don't want to ditch it too readily. From any experiment we can ask whether the result is understood or whether we should go back and change our major basic assumptions, and this is a very major one, like causality. The results of any experiment give rise to a debate about which part of the holistic set of theories and observations we can change. Usually major philosophical assumptions are taken as bedrock, so we infer that something is wrong in observation or theory instead (although we haven't found that, so we're still looking; didn't it take a while to accept that an unseen planet didn't cause the anomalies in Mercury's orbit, and that we instead needed a whole new theory of gravity?).
@@davidhand9721 Someone should've told John Bell that, I'm sure he would've been interested in knowing that "Bell inequalities" disprove "hidden variables"... lol
What you say is literally just an old wives' tale. Bell's theorem has no relevance at all to ruling out hidden variables, which is why John Bell himself was a major advocate of, and worked on developing, hidden variable theories (see his paper "On the impossible pilot wave"). All Bell's theorem shows is that if there are hidden variables, they would have to be nonlocal (setting aside superdeterminism).
Both MWI and Copenhagen are equally overcomplicated interpretations that rely on an arbitrary and unfounded assumption that the statistics used in quantum mechanics differ fundamentally from the statistics used in every other science: that a particle having a 50% chance of being in one state and a 50% chance of being in another is not due to an epistemic restriction relative to the observer, but that it somehow actually exists in both states simultaneously.
This assumption is entirely unfounded but is beloved by the mystics who want to believe in grand multiverses or collapsing wave functions.
Personally, I think the simplest and most intuitive way to think about quantum mechanics is not that there is some underlying hidden information we can't access that prevents us from predicting the outcome exactly, but that the probabilistic nature of QM is instead caused by precisely all the things we _can_ access. From such a point of view, it is not correct to even interpret classical mechanics as causally deterministic, because it is only causally deterministic in an ideal sense where you can fully isolate a system from its environment, which is not possible in reality. In reality, there are always some deviations, some _error,_ between the idealized predictions of classical mechanics and the actual measurement results.
Typically, we think of the idealized mathematical laws as describing the system "in perfect isolation," but you could instead consider the idealized system as what Einstein had called an "ensemble." An ensemble is not a system in isolation, rather, an ensemble is formed when you repeat an experiment with the same initial setup an infinite number of times. As you continue to repeat the experiment and average out the results, the errors should cancel out, and you'll _converge_ towards the idealized mathematical law.
If you think about classical mechanics this way, then there is not a fundamental distinction between the probability in QM and in classical mechanics. The wave function represents an ensemble; the Born rule gives you a probability distribution that the system will converge towards if you repeat the experiment an infinite number of times. If you only carry out the experiment once or a small number of times, you get deviation between the idealized prediction and the actual outcome, but this deviation is not caused by some underlying brand-new phenomenon we haven't discovered; it is caused precisely by all the phenomena we already know about: the entire universal context in which the experiment is conducted. You can't account for this even in principle, so there is no way you'll be able to predict the outcome precisely, but the same is true in classical mechanics as well.
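As a toy illustration of the ensemble picture (my own sketch, nothing from the video): simulate repeating a two-outcome experiment and watch the relative frequency converge on the Born-rule value |α|² as the single-run "error" washes out.

import random

alpha_sq = 2 / 3  # Born-rule probability of outcome A in this toy setup

for n in (10, 1000, 100000):
    hits = sum(random.random() < alpha_sq for _ in range(n))
    print(n, hits / n)  # relative frequency converges toward 2/3 as n grows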
You have to get rid of the conception in your head of trying to imagine that idealized mathematical laws have any relationship to things "in perfect isolation," because things in perfect isolation can't exist, and consider how these idealized mathematical laws are actually produced if we can never isolate anything. Usually we can isolate things _enough_ so that the error is small enough that we can pretend it's possible to imagine some truly isolated system where that error doesn't exist, but when you get down to the atomic scale this mental trick doesn't work; you have to actually acknowledge that there is always error and that it stems from deviations from the ideal due to the fundamental inability to isolate systems from their environment.
If you think about it that way, then the "measurement problem" is solved because there is no measurement problem in the first place. The measurement problem arises from a belief that particles in a sense "spread out" and take all possible paths described by the wave function, but we only ever measure a particle taking a single path, and so you have to explain how the particle seems to come back together into a single point whenever you make a measurement.
Copenhagen claims things "spread out" as a postulate and then needs another postulate to explain how things "collapse" back together again. MWI denies things come back together but maintains the postulate that things "spread out," so it is forced to declare there's a grand multiverse composed of an infinite number of parallel universes. But the view I'm presenting here from Blokhintsev denies that particles "spread out" in the first place, so there is no need to bring them back together, and thus there is no "measurement problem." Particles only ever take a single path, but the path does not have a singular essential cause; it is instead caused by its universal context. All the _particulars_ of its context are knowable in principle, but they work together to overdetermine the outcome of the experiment, so you need to know all the _particulars_ in order to predict the outcome, and it is not possible to know the entire universal context simultaneously, so you will always have some error and deviation.
A "correct" interpretation of QM from this point of view would thus be one that is both epistemic but also does not contain hidden variables because there is no essential cause that the error could be reduced down to, in the same sense you do not need to add hidden variables to classical mechanics to explain deviations that can be attributed to error. The error arises from epistemic reasons due to universal context which the observer did not (and could not) fully account for.
To solve the measurement problem, we theorize that the whole universe permanently copies and splits itself for any quantum state within the universe that gets observed. And each of these "new" variations of the universe is doing the same for all possible variations. We had better not ask where all the energy these universes contain comes from. That is not exactly what sails under my boat of Occam's razor.
Totally agree.
The measurement problem is a pseudoproblem.
The measurement problem arises from the measurement postulate: the Copenhagen interpretation claims as a postulate that particles "spread out" into all possible states, but if they do this, why don't we ever measure particles in all possible states? Copenhagen fixes this by adding on the measurement postulate to bring everything back together, to "collapse" the "spread out" state into a determinate state. MWI tries to "solve" the measurement problem by deleting the measurement postulate, by saying there is no "collapse," but then they're left with reality itself spreading out into a grand multiverse which we can't even observe.
Einstein had correctly pointed out a century ago that all this confusion comes from the postulate that particles really do "spread out" into all possible states in the first place. If you drop this postulate, there is no need for a measurement postulate either, because nothing ever "spreads out" that needs to be "collapsed," and you don't get grand multiverses, because, again, nothing spreads out at all. The probabilities derived from the Born rule are interpreted as a sort of "best guess" of what path the particle took, not that the particle literally spread out into all possible paths; it only ever takes one path.
Einstein had already given a lot of great arguments for why this "spreading out" postulate is absurd and why there is nothing in quantum mechanics that provides a convincing enough reason to adopt it. Einstein's biggest problem with it is that it inherently requires you to reject the existence of an objective reality that is independent of the observer. He gave an example of atomic decay. As time passes, quantum mechanics predicts an increasing probability that the atom has decayed, but it never hits one hundred percent. This means, if you interpret the wave function as ontologically real, you have to believe the atom really never does decay until you look at it, and so you cannot even say the atom has a real state independent of the observer. (Eugene Wigner also pointed out in his "Wigner's friend" thought experiment that under certain conditions two observers would come to experience entirely different realities, and thus you could not say there is an objective reality independent of the observer from a purely psi-ontological standpoint.)
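For concreteness (my own toy numbers, assuming ideal exponential decay with half-life T): the decay probability is p(t) = 1 - 2^(-t/T), which approaches 1 but never reaches it, so a psi-ontic reading leaves the atom never fully "decayed" until observed.

T = 1.0  # half-life, arbitrary units
for t in (1, 2, 10, 100):
    print(t, 1 - 2 ** (-t / T))  # 0.5, 0.75, ~0.999, ... but never exactly 1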
Believe it or not, Schrodinger had initially put forward his "cat" thought experiment as a way to criticize Copenhagen, pointing out that if individual particles can "smear out," then there's no reason such a particle could not trigger a chain of events affecting macroscopic objects, like a cat, and then you'd have to claim the cat is "smeared out" as well, which is an absurdity, and so it should be rejected. Yet people these days repeat Schrodinger's thought experiment as if he were arguing _in favor of_ believing "smeared out" cats exist, which is the opposite of what he was saying.
Even John Bell is often misrepresented as saying he "proved hidden variables don't exist" and therefore objects really don't have real states until you look at them and are fundamentally indeterminate. John Bell did not believe this and also criticized Copenhagen in his paper "Against Measurement" because of its measurement postulate which makes it incapable of scaling the theory up to large scale systems. While Bell did not take on Einstein's approach, he became infatuated with Bohm's "pilot wave theory" and even contributed to it in his paper "On the impossible pilot wave."
@@amihart9269 thanks a lot for this valid explanation👍. Hope some other guys read it too.
One of the very few episodes of ST where I lost the plot early on and never got it back 😂. Really commenting though to ask if Matt responding to comments is ever coming back?
Yeah, you know, I've noticed a trend of interactivity on YouTube decreasing in some niches/genres, science communication being one of them. The Royal Institution has seemingly stopped doing Q&A sessions after lectures, or at least has stopped recording and publishing them on their channel... SpaceTime doesn't seem to do comment responses much if at all anymore... I wonder if engagement with those kinds of content is just too low according to analytics and metrics... 🤔
I for one switch off after the main topic is done and dusted. I suspect I'm not alone. Which would count as only partially watched in the stats.
wow. My brain was starting to dissolve after just a minute of this discourse. This guy is exceptional.
This is mere philosophy. No physical content. No new predictions. Just word games.
So what happens when the wavefunction is continuous rather than discrete? That situation would be rather like a particle being detected at any of an infinite number of places on a screen after having been diffracted through a single slit rather than asking whether it went left or right in the dual slit experiment. Although one can assign a probability for detection in any small area defined on the screen, the probability of detection at any given point will be zero.
I think we sooner or later will find that space and time are discrete.
Nothing else makes much sense.
The wavefunction is continuous; we observe a smear on the screen after it passes through the observed slits.
I don't think what you're stating is necessarily more than a Zeno's-paradox type thing. We can do continuous probability, can we not?
Well you can’t really measure accurately down to a point because of the Heisenberg uncertainty principle. The uncertainty in position times the uncertainty in momentum needs to be greater than some constant, and a point position would have zero uncertainty. So you can never find yourself in a world that has zero probability to be measured (which kind of makes sense when put that way)
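A quick numerical sketch of that point (using a Gaussian stand-in for the diffraction envelope, purely for illustration): the density |ψ(x)|² is finite everywhere, but the probability lives in intervals; as the detection window shrinks toward a point, the probability goes to zero.

import numpy as np

x = np.linspace(-5, 5, 100001)
dx = x[1] - x[0]
psi_sq = np.exp(-x ** 2) / np.sqrt(np.pi)  # normalized |psi|^2 stand-in

for w in (0.1, 0.01, 0.001):
    window = (x >= 0) & (x < w)
    prob = np.sum(psi_sq[window]) * dx  # integral of the density over the window
    print(w, prob)  # finite for any interval, shrinking toward 0 at a point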
@@d95mback Isn't discreteness already proven by Planck length and the Zeno paradox?
@@CTimmerman not sure what you mean by "proven". The Planck length is a very small length indeed, but it's just what you get when combining the Planck units.
The smallest possible length could be much smaller than that.
Science requires confirmation of hypotheses via experiment, except for science communicated via YouTube, where anything goes.
My brother has a jacuzzi with a wave function.
I love PBS Space Time sooooo much❤❤❤ thank you so much for this information! Gonna watch this one a couple times to let it soak in
Sometimes when i'm getting too smug, i just need a spacetime video to humble me
I'm glad i exist in the world, where watching this did not melt my brain. I do mourn the loss of my many copies that were not so lucky :s
Watching PBS space time makes me feel smart. Even though I can't help but blank out for half the video
I actually didn't hear any assertion of a distinction between "creating and distributing worlds over probabilities" and "creating and distributing corpuscles over probabilities".
I'd be very surprised if y'all ever made an episode on that fine a question, but I feel like that is necessary.
Pilot-wave Theory is basically just the Everett Interpretation with unnecessary extra steps.
Does there need to be a difference? I think the intended difference is what comes after, like how the corpuscles disappear compared to the many worlds that persist.
He already did make a video about it: ua-cam.com/video/BUHW1zlstVk/v-deo.html
@@n0tthemessiah No, it is not, as Everett's interpretation fails to explain why we find ourselves in one branch over another if all outcomes occur. It is just an empirical fact that only one outcome occurs, and if both X and Y are possible, and X occurs, why did we measure X and not Y? While I do not endorse pilot wave, Copenhagen, or MWI, pilot wave at least has a specific reason for why X would be measured instead of Y, that it depends on the particle position.
I really love the direction PBS Spacetime is taking, often considering interpretations on top of established, verified quantum theory. It's no simple "shut up and calculate" from PBS.😎
Nobody asked you to shut up when they explained Copenhagen to you. You just didn't take the time to learn what it is trying to tell you. ;-)
The new logo is absolutely starLIT✨💙💚
this seems like one of those things that really is easier to get a handle on if you know the math. a formidable proposition. as someone who went applied as soon as he could, having recognised his pure ability was severely limited, this quantum stuff is a cruel trick indeed
The math doesn’t help with the understanding. It just gets you the answer that matches measurements.
I don't see how this supports the Many Worlds hypothesis at all. Adding the "Principle of Indifference" is what does it. So yes, adding another convenient premise to a theory can help it explain another facet of quantum dynamics. That is true for any theory. Add more postulates, explain more.
The "trick" here is convincing people that the "Principle of Indifference" is somehow "true" rather than the base speculation it is. It's a version of, "either it happens or it doesn't, so everything has a 50/50 chance of being true". Aliens killed Kennedy? 560/50 is a reasonable view if you don't know anything (which most people don't). The absence of knowledge of likely outcomes in no way should generate a view that things are 50/50. In my admittedly not so humble option :)
I'm very new to this so sorry if my question seems stupid.
If my coin was weighted in such a way that it had exactly √2/10 chance of showing heads and 1-(√2/10) chance of showing tails, wouldn't that lead to an uncountably infinite number of branches, thus making the coefficient of each separate branch 0?
Good question. I don't think that many worlds has a definite answer to this. I also think that there are issues when the number of possible outcomes is infinite.
You can take a limit of closer and closer rational approximations, perhaps?
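Here's a little sketch of that idea (purely illustrative): approximate √2/10 by rationals p/q and pretend each measurement makes q branches, p of which show heads. The required branch counts blow up as the approximation tightens, which is exactly the worry.

import math
from fractions import Fraction

p_heads = math.sqrt(2) / 10  # irrational target probability

for max_q in (10, 1000, 100000):
    approx = Fraction(p_heads).limit_denominator(max_q)
    # approx.denominator "worlds", approx.numerator of which show heads
    print(approx, abs(float(approx) - p_heads))  # error shrinks, branch count grows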
I think there is one more thing to consider for many worlds: I think it is also branching in the "past" direction (and if we are there, we just experience it as "forward" as normal).
- So, theoretically, even the past keeps getting fuzzily "rewritten" in the big "buzz" of the universal wave function that we exist in a small sliver of (just like the future is a weird fuzzy "infinity" of waves whose interferences create the patterns we are made of and are observing).
My PhD is in probability, and what you have here is very good and well thought out. As a thought experiment to emphasize what I think you're alluding to, I'd ask those interested one question:
Suppose I define X to be the sum of two fair 6-sided dice. What's the sample space of X?
(a) If you say {2,3,4,5,6,7,8,9,10,11,12}, then you're defining your 'outcomes' in such a way that they're not equally likely and a non-trivial pmf is needed to describe your random variable.
(b) But if you say {(1,1),(1,2),(1,3),...,(6,5),(6,6)}, remembering that the sample space need not contain the possible realizations of X, but rather that X is a function from the sample space to the real line, in this case X((a,b))=a+b, then you've defined your sample space such that each outcome IS equally likely, and then even non-trivial events like {X=4} can be measured probabilistically simply by using the axioms of probability. It's super interesting how the rigorous parts of probability theory get interpreted on a combinatoric level in, say, an intro probability course.
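In code, the distinction is just the choice of what you enumerate (a minimal sketch):

from itertools import product
from collections import Counter

# Sample space (b): 36 equally likely ordered pairs.
space = list(product(range(1, 7), repeat=2))
counts = Counter(a + b for a, b in space)  # X((a,b)) = a + b

# Indifference over the pairs hands you the non-trivial pmf of (a) for free.
pmf = {value: count / len(space) for value, count in sorted(counts.items())}
print(pmf[4])  # 3/36: the event {X=4} = {(1,3),(2,2),(3,1)}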
Can I just say that I absolutely love the thought experiment in the beginning. I like that quantum physics thought experiments are almost never experiments that could pass ethics reviews
nor do they seem to be conducted in reality. i don't think superpositions exist in reality, object permanence and all
Fascinating! Many worlds is my favorite, just from a gut feeling perspective.
@@armandaneshjoo So you have proved otherwise? I look forward to reading about your nobel prize.
I'm pretty sure everyone has a gut-feeling preference based on some combination of what hand-waviness they'll most easily overlook (and all of the interpretations have hand-waviness) and what is most emotionally appealing to them. It makes sense to me that a lot of physicists like this interpretation, because, as they like to say, "it's just taking the math [that we know so far]" and taking it literally! Physicists are immersed in the math, so they prefer the interpretation that takes the math literally, and that handwaves everything else (kidding. Sorta.) It's fine to have a gut-level preference, but given that there is absolutely no way to test this interpretation compared to most others (not even in principle, I don't think), scientists like Sean Carroll should know better than to be out arguing in favor of a particular interpretation. It's much more scientific to say that we don't know, and to keep working on further developing the theory in ways that ARE testable. It seems far more likely to me that QM is an approximation and/or that there's something deeper or more subtle going on that we don't understand yet, than that any of our current problematic interpretations is correct. What Sean Carroll is engaging in is philosophy, not science, but given that he's not rigorously trained as a philosopher, he's probably not very good at it.
This is one of the things I love so much about Roger Penrose. He comes up with some pretty crazy ideas, but they are all testable via experiment. And in cases where they are testable in principle but our equipment isn't yet good enough to test, he refrains from saying how likely his theory is to be RIGHT, and just works on developing the theory.
@@ReivecS It's ironic you say this when MWI is unfalsifiable...
@@amihart9269 There are many papers (and even more when just doing a quick check) that makes claims of ways to test MWI, but even if that weren't the case, my statement was about the ability to prove something else, so this comment still doesn't hold up. Nice try though.
@@ReivecS There are no ways to test MWI. You are appealing to vague amorphous papers which don't exist. Your comment was also about disproof and not about proof, and proof only exists in mathematics not in experimentation which can only show evidence with varying levels of confidence, and the Nobel prize cannot be awarded based on mathematical proof. Nice try though.
Love the video but honestly I'm a bit lost for this one. Maybe I'm overestimating how complicated this is, maybe I'm having a slow day, idk. Regardless, love the channel!
A few thoughts:
1) All we detect are particles. Waves are inferred from the detection of many particles.
2) Perhaps the manifestation of a particle is caused by a discontinuity in the wavefunction (It hits a screen for example). Where the particle lands is governed by the Born Rule. There are no hidden variables, it’s just the rule.
3) Then there is no “measurement”. It is just the universe doing what it does and infrequently we label it a measurement.
4) There is only one universe and it exists whether we look at it or not.
5) Many Worlds and the Copenhagen interpretation are both just silly.
Nobody has ever detected a particle. We can only detect energy. If you don't know why, then you weren't paying any attention in undergrad physics.
When you pretend you have a solid grasp on physics and form opinions
The lack of intuition of this phenomenon in physics IS the problem. You’re operating on your macro scale view of the material world and while that is valid, it’s wholly incomplete. These interpretations can have real world implications such as the development of quantum computers. So yeah they’re not just silly, they’re actually important to what you consider relevant.
@@Palbizu Copenhagen is the only interpretation you will even need. If you don't know why, then you weren't paying attention in undergrad physics. ;-)
@@schmetterling4477 The distinction being made in 1) is between the wavefunction and the particles of the standard model, also pointing out that the wave nature needs multiple particle detections to be discerned. This is a pretty standard viewpoint. Although energy is required to detect particles, you also need a particle (or group of particles). That is, energy doesn't exist independent of all particles (or groups of particles). If you can detect energy without referencing any standard-model particle (or group of particles), I think the rest of us would like to know how.
Your last sentence is just an ad hominem logical fallacy.
The quality of this series is as exciting as ever
Hey Matt,
Is the principle of indifference somehow equivalent to the ergodic hypothesis?
It sounds exactly like distributing the probability across all microstates equally, and then counting the number of microstates in our macrostate of interest (toy sketch below).
Thanks!
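To make that concrete, here's the kind of counting I mean, as a toy sketch (four coins standing in for microstates; "number of heads" is the macrostate):

from itertools import product

# 16 equally likely microstates of four coins (the indifference step)...
microstates = list(product("HT", repeat=4))

# ...and the macrostate probabilities fall out by counting.
for k in range(5):
    count = sum(1 for m in microstates if m.count("H") == k)
    print(k, count / len(microstates))  # binomial: 1/16, 4/16, 6/16, 4/16, 1/16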
interesting question
Indifference should mean not attributing any belief at all, rather than attributing equal belief evenly. The latter sounds more like apportionment.
They're related in that both of these theoretical areas draw on probability theory. It is insightful to pull a bit on why each does: Statistical physics and quantum physics both use the language of probabilities because we (the users of the theory) have some necessary ignorance about what is going on. In SM this is because knowing the microstate of Avogadro's number of particles is impractical; in QM it is because a closed system, as described by the Schrodinger equation, is fundamentally inaccessible to external degrees of freedom (i.e. the closed system is no longer closed when we force a measurement device to interact with it). I think that starts to get at some of what your question intuits.
As an aside, you might find some interesting follow-up on those thoughts by 1) reading a bit about the Quantum Bayesian (QBist) interpretation of quantum theory, and 2) reading about the Maxwell's Demon thought experiment from stat mech, and its analogs in the quantum context. There are some reasonably accessible articles around that can get you started on the latter (Quanta magazine has some nice articles about Maxwell's Demon, for instance). Connect those dots and you'll have understood a great deal about your excellent question!
Yes a new episode!
General relativity and quantum mechanics will never be combined until we realize that they take place at different moments in time. Because causality has a speed limit (c) every point in space where you observe it from will be the closest to the present moment. When we look out into the universe, we see the past which is made of particles (GR). When we try to look at smaller and smaller sizes and distances, we are actually looking closer and closer to the present moment (QM). The wave property of particles appears when we start looking into the future of that particle. It is a probability wave because the future is probabilistic. Wave function collapse happens when we bring a particle into the present/past. GR is making measurements in the predictable past. QM is trying to make measurements of the probabilistic future.
"The wave property of particles appears when we start looking into the future of that particle"
Isn't that notion inherently completely contrary to general relativity?
@@asd-wd5bjThat's the point.
Instead of observing the system from outside the box, observe as if you are the box. You will see time is only real because of the presence of the mind ("the potential of the beginning and end is observed through its own deterioration"; time is infinite).
The coin is always "observed" by the table; the reason we can't see macro effects is that everything collapses far before it could reach the macro scale.
Assuming a closed system and entanglement not happening forever, can you apply Gittins index rather than the principle of indifference to superimposed particles?
Thank you ... This is your best episode to date ... That the Born rule is just an axiom is something I could never accept in uni despite my lecturers' assurances ... But how Born came up with this mystical rule without the many worlds theory is still a mystery to me (future video topic maybe🤞?)
I'm pretty sure he just tried it and it worked--squaring the result produced the right predictions. So he basically derived it by throwing darts and testing the results.
Question: When we divide our basis state with the higher coefficient so that our coefficients are the same, do we use any official mathematical method to do that? For example, the Gram-Schmidt procedure to make the two bases orthogonal?
It doesn't matter how you choose the basis. Nature doesn't care about your coordinate systems, neither in physical space, nor in Hilbert space. ;-)
I think the world evolves according to the Schrodinger equation, but we can't see superposition because we are macroscopic beings, and there are too many particles interacting, causing the macroscopic world to approach non-superposition behavior as the number of interactions explodes with each new particle involved.
Brilliant video, thanks for sharing! Penrose has his own view too about when and where quantum decoherence happens, and he links it with quantum gravity too; the Objective Reduction theory definitely deserves its own full video, I guess! :)
Penrose doesn't understand quantum mechanics. :-)
@@schmetterling4477 but he does understand relativity, and he has a maverick take on it! :p
@@ThomasEmilioVilla Yes, Penrose is an absolute master of relativity. The strange thing is that quantum theory most likely derives directly from relativity, as well, so one should think that relativists should be able to get a quick fix on it, but most of them are guessing just as badly as the average crowd.
The question I’ve always had about the many worlds interpretation is, what if the coefficients aren’t the square roots of rationals? What if you have something like α=sqrt(π/4) and β=sqrt(1-π/4)? The argument of splitting up the worlds doesn’t seem to work in this case. Or is that not possible for some reason?
I second this, the principle of indifference requires a finite number of "cards" (here: multiverses) at each split. Unless we propose a mechanism that changes the number of cards between splits that is a common multiple of both splits without changing probabilities, it also means that the number of multiverses since the big bang is a finite constant. If that's true (and irrational probabilities are just approximations of a rational reality), I'd be curious what that number could be. And at some point in the future we will run out of multiverses to accurately approximate probabilities using distinct multiverses.
That would "just" mean that the eternal wave function has uncountable infinite granularity (i.e. uncountable infinite number of branches) and you are either in one of the uncountably infinitely many "α branches", or one of the uncountably infinitely many "β branches".
This is why I hate it when they describe Many Worlds as "branching". There is only one world, but states within it lose coherence with other states. Thanks to the uncertainty principle, there can't be a continuum of outcomes forever; they must get pooled into a finite number of distinguishable outcomes. Even when outcomes appear to be continuously distributed in reality, there are, in fact, a finite integer number of results that any measurement device (e.g. your eyes) can distinguish.
@@db7213 yeah, like, I get that the many worlds interpretation seems like the most parsimonious interpretation of quantum theory, as in, it is the one which gels the easiest with its formalism, etc.
But what it says about the world is just about as un-parsimonious as anything could ever be, especially if we go down this uncountably infinite path.
Another thing you could say about reality, outside the formalism of quantum theory, would be that quantum theory is just not complete enough to offer a great interpretation for the measurement problem. To me, that just sounds quite a bit less out there, you know?
@@user-sl6gn1ss8p So we have a way to accurately predict and describe what happens at the quantum level, but you choose to ignore it and instead claim that it's not "complete enough", even though there is nothing that's missing about it. What theory would then be "complete enough"?
Yes, but how can those many worlds interact with each other? For example, given the double-slit experiment, how can an electron interact with its other-world counterpart? They are in different worlds, after all.
The Schrodinger equation applies to the big wave function and changes the amplitude at each point/"world" depending on its difference from neighboring points ("worlds"), so nearby points in the wave function influence each other.
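A minimal discretized sketch of that statement (free particle, units with hbar = m = 1; my own toy wave packet): the rate of change at each grid point is set entirely by its difference from its two neighbors.

import numpy as np

# 1D Schrodinger equation, i dpsi/dt = -0.5 d2psi/dx2 (hbar = m = 1).
N, dx = 200, 0.1
x = np.arange(N) * dx
psi = np.exp(-(x - 10) ** 2 + 2j * x)  # a toy wave packet

# Discretized second derivative: each point only "sees" its neighbors.
lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx ** 2
dpsi_dt = 0.5j * lap

print(dpsi_dt[100])  # the "world" at x = 10 is driven by the worlds next door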
Very impressed to see compelling arguments for both Many-Worlds and for Pilot-Waves. Now THAT is science; no preferences, just verifiable experiments.
At what point has someone made a compelling case for Pilot Wave? If the comparative inelegance alone doesn't turn you over to Many Worlds, then you still have to deal with non-locality and other contradictions with relativity. Many Worlds works with fewer axioms, never fails, and isn't ridiculous like Copenhagen. In an unbiased evaluation, Pilot Wave should be the inferior interpretation.
@@davidhand9721 -- What are the other contradictions with general relativity?
@@davidhand9721 Copenhagen is far less ridiculous than the Many Worlds interpretation. I'd buy the Boltzmann brain or simulation theory before Many Worlds.
The idea that every quantum event essentially results in two distinct universes and macroscopically an infinite number of universes is one of the most ridiculous scientific theories.
Yep. Many Worlds is the null hypothesis. All evidence ever discovered supports it without fail, because it is the simplest model devised to fit all available evidence.
The other real _theories_ of quantum mechanics "interpretations" each make verifiable predictions. For example, a Hidden Variables theory like Pilot Wave Theory predicts that there exist hidden variables. The discovery or verification of one of them would prove it correct and eliminate the null hypothesis.
Spontaneous Collapse theories predict that from time to time, a wave function of a particle will spontaneously collapse into a point function with no external stimulus. If we ever observe such a collapse (and can eliminate all extrinsic causes to a significant degree of certainty), it will prove that theory and eliminate the null hypothesis.
Until one of those other theories has its predictions verified to 6 sigma significance, the null hypothesis _should_ remain the default and be what is taught in schools.
It isn't. Because humans. Mostly because Bohr was apparently quite the force of personality, and, more cynically, because it's fun to watch students' faces scrunch up in confusion as we teach them the same magical gobbledygook of paradox and contradiction that we were taught. For a century now. Go humans!
I have never seen a compelling argument for Copenhagen, Many Worlds, or Pilot Wave. All of them rely on arbitrary postulates that Einstein had already eviscerated.
My challenges to this:
1. Why do we need to square the amplitude? It seems we're still taking it as an axiom that probability equals squared amplitude.
2. If you say we're entangled with what we're observing, which is why we can't see superpositions - how and when does the entanglement happen? In a double slit experiment, why aren't we entangled with the photon that's propagating as a wave, until we try to measure the photon's position?
It's still not clear to me that MWI explains anything; it just seems to add more complications. (E.g. conservation of energy: if measuring the outcome of a heads/tails experiment splits one world into two worlds, does the energy of each world get cut in half? Then why don't we notice the energy of the universe dropping exponentially over time? Alternatively, does each world have the same energy after the split, so the total energy across the multiverse is now double what it was? If yes, where did all the extra energy come from?)
It's not an axiom. You can derive the structure of quantum mechanics from a set of axioms that are basically the same as Kolmogorov's axioms for probability theory. It just happens that algebraically probability theory is not the only solution that satisfies those axioms. Another solution is standard quantum mechanics with complex numbers and a third one is quantum mechanics based on quaternions. So what are Kolmogorov's axiom really saying? They are describing the logical requirements for ensemble theories. Unitarity, in particular, is just the requirement that all the dynamic systems we throw into an ensemble are still there at the end when we evaluate (measure) the outcomes. That's a highly abstract axiom. Real physical measurements are not unitary because of detector efficiency and noise.
The reason why we can't observe superposition is absolutely trivial: it doesn't exist in nature. Superposition is a property of the ensemble theory. WE put it in there with the assumption that all systems in the ensemble are statistically independent. If you were to perform very high precision experiments on actual quantum systems you would see this break down because first order correlations in actual repeat experiments on the same source/detector pair are not zero... they are merely small enough to be ignored for systems that are given enough time to "thermalize".
In other words... quantum mechanics is a theory that makes some subtle assumptions about the world that are just not so... but people can't tell the difference between the reality of it and the math used to describe the reality. That's only possible if you spend some time in an atomic, nuclear or high energy physics lab at the bench or accelerator. Only then will you start thinking about the difference between what you are doing and the math that you are using to model what you are doing.
1, the squaring is necessary since it's not just a single system in superposition, but both the receiving system (the detector) and the transmitting system (the laser). So if the laser is either A or B, and the detector is either X or Y, then when the two systems interact, the result is that you multiply the two amplitudes together (squaring).
Analogy: there is never one coin being flipped, but two coins, so the final outcome has probability .5²
Look up the Transactional Interpretation of Quantum Mechanics.
2, some have proposed that the entanglement is what causes time. As in, without entanglement occurring, no time would pass.
@@adamsmith7885 Time is a classical quantity, even in quantum mechanics. A single system is also never in superposition. Only the quantum mechanical ensemble can be in superposition. If you want a probability out of coins, then you have to flip an infinite number of them. :-)
I dunno anything about physics or math, but maybe the reason the many worlds interpretation doesn't have to rely on an axiom and is self-proving, while the others aren't, is that the concept of probability itself relies on the assumption that the world could be what it isn't, aka that something could exist outside of it. The split into different versions of reality naturally follows from this. Whether it's a mathematical tool or reflective of reality is another question, as is whether there's a difference between the two possibilities at all.
As Monty Python's Professor Gumby once famously whined, ...
... "My brain hurts!"
question - how does many worlds deal with the time dimension? a lot of the examples you come across when people explain many worlds contain strictly sequential events, experiment - measurement - observation. time is implicit here. what are some more exotic possibilities
From what I understand, time in many worlds is Einsteinian and deterministic - all possibilities exist already 'somewhere there' and it's the observer that follows one 'predestined path' of events. Otherwise we'd have to assume that with every ongoing event reality splits into an infinite number of possible options, and that such a process is ongoing constantly. Personally I find both options equally stupid :)
Mathematically, time in QM is an external parameter; it just goes linearly and states evolve with time, everything is some function F(t). There is no time operator, no time quantization, etc. The Many Worlds Interpretation usually doesn't add anything new to this picture. Quantum Field Theory uses maths from Special Relativity, flat 4D spacetime, where different observers may have somewhat differently inclined time axes and use the Lorentz transformation to convert from one frame of reference to another.
Interestingly though, and not many people think about it, time becomes a real puzzle in MWI: if we're inside some branch of a wave function, and wave function is all there is, and its amplitude is not observable from inside, and Schrodinger equation only changes amplitudes, this would mean from inside our world we should not be able to see any changes at all, so how the heck we see time passing is a riddle.
@@thedeemon Hence my question, really. I feel the whole premise of us observing and measuring the outcomes of an experiment implies some sort of sequential order by default. So really it is at the centre of the entire concept, and yet I haven't seen or heard much about it; admittedly I'm not an expert, but I find it odd.
@@thedeemon Could it be that these are all artefacts of the limitations of our consciousness? We don't really understand consciousness very well, so perhaps a more advanced form of it could deal with seeing all possible worlds simultaneously. We are unable to observe all possible outcomes simultaneously, so we only do one at a time. Similarly, we have invented the time concept to aid us with seeing logic in the whole process.
choose the wave function where you're feeling better, Matt. Be well.
When observing the effects of radioactive isotope decay, the cat acts as a detector. The most important thing is that the wave function and the isotope's 'after decay/before decay' superposition is only a MATHEMATICAL TOOL for calculating the probability of seeing (at the moment of observation) the cat alive/dead. There is no such thing in nature as an isotope that is both "decayed" and "undecayed". If Wigner's Friend performs the first measurement after the half-life (and the cat is alive), then before the second measurement (after another half-life) he estimates the probability of seeing the cat alive at 1/2. Wigner takes the measurement only after "two half-lives", and for him the probability of seeing the cat alive is 1/4. Therefore, both observers estimate their probabilities differently, and their superpositions differently, i.e. their wave functions "are different" (at a different stage of evolution), but if we have N observation positions with two observers, the results of both statistical predictions will be correct (cats will be alive in 1/4 of the N positions).
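The arithmetic in that setup, as a quick sketch (assuming ideal exponential decay; my own toy code):

# Survival probability after t half-lives is 0.5**t.
p_friend = 0.5       # the friend, updating after the first half-life: cat alive so far
p_wigner = 0.5 ** 2  # Wigner, predicting from t = 0 across two half-lives

# Consistent: Wigner's 1/4 is the friend's 1/2 times the conditional 1/2.
print(p_wigner == 0.5 * p_friend)  # True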
I was just thinking about this. Good timing with the video.
For perhaps the second or third time, I’ve just watched a PBSST episode and was able to follow along the whole way without suffering from any induced headscratchers.
That's awesome. I thought I got most of it, but still had to watch it again to make sure I wasn't fooling myself.
Uh-huh.
My mind immediately went to the ramifications of this to the Monty Hall problem, and there was an Elder God behind door number two.
Because it's already debunked. It tries to present itself as the "inevitable" and "obvious" consequence of a single premise, yet the premise presented in the video is actually immediately abandoned in the original paper it references when it tries to derive the Born rule for an updated version of the premise which is incredibly arbitrary and clearly chosen just to derive the Born rule, meaning most of the video is an irrelevant red herring. See the paper _"Epistemic Separability and Everettian Branches: A Critique of Sebens and Carroll."_
Can anyone explain to me why we abandon principles like energy and mass conservation in the universe for a hypothesis in which each single Planck time interval gazillions of whole new universes have to pop into existence and look as if they have been there for billions of years???
MWI seems to pose no problem with conservation. It is just QM, and QM does not violate conservation. But I think we need an explanation for it, because it seems contradictory.
@@nmarbletoe8210 What do you mean by "QM"?
@@Pit.Gutzmann quantum mechanics
@@nmarbletoe8210 Thanks. I still think, yes, a universe can start out of quantum fluctuations, but generating a 13.8-billion-year-old universe at each quantum event...?
@@Pit.Gutzmann It does seem extraordinary. But it seems to be required even without Many Worlds.
In Feyman's Sum over Histories, all possible paths combine to form the probability wave function. Each path is like a world in the Many Worlds.
Since information requires some material carrier, perhaps this means the MW are material as well...?
Very interesting argument presented; it feels like the first time the MWI has had any weight to it beyond your standard interpretation hypothesis. The Born rule always felt like a hack: we can't use imaginary numbers, so let's just take the squares to get a real number, and voila, it works. This is the first time I've seen an actual possible explanation of why it works so well. How do we build from here on out, though, assuming we're onto something with this?
I never thought the justification of the Born rule was "to avoid complex values".
If you have a bounded observable, some self-adjoint operator, it will be diagonalizable, and so you can take an eigenbasis, and split whatever state into a sum of orthogonal eigenstates of the observable.
And the Born rule just says that the probabilities of each observation are proportional to the squared norms of these?
Which, like,
well, if the different lengths are to correspond to the probabilities in some way, and probabilities sum to one,
then, well, the squared lengths of orthogonal vectors sum to the squared length of the sum of those vectors (which is 1 if we assume state vector is normalized), so it all works out?
So like,
it’s a consequence of using the l^2 norm over the l^1 norm,
and the l^2 norm is the natural thing to use,
and also, like, rotations (unitary evolution rotates the state vector, and rotations preserve the l^2 norm).
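You can check the l^2 bookkeeping numerically (a small sketch with a random state):

import numpy as np

rng = np.random.default_rng(0)

# A random normalized state in a 4-dimensional Hilbert space...
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

# ...decomposed in an orthonormal basis (here the standard basis):
probs = np.abs(psi) ** 2

# Squared lengths of orthogonal components sum to |psi|^2 = 1,
# so they can serve as probabilities.
print(probs.sum())  # 1.0 (up to float rounding)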
To make the quantum coin and the standard coin appear the same, we would take a picture of the standard coin flipping in the air, and that would count as the “check when flipped”, with the act of continuing to rotate while being observed playing the role of the superposition.
Once it’s on the back of your hand the decision is already made, which is the way to measure “headsness or tailsness” of the standard coin at 50/50
But midair, it could be either, depending on the angle you observe it, and it will change after your observation too, but you can figure out what it was in a single moment from a certain angle
Of course, your taking of the picture can also change the outcome if someone else takes a follow-up picture, because you have disturbed the system in place if your detector isn’t completely passive
I'd buy playing cards with that design (from ~12:30) on 'em. Y'all should turn that into merch too!