Read about about "Complexity Theory’s 50-Year Journey to the Limits of Knowledge" at Quanta Magazine: www.quantamagazine.org/complexity-theorys-50-year-journey-to-the-limits-of-knowledge-20230817/
Confused here... please help... The first statement of logical facts and action in the video: "A robot arrives in a foreign land where everyone either always tells the truth or always lies. The robot reaches a fork in the road with two choices. One path leads to safety in the land of truth tellers, while the other leads to doom in the land of liars. A sentry appears, but it is unclear which group they belong to. What question can the robot ask to determine the safe route?" Q1. Which path leads to the land of liars? Q2. Are you a liar or a truth teller? Q3. Which path leads to your home? Number 3 is the video's answer to the robot's quandary... But WTF? THE SENTRY COULD BE LYING AND POINTING OUT THE WRONG PATH TO HIS HOME, SINCE HE COULD BE FROM THE LIARS' GROUP. How does the video factually, logically conclude from the stated logic rules that it is safe to go right? Why is it safe to go right? HELLO? I don't see the robot being able to correctly draw a conclusion from the facts given up to that point. So how can we move on to multiple sentries if the prior step is based on a logical fallacy????
As a CS graduate student, I find the theoretical sections of the field quite mind-bending and profound in a way. I think people oftentimes underestimate what deep insights questions in computer science can give back to the world. Thank you for showing such a nice summary of one of them!
Yes. People assume human creations are somehow unnatural, but many if not all are an iteration of nature's patterns. We have an innate desire to express them, or their expression is just inevitable. The further we go, the more we realize we are just creating something already created; in a way, unraveling the story the universe is trying to tell us.
I think theoretical computer science is as fundamental to the nature of reality as physics. These days, more and more physicists look at problems through the lens of information theory and computational complexity.
@iamtheusualguy2611 Absolutely. When I started studying CS I had a hard time with mathematics and didn't see much use in all the theoretical parts. But it wasn't long before I saw the wonders of logic, complexity theory, and computability theory. I consider them the absolute highlights of my time at university; it often felt like a kind of mathematical philosophy, so deep were the results I learnt there. I wish more people knew about the deep wonders of theoretical computer science.
This video is good, but a few small details to add. 1. Solving P vs NP doesn't mean all encryption breaks overnight. RSA could be broken if a polynomial algorithm is found for an NP-complete problem, but only if the polynomial isn't too big; even polynomial algorithms can be unusable in practice. This all assumes an explicit constructive proof of P = NP is found. Non-constructive proofs will not help solve any real-world problems, and if it is shown that P is not equal to NP, nothing will change. Even if an algorithm to break RSA is found, we can build other encryption methods on NP-hard problems like the Travelling Salesman Problem (shortest-path version). 2. The Travelling Salesman Problem (TSP) is NP-hard in its usual statement. It is only NP-complete if you ask the question "is there a tour shorter than a given length?". If you ask for the shortest tour itself, that is not verifiable in polynomial time.
If TSP Shortest Path is NP-Hard due to the complexity of verifying a solution, wouldn't that render it unusable for encryption, as quick verification is a requirement?
@@TheEzypzy No. If you actually know the solution, it is easy to verify. If you do not know the actual solution, it is hard to verify if a proposed solution is actually a solution. If I know the actual shortest path, it is easy to see if someone else found the same shortest path. What is hard is if I do not know the shortest path, but I have a pretty short path, how do I ensure it is actually shortest.
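The distinction in this thread can be sketched in a few lines of Python: checking a proposed tour against a length bound (the decision version of TSP) is a fast polynomial-time loop, while finding the shortest tour, or certifying that no shorter one exists, is the hard part. The distance matrix and tours below are made up purely for illustration.

```python
# Verifying the *decision* version of TSP ("is there a tour of length
# <= k?") given a candidate tour. This check runs in polynomial time;
# finding the shortest tour is where the difficulty lives.

def verify_tour(dist, tour, k):
    """True if `tour` visits every city exactly once and its total
    length (returning to the start) is at most k."""
    n = len(dist)
    if sorted(tour) != list(range(n)):   # must be a permutation of all cities
        return False
    length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return length <= k

# Toy symmetric 4-city distance matrix, invented for this example.
dist = [
    [0, 1, 4, 3],
    [1, 0, 2, 5],
    [4, 2, 0, 1],
    [3, 5, 1, 0],
]

print(verify_tour(dist, [0, 1, 2, 3], 8))   # length 1+2+1+3 = 7 -> True
print(verify_tour(dist, [0, 2, 1, 3], 8))   # length 4+2+5+3 = 14 -> False
```

The verifier only answers "is this particular tour short enough?"; it says nothing about whether a shorter tour exists, which is exactly the asymmetry the comment above describes.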
I think RSA's underlying problem (factoring) is in both NP and co-NP rather than NP-complete, and it can be solved by Shor's algorithm in polynomial time on a quantum computer. But more recently there have been proposals for quantum-safe cryptography that use lattices in higher dimensions with secret basis vectors, I think.
So glad to see Shannon mentioned. He is massively underrated; he is basically the father of modern computing (to say nothing of communication and information theory).
Shannon cannot be underrated. His work is taught in CS school for entropy, so as programmers we understand the concept of compression, Huffman coding, and the theoretical minimum limit of compression, and after that the implications for physics. Shannon's "law" is as big as the second law of thermodynamics; it is just as fundamental a truth of this universe. Because of that, it can't be underrated.
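The "theoretical minimum limit of compression" mentioned above can be made concrete: Shannon's source-coding theorem says the entropy of the symbol distribution, in bits per symbol, lower-bounds the average code length of any lossless code, Huffman included. A minimal sketch, with a made-up message:

```python
# Shannon entropy of a string's symbol distribution: a lower bound on
# how far any lossless symbol code can compress this source on average.
from collections import Counter
from math import log2

def entropy_bits_per_symbol(text):
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * log2(c / n) for c in counts.values())

msg = "aaaabbc"  # skewed distribution: 'a' x4, 'b' x2, 'c' x1
h = entropy_bits_per_symbol(msg)
print(f"{h:.3f} bits/symbol")  # well below the 8 bits/symbol of raw ASCII
```

A Huffman code for this distribution would average close to, but never below, this entropy value, which is the sense in which Shannon's bound is a physical-style limit.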
@@vindieu You're validating my point. He has done such massively important things, yet only CS/ECE majors know about him. He should be up there with the scientists known to general audiences.
@@bennyklabarpan7002 Genuinely interested in your perspective. Are you saying as a net no lives were saved (ally and axis combined) or no lives even of the allies were saved? My assumption was he did save allies' lives indirectly and did have a hand in winning the war for the allies.
I'm a teacher and I have to applaud your video style. It's excellent from an educational perspective, with very sharp, clear visualisations and a superb pace. Bravo.
Great video, but we must move with velocity toward better living styles; in '25 there are many jobless people and empty retail buildings! This is great for the mind, but we need applied theory to work on energy and goodness in this country.
This was excellent! Scott Aaronson praised it highly for accuracy but did state that it would have been improved by addressing the difference between Turing machines and circuits (i.e., between uniform and non-uniform computation), and where the rough identities “polynomial = efficient” and “exponential = inefficient” hold or fail to hold.
UG in CS and MS in CS, now jobless. Life is very hard. Watching the subject in this video brought me to tears. The enthusiasm I showed in each and every class... Theory of computation, compiler construction, and microprocessor integration: all three core subjects are simplified in this video. Thank you for bringing my memories back. I will fight again to survive. 💔
@@damareswaramaddepalli9714 I'm also from India. I took arts for my bachelor's degree, but through self-study I have some knowledge of CS and programming, and I want to learn more, so I want to pursue an MSc in CS. So what should I do? I'm confused. You're experienced, which is why I wanted your advice, brother 🙏
@@SumiNaga19 A master's in the US is not meant for learning. People just need a work permit to work in the USA 🇺🇸, so they pursue a master's. You won't study anything in the master's; rather, you'll end up in a 6-hour on-campus part-time job, and after the part-time job you go home, cook something, and sleep. After the master's you will be jobless. Hundreds, if not hundreds of thousands, of students were staying in the USA without a job on their initial OPT from the past two semesters, and I saw some people who didn't get any job even on their STEM extension: still doing part-time work, and hundreds of students went back.
@@damareswaramaddepalli9714 Where do you think you can actually learn something? Maybe at the PhD level? Or in Europe/Asia? I'm doing an MS in CS in the USA now, and yes, some classes aren't that good, but some are good. However, I agree most are jobless; better to work at Walmart.
What a well-constructed video. I appreciate how it started from basic computer science knowledge and gradually introduced higher-level computer science topics in an accessible way.
As someone who has a degree in philosophy and is now working on a CS degree, it's fascinating to see all the overlap when it comes to thinking about these kinds of problems. Great video!
His last question, "Will we be able to understand the solution?", is the most profound question of all. It reminds me of the computer Deep Thought in "The Hitchhiker's Guide to the Galaxy", which spent generations trying to solve the problem of "Life, the Universe and Everything". After many hundreds of years it came up with the answer..... 42. But what does THAT mean????
Exactly! Let's say that matter and energy sprout out of nothing, and both can cancel out. What made them separate to start with, and rejoin later? c? the speed of light? c can survive any black hole, but we can't appreciate the speed of light as a "thing". So 42 could be an answer but we can't understand that as a solution.
It means that the person who posed the question and the brainy computer were functional idiots. Garbage in and out. In other words, bad question, bozo.
One of the best explanations of P, NP. I recalled my Theory of Computation, Information Security lectures and found it really fascinating. The insights are really cool and best explained. Thank you so much !!!
As a theoretical computer scientist I can tell you there is false information in this video, like the claim at 9:46. But overall it's a good attempt at a hard concept to grasp.
Plain Boolean formulas cannot operate like a Turing machine because they have a fixed bound on computation (once you assign values to the variables you can simplify the expression in a fixed amount of time). For a paradigm to be Turing complete it must be possible for it to run forever, which requires conditional loops. Boolean circuits can be made Turing complete by introducing registers (for memory) and a clock to synchronize computational steps, but they can no longer be represented as pure Boolean formulas.
📝 Summary of Key Points:
📌 The P versus NP problem is a conundrum in math and computer science that asks whether it is possible to invent a computer that can solve any problem quickly, or whether there are problems that are too complex for computation.
🧐 Computers solve problems by following algorithms, which are step-by-step procedures; their core function is to compute. The theoretical framework for all digital computers is based on the concept of a Turing machine.
🚀 Computational complexity is the study of the resources needed to solve computational problems. P problems are relatively easy for computers to solve in polynomial time, while NP problems can be quickly verified given the solution but are difficult to solve.
🌐 The P versus NP problem asks whether all NP problems are actually P problems. If P equals NP, it would have far-reaching consequences, including advancements in AI and optimization, but it would also render current encryption and security measures obsolete.
💬 Circuit complexity studies the complexity of Boolean functions when represented as circuits; researchers study it to understand the limits of computation and to optimize algorithm and hardware design.
📊 The natural proofs barrier is a mathematical roadblock that has hindered progress in proving P does not equal NP using circuit complexity techniques.
🧐 Meta-complexity is a field of computer science that explores the difficulty of determining the hardness of computational problems. Researchers in meta-complexity are searching for new approaches to important unanswered questions in computer science.
📊 The minimum circuit size problem asks for the smallest possible circuit that can accurately compute a given Boolean function.
📣 The pursuit of meta-complexity may lead to an answer to the P versus NP problem, raising the question of whether humans or AI will solve these problems, and whether we will be able to understand the solution.
💡 Additional Insights and Observations:
💬 "The solution to the P versus NP problem could lead to breakthroughs in various fields, including medicine, artificial intelligence, and gaming."
🌐 The video mentions that computational complexity theorists want to know which problems are solvable using clever algorithms and which are difficult or even impossible for computers to solve.
🌐 The video highlights the potential negative consequences of a solution to the P versus NP problem, such as breaking encryption and security measures.
📣 Concluding Remarks: The P versus NP problem is a significant conundrum in math and computer science that explores the possibility of inventing a computer that can solve any problem quickly. While finding a solution could lead to breakthroughs in various fields, it could also have negative consequences. The video discusses computational complexity, circuit complexity, and meta-complexity as areas of study that may contribute to solving this problem. The pursuit of meta-complexity raises questions about whether humans or AI will solve these problems and whether we will be able to understand the solution.
Generated using Talkbud (Browser Extension)
It's fun looking into the CSAT problem and trying to figure out exactly why you can't just assume the output is 1 and trace it back to a random possible input combination. It eventually comes down to these two facts:
1. Logic gate outputs can be shared between logic gate inputs. E.g., the output of an OR gate may be connected to two (or more) AND gate inputs.
2. The AND and OR gates map multiple input combinations to each output. That means they are many-to-one functions and do not have inverse functions.
As a result, when you try to trace back from a given output, you have to guess an input every time. This wouldn't be a problem on its own, but because of the first point (gate outputs can be shared between gate inputs), when two guessed inputs meet at a common output they won't necessarily match, so you end up with 1 and 0 as the output value simultaneously. That's where the computational complexity increases drastically: now you have to keep account of all those common outputs branching into multiple inputs, so that your guessed values align when they meet.
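The forward direction of this asymmetry is easy to show in code: evaluating a circuit on a given input is trivial, so the straightforward (exponential) approach searches assignments forward rather than tracing backward through shared gates. The toy circuit below is made up for illustration; note the shared fan-out of `a`, which is what makes naive back-tracing inconsistent.

```python
# Brute-force circuit-SAT on a tiny made-up circuit:
#   out = (a OR b) AND (NOT a OR c)
# `a` feeds two gates, so guessing backwards from out=1 can assign it
# conflicting values; searching forward over all 2^n inputs avoids that.
from itertools import product

def circuit(a, b, c):
    return (a or b) and ((not a) or c)

def csat_bruteforce():
    """Return the first satisfying assignment, trying all 2^3 inputs."""
    for a, b, c in product([False, True], repeat=3):
        if circuit(a, b, c):
            return a, b, c
    return None

print(csat_bruteforce())  # -> (False, True, False)
```

For 3 variables this is instant; for n variables the same loop takes 2^n evaluations, which is precisely the blow-up the comment above is describing.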
I have noticed that some programs have parts of their code in class P and parts in class NP. There are classes of relaxations that use heuristics to solve some problems, but with no guarantee of success or of optimality.
I think Dijkstra (and thus also A* traversal) already encapsulates the discrete analog of the variational method, guaranteeing the best possible/shortest path for a search or traversal of sorts. Sort of like how we optimize for the distance between points to be a straight line in Euclidean space using variational principles, in discrete spaces with weighted edges we can already see from Dijkstra/A* that the result is indeed the shortest/least path; Dijkstra gives this information when the algorithm reaches its halt state. I know you already know this; I'm just stating it for completeness and to fill in the gap for an audience who may interact with this post. I understand you are NOT referring specifically to traversal algorithms (which, I think, was exactly what your original statement was trying to convey) but rather generalizing the term "algorithm" to cases where we know solutions may or may not exist, but we are not sure whether they are optimal or how to even implement them in the first place. So there is no reason to question whether certain algorithms are already "best", but it is hard to know, for algorithms in general, whether we can even find a solution. Think of chess, Sudoku, the n-queens problem, Dürer's magic squares, Mastermind, or any verifiably-correct-solution puzzle given to an algorithm as input: there are multiple known implementations that can easily verify a solution, but for which we do not know the best strategy to find one. If we look at Mastermind (MM), for example, the best known implementation is Knuth's five-step solution, a verifiably correct result with the very inefficient trade-off of a huge, n-factorial (n!, to be more precise) amount of memory at program runtime.
This is actually a problem that we need to address (I am not sure whether there is research being conducted in this direction or not; it would be interesting to know if there is). Knuth's algorithm for solving MM (in 5 steps) works excellently on a small enough input, e.g. 6 colors, but if we parametrize the number of colors in the input to Knuth's algorithm, the problem becomes n!-hard to solve. That's just my two cents on Knuth's algorithm. It's not even hard to prove; we just have to look at the first one or two steps of the algorithm to see why it is n!-difficult in terms of storage. Knuth's algorithm starts by listing out all the possible combinations for the correct color sequence, and then eliminating the color sequences that cannot possibly be correct. Now scale the input to, say, 10,000 colors...
[That would be like having 10,000 different hex/RGB values for color inputs, or just 10,000 different "things". It's irrelevant what the color inputs mean, since solving MM has other mathematical applications, as with any algorithm, especially at larger inputs; the colors are just placeholders.] The result of scaling the input so drastically is that we cannot efficiently solve MM this way because, ya know... stack overflow! So Knuth's is efficient on 6 pegs (in terms of steps) but performs very poorly, if at all, on say 10,000 pegs (due to memory limitations). So... there are questions... MANY QUESTIONS...
I'm an electrical engineering graduate. Besides understanding electricity, what really amazes me is understanding how computers think. I really love it as a side hobby. 😊
I briefly dabbled in this topic in a computer arithmetic hardware implementation course I took in grad school (MS Microelectronics Engineering), mainly the part on circuit complexity (as that was the concept applicable to the course). It was a really interesting topic/course, especially the part where I got to optimise a hardware implementation of an FFT algorithm on an FPGA by applying the techniques from the course.
It's like the RSA encryption "problem": it took normal computers an exponential amount of time to brute-force RSA encryption, but only a polynomial amount of time on a quantum computer.
To add consistency: iff P=NP is constructively solved with a small constant/polynomial, so as to yield the presented disruptions, PKI failure would be no factor anyway, because a zero-marginal-cost economy/society equals a non-monetary economy, provided the materialized P=NP (a non-deterministic processor, NDP) is public domain, e.g. via IPFS.
Nobody is pointing out the potential halo effect of quantum computing: that quantum computers will be able to find minimum circuits and thus design vastly more powerful classical computers. That's probably because everyone expects quantum computers to be so unimaginably faster than even such minimized-circuit classical computers that it won't even matter. But I suspect classical computers will act as an interface to quantum computers, and that quantum computers will, at least initially, be used to solve different kinds of problems and perform different kinds of internet tasks.
No they won't. Quantum computers don't work like regular computers: they have a really high error rate, and they are slower than regular computers for any tasks that are sequential. Quantum computers are only useful for certain sustained parallel workloads, like brute-forcing private keys and passwords or certain kinds of simulation. Right now quantum computers do exist, but their error rate is too high to be usable for any of those tasks, and there is barely a use for them. And there is also no need for faster computers: even cheap modern computers can do almost anything a person normally does on a computer nearly instantly. High-performance computing only matters for saving costs on server infrastructure, graphics processing, and scientific work like computer simulations, physics simulations, etc.
5:13 You'll have to amend the comment on how "Boolean formulas can operate like a Turing machine." There is something to be said about representations of decision problems (computably enumerable sets being Diophantine), but a fixed circuit has a maximal number of binary inputs, while an abstract Turing machine can be initialized with inputs of arbitrary size.
Ugh...listen. I'm glad you took the time to explain the P=NP problem...but your conclusion that if P=NP, then suddenly we have the polynomial algorithm for every NP problem is just silly. Knowing that P=NP just tells us that a solution exists. It would not make us any closer to finding it.
Furthermore, not all P-time programs halt in a practical amount of time! The video gets this wrong! A solution that takes a number of operations proportional to the length of the input to the power of Graham's Number is in P, but it will not halt before the sun dies. P is often taken to mean practical time because: a) No matter what the exponent is, exponential time is still worse for sufficiently large inputs. (But for some exponents, this is a totally theoretical concern because by the time the graphs cross over, neither program will halt before the sun dies and it's not close. There are plenty of possible exponents where some exponential time algorithms are more practical than a P-time algorithm with that exponent because the exponential time algorithm at least halts on *some* inputs.) b) Computer scientists often think of P as meaning P-with-very-small-exponents because the majority of P-time algorithms we've found so far have very small exponents. But not all of them: there's absolutely papers in theoretical computer science that prove a problem is in P because it can be solved with n^40000-ish steps. (For context, n^3 is a bad time complexity for most practical applications and n^4 is terrible.)
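The crossover point in (a) can be computed directly. A small sketch with a made-up exponent of 10 (far smaller than the n^40000 example above): the exponential algorithm is cheaper on every input up to roughly n = 59, and by the time the polynomial wins, both step counts are already astronomical.

```python
# Where does 2^n finally overtake n^exponent? For exponent 10 the
# answer is n = 59, at which point both counts are near 10^17 steps,
# illustrating why "in P" does not automatically mean "practical".

def crossover(exponent):
    """Smallest n >= 2 where 2**n exceeds n**exponent."""
    n = 2
    while 2 ** n <= n ** exponent:
        n += 1
    return n

n = crossover(10)
print(n, 2 ** n, n ** 10)
```

With exponent 40000 (as in the comment's example) the crossover is so far out that neither algorithm would ever finish on a crossing-point input, which is exactly the point being made.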
Nah, if we have a polynomial-time solution to any NP-complete problem, we have one for all. Definitionally, any NP-complete problem can be expressed as (and used to express) any other NP-complete problem through a conversion that is polynomial in time and, importantly, causes only polynomial growth in input size. A polynomial of a polynomial is polynomial, so if a polynomial-time solution to one problem is found, that same solution can be applied to the rest through some conversion step. Large polynomials, likely, but polynomials nonetheless.
@@chrism6880 The search term you're looking for is Karp reduction. Providing one from a known NP-hard problem is the proof of NP-hardness, so one will be well published for any given problem (but no, there's no universal operation). Cook-Levin shows every NP problem is reducible to SAT, so that reduction is pretty much trivial and universal; from there it's just daisy-chaining reductions in the "tree" of proofs (viewing SAT as the root). So, say you want to solve the TSP problem, and some mofo's stumbled upon a polynomial-time solution to the clique problem. You might then take your TSP instance, reduce it to SAT, from there to 3SAT, to the Independent Set problem, and from there to Clique.
@chrism6880 The search term you're looking for is Karp reduction. My first comment was a bit shoddy; a Karp reduction (expressing one problem in terms of another, in polynomial time) to SAT is essentially trivial for any NP problem, since it comes with the Cook-Levin theorem and its proofs. If a problem is in NP, showing NP-hardness is sufficient for NP-completeness, which is done by providing a reduction from any NP-complete problem to the "new" one, so a published Karp reduction will already be known and available (though these may need to be chained, which is fine, since polynomiality is transitive). Say you want to solve TSP, and some mofo's solved the CLIQUE problem already. Reduce TSP to SAT, then follow the chain of known proofs, something like SAT -> 3SAT -> INDSET -> CLIQUE, and you have a four-step reduction doable in polynomial time, with polynomial growth, expressing your TSP as the hypothetically "easy" CLIQUE. (EDIT: The above was a separate comment; it disappeared when the following paragraph was added in the YouTube app, dunno why.) Also, sorry if my original comment was a bit shoddy; I haven't read this stuff for years and had to break out an old coursebook just to get the names and proof relations.
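One of the reduction steps named in this thread is easy to show concretely: INDEPENDENT SET reduces to CLIQUE by complementing the graph, a polynomial-time transformation. A minimal sketch (the brute-force CLIQUE solver here is exponential, standing in for the hypothetical polynomial-time one):

```python
# Karp reduction sketch: an independent set of size k in G is exactly
# a clique of size k in G's complement, and building the complement
# takes polynomial (here O(n^2)) time.
from itertools import combinations

def complement(n, edges):
    """Edge set of the complement of an n-vertex graph."""
    all_pairs = set(combinations(range(n), 2))
    return all_pairs - {tuple(sorted(e)) for e in edges}

def has_clique(n, edges, k):
    """Brute-force CLIQUE decision (exponential; fine for toy sizes)."""
    e = {tuple(sorted(p)) for p in edges}
    return any(all(tuple(sorted(p)) in e for p in combinations(sub, 2))
               for sub in combinations(range(n), k))

def has_independent_set(n, edges, k):
    # Solve INDSET by reduction: ask CLIQUE about the complement graph.
    return has_clique(n, complement(n, edges), k)

# Path graph 0-1-2-3: {0, 2} is independent, but no 3 vertices are.
print(has_independent_set(4, [(0, 1), (1, 2), (2, 3)], 2))  # -> True
print(has_independent_set(4, [(0, 1), (1, 2), (2, 3)], 3))  # -> False
```

If `has_clique` were ever replaced by a polynomial-time algorithm, `has_independent_set` would become polynomial for free, which is the whole point of chaining reductions.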
There are some contradictory statements in your video, but it is a great presentation of P and NP problems. Thank you for creating such a wonderful explanation in layman's terms.
Advancements in quantum computing's measurement problem could bear on P vs NP. A quantum computer can represent 2^N states across N qubits in quantum memory, so arguably all P and NP problems become computational. The bottleneck arises with measuring all the positions in the computation at once to get a complete view of the quantum memory. So if advancements are made in measuring qubits, it may become possible to measure and observe all positions at once, which would bear on P vs NP.
I think an important point to be made in the field of algorithms is that the efficiency of an algorithm depends on input size. Suppose that for the same problem you have a polynomial-time algorithm running in n^200 time and an exponential-time algorithm running in 2^n time. If your input size is always small, say 2 or 3, then the exponential algorithm is actually more efficient.

You can also apply this idea within P. With sorting algorithms, for example, if we know the input size will always be very small, it is sometimes more efficient to use what are taught as the "slow" algorithms (like selection sort or bubble sort) rather than the "fast" ones like merge sort or quicksort. Some may argue this can't be, since bubble sort runs in n^2 versus merge sort's n log n, so merge sort is always at least as good; however, there are hidden constants in these runtimes that are easily overlooked. You should really see the runtimes as (n^2 + c) and (nlogn + d) respectively, where c and d are constants with c < d; the constants represent the additional overhead needed to set up the algorithm. In academia we usually ignore the constants as n grows large, but in practice there are cases where n is guaranteed to be small enough that the simpler "slower" algorithm is actually more efficient. A good analogy my professor made: why use a sledgehammer to pound in a nail when a regular hammer will do? (This also applies to choosing data structures.) Anyway, nicely made video! 👍
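The (n^2 + c) versus (nlogn + d) model above is easy to explore numerically. The constants c = 5 and d = 80 below are made up purely for illustration, chosen only to satisfy c < d:

```python
# Cost models for a "slow" quadratic sort and a "fast" n log n sort
# with setup overheads c < d: for small n, the quadratic one wins.
from math import log2

def quadratic_cost(n, c=5):
    return n * n + c

def nlogn_cost(n, d=80):
    return n * log2(n) + d

small_wins = [n for n in range(1, 30) if quadratic_cost(n) < nlogn_cost(n)]
print(small_wins)  # sizes where the "slow" algorithm is actually cheaper
```

With these particular constants the quadratic model is cheaper for every n up to 10, which mirrors why real sorting libraries switch to insertion sort below a small cutoff.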
Depending on whether or not the sentry ALWAYS tells lies, or just CAN tell lies, and whether or not you can ask multiple questions, then there can be an optimal guaranteed solution to the problem. All you have to do is ask a question with a known answer. "What is 2+2?" If they give you a false answer, then they lie, and the remaining answers just need to be inverted.
If you are limited to one question, it could be this: "If I asked the other guard which way is the safe path, what would he tell me?" Both guards would point to the unsafe route. Take the other route to safety. I'm not sure the answer in the video is correct.
I'm a bit confused. In the video it's mentioned that some seemingly NP problems were later solved using clever polynomial algorithms, which brought them back into P. (1) But it's also mentioned that all NP problems are equivalent, and solving one would mean P=NP and all NP problems would be theoretically solvable/computable. (2) Comparing (1) and (2), doesn't that mean we've already sort of solved it? If someone can help me clear up this conundrum, I'd really appreciate it. Thanks in advance :)
Not all NP problems are equivalent, but all NP-complete problems are. A problem is NP-complete if it is in NP AND every problem in NP is reducible to it in polynomial time. For example, prime factorization is in NP; if you find a polynomial-time algorithm that solves it (which would be a huge discovery), that does not mean P=NP, because there is no known polynomial-time reduction from an NP-complete problem (like SAT) to prime factorization.
Ah, I see. Thank you so much for your response and the apt example you've provided. It seems there was a misunderstanding on my part: I thought all NP problems were equivalent and didn't consider that they have to be NP-complete to be equivalent to each other. So to have P=NP, we need an NP problem that is also *NP-complete* to be solvable in polynomial time.
Intuitively I would say there will always be problems that are too difficult for computers to solve; there will always be NP problems. Even with quantum computers: as computer architectures help us find more complex solutions in polynomial time, new hard questions will always arise. It would be like solving all the mysteries of life in computational time.
Some of us are developing the octonionic theory of unification in which gravitation and quantum theory are emergent phenomena. It is far more successful than other ideas. Please look it up. It is unprofessional that Quanta is wedded to ideas coming only from Perimeter, Princeton and such. There are other countries on earth:-)
Deciding whether a number is prime is in P (proved in 2002 by Agrawal et al.). It's a good example of a problem that, despite being in P, still cannot be solved all that quickly in practice. Be careful when comparing complexity classes.
This video greatly overblows the real-world significance of P vs. NP. The question is a very theoretical one and, although it would be unintuitive to learn that P = NP, it is by no means a given that a proof of P = NP would leap down from the ivory tower and cause any direct or "overnight" effect at all on the internet, AI applications, business, cryptography, etc. For people other than academics to care, we would need to see new algorithms that are actually greatly more efficient on real-world problems than current real-world approaches are, but no such algorithm is necessarily provided by a proof in this area regardless of its conclusion.
A proof doesn't even prove that such algorithms exist. P == NP is not the same thing as P-with-very-small-exponents == NP. It could be that the knapsack problem is in P because someone finds an O(n^10,000,000) algorithm. This would be a constructive proof that P == NP but also have essentially no practical implications.
@@brianb.6356 I think the point @suzannecarte445 is trying to make above is that current encryption algorithms and their solvability in less-than-exponential time are NOT our primary concern. Our PRIMARY concern is to establish whether or not we CAN reduce a known algorithm but just haven't discovered the means at a program level; should we continue searching for an implementation that we KNOW doesn't exist? It would close a whole lot of questions about whether or not we CAN actually solve certain problems in fewer steps (in which case, that would be a good thing), while for other problems we are FORCED to use recursion, especially because recursion is such a good replacement for the loop in a Turing-complete model.
Your video is great. My professor was talking about this in class, and you went into so much detail. I liked the parts you included with the other guy too; the back and forth was cool!
The liar's home is the "land of liars", and the truth teller's home is the "land of truth tellers". A truth teller would point to their home, "the land of truth tellers", but the liar would NOT point to their home. Since their home is the "land of liars", it means they must be pointing to the "land of truth tellers" as well.
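A quick brute-force check of that reasoning (a hypothetical Python sketch, with the right-hand path arbitrarily assigned to the truth tellers) confirms the trick: both sentry types end up indicating the same path.

```python
# Sanity check of the fork-in-the-road puzzle: both sentry types, asked
# "which path leads to your home?", indicate the same path.
# The path assignment below is an arbitrary assumption for illustration.
PATHS = {"left": "land of liars", "right": "land of truth tellers"}

def answer(sentry_type):
    # The sentry's home matches their type.
    home = "land of liars" if sentry_type == "liar" else "land of truth tellers"
    true_path = next(p for p, dest in PATHS.items() if dest == home)
    if sentry_type == "truth teller":
        return true_path  # answers honestly
    # A liar names the *other* path, not the one truly leading home
    return next(p for p in PATHS if p != true_path)

for t in ("truth teller", "liar"):
    print(t, "->", answer(t))
# Both print "right": the indicated path always leads to the truth tellers.
```

Whichever type the sentry is, the lie and the "wrong" home cancel out, which is why the robot can safely take the indicated path.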
At 1:34 there is a flaw in the logic. If you don't know the answer to the question "Which path leads to the land of liars?", how will the answer to the question "Which path leads to your home?" help you? You still don't know whether home is the land of liars or not.
1:20 That question is more straightforward than mine, which was: What would an enemy guard answer to the question "Does the left path lead to the village of the truth tellers?"? Between the enemy's answer and this guard's report of it, there will be exactly one lie; therefore if the guard says yes, I take the right path, and if no, the left one.
Sorry, but I do not get it. First off, we do not know whether only you would be doomed in the bad place, and therefore we do not know where that person lives. Second, if we assume that all will be doomed if they live in the bad place, and we ask "Where is your home?": if it's the liar, is the home on the path to the left, and if it's a truth teller, on the right? EDIT: Your question is logical and smart! :-)
I promise you that most people seriously trying to solve these types of problems could not care less about the money. The clout in the mathematics community is worth more than the money being offered. In fact, these people are so intelligent that they could already work as quants at some bank and make more than a million dollars if they really just cared about the money lol.
@rtothec1234 But you are only talking about mathematicians. What about other people who might solve the problem? Also, P vs NP is part of computer science too; how about those people? Btw, if these people don't care about money, they shouldn't ask for big salaries in the first place.
Ok, encryption is important to everybody, but the best example is the 3-body problem, aka 3-body instability. When you compute the future positions of three bodies subject to gravitation, you'll get an answer that always has an error. So now you are searching for an algorithm that will give you zero error. This is easy for two bodies, but with three (or more) bodies one must take a time increment and compute the next position at the next time increment, which is also the source of the error. Further, 3-body systems are unstable and chaotic (they have no repeating period). You can shorten the time increment and lower the error, but this takes more computation time, with the result that the error reaches zero only when the computing time reaches infinity. Here is a good spot to see that Turing's assertion of giving the computing machine unlimited time in his definition of the universal machine --- *it does not make it universal.* So, ending my comment right here, right now is ok, and we can "seriously muse" about all this. Unfortunately the cat is out of the bag and the implications are fundamental (if not existential). The thing is that our solar system is said to consist of more than two bodies. Ignoring creation/evolution, the solar system will break up sooner or later, and even theoretically it is now teetering on the edge and can fly apart any minute. Or you can take the 3-body instability as prime reality, chuck the solar system, and figure out that the Moon is not a mass rock (hologram image?). Oh, before you call me out as a flat earther, consider that the Earth, Sun, and Moon are three bodies that are working nicely together, thank you.
16:23 What you actually said: "Functions with a number of necessary logic gates grows exponentially with increase in input variables are said to have High Circuit Complexity." Should've said, 'Functions with a number of necessary logic gates that grows exponentially with an increase in the number of input variables are said to have High Circuit Complexity.'
Even if P vs NP has a positive solution, it may be purely existential in nature. It would not necessarily give us a way to construct a polynomial-time algorithm for any of the NP-hard problems. There are many problems for which we have a guarantee that a solution exists, yet no one has been able to find it. So even if P vs NP were solved, there would be no immediate danger of losing the internet or bank cash.
I do theoretical computer science research, and I found some of the explanations by the narrator in this video kind of confusing and misleading. Some things are just false. I love your videos, and I've seen some great explanations before about my area of expertise, where I was amazed by how well you managed to collect your sources to get things right. But this one is sad :(
It boggles me how many people in the comments who call themselves computer scientists call this video „the best explanation there is“ despite these heavy mistakes or straight-up wrong statements
@@Gin-toki Proving P=NP would just prove that P=NP, and that's it. Nothing would change overnight. It's like saying that proving a solution exists for a given equation is the same as knowing the solution, which is not really true. Those polynomial algorithms would still have to be found, and they could still be giga slow in practice. n^10000 is still polynomial but unusable. It would just be a theoretical result. Actually, the methods used to prove it would probably be even more interesting.
If you prove P = NP, you likely have an algorithm that reduces an NP-complete problem to one in P. Since all NP-complete problems can be reduced to one another, this algorithm could also solve all other problems in NP in polynomial time. This would indeed lead to cryptography being broken, et cetera. I don't see why this is an exaggeration.
Fun little fact for the Boolean algebra fans out there: AND, OR, and NOT gates form what is called a functionally complete set. There are other functionally complete sets, including sets that contain only one gate. NAND and NOR are two such gates, each able to replace all three of the normal Boolean operators through replacement constructions. That is to say, you can construct an AND gate with only NAND gates, along with OR and NOT gates. It's kinda wild really...
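The NAND-only constructions can be checked exhaustively in a few lines; this sketch builds NOT, AND, and OR out of nothing but a NAND function and verifies every input combination against Python's built-in operators.

```python
# Building NOT, AND, and OR from NAND alone, then checking all
# input combinations against Python's built-in boolean operators.
def NAND(a, b):
    return not (a and b)

def NOT(a):
    return NAND(a, a)            # NAND(a, a) = not (a and a) = not a

def AND(a, b):
    return NOT(NAND(a, b))       # invert NAND to recover AND

def OR(a, b):
    return NAND(NOT(a), NOT(b))  # De Morgan: a or b = not(not a and not b)

for a in (False, True):
    for b in (False, True):
        assert AND(a, b) == (a and b)
        assert OR(a, b) == (a or b)
        assert NOT(a) == (not a)
```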
I remember having to formally prove this and use it to construct Boolean networks in the first semester of my CS degree. I remember once having to use only NAND gates. They also had us doing the same thing with multiplexers and constants. In one exercise the entire Boolean function was performed by just a tree of 2-input multiplexers, and in another by a giant multiplexer with many inputs and multiple selection inputs, which made the circuits look regular and symmetrical, but also required many more gates, because multiplexers are composed of multiple basic gates. There is quite a bit of freedom in terms of which gates can be used; of the many circuits that can perform a Boolean function, the one with the fewest gates is not always the prettiest one.
Note that even if P isn't equal to NP, this only means that _current_ computing schemes can't efficiently solve these problems. It's already known that time-loops/causal-loops can be used to quickly solve NP Complete problems, we just don't know of any ways to create them with our current knowledge of physics.
@5:17: the schematic has a mistake. One of the inputs of the 3 AND gates is not connected to any output and is therefore floating, making the circuit unpredictable. @6:04: the AND and OR gates are shown as "current" gates, while the NOT gate is more of a voltage gate. In the real world of logic gates, all gates are "voltage" gates, where almost no current flows, and the actual minimal representations of AND and OR gates are quite different from what is shown here. Interestingly, they are so similar that they are actually vertical mirrors of each other.
Conclusion: The statement "security(P) = mathematics(NP)" highlights the crucial dependence of cryptographic security on the mathematical complexity of certain problems. Security protocols rely on problems in NP being hard to solve (not in P). If it were discovered that NP problems are in P, it would have dramatic implications for security, requiring a fundamental overhaul of cryptographic systems and protocols. This relationship underscores the importance of the P vs. NP question not just in theoretical computer science but also in practical applications like cybersecurity.
There are a good chunk of standard gates in Boolean logic, but foundationally you only need two: NOT, plus either AND or OR. You can construct the others from these parts. OR, for example, is NOT(NOT A AND NOT B): the AND only returns true if both A and B are false, so the whole construct behaves as an OR. The same trick creates AND from OR.
The thing that jumps to my mind is not clear, but isn't P vs NP just how thinking works? Understanding understanding is how to 'solve' it, maybe. It is how epistemology works; it's about self-consciousness and the difference between observation and thinking. I can think about my observation, and I can think about thoughts when they become observable. That makes this last sort of observation very suitable for developing math...
As a mathematician and former chemical engineer, I absolutely want P=NP and practical useful algorithms to be found. Billions of important practical engineering and science problems and logistics problems need solving. I couldn't give a shit about encryption. But it is sadly almost certain that P is not equal to NP.
9:48 This is misleading, if not downright wrong: "all problems in P can be solved in a practical amount of time". The exponent can be any number, e.g. n⁵⁰⁰⁰, which is impractical for any n>2. There are plenty of polynomial algorithms that aren't practical because the fixed exponent is too high, or because the constant C in the polynomial is too large. Likewise, at 11:20, some exponential algorithms are faster in practice than polynomial ones for reasonable inputs, e.g. f(n)=1.01ⁿ.
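That crossover point is easy to locate numerically. A rough sketch (the exponent 50 and base 1.01 are arbitrary choices for illustration; logs are compared to avoid float overflow):

```python
import math

# Compare the growth of a "bad" polynomial n^50 with a "mild" exponential
# 1.01^n, working with logarithms so large n doesn't overflow floats.
def poly_log(n):
    return 50 * math.log(n)        # log of n^50

def expo_log(n):
    return n * math.log(1.01)      # log of 1.01^n

for n in (10, 1_000, 100_000, 1_000_000):
    winner = "1.01^n larger" if expo_log(n) > poly_log(n) else "n^50 larger"
    print(f"n={n:>9}: {winner}")
# For small and moderate n the exponential is far smaller; it only
# overtakes n^50 somewhere between n = 1,000 and n = 100,000.
```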
I've always been in the "NP is just P we haven't figured out yet" category. That's how *all* science *_has always_* been; just like how it turned out magic was just a symptom of our lack of understanding of physics, NP is just a side-effect of our lack of understanding of mathematics.
An awesome video that gave a lot of insight into these kinds of problems, and it would be nice if the solutions were always used for the benefit of society
Minimum Circuit Size Problem ... nature probably solved it (implicitly, by trial and error, via the construction plan played out within DNA)? If we knew the solution, would it mean our brain has been modeled somewhat according to this "size" (probably not necessarily)? The brain is not a circuit in the typical sense, but could it be described at least by an infinite number of them? If not, why?
12:11 "Any problem would be well within our reach." This is false. Solving P=NP does not give a solution for NP-hard problems. More importantly, there is a complexity class (∃ℝ, the existential theory of the reals) that is at least as hard as NP and believed to be harder. Solutions to those problems are not known to be verifiable in polynomial time.
5:25 The "truth circuits" are essentially analogue neural nets that can't take into account partial voltages the way digital or biological nodes can when processing the input layer and calculating an output.
This is a semantics thing, but I think we ought to call this "computational science" rather than "computer science". My understanding of "computer" implies a system that performs computations, whereas this pertains more to the study of computation itself. Not to be pedantic or anything, I just feel like P=NP is a more general topic that is somewhat independent of the systems that perform computations.
Maybe the two scenarios have a connection to each other? Like, if I already have a solution, or have solved a complex problem before, I can solve the problem easily and faster. But if I had no solution at all, it would be hard and would take time to solve. So maybe P=NP vs P≠NP indicates there is some relationship between the two scenarios?
"Tricky", this is the absolute "reinventing the wheel" situation, not a problem to do, it is Euler's flash-fractal projection-drawing of e-Pi-i 1-0-infinity Singularity positioning in logarithmic resonance, universal relative-timing. (Can't you see, hear or sense-in-common?, this mono-dualistic modulation cause-effect mechanism is absolutely everything) I can't either, it's the original Vapourware of Perfect Gas Thermodynamics terminology.
Transistors are not switches, they're current amplifiers that can be configured as a switch. Saying a transistor is a switch is like saying a car is a taxi. Sure a car can be a taxi, but not all cars are taxis.
An algorithm running in polynomial time is equivalent to the following fact: doubling your input multiplies the processing time by a bounded amount. Btw, great that the video focused only on the problem itself, not the prize money.
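That doubling characterization can be seen with two toy running-time functions (both hypothetical, chosen only for illustration): for a polynomial T(n) = c·n^k, the ratio T(2n)/T(n) is the constant 2^k, while for an exponential it grows without bound.

```python
# If T(n) = c * n^k (polynomial), doubling n multiplies the time by 2^k,
# a constant independent of n. For T(n) = 2^n the factor is 2^n itself,
# which blows up as n grows.
def ratio(T, n):
    return T(2 * n) / T(n)

cubic = lambda n: 5 * n**3     # polynomial: ratio is always 2^3 = 8
expo = lambda n: 2.0**n        # exponential: ratio is 2^n

for n in (10, 100, 500):
    print(n, ratio(cubic, n), ratio(expo, n))
# The cubic ratio stays at 8.0; the exponential ratio keeps exploding.
```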
Doing one hard puzzle might take an infinite amount of time, but running an infinite number of hard puzzles at once would give you at least ONE solution that you can verify.
Kinda like the liar's paradox. If the sentry is a "truth" teller, he would point at the true location of his home. If he is a liar, he would lie to the robot about the location of his home and point to the opposite of his home (liar land), i.e. truth land.
I solved this problem earlier this week. I published my first rough draft earlier this week. It turns out P is not equal to NP. I have rigorously proven it using very advanced mathematics that was thought impossible to do. Amazingly, the proof shows us two amazing facts: all polynomial time problems have optimal substructure and there is an intricate relationship between mathematics we grew up learning: commutativity, associativity, and one more property: logical universality. It turns out, they don't play nice together and therefore P is not equal to NP. You can't have all three properties.
@@Schnorzel1337 I found a gap in my proof, but it is not a large gap. Basically, I use two way containment to show that P is equivalent to polynomial time dynamic programming. However, one of the directions has a slight mistake in the proof. If I fix that, then yes, I will have proven it. I have spent years and years (decades now) trying to solve this problem chasing it through an amazing journey that I could write a book on. I went through group theory, ring theory, semigroups, magmas, etc.... Honestly, I thought I had proven the opposite, but I realized my attempt to prove P=NP led me to a proof of P!=NP.
"All polynomial time problems..." By Rice's theorem there is no general way to decide whether a problem is in P. On the other hand, that's what natural proofs assume, yet natural proofs have been proven independent of P vs NP. Sorry.
It could turn out that quantum-powered AI could break any user security code, and that the internet architecture itself would have to be a configuration of trillions of quantum computers, each acting as a dynamic code generator. Or it could be that communication across a quantum internet would involve classical computers coordinating the entangling of gazillions of photons in two quantum computers located hundreds of miles apart in order to communicate.
If you've heard of Asimov's "I, Robot", the answer is always "My responses are limited, you must ask the right questions" until you finally hear "That, detective, is the right question". If you are unable to find an answer to a question, you should at least consider that you might not be asking the right question to begin with.
My discernment is tingling and telling me I'm in the right place; considering "gatekeeping hate". 'Seems like someone who works on Quantum computers doesn't want this to be known.' If this challenge is further advanced, the solution can lead the cost of quantum computers going wayyyyyyy down, due to the efficiency of the "Compound-hypervisors" which work together to solve more computations and store memory efficiently, without overheating. Just a sense, don't mind me...
given: The separate experience of many databases, each Ai'ed, each as one The exponent is adding up all the ones. an exponent of the same problem, as a whole, equals a solution
In Sha Allah you will solve this
@@test-zg4hv Wait when did we get sex robots? Where can I buy one right now? 😅😁
@iamtheusualguy2611 Absolutely. When I started studying CS I had a hard time with mathematics and didn't see much use for all the theoretical parts. But it didn't take long until I saw the wonders of Logic, Complexity Theory, and Computability Theory. I consider them the absolute highlights of my time at university, and it often felt like a kind of mathematical philosophy, so deep were the results I learnt there.
I wish more people would know about the deep wonders of theoretical computer science.
This video is good, but a few small details to add.
1. Solving P vs NP doesn't mean all encryption breaks overnight. RSA encryption could be broken if a polynomial algorithm is found for an np complete problem, but only if the polynomial isn't too big. Even polynomial algorithms can be unusable in practice. This is all assuming an explicit constructive proof P = NP is found. Non constructive proofs will not help solve any of the real world problems, and if it is shown P is not equal to NP, nothing will change. Even if an algorithm to break RSA is found, we can build other encryption methods using NP Hard problems like Travelling Salesman Problem (shortest path version).
2. The Travelling Salesman Problem (TSP) is NP-hard in its usual statement. It is only NP-complete if you ask the question "is it possible to find a path shorter than a given length". If you pose the problem of finding the shortest path, that is not verifiable in polynomial time.
If TSP Shortest Path is NP-Hard due to the complexity of verifying a solution, wouldn't that render it unusable for encryption, as quick verification is a requirement?
@@TheEzypzy No. If you actually know the solution, it is easy to verify. If you do not know the actual solution, it is hard to verify if a proposed solution is actually a solution. If I know the actual shortest path, it is easy to see if someone else found the same shortest path.
What is hard is if I do not know the shortest path, but I have a pretty short path, how do I ensure it is actually shortest.
@@patrickgambill9326 How do you verify (for yourself or others) that the answer you claim to be correct is actually correct?
Yeah, you're right :) Like the proof that LPs can be solved in poly time :)
It's nice to know, but that algorithm is unusable compared to the simplex algorithm :)
I think RSA rests on a problem in NP ∩ co-NP rather than an NP-complete one, and it can be broken by Shor's algorithm in polynomial time, but more recently there have been proposals for quantum-safe cryptography that use random walks in higher dimensions with secret basis vectors, I think.
So glad to see Shannon mentioned. He is massively underrated, he basically is the father of modern computers (let alone Communication and Information Theory)
If I was ever getting a tattoo, it would be the formula for entropy
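Presumably that would be Shannon's entropy of a discrete source, something like:

```latex
H(X) = -\sum_{x} p(x)\, \log_2 p(x)
```

measured in bits when the logarithm is base 2.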
Shannon cannot be underrated. He's taught in CS school for entropy, so as programmers we understand the concept of compression, Huffman encoding, and the theoretical minimum limit of compression, and after that the implications for physics. Shannon's "law" is as big as the second law of thermodynamics. It's as fundamental a truth of this universe, sitting with those massive fundamental verities like the 2nd law of thermo. Because of that, it can't be underrated.
@@vindieu You're validating my point. He has done such massively important things, yet only CS/ECE majors know about him. He should be up there with the scientists known to general audiences
Well, John Nash wasn't mentioned either in Veritasium's video about game theory.
Turing was brilliant and saved untold lives in WWII with his codebreaking work. How they treated him was horrible.
I agree with you; The Imitation Game tells that story.
@@bennyklabarpan7002 Many, many lives were saved because of Alan Turing and his team in breaking the Enigma code.
@@bennyklabarpan7002 still doesn't change how much he contributed to science
@@bennyklabarpan7002 Genuinely interested in your perspective. Are you saying as a net no lives were saved (ally and axis combined) or no lives even of the allies were saved? My assumption was he did save allies' lives indirectly and did have a hand in winning the war for the allies.
@@bennyklabarpan7002 what makes you say the conflict was prolonged by Turing though?
Im a teacher and I have to applaud your video style. It's excellent from an educational perspective with very sharp and clear visualisations and superb pace. Bravo.
great video but we must be in velocity of a better living styles in 25 many jobless people and empty retails building! this is great for the mind but we need appilcation theory work in energy and goodness in this country ai written!!!chp
This was excellent! Scott Aaronson praised it highly for accuracy but did state that it would have been improved by addressing the difference between Turing machines and circuits (i.e., between uniform and non-uniform computation), and where the rough identities “polynomial = efficient” and “exponential = inefficient” hold or fail to hold.
UG in CS and MS in CS, now jobless. Life is very hard. Watching the subject in this video brought me to tears. The enthusiasm I showed in each and every class: Theory of Computation, compiler construction, and microprocessor integration. All three core subjects are simplified in this video. Thank you for bringing my memories back. I will fight again to survive. 💔
Brother, I'm from a BA background with some knowledge of CS/programming, and I was planning to take an MSc in CS. Can you please give me some advice?
@@damareswaramaddepalli9714 I'm also from India. I took arts for my bachelor's degree, but through self-study I have some knowledge of CS and programming, and I want to learn more, so I want to pursue an MSc in CS. So what should I do? I'm confused. You're experienced, which is why I wanted your advice, brother 🙏
@@SumiNaga19 A master's in the US is not meant for learning. People just need a work permit to work in the USA 🇺🇸, so they pursue a master's. You won't study anything in a master's; rather, you will end up in a 6-hour on-campus part-time job, and after the part-time job go home, cook something, and sleep. After the master's you will be jobless. Hundreds, if not hundreds of thousands, of students are staying in the USA without a job on their initial OPT from the past two semesters. And some people I saw didn't get any job even on their STEM extension; they are still doing part-time work, and hundreds of students went back.
you are just incompetent
@@damareswaramaddepalli9714 Where do you think you can actually learn something? Maybe at the PhD level? Or in Europe/Asia? I'm now doing an MS in CS in the USA, and yes, some classes aren't that good, but some are good. However, I agree most are jobless; better to work at Walmart.
What a well constructed video. I appreciate how it went from basic Computer Science knowledge and gradually introduced higher level Computer Science topics in a simply put way.
Watching a video on advanced computer science topics and not knowing how to adjust the playback speed on youtube is making me laugh
Clearest explanation I’ve seen. Maybe it could’ve been paced slightly slower at points but nothing a manual pause and rewind won’t fix.
I love how he says 'if humanity survives long enough'. Great reminder, we really need to do better.
I'll get right on it 😉
As someone who has a degree is philosophy and now working on a cs degree, it’s fascinating to see all the overlap when it comes to thinking about these kinds of problems. Great video!
His last question "Will we be able to understand the solution?" is the most profound question of all.
It reminds me of the computer "Deep Thought" in "The Hitchhiker's Guide to the Galaxy", which spent generations trying to solve the problem of "Life, the Universe and Everything".
After many hundreds of years it came up with the answer.....42.
But what does THAT mean ????
42 just means that the author of The Hitchhiker's Guide to the Galaxy ran out of ideas of what to write, so he threw a die and put down the value he got.
Exactly! Let's say that matter and energy sprout out of nothing, and both can cancel out. What made them separate to start with, and rejoin later? c, the speed of light? c can survive any black hole, but we can't appreciate the speed of light as a "thing". So 42 could be an answer, but we can't understand it as a solution.
It means that the person who posed the question and the brainy computer were functional idiots. Garbage in and out. In other words, bad question, bozo.
It's always about semantics, isn't it? 💀
I wanted to like this comment, but it's at 42 likes
One of the best explanations of P, NP. I recalled my Theory of Computation, Information Security lectures and found it really fascinating. The insights are really cool and best explained. Thank you so much !!!
As a theoretical computer scientist I can tell you there is false information in this video, like the claim at 9:46. But overall a good attempt at a hard concept to grasp
Which part was false at 9:46?
It's mind-boggling how many consistently great videos Quanta Magazine puts out. Thank you for this gift to the world.
lol settle down
@@codycast no thanks
Plain Boolean formulas cannot operate like a Turing machine because they have a fixed bound on computation (once you assign values to the variables you can simplify the expression in a fixed amount of time). For a paradigm to be Turing complete it must be possible for it to run forever, which requires conditional loops. Boolean circuits can be made Turing complete by introducing registers (for memory) and a clock to synchronize computational steps, but they can no longer be represented as pure Boolean formulas.
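A toy sketch of that distinction, assuming nothing beyond Python booleans: a pure combinational step function (the "gates") plus a clocked register loop gives you iteration, which no single one-shot Boolean formula can express.

```python
# A combinational (pure Boolean) step function plus a clocked register
# yields iteration. Here the "circuit" is a 3-bit ripple-carry increment;
# the while loop plays the role of the clock, and `state` is the register.
def step(bits):
    # combinational logic only: increment a 3-bit value (a = MSB, c = LSB)
    a, b, c = bits
    return (a ^ (b and c), b ^ c, not c)

state = (False, False, False)   # register contents at reset
ticks = 0
while True:                     # the clock: one step() per tick
    state = step(state)
    ticks += 1
    if state == (False, False, False):
        break                   # halt condition: counter wrapped to zero
print(ticks)  # 8 ticks to wrap a 3-bit counter
```

The step function alone is a fixed-depth Boolean circuit; only the register-and-clock wrapper lets it "run" for a number of steps that depends on the data.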
Thanks for this, when I took computer science classes at UTD, this was glossed over in a slide, and given maybe two sentences in the textbook.
📝 Summary of Key Points:
📌 The P versus NP problem is a conundrum in math and computer science that asks whether it is possible to invent a computer that can solve any problem quickly or if there are problems that are too complex for computation.
🧐 Computers solve problems by following algorithms, which are step-by-step procedures, and their core function is to compute. The theoretical framework for all digital computers is based on the concept of a Turing machine.
🚀 Computational complexity is the study of the resources needed to solve computational problems. P problems are relatively easy for computers to solve in polynomial time, while NP problems can be quickly verified if given the solution but are difficult to solve.
🌐 The P versus NP problem asks whether all NP problems are actually P problems. If P equals NP, it would have far-reaching consequences, including advancements in AI and optimization, but it would also render current encryption and security measures obsolete.
💬 Circuit complexity studies the complexity of Boolean functions when represented as circuits, and researchers study it to understand the limits of computation and optimize algorithm and hardware design.
📊 The natural proofs barrier is a mathematical roadblock that has hindered progress in proving P doesn't equal NP using circuit complexity techniques.
🧐 Meta-complexity is a field of computer science that explores the difficulty of determining the hardness of computational problems. Researchers in meta-complexity are searching for new approaches to solve important unanswered questions in computer science.
📊 The minimum circuit size problem is interested in determining the smallest possible circuit that can accurately compute a given Boolean function.
📣 The pursuit of meta-complexity may lead to an answer to the P versus NP problem, raising the question of whether humans or AI will solve these problems and if we will be able to understand the solution.
💡 Additional Insights and Observations:
💬 "The solution to the P versus NP problem could lead to breakthroughs in various fields, including medicine, artificial intelligence, and gaming."
🌐 The video mentions the concept of computational complexity theorists wanting to know which problems are solvable using clever algorithms and which problems are difficult or even impossible for computers to solve.
🌐 The video highlights the potential negative consequences of finding a solution to the P versus NP problem, such as breaking encryption and security measures.
📣 Concluding Remarks:
The P versus NP problem is a significant conundrum in math and computer science that explores the possibility of inventing a computer that can solve any problem quickly. While finding a solution could lead to breakthroughs in various fields, it could also have negative consequences. The video discusses computational complexity, circuit complexity, and meta-complexity as areas of study that may contribute to solving this problem. The pursuit of meta-complexity raises questions about whether humans or AI will solve these problems and if we will be able to understand the solution.
Generated using Talkbud (Browser Extension)
2:17 the study of the inherent resources, such as time (T) and space (S), needed to solve a computational problem
woah woah, so precisely put definition. awesome.
the quality of these videos is so incredible
it's fun looking into the CSAT problem and trying to figure out exactly why you can't just assume the output to be 1 and trace it back to a random possible input combination.
It eventually comes down to these two facts:
-----------------------------------------------------------------------
1. logic gate outputs can get shared between logic gate inputs.
E.g: say, the output of an OR gate is connected to 2 (or more) AND gate inputs.
2. Logic gates AND and OR map multiple inputs to the same output. This means they are many-to-one functions and do not have inverse functions. As a result, when you try to trace back from a given output, you have to guess an input every time. This wouldn't be a problem on its own, but due to our first point (logic gate outputs can get shared between logic gate inputs), when two guessed inputs end up meeting at a common output, they won't necessarily match, so you end up with 1 and 0 as the output value simultaneously.
And that's where the computational complexity increases drastically, because now you have to keep an account of all those common outputs branching into multiple inputs, so that your guessed inputs align when they meet there.
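The many-to-one point above can be made concrete with a tiny Python sketch that enumerates gate preimages (gate names and the helper are my own, just for illustration):

```python
from itertools import product

# AND and OR are many-to-one: several input pairs map to the same output,
# so tracing a circuit backward from its output forces you to guess.
AND = lambda a, b: a & b
OR  = lambda a, b: a | b

def preimages(gate, out):
    """All input pairs (a, b) that a 2-input gate maps to the output `out`."""
    return [(a, b) for a, b in product([0, 1], repeat=2) if gate(a, b) == out]

print(preimages(OR, 1))   # three candidate input pairs: the backward guess is ambiguous
print(preimages(AND, 0))  # likewise three candidates when tracing a 0 backward
```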
One of the best explanations of P and NP. I recalled my Theory of Computation and Information Security lectures and found it really fascinating. The insights are really cool and well explained. Thank you so much !!!
I have discovered that some programs have parts of code in class P and parts in class NP. There are classes of relaxations using heuristics that can solve some problems, but with no guarantee of success or optimality.
I think Dijkstra (thus also A* Traversal) already encapsulates the "variational method/technique"
discrete analog to guarantee the "best possible/shortest path" for a search/traversal" of-sorts.
Sort'a-like how we *optimize* for the distance between points to be a straight line in Euclidean space
using variational principles/techniques, but in discrete spaces with edges carrying weights,
we can already see from Dijkstra/A* that it is, indeed, the shortest/least path;
Dijkstra already gives this information when the algorithm reaches its halt state.
I know you already know this, I'm just stating all of this info for the sake of completion & to fill in the gap for an audience who may interact with the above problem/post.
I understand you are NOT referring specifically to traversal algorithms
(which, I think, was EXACTLY what your original statement was attempting to express/convey)
but rather to GENERALIZE the term "algorithm" for which we know solutions may/may-not exist,
but we are NOT sure whether it is OPTIMAL or how to even implement it in the first place.
So there's NO REASON to "Question" whether/NOT certain algorithms are already "best"
BUT, it is hard to know (for algorithms, in general) whether we can even find a solution.
Such as with Chess or Sudoku or the n-Queens Problem / Durer's Magic Squares / Mastermind / any verifiably-correct-solution puzzle given to an algorithm as input,
in the sense that there are multiple known/implemented/algorithms for which the above tasks can easily verify a solution as input, but for which, we do not know the best strategy to do so.
If we look at Mastermind (MM) for example, the "best known" implementation is Knuth's five-step solution for a verifiably-correct result, with the VERY inefficient
"trade-off" of a HUGE, n-factorial (n! to be more precise) amount of memory on the stack at program runtime.
This is actually a problem that we need to address
(I am not sure whether there is research being conducted in this direction or not - it would be interesting to know if there is ...)
Knuth's algorithm for solving MM (in 5 steps) works excellently on a small enough input, i.e. if you consider say 6 colors,
but say that we parametrize the number of colors for the input of Knuth's Algorithm, then the problem becomes n!-hard to solve.
That's just my two-cents on Knuth's Algorithm. It's NOT even hard to prove, we just have to look at the first 1 or 2 steps
of Knuth's algorithm to see WHY it is n!-difficult in terms of storage.
Knuth's Algorithm starts by LISTING-OUT all the possible combinations for the correct color-sequence,
& then eliminating the color sequences that cannot possibly be correct.
Now if we scale the input to say, 10 000 colors ...
[That would be like having 10 000 different hex/RGB values for color inputs or just 10 000 different "things"]
(it's irrelevant what the color inputs mean; solving MM has other mathematical applications, as any algorithm does, specifically for larger inputs)
[The colors are just place-holders]
then, the result of scaling the input so drastically, is that we cannot efficiently solve MM this way, because, ya-know ... Stack Overflow!
So, Knuth's is efficient on 6 Pegs (in terms of STEPS), but performs very poorly, if at all, on say 10 000 Pegs (due to Memory Limitations).
So ... there's questions ... MANY QUESTIONS ...
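For scale, the candidate set that Knuth's first step enumerates has colors**pegs entries (strictly a power rather than literally n!, but the memory blow-up described above is exactly the point). A tiny, hedged sketch:

```python
# Knuth's method begins by listing every possible secret code.
# With c colors and p pegs there are c**p candidates, so storage
# explodes as the color count is scaled up.
def candidate_count(colors, pegs=4):
    """Size of the initial candidate list in Knuth's algorithm."""
    return colors ** pegs

print(candidate_count(6))        # 1296 codes: classic Mastermind, trivial to store
print(candidate_count(10_000))   # 10**16 codes: hopeless to materialize in memory
```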
I'm an electrical engineering graduate. Besides understanding electricity, what really shakes me is understanding how a computer thinks. I really love it as a side hobby. 😊
I briefly dabbled into this topic in a Computer arithmetic hardware implementation course that I took in grad school ( MS Microelectronics Engineering) . Mainly the part on circuit complexity ( as that was the applicable concept to the course).
It was a really interesting topic/course, especially the part where I got to optimise a hardware implementation of an FFT algorithm on an FPGA by applying the techniques from the course.
Best video I’ve seen in a long time. Honestly. Great, on the point presentation of distinct very important, fundamental concepts of our time
Phenomenal visuals and amazingly concise definition. Simply beautiful.
Absolutely fascinating video, animators deserve a raise
On it ..
It's like the RSA encryption 'problem': it took normal computers an exponential amount of time to brute-force RSA encryption, but only a polynomial amount of time on a quantum computer.
to add consistency: iff constructively solved with small constant/polynom to yield presented disruptions, PKI failure would be no factor anyway because zero-marginal cost economy/society equals non-monetary economy provided the materialized P=NP (non-deterministic processor (NDP)) is public domain, e.g., via IPFS
P=NP
Nobody is pointing out the potential halo effect of quantum computing: that quantum computers will be able to find minimum circuits and thus design vastly more powerful classical computers. That's probably because everyone expects quantum computers will be so unimaginably faster than even such minimized-circuit classical computers that it won't even matter. But I suspect classical computers will act as an interface to quantum computers. And they will, at least initially, be used to solve different kinds of problems and perform different kinds of internet tasks.
No they won't. Quantum computers don't work like regular computers. They have a really high error rate, and are slower than regular computers for any tasks that are sequential. Quantum computers are only useful for sustained parallel operations, like graphics processing, computer simulations, brute-forcing private keys and passwords, etc. Right now, quantum computers do exist, but their error rate is too high to be usable for any of those tasks, and there is barely a use for them.
and there is also no need for faster computers. even cheap modern computers can do anything almost instantly that a person normally does on a computer. high performance computing only matters for saving costs on server infrastructure, graphics processing and scientific work like computer simulations, physics simulations, etc.
Okay, can we talk about how cool this animation is though??
I was struggling with all these concepts; this video was a gift for me! Keep up the good work.
What a delight to watch this and “feel” like I understand more
Well done - and huge thanks
5:13 You'll have to amend the comment on how "Boolean formulas can operate like a Turing machine." There is something to be said about representation of decision problems (computably enumerable sets being Diophantine), but a fixed circuit has a maximal number of binary inputs, while an abstract Turing machine can be initialized with arbitrary-size inputs.
Ugh...listen. I'm glad you took the time to explain the P=NP problem...but your conclusion that if P=NP, then suddenly we have the polynomial algorithm for every NP problem is just silly. Knowing that P=NP just tells us that a solution exists. It would not make us any closer to finding it.
Furthermore, not all P-time programs halt in a practical amount of time! The video gets this wrong!
A solution that takes a number of operations proportional to the length of the input to the power of Graham's Number is in P, but it will not halt before the sun dies. P is often taken to mean practical time because:
a) No matter what the exponent is, exponential time is still worse for sufficiently large inputs. (But for some exponents, this is a totally theoretical concern because by the time the graphs cross over, neither program will halt before the sun dies and it's not close. There are plenty of possible exponents where some exponential time algorithms are more practical than a P-time algorithm with that exponent because the exponential time algorithm at least halts on *some* inputs.)
b) Computer scientists often think of P as meaning P-with-very-small-exponents because the majority of P-time algorithms we've found so far have very small exponents. But not all of them: there's absolutely papers in theoretical computer science that prove a problem is in P because it can be solved with n^40000-ish steps. (For context, n^3 is a bad time complexity for most practical applications and n^4 is terrible.)
Nah, if we have a polynomial-time solution to any NP-complete problem, we have one for all; definitionally, any NP-complete problem can be expressed as (and used to express) any other NP-complete problem, through a conversion operation that's polynomial in time, and importantly with a polynomial growth in input size. A polynomial of another polynomial is polynomial, so if a polynomial-time solution to one problem is found, that same solution can be applied to the rest, through some conversion step of the problem.
Large polynomials, likely, but polynomials nonetheless.
@@TheKahiron and what is that conversion operation?
@@chrism6880 search word you're looking for is karp reduction. Providing one from a known np-hard problem is the proof for np hardness, so will be well published for any given problem (but no, there's no universal operation.) Cook-Levin shows every NP-problem as reducible to SAT, so that reduction is pretty much trivial and universal; from there it's just daisy-chaining reductions in the "tree" of proofs (viewing SAT as the root).
So, let's say you want to solve the TSP problem, and some mofo's stumbled upon a polynomial-time solution to the clique problem. You might then take your TSP problem, reduce that to SAT, from there to 3SAT, to the Independent Set problem, and from there to Clique.
@chrism6880 The search word you're looking for is Karp reduction. My first comment was a bit shoddy; a Karp reduction (expressing one problem in terms of another, in polynomial time) is essentially trivial for converting any NP problem to SAT; this comes with the Cook-Levin theorem and proofs thereof. If a problem's in NP, showing NP-hardness is sufficient for NP-completeness, which is done by providing a reduction from any NP-complete problem to the "new" one, so a published Karp reduction will already be known and available (though these may need to be chained, which is fine, since polynomial-ness is transitive). Say you want to solve TSP, and some mofo's solved the CLIQUE problem already. Reduce TSP to SAT, then follow the chain of known proofs, something like SAT->3SAT->INDSET->CLIQUE, and you've got a 4-step reduction doable in polynomial time (and polynomial growth) for expressing your TSP as the hypothetically "easy" CLIQUE.
(EDIT: Above was a separate comment; disappeared when the following paragraph was added in the youtube app, dunno why.):
Also, sorry if my original comment was a bit shoddy, haven't read this stuff for years; had to break out an old coursebook to even get the names/proof relations.
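For the curious, here's one concrete link from a chain like the one described above: Independent Set reduces to Clique by complementing the graph. A toy Python sketch (the brute-force checker is exponential on purpose; only the *reduction* is polynomial):

```python
from itertools import combinations

def complement(vertices, edges):
    """Karp reduction: an independent-set instance becomes a clique instance."""
    all_pairs = {frozenset(p) for p in combinations(vertices, 2)}
    return all_pairs - {frozenset(e) for e in edges}

def has_clique(vertices, edge_set, k):
    """Brute-force existence check for a k-clique (exponential, for demo only)."""
    return any(all(frozenset(p) in edge_set for p in combinations(s, 2))
               for s in combinations(vertices, k))

V = [1, 2, 3, 4]
E = [(1, 2), (2, 3), (3, 4)]      # the path graph 1-2-3-4
co_E = complement(V, E)
print(has_clique(V, co_E, 2))     # True:  {1, 3} is independent in the path
print(has_clique(V, co_E, 3))     # False: the path has no independent set of size 3
```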
There are some contradictory statements in your video, but it is a great presentation of P and NP problems. Thank you for creating such a wonderful explanation in layman's terms.
Advancements in quantum computing's measurement problem could help settle P = NP. Quantum computers can represent 2^N states across N qubits in quantum memory. So all P and NP problems are computational. The bottleneck arises with measuring all the positions in the computation at once to get a complete view of the quantum memory. So, if advancements are made in measuring qubits, then it may be possible to measure and observe all positions at once, which would bear on P vs NP.
I think an important point to be made in the field of algorithms is that a lot of the efficiency of certain algorithms depends on input size.
Theoretically, if you have two algorithms for the same problem, a polynomial one that runs in n^200 time and an exponential one that runs in 2^n time, and your input size is always small, say 2 or 3, then the exponential algorithm is actually more efficient.
You can also apply this idea within P itself. For example, with sorting algorithms, if we know the input size will always be very small, it is sometimes more efficient to use what are taught as the "slow" algorithms (like selection sort / bubble sort) vs the "fast" algorithms like merge sort or quicksort. Some may argue this can't be, since bubble sort runs in n^2 vs merge sort's n log n, so merge sort is always at least as good; however, there are also hidden constants in these runtimes that easily get overlooked. You should really see the runtimes as (n^2 + c) and (n log n + d) respectively, where c and d are constants and c < d. You can see the constants as additional overhead needed to set up the algorithm.
In academia we usually ignore the constants as n grows large, but in practice there may be cases where n is guaranteed to be small, to the point where it is actually more efficient to use the simpler "slower" algorithm. A good analogy my professor made: why use a sledgehammer to pound in a nail when a regular hammer will do? (This also applies to deciding which kinds of data structures to use.)
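The constant-overhead trade-off described above can be sketched numerically; the constants c and d here are purely hypothetical, chosen just to show the crossover:

```python
import math

# Hypothetical cost models: a quadratic sort with tiny setup overhead (c)
# versus an n*log(n) sort with larger setup overhead (d), with c < d.
c, d = 1, 50                              # illustrative constants only
bubble   = lambda n: n ** 2 + c           # "slow" algorithm's modeled cost
mergeish = lambda n: n * math.log2(n) + d # "fast" algorithm's modeled cost

for n in (4, 8, 64):
    print(n, bubble(n) < mergeish(n))     # the quadratic model wins only for tiny n
```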
Anyways nicely made video! 👍
No, it isn't.
Simple things interacting can lead to complexity and emergence; this applies to algorithms...
Depending on whether or not the sentry ALWAYS tells lies, or just CAN tell lies, and whether or not you can ask multiple questions, then there can be an optimal guaranteed solution to the problem. All you have to do is ask a question with a known answer. "What is 2+2?" If they give you a false answer, then they lie, and the remaining answers just need to be inverted.
If you are limited to one question, it could be this: "If I asked the other guard which way is the safe path, what would he tell me?" Both guards would point to the unsafe route. Take the other route to safety.
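The one-question strategy above can be sanity-checked by brute force over the two sentry types; a small Python sketch (the left/right layout is an arbitrary assumption of mine):

```python
SAFE = "left"   # hypothetical layout; which side is safe doesn't matter

def says(liar, true_answer):
    """A guard reports true_answer, flipped if they always lie."""
    other = {"left": "right", "right": "left"}
    return other[true_answer] if liar else true_answer

def embedded_question(guard_is_liar):
    """'If I asked the other guard which way is safe, what would he say?'"""
    other_reply = says(not guard_is_liar, SAFE)  # the other guard's hypothetical answer
    return says(guard_is_liar, other_reply)      # this guard's (possibly lying) report

# Both guard types name the unsafe path, so the robot takes the opposite one.
print(embedded_question(False))  # truth-teller: "right" (the unsafe side)
print(embedded_question(True))   # liar: also "right" (the unsafe side)
```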
I'm not sure the answer in the video is correct.
I'm in a bit of a confusion, in the video it's mentioned that some "seemingly" NP problems were later solved using clever polynomial algorithms which brought them back to the P problems. - (1)
But, it's also mentioned that all NP problems are equivalent and solving one would mean that P=NP and all NP problems would be theoretically solvable/computable. - (2)
In that case, comparing (1) and (2), doesn't it mean we've already sort of solved it already?
If someone can help me clear this conundrum, I'd really appreciate it. Thanks in advance :)
Not all NP problems are equivalent, but all NP-complete problems are equivalent. A problem said to be NP-complete must be in NP AND every problem in NP must be reducible to it in polynomial time.
For example, prime factorisation is in NP. If you find a polynomial-time algorithm that solves it (which would be a huge discovery), that does not mean P=NP, because there is no known algorithm to reduce an NP-complete problem (like SAT) to prime factorization.
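To make factoring's NP membership concrete: verifying a claimed factorization is a single multiplication, while finding a factor takes a search. A toy Python sketch (the semiprime 10403 = 101 * 103 is my own example):

```python
# Finding a factor is slow (brute-force search over candidate divisors),
# but *verifying* a claimed factorization is one multiplication. That fast
# verification is what puts factoring in NP.
def factor(n):
    """Smallest nontrivial divisor of n, by exhaustive search."""
    return next(d for d in range(2, n) if n % d == 0)

def verify(n, p, q):
    """Certificate check: two trivial bound checks and one multiplication."""
    return 1 < p < n and 1 < q < n and p * q == n

p = factor(10403)                        # searches ~100 candidates to find 101
print(p, verify(10403, p, 10403 // p))   # the check itself is instant
```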
Ah I see, thank you so much for your response and the apt example you've provided me with.
It seems there was a misunderstanding on my part: I ended up thinking all NP problems are equivalent, and didn't consider that they have to be NP-complete to be equivalent to each other. So I think to have P=NP we need an NP problem which is also *NP-complete* that can be solved in polynomial time.
Ok maybe I'm dumb but at 1:30 how would answer C determine whether a lying sentry is telling the truth or not?
They're always going to lie, so they're going to point to the path that's not their home, which will be the safe path.
Intuitively I would say there will always be problems too difficult for computers to solve. There will always be NP problems. Even with quantum computers: as computer architectures help find solutions to more complex problems in polynomial time, new NP questions will keep arising. It would be like solving all the mysteries of life in computational time.
Some of us are developing the octonionic theory of unification in which gravitation and quantum theory are emergent phenomena. It is far more successful than other ideas. Please look it up. It is unprofessional that Quanta is wedded to ideas coming only from Perimeter, Princeton and such. There are other countries on earth:-)
Thank you for a great explanation of p vs np! Never got it during my computer science studies
Verifying that a number is prime is in P (proved in 2002 by Agrawal et al.). A good example of a problem that, despite being in P, cannot be solved quickly in practice. Be careful with comparing complexity classes.
thanks, I love complexity, so now I am going to research what you mean by classes of complexities!
This video greatly overblows the real-world significance of P vs. NP. The question is a very theoretical one and, although it would be unintuitive to learn that P = NP, it is by no means a given that a proof of P = NP would leap down from the ivory tower and cause any direct or "overnight" effect at all on the internet, AI applications, business, cryptography, etc. For people other than academics to care, we would need to see new algorithms that are actually greatly more efficient on real-world problems than current real-world approaches are, but no such algorithm is necessarily provided by a proof in this area regardless of its conclusion.
A proof doesn't even prove that such algorithms exist. P == NP is not the same thing as P-with-very-small-exponents == NP.
It could be that the knapsack problem is in P because someone finds an O(n^10,000,000) algorithm. This would be a constructive proof that P == NP but also have essentially no practical implications.
@@brianb.6356 I think the point that the @suzannecarte445 is trying to make above is that, current encryption algorithms & their solvability in less-than Exponential time are NOT our primary concern. Our PRIMARY concern is to establish whether/not we CAN reduce a known algorithm, but just haven't discovered the means for it yet on a program level, so should we continue searching for an implementation that we KNOW doesn't exist? It closes a whole lot of questions on whether or not we CAN (actually) solve certain problems in less steps [in which case, that would be a good thing ... ], while for other problems, we are FORCED to use Recursion, especially because recursion is such a good replacement for the loop in a turing-complete model.
Your video is great. My professor was talking about this in class, and you went into so much detail. I like the parts you included with the other guy too. The back-and-forth was cool!
Can anyone pls explain why exactly the question c at 1:37 would guarantee the right way?
The liar's home is the "land of liars", and the truth teller's home is the "land of truth tellers". A truth teller would point to their home, "the land of truth tellers", but the liar would NOT point to their home. Since their home is the "land of liars", it means they must be pointing to the "land of truth tellers" as well.
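That reasoning can be checked by enumerating both sentry types in a quick Python sketch (the left/right layout is an arbitrary assumption of mine):

```python
paths = {"truth-tellers' land": "left", "liars' land": "right"}  # arbitrary layout
flip = {"left": "right", "right": "left"}

def home_answer(liar):
    """Answer to 'Which path leads to your home?' for each sentry type."""
    home = "liars' land" if liar else "truth-tellers' land"
    truthful = paths[home]                       # the path that really leads home
    return flip[truthful] if liar else truthful  # liars invert their answer

# Both types end up pointing at the truth-tellers' land, i.e. the safe path.
print(home_answer(False))  # truth-teller: "left" (their real home, which is safe)
print(home_answer(True))   # liar: also "left" (a lie about their real home)
```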
Ohh okay, I understand. Thank you very much @@Killua2001
At 1:34 there is a flaw in the logic. If you don't know the answer to the question "Which path leads to the land of liars?", how will the answer to "Which path leads to your home?" help you? You still don't know whether their home is the land of liars or not.
Both will direct you to the land of truth
1:20 That question is more straightforward than mine, which was: What would an enemy guard answer to the question "Does the left path lead to the village of the truth tellers?"
From the enemy's answer to the guard's answer to me, there will be exactly one lie, therefore if the guard says yes then I take the right path; if it is no I take the left one.
Sorry, but I do not get it. First off, we do not know whether only you would be doomed in the bad place, and therefore we do not know where that person lives. Second, if we assume that everyone living in the bad place is doomed, and we ask "Where is your home?": if it's the liar, is their home on the path to the left, and if it's a truth teller, on the right?
EDIT: Your question is logical and smart! :-)
A $1 million prize is not enough reward for this.
I promise you that most people seriously trying to solve these types of problems could not give much of a fuck about the money.
The clout in the mathematics community is worth more than the money being offered.
In fact, these people are so intelligent that they could already work as quants at some bank and make more than a million dollars if they really just cared about the money lol.
@rtothec1234 But you are only talking about mathematicians. What about other people who might solve the problem?
Also, P vs NP is part of computer science too. How about those people? Btw, if these people don't care about money, they shouldn't ask for big salaries in the first place.
"Wow, imagine how many things we could do if P = NP!"
*Jevons' paradox's honest reaction:*
Ok, encryption is important to everybody but the best example is the 3-body problem aka 3-body instability. When you compute the future position of three bodies subject to gravitation you'll get an answer that will always have an error. So now you are searching for an algorithm that will give you zero error. This is easy for two bodies but with three (or more) bodies one must take a time increment and compute the next position at the next time increment that is also the source of the error. Further, the 3-body systems are unstable and chaotic (have no repeating period). Nevertheless you shorten the time increment and lower the error but this will take more time to perform the computation with the result that the error reaches zero when the computing time reaches infinity. Here is a good spot to see that Turing's assertion of giving the computing machine unlimited time in his definition of the universal machine --- *it does not make it universal.*
So, ending my comment right here right now is ok and we can "seriously muse" about all this. Unfortunately the cat is out of the bag and implications are fundamental (if not existential). The thing is that our solar system is said to consist of more than two bodies. Ignoring creation/evolution the solar system will break up sooner or later and even theoretically it is now teetering on the edge and can fly apart any minute. Or you can take the 3-body instability as prime reality, chuck the solar system and figure out that the Moon is not a mass rock (hologram image?). Oh, before you call me out as a flat earther, consider that the Earth, Sun, and Moon are three bodies that are working nicely together thank you.
16:23
What you actually said: "Functions with a number of necessary logic gates grows exponentially with increase in input variables are said to have High Circuit Complexity."
Should've said, 'Functions with a number of necessary logic gates that grows exponentially with an increase in the number of input variables are said to have High Circuit Complexity.'
Even if P vs NP has a positive solution, it has just an existential nature. It does not give us a way to construct a polynomial-time algorithm for any of the NP-hard ones. There are many problems for which we have a guarantee of a solution's existence, yet no one has found it. So even if P vs NP is solved, there is no immediate danger of losing the internet or bank cash.
As Knuth says, existence doesn't necessarily mean embodiment.
If P=NP that does not mean that the polynomial time solution is within reach. Polynomials of high degree also grow fast...
I do theoretical computer science research, and I found some of the explanations by the narrator in this video kinda confusing and misleading. Some things are just false. I love your videos, and before I've seen some great explanations about my area of expertise, and I was amazed by how good you manage to collect your sources to make things correct. But this one is sad :(
Could you elaborate on which parts are wrong/false?
it boggles me how many people in the comments that call themselves computer scientists call this video „the best explanation there is“ with these heavy mistakes or straight up wrong statements
@@Gin-toki Proving P=NP would just prove that P=NP, and that's it. Nothing would change overnight. It's like saying that proving a solution exists for a given equation is the same as knowing the solution, which is not really true. Those polynomial algorithms would still have to be found, and they could still be giga slow in practice. n^10000 is still polynomial but unusable. It would just be a theoretical result. Actually, the methods used to prove it would probably be even more interesting.
If you prove P = NP, you likely have an algorithm that reduces an NP-complete problem to one in P. Since all NP-complete problems can be reduced to one another, this algorithm can also solve all other problems in NP in polynomial time. This would indeed lead to cryptography being broken, et cetera. I don't see why this is an exaggeration.
Fun little fact for the Boolean algebra fans out there: AND, OR, and NOT gates form what is called a functionally complete set. There are other functionally complete sets, including sets that contain only one gate. NAND and NOR are two such gates, each able to replace all three of the normal Boolean operators through complicated replacement proofs. That is to say, you can construct an AND gate with only NAND gates, along with OR and NOT gates. It's kinda wild really...
I remember having to formally prove this and use it to construct boolean networks in first semester of my cs degree. I remember once having to use only those NAND gates. they also had us doing the same thing with multiplexers and constants. I remember one where the entire boolean function was performed by just a tree of multiplexers with 2 inputs, and one with a giant multiplexer with many inputs and multiple selection inputs, which made them look regular and symmetrical, but also would require many more gates, cause multiplexers are composed of multiple different basic gates. There is quite a bit of freedom in terms of what gates can be used. from the many circuits that can perform a boolean function, the one with the least amount of gates is not always the prettiest one.
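The single-gate completeness claim discussed above can be checked mechanically; a minimal Python sketch (the gate helpers are my own naming):

```python
from itertools import product

NAND = lambda a, b: 1 - (a & b)

# The classic constructions: NOT, AND, OR recovered from NAND alone.
NOT = lambda a: NAND(a, a)
AND = lambda a, b: NOT(NAND(a, b))
OR  = lambda a, b: NAND(NOT(a), NOT(b))

# Verify every row of all three truth tables.
for a, b in product([0, 1], repeat=2):
    assert AND(a, b) == (a & b)
    assert OR(a, b) == (a | b)
assert NOT(0) == 1 and NOT(1) == 0
print("NOT, AND, OR all recovered from NAND alone")
```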
Note that even if P isn't equal to NP, this only means that _current_ computing schemes can't efficiently solve these problems. It's already known that time-loops/causal-loops can be used to quickly solve NP Complete problems, we just don't know of any ways to create them with our current knowledge of physics.
Word salad
@@reh3884 there's something called Google if you don't know what a term means.
Very well done basic explanation video(not only about the problem but also CS fundamentals in general). Absolutely loved it. 👍
This is an amazing video, I'm really impressed by the quality of the animations and explanations. Thank you for putting the time in to make this!
@5:17: the schematic has a mistake. One of the inputs of 3 AND gates is not connected to any output, and is therefore floating, making the circuit unpredictable.
@6:04: the AND and OR gates are "current" gates, while the NOT gate is more of a voltage gate. In the real world of logic gates, all gates are "voltage" gates, where almost no current circulates, and the actual minimal representations of an AND and an OR gate are quite different from what is shown here. Interestingly, though, they are so similar that they are actually vertical mirrors of each other.
Conclusion
The statement "security(P) = mathematics(NP)" highlights the crucial dependence of cryptographic security on the mathematical complexity of certain problems. Security protocols rely on problems in NP being hard to solve (not in P). If it were discovered that NP problems are in P, it would have dramatic implications for security, requiring a fundamental overhaul of cryptographic systems and protocols. This relationship underscores the importance of the P vs. NP question not just in theoretical computer science but also in practical applications like cybersecurity.
NP(security(problem)) = P(mathematics(maths.dll))
Very impressive visualizations. I particularly love how the stuff represented actually makes sense and works.
Did NOT expect a Boolean logic primer in a P versus NP video. 🎉
There are a good number of standard gates in Boolean logic, but foundationally you only need two: NOT, plus either AND or OR. You can construct the others from these parts. OR, for example, is NOT(NOT A AND NOT B): the AND only returns true when both A and B are false, so the whole construct becomes an OR. The same trick creates AND from OR.
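That De Morgan construction can be verified over all inputs with a few lines of Python (helper names are my own):

```python
from itertools import product

NOT = lambda a: 1 - a
AND = lambda a, b: a & b

# De Morgan's trick: OR(a, b) = NOT(AND(NOT a, NOT b)),
# so {NOT, AND} alone is enough to build OR.
OR = lambda a, b: NOT(AND(NOT(a), NOT(b)))

for a, b in product([0, 1], repeat=2):
    print(a, b, OR(a, b))  # matches the usual OR truth table row by row
```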
The thing that jumps to my mind is not clear, but isn't P vs NP just how thinking works? Understanding understanding is maybe how to 'solve' it. It is how epistemology works; it's about self-consciousness and the difference between observation and thinking. I can think about my observation, and I can think about thoughts when they become observable. That makes this last sort of observation very suitable for developing math...
As a mathematician and former chemical engineer, I absolutely want P=NP and practical useful algorithms to be found. Billions of important practical engineering and science problems and logistics problems need solving.
I couldn't give a shit about encryption. But it is sadly almost certain that P is not equal to NP.
Some problems are NP, but you can make approximation algorithms that almost solve the problem, well enough, and in P time.
Best video ever explaining the P vs NP problem
9:48 This is misleading, if not downright wrong: "all problems in P can be solved in a practical amount of time".
The exponent can be any number, e.g. n⁵⁰⁰⁰, which will be impractical for any n>2. There are plenty of polynomial algorithms that aren't practical because the fixed exponent is too high, or because they take too many fixed steps from C (the constant in the polynomial).
Likewise, at 11:20: some exponential algorithms are faster in practice than polynomial ones for reasonable inputs, e.g. f(n)=1.01ⁿ.
I've always been in the "NP is just P we haven't figured out yet" category. That's how *all* science *_has always_* been; just like how it turned out magic was just a symptom of our lack of understanding of physics, NP is just a side-effect of our lack of understanding of mathematics.
An awesome video; it gave a lot of insight into these kinds of problems. It would be nice if the solutions were always used for the benefit of society.
Minimum Circuit Size Problem ... nature probably solved it (implicitly, by trial and error, via the played-out construction plan within DNA)? If we knew the solution, would it mean our brain has been modeled somewhat according to this "size" (probably not necessarily)? It is not a circuit in the typical sense, but could it be described, at least, by an infinite number of those? If not, why?
12:11 "Any problem would be well in our reach." This is false. Proving P=NP would not give a solution for all NP-hard problems, and more importantly there is a complexity class (∃R, the existential theory of the reals) whose problems are believed to be harder than NP problems. A solution to those problems can't necessarily be verified in polynomial time.
5:25 "Truth circuits" are essentially analogue neural nets that can't take into account partial voltages the way digital or biological nodes can when processing the input layer and calculating an output.
9:20 Shortest path on a map? Uhhh, you've heard of the Traveling Salesman Problem, right? Or is this (theoretically) computable?
I would say those are two different problems. The Traveling Salesman Problem asks for a loop through every city; the other one just connects two points via the routes between them.
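Right; point-to-point shortest path is in P, and Dijkstra's algorithm is the standard example. A minimal sketch (the graph and its weights are made up for illustration):

```python
import heapq

# Shortest path between two fixed points is in P: Dijkstra's algorithm
# runs in O((V + E) log V) time. TSP, by contrast, asks for a shortest
# tour visiting EVERY node, and is NP-hard.
def dijkstra(graph, start, goal):
    """graph: {node: [(neighbor, weight), ...]}. Returns distance or None."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, already found a shorter route
        for nxt, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return None  # goal unreachable

g = {"A": [("B", 1), ("C", 4)], "B": [("C", 2)], "C": []}
print(dijkstra(g, "A", "C"))  # 3  (via B, cheaper than the direct edge)
```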
This is a semantics thing, but I think we ought to call this "computational science" rather than "computer science". My understanding of "computer" implies a system which performs computations, whereas this pertains more to the study of computation itself.
Not to be pedantic or anything, I just feel like P=NP is a more general topic that is somewhat independent of the systems that perform computations.
12:12 Will this vid give definitions of NP-hard and NP-complete? Let's see.
Maybe the two scenarios have a connection to each other? Like, if I already have a solution to a complex problem, I can check it easily and much faster. But if I had no solution at all, it would be hard and would take time to solve. So maybe P=NP vs P≠NP is really asking whether there's some relationship between the two scenarios?
Excellent, goes in to serious accurate detail.
I miss studying CS at Carnegie Mellon. So much cool shit going on there. But P will never equal NP.
"Tricky", this is the absolute "reinventing the wheel" situation, not a problem to do, it is Euler's flash-fractal projection-drawing of e-Pi-i 1-0-infinity Singularity positioning in logarithmic resonance, universal relative-timing. (Can't you see, hear or sense-in-common?, this mono-dualistic modulation cause-effect mechanism is absolutely everything)
I can't either, it's the original Vapourware of Perfect Gas Thermodynamics terminology.
Interesting. Can you make a follow up video? Is there another possible approach?
I just saw Scott Aaronson 5 days ago (when this was released) at UT and studied P vs NP. nice timing
Transistors are not switches, they're current amplifiers that can be configured as a switch. Saying a transistor is a switch is like saying a car is a taxi. Sure a car can be a taxi, but not all cars are taxis.
An algorithm running in polynomial time is equivalent to the following fact:
Doubling your input multiplies the processing time by a bounded factor.
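A quick numerical check of this doubling property, with arbitrary constants c and k chosen for illustration:

```python
# If T(n) = c * n^k (polynomial), doubling the input multiplies the
# running time by exactly 2^k, a constant independent of n.
def ratio_poly(n, c=3, k=4):
    T = lambda m: c * m ** k
    return T(2 * n) / T(n)

print(ratio_poly(10), ratio_poly(1000))  # 16.0 16.0  (= 2^4, bounded)

# For exponential T(n) = 2^n, the ratio T(2n)/T(n) = 2^n itself grows
# without bound as n grows:
def ratio_exp(n):
    return 2 ** (2 * n) / 2 ** n

print(ratio_exp(10), ratio_exp(20))  # 1024.0 1048576.0  (unbounded)
```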
Btw, great that the video only focussed on the problem itself, not the prize money.
Doing one hard puzzle might take a practically infinite amount of time, but running an infinite number of hard puzzles at once would give you at least ONE solution that you can verify.
At 1:30 why would (C) lead us to the correct path? Anyone mind explaining ?
Kinda like the liar's paradox. If the sentry is a truth-teller, he points at the true location of his home, which is truth land. If he is a liar, he lies to the robot about the location of his home (liar land) and points to the opposite, i.e. truth land. Either way, the sentry points to truth land.
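The two cases can be simulated directly. A tiny Python sketch of the question "which path leads to your home?" (function and label names are mine, not from the video):

```python
# Both sentry types end up indicating truth land, so the robot can
# safely take whichever path the sentry points to.
def point_home(sentry_is_liar):
    """Answer to 'which path leads to your home?' from either sentry type."""
    home = "liar land" if sentry_is_liar else "truth land"  # the true answer
    if sentry_is_liar:
        # A liar inverts the true answer, so he points AWAY from liar land.
        return "truth land" if home == "liar land" else "liar land"
    return home  # a truth-teller simply reports his home

print(point_home(False))  # truth land  (truth-teller points home)
print(point_home(True))   # truth land  (liar lies about his home in liar land)
```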
I and several others in these comments agree with you!
That was in response to praagyadhungel1357
Whoever made the animation in this video is a genius. As are the people behind the script! Great job!
It’s simple, you assume p not equal 0 then divide by p, you have your answer N = 1
I solved this problem earlier this week. I published my first rough draft earlier this week. It turns out P is not equal to NP. I have rigorously proven it using very advanced mathematics that was thought impossible to do. Amazingly, the proof shows us two amazing facts: all polynomial time problems have optimal substructure and there is an intricate relationship between mathematics we grew up learning: commutativity, associativity, and one more property: logical universality. It turns out, they don't play nice together and therefore P is not equal to NP. You can't have all three properties.
Sure you did.
@@Schnorzel1337 I found a gap in my proof, but it is not a large gap. Basically, I use two way containment to show that P is equivalent to polynomial time dynamic programming. However, one of the directions has a slight mistake in the proof. If I fix that, then yes, I will have proven it. I have spent years and years (decades now) trying to solve this problem chasing it through an amazing journey that I could write a book on. I went through group theory, ring theory, semigroups, magmas, etc.... Honestly, I thought I had proven the opposite, but I realized my attempt to prove P=NP led me to a proof of P!=NP.
"All polynomial time problems..."
By Rice's theorem there is no general way to decide whether a problem is in P.
On the other hand, that's what natural proofs assume, yet natural proofs have been proven independent of P-NP.
Sorry.
It could turn out that quantum-powered AI could break any user security code, and that the internet architecture itself would have to be a configuration of trillions of quantum computers, each acting as a dynamic code generator. Or it could be that communication across a quantum internet would involve classical computers coordinating the entangling of gazillions of photons in two quantum computers located hundreds of miles apart in order to communicate.
If you’ve heard of Asimov’s “I, Robot”, the answer is always “My responses are limited, you must ask the right questions” until you finally hear “That, detective, is the right question”. If you are unable to find an answer to a question, you should at least consider that you might not be asking the right question to begin with.
My discernment is tingling and telling me I'm in the right place; considering "gatekeeping hate". 'Seems like someone who works on Quantum computers doesn't want this to be known.' If this challenge is further advanced, the solution can lead the cost of quantum computers going wayyyyyyy down, due to the efficiency of the "Compound-hypervisors" which work together to solve more computations and store memory efficiently, without overheating. Just a sense, don't mind me...
This is a wonderful video. Thank you.
Given: the separate experience of many databases, each AI'ed, each as one.
The exponent is adding up all the ones.
An exponent of the same problem, as a whole, equals a solution.
Finally, real facts & credit to Alan Turing in a YouTube video! ❤