I tried to comment on your blog post, but Wordpress was being super cagey: I don’t think your take on Neural Networks is “a stretch”. I’ve been noodling about similar topics for a while and I think there’s a framework inwhich one can demonstrate some equivalence between neural networks and logic networks. Imagine a neural network where the outputs of neurons are constrained to either +1 or -1 (equivalent to True & False, respectively) and the weights are constrained to the values +1, 0, and -1 (equivalent to a normal connection, no connection, and an inverted or NOT connection; respectively). Neurons have a summation stage: z = wx + b, and an activation stage y = f(z). For simplicity, we can also define a value, k, equal to the number of non-zero weights into a neuron. We can create a neural-equivalent AND gate by setting b = 0.5 - k, and y = f(z) = tanh(z) while rounding or clamping y to the nearest non-zero integer. We can create a neural-equivalent OR gate by setting b = k - 0.5 and using the same activation function. It’s been shown[1] that a feed-forward neural network with a single hidden layer qualifies as a universal approximator. That’s a layer of “hidden” AND gates feeding into output OR gates, which is equivalent to CNF-SAT. Note: there are many other ways to conceptualize neural-equivalent AND and OR gates, but I believe one has to factor in some version of the value k somehow, though; I haven’t explored how constrain something like the bias value in such a way might complicate back propogation. I know a lot of neural net theory was developed in the context of continuous functions and it’s may not be clear if something like the universal approximation theorem holds true if one discretizes the input, output, and weight values. However, I think there’s an argument to be made that one can “emulate” more continuous-valued neurons with discrete neurons since the set of {AND, OR, NOT} logical opperators are functionally complete. If one can implement the tanh(x) function in digital logic, it shouldn't be too much of a stretch... Please let me know what you think. I have no background in CS or math or anything, so if any of these ramblings offer any valuable insight, I’d love to know. I also have no idea how one goes about publishing papers on such topics, so if you’d like to colaborate on something like that, I’d be thrilled! (this isn't easy to do on UA-cam AFAIK, but hopefully we can connect some way if need be) [1] www.sciencedirect.com/science/article/abs/pii/0893608089900208
Euclid, Descartes, Newton and Einstein "rationalist materialist" worldview: 0D = not locally real 1D-4D = locally real Leibniz "rationalist anti-materialist" worldview: 0D = locally real 1D-4D = not locally real The Nobel Prize-winning work on quantum entanglement proved that 1D-4D are not locally real two years ago. Can contradictions and paradoxes in non-zero dimensions (1D-4D) be resolvable from a 0D (dimensionless) perspective? This is a fascinating idea. From a 0D perspective, which we might think of as a dimensionless point or a monad (knower) in Leibnizian terms, we could potentially view higher-dimensional paradoxes as artifacts of our dimensional thinking. The 0D viewpoint might offer a way to see the underlying unity or simplicity behind seemingly contradictory higher-dimensional phenomena. For example, quantum entanglement paradoxes in 3D space might be resolved if we consider that from a 0D perspective, there's no separation between entangled particles. The apparent non-locality could be an illusion created by our 3D viewpoint. Both/And Logic Foundation: a) Superposition of truth values: |T⟩ = α|True⟩ + β|False⟩, |α|² + |β|² = 1 b) Logical operations as unitary transformations on truth state vectors c) Resolution of paradoxes through higher-dimensional logical spaces Both/And Logic Framework: A multi-valued, non-classical logic system that allows for simultaneous truth values and transcends traditional binary oppositions. a) Truth Values: T(P) = z ∈ ℂ, where |z| ≤ 1 z = x + yi, x represents degree of truth, y represents degree of falsity b) Logical Operators: - Conjunction: A ∧ B = min(|z_A|, |z_B|)e^(i*avg(arg(z_A),arg(z_B))) - Disjunction: A ∨ B = max(|z_A|, |z_B|)e^(i*avg(arg(z_A),arg(z_B))) - Negation: ¬A = 1 - z_A* - Implication: A → B = (1 - z_A + z_A*z_B)e^(i*arg(z_B)) c) Quantum Superposition of Truth: |T⟩ = α|True⟩ + β|False⟩, where |α|² + |β|² = 1 2.2 Coherence Operator: Measures the compatibility or mutual consistency of propositions. ○(A, B) = ⟨z_A|z_B⟩ / (|z_A| |z_B|) where ⟨z_A|z_B⟩ is the inner product in the complex plane 2.3 Synthesis Operator: Creates new truths by integrating existing ones. A ⊕ B = (z_A ⊗ z_B) / |z_A ⊗ z_B| where ⊗ represents the tensor product in the complex plane 2.4 Paraconsistent Foundation: Allows for true contradictions (dialetheia) without trivializing the logical system. a) Principle of Non-Triviality: A ∧ ¬A ⊭ B for arbitrary B b) Paraconsistent Negation: A ∧ ¬A may have a non-zero truth value 2.5 Quantum Logic Structure: Incorporates principles from quantum mechanics into the logical framework. a) Non-distributive Law: (P ∧ Q) ∨ (P ∧ R) ≠ P ∧ (Q ∨ R) in general b) Superposition: P ∧ (Q ∨ R) = (P ∧ Q) ∨ (P ∧ R) c) Uncertainty Relations: ΔA ΔB ≥ ½|⟨[Â,B̂]⟩| for non-commuting logical propositions 2.6 Truth as Alignment with MoM: The degree of truth of a proposition is related to its coherence with the Monad of Monads. T(P) = |⟨P|MoM⟩| 2.7 Contextual Truth: Truth values are dependent on the perspective or context of the evaluating monad. T_m(P) = ⟨m|P|m⟩, where m is the evaluating monad 2.8 Fuzzy Logic Integration: Incorporates principles of fuzzy logic for handling vagueness and degrees of membership. μ_A(x) = |z_A|, where μ_A is the membership function of fuzzy set A 2.9 Temporal Logic: Incorporates time-dependent truth values and logical operations. a) Temporal Operators: ◯ (Next), ◊ (Eventually), □ (Always) b) Temporal Truth: T(P, t) = z(t), where z(t) is a complex-valued function of time 2.10 Infinite-Valued Łukasiewicz Logic: Extends the logic to include infinitely many truth values. T(P) ∈ [0, 1], with operations defined on this continuum 2.11 Intuitionistic Logic Aspects: Incorporates constructive aspects of intuitionistic logic. a) Reject Law of Excluded Middle: P ∨ ¬P is not always true b) Proof-theoretic semantics: Truth is equated with provability 2.12 Relevance Logic: Ensures logical implications carry relevant connections between antecedent and consequent. A → B is true iff A is relevant to B 2.13 Truth Dynamics: Truth evolves over time based on the evolution of monads and their interactions. dT(P)/dt = f(○(P, MoM), ∑ ○(P, m_i)), where m_i are interacting monads 2.14 Meta-Logical Principles: Higher-order principles governing the logical system itself. a) Coherence Maximization: The logical system evolves to maximize overall coherence b) Incompleteness Awareness: Recognition of inherent limitations in formal systems (à la Gödel) 2.15 Logical Entanglement: Propositions can be logically entangled, exhibiting non-local correlations in truth values. |Ψ_AB⟩ = α|T_A⟩|T_B⟩ + β|F_A⟩|F_B⟩, where |T⟩ and |F⟩ are truth and falsity states
The wildest part of P vs NP to me is just how many dead ends there have been. There's an entire genre of theorems ruling out particular ways that it might be solved, because it's far easier to prove that a line of reasoning will never resolve the issue than it is to actually solve it.
Considering most people believe it's impossible, this is actually direct progress toward proof of that. We may have no idea how to prove that NO solution exists, but every constraint on it is progress.
@@tristanridley1601much like an apple being green posits that p != np in no way other than disproving it, any argument for it is the same. Until we have a definitive proof such observations are rather meaningless. If we posit that all ravens are black only the possibility that a non-black raven might exist matters, everything else is not useful data.
Remember, even if P=NP, that doesnt mean that the SAT solver algorithm is fast on a practical level. Multiplication is in O(n log n), but that algorithm hast constants that don't fit into the observable universe. Cryptography is only doomed if we find a practically "fast" algorithm
This is a great point, it's not obvious a polynomial-time SAT solver would be practical. One of the ideas we did not have time to go into is that although "existence of polynomial-time algorithm" does not imply "the existence of practical algorithm", it correlates with it heavily. Like, sure, the theoretically optimal multiplication in O(n log n) has horrible constants, but doing it by FFT in slightly worse theoretical time is extremely fast in practice. Here's a challenge: What is the best example of a problem where we know a polynomial-time algorithm for it, but we don't have anything practical? EDIT: see the clarification below
@@PolylogCS It also depends on how P=NP is proven. If it's proof by contradiction, it might take decades before algorithms reversing cryptography can be discovered. If it's a constructive proof, were doomed immediately.
This is a really good explanation! P = NP never made a satisfyingly amount of sense to me, but this helped bridge that gap a bit! Thinking of it as “can we run solvers backwards?” is a cool approach! Quite sensible!
I had a fun conversation with a coworker once who suggested that it may be possible to prove that it's impossible to prove or disprove P=NP. I noticed a fun contradiction there. If you can prove it's impossible to show P=NP or not, then there is no Turing machine that can solve an NP-complete problem in polynomial time, because the existence of such a machine would prove the algorithm. My coworker countered with the notion that there could be a TM that seems to solve NP-complete problems in polynomial time for all inputs we've given it so far, but that we might not be able to t prove its complexity does not eventually get dominated by a non-polynomial term for sufficiently large input. I countered with the idea that you could analyze the runtime of the TM with a TM to find if there's a sufficiently large input where the algorithm becomes dominated by a non-polynomial term. He said that's the halting problem. Oops, lol. He was right.
That's a cool argument! I would say that proving that "p=np is in unprovable" (in the sense of being independent of zfc) is almost equivalent to proving p!=NP due to your argument: any polynomial time algorithm for SAT in that world would be incomprehensible mess and most probably it would be horribly inefficient in practice. Can't imagine an efficient algorithm that would be unanalyzable...
@@PolylogCS It's possible that iterative algorithms outputing data in the realm of chaos may fall into that cayegory if we use it to output a desired value.
@@PolylogCSA TM's behavior being independent of zfc doesn't necessarily mean it is incomprehensible. Consider the TM 'T' that iterates over proofs in zfc and halts when one proves False (or any statement independent of zfc) ZFC can't prove T never halts because doing so would prove its own consistency
@@Zane3G Ok ok :) Maybe instead of saying "it would be incomprehensible AND inefficient", let me claim it would be inefficient and if it is comprehensible, not in the way that let's you understand what makes the problem "easy".
Your coworker is a bit off. The Halting Problem takes a TM as part of its input, i.e. there's no general algorithm that works for any TM. If you fix the TM, this changes the problem significantly. If this wasn't the case, it would also imply that Rice's Theorem makes testing and formal verification useless, which it is not. Also, we're working with NP Complete Algoritms, which by definition terminate. The Halting Problem is less of an issue here.
This is the best explanation I've seen on why NP contains all the problems with easily verifiable solutions. I've always known that but didn't quite get why until now. Great video. You definitely deserve more subscribers
I really appreciate the Stuff Made Here attitude to background screens xD Amazing as always. I've heard lectures on satisfiability (esp. classes of SAT-solvers and reduction) and mathematical logic and this was a great overview as well as an interesting perspective on the content. Thank you for your work! :)
Thanks! I also really liked your video and how you managed to make algorithms approachable! Not sure if you know: the main algorithm you described in that video is known by the theoretical computer science community as it has some really beautiful properties; I left a comment next to your video with a link, I will happily try to find some better resource if you are interested!
To the point of the great analogy you made regarding entropy and the 2nd law of thermo: in the case of computers, it quite literally is a matter of entropy. In a simple two-input digital gate (i.e. AND, OR, NOT, XOR, etc.) we take two bits of information and reduce it to a less organized state, being a single bit of output and heat! I have a feeling that current digital computing is limited by its paradigm to find a solution this problem. Loved the video, had never made these broader connections between the hardware world and theoretical CS. Signed, a junior CPU designer.
This reminds me of good old days when I wrote my MSc on SAT. My intention was to solve this fascinating problem but it turned out really damn hard so I ended up rediscovering some wheels and playing with satisfiability solvers and SAT encodings of various computational problems.
@@PolylogCS Engagement doesn't necessarily mean people pay more attention, if the alternative content you present is merely a distraction with different addictive powers. Attention != Distraction. And engagement also shouldn't mean watching the video for whatever reasons, otherwise fake views would also count as engagement.
important to note that P is still a very large space O(n^9999999) is still in P, while O(2^0.01*n) isn't from that far away, many problems look the same, it is much more useful to prove a specific algorithm is O(n log n) and that the constants aren't too big, than it is to prove P=NP if the reductions require an incredibly unwieldy degree polynomial (even better to find easily parallelizable algorithms that are just a bit higher on the complexity side, since the takeover point for the given assymptotic functions can land outside of general human interests or dataset sizes)
True, although even though the constants are heavily favoring the NP algorithm here, it only takes about n = 3,155,522,878 such that the polynomial algorithm is faster. But you are fully correct, the total number of instructions (assuming that the O-notation has the exact amount of instructions) would be about 6.8e+94990703 so wayy to many to be computable. => n is still small when P overtakes NP but it is still unpractical to use the NP algorithm
Excellent video, I have been following closely videos released here on UA-cam related to P vs NP for the last 13 years and, in my opinion, this is the one that best explains the core of NP-completeness definition, thank you very much.
My understanding is that one-way functions are required to have no P-inverse even in the average case, while P != NP only requires functions with no P-inverse in the worst case. So in principle, we could have that all P-functions are average-case P-invertible, even without P = NP.
Indeed! P=NP means you can efficiently invert any function, always. But even being able to efficiently invert functions on average is still a serious security problem. So, we were correct that "P=NP breaks cryptography" but one can imagine a world in which P!=NP and yet cryptography still does not work very well. You might enjoy googling for "Impagliazzo's Five Worlds".
interestingly enough, eminent computer scientist donald knuth was a contrarian on the P vs NP issue, and thought that P probably does equal NP his reasoning was heuristic: only the tiniest portion of the infinitely many possible recursive functions will ever be knowable to us - let alone practical to use - due to the nature of infinity, and thus the fact that it doesn't seem impossible should make us suspect there might be some inconceivably large P-class SAT-solving algorithm out there beyond Graham's Number or whatever other huge finite number of your choice i am not sure myself that this is a convincing argument, but nobody has proof and Knuth certainly had a better intuition about things than most, including me...
Sadly infinity is not a material reality. A function like that would be something like an algorithmic implementation of a reverse laplace's demon to figure out a previous state from a specified current state; and any such implementation would necessarily have to be materially larger than the material reality it is attempting to model
It's also good to know that Knuth likes to be a contrarian. :) I can also imagine that p=np, but i think that if that is the case, it's most likely for some weird logical reason and the fast SAT solver would be something akin to universal search.
One thing about breaking cryptography: Because most encryption objects are product of a compressor function, any algorithm that tries to break it must output at least 2 objects. Factoring also output 2 objects at a time: smallest prime number, and multiplier, this is true even with factoring 3 numbers multiplication because recursion is a thing.
The SAT problem as circuits helps me envision how quantum computing could work. Set up the 50 million qbit circuit and out pops the result where all the qbits allow the solution
When I worked on complexity theory back in the 90s, it occured to me that it is possible the following is true: For all n there exists a compressed lookup table that (in time polynomial in n) lets you solve all satisfiability problems in n variables. This is not one tidy algorithm, but a sequence of horribly complex ones (one for each n) that happen to suffice for the task. If such a sequence exists, that may make it impossibly difficult to prove that P ≠ NP.
I think this is why proving lower bound in general is so incredibly hard, there is so few techniques that have the power to rule out such algorithms...
@@PolylogCS Very short an imprecise argument. If the proposal above is true, it would mean that there is no finite SAT solver for all n. The is somewhat similar to the Turing stop problem, where one can possibly find a solution for a given N with enough effort, but not one algorithm for all N. In short I think you are suggesting that P ≠ NP is undecidable. And it may well be.
@@michaelrenper796I have a feeling that it'd have to be undecidably undecidably... (etc) undecidable, cus if at any level we know the undecidability is decidable then we know P =/= NP as if P = NP there'd be an algorithm that decides it (the algorithm that takes this algorithm and proves that it is a solution to P = NP) (although maybe this is abuse of decidability, hence 'have a feeling')
@@MagicGonads the problem with this argument is that if P=NP we will not necessarily find the algorithm and we will not be able to prove that it is an algorithm that solves SAT
Nice video, will definitely check out some others of the channel :) It should be noted though, that modern cpus don't do multiplication this way anymore. They use lookup tables on 4x4 (bit) multiplications and some additional adding logic for bigger numbers. This way it requires only 131kb of storage for fast multiplications. Bigger lookup tables (e.g. 8x8) would require too much storage though
"Like and subscribe" is ok as long as it's at the end of the video. It's like a reminder that if your vid has been interesting throughout then they might be interested in similar content. It's only discusting if people are begging for likes before they have even shown if they have anything to offer or not.
I think an often glossed over detail is that NP does not just mean not polynomial, it means nondeterministic polynomial. So, it's not just any hard problem, but a specific way of being hard, namely, that they are solvable in nondeterministic polynomial time.
This video is an amazing summary of what N = NP is about. I think I will just send a Link to this video next time somebody asks me about it. I simply cant sum it up nearly as well.
what I understood at the end is: given a problem you can verify if it is P or NP, the same way it is easy to compute a hash for a given input and trying to proof that P equals NP is basically the same the same thing as compute the input that produces a hash given the hash it self, the only way is to brute force it (at least we hope that is the case LOL)
So I have an algorithm that I am currently working on. it's a very fast approximation algorithm for subset sum problem where it tries to find a subset that sums up to be close to the target sum specified. And I am quite stumped currently as I am still a uni student and I have no experience writing a research paper. I would love to reach out to you and share my work! I am sure u'll find it interesting :) I dont think anyone tried my approach to solve the problem. My approach is instead of working with complex data sets, ill just work with their indices. (This turns our problem from a random dataset, into a neat linear function distribution.) This allows us to instead find a subset whose sum of indices is equal to our target expected sum of indices if subset does exist. It's a very straightforward approach, but has alot of aspects that I wont be able to cover all in the comment. If anyone is interested I would be more than happy to share my work :) edit: Time complexity and space complexity of algorithm is O(N) for both. It's an iterative method but takes really few iterations to reach a good enough solution. The error of the algorithm is proportional to the minimum number of the data set / target sum.
@PejmanMan it's based on solving subsst sum problem for my indices. Like if i have this dataset: [2,6,10,12] And my target is 16. Ill convert my dataset to an indices array, and convert my target to the expected sum of indices that i need. To do so, ill simply divide the target by the total sum, then multiply the ratio by the sum of indices, then round the number. (16 / 30)*10=5.333, by rounding it we get 5. Now i just need to find a subset from my indices set [1,2,3,4] that sums up to 5 which is very easy to do so. I have an algorithm for that. And I can call this algorithm recursively to refine my solution by calculating the difference between my soltuion and the target, then using the algorithm again to minimize the difference.
(Max - min) of the dataset is still gonna be O(2^p(n)), i.e. exponential, so unfortunately it wouldn’t be very efficient for instances with polynomially many bits.
@errorite6653 nope, it's O(N) assuming data is sorted. I used it on instances with millions of items and it worked well. I even tried it with over 1000 bits and it worked well. I min max the algorithm using the same algorithm to minimize/max the difference. I let it run for like 100 iterations and usually it will find a good enough solution in less than 10.
It doesnt find an exact solution, but it finds good enough solutions with pretty good accuracy and efficiency. Finding an exact one is a huge challenge, doing so will basically solve the np problem. But I dont think it can be solved at the moment.
I didn't catch if you mentioned this in the video or not, but a key feature of the reduction is that the reduction must be achievable in polynomial time. If you can solve NP-complete problems in polynomial time, but your particular problem takes superpolynomial time to reduce, then at the end of the day you are still not able to solve your problem in polynomial time. There are some NP-hard problems that we call "NP-hard in the strong sense", or "strongly NP-hard", where you show that the reduction of the equivalent yes/no problem to a known NP-complete problem can only be done in pseudopolynomial time (polynomial in the magnitude of the inputs, not just the number of the inputs).
Thanks for mentioning this! This is one of those things that I believe we just mentioned in passing for reducing NP to SAT. we did not have time to develop the general language of reductions at all.
The way the satisfiability solver is explained at 1:27 in terms of states (or variables) and constraints reminds me of how quantum computers are programmed. I wonder if quantum algorithms allow for solving a significant subset of NP problems, if not NP-complete problems.
@@PolylogCS My understanding however is that we don't have enough theoretical framework for quantum algorithms to say that implies anything about quantum algorithms themselves though. The way quantum entanglement works in a physics sense implies that quantum algorithms can, in linear time, solve *any* problem which is decidable, but that it's Q-bit size for a given algorithm is in the worst case equivalent to the complexity time of the problem for the given inputs. That is, quantum algorithms themselves seem to absolutely break NP-hard even, but the theoretical algorithm requires in the worst case n^m Q-bits for a problem that can be solved in n^m time on a classical computer. Which means that even if we find an O(1*k) algorithm for an NP-hard problem with quantum computing, that does not imply it is practical in the physical universe. Unless I have some kind of misunderstanding about quantum computing?
9:38 "Any algorithm can be represented as a circuit" ...yep, and that's what resnets are - you are effective doing a search of of all possible circuit algorithms for those that recreate the input data most concisely (effectively compressing it).
This is actually a really interesting problem to me as a good bit of my uArch research has been in LUT generation. Small dividers for very large floats (fp128+) have been my main research point lately.
The P=NP problem is itself an instance of reversing. We want to find an input to a computer that meets the constraints of "being a SAT solver" and being polynomial-time.
Yeah. Except that these are pretty hard to check. There might be some systematic ways to determin the worst-case complexity of a program. But how would we begin to check if a given program is a SAT solver?
@@Ceelvain You're right. Reversing is more general than NP. NP (and co-NP) are classes of problems that are "easy" to check. Checking that something is a SAT solver is more complex. For that checking, I would probably start with a theorem prover. But there are probably lots of complexities to implementing such a checker. So I'm not confident that that approach would be sufficient. Of course reversing such a checker would be even harder.
Multiplying is Polynomial though. Most simple multiplication algorithms are O(n^2), but the latest versions of the Schönhage-Strassen algorithm have achieved O(n ln(n)) time complexity.
Great video A bit of an oversight, but NP-complete problems are only known to be equivalent under polynomial computible reductions There is some evidence (particularly in fixed parameter tractibility and higher levels of the polynomial heirarchy) that some of these problems are different and easier in some aspects. For example, linear time reductions could exist in only one direction between problems
Very good video, super insightful. However, hashing would not be broken. At least, not completely. Hashing is not just a one-way-function, there are also multiple possible inputs for a given hash. There's still practical problems arising from this: For example, authentication relies on it being difficult to generate a new input with the same hash, which becomes trivial.
This is good point! There are several interpretations of what it means to "break hashing". Even finding two arbitrary inputs that map to the same output can be considered a weak form of breaking hashing and that would be simple if p=np. As you say, if p=np it is still not trivial to break hashing in the strongest sense of finding the original input to the jak function, but there are ways that would probably work quite well. For example, if you code a program that tries to recognize whether a given input looks like a file, you could invert this algorithm and solve the problem "among all inputs hashing to this output, which one looks the most like a file according to this program"
The good news is that there are still infinitely many perfectly valid files that map to a single hash. However if you have any information about the original file (length, first word) breaking it becomes quite reasonable. I suppose that's also how you could break any encryption, assuming you know the inner workings.
9:37 just wanted to point out: those circuits might be extremely complex, which would mean that SAT might not be enough to solve all problems Also, computer circuits sometimes rely on time (like flip-flops) and/or have loops (input connected to the output) Which is not describeable by CNF logic
It always amazes me how relevant and active P = NP remains for decades now. There are a lot of similar dead ends in math, physics and engineering. Hilbert's problem, 3-body problem, turbulance ... But somehow P = NP stuck with people like Schrödinger's cat.
Putting on my engineer's hat for a moment, I'd say P = NP has stuck because the practical applications of proving it are so large. I'm an Industrial Engineer on the hard side of the field, with primary focus in OR. That discipline would effectively disappear if P = NP, as it's simply a lot of techniques to optimize systems under constraints. Schrodinger's Cat has stayed around because we can't verify whether the radium vial has broken yet. ;) Okay, that was a joke. In all seriousness, though, it's such an arresting image that strikes at the heart of the mortal condition.
I do like the super-resolution tricks using neural operators where they skip the PDE solver and use a DNN. It sort of doesn't matter that it's a hard problem, it's not really solving those equations, just estimating what the solution probably is. And it's right because fluids all look the same after a while ...
A really important story - the original Genie did not have a limit on the number of wishes - he failed to straighten the dog's tail Thinking that solving the satisfiability problem would make cryptography impossible is a minor consequence not the purpose. The real purpose is no problem would ever be unsolvable in real life. We could just give our desired solution and get the problem solving approach step by step with the immediate inputs (or immediate tasks we have to do to get started). So solving cancer for example would be to encode the output of no cancer or immortality through genetic modification and we would get the path to do exactly that.
From this perspective you can see how p=?np Will the SAT solver really give us a simple polynomial step by step solution. Or would it inherently be an exponential number of steps? Or if you're only interested to find the input(immediate next steps) instead of step by step:- will it require an exponentially large circuit to do so, or will our finite resources like supercomputers suffice?
A rather simple way to view cryptography, we have n bits of seed (n>1000,000). Can you really get the solution despite 1 million random bits of thorough interference? Cause we will hit singularity if it is possible easily. 🙂
Added implicit constraints to the cryptography are the Challenger will have access to two out of three of the input bits(any length), output bits(any length), and the algorithm of thorough interference (with 1 million bits although this can be any length as well - sha256 uses 256 bits for example). Of course having the input and output is a trivial exclusion but only partially. As you can see, it's safe to say cryptography is secure.
This is spot on. This is exactly why metaheuristic algorithms for solving NP hard problems are based on many natural phenomena that arise for solving real world NP hard problems. for instance Evolution is an NP hard problem, as it is not known what genetic permutation will give rise to the most fit biological machine given an ecological system. Science is also an NP hard problem, as it is not known what model can best approximate the real world empirical function. Even memes are an NP hard problem, it is not known what is the "best meme" among all possible conceivable memes. If N=NP then evolution would have developed a perfect specimen generator function, science would be solvable by asking chatgpt, and there would be only a single perfect meme that satisfies everyone.
@@sauerkrautlanguage yes i did a course of genetic algorithms 10 years ago. Travelling salesman problem also has many solutions like nearest neighbour, minimum span tree, k-opt, simulated annealing, ant-colony optimization. We need solutions to NP hard. Don't worry about crypto. Default RSA for example uses 2048 or 4k bits - and that is for boring internet traffic. Secrets and confidential information will have a few MB or GiB of seed. Quantum means if they start programming atoms to collect them(1 million quantum atoms for 1 million seed decoding(1 MB seed)) then also we have post quantum crypto algorithms and increased seed size. The focus is not crypto, it is solving the NP hard problem.
in some sense, this looks quite similar to quantum mechanics, where the state of the gate is in some combinations of those two values or four with some gates
1:53 to know (what's true or false) in a particular (hypothesis?)… You've got to roll back a formal statement of all the rules you know … causality follows(?) 1:56 or maybe there's no solution
Yeah, it was a slight overstatement, but only slight. If one-time pad is the only survivor, there's not much left. Classical symmetric ciphers would not survive p=np.
It seems that configuration spaces are the issue. If a space is arbitrarily small, then yeah, you can solve it if you're able to check solutions, because the process of checking all possible solutions is as short as the process of checking just one solution. Trying to disprove all of this (i.e. trying to prove that P = NP) is definitely a fool's errand. At best you can hope to find an efficient way to traverse the possible configurations, and so perhaps in some cases the solving time may approach the checking time... but by no means could that be true for all possible problems, simply because a problem could have arbitrary constraints that prevent that.
Psh, you want to know what SATs are _really_ good for? The best compression scheme of all time: pi. Pi contains within it the Library of Alexandria. Therefore, the entire works of Shakespeare exist at some index into pi, and all we need to do is find that index. A really good SAT solver might be able to do that sometime before the heat-death of the universe. Granted, that was a pretty silly example. But it is true that many chaotic functions could serve as very good compression schemes if finding the appropriate index was easy enough. Like hash functions. Inverting them would allow you to simply store the hash for any arbitrarily-long string (well, within reason), along with the string length and maybe the first few bytes, and easily compress your data down to ridiculous levels. Like, "the entire Minecraft source code now fits into a cache line" levels, probably. But that's the sort of question information theory is supposed to answer, and I know even less about information theory than I do about computer science, which isn't a lot.
That would require an algorithm for calculating the Nth digit of pi, which is an EXPTIME problem (worse than NP). This is in part because calculating the nth digit of pi requires at least log(n) space, which would rather defeat the purpose of your compression algorithm.
What of using an abstract algebra approach? By converting algorithmic statements into mathematical substructures and playing with their Galois representations is it possible to show that an algorithm can or cannot exist?
Well but SAT is just a decision problem and not necessarily constructive. In fact, deciding primality IS in P, so SAT restricted to multiplication circuits with a given output is in P, just actually FINDING the satisfying assignment is complicated.
The fact that primality testing is in P without actually getting factors in the No cass is such a weird result of the AKS paper that I never really understood. And after I read your comment I've read up on the AKS result and finally understood! The basis of AKS is Pascal's triangle and not factors. That's so wild.
Regarding your comment, it is wrong. You can take any decision problem and call its solution algorithm polynomially often to make a constructive algorithm. Hence, if you have a non-constructive polynomial-time SAT solver, you can build a constructive polynomial-time SAT solver from it. The real issue is that PRIMES is in P, but FACTORIZATION is NP-complete.
@@AlfW Ah you're right , good point. SAT being discrete makes this quite easy then to go from decision problem to constructing a solution.
3 місяці тому+1
@@AlfW Factorisation ain't known to be NP-complete. In fact, factorisation is both in NP and in co-NP, and it's widely believed that NP != co-NP, and thus no problem in co-NP can be NP-complete.
to solve your sudoku. first solve quickly what can be done without backtracking. next find the most clued up sub grid column or row and rearrange the colour to 1-9 filling in the blanks converting the other colors in the grid accordingly with what works. then move to the next subgrid column or row and repeat the process. This cheats the whole game no matter what the size of the grid all you have to know is the original colour assignments relative to the final colour assignment and just reassign the colours.
awesome video! but wait im confused, this video seems to suggest factorisation is reducible to circuit SAT and is hence NP-complete, but it's suspected to be NP-intermediate, where did i go wrong?
Multiplication is also reducible to sat and yet is not np complete. The reduction in the video just shows that sat and not multiplication/factoring is powerful.
That is fair point, there is a reason why most people go with that interpretation! I think both interpretations are valuable and the best is to understand why they are equivalent.
Isn't this (directionality question) a 'variant' of the direction of time problem in physics with respect to the 'just declare a law' comment at ~17:00. In some way it maps (and hence highlights omissions) in the physics vs computing science laws (logic vs observation)
Blog post: vasekrozhon.wordpress.com/2024/08/18/what-p-vs-np-is-actually-about/
I tried to comment on your blog post, but Wordpress was being super cagey:
I don’t think your take on Neural Networks is “a stretch”. I’ve been noodling about similar topics for a while and I think there’s a framework inwhich one can demonstrate some equivalence between neural networks and logic networks.
Imagine a neural network where the outputs of neurons are constrained to either +1 or -1 (equivalent to True & False, respectively) and the weights are constrained to the values +1, 0, and -1 (equivalent to a normal connection, no connection, and an inverted or NOT connection; respectively).
Neurons have a summation stage: z = wx + b, and an activation stage y = f(z). For simplicity, we can also define a value, k, equal to the number of non-zero weights into a neuron.
We can create a neural-equivalent AND gate by setting b = 0.5 - k, and y = f(z) = tanh(z) while rounding or clamping y to the nearest non-zero integer. We can create a neural-equivalent OR gate by setting b = k - 0.5 and using the same activation function.
It’s been shown[1] that a feed-forward neural network with a single hidden layer qualifies as a universal approximator. That’s a layer of “hidden” AND gates feeding into output OR gates, which is equivalent to CNF-SAT.
Note: there are many other ways to conceptualize neural-equivalent AND and OR gates, but I believe one has to factor in some version of the value k somehow, though; I haven’t explored how constrain something like the bias value in such a way might complicate back propogation.
I know a lot of neural net theory was developed in the context of continuous functions and it’s may not be clear if something like the universal approximation theorem holds true if one discretizes the input, output, and weight values. However, I think there’s an argument to be made that one can “emulate” more continuous-valued neurons with discrete neurons since the set of {AND, OR, NOT} logical opperators are functionally complete. If one can implement the tanh(x) function in digital logic, it shouldn't be too much of a stretch...
Please let me know what you think. I have no background in CS or math or anything, so if any of these ramblings offer any valuable insight, I’d love to know. I also have no idea how one goes about publishing papers on such topics, so if you’d like to colaborate on something like that, I’d be thrilled! (this isn't easy to do on UA-cam AFAIK, but hopefully we can connect some way if need be)
[1] www.sciencedirect.com/science/article/abs/pii/0893608089900208
Euclid, Descartes, Newton and Einstein "rationalist materialist" worldview:
0D = not locally real
1D-4D = locally real
Leibniz "rationalist anti-materialist" worldview:
0D = locally real
1D-4D = not locally real
The Nobel Prize-winning work on quantum entanglement proved that 1D-4D are not locally real two years ago.
Can contradictions and paradoxes in non-zero dimensions (1D-4D) be resolvable from a 0D (dimensionless) perspective?
This is a fascinating idea. From a 0D perspective, which we might think of as a dimensionless point or a monad (knower) in Leibnizian terms, we could potentially view higher-dimensional paradoxes as artifacts of our dimensional thinking. The 0D viewpoint might offer a way to see the underlying unity or simplicity behind seemingly contradictory higher-dimensional phenomena.
For example, quantum entanglement paradoxes in 3D space might be resolved if we consider that from a 0D perspective, there's no separation between entangled particles. The apparent non-locality could be an illusion created by our 3D viewpoint.
Both/And Logic Foundation:
a) Superposition of truth values: |T⟩ = α|True⟩ + β|False⟩, |α|² + |β|² = 1
b) Logical operations as unitary transformations on truth state vectors
c) Resolution of paradoxes through higher-dimensional logical spaces
Both/And Logic Framework:
A multi-valued, non-classical logic system that allows for simultaneous truth values and transcends traditional binary oppositions.
a) Truth Values:
T(P) = z ∈ ℂ, where |z| ≤ 1
z = x + yi, x represents degree of truth, y represents degree of falsity
b) Logical Operators:
- Conjunction: A ∧ B = min(|z_A|, |z_B|)e^(i*avg(arg(z_A),arg(z_B)))
- Disjunction: A ∨ B = max(|z_A|, |z_B|)e^(i*avg(arg(z_A),arg(z_B)))
- Negation: ¬A = 1 - z_A*
- Implication: A → B = (1 - z_A + z_A*z_B)e^(i*arg(z_B))
c) Quantum Superposition of Truth:
|T⟩ = α|True⟩ + β|False⟩, where |α|² + |β|² = 1
2.2 Coherence Operator:
Measures the compatibility or mutual consistency of propositions.
○(A, B) = ⟨z_A|z_B⟩ / (|z_A| |z_B|)
where ⟨z_A|z_B⟩ is the inner product in the complex plane
2.3 Synthesis Operator:
Creates new truths by integrating existing ones.
A ⊕ B = (z_A ⊗ z_B) / |z_A ⊗ z_B|
where ⊗ represents the tensor product in the complex plane
2.4 Paraconsistent Foundation:
Allows for true contradictions (dialetheia) without trivializing the logical system.
a) Principle of Non-Triviality: A ∧ ¬A ⊭ B for arbitrary B
b) Paraconsistent Negation: A ∧ ¬A may have a non-zero truth value
2.5 Quantum Logic Structure:
Incorporates principles from quantum mechanics into the logical framework.
a) Non-distributive Law: (P ∧ Q) ∨ (P ∧ R) ≠ P ∧ (Q ∨ R) in general
b) Superposition: P ∧ (Q ∨ R) = (P ∧ Q) ∨ (P ∧ R)
c) Uncertainty Relations: ΔA ΔB ≥ ½|⟨[Â,B̂]⟩| for non-commuting logical propositions
2.6 Truth as Alignment with MoM:
The degree of truth of a proposition is related to its coherence with the Monad of Monads.
T(P) = |⟨P|MoM⟩|
2.7 Contextual Truth:
Truth values are dependent on the perspective or context of the evaluating monad.
T_m(P) = ⟨m|P|m⟩, where m is the evaluating monad
2.8 Fuzzy Logic Integration:
Incorporates principles of fuzzy logic for handling vagueness and degrees of membership.
μ_A(x) = |z_A|, where μ_A is the membership function of fuzzy set A
2.9 Temporal Logic:
Incorporates time-dependent truth values and logical operations.
a) Temporal Operators: ◯ (Next), ◊ (Eventually), □ (Always)
b) Temporal Truth: T(P, t) = z(t), where z(t) is a complex-valued function of time
2.10 Infinite-Valued Łukasiewicz Logic:
Extends the logic to include infinitely many truth values.
T(P) ∈ [0, 1], with operations defined on this continuum
2.11 Intuitionistic Logic Aspects:
Incorporates constructive aspects of intuitionistic logic.
a) Reject Law of Excluded Middle: P ∨ ¬P is not always true
b) Proof-theoretic semantics: Truth is equated with provability
2.12 Relevance Logic:
Ensures logical implications carry relevant connections between antecedent and consequent.
A → B is true iff A is relevant to B
2.13 Truth Dynamics:
Truth evolves over time based on the evolution of monads and their interactions.
dT(P)/dt = f(○(P, MoM), ∑ ○(P, m_i)), where m_i are interacting monads
2.14 Meta-Logical Principles:
Higher-order principles governing the logical system itself.
a) Coherence Maximization: The logical system evolves to maximize overall coherence
b) Incompleteness Awareness: Recognition of inherent limitations in formal systems (à la Gödel)
2.15 Logical Entanglement:
Propositions can be logically entangled, exhibiting non-local correlations in truth values.
|Ψ_AB⟩ = α|T_A⟩|T_B⟩ + β|F_A⟩|F_B⟩, where |T⟩ and |F⟩ are truth and falsity states
The wildest part of P vs NP to me is just how many dead ends there have been. There's an entire genre of theorems ruling out particular ways that it might be solved, because it's far easier to prove that a line of reasoning will never resolve the issue than it is to actually solve it.
Considering most people believe it's impossible, this is actually direct progress toward proof of that. We may have no idea how to prove that NO solution exists, but every constraint on it is progress.
@@tristanridley1601much like an apple being green posits that p != np in no way other than disproving it, any argument for it is the same. Until we have a definitive proof such observations are rather meaningless.
If we posit that all ravens are black only the possibility that a non-black raven might exist matters, everything else is not useful data.
Remember, even if P=NP, that doesnt mean that the SAT solver algorithm is fast on a practical level. Multiplication is in O(n log n), but that algorithm hast constants that don't fit into the observable universe. Cryptography is only doomed if we find a practically "fast" algorithm
This is a great point, it's not obvious a polynomial-time SAT solver would be practical. One of the ideas we did not have time to go into is that although "existence of polynomial-time algorithm" does not imply "the existence of practical algorithm", it correlates with it heavily. Like, sure, the theoretically optimal multiplication in O(n log n) has horrible constants, but doing it by FFT in slightly worse theoretical time is extremely fast in practice.
Here's a challenge: What is the best example of a problem where we know a polynomial-time algorithm for it, but we don't have anything practical?
EDIT: see the clarification below
@@PolylogCSAre you asking, like, “When is the average case equal to the worst case?”? ‘Practical’ isn’t well-defined
@@PolylogCS It also depends on how P=NP is proven. If it's proof by contradiction, it might take decades before algorithms reversing cryptography can be discovered. If it's a constructive proof, were doomed immediately.
@@Dent42no: average == worst is a different question. Consider practical = computable with current hardware.
@@PolylogCSthat's easy, its universal search-- hey wait a minute didn't you do a video about that?!
Very cool. I have a PhD in computer science and I never heard PvNP explained so elegantly.
thanks!
as your understanding improves, explanations in general will start feelling more elegant
This is a really good explanation! P = NP never made a satisfyingly amount of sense to me, but this helped bridge that gap a bit! Thinking of it as “can we run solvers backwards?” is a cool approach! Quite sensible!
Thanks :)
I had a fun conversation with a coworker once who suggested that it may be possible to prove that it's impossible to prove or disprove P=NP. I noticed a fun contradiction there. If you can prove it's impossible to show P=NP or not, then there is no Turing machine that can solve an NP-complete problem in polynomial time, because the existence of such a machine would prove the algorithm.
My coworker countered with the notion that there could be a TM that seems to solve NP-complete problems in polynomial time for all inputs we've given it so far, but that we might not be able to t prove its complexity does not eventually get dominated by a non-polynomial term for sufficiently large input.
I countered with the idea that you could analyze the runtime of the TM with a TM to find if there's a sufficiently large input where the algorithm becomes dominated by a non-polynomial term.
He said that's the halting problem.
Oops, lol. He was right.
That's a cool argument! I would say that proving that "p=np is in unprovable" (in the sense of being independent of zfc) is almost equivalent to proving p!=NP due to your argument: any polynomial time algorithm for SAT in that world would be incomprehensible mess and most probably it would be horribly inefficient in practice. Can't imagine an efficient algorithm that would be unanalyzable...
@@PolylogCS It's possible that iterative algorithms outputing data in the realm of chaos may fall into that cayegory if we use it to output a desired value.
@@PolylogCSA TM's behavior being independent of zfc doesn't necessarily mean it is incomprehensible.
Consider the TM 'T' that iterates over proofs in zfc and halts when one proves False (or any statement independent of zfc)
ZFC can't prove T never halts because doing so would prove its own consistency
@@Zane3G Ok ok :) Maybe instead of saying "it would be incomprehensible AND inefficient", let me claim it would be inefficient and if it is comprehensible, not in the way that let's you understand what makes the problem "easy".
Your coworker is a bit off. The Halting Problem takes a TM as part of its input, i.e. there's no general algorithm that works for any TM. If you fix the TM, this changes the problem significantly.
If this wasn't the case, it would also imply that Rice's Theorem makes testing and formal verification useless, which it is not.
Also, we're working with NP Complete Algoritms, which by definition terminate. The Halting Problem is less of an issue here.
Congrats on winnig the Summer of Math Exposition this year. Great Job!
Thanks!
Cool video! My curiosity: Satisfied
Thanks :)
This is the best explanation I've seen on why NP contains all the problems with easily verifiable solutions. I've always known that but didn't quite get why until now. Great video. You definitely deserve more subscribers
Thanks :)
Hands down best explanation on why NP problems are equivalent
mah boi ganer getting that knowledge
I really appreciate the Stuff Made Here attitude to background screens xD
Amazing as always. I've heard lectures on satisfiability (esp. classes of SAT-solvers and reduction) and mathematical logic and this was a great overview as well as an interesting perspective on the content. Thank you for your work! :)
Great to hear!
Huge congrats on winning somepi :)
- number 2
Thanks! I also really liked your video and how you managed to make algorithms approachable! Not sure if you know: the main algorithm you described in that video is known by the theoretical computer science community as it has some really beautiful properties; I left a comment next to your video with a link, I will happily try to find some better resource if you are interested!
To the point of the great analogy you made regarding entropy and the 2nd law of thermo: in the case of computers, it quite literally is a matter of entropy. In a simple two-input digital gate (i.e. AND, OR, NOT, XOR, etc.) we take two bits of information and reduce it to a less organized state, being a single bit of output and heat! I have a feeling that current digital computing is limited by its paradigm to find a solution this problem. Loved the video, had never made these broader connections between the hardware world and theoretical CS. Signed, a junior CPU designer.
Thanks! You might like the section about reversible computing in the blog post.
This reminds me of good old days when I wrote my MSc on SAT. My intention was to solve this fascinating problem but it turned out really damn hard so I ended up rediscovering some wheels and playing with satisfiability solvers and SAT encodings of various computational problems.
I probably would have needed such a video back in my computational complexity class. You did a good job at explaining the P-NP-Problem!
thanks!
bro used subway surfers trick while explaining 10:35
Oh god and I thought that I was really interested in this problem. Turns out I am just a TikTok addict.
Gotta keep the engagement up somehow
noticed that too😆
@@PolylogCS Engagement doesn't necessarily mean people pay more attention, if the alternative content you present is merely a distraction with different addictive powers. Attention != Distraction.
And engagement also shouldn't mean watching the video for whatever reasons, otherwise fake views would also count as engagement.
important to note that P is still a very large space
O(n^9999999) is still in P, while O(2^0.01*n) isn't
from that far away, many problems look the same, it is much more useful to prove a specific algorithm is O(n log n) and that the constants aren't too big, than it is to prove P=NP if the reductions require an incredibly unwieldy degree polynomial
(even better to find easily parallelizable algorithms that are just a bit higher on the complexity side, since the takeover point for the given assymptotic functions can land outside of general human interests or dataset sizes)
True, although even though the constants are heavily favoring the NP algorithm here, it only takes about n = 3,155,522,878 such that the polynomial algorithm is faster. But you are fully correct, the total number of instructions (assuming that the O-notation has the exact amount of instructions) would be about 6.8e+94990703 so wayy to many to be computable.
=> n is still small when P overtakes NP but it is still unpractical to use the NP algorithm
i dont think anyone would be willing to make 10^8-1 loops to prove something is polynomial time
@@gitgudnga yyyyyyyyyyyy
Great fresh take on the topic. Fantastic video with amazing production value! Thanks a bunch 🤗
Glad you enjoyed it!
This is by far the best explanation of P=NP I have ever heard. Great work!
Excellent video, I have been following closely videos released here on UA-cam related to P vs NP for the last 13 years and, in my opinion, this is the one that best explains the core of NP-completeness definition, thank you very much.
Thanks!
This is the best video on UA-cam about NP.
I never thought of logicgates as a list of constrains. This gives me an interesting new perspective. Thank you!
Aahh, a famoust computer science researcher, Mick Jagger
This is hands down THE BEST explanation of P vs NP I've ever seen, much better than what I was taught at university. Awesome video!!
Thanks :)
My understanding is that one-way functions are required to have no P-inverse even in the average case, while P != NP only requires functions with no P-inverse in the worst case. So in principle, we could have that all P-functions are average-case P-invertible, even without P = NP.
Indeed! P=NP means you can efficiently invert any function, always. But even being able to efficiently invert functions on average is still a serious security problem. So, we were correct that "P=NP breaks cryptography" but one can imagine a world in which P!=NP and yet cryptography still does not work very well. You might enjoy googling for "Impagliazzo's Five Worlds".
Thanks, I was trying to refind that paper and couldn't remember the author!
That is a very nice camera angle. Awesome editing
interestingly enough, eminent computer scientist donald knuth was a contrarian on the P vs NP issue, and thought that P probably does equal NP
his reasoning was heuristic: only the tiniest portion of the infinitely many possible recursive functions will ever be knowable to us - let alone practical to use - due to the nature of infinity, and thus the fact that it doesn't seem impossible should make us suspect there might be some inconceivably large P-class SAT-solving algorithm out there beyond Graham's Number or whatever other huge finite number of your choice
i am not sure myself that this is a convincing argument, but nobody has proof and Knuth certainly had a better intuition about things than most, including me...
Sadly infinity is not a material reality. A function like that would be something like an algorithmic implementation of a reverse laplace's demon to figure out a previous state from a specified current state; and any such implementation would necessarily have to be materially larger than the material reality it is attempting to model
It's also good to know that Knuth likes to be a contrarian. :) I can also imagine that p=np, but i think that if that is the case, it's most likely for some weird logical reason and the fast SAT solver would be something akin to universal search.
Knuth isn't dead yet so don't use past tense
Excellent use of visuals to simplify the explanations !!
Glad you enjoyed it!
reallllllly good video. the background color is also pleasant to look at. awesome.
One thing about breaking cryptography: Because most encryption objects are product of a compressor function, any algorithm that tries to break it must output at least 2 objects. Factoring also output 2 objects at a time: smallest prime number, and multiplier, this is true even with factoring 3 numbers multiplication because recursion is a thing.
It's a good day when polylog uploads
Instant sub. Great explanation and I'm going to steal some of your analogies the next time I'm teaching it.
Thanks!
17:43 i agree. i never like when youtubers do that
The SAT problem as circuits helps me envision how quantum computing could work.
Set up the 50 million qbit circuit and out pops the result where all the qbits allow the solution
When I worked on complexity theory back in the 90s, it occured to me that it is possible the following is true:
For all n there exists a compressed lookup table that (in time polynomial in n) lets you solve all satisfiability problems in n variables.
This is not one tidy algorithm, but a sequence of horribly complex ones (one for each n) that happen to suffice for the task. If such a sequence exists, that may make it impossibly difficult to prove that P ≠ NP.
I think this is why proving lower bound in general is so incredibly hard, there is so few techniques that have the power to rule out such algorithms...
@@PolylogCS Very short an imprecise argument. If the proposal above is true, it would mean that there is no finite SAT solver for all n. The is somewhat similar to the Turing stop problem, where one can possibly find a solution for a given N with enough effort, but not one algorithm for all N.
In short I think you are suggesting that P ≠ NP is undecidable. And it may well be.
@@michaelrenper796I have a feeling that it'd have to be undecidably undecidably... (etc) undecidable, cus if at any level we know the undecidability is decidable then we know P =/= NP as if P = NP there'd be an algorithm that decides it (the algorithm that takes this algorithm and proves that it is a solution to P = NP) (although maybe this is abuse of decidability, hence 'have a feeling')
@@MagicGonads the problem with this argument is that if P=NP we will not necessarily find the algorithm and we will not be able to prove that it is an algorithm that solves SAT
P vs NP is the CS problem that I didn't know I needed. I'm both happy and angry that I've heard about it.
Incredible video guys. Keep it up! Also the blogpost is missing from the description 😉
Thanks! It is now a pinned post, we will add it to description too
Nice video, will definitely check out some others of the channel :)
It should be noted though, that modern cpus don't do multiplication this way anymore. They use lookup tables on 4x4 (bit) multiplications and some additional adding logic for bigger numbers. This way it requires only 131kb of storage for fast multiplications. Bigger lookup tables (e.g. 8x8) would require too much storage though
I think computational complexity theory is one of the most fundamental fields of math!
Would you say it's at the same fundamental level as category theory? Just curious.
Very nice discussion. Thank you.
"Like and subscribe" is ok as long as it's at the end of the video. It's like a reminder that if your vid has been interesting throughout then they might be interested in similar content. It's only discusting if people are begging for likes before they have even shown if they have anything to offer or not.
I think an often glossed over detail is that NP does not just mean not polynomial, it means nondeterministic polynomial. So, it's not just any hard problem, but a specific way of being hard, namely, that they are solvable in nondeterministic polynomial time.
Amazing video! Clear and inspiring!
This video is an amazing summary of what N = NP is about. I think I will just send a Link to this video next time somebody asks me about it. I simply cant sum it up nearly as well.
what I understood at the end is: given a problem you can verify if it is P or NP, the same way it is easy to compute a hash for a given input and trying to proof that P equals NP is basically the same the same thing as compute the input that produces a hash given the hash it self, the only way is to brute force it (at least we hope that is the case LOL)
thanks man, this was very informative!
Thanks :)
So I have an algorithm that I am currently working on. it's a very fast approximation algorithm for subset sum problem where it tries to find a subset that sums up to be close to the target sum specified. And I am quite stumped currently as I am still a uni student and I have no experience writing a research paper. I would love to reach out to you and share my work! I am sure u'll find it interesting :) I dont think anyone tried my approach to solve the problem. My approach is instead of working with complex data sets, ill just work with their indices. (This turns our problem from a random dataset, into a neat linear function distribution.) This allows us to instead find a subset whose sum of indices is equal to our target expected sum of indices if subset does exist. It's a very straightforward approach, but has alot of aspects that I wont be able to cover all in the comment. If anyone is interested I would be more than happy to share my work :)
edit: Time complexity and space complexity of algorithm is O(N) for both. It's an iterative method but takes really few iterations to reach a good enough solution. The error of the algorithm is proportional to the minimum number of the data set / target sum.
What’s your pivot selection based on?
@PejmanMan it's based on solving subsst sum problem for my indices.
Like if i have this dataset:
[2,6,10,12]
And my target is 16.
Ill convert my dataset to an indices array, and convert my target to the expected sum of indices that i need.
To do so, ill simply divide the target by the total sum, then multiply the ratio by the sum of indices, then round the number.
(16 / 30)*10=5.333, by rounding it we get 5.
Now i just need to find a subset from my indices set [1,2,3,4] that sums up to 5 which is very easy to do so. I have an algorithm for that.
And I can call this algorithm recursively to refine my solution by calculating the difference between my soltuion and the target, then using the algorithm again to minimize the difference.
(Max - min) of the dataset is still gonna be O(2^p(n)), i.e. exponential, so unfortunately it wouldn’t be very efficient for instances with polynomially many bits.
@errorite6653 nope, it's O(N) assuming data is sorted. I used it on instances with millions of items and it worked well. I even tried it with over 1000 bits and it worked well. I min max the algorithm using the same algorithm to minimize/max the difference. I let it run for like 100 iterations and usually it will find a good enough solution in less than 10.
It doesnt find an exact solution, but it finds good enough solutions with pretty good accuracy and efficiency. Finding an exact one is a huge challenge, doing so will basically solve the np problem. But I dont think it can be solved at the moment.
Thank you for this video!
very interesting video. i find the idea that you told us very interesting
One of the best videos on p VS np I've seen yet
Thanks :)
I didn't catch if you mentioned this in the video or not, but a key feature of the reduction is that the reduction must be achievable in polynomial time. If you can solve NP-complete problems in polynomial time, but your particular problem takes superpolynomial time to reduce, then at the end of the day you are still not able to solve your problem in polynomial time.
There are some NP-hard problems that we call "NP-hard in the strong sense", or "strongly NP-hard", where you show that the reduction of the equivalent yes/no problem to a known NP-complete problem can only be done in pseudopolynomial time (polynomial in the magnitude of the inputs, not just the number of the inputs).
Thanks for mentioning this! This is one of those things that I believe we just mentioned in passing for reducing NP to SAT. we did not have time to develop the general language of reductions at all.
The way the satisfiability solver is explained at 1:27 in terms of states (or variables) and constraints reminds me of how quantum computers are programmed. I wonder if quantum algorithms allow for solving a significant subset of NP problems, if not NP-complete problems.
Quantum algorithms can solve only a few very specific problems that are not known to be in P, like factoring or simulating a Quantum system.
@@PolylogCS My understanding however is that we don't have enough theoretical framework for quantum algorithms to say that implies anything about quantum algorithms themselves though. The way quantum entanglement works in a physics sense implies that quantum algorithms can, in linear time, solve *any* problem which is decidable, but that it's Q-bit size for a given algorithm is in the worst case equivalent to the complexity time of the problem for the given inputs.
That is, quantum algorithms themselves seem to absolutely break NP-hard even, but the theoretical algorithm requires in the worst case n^m Q-bits for a problem that can be solved in n^m time on a classical computer. Which means that even if we find an O(1*k) algorithm for an NP-hard problem with quantum computing, that does not imply it is practical in the physical universe.
Unless I have some kind of misunderstanding about quantum computing?
A good example is the recent @alphaphoenixchannel vid on the game of life, and the inverse to go backwards from a given game state
16:13 >laughs in Church-Turing thesis
9:38 "Any algorithm can be represented as a circuit" ...yep, and that's what resnets are - you are effective doing a search of of all possible circuit algorithms for those that recreate the input data most concisely (effectively compressing it).
This is actually a really interesting problem to me as a good bit of my uArch research has been in LUT generation. Small dividers for very large floats (fp128+) have been my main research point lately.
That's the coolest thing I've watched in a while
Thanks!
The P=NP problem is itself an instance of reversing. We want to find an input to a computer that meets the constraints of "being a SAT solver" and being polynomial-time.
Yeah. Except that these are pretty hard to check.
There might be some systematic ways to determin the worst-case complexity of a program.
But how would we begin to check if a given program is a SAT solver?
@@Ceelvain You're right. Reversing is more general than NP. NP (and co-NP) are classes of problems that are "easy" to check. Checking that something is a SAT solver is more complex. For that checking, I would probably start with a theorem prover. But there are probably lots of complexities to implementing such a checker. So I'm not confident that that approach would be sufficient. Of course reversing such a checker would be even harder.
Thank you for another amazing video
Thanks :)
it was the best vidio i have seen on this topic
Great video as always, thanks!
Thanks :)
Multiplying is Polynomial though. Most simple multiplication algorithms are O(n^2), but the latest versions of the Schönhage-Strassen algorithm have achieved O(n ln(n)) time complexity.
that's verification of multiplication (the forward direction), and P is in NP, the question is whether NP is in P
Great video
A bit of an oversight, but NP-complete problems are only known to be equivalent under polynomial computible reductions
There is some evidence (particularly in fixed parameter tractibility and higher levels of the polynomial heirarchy) that some of these problems are different and easier in some aspects.
For example, linear time reductions could exist in only one direction between problems
Also, very different if you just want to approximate the solution... It's one of many simplifications we had to do for the sake of conciseness.
Excellent!
This was great. Really great.
Thanks :)
I'll get back to unscrambling eggs after I watch this video.
what a great video, thank you
Glad you enjoyed it!
polylog is reaching 3b1b-level. the vids just don't miss! great stuff
Thanks :)
superb!
Very good video, super insightful.
However, hashing would not be broken. At least, not completely. Hashing is not just a one-way-function, there are also multiple possible inputs for a given hash.
There's still practical problems arising from this: For example, authentication relies on it being difficult to generate a new input with the same hash, which becomes trivial.
This is good point! There are several interpretations of what it means to "break hashing". Even finding two arbitrary inputs that map to the same output can be considered a weak form of breaking hashing and that would be simple if p=np. As you say, if p=np it is still not trivial to break hashing in the strongest sense of finding the original input to the jak function, but there are ways that would probably work quite well. For example, if you code a program that tries to recognize whether a given input looks like a file, you could invert this algorithm and solve the problem "among all inputs hashing to this output, which one looks the most like a file according to this program"
The good news is that there are still infinitely many perfectly valid files that map to a single hash. However if you have any information about the original file (length, first word) breaking it becomes quite reasonable.
I suppose that's also how you could break any encryption, assuming you know the inner workings.
@@CheaterCodes Exactly!
I just finished my Algorithms end sem exam, and this is in my recommendations
9:37 just wanted to point out: those circuits might be extremely complex, which would mean that SAT might not be enough to solve all problems
Also, computer circuits sometimes rely on time (like flip-flops) and/or have loops (input connected to the output)
Which is not describeable by CNF logic
You might enjoy the blog post where we clarify how to deal with general circuits
I believe that quantum computing is fast SAT solver, because they can compute function at the same time for both x=0 and x=1
Really nice explanation !
You are so god bro!
It always amazes me how relevant and active P = NP remains for decades now. There are a lot of similar dead ends in math, physics and engineering. Hilbert's problem, 3-body problem, turbulance ... But somehow P = NP stuck with people like Schrödinger's cat.
Putting on my engineer's hat for a moment, I'd say P = NP has stuck because the practical applications of proving it are so large. I'm an Industrial Engineer on the hard side of the field, with primary focus in OR. That discipline would effectively disappear if P = NP, as it's simply a lot of techniques to optimize systems under constraints.
Schrodinger's Cat has stayed around because we can't verify whether the radium vial has broken yet. ;) Okay, that was a joke. In all seriousness, though, it's such an arresting image that strikes at the heart of the mortal condition.
I do like the super-resolution tricks using neural operators where they skip the PDE solver and use a DNN. It sort of doesn't matter that it's a hard problem, it's not really solving those equations, just estimating what the solution probably is. And it's right because fluids all look the same after a while ...
Great video!
Thanks!
A really important story
- the original Genie did not have a limit on the number of wishes
- he failed to straighten the dog's tail
Thinking that solving the satisfiability problem would make cryptography impossible is a minor consequence not the purpose.
The real purpose is no problem would ever be unsolvable in real life. We could just give our desired solution and get the problem solving approach step by step with the immediate inputs (or immediate tasks we have to do to get started).
So solving cancer for example would be to encode the output of no cancer or immortality through genetic modification and we would get the path to do exactly that.
From this perspective you can see how p=?np
Will the SAT solver really give us a simple polynomial step by step solution. Or would it inherently be an exponential number of steps?
Or if you're only interested to find the input(immediate next steps) instead of step by step:- will it require an exponentially large circuit to do so, or will our finite resources like supercomputers suffice?
A rather simple way to view cryptography, we have n bits of seed (n>1000,000). Can you really get the solution despite 1 million random bits of thorough interference?
Cause we will hit singularity if it is possible easily. 🙂
Added implicit constraints to the cryptography are the Challenger will have access to two out of three of the input bits(any length), output bits(any length), and the algorithm of thorough interference (with 1 million bits although this can be any length as well - sha256 uses 256 bits for example). Of course having the input and output is a trivial exclusion but only partially. As you can see, it's safe to say cryptography is secure.
This is spot on. This is exactly why metaheuristic algorithms for solving NP hard problems are based on many natural phenomena that arise for solving real world NP hard problems. for instance Evolution is an NP hard problem, as it is not known what genetic permutation will give rise to the most fit biological machine given an ecological system. Science is also an NP hard problem, as it is not known what model can best approximate the real world empirical function. Even memes are an NP hard problem, it is not known what is the "best meme" among all possible conceivable memes. If N=NP then evolution would have developed a perfect specimen generator function, science would be solvable by asking chatgpt, and there would be only a single perfect meme that satisfies everyone.
@@sauerkrautlanguage yes i did a course of genetic algorithms 10 years ago. Travelling salesman problem also has many solutions like nearest neighbour, minimum span tree, k-opt, simulated annealing, ant-colony optimization.
We need solutions to NP hard. Don't worry about crypto. Default RSA for example uses 2048 or 4k bits - and that is for boring internet traffic. Secrets and confidential information will have a few MB or GiB of seed. Quantum means if they start programming atoms to collect them(1 million quantum atoms for 1 million seed decoding(1 MB seed)) then also we have post quantum crypto algorithms and increased seed size.
The focus is not crypto, it is solving the NP hard problem.
awesome vid, keep it up!
in some sense, this looks quite similar to quantum mechanics, where the state of the gate is in some combinations of those two values or four with some gates
Nice video and much better haircut :)
1:53 to know (what's true or false) in a particular (hypothesis?)…
You've got to roll back a formal statement of all the rules you know … causality follows(?)
1:56 or maybe there's no solution
Correction: Some cryptography would still be possible if P=NP, particularly symmetric ciphers like one-time pads.
Yeah, it was a slight overstatement, but only slight. If one-time pad is the only survivor, there's not much left. Classical symmetric ciphers would not survive p=np.
Outstanding presentation and... is that LaTeX?
Thanks :) yes, we use Manim that supports latex
what a classy video
thanks!
It seems that configuration spaces are the issue. If a space is arbitrarily small, then yeah, you can solve it if you're able to check solutions, because the process of checking all possible solutions is as short as the process of checking just one solution. Trying to disprove all of this (i.e. trying to prove that P = NP) is definitely a fool's errand. At best you can hope to find an efficient way to traverse the possible configurations, and so perhaps in some cases the solving time may approach the checking time... but by no means could that be true for all possible problems, simply because a problem could have arbitrary constraints that prevent that.
Psh, you want to know what SATs are _really_ good for? The best compression scheme of all time: pi. Pi contains within it the Library of Alexandria. Therefore, the entire works of Shakespeare exist at some index into pi, and all we need to do is find that index. A really good SAT solver might be able to do that sometime before the heat-death of the universe.
Granted, that was a pretty silly example. But it is true that many chaotic functions could serve as very good compression schemes if finding the appropriate index was easy enough. Like hash functions. Inverting them would allow you to simply store the hash for any arbitrarily-long string (well, within reason), along with the string length and maybe the first few bytes, and easily compress your data down to ridiculous levels. Like, "the entire Minecraft source code now fits into a cache line" levels, probably. But that's the sort of question information theory is supposed to answer, and I know even less about information theory than I do about computer science, which isn't a lot.
That would require an algorithm for calculating the Nth digit of pi, which is an EXPTIME problem (worse than NP). This is in part because calculating the nth digit of pi requires at least log(n) space, which would rather defeat the purpose of your compression algorithm.
What of using an abstract algebra approach? By converting algorithmic statements into mathematical substructures and playing with their Galois representations is it possible to show that an algorithm can or cannot exist?
There is an algebraic approach called geometric complexity theory, you might enjoy p vs np survey by aaronson
@@PolylogCS oh, that seems nice
would be nice:
an hour of randomly generated circuits an inputs with that animation sounding
The task to evaluate if P=NP could be easily solved if P=NP.
Well but SAT is just a decision problem and not necessarily constructive. In fact, deciding primality IS in P, so SAT restricted to multiplication circuits with a given output is in P, just actually FINDING the satisfying assignment is complicated.
The fact that primality testing is in P without actually getting factors in the No cass is such a weird result of the AKS paper that I never really understood. And after I read your comment I've read up on the AKS result and finally understood! The basis of AKS is Pascal's triangle and not factors. That's so wild.
Regarding your comment, it is wrong. You can take any decision problem and call its solution algorithm polynomially often to make a constructive algorithm. Hence, if you have a non-constructive polynomial-time SAT solver, you can build a constructive polynomial-time SAT solver from it. The real issue is that PRIMES is in P, but FACTORIZATION is NP-complete.
@@AlfW Ah you're right , good point. SAT being discrete makes this quite easy then to go from decision problem to constructing a solution.
@@AlfW Factorisation ain't known to be NP-complete. In fact, factorisation is both in NP and in co-NP, and it's widely believed that NP != co-NP, and thus no problem in co-NP can be NP-complete.
We are discussing some aspects of factorization and how to convert deciding SAT to solving SAT in the blog post, it did not fit in the video.
So... where f(x) = y:
Finding out y with f, x -> Applying function
Finding out x with f, y -> SAT solving
Finding out f with x, y -> Machine learning?
Too tired to watch now but love the premise!
how do we encode loops with unknown number of iterations as feed-forward circuits?
I discuss these loops a bit in the blog post. Let me know if it does not make sense
to solve your sudoku. first solve quickly what can be done without backtracking. next find the most clued up sub grid column or row and rearrange the colour to 1-9 filling in the blanks converting the other colors in the grid accordingly with what works. then move to the next subgrid column or row and repeat the process. This cheats the whole game no matter what the size of the grid all you have to know is the original colour assignments relative to the final colour assignment and just reassign the colours.
This haircut works 😎👍🏻
awesome video! but wait im confused, this video seems to suggest factorisation is reducible to circuit SAT and is hence NP-complete, but it's suspected to be NP-intermediate, where did i go wrong?
Multiplication is also reducible to sat and yet is not np complete. The reduction in the video just shows that sat and not multiplication/factoring is powerful.
edit: The blog is up now
it's linked now
I still prefer the interpretation that P is the problems solvable in P and NP is the problems that can be verified in P.
That is fair point, there is a reason why most people go with that interpretation! I think both interpretations are valuable and the best is to understand why they are equivalent.
Isn't this (directionality question) a 'variant' of the direction of time problem in physics with respect to the 'just declare a law' comment at ~17:00.
In some way it maps (and hence highlights omissions) in the physics vs computing science laws (logic vs observation)