One of the best presentations on Paxos!! Great job Luis. Very easy to understand and simple examples make it clearer!
probably the only video on PAXOS that actually explains all the concepts clearly. Thank you so much!
I have watched quite a lot of videos about Paxos. This one is the most enjoyable. Thanks
What do you mean by piggyback? It’s very confusing
Far more useful than my thousand-euro uni course that, after 4 hours, made me go home crying and with a headache. You've saved me hours of disappointment and confusion, wish you all the best
Thank you so much Luis!!!! It's no surprise how these videos are better at teaching (for free!) than my university!
Switching the metaphor from *voting and politics* to *deciding what to do together when we just want to do something together but don't so much care what* was super helpful to me. Thanks.
This is the best explanation of Paxos I've come across, thanks
Well Done. The slides showing communications between nodes made for great examples. Thank you!
Great presentation! I especially like presentations in which the algorithm is tweaked as it goes through examples, revealing the reasoning behind each subtle part of it. But I wish you had also gone through the case where the proposer receives two different accepted values. That would have shed more light on it. Anyway, thank you for this video.
Great Presentation Luis👍
Excellent!! Please prepare the same type of presentation for the other Paxos variants.
I have an exam tomorrow and this video helped me a lot! Thank you very much!
Great presentation! I was confused between the log position and the prepare ID, but it all made sense when I realized they are different things.
Brilliant Sir!! Looking forward to more such talks from you :D
Thank you. Also would be nice to get an overview of scenarios when nodes, including proposers, fail and then come back alive
exactly that's the core of fault tolerance
Thank you Luis. It was indeed a beautiful presentation explained with simple diagrams.
Thank you very much Luis for a crystal-clear explanation
Best explanation of Paxos that i've seen so far!
How does the proposer know how many acceptors make up a majority in the network?
Good simplification of Paxos, thanks
This was very insightful! Thank you so much, Luis.
It is great, Luis. Thanks for your time explaining Paxos!
Thanks a lot for the video :). I watched it a couple of times to be sure to understand every detail. You saved my exam
best tutorial. Thanks!
Great Presentation! Simple and lucid.
Huge help! Thank you!
Simply the best sir great learning
I was familiar with Raft before I saw this. This is a really understandable explanation! Thanks!
I don't understand how a Paxos run finishes, because the acceptor will piggyback the previously accepted value on every new promise, and the proposer then has to send an accept request with that value instead of the value it wanted.
Proposer 2 compares its own understanding of the current state with the "value" returned in the latest (as given by proposal ID) accept request. In the example at 14:20, Proposer 2 makes this comparison, notices the discrepancy, and sends out the value "cat", so all the acceptors are sure to have the latest accepted value. Proposer 2 can then decide whether the update it was originally planning should now be sent as Prepare 8.
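As an illustration of the rule described above (names and data layout are hypothetical, not from the video): during the prepare phase, if any promise carries a previously accepted value, the proposer must adopt the one with the highest proposal ID instead of its own value.

```python
def choose_value_to_propose(promises, my_value):
    """Paxos prepare-phase rule: among the received promises, adopt the
    previously accepted value with the highest proposal ID, if any;
    otherwise the proposer is free to propose its own value."""
    accepted = [p for p in promises if p["accepted_id"] is not None]
    if not accepted:
        return my_value  # no prior acceptance seen; propose our own value
    newest = max(accepted, key=lambda p: p["accepted_id"])
    return newest["accepted_value"]  # the piggybacked value wins

# One acceptor already accepted "cat" with proposal ID 5, so a proposer
# that wanted "dog" must propose "cat" instead:
promises = [
    {"accepted_id": None, "accepted_value": None},
    {"accepted_id": 5, "accepted_value": "cat"},
    {"accepted_id": None, "accepted_value": None},
]
print(choose_value_to_propose(promises, "dog"))  # -> cat
```

This is the "fiiiine, I'll accept the piggybacked value" step: the proposer still gets its round counted, but it carries the already-chosen value forward.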
This is a good simplified version of Paxos, but does lack a lot of detail.
Thank you Luis for this super clear presentation on how Paxos works!
Thank U, Luis
Great presentation Luis. You made it very easy.
Love the video, thanks Luis.
Thanks Luis
May I use this video in the EDpuzzle platform?
it helps a lot, thank you
oh my gawd! That initial conversation between friends trying to decide on what to do is such a great example for explaining the additional logic of saying "fiiiiine I'll accept the piggybacked value as others have already reached consensus"
OMG. Thank you so much for the great explanation!!!
Thank you kind sir. Really covered all the cases that I had in mind. Keep up the great teaching skills. I really like the visual support too!
Thank you for the awesome presentation!
I'd like to organize it in my blog, can I use the image in The Paxos Algorithm slide?
Good one, Thank you.!
Great!! It Helped very much.
Thank you for the nice presentation
it is funny the practical case is presented as "a distributed storage" : ) Thanks a lot for the video.
thank You
Thanks for the clear explanation Luis! Just a couple of things:
1. Could you elaborate further on the 'exponential backoff' strategy that is put in place to avoid an infinite run of Paxos?
2. Could you provide more examples such as the bank storage system and highlight particularly on corner cases?
Any reliable links would do too! Thanks!
1. When a proposer gets a timeout (probably because the acceptors are ignoring its proposal), it waits N + random(100) milliseconds and tries again. If it times out again, it waits 2N + random(100) milliseconds, and so on. Or something like that.
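A minimal sketch of that retry strategy (the constants and the cap are illustrative assumptions, not from the video): exponential backoff plus random jitter, so two competing proposers are unlikely to keep retrying in lockstep forever.

```python
import random

def backoff_delay(attempt, base_ms=100, jitter_ms=100, cap_ms=10_000):
    """Exponential backoff with random jitter: base * 2^attempt plus a
    random component, capped so a retry never waits unboundedly long."""
    delay = min(base_ms * (2 ** attempt), cap_ms)
    return delay + random.randint(0, jitter_ms)

# A proposer retrying after timeouts might sleep like this before
# re-proposing with a higher proposal ID:
for attempt in range(3):
    d = backoff_delay(attempt)
    # time.sleep(d / 1000)  # wait, then retry the prepare phase
```

The jitter is the important part: without it, two proposers that collided once would back off by the same amount and collide again, which is exactly the infinite-run scenario the backoff is meant to break.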
What about a proposer that wasn't in the majority when consensus was reached? Even with the exponential backoff, when does that proposer node "accept" the consensus? Edit: the proposer will propose a value, and the acceptors will piggyback the existing value. Got it.
Excellent tutorial on Paxos. If the example scenario were provided before the protocol explanation, it would be easier to grasp the overall concepts before going into the details.
Great presentation but it seems like everyone wants to use financial incentives and a lot of energy to reach a consensus lol
Are we assuming none of the proposers/acceptors are malicious in this?
Indeed, all nodes trust each other. Security isn't coming from Paxos but would be from the network layer through encryption and authentication. In practice all nodes would be under your control and running the same code.
thanks!
So how easy is this to implement? Great vid btw
Great video!
What if proposer A starts with Prepare "infinity", a.k.a. Prepare "max value"? Proposer A will always win. How do you prevent that?
For me it seems like it doesn't matter who wins here. The main goal is to let someone win, so there is nothing to prevent in this case.
You cannot ensure that proposer A is the first one to reach a majority of accepted values.
That's a byzantine failure and would require BFT to be implemented
If a proposer does not follow the Paxos rules, it is considered to be a Byzantine fault.
Paxos only guarantees consistency without byzantine faults.
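For what it's worth, in the non-Byzantine setting that Paxos assumes, well-behaved proposers typically generate proposal IDs as a (counter, node ID) pair so that IDs are unique and totally ordered without any node needing to grab "infinity". A sketch of that common convention (the function name is hypothetical):

```python
def next_proposal_id(last_seen_counter, node_id):
    """Generate a proposal ID strictly greater than any ID seen so far.
    Python tuples compare lexicographically, so (counter, node_id)
    pairs are totally ordered, and including the node ID guarantees
    that two proposers can never produce the same ID."""
    return (last_seen_counter + 1, node_id)

a = next_proposal_id(7, node_id=1)  # (8, 1)
b = next_proposal_id(7, node_id=2)  # (8, 2)
assert a < b  # ties on the counter are broken by node ID
```

A node that ignores this convention and fabricates arbitrarily large IDs is, as the replies above say, a Byzantine fault and outside plain Paxos's failure model.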
*_...consensus by majority is indistinguishable from random-coin flipping, until the majority is outside the entropic zone σ√2π : σ ≈ ½√N..._*
Well explained. Lucid with necessary and sufficient details
Thanks! Great talk.
Great talk, Luis. I'm still not sure of a few details. For instance, how do the nodes communicate to each other that a new Paxos run is starting? How is starting a new Paxos run different from just issuing new promises in the same Paxos run?
Yup I would like to know the second one too.
Consensus has been reached on that value. That is fundamental to a Paxos run. One value, one consensus, one run. A different value means a different consensus and a different run.
good question, I also want to know the answer
A Paxos run is just referring to the process to reach the next value for consensus. Once a value has been agreed, the next "Paxos run" starts.
Listen to ua-cam.com/video/d7nAGI_NZPk/v-deo.html carefully again.
"Paxos will have consensus on one value that will never mutate". To start a new Paxos run, you just work on a different value. Taking the practical example in the video, the consensus on log position 0 is Paxos run 0, the consensus on log position 1 is Paxos run 1, and so on.
Thus the consensus on log position 0 will be $100 for eternity, and the consensus on log position 1 will be $150 for eternity. The application just uses the latest log position as the balance.
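The "one run per log position" idea can be sketched as a write-once map from position to value (a toy illustration with hypothetical names; real Multi-Paxos decides each position via a full Paxos run, which is elided here):

```python
class ReplicatedLog:
    """Each log position is decided by its own Paxos run; once a
    position has a value, it never changes (write-once semantics)."""

    def __init__(self):
        self._log = {}

    def decide(self, position, value):
        # Re-deciding the same value is harmless; a different value
        # would violate the "never mutates" guarantee.
        if position in self._log and self._log[position] != value:
            raise ValueError("consensus already reached; value is immutable")
        self._log.setdefault(position, value)

    def latest(self):
        # The application reads the highest decided position as the balance.
        return self._log[max(self._log)] if self._log else None

log = ReplicatedLog()
log.decide(0, "$100")  # Paxos run 0
log.decide(1, "$150")  # Paxos run 1
print(log.latest())    # -> $150
```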
Thank you for the presentation. Now I'm convinced that Paxos no different from Tether, but with a backdoor.
Dinner in afternoon 😮
In the bank account example, if the user reads from Replica C (before Replica C "catches up" and gets Log pos. 3), won't it get Log pos. 2? Isn't this inconsistent? I mean, if the client instead reads from Replica A or B, it'll get Log pos. 3...
Replica C will also launch a Paxos run during which it will learn the new log position.
When Replica C receives a request from the client (actually, every Paxos node will do the same steps):
1) First, it will start a proposal with (User Luis, Log position 2) to the other Paxos nodes.
2) Then an acceptor (Replica A or B) will return (User Luis, Log position 3 and its value) to Replica C.
3) Replica C will append this message to its log.
4) Finally, it will respond to the client with the right value.
Paxos guarantees safety (or consistency), which ensures two distinct nodes will never learn different values. However, it does not guarantee the 'timing' of when consistency is achieved. In your example, the returned value (stale or updated) will depend on the implementation of 'get' (or 'read'), which in turn will depend on the use case.
what happens if during this time some nodes go down?
Say there are 7 nodes, 4 nodes have the latest value, and 3 still have to "catch up". At this point the client makes a request to the 7th node, but the 4 nodes that had the correct value go down. The client will therefore get the wrong value from the 3 nodes that were yet to catch up (and now cannot, because the 4 nodes with the correct value are down).
If a read triggers a Paxos run, then I believe the system is strongly consistent. If not, it's eventually consistent. This means that yes, you would get an inconsistent value from Replica C in the situation you described.
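The distinction in this thread can be sketched with a toy replica (purely illustrative; the "strong" read stands in for a real Paxos run, simplified here to adopting the longest log seen among a majority of peers):

```python
class Replica:
    """Toy replica contrasting two read paths: an 'eventual' read
    returns the local log tail (possibly stale), while a 'strong'
    read first catches up with its peers."""

    def __init__(self, log):
        self.log = list(log)

    def read_eventual(self):
        return self.log[-1]  # may be stale if this replica lags

    def read_strong(self, peers):
        # Simplified stand-in for a Paxos run: adopt the longest log.
        longest = max([self.log] + [p.log for p in peers], key=len)
        self.log = list(longest)
        return self.log[-1]

a = Replica(["$100", "$150"])  # up to date (log positions 0..1)
c = Replica(["$100"])          # Replica C has not caught up yet
print(c.read_eventual())       # -> $100  (stale)
print(c.read_strong([a]))      # -> $150  (consistent after catching up)
```

As the reply above notes, which path a real system takes on reads is an implementation choice driven by the use case.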
So basically the proposers are the leaders or wannabe leaders, and the acceptors are replicas.
The wannabe leaders catch up with the leaders when they come back online, since consensus has already been reached by the other set of nodes.
I suppose these wannabe leaders are called learners, since they learn the values on which consensus has already been reached.
Nice presentation; now I have a decent overview of what Paxos is. But I reckon modelling nodes in their different roles is going to be a difficult journey.
Thank you very much, this helped a lot!
Fantastic! Thank you
Tx a lot.
what the hell is "piggyback"?
They could simply go to the cinema and then have dinner. Problem solved!
Deleting comments you don't like because they criticize you? That really shows "your level".
Lawsuit incoming...
hmm, confusing as shit
CD >:(
A great Paxos explanation here: angus.nyc/2012/paxos-by-example/
Thanks Luis