I appreciate you sharing your knowledge. I think optimistic locking is essentially lock-free; the decision to commit or roll back depends on version values. In your video you mentioned that optimistic locking uses SELECT FOR UPDATE, but this is inaccurate: taking a lock means pessimistic locking.
I'm not in line with that, buddy. While comparing the version, you have to take a lock at that point; otherwise, how will you handle the scenario where two requests read and compare the version with the DB at the same time? Both will go ahead and update the DB, and the "lost update" issue will arise. But I'm glad you raised the point.
@@ConceptandCoding The way it is done: say we have two sessions trying to update a row with primary key R1 and version V10. Both sessions will read version V10 and will try to update the row using the following UPDATE SQL pattern: UPDATE table SET someCol = someVal WHERE primaryKey = R1 AND version = V10. Since both sessions fire the SQL, there will be a race condition: one will succeed and update one row, while the other will update zero rows. The session that updated zero rows now knows that its update failed, and it throws an exception (ORMs such as Hibernate throw OptimisticLockException). As programmers we can choose to retry the request, being OPTIMISTIC that there will not be a race condition again and that the request will succeed this time.
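The versioned-UPDATE pattern described in this comment can be sketched with Python's stdlib `sqlite3` module; the `seats` table, its columns, and the version numbers are made up for illustration:

```python
import sqlite3

# Hypothetical versioned row: a seat that two sessions race to book
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE seats (id INTEGER PRIMARY KEY, status TEXT, version INTEGER)")
conn.execute("INSERT INTO seats VALUES (1, 'FREE', 10)")
conn.commit()

def try_book(conn, seat_id, read_version):
    # Optimistic write: the WHERE clause makes the update succeed only if
    # nobody bumped the version since this session read it.
    cur = conn.execute(
        "UPDATE seats SET status = 'BOOKED', version = version + 1 "
        "WHERE id = ? AND version = ?",
        (seat_id, read_version),
    )
    conn.commit()
    return cur.rowcount == 1   # 0 rows updated => another session won the race

# Both "sessions" read version 10; only the first update sticks.
assert try_book(conn, 1, 10) is True
assert try_book(conn, 1, 10) is False  # stale version: caller should re-read and retry
```

The losing session sees a row count of zero (what an ORM surfaces as an `OptimisticLockException`) and can re-read the row and retry.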
@@bangbang86 Makes sense: the DB puts a lock for each insert/update, and during a race condition one will fail. Generally, handling this at the application is faster than at the DB. As you know, in real code the application never directly connects to the DB; there is a mediator which does a lot of stuff like grouping queries, diverting queries so that one DB server does not get overloaded, etc. I totally agree with your point, but I still feel handling this at the application layer is much better. What do you think?
@@ConceptandCoding In any real-world application there will be at least two application servers for fault tolerance, and in such a scenario, if two requests working on the same entity (e.g., a user's bank-balance row) land on different app servers, it is impossible to do concurrency handling on the app side, since the conflicting requests lie outside the scope of each app server. This is why concurrency handling at the common/converging part, such as the DB or any datastore, is the better option: it has the context of every request and is the place that stores the state of the entity. Hope I was able to explain my thought.
Agree, and that's why we are discussing distributed locking mechanisms (a lock on the DB rather than the in-process synchronization an app server does). But we both agree here now: optimistic is not actually lock-free. It does put a lock during the update/insert :)
Kudos @ConceptandCoding, good explanation and great effort. Watching from Tamil Nadu without knowing Hindi. Please give English subtitles when you explain in your native language; in other videos, in a few places you explained in your native language, so I need to concentrate deeply to understand.
You have such in-depth knowledge; great video. I have a doubt about when you said there is no chance of deadlock in OCC: suppose TA has to write rows 1 and 2, and TB has to write rows 2 and 1. After the first step, TA takes an exclusive lock on row 1 and TB takes an exclusive lock on row 2, and for the second step neither can proceed, as the shared resources they need are locked. In that case we get a deadlock even in OCC, don't we?
Thanks, bro, for this video. One question: isolation levels work fine with multiple application instances, but what if the DB is also distributed, in an active-active setup (two primary DBs)?
Great question. Locking works only on a single-node DB, or if there is a single-leader replication setup; this is because there is consensus on the order of writes, as the leader determines the order. In active-active setups, concurrent-write detection needs consensus algorithms to get all nodes to agree on the order of writes. Then all nodes agree that the first write in the order wins and completes its transaction, and the other write aborts. Concepts like total order broadcast come into the picture.
Hi Shrayansh, thank you for an awesome explanation. I have one doubt: is distributed concurrency control always handled at the DB level using isolation levels, with nothing to be done on the application side? Is my understanding correct? Also, do you recommend any book to explore this topic further? I'm looking for practical examples.
Optimistic concurrency control, in my opinion, doesn't use locking. It is an approach that is optimistic and believes concurrency conflicts are rare. It allows transactions to do reads and writes without blocking them, and only at the end, when a transaction wants to commit, does it check whether any concurrency violation has happened with any other transaction; if so, it aborts the transaction. To do so it needs to keep track of a lot of state, to make sure no write in one transaction can affect a read in another transaction (making that read outdated). This is the philosophy of optimistic concurrency control, which tries to detect concurrency problems without locks. It is the approach used in SSI (PostgreSQL).
I checked with AI and got similar views: Optimistic concurrency control (OCC) does not use locks in the traditional sense employed by pessimistic concurrency control methods. Instead, OCC operates under the assumption that conflicts between transactions are rare. It allows multiple transactions to proceed without locking the data during reads or writes. The key idea behind OCC is that transactions can work concurrently, and potential conflicts are only checked at the end of a transaction, during the commit phase. If a conflict is detected (meaning another transaction has modified the data in the meantime), the transaction is rolled back and can be retried. This approach avoids the overhead and performance bottlenecks associated with locking mechanisms, which are common in pessimistic concurrency control.

Some people mistakenly associate using SELECT FOR UPDATE with optimistic concurrency control (OCC) when combined with the Read Committed isolation level because they misunderstand the nature of OCC and how locks are used in this context. Here's why this confusion arises and why it's incorrect:

1. Misunderstanding of locking behavior: SELECT FOR UPDATE explicitly acquires locks on the rows it selects, preventing other transactions from modifying those rows until the current transaction is completed (either committed or rolled back). This behavior is characteristic of pessimistic concurrency control (PCC), where locks are used to prevent conflicts by blocking access to data that might be modified by other transactions. In contrast, optimistic concurrency control assumes conflicts are rare and does not acquire locks during the transaction's execution. Instead, OCC checks for conflicts only at the end of the transaction, rolling back if a conflict is detected.

2. Confusion due to short lock durations: Some people might think that using Read Committed isolation with SELECT FOR UPDATE is "optimistic" because the locking is temporary and only applies during specific operations (i.e., during the SELECT FOR UPDATE). However, even though these locks may be short-lived compared to the long-held locks of more restrictive isolation levels like Serializable, they still represent a pessimistic approach, because they actively prevent other transactions from modifying the locked rows during the transaction. OCC, on the other hand, does not lock data during reads or writes, but instead checks for conflicts at commit time.

3. Optimistic concurrency control defined: In true optimistic concurrency control, no locks are acquired during most of the transaction's execution. Instead, a transaction reads data and makes changes optimistically, assuming no other transactions will interfere. At commit time, it checks whether any other transactions have modified the data it read. If a conflict is detected, the transaction is rolled back and retried. This contrasts sharply with SELECT FOR UPDATE, which immediately locks rows to prevent concurrent modifications.

Conclusion: Using SELECT FOR UPDATE with Read Committed isolation is a form of pessimistic concurrency control, not optimistic concurrency control. The confusion likely stems from the fact that Read Committed isolation allows some level of concurrency by not locking data for reads without FOR UPDATE, but once FOR UPDATE is used, explicit row-level locking occurs, which is inherently pessimistic.
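To make the pessimistic contrast concrete: SQLite has no SELECT FOR UPDATE, but its BEGIN IMMEDIATE acquires the write lock up front, which illustrates the same pessimistic idea of blocking other writers before the data is touched. The file name and `balance` schema here are purely illustrative:

```python
import os
import sqlite3
import tempfile

# Two connections to one DB file; isolation_level=None gives us manual txn control
path = os.path.join(tempfile.mkdtemp(), "demo.db")
a = sqlite3.connect(path, timeout=0.1, isolation_level=None)
b = sqlite3.connect(path, timeout=0.1, isolation_level=None)
a.execute("CREATE TABLE balance (id INTEGER PRIMARY KEY, amount INTEGER)")
a.execute("INSERT INTO balance VALUES (1, 100)")

a.execute("BEGIN IMMEDIATE")                  # pessimistic: take the write lock now
a.execute("UPDATE balance SET amount = amount - 30 WHERE id = 1")

blocked = False
try:
    b.execute("BEGIN IMMEDIATE")              # second writer cannot even start
except sqlite3.OperationalError:              # "database is locked"
    blocked = True

a.execute("COMMIT")                           # lock is released at commit...
b.execute("BEGIN IMMEDIATE")                  # ...and only now can the second writer proceed
b.execute("COMMIT")
assert blocked
```

The second writer is blocked for the whole duration of the first transaction, which is exactly the pessimistic behavior the comment above distinguishes from OCC's commit-time validation.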
41:35 Correction: you mentioned that a range lock locks the neighbouring entries as well. I don't think that is correct; can you refer to the docs? Gap locking locks the rows that fall into the WHERE condition if an index is part of the query; otherwise it locks all the rows, if I am not wrong.
There is a big mistake in the video where you explained optimistic locking. You have taken two transactions, and on each write you are changing the version. The version should only be changed after commit, not on the write operation. Ideally all these changes happen in memory, and the two transactions are independent before the final commit.
Hi, can you elaborate on the snapshot isolation level and the locking strategy used in it? And some more anomalies as well, like read skew, write skew, and lost update.
Well done, a really well-structured and clear explanation! PS: I'm a little confused by the "consistency" scale decreasing while the isolation level is increasing. Perhaps you really meant "concurrency" there?
Hi @ConceptandCoding, in OCC you said that no deadlock is possible. Take this case with IDs 1 and 2: Txn A has acquired an exclusive lock on ID 1 and Txn B has acquired an exclusive lock on ID 2. After that, Txn A wants to put a shared lock on ID 2 and Txn B wants to put a lock on ID 1. In this case it is a deadlock, right?
Hi Pavan, please check my discussion with member @bangbang86 in the comment section. That will clarify your doubts fully. Let me know if you're able to find it and whether you got your answer.
@@ConceptandCoding I went through the whole conversation. So it means that OCC never acquires a lock at all, and only checks while writing whether the data has been modified or not? Am I right?
@@pavankumar-cy6mg No, OCC does help to achieve the isolation levels below repeatable read. The only part is that the application does not explicitly put a lock; it adds the DB row version in the query. That's it.
Hi Shreyansh, I wanted to ask a question: could this video be the answer to a scenario where multiple processes are trying to access a shared resource to compute on it, right?
At 5:30 you mentioned how synchronized will not work in a distributed scenario with multiple processes (because there is no shared memory between processes). And then you mentioned that for a distributed environment we have distributed concurrency control. But you missed telling us how exactly this helps in a distributed environment, as the lack of shared memory will still be a concern no matter what locks we use. So, as per my understanding: in optimistic concurrency control, systems use versioning or timestamps to track changes and ensure consistency *across different nodes* before committing the transaction. In pessimistic concurrency control, a transaction might acquire a lock via *ZooKeeper*, ensuring that no other transaction on any node can access the locked data until the lock is released.
Hi @ConceptandCoding, I have one question. For read committed you mentioned that dirty reads are prevented, as a write lock is acquired by one transaction and no other transaction can read the row during that time. But I wanted to understand what happens when two transactions try to read and write at the same time: which transaction gets the lock first, and how is this decided?
SELECT FOR UPDATE, INSERT, and UPDATE queries put an exclusive lock. A normal SELECT query internally puts a shared lock, based on the isolation level set for the txn.
Great content, Shreyansh. I think not every DB aborts all the transactions after deadlock detection. For instance, PostgreSQL aborts one transaction based on victim selection and executes the other transactions after the lock is released.
Hi Shreyansh, I wanted to understand how a transaction can read an uncommitted change made by another transaction. Is this the case of nested transactions? If these two transactions are performed by different threads, the read should stay the same until the change is committed by either transaction. Please clarify.
Thanks for a great tutorial... I can feel the passion in your teaching. Anyone interested in forming a group to learn? As I am just starting out, it would be a great help.
For a range lock, can we say that the entire table gets locked for a single transaction, and as a result concurrency is lowest, since other transactions have to wait? Please correct me if I am wrong.
Hi sir, a quick question on RANGE LOCKING: in the case of the Serializable isolation level, what is the purpose of locking neighbouring rows via range locking? Scenario (SERIALIZABLE isolation level): in a banking setup, customers with IDs 1-10 have been queried/read in a transaction, say "Txn A", so a SHARED lock (S) is applied. As per range locking, even the neighbouring row (the customer record with ID = 11) also gets locked [records 1-11 locked]. In a different transaction, "Txn B", customer 11 wants to transfer some money to customer X. But customer 11 is unable to send money, as his record is under a shared lock, and once there is a shared lock, we cannot take an exclusive lock on top of it (transferring money requires UPDATING customer 11's record). To summarize: a query that does not involve customer 11 is affecting his ability to transfer money. Could you please help me understand this (or correct me if I am wrong)? Thank you.
By neighbours it doesn't mean record 11 is getting locked. You need to understand how these predicate locks and range locks are applied. Index-range locks are applied on an index; they are an approximation of predicate locking, done so that performance is optimized. One index value can be associated with many rows if the index is a secondary index, and in such cases, locking the entry in the index will prevent any row associated with that value (secondary key) from being inserted into the table. Read the example of meeting-room booking in DDIA.
Can you provide a brief overview of the transaction isolation levels employed in systems like BookMyShow and Tatkal booking on IRCTC to handle concurrency during their booking processes? I think a strict isolation level like SERIALIZABLE suits BookMyShow best, avoiding phantom reads by applying a range lock, because the requirement there is that the user should get the ticket either booked or not booked; there is no concept of a waiting ticket. But for IRCTC Tatkal booking there is a short window of time with a lot of concurrent requests, and the waiting ticket is also a thing. So, to trade off system performance, do you think they apply a less strict isolation level? (Not sure on this, but they seem to allow some level of incorrect reads: if we check ticket availability on two laptops, one shows available, but when you book with the other at the same time, it can give a waiting ticket.)
In my view, BookMyShow might be using optimistic concurrency control (with the Read Committed isolation level). A couple of reasons: 1. You and I can select the same seat at the same time, but at checkout one of us will see the issue. 2. I can select/unselect multiple seats, and I cannot put locks on all seats, since, as you know, in pessimistic locking a lock is released only at the end of the txn. So optimistic is the best option. For IRCTC, I will think and get back to you (but it seems very similar to the BookMyShow use case).
Hi @Shreyansh, first of all a very big thanks for all your efforts in educating the community. My doubt is: on platforms like BookMyShow and IRCTC, we can also book multiple seats in a single transaction, right? Shouldn't we use the Serializable isolation level, as it involves a range of seats being booked?
Hey @shrayansh, I have a question, please clarify. If only reads are required, then I think no isolation is needed, so when will this Read Committed isolation be used?
Hey Shreyansh, can you please explain the range lock? Is this a lock on the whole table, or what? In a transaction, I can have a query `select * from table1 where name='abc'` which can be executed more than once, and I don't want a phantom-read issue. Then I am left with two kinds of isolation levels: snapshot isolation and serializable isolation. In snapshot isolation, the DB can create a snapshot for my transaction, and I will only see that snapshot for my whole transaction. In serializable isolation, there should be a lock on the whole table; only then can a phantom read be avoided.
In optimistic concurrency control, what will happen if at time T3 transaction B also wants to update? What if both A and B try to take exclusive locks at the same time, at T3?
Where are we applying these locks: at the DB level or the code level (the duty of the app dev or the DB dev)? What does the code look like? Two more requests: can you tell us what common concurrency questions are asked in interviews? And please explain class-level and object-level locks.
Transactions and the isolation level we have to define at the code level; locks are at the DB level. This is the most frequently asked interview question in concurrency, buddy.
@@ConceptandCoding For isolation, will we be writing code to get the shared/exclusive lock? What does that look like in Java? What are good books to refer to for concurrency?
@@UtkarshSingh-cb8fq Any SQL book is okay for it. When you do a SELECT query, it puts a shared lock. When you use UPDATE, DELETE, or SELECT FOR UPDATE, it puts an exclusive lock.
@@ConceptandCoding So when you say it puts a shared lock at the start of a read and holds it till the transaction completes, is it the DB's responsibility to hold the shared lock until the transaction completes? And how does the DB know about the transaction, i.e., how many statements/operations are present in it?
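On the question of how the DB knows a transaction's extent: the application marks the boundaries explicitly (BEGIN ... COMMIT/ROLLBACK), and the DB holds the transaction's locks until the terminating statement arrives. A minimal sketch with Python's stdlib `sqlite3` (table name is illustrative):

```python
import sqlite3

# isolation_level=None disables implicit transactions: we mark boundaries ourselves
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")

conn.execute("BEGIN")                              # transaction starts here...
conn.execute("INSERT INTO t VALUES (1, 'a')")
conn.execute("UPDATE t SET val = 'b' WHERE id = 1")
assert conn.in_transaction                         # DB knows we're mid-transaction
conn.execute("COMMIT")                             # ...and locks are released here
assert not conn.in_transaction

assert conn.execute("SELECT val FROM t WHERE id = 1").fetchone()[0] == "b"
```

The DB never needs to know the statement count in advance; it simply keeps the transaction (and its locks) open until COMMIT or ROLLBACK is received.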
I think that in optimistic locking, reads do not acquire any kind of lock, not even shared locks. But I am a little confused about how the DB then ensures the Read Committed isolation level; maybe that is why you are saying it does acquire a shared lock while reading the data.
Hi Shrayansh, in the case of optimistic locking, if transaction A gets the read lock and after that transaction B also gets the read lock, no one will be able to get the write lock, right? That is also a deadlock state, with just one row, right?
Hey Shreyansh, great video, just one minor clarification related to optimistic concurrency control. Both users will read that the seat is free, with version 1. Now, in your example you said the first user will take the exclusive lock till the end of the transaction, and if the other user tries to update, he will first check the version; if the version is different, roll back and try again. One doubt here: if both try to read the row in parallel, both get the shared lock; now, if both try to take the exclusive lock in parallel, will the database handle this by giving one transaction the exclusive lock while the other waits? Am I right that it is the database's responsibility to grant the exclusive lock to a particular transaction if many are coming in parallel?
Yes, it is the database's responsibility to grant the exclusive lock to only one txn at a time. And just one correction, buddy: in optimistic concurrency control no lock is required (so that SELECT FOR UPDATE is generally a plain SELECT only; no lock is required). In the comment section there is one long discussion about this; see if you can find it, and that should clarify it for you. If not, let me know, and we will connect and try to clarify, buddy.
Hi Shrayansh, the default isolation level in MySQL is REPEATABLE_READ. So, if we have to use optimistic locking in MySQL, is it required to change the isolation level to READ_COMMITTED first?
Hi Shreyansh @ConceptandCoding, you skipped one very important scenario: how repeatable read does not solve phantom reads. Can you give the same example using a lock strategy?
@@ConceptandCoding Can you please explain, taking a scenario of two transactions T1 and T2, how Repeatable Read is not able to solve the phantom problem?
Great playlist, but I am confused: in some places it is written that in OCC no locks are used to perform reads, not even shared locks, in order to maximize read concurrency, and only exclusive (X) locks are used to perform updates on a row. Can someone please clarify?
@@sumurthdixit8482 In OCC, when we read the row we do not put it in any transaction, so no locks are placed. Only at the end, after its work is done and it wants to update the data in the DB, does it use a transaction, because we want the rollback feature, right? And that's where the exclusive lock is taken; before the update it also does the version check to ensure that no other transaction has updated the DB.
If multiple read transactions and a write transaction are present at the same time, will the DB prioritize the write transaction over the read transactions?
I am trying to implement this project as part of my resume. I am a fresher with zero experience. Which type of locking should I add in my Spring Boot application: optimistic or pessimistic?
How can you say that consistency is high in the Read Uncommitted isolation level? Say transactions T1 and T2 are there, and T2 made some change, didn't commit, and then rolled it back. A dirty read would have happened in T1, which is an inconsistency case, because the latest changes are not there in the DB.
One doubt, Shreyansh: in the Read Uncommitted level, any transaction can come along and change and read the DB as it pleases, so I think it should have the least consistency, no? Can you explain this to me?
Great video as always, Shrayansh!! Thanks a lot! I had one doubt though: how do we go about acquiring locks in the case of replicated DBs? For example, let's say I acquired an exclusive lock on row 1 in DB1, but some other server gets redirected to DB2, and the user is able to update row 1 in that replica. Can you please explain this in brief? Thanks!
There are distributed lock mechanisms, like the usage of ZooKeeper, which will send a message to all the active DBs to put a lock on this data. There are other mechanisms too; I may cover that in a separate video, buddy.
@@ConceptandCoding Isn't ZooKeeper used to hold locks for multiple nodes of a microservice? I don't think ZooKeeper has any relation with the database or its replicas.
Next to your table for transaction isolation, it should be concurrency going from high to low. Consistency goes from low to high in your table, from top to bottom.
Hey, nice tutorial, but some of the things taught in it are wrong. Optimistic locking doesn't use any lock. And what do you mean, the DB puts a shared lock and releases it after reading? What's the point of putting a shared lock? It can just read the data.
Hey, why is deadlock not possible in OCC? If it uses Read Committed, it still acquires an X lock on write, and if another transaction acquires a lock on another row, deadlock is still possible, right? Also, can you share the notes for system design as well?
Read phase: the application reads the data along with a version number or timestamp. Compute phase: the application processes the data without holding any locks. Validation phase: before committing the transaction, the application checks whether the data has been modified by another transaction, using the version number or timestamp; if the data has changed, the transaction is aborted and retried. Why deadlocks are unlikely in OCC: short-lived locks (in OCC, locks are typically held only during the brief commit phase, which minimizes the window in which a deadlock can occur) and conflict detection (instead of waiting for locks to be released, which is a primary cause of deadlocks, transactions in OCC check for conflicts and either proceed or abort based on version checks). Scenario where deadlocks can occur: even in OCC, if two transactions try to acquire locks on multiple resources simultaneously during the commit phase, a deadlock can occur, but this is very unlikely because of the extra validation phase.
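The read/compute/validate loop described above can be sketched as a retry helper, again with SQLite and an illustrative `account` table (function and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER, version INTEGER)")
conn.execute("INSERT INTO account VALUES (1, 100, 1)")
conn.commit()

def occ_withdraw(conn, acc_id, amount, max_retries=3):
    for _ in range(max_retries):
        # 1. Read phase: fetch the data plus its version; no locks held
        balance, version = conn.execute(
            "SELECT balance, version FROM account WHERE id = ?", (acc_id,)
        ).fetchone()
        # 2. Compute phase: work on a local copy
        new_balance = balance - amount
        # 3. Validation phase: the write sticks only if the version is unchanged
        cur = conn.execute(
            "UPDATE account SET balance = ?, version = version + 1 "
            "WHERE id = ? AND version = ?",
            (new_balance, acc_id, version),
        )
        conn.commit()
        if cur.rowcount == 1:
            return new_balance           # validation passed
        # else: another transaction committed first; loop re-reads and retries
    raise RuntimeError("too much contention, giving up")

assert occ_withdraw(conn, 1, 30) == 70
assert occ_withdraw(conn, 1, 20) == 50
```

Note that no lock is held across the read and compute phases; contention only surfaces as a failed validation, which triggers a re-read rather than a wait, which is why lock-wait deadlocks are unlikely here.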
Thanks for the videos, but I could not understand how these strategies manage a distributed architecture. If read calls are made against a different DB server, are all the transactions hitting the same node even for reads?
There is a concept of distributed locking: if there are multiple instances of the DB holding a similar record, then there are services like ZooKeeper which make sure that all instances of the DB put the lock on that row. There are more options apart from ZooKeeper.
@@ConceptandCoding Thanks a lot for replying so quickly! How can I learn these technologies at the code level? Courses like Scaler are very expensive, and I can't figure out proper resources on the internet. Please reply, please.
@@navraj1995 You'll pick it up at the code level too. One way is that as you grow in your career, you will automatically get exposed to these things. The second way is to take a sample app and start building it (take guidance from your seniors or other people).
In the Isolation Level table: I meant Concurrency, but instead I wrote Consistency.
Read Uncommitted has the highest concurrency, and Serializable has the least concurrency.
I was about to post it 😀
Hi Shreyansh, great video. When is the two-phase locking video coming? And do you mean two-phase commit by two-phase locking?
@@panmenia no two phase commit is different and two phase locking is different
@shreyansh you can consider adding an on-video note.
@@ConceptandCoding Hi sir, because of you I was able to crack LLD and HLD rounds in companies like Amex, Paytm Payments Bank, Cars24, ClearTax, and more, and I joined Paytm Payments Bank. Thank you so much, sir, for all your videos and content.
Congratulations! Can I get your email for some guidance?
How did you get the interview calls?
The IT community is literally blessed to have people like you, who have a quest for knowledge and understanding, and an even more intense quest for sharing that knowledge.
Thanks a ton, Shreyansh.
Hey
This is one of the finest in-depth tutorials out there. Trust me, no one teaches these things with this level of depth and practical knowledge. Even though I have 5 years of experience working at Flipkart, this still cleared my concepts. I feel the premium tier of your channel is so useful, Shrayansh. Keep doing this.
Hey Shivam
Hands down the best video I have seen on Concurrency Control.
I'm really blessed to have you as my teacher, and thanks for clearing all my doubts. Thank you so, so much, bhaiya!
I'm watching this video on Vijayadashami '24, and what a great way to start the day! Fantastic explanation, Shrayansh. I learned a lot, and now I have a clear understanding of how to tackle distributed system design problems.
Essential topic for understanding what your system might be doing, but rarely used in day to day tasks
This topic is very very frequently used buddy in day to day work.
Optimistic and pessimistic locking, yes, but MVCC is taken care of by the database.
Thank you for this amazing video! You’ve explained every concept so clearly and beautifully. Really appreciate the effort you put into making it easy to understand
Extremely underrated playlist!! So happy to stumble upon it.
Thanks for your feedback
this is probably one of the most comprehensive tech videos, hands down
Hey
this video is truly a masterpiece - complex topic explained with clarity and simplicity. Thank You SJ !
Truly a gem. Thanks for making this series; I could learn the most difficult low-level system design just because of you. May God bless you.
Thank you
This video cleared all my problems on how to handle concurrency in production as well as interviews :)
Hey Suraj
I have watched your concurrency control mechanisms, Book My Show design, and CAP theorem videos. Due to lack of time before my interview, I stuck to watching a limited set of videos. As expected, they asked me about concurrency control, a Book My Show follow-up question, and the synchronized block. The interviewer was very happy with my answers. Finally, I have landed my dream designation. Thank you so much; it felt worth it to buy the membership. 🎉🎉🎉
@@ydayanandareddy7283 congratulations buddy
Superb job done in imparting the concepts. You have an amazing talent in simplifying and detailing things.
Hey Shreyansh,
Thanks for discussing the important topic it'll definitely help a lot of backend engineers.
I just wanted to point something on consistency, I think consistency is low for "Read Uncommitted" isolation level and High for "Serializable"
Nope, it's the opposite. Serializable has the least concurrency, as it uses range locks.
Actually it is written "consistency" in the notes, but what Shrayansh meant is "concurrency"..
@@ConceptandCoding Yes, in notes it is mentioned as "consistency" next to the table but it should have been "concurrency".
Anyways, mentioning it here so that people can get clarity on that.
Oops my bad. Just now saw it again. Sorry for this
No need to be sorry. You have done a good job discussing this topic; we appreciate your efforts, and corrections can be made in the comments section.
Splendid, fantastic, bravo! Very clear explanation, thank you 🙏🙏
thanks
Thanks a lot for sharing your knowledge; you're the best professor I've ever had.
Thank you for this very informative and knowledgeable session.
thanks
Very interesting. I've learned a lot. Thank You
Amazing explanation, thanks Shrayansh.
Really, your way of explanation is awesome 👌. Now I have a lot of clarity on how transactions happen. I have seen so many videos, but I have never seen this level of explanation. Really, thanks so much once again. One request: please provide an English version of LLD.
Thanks. All the latest LLD videos are in English; for the few initial ones that are in Hindi, I have explained the topics in English in the LLD live playlist.
Nice stuff!
To add to it, the `Serializable` isolation level has the lowest efficiency/concurrency because it processes the tasks submitted to it in a sequential or serial fashion. It picks tasks from the queue in the ORDER OF SUBMISSION, and that's why it guarantees correctness.
Right
Not entirely correct. Sequential execution of transactions is only one approach to serializable isolation (called actual serial execution).
Other approaches like 2PL interleave the reads and writes of concurrent transactions (meaning the transactions do run concurrently); a transaction blocks only when it wants to read or write a record that is being concurrently modified by another transaction, because of the locks.
This kind of blocking reduces throughput, as some transactions have to wait. Deadlocks can even result from lock dependencies, and such transactions have to be aborted and restarted; doing repeated work results in performance degradation.
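The "actual serial execution" approach mentioned above can be sketched as a toy single-threaded executor (all names and values here are invented for illustration, not from the video):

```python
import queue

# Toy model: each "transaction" is a function over shared state.
balance = {"acct": 100}
tasks = queue.Queue()

tasks.put(lambda: balance.update(acct=balance["acct"] + 50))  # txn A: deposit 50
tasks.put(lambda: balance.update(acct=balance["acct"] - 30))  # txn B: withdraw 30

# Serial execution: drain the queue one transaction at a time, in
# submission order. With no interleaving, serializability is trivial,
# but so is the loss of concurrency: one slow transaction stalls the rest.
while not tasks.empty():
    txn = tasks.get()
    txn()

print(balance["acct"])  # 120
```

This is why real databases prefer 2PL or SSI: they get the same correctness guarantee while letting non-conflicting transactions run in parallel.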
@ConceptandCoding informative Video, depth explained. Please share the notes. It will be handy during the preparation of interviews. 🙏
Nice Explanation, Keep up the momentum.
Thank you
I appreciate you sharing knowledge.
I think optimistic locking is essentially lock-free; the decision to commit or roll back depends on version values.
In your video you mentioned that optimistic locking uses SELECT FOR UPDATE. This is inaccurate: taking a lock means pessimistic locking.
I don't quite agree, buddy.
While comparing the version, you have to take a lock at that point. Otherwise, how will you handle the scenario where 2 requests read and compare the version with the DB at the same time?
Both will go ahead and update the DB, and the "Lost Update" issue will arise.
But glad you raised the point.
@@ConceptandCoding The way it is done: say we have 2 sessions trying to update a row with primary key R1 and version V10.
Both sessions will read version V10 and will try to update the row using the following UPDATE SQL pattern:
UPDATE table SET someCol=someVal
WHERE primaryKey=R1 AND version=V10
Since both sessions are firing the SQL, there will be a race condition: one will succeed and update one row, while the other will update 0 rows. The session that updated 0 rows now knows that its update failed, and it throws an exception (ORMs such as Hibernate throw OptimisticLockException).
As programmers, we can choose to retry the request, being OPTIMISTIC that there will not be a race condition again and that the request will succeed this time.
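The versioned-UPDATE pattern described in this thread can be demonstrated end to end; here is a minimal sketch in Python with SQLite (the table, column names, and version numbers are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE seat (id INTEGER PRIMARY KEY, status TEXT, version INTEGER)")
conn.execute("INSERT INTO seat VALUES (1, 'FREE', 10)")
conn.commit()

def try_book(read_version):
    """Optimistic update: applies only if the version is still what we read."""
    cur = conn.execute(
        "UPDATE seat SET status = 'BOOKED', version = version + 1 "
        "WHERE id = 1 AND version = ?", (read_version,))
    conn.commit()
    return cur.rowcount  # 1 = update applied, 0 = someone else won the race

# Both "sessions" read version 10, then race to commit.
first = try_book(10)   # matches the row, bumps version to 11
second = try_book(10)  # matches 0 rows; this is the point where an ORM
                       # would raise OptimisticLockException and we would retry
print(first, second)   # 1 0
```

The key observation is that the application never issues an explicit lock; it relies on the atomicity of the single UPDATE statement and checks the affected-row count.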
@@bangbang86 Makes sense: the DB puts a lock for each insert/update, and during a race condition one will fail.
Generally, handling this at the application layer is faster than at the DB.
As you know, in real code the application never connects directly to the DB; there is a mediator which does a lot of work, like grouping queries and diverting queries so that one DB server does not get overloaded, etc.
I totally agree with your point, but I still feel handling this at the application layer is much better.
What do you think?
@@ConceptandCoding In any real-world application there will be at least two application servers for fault tolerance. In such a scenario, if two requests working on the same entity (e.g., a user's bank balance row) land on different app servers, it is impossible to do concurrency handling on the app side, since the conflicting requests lie outside the scope of each app server. This is why concurrency handling at a common/converging point such as the DB (or any datastore) is a better option: it has the context of every request and is the place that stores the state of the entity. Hope I was able to explain my thought.
Agreed, and that's why we are discussing distributed locking mechanisms (a lock on the DB rather than the distributed synchronization an app server would do).
But we both agree here now: optimistic is not actually lock-free. It does put a lock during the update/insert. :)
Kudos @ConceptandCoding, good explanation and great effort. Watching from Tamil Nadu without knowing Hindi. Please add English subtitles when you explain in your native language; in other videos, in a few places you explained in your native language, so I had to concentrate deeply to understand.
Sorry for that; I will make sure to do it fully in English.
Outstanding video Sir, and the way you explain in depth about the problem and solution is exceptional❤❤
Thanks
Great explanation👏👏
thanks
You have such in-depth knowledge; great video. I have a doubt about when you said there is no chance of deadlock in OCC: suppose TA has to write rows 1 and 2, and TB has to write rows 2 and 1. After the first step, TA takes an exclusive lock on 1 and TB takes an exclusive lock on 2; in the second step neither can proceed, as the resources each needs are locked by the other. In that case we get a deadlock even in OCC.
Yes, I think the best way to say it is that OCC reduces the probability of deadlock rather than fully removing it.
Thanks, bro, for this video.
One question: isolation levels work fine with multiple application instances. But what if the DB is also distributed, with an active-active setup (2 primary DBs)?
Great question.
Locking works only on a single-node DB, or in a single-leader replication setup. This is because there is consensus on the order of writes: the leader determines the order.
In active-active setups, concurrent-write detection needs consensus algorithms to get all nodes to agree on the order of writes. Then all nodes agree that the first write in the order wins and completes its transaction, while the other write aborts. Concepts like total order broadcast come into the picture.
Hi Shrayansh,
Thank you for an awesome explanation. I have one doubt: is distributed concurrency control always handled at the DB level using isolation levels, with nothing to be done on the application side? Is my understanding correct?
Also, do you recommend any book to explore this topic further? I'm looking for practical examples.
Great content
Thanks
nicely explained
Very well explained... awesome!!!
Very nicely explained....
Thank you
Optimistic concurrency control, in my opinion, doesn't use locking. It is an approach that is optimistic and believes concurrency conflicts are rare. It allows transactions to do reads and writes without blocking them, and only at the end, when a transaction wants to commit, does it check whether any concurrency violation has happened with any other transaction; if so, it aborts the transaction. To do this it needs to keep track of a lot of state, to determine whether a write in one transaction can affect a read in another (make the read outdated).
This is the philosophy of optimistic concurrency control, which tries to detect concurrency problems without locks. It is the approach used in SSI (PostgreSQL).
I checked with AI and got similar views:
Optimistic concurrency control (OCC) does not use locks in the traditional sense employed by pessimistic concurrency control methods. Instead, OCC operates under the assumption that conflicts between transactions are rare. It allows multiple transactions to proceed without locking the data during reads or writes.
The key idea behind OCC is that transactions can work concurrently, and potential conflicts are only checked at the end of a transaction during the commit phase. If a conflict is detected, meaning another transaction has modified the data in the meantime, the transaction is rolled back and can be retried. This approach avoids the overhead and performance bottlenecks associated with locking mechanisms, which are common in pessimistic concurrency control.
Some people mistakenly associate using SELECT FOR UPDATE with optimistic concurrency control (OCC) when combined with the Read Committed isolation level because they misunderstand the nature of OCC and how locks are used in this context. Here's why this confusion arises and why it's incorrect:
1. Misunderstanding of Locking Behavior
SELECT FOR UPDATE explicitly acquires locks on the rows it selects, preventing other transactions from modifying those rows until the current transaction is completed (either committed or rolled back). This behavior is characteristic of pessimistic concurrency control (PCC), where locks are used to prevent conflicts by blocking access to data that might be modified by other transactions. In contrast, optimistic concurrency control assumes conflicts are rare and does not acquire locks during the transaction's execution. Instead, OCC checks for conflicts only at the end of the transaction, rolling back if a conflict is detected.
2. Confusion Due to Short Lock Durations
Some people might think that using Read Committed isolation with SELECT FOR UPDATE is "optimistic" because the locking is temporary and only applies during specific operations (i.e., during the SELECT FOR UPDATE). However, even though these locks may be short-lived compared to long-held locks in more restrictive isolation levels like Serializable, they still represent a pessimistic approach because they actively prevent other transactions from modifying the locked rows during the transaction. OCC, on the other hand, does not lock data during reads or writes but instead checks for conflicts at commit time.
3. Optimistic Concurrency Control Defined
In true optimistic concurrency control, no locks are acquired during most of the transaction's execution. Instead, a transaction reads data and makes changes optimistically, assuming no other transactions will interfere. At commit time, it checks whether any other transactions have modified the data it read. If a conflict is detected, the transaction is rolled back and retried. This approach contrasts sharply with SELECT FOR UPDATE, which immediately locks rows to prevent concurrent modifications.
Conclusion
Using SELECT FOR UPDATE with Read Committed isolation is a form of pessimistic concurrency control, not optimistic concurrency control. The confusion likely stems from the fact that Read Committed isolation allows some level of concurrency by not locking data for reads without FOR UPDATE, but once FOR UPDATE is used, explicit row-level locking occurs, which is inherently pessimistic.
41:35 Correction: you mentioned that a range lock locks the neighbouring entries as well. I don't think that is correct; can you refer to the docs? Gap locking locks the rows that fall into the WHERE condition if an index is part of the query; otherwise it locks all rows, if I am not wrong.
Best Tutorial!
Nice explanation thanks!!
Hi durgesh
There is a big mistake in the video where you explain optimistic locking. You have taken two transactions, and on each write you are changing the version. The version should only be changed after commit, not on the write operation. Ideally, all these changes happen in memory, and both transactions are independent before the final commit.
Hi, can you elaborate on the snapshot isolation level? What locking strategy is used in this isolation level?
And some more anomalies, like read skew, write skew, and lost update, as well.
Hey Anshul
Very well explained !
thanks
Well done, a really well structured and clear explanation!
PS: A little bit confused by the "consistency" scale decreasing while the isolation level is increasing. Perhaps you really meant "concurrency" there?
yes
Hi @conceptandcoding, in OCC you said that no deadlock is possible. Take this case: IDs 1 and 2. Txn A has acquired an exclusive lock on ID 1 and Txn B has acquired an exclusive lock on ID 2; after that, Txn A wants to put a shared lock on ID 2 and Txn B wants to put a lock on ID 1. In this case it is a deadlock, right?
Hi Pavan, please check my discussion with member @bangbang86 in the comment section. That will clarify your doubts fully. Let me know if you are able to find it and got your answer.
@@ConceptandCoding I went through the whole conversation. So it means that OCC never acquires a lock at all, and only checks while writing whether the data has been modified or not? Am I right?
@@pavankumar-cy6mg Yes, the lock is taken at the DB level, but the application does not put the lock; the application just adds the row version to the query.
@@ConceptandCoding So could we say that Read Committed does not come under OCC, and there is a mistake in the video?
@@pavankumar-cy6mg No, OCC does help to achieve the isolation levels below Repeatable Read.
The only part is that the application does not explicitly put a lock; it adds the DB row version to the query. That's it.
Hi Shreyansh, wanted to ask a question: this video could be the answer to a scenario where multiple callers are trying to access a shared resource to compute on it, right?
Yes
@@ConceptandCoding Isn't ZooKeeper (ZAB) solving the same problem? I think consensus algorithms like ZAB, Raft, and Paxos solve the same thing.
Awesome explanation
At 5:30 you mentioned how synchronized will not work in a distributed scenario with multiple processes (because there is no shared memory between processes). Then you mentioned that for distributed environments we have Distributed Concurrency Control. But you missed explaining how exactly this helps in a distributed environment, since the lack of shared memory is still a concern no matter what locks we use.
So, as per my understanding: in Optimistic Concurrency Control, systems use versioning or timestamps to track changes and ensure consistency *across different nodes* before committing the transaction.
In Pessimistic Concurrency Control, a transaction might acquire a lock via *Zookeeper*, ensuring that no other transaction on any node can access the locked data until the lock is released.
Awesome video
Thanks!
Hello Shrayansh, I think you inverted the consistency scale: with Serializable we get high consistency. Or you could use "availability" there instead.
very helpful video
Thanks
Hi @ConceptandCoding
I have one question: for Read Committed, you mentioned that dirty reads are prevented because a write lock is acquired by one transaction and no other transaction can read the row during that time.
But I wanted to understand what happens when 2 transactions try to read and write at the same time: which transaction gets the lock first, and how is this decided?
Hi Shreyansh,
Can you please make a video on Kubernetes?
Noted
Bro, can you please add a programmatic example of pessimistic and optimistic locking?
Very well explained. Thanks, brother!
But can you show how to take shared and exclusive locks on a table/resource in a distributed environment?
SELECT ... FOR UPDATE queries, as well as INSERT and UPDATE queries, put an exclusive lock.
A normal SELECT query internally puts a shared lock, based on the isolation level set for the txn.
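The blocking behavior of an exclusive (write) lock can be observed even with SQLite from Python's standard library. One caveat: SQLite locks at database granularity rather than per row (row-level SELECT ... FOR UPDATE is a MySQL/Postgres feature), so this is only a sketch of the general idea:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
# isolation_level=None: autocommit mode, we issue BEGIN/COMMIT ourselves.
# timeout=0: fail immediately instead of waiting for the lock holder.
a = sqlite3.connect(path, timeout=0, isolation_level=None)
b = sqlite3.connect(path, timeout=0, isolation_level=None)
a.execute("CREATE TABLE t (id INTEGER)")

a.execute("BEGIN IMMEDIATE")           # txn A takes the write lock up front
a.execute("INSERT INTO t VALUES (1)")

blocked = False
try:
    b.execute("BEGIN IMMEDIATE")       # txn B cannot get the write lock yet
except sqlite3.OperationalError:       # "database is locked"
    blocked = True

a.execute("COMMIT")                    # A releases the lock
b.execute("BEGIN IMMEDIATE")           # now B acquires it without error
b.execute("COMMIT")
print(blocked)  # True
```

In a real MySQL/Postgres setup, with a nonzero timeout the second transaction would simply wait until the first one commits rather than erroring out.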
Sir, how is the phantom-read problem possible in the Repeatable Read isolation level?
Great content, Shreyansh. I think after deadlock detection, not every DB aborts all the transactions. For instance, PostgreSQL aborts a transaction based on victim selection, and the other transactions execute after the lock is released.
True, it depends upon the locking mechanism.
For Read Uncommitted, why does it have the highest consistency? Shouldn't it be the lowest?
It's not high consistency; it has high concurrency.
Thanks Shrayanah Bhai
Hi Shreyansh, I wanted to understand how a transaction can read an uncommitted change made by another transaction. Is this the case with nested transactions? If these 2 transactions are being performed by different threads, the read should stay the same until the change is committed by one of the transactions. Please clarify.
Hi Shrayansh, can you please explain how a PHANTOM READ is possible in the REPEATABLE READ scenario? It is confusing.
Thanks for a great tutorial... I can feel the passion in your teaching. Anyone interested in forming a group to learn? As I am just starting out, it would be a great help.
Very useful content .Waiting for your 2-phase locking session
It's already there
Thanks for the detailed explanation 😊
Thanks
I was waiting for this one..thanks ❣️
Hope you will find it useful.
For a range lock, can we say that the entire table gets locked for a single transaction, and as a result concurrency is lowest because other transactions have to wait? Please correct me if I am wrong.
Hi sir, A quick question on RANGE LOCKING.
"In case of Serializable Isolation Level - What is the purpose of locking neighboring rows via Range Locking?"
Scenario: SERIALIZABLE ISOLATION LEVEL
- In a banking setup, customer IDs 1 - 10 have been queried/read in a transaction, say "Txn A". ----- A SHARED LOCK (S) IS APPLIED.
- As per RANGE LOCKING, even the neighboring row (the customer record with ID = 11) also gets locked. [RECORDS 1 - 11 LOCKED]
- In a different transaction "Txn B", customer ID = 11 wants to transfer some money to customer X.
- But customer 11 is unable to send money, as his record is under a SHARED LOCK, and once there is a SHARED LOCK we cannot take an EXCLUSIVE LOCK on top of it. (Transferring money requires UPDATING customer 11's record.)
To summarize, a query operation that does not involve customer 11 is affecting his ability to transfer money.
Could you please help me understand this (or) correct me if I am wrong?
Thank you.
By neighbours it doesn't mean record 11 is getting locked.
You need to understand how these predicate locks and range locks are applied.
Index-range locks are applied on an index; they are an approximation of predicate locking, done so that performance is optimized.
One index value can be associated with many rows if the index is a secondary index.
In such cases, locking the entry in the index prevents any row associated with that value (secondary key) from getting inserted into the table.
Read the example of meeting room booking in DDIA.
Can you provide a brief overview of the transaction isolation levels employed in systems like BookMyShow, and in Tatkal booking on IRCTC, to handle concurrency during their respective booking processes?
I think a strict isolation level like SERIALIZABLE suits BookMyShow best, to avoid phantom reads by applying range locks, because the requirement here is that the user should get the ticket either Booked or Not Booked; there is no concept of a waiting ticket.
But for IRCTC Tatkal booking, there is a short window of time with a lot of concurrent requests, and a waiting ticket is also a thing. So, to trade off system performance, do you think they apply a less strict isolation level?
(Not sure on this, but they seem to allow some level of incorrect reads: if we check ticket availability on 2 laptops, one shows available, but when you book with the other at the same time, it can give a waiting ticket.)
In my view, BookMyShow might be using optimistic concurrency control (with the Read Committed isolation level).
A couple of reasons:
1. You and I can select the same seat at the same time, but at checkout one of us will see the issue.
2. I can select/unselect multiple seats, and I cannot put locks on all the seats; as you know, in pessimistic locking, once a lock is taken it is released only at the end of the txn. So optimistic is the best option.
For IRCTC, I will think and get back to you (but it seems a very similar use case to BookMyShow).
Hi @Shreyansh,
First of all, a very big thanks for all your efforts in educating the community.
My doubt: on platforms like BookMyShow and IRCTC, we can also book multiple seats in a single transaction, right? Shouldn't we use the Serializable isolation level, as it involves a range of seats to be booked?
Hey @shrayansh,
I have a question, please clarify. If only a read is required, then I think no isolation is needed; so when is this Read Committed isolation used?
Hey Shreyansh,
Can you please explain the range lock? Is this a lock on the whole table, or what?
In a transaction, I can have a query `select * from table1 where name='abc'` which can be executed more than once, and I don't want a phantom-read issue. Then I am left with 2 kinds of isolation levels: snapshot isolation and serializable isolation.
In snapshot isolation, the DB can create a snapshot for my transaction, and I will see only that snapshot throughout my transaction.
In serializable isolation, there should be a lock on the whole table; only then can phantom reads be avoided.
In optimistic concurrency control, what will happen if at time T3 transaction B also wants to update? What if both A and B try to take exclusive locks at the same time, at T3?
How can we achieve this in the case of a batch operation, i.e., saveAll?
Where are we applying these locks: at the DB level or the code level (the duty of the app dev or the DB dev)? What does the code look like?
2 more requests:
- Can you tell us what common concurrency questions are asked in interviews?
- Please explain class-level and object-level locks.
The transaction and isolation level have to be defined at the code level; the locks are at the DB level.
This is the most frequently asked interview question in concurrency, buddy.
@@ConceptandCoding For isolation, will we be writing code to get the shared/exclusive lock? What does that look like in Java?
What are good books to refer to for concurrency?
@@UtkarshSingh-cb8fq any SQL book is okay for it.
When you do a SELECT query, it puts a shared lock.
When you use UPDATE, DELETE, or SELECT ... FOR UPDATE, it puts an exclusive lock.
@@ConceptandCoding So when you say a shared lock is put at the start of a read and held until the transaction completes, is it the DB's responsibility to hold the shared lock until the transaction completes? And how does the DB know about the transaction, like how many statements/operations are in it?
@@UtkarshSingh-cb8fq The DB does not know how many statements there are; the DB only knows whether a transaction has started, aborted, or committed.
I think in optimistic locking, reading does not acquire any kind of lock, not even shared locks. But I am a little confused about how the DB then ensures the Read Committed isolation level; maybe that is why you are saying it does acquire a shared lock while reading the data.
Hi @ConceptandCoding
I have one question: for Read Committed, you mentioned that dirty reads are prevented because a write lock is acquired by one transaction and no other transaction can read the row during that time.
But I wanted to understand what happens when 2 transactions try to read and write at the same time: which transaction gets the lock first, and how is this decided?
Hi Manisha
Hi Shrayansh,
In the case of optimistic locking, if transaction A gets the read lock and after that transaction B also gets the read lock, no one will be able to get the write lock, right?
That is also a deadlock state, with just one row, right?
Transaction A will get it, because the lock is taken by TA and it will apply the exclusive lock; the other transaction can't.
Hey Shreyansh,
Great video, just one minor clarification related to optimistic concurrency control.
Both users will read that the seat is free, with version 1.
In your example, you said the first user will take the exclusive lock till the end of the transaction, and if the other user tries to update, he will first check the version; if the version is different, roll back and try again.
One doubt here: both try to read the row in parallel and both get the shared lock. Now, if both try to take the exclusive lock in parallel, will the database handle this by giving one transaction the exclusive lock while the other waits? Am I right that it is the database's responsibility to grant the exclusive lock to one particular transaction when many are coming in parallel?
Yes, it is the database's responsibility to grant only 1 exclusive lock to 1 txn at a time.
And just one correction, buddy: in optimistic concurrency control no lock is required (so that SELECT FOR UPDATE is generally a plain SELECT; no lock is required).
In the comment section there is one long discussion; see if you can find it, as that will clarify things. If not, let me know and we will connect and try to clarify, buddy.
Hi Shrayansh, the default isolation level in MySQL is REPEATABLE READ. So, if we have to use optimistic locking in MySQL, is it required to change the isolation level to READ COMMITTED first?
Optimistic runs below Repeatable Read.
We can define the isolation level while creating the txn.
Hi Shreyansh @ConceptandCoding, you skipped one very important scenario: how Repeatable Read does not solve phantom reads. Can you give the same example using the lock strategy?
let me check
@@ConceptandCoding Can you please explain, by taking a scenario of two transactions T1 and T2, how Repeatable Read is not able to solve the phantom problem?
If the database is also replicated and scaled, will this still work?
Great playlist, but I am confused: in some places it is written that in OCC no locks are used for reads, not even shared locks, to maximize read concurrency, and only exclusive (X) locks are used to perform updates on a row. Can someone please clarify?
@@sumurthdixit8482 In OCC, when we read the row we do not put it in any transaction, so no locks are placed.
Only at the end, after its work is done and it wants to update the data in the DB, does it use a transaction, because we want the rollback feature, right?
That's where the exclusive lock is taken, and before the update it also does the version check, to ensure no other transaction has updated the DB.
How can one transaction read uncommitted changes? Will they be under the same session?
How do you decide when to use which isolation level?
Based on the trade-off between consistency, concurrency, and performance that your application requires.
@@ConceptandCoding Thanks for your reply. To help me understand better, can you give me some examples?
Nice video :-)
If I have multiple read transactions and also a write transaction present at the same time, will the DB prioritize the write transaction over the read transactions?
No prioritization as such that I am aware of.
I am trying to implement this project as part of my resume. I am a fresher with 0 experience. Which type of locking should I add in my Spring Boot application: optimistic or pessimistic?
How can you say that consistency is high in the Read Uncommitted isolation level? Say transactions T1 and T2 exist, and T2 made some change, didn't commit, and rolled it back; there would be a dirty read by T1, which is an inconsistency, because the latest changes are not in the DB.
One doubt, Shreyansh: in the Read Uncommitted level, any transaction can come and change and read the DB as it pleases, so I think it should have the least consistency, no? Can you explain this to me?
Yes, I have mentioned this in a comment and pinned it too. You are right.
Great video as always Shrayansh!! Thanks a lot!
I had one doubt though: how do we go about acquiring locks in the case of replicated DBs? For example, let's say I acquired an exclusive lock on row1 in db1, but some other server gets redirected to db2, and the user is able to update row1 in that replica. Can you please explain this briefly?
Thanks!
There are distributed lock mechanisms, like using ZooKeeper, which will send a message to all the active DBs to put a lock on this data.
There are other mechanisms too; I may cover those in a separate video, buddy.
@@ConceptandCoding Ohh..ok. Thanks for the response!
@@ConceptandCoding Isn't ZooKeeper used to hold locks across multiple nodes of a microservice? I don't think ZooKeeper has any relation to the database or its replicas.
Looks like the consistency arrow is wrong in the isolation-level diagram.
Next to your table of transaction isolation levels, it should be concurrency going from high to low. Consistency goes from low to high, top to bottom, in your table.
You are right; I have pinned that in a comment too.
@@ConceptandCoding If possible, could you please add asterisks (*) to the videos? The absence of asterisks is causing confusion.
@@harshagarwal_net * Sorry, I did not get it. For what reason? Could you please elaborate?
Hey, nice tutorial, but some of the things taught here are wrong. Optimistic locking doesn't use any lock. And what do you mean by "the DB puts a shared lock and releases it after reading"? What's the point of putting a shared lock when it can just read?
Right, and the same has been discussed in the comment section. Do check it out; hope that will clarify your doubt.
i love you shrayansh
Concurrency is low for Serializable and high for Read Uncommitted, right?
Yes right
Non-repetetable read ❌
Non-repeatable read ✅
Hey, why is deadlock not possible in OCC? If it uses Read Committed, it still acquires an X lock on write, and if another transaction acquires a lock on another row, deadlock is still possible, right?
Also, can you share the notes for system design as well?
Read Phase: The application reads the data along with a version number or timestamp.
Compute Phase: The application processes the data without holding any locks.
Validation Phase: Before committing the transaction, the application checks whether the data has been modified by another transaction, using the version number or timestamp. If the data has changed, the transaction is aborted and retried.
Why Deadlocks are Unlikely in OCC
Short-Lived Locks: In OCC, locks are typically held only during the brief period of the commit phase. This minimizes the window in which a deadlock can occur.
Conflict Detection: Instead of waiting for locks to be released (which is a primary cause of deadlocks), transactions in OCC check for conflicts and either proceed or abort based on version checks.
Scenario Where Deadlocks Can Occur
Even in OCC, deadlocks can occur: if two transactions try to acquire locks on multiple resources simultaneously during the commit phase, a deadlock can occur. But this is very unlikely because of the extra validation phase.
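The three phases listed above can be sketched in memory with a version check guarded by a lock held only for the brief validate-and-write step (the class and method names here are invented for illustration):

```python
import threading

class VersionedValue:
    """OCC in miniature: read freely, lock only during the short commit."""
    def __init__(self, value):
        self.value, self.version = value, 0
        self._commit_lock = threading.Lock()

    def read(self):
        # Read phase: no lock held, just remember the version we saw.
        return self.value, self.version

    def commit(self, new_value, read_version):
        # Validation phase: the lock spans only this check-and-write,
        # so the window for lock contention (or deadlock) is tiny.
        with self._commit_lock:
            if self.version != read_version:
                return False  # conflict detected: caller aborts and retries
            self.value, self.version = new_value, self.version + 1
            return True

v = VersionedValue(100)
val, ver = v.read()              # both transactions read (100, version 0)
ok_a = v.commit(val + 50, ver)   # A validates against version 0 and wins
ok_b = v.commit(val - 30, ver)   # B fails validation: version is now 1
print(ok_a, ok_b, v.value)       # True False 150
```

Since only one short-lived lock exists and it is never held while waiting for another lock, the classic circular-wait condition for deadlock cannot arise in this sketch.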
Hey
Thanks for the videos, but I could not understand how these strategies manage a distributed architecture. If read calls are made to different DB servers, are all the transactions hitting the same node, even for reads?
There is a concept of distributed locking: if there are multiple instances of the DB holding the same record, services like ZooKeeper make sure that all instances of the DB put a lock on that row. There are more options apart from ZooKeeper.
@@ConceptandCoding Bhaiya, thanks a lot, you replied right away! Brother, how do we learn these technologies at the code level? Courses like Scaler are very expensive, and I can't find proper resources on the internet. Please reply, please...
Brother, if I could talk to you once, my life would be sorted 😀😀 ... there is so much confusion, what should I do?
@@navraj1995 You will learn it at the code level too.
One way: as you grow in your career, you will automatically get exposed to these things.
The second way: take a sample app and start building it (take guidance from your seniors or other people).
@@navraj1995 :) I am also learning, buddy.
When will the next video on two-phase locking come? Are you preparing it?
Yes working on it
Hi... interesting topic... but can I get one POC, meaning full low-level code?
I will write it, upload it to GitLab, and update you.
@@ConceptandCoding thank you sooo much. 🥰
How does the phantom-read issue occur in Repeatable Read isolation when the read lock is held for the whole transaction? Can anyone answer this?
It does not put a range lock, so a new row can be inserted in between.