By learning development, I literally want to learn the implementation and architecture of systems, not just how to use them with a different language/libraries. And that's what you are teaching. Love your channel.
Amazing content as always. The clarity you have on such important concepts is amazing!
Hi Arpit, totally loving these videos.
I had a few questions on WAL:
1. Suppose we configure the flush frequency as 2 seconds, and an update operation happens that gets appended to the log file but is not yet flushed to disk. If another process tries to read the same row in that window, it will get stale data, right?
2. When we append a log to the log file, it replies with some kind of acknowledgement saying the log was successfully written, right? What happens if this process crashes and the acknowledgement is not sent? Will it write the same log twice (due to a retry)? If so, how do we ensure that we discard duplicates while flushing logs to disk?
1. The updates are first made to the disk blocks the database has fetched into memory. An update might fail due to some constraint, so writing it to the log file without checking whether it is even possible is futile. Hence the flow is: fetch the disk blocks to be updated into memory, update the blocks in memory, write to the WAL, and flush it to disk (sketched below).
My bad: I should have explained this in the video.
2. To be honest, I am not sure. But my guess is that because writing to a log file is not a network call, it is not unpredictable. Hence you will always get an ACK for the write to the log file, so there is no need for a retry. Again, my best guess.
It would be awesome if we could deep dive into it sometime. If you find any resource, let me know.
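A minimal sketch of that flow, assuming a toy buffer pool and a toy append-only log. `ToyDB`, its block format, and the `key=value` WAL lines are all illustrative assumptions, not any real database's API:

```python
import os

class ToyDB:
    """Toy model of the flow above: fetch block -> check constraints ->
    update in memory -> append to WAL -> fsync WAL. Illustrative only."""

    def __init__(self, wal_path="toy.wal"):
        self.buffer_pool = {}              # block_id -> in-memory copy of the block
        self.disk = {}                     # stand-in for the data files
        self.wal = open(wal_path, "a")     # append-only log file

    def fetch_block(self, block_id):
        # 1. fetch the disk block into memory (if not already cached)
        if block_id not in self.buffer_pool:
            self.buffer_pool[block_id] = self.disk.get(block_id, {"value": None})
        return self.buffer_pool[block_id]

    def update(self, block_id, new_value):
        block = self.fetch_block(block_id)
        if new_value is None:                        # 2. constraint check: the update may
            raise ValueError("NOT NULL violated")    #    fail, so don't log it blindly
        block["value"] = new_value                   # 3. update the block in memory
        self.wal.write(f"{block_id}={new_value}\n")  # 4. append the change to the WAL
        self.wal.flush()
        os.fsync(self.wal.fileno())                  # 5. flush the WAL: durability point
        # the dirty in-memory block is written back to the data files later
```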
Ans 2)
In order to ensure that duplicate log records are not written to the log file during a retry, most databases use a technique called "idempotent logging". In idempotent logging, each log record is assigned a unique identifier, and the database uses this identifier to check if the log record has already been written to the log file. If the log record has already been written, the database simply ignores the duplicate record and does not write it again.
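A hedged sketch of that dedup idea, assuming each record carries a unique identifier assigned by the database; `IdempotentLog` and its methods are made-up names for illustration:

```python
class IdempotentLog:
    """Ignore retried appends by keying every record on a unique ID (sketch)."""

    def __init__(self):
        self.records = []       # what actually lands in the log file
        self.seen_ids = set()   # identifiers of records already written

    def append(self, record_id, payload):
        if record_id in self.seen_ids:
            return "duplicate-ignored"   # a retry after a lost ACK changes nothing
        self.records.append((record_id, payload))
        self.seen_ids.add(record_id)
        return "written"

log = IdempotentLog()
log.append(42, "UPDATE accounts SET balance = 90 WHERE id = 1")  # "written"
log.append(42, "UPDATE accounts SET balance = 90 WHERE id = 1")  # "duplicate-ignored"
```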
Thanks Arpit for these videos.
In the WAL file, if we don't keep the actual data with the Insert/Update commands, how will we get the data into a fresh new DB by applying the WAL file?
Writing to the WAL means we are persisting changes to disk, which could also be a slow operation. What is the way to make it faster?
Excellent explanation!
Nice explanation! I learnt a lot. Thank you!
This is just AWESOME
✅ finished .... 👌.... ❤
If we are flushing the data every 1 minute, let's say, how is consistency ensured? In case a read happens before the changes are flushed, it will fetch from the disk, but the disk doesn't have the latest changes yet.
Until it is flushed, it stays in memory. And reads are served from memory/cache first, so it won't go to disk to fetch it.
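A minimal sketch of that read path, assuming a simple in-memory cache sitting in front of the data files; `BufferedStore` and its flush interval are illustrative assumptions:

```python
class BufferedStore:
    """Reads hit the in-memory cache first; disk is consulted only on a miss."""

    def __init__(self):
        self.cache = {}   # pages held in memory, including unflushed (dirty) ones
        self.disk = {}    # pages the periodic flush has already written out

    def write(self, key, value):
        self.cache[key] = value        # stays in memory until the next flush

    def read(self, key):
        if key in self.cache:          # latest value, even if not flushed yet
            return self.cache[key]
        return self.disk.get(key)      # only a cache miss goes to disk

    def flush(self):                   # run periodically, e.g. every minute
        self.disk.update(self.cache)
```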
Netflix for developers.
nice analogy
Very true, with Arpit learning is fun
Hi Arpit Bhaiya, I am not able to understand what happens when the DB has to update a million rows. Can you please give a brief explanation of that?
Awesome content. I am loving it, Arpit. It is also useful in the case of a master-slave configuration.
Nice video! Had one question. You mentioned that committed data is not straightaway put into the database memory; instead, it is appended to the WAL file and then these changes are applied to memory asynchronously. But suppose I write some data in a commit and that query is appended to the WAL but not yet applied to memory. If I read from the database in the meantime, will I get the old data? How is this situation handled?
I think the write operation will happen in main memory, which is fast. It will be written to disk asynchronously.
Hi Arpit, great explanation man 🙌
Just one question: you've said that while adding a new entry to the WAL file, we generate a CRC, first add that to the WAL file, and then store the corresponding SQL command/data as the new entry. Isn't it possible that after we've written the CRC for the new entry and only part of the SQL command text, our system crashes, leaving this latest entry incorrect? But when the system is fixed and up again, we'll read this latest entry as a valid one because a CRC was assigned to it, and we consider it a valid entry because of that.
Isn't it better to first add the SQL command/data as a new entry into the WAL page and then, if it is written successfully, assign a CRC to this new entry? That way, in whatever scenario our system goes down, we'll be able to correctly figure out whether the latest entry was written correctly or not!
So you first want to read the entire record data and then read the CRC code? Don't you think that's not optimal? You first read the CRC code, and then you keep reading a fixed number of bytes and keep updating the check; this way you don't overuse memory. The CRC check is done sequentially on the file, so the point is to read only a fixed number of bytes of the actual record, apply the check, discard whatever record data you read, load the next chunk of bytes, and repeat the process. If your record is very huge, you are unnecessarily overloading the system by reading the entire record.
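A minimal sketch of that chunked check, using Python's `zlib.crc32` (which accepts a running value) purely for illustration; the record layout here, with a CRC and length in the header, is an assumption rather than any specific database's on-disk format:

```python
import zlib

CHUNK = 4096  # verify the record in fixed-size pieces instead of reading it whole

def verify_record(f, expected_crc, record_len):
    """Stream a WAL record through a running CRC and compare with the header."""
    crc = 0
    remaining = record_len
    while remaining > 0:
        chunk = f.read(min(CHUNK, remaining))
        if not chunk:                 # file ended early: the record was torn mid-write
            return False
        crc = zlib.crc32(chunk, crc)  # fold this chunk into the running CRC, then drop it
        remaining -= len(chunk)
    return crc == expected_crc        # mismatch => treat the entry as invalid
```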
What if there is a dirty read of the same data being written concurrently? How do we deal with that?
Hi Arpit, great explanation of WAL. Do you know what would be a great follow-up to this? ARIES for database recovery.
That's an excellent topic. Thanks for suggesting.
TL;DR - Crash recovery guarantees in ALMOST all DBs are enabled by the judicious use of WAL.
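To make that concrete, here is a hedged sketch of the recovery step itself, reusing the toy `key=value` log lines from the earlier sketches; real recovery schemes (e.g. ARIES, mentioned above) additionally track LSNs, checkpoints, and undo records:

```python
def recover(wal_path, data):
    """On startup, replay intact WAL records into the data store (toy sketch)."""
    try:
        with open(wal_path) as wal:
            for line in wal:
                line = line.rstrip("\n")
                if "=" not in line:          # torn final record from a crash: stop here
                    break
                key, value = line.split("=", 1)
                data[key] = value            # redo the logged change
    except FileNotFoundError:
        pass                                 # no WAL yet means nothing to recover
    return data
```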
Are you Gujarati, bhaiya?
Yes.
@@AsliEngineering Great, I guessed it from your surname. Btw, were you born and brought up in Gujarat? And are you Kathiyawadi?
@@AsliEngineering I like your in-depth videos on every system and concept.
Awesome content. It would be great to have a Discord/Reddit page for each of the videos for further discussion.
😍
😍