Seriously, you are doing this so much better than my prof. Thanks, this helped a lot!
he is always better
Man, 12 years ago you did a better job than recent articles and lessons. I swear I understood the algorithm in less than 5 minutes, after days of searching for content descriptive enough to understand it. I can't thank you enough; I'm trying to write a new implementation of an image format! If I become Linus Torvalds someday, I'll make sure people know you helped me a lot haha!
this is therapeutic. I watched this way longer than I had to
Thank you so much for this video. This is by far the best example I have come across which explains such a complex method so easily. Thank you...keep up the good work
The video is great. But I don't get how you added 0.6875 and 0.015625 in left-to-right fashion. I need that kind of superpower too.
It's not that hard. Figuring out if you're going to have a carry is a lot easier than figuring out what the result number will be. So you just sort of eyeball ahead and guess if you're going to have a carry or not as you proceed from left to right.
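To make that concrete, here is a small Python sketch (the helper function is my own, not from the video) printing the two numbers from the example as binary fractions. In this particular case the set bits don't even overlap, so no carry occurs at all, which is why the left-to-right addition looks so easy:

```python
def frac_to_bin(x, bits=8):
    """Convert a fraction in [0, 1) to a binary-fraction string."""
    out = []
    for _ in range(bits):
        x *= 2            # shift the next binary digit into the integer part
        out.append(str(int(x)))
        x -= int(x)
    return "0." + "".join(out)

a, b = 0.6875, 0.015625
print(frac_to_bin(a), "+", frac_to_bin(b), "=", frac_to_bin(a + b))
# 0.10110000 + 0.00000100 = 0.10110100  (no overlapping 1-bits, so no carry)
```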
Thank you for this awesome video, a lot more pedagogical than lecture slides :) nice job!
excellent work! Many thanks for the straightforward explanation
thanks for this! wish my lecturers could explain things as clearly and make it this easy to understand :)
Thank you, you made me understand Arithmetic coding a lot better!
Thanks for the video, but I am looking for an example of a MATLAB arithmetic coding/decoding implementation!
Read a lot of explanations and they made no sense. Watched this video and got it straight away. Thanks, good video.
very nice and illustrative explanation. thanks a lot.
Excellent work! Many thanks for the straightforward explanation!
Great explanation, thanks.
Starting from 25:00, how do we know that there are 3 chunks? If we use EOF, then we don't provide the length of code. However, the amount of chunks depends on the length of code.
Once you hit the EOF, you stop decoding. You are done.
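A minimal sketch of that stopping rule, with a toy model of my own invention (the symbols, probabilities, and code value below are not the video's example). The decoder never needs the message length; it just keeps narrowing the interval until the EOF symbol turns up:

```python
probs = {"a": 0.5, "b": 0.25, "EOF": 0.25}

def intervals(probs):
    """Assign each symbol its cumulative sub-interval of [0, 1)."""
    lo, out = 0.0, {}
    for sym, p in probs.items():
        out[sym] = (lo, lo + p)
        lo += p
    return out

def decode(value, probs):
    ivals = intervals(probs)
    low, high = 0.0, 1.0
    message = []
    while True:
        width = high - low
        for sym, (a, b) in ivals.items():
            if low + a * width <= value < low + b * width:
                if sym == "EOF":           # hit EOF: stop, we are done
                    return "".join(message)
                message.append(sym)
                low, high = low + a * width, low + b * width
                break

print(decode(0.35, probs))  # decodes "ab", then stops at EOF
```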
Thanks a lot for your video!!! Really clear and really easy to understand.
So i understand how you decode it, but how are you gonna know the library or probabilities? So do you also have to send the library and the probabilities to decode it?
Yes, this would be saved in the file along with the coding result.
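For what it's worth, here is one hypothetical way to bundle the probability table with the encoded bits. The header layout (a length-prefixed JSON table, then a bit count and the payload) is entirely my own invention for illustration, not anything from the video:

```python
import json
import struct

def pack(probs, payload_bits):
    """Bundle a probability table with an encoded bit string (toy format,
    payload limited to 255 bits by the single-byte length field)."""
    header = json.dumps(probs).encode()
    body = int(payload_bits, 2).to_bytes((len(payload_bits) + 7) // 8, "big")
    return (struct.pack(">I", len(header)) + header
            + struct.pack(">B", len(payload_bits)) + body)

def unpack(blob):
    """Recover the probability table and the exact bit string."""
    (hlen,) = struct.unpack(">I", blob[:4])
    probs = json.loads(blob[4:4 + hlen])
    nbits = blob[4 + hlen]
    body = blob[5 + hlen:]
    bits = bin(int.from_bytes(body, "big"))[2:].zfill(nbits)
    return probs, bits

blob = pack({"0": 0.2, "1": 0.5, "2": 0.3}, "101101")
print(unpack(blob))
```

This overhead is exactly why, as other comments note, compressing a three-symbol message makes no practical sense; the model table dwarfs the payload.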
You *SAVED* my life!😩🤧 Khalid Sayood is lengthy 🤕
This video is amazing - thank you
Excellent explanation. Thanks!
Give this man a cookie :D Very well explained :D Thank you so much sir !
Fantastic video, thanks
Excellent Video! Well explained!
Very clear explanation, thx
Thank you, teacher 👍
Why do we need the probability mass function? Wouldn't the steps be the same for encoding and decoding with a uniform distribution?
very clear explanation! thanks a lot!
You saved the day man !!
Thanks !!
5.1 is not showing up in the list of videos
Very good explanation, thanks
Why is our encoded string longer than our original string? Hence what is the point?
It isn't. The encoded version only uses 6 bits as opposed to the 18 bits used by the unencoded string
What program do you use to make the videos?
Why does the binary number have to correspond to a subinterval of [a,b)? Isn't it enough information to know that the number lands in the interval? I feel like in the explanation of the decoding process, whenever you say "the purple interval is in such and such chunk", you could also have said "the purple NUMBER is in such and such chunk" and the argument would have gone through. The explanation on wikipedia for these extra bits is that otherwise you would need some external information to know where the encoded stream ends. For many purposes you will have the size of the encoded file, so this isn't a problem.
Furthermore, you can come up with some pathological examples where those extra bits make a message far exceed the entropy. For example, take the alphabet {0, 1, 2} where 0 has probability epsilon, 1 has probability 0.5, and 2 has probability 0.5 - epsilon. How is the sequence 1 encoded? It seems like it is sufficient to encode it simply as the binary digit 1. However, the encoding process described in the video requires you to pad with another -log2(epsilon) 0s. This is huge for small epsilon.
Ah. If I had watched the beginning of the next video I would have gained some insight.
Awesome !!!
Thanks
you solved my problem! Thanks!!!
really good
Nice video, awesome explanation, bad algorithm... :P
Best explanation evaaaa! :D
That's the pronunciation when it's a noun. It's being used as an adjective here, which has a different pronunciation.
awesome
master
pro tip: hold shift to draw straight lines
good vid tho
subtitle? :D hehe
That is a lot of work to encode a simple 210 message. There are only 3 possible symbols, so I would use a simple prefix-free code, say 2 = 01, 1 = 1, 0 = 00, which encodes 210 as the 5 bits 01100, compared to your 6 bits. (The shorter mapping 2 = 01, 1 = 1, 0 = 0 doesn't work: 0 is a prefix of 01, so the stream 0110 is ambiguous.) But unless this encoding scheme was already known on "both ends", the "mapping" table would also have to be sent. I am not sure what overhead arithmetic encoding has; for example, if the receiver knew nothing about how to decode it, what "extra" information would you have to send to ensure they can decode your encoded 210 message? Of course, with a message that short, why encode at all? Just send them 210 uncompressed.
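A sketch of that fixed-mapping idea, using a prefix-free variant of the table (I chose 0 = 00 so that no codeword is a prefix of another; with 0 = 0 the stream 0110 would be ambiguous). The names are mine, for illustration only:

```python
# Prefix-free code for the three symbols: no codeword is a prefix of another,
# so the decoder can commit to a symbol as soon as its buffer matches one.
code = {"2": "01", "1": "1", "0": "00"}

def encode(msg):
    return "".join(code[s] for s in msg)

def decode(bits):
    inverse = {v: k for k, v in code.items()}
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in inverse:        # prefix-free, so the first match is final
            out.append(inverse[buf])
            buf = ""
    return "".join(out)

print(encode("210"))            # 01100 (5 bits)
print(decode(encode("210")))    # 210
```

Arithmetic coding only starts to beat a fixed table like this on longer messages with skewed probabilities, where it can spend fractional bits per symbol.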
lol bro the point is to teach arithmetic encoding on a simple example-
dude you gotta take care of your OCD
awesome