The best teacher I ever had was an old guy with 30+ years at AT&T. I was a new tech and I got to ride along with this guy and he taught me a lot. I worked on a digital crew that installed & repaired T1, T3 & HDSL.
Decoding is a mathematical set of ifs and nots: what you have and what you don't have in front of you to decipher. So the trick is to stand outside the box, add up everything you have, then see what you are missing. What you are missing is the final answer to the equation.
this prof. teaches great
Excellent material.
Thank you.
Professor Gallager
What's the meaning of "becimal" at 56:00?
Can anyone explain this?
I didn't understand why we can't code C(a)=0 and the next one C(b)=1?
Can anyone explain this?
We can, but then it's not going to be very clear to us where one codeword ends and the next begins.
For example, if we use a=0 and b=1, that's fine IF THOSE ARE THE ONLY TWO LETTERS WE WILL ENCODE THAT WAY.
If they are the only two letters encoded like this, then we know 101001 will be babaab
But what happens if we have more than two letters to encode? For example a=0 b=1 c=10
Then we don't know if the string 101001 is supposed to be babaab, i.e. (1 0 1 0 0 1, where every digit is a separate letter),
OR if that same string is supposed to be read as 10 10 0 1 -> ccab, with the TWO digits 10 representing the letter c wherever they appear.
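Here's a rough sketch of that ambiguity in Python (not from the lecture, just my own illustration): it enumerates every way the bits 101001 can be split into the codewords a=0, b=1, c=10, and several different messages come back.

```python
# Hypothetical illustration: with the code a=0, b=1, c=10, the same bit
# string can be split into codewords in more than one way.
CODE = {"0": "a", "1": "b", "10": "c"}

def all_decodings(bits):
    """Return every message whose encoding is exactly `bits`."""
    if not bits:
        return [""]
    messages = []
    for codeword, letter in CODE.items():
        if bits.startswith(codeword):
            for rest in all_decodings(bits[len(codeword):]):
                messages.append(letter + rest)
    return messages

print(all_decodings("101001"))
# Includes 'babaab' (1 0 1 0 0 1) and 'ccab' (10 10 0 1), among others --
# the receiver has no way to know which message was meant.
```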
This is why we have agreed on certain conventions. For example, an 8-bit code will have ONE letter for every 8 digits,
so a in 8-bit binary would look something like: 01100001
And b looks something like this: 01100010
so if we had a string of digits that represent letters like this: 0110000101100010
Let's say I want to send you a message that reads "ab". If we agreed beforehand that every group of 8 digits represents a specific letter, then we know that our message will be broken up like this:
01100001 01100010
and the correct message that is encoded is ab. I have successfully sent the message to you, and you were able to read that same message on your end.
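A minimal sketch of that agreement in code (hypothetical Python, assuming the standard 8-bit ASCII patterns shown above):

```python
# Hypothetical illustration of the fixed-length rule: every letter is
# exactly 8 bits, so both sides know where each letter starts and ends.
def encode_8bit(message):
    return "".join(format(ord(ch), "08b") for ch in message)

def decode_8bit(bits):
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

bits = encode_8bit("ab")
print(bits)               # 0110000101100010
print(decode_8bit(bits))  # ab -- the receiver reads back exactly what was sent
```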
if we did not know how many digits grouped together represented a letter, we might mistakenly decode this string as
0 1 1 0 0 0 0 1 0 1 1 0 0 0 1 0
or
abbaaaababbaaaba
Which is not the message I sent - I sent you "ab" and you received "abbaaaababbaaaba"!
Similarly, how do we know we should not decode that same string in groups of 2?
01 10 00 01 01 10 00 10
which, using a two-digit code such as a=00, b=01, c=10, would be
bcabbcac
So I sent you "ab" but you received "bcabbcac".
In both cases this happened because we did not agree beforehand on how the information I am sending you should be read.
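A quick sketch of those two wrong readings (hypothetical Python; the 1-digit and 2-digit tables are just the ones assumed in this comment, a=0, b=1 and a=00, b=01, c=10, plus d=11 to fill out the table):

```python
# Hypothetical illustration: the same bits read with the wrong group size
# come out as a completely different message.
bits = "0110000101100010"  # the 8-bit encoding of "ab"

one_digit = {"0": "a", "1": "b"}
two_digit = {"00": "a", "01": "b", "10": "c", "11": "d"}

def decode_fixed(bits, width, table):
    return "".join(table[bits[i:i + width]] for i in range(0, len(bits), width))

print(decode_fixed(bits, 1, one_digit))  # abbaaaababbaaaba
print(decode_fixed(bits, 2, two_digit))  # bcabbcac
```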
To make it super simple: we can encode messages using a code where each letter is simply given ONE other symbol, ex: a=0, b=1, c=2... This is called a substitution cipher, because we simply substitute one letter with another symbol. Substitution ciphers are very easy to break, however, so we don't really use them for practical applications.
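And a tiny sketch of that substitution idea (hypothetical Python, just mapping each letter to its position in the alphabet):

```python
# Hypothetical sketch of a substitution cipher: each letter gets exactly
# one symbol -- here, its position in the alphabet (a=0, b=1, c=2, ...).
import string

TABLE = {letter: str(i) for i, letter in enumerate(string.ascii_lowercase)}

def substitute(message):
    return " ".join(TABLE[ch] for ch in message)

print(substitute("cab"))  # 2 0 1
```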
Great, can I ask what book is used for this course?
Thank you again!
Proakis most probably!!
I remember absolutely struggling through understanding Proakis. In retrospect, I found Sklar's book gave more intuition, if maybe a bit less rigour.
6:15 did anyone notice he flipped the bird
What's the name of the teacher?
Prof. Robert Gallager
Okay, one sec: say a=0, b=01 and c=11. A non-prefix-free code, with a being a prefix of b, still passes the Kraft inequality. Any comments?
The Kraft inequality is a fundamental result in the theory of prefix-free codes. It states that for a set of codeword lengths {l1, l2, ..., ln}, if there exist prefix-free codes with those lengths, the following inequality must be satisfied:
∑ 2^(-l_i) ≤ 1, for i = 1 to n
In other words, the sum of 2 raised to the negative power of each codeword's length must be less than or equal to 1 for a valid prefix-free code. If this inequality is not satisfied, it's impossible to construct a prefix-free code with those codeword lengths.
The Kraft inequality is a critical criterion for determining the feasibility of constructing a valid prefix-free code, and it's widely used in data compression algorithms like Huffman coding to ensure the uniqueness of code assignments and decoding.
The example I provided is indeed a non-prefix-free code because 'a' is a prefix of 'b.' This violates the definition of a prefix-free code. However, the Kraft inequality can still be satisfied in this case.
Let's calculate the values for the codeword lengths and see if the Kraft inequality holds:
Length of 'a' (l1) = 1
Length of 'b' (l2) = 2
Length of 'c' (l3) = 2
Now, let's plug these values into the Kraft inequality:
2^(-1) + 2^(-2) + 2^(-2) = 0.5 + 0.25 + 0.25 = 1
As you can see, the sum comes out to exactly 1, which satisfies the Kraft inequality.
The Kraft inequality constrains only the codeword lengths, not the codewords themselves, so it can be satisfied by a code that is not actually prefix-free, as long as the lengths work out, exactly as in this example. What the inequality does guarantee is that some prefix-free code with those same lengths exists: for lengths 1, 2, 2 you could instead choose a=0, b=10, c=11, which is prefix-free. But none of this changes the fact that a prefix-free code must not have any codeword that is a prefix of another codeword if you want unambiguous, instantaneous decoding.
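To tie the two points together, here's a minimal sketch (hypothetical Python, not from the lecture) that computes the Kraft sum and, separately, checks the prefix-free property, for both the code in the question and a prefix-free code with the same lengths:

```python
# Hypothetical illustration: the Kraft inequality only looks at codeword
# lengths, while the prefix-free property looks at the actual codewords.
def kraft_sum(codewords):
    return sum(2 ** -len(w) for w in codewords)

def is_prefix_free(codewords):
    return not any(x != y and y.startswith(x) for x in codewords for y in codewords)

questioned = ["0", "01", "11"]   # a=0, b=01, c=11 -- the code from the question
alternative = ["0", "10", "11"]  # same lengths, but actually prefix-free

for name, code in [("questioned", questioned), ("alternative", alternative)]:
    print(name, "Kraft sum =", kraft_sum(code), "prefix-free:", is_prefix_free(code))
# Both give a Kraft sum of exactly 1.0, but only the second is prefix-free.
```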