Great videos. Drawn out figures make it much easier to comprehend. Learned more in these 15 minutes than I have in 3 hours in my assembly language lecture.
Mr. Harry Potter does his magic once again. Thank you SO much
From a technical standpoint, little endian makes the most sense.
It ensures that no matter the size of the value, the least significant byte is always at the same address. So e.g. accessing a value as a short or as a byte reads from the same address, with no offset needed.
B = Big endian
L = Little endian
    0  1  2  3
B: 00 00 00 FF
L: FF 00 00 00
Excellent teaching! Thanks from Texas.
Thank you very much
Thanks for the explanation, this is really good explanation about endianess
Thanks for sharing
This helped tons! Thanks a lot for the clear explanations and nice examples!
Very good lecture, more people should view.
Small definition-accuracy issue: the meaning and length of a "word" varies with the operating system, machine architecture, and programming language. Some use 2 bytes as the definition (and the bits per byte can depend on the machine; a few have used 9- or 12-bit bytes over the years), while others use the maximum atomic capacity of the CPU, e.g. 64 bits on a 64-bit CPU. x86 assembly still uses 16 bits due to the 16-bit 8086 and 80286 origin of the x86[_64] instruction set.
exactly - thank you
16:30 Why do the characters have the same output?
ㅡㅡ
I see it now. Endianness applies within a single data type like an int or a word, not across a sequence of values. So a single char is unaffected by endianness.
Very good explanations there.
Sorry if I'm wrong, but I think in the examples the addresses used to store the data are not aligned? (Just wanted to check my understanding, thank you!)
I wish I was one of your students, Professor Harry Porter.
💖
nice lecture......easy to understand
thank you
I can't agree with you on the word size. It's architecture-dependent: for 8-bit CPUs its width is 8 bits.
Other than that, it's nice that you use C. This way people can look into memory themselves, rather than relying on a runtime environment and the word of the authors of books or tutorials.
Hello Prof. Harry Porter,
Good day! Thanks for your video; it helped me a lot.
I want to learn everything about computer memory and the CPU. Can you please recommend a book or any other resources?
Much appreciated.
Good information
A double word is also called a long word.
A half byte is a nibble.
It would be great if someone could confirm whether I understood least significant byte and most significant byte correctly.
For the example given in the video, the 32-bit integer value 0x01234567 is stored as 01 23 45 67 in big-endian and 67 45 23 01 in little-endian. If the 32-bit integer value happened to be 0x87654321 instead, would big-endian be 87 65 43 21 and little-endian be 21 43 65 87? Thanks in advance.
pretty sure "sizeof" function is in standard library
Actually, “sizeof” is built directly into the C and C++ languages. It is a unary operator, like + and -. It is not a function, and it is not included from a library. For example, you can say “sizeof buffer”, although most programmers write “sizeof ( buffer )” instead. Note that you can’t leave off the parentheses for normal functions, so “foo ( buffer )” cannot be written as “foo buffer”.
thanks for sharing hhp3
What is the 0x for??
I believe it's C notation for saying that the number is hexadecimal. Not sure if this definition is correct, just from the shaky memory of a dude who just started learning :)
0x means hexadecimal. So every character after the 0x is a hexadecimal digit.
Please correct it: a word contains 16 bits (2 bytes), while a dword contains 32 bits (4 bytes), and so on...
I thought a WORD is 16bits
Correct. According to Intel and most electrical engineering courses, a word is exactly 16 bits. cse.unl.edu/~goddard/Courses/CSCE351/IntelArchitecture/IntelDataType.pdf
It depends on the architecture of the machine. The bitness of the CPU defines how many bits are in a word, because the CPU operates on words. So a 16-bit machine has 16-bit words, a 32-bit machine has 32-bit words, and so on.
&ip is a pointer to a pointer
but it matters not for the purpose of your video