How are images represented in C using data structures?
- Published 25 Aug 2024
- This video explains how images are represented and stored as a data structure in C, and how to access the pixel data from memory using C structures.
[1] Voice Over generated using, ,
[2] Animations Created & Rendered using, github.com/3b1....
#C #OpenCV #Image #datastructure #computervision #hypervision
This channel is amazing. May this channel grow.
Thanks
I'm convinced you Pascal-cased the struct keyword in the thumbnail to subtly fuck with me
Please release more videos.
And thanks a lot for this video!
Thank you, I will
This video was amazing :) Please make more content like this. I would support you on Patreon if you did, and I'm sure many others would too if you keep making content this good.
Nice one
Keep making videos on computer vision.
Beginner here.
Sure I will, Thanks
Why no new videos? Please make a video on the OpenCV C library.
nice explanation keep going
thanks
Please use another voice next time. There are many different AI generated voices that you can pick from. For example tortoise-tts.
That's unbelievable
Please tell me what animation program you used for making videos like this
Manim , www.manim.community/
I have a project based on what this video explains. I am new to C and don't know if you could guide me in writing the code. Nice video!
I think it's wrong to say "an integer requires 2 bytes of storage"; that's not true if you're talking about integers. An integer requires 4 bytes, the same as the float type. The type that requires 2 bytes is short, and some may think they mean the same thing, but they don't.
Other than that, I think the video was really nice. I would love to see more videos like this one.
Back in the day, when processors were 16-bit, an int was 2 bytes. Nowadays, it's most often 4 bytes on both 32-bit and 64-bit systems.
How can someone be this confidently wrong?
First of all, "an integer requires x bytes of storage" doesn't specify what type of integer we are talking about beforehand, so it is not necessarily wrong. char, short, int, etc. are all integer types, so what exactly is wrong about saying that the integer type used for image formats may require 2 bytes of storage in most cases? char, short, long, etc. are all terms specific to C and C-like languages; there is nothing that says "int" or "integer" should be 4 bytes long.

The standard at most specifies that the int data type should be at least 16 bits, because that is the meaning C gives to "int", which doesn't mean that char and short are not integer types. On older systems you will find that compilers define int to be 16 bits, which is 2 bytes. This usually depends on the width of the architecture (16-bit, 32-bit, 64-bit computers...), the operating system, and whatever the compiler implementers decide is best.

Even today you can find discrepancies in what each type means across systems of the same architecture width. For example, on 64-bit Windows, long is 32 bits, the same as int, and to get 64 bits you need to use long long; but on Linux and other UNIX systems, long is already 64 bits. Does any of this mean that int and long are not integers because their sizes are inconsistent across systems? Obviously not, which is precisely my point: your statement is wrong and makes no sense, because the size of an integer doesn't determine whether it is an integer type or not. It just makes no sense, and I don't know how you got to that conclusion.
Btw, never assume how many bytes a data type takes up. A float might be 4 bytes on most systems today... but if you ever touch an older system or an embedded system, you might be in for a surprise, because maybe that computer doesn't even have a floating-point coprocessor, and now you don't even have floats.
@@AlFredo-sx2yy Mostly true. Although there were software libraries for floating point that were quite prevalent before FPUs were common, and even now they are still sometimes used for validation testing. Those were fun times back then, playing with individual bits to simulate floating point.