Funnily enough, at 10:10 there might be a mistake - because the number "1 0000 111 0000", after adding "00000 111", lengthens to "1 0000 111 00 111", so it "feels" like there is an extra 0 between the two triples of 1s. And it doesn't feel like we needed to expand because the number got too big (256 and 512 - we are in between). But I didn't have time to check it.
Yeah, somehow that 0 got in between. I didn't notice this while editing, so thanks. I'll pin this comment.
Somebody else also noticed that the condition in the C version of the algorithm is wrong.
`str[i] < '0' && str[i] > '9'` will always return false, since it's checking if str[i] < 48 and str[i] > 57, which is never true. The condition should be `str[i] < '0' || str[i] > '9'`
My apologies for these mistakes.
your byte format sucks bruv 😐
@@zionmelson7936 I was formatting 1 and 0 separately, so one could see there was additional number there. I didn't go for actual formatting like it should be.
@@CoreDumpped No worries, Core. Programming is hard.
This channel is criminally underrated. This is top-tier content, for free.
To everyone in this chat, Jesus is calling you today. Come to him, repent from your sins, bear his cross and live the victorious life
You're absolutely damn right, my friend
@@idehenebenezer we got people glazing Jesus before GTA 6
"And on this channel, we hate black boxes."
*subscribed*
It's funny that right now at my job, I am dealing with serializing ASCII characters and you are making this video. I'm really glad I'm here George. Nicely done.
I'm learning C and tried to do this myself; I kind of failed, and right after that he makes this video
how did you send a comment 6 hours before the video uploaded?
@@vladsiaev12 they pay for early access
@@vladsiaev12 probably a member of the channel
While on the topic, I know it's a bit early for the channel to explain it now, but whenever you get to architectures, please don't forget endianness explanation, there are always explanations of how but not of why. Great video as always!!
Yeah, there is a video about endianness already on the list.
Ah, that Little Endian vs Big Endian discussion. ;)
There is simply no "why": a computing machine has to pick one of the two orders, and either one is a valid choice
A video on how computers represent negative and floating-point numbers. That would be amazing!
Jesus is the only way to salvation and to the father.
Please repent today and turn away from your sins yo escape judgement 🙏🙏 There is no other way to get to the father but through him.
@@idehenebenezer I cannot tell if this is a funny way of saying that my idea is insane, or if this is genuinely an ad for Christianity.
For negative numbers look into two's complement, and for floating-point numbers look into IEEE 754
@@xM0nsterFr3ak I figured out the basics, but a video on how that stuff is actually dealt with in the CPU would be amazing!
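For the curious, here is a tiny sketch of the two's-complement rule mentioned above (negate by inverting the bits and adding one) - the helper name is made up for illustration:

```c
#include <stdint.h>

// Two's complement negation: flip every bit, then add one.
// E.g. 5 = 0000 0101 -> ~5 = 1111 1010 -> +1 = 1111 1011 = -5.
int8_t twos_complement_negate(int8_t x) {
    return (int8_t)(~x + 1);
}
```

Reading the same bits as unsigned gives 0xFB (251), which is the whole point: one bit pattern, two interpretations.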
This is not casting, this is converting. Casting is a grammatical operation (forcing the compiler to treat data as having a certain type, but not actually doing any conversion).
Casting sometimes requires conversion.
“10” - 2 in JavaScript both casts *and* converts “10” into 10 in order to return 8
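In C terms, the distinction in this thread can be sketched like this: a conversion computes a new value, while reinterpreting the bits leaves them untouched (function names here are made up for illustration):

```c
#include <stdint.h>
#include <string.h>

// Conversion: computes a new value in the target type (1.5f becomes 1).
int convert_to_int(float f) {
    return (int)f;
}

// Reinterpretation: same 32 bits, merely viewed as an unsigned integer.
// memcpy is the portable way to do this without aliasing violations.
uint32_t reinterpret_bits(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    return bits;
}
```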
Another way to do:
1. Take the string as argument
2. Access every character
3. Use fixed values with switch cases for every character till '0' to '9'
like
switch(str[i])
case '1' : 001
4. Do bit shifting to create a BCD value containing all characters
5. Convert BCD to binary
6. return binary
It may or may not be faster
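The steps above can be sketched roughly like this (a sketch only - it uses arithmetic on the character instead of a literal switch, a plain loop for the BCD-to-binary step, and assumes at most 8 digits; the function names are made up):

```c
#include <stdint.h>

// Steps 2-4: pack each digit of the string into a 4-bit BCD nibble.
uint32_t str_to_bcd(const char *s) {
    uint32_t bcd = 0;
    for (int i = 0; s[i] != '\0'; i++)
        bcd = (bcd << 4) | (uint32_t)(s[i] - '0');  // shift in the next nibble
    return bcd;
}

// Step 5: convert the packed BCD value to plain binary.
uint32_t bcd_to_binary(uint32_t bcd) {
    uint32_t result = 0;
    for (int shift = 28; shift >= 0; shift -= 4)
        result = result * 10 + ((bcd >> shift) & 0xF);  // leading zero nibbles are harmless
    return result;
}
```

Note that this naive BCD-to-binary step still multiplies by 10, so the detour through BCD buys nothing by itself - avoiding the multiplications requires a shift-and-add scheme such as reverse double dabble.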
Once you get into SIMD instruction extensions, then a plethora of performance optimizations become available to you.
This channel is pure gold.
Amazing. I’m literally addicted to learning like this through your videos. They’re awesome! I can’t wait for the next one, and yes, I would love a video on converting the binary values back to a string, to understand how the print function works!
Not the topic I expected after the last videos, but still a very welcome one.
I talked to my colleagues about this exact problem, specifically the one you mentioned in the end, great video!
I love channels that demystify these things
tks
Just want to say that you are the one I was searching for. You answer the same questions as mine, and in the way I wanted. Hope you get better known
Person reveal. You're a young lad. One of those prodigies I keep hearing about.
I'm really happy I found this channel... I somewhat knew how it worked, but this just makes it really clear. You are great at explaining things. I am eagerly waiting for more videos
Revolutionary idea of getting the actual number
I can sleep in peace now, I had exactly this question today and yes chair I was looking for double w.
This is the way. Would love to see a performant way to do the same with floating points numbers. This kind of video is what I really like to watch.
Using the IEEE-754 binary floating-point 32 or 64 format, you would have to manually decode the floating point. First bitcast the float to an unsigned integer of the same size, i.e. float -> u32 or double -> u64, then using the encoding specification you extract the sign, exponent, and mantissa from the integer.
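A minimal sketch of that decoding for the 32-bit format (field layout per IEEE-754: 1 sign bit, 8 exponent bits biased by 127, 23 mantissa bits):

```c
#include <stdint.h>
#include <string.h>

// Decode the three fields of an IEEE-754 single-precision float.
void decode_float(float f, uint32_t *sign, uint32_t *exponent, uint32_t *mantissa) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);      // bitcast: float -> u32
    *sign     = bits >> 31;              // 1 bit
    *exponent = (bits >> 23) & 0xFF;     // 8 bits, biased by 127
    *mantissa = bits & 0x7FFFFF;         // 23 bits, implicit leading 1 for normals
}
```

For example, 1.0f decodes to sign 0, exponent 127 (i.e. 2^0 after removing the bias), mantissa 0.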
This is so well explained, I don't think I'll ever be able to forget this.
Great video! I would really like to see a video explaining the problem with null values inside languages and how to avoid them, that would be very educative!
Beautiful explanation, especially if that code at the end. Thank you very much
This channel is perfect to watch alongside taking CS50 to start my programming journey. Pretty excited about understanding everything in this video and learning more. Thanks for the quality videos.
Another video! I'm glad I checked your channel, since there was no notification. Typical of YouTube, sadly. Though it probably has to do with the delay between the last part and this video. YouTube deprioritizes notifications if you normally have a one-week cadence and then suddenly release a video a month later. Honestly, being a YouTuber is a ton of work.
From now on I respect my computer, doing this whole process within microseconds...
Thanks for the best video...
great job thank you
i would love an explanation about formatting numbers into strings as well!
When it gets to converting decimal fractions as strings to floats things get a lot more complicated. Looking forward to seeing a new video about this case in the future!
Always high quality content 😊
The way I agree
This channel is very underrated
literally Str(number) - 0x30 for 0-9, Str(uppercase letter) - 0x41 for A-Z, Str(lowercase)-0x61 for a-z
Converting between the two is as simple as
char(lower) = char(upper) ^ 0x20
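Those offsets can be checked directly - a couple of one-line helpers (names made up for illustration):

```c
// ASCII layout: digits start at 0x30, uppercase at 0x41, lowercase at 0x61.
// Upper and lower case differ only in the 0x20 bit, so XOR toggles case.
int digit_value(char c) {
    return c - 0x30;            // '7' -> 7
}

char toggle_case(char c) {
    return (char)(c ^ 0x20);    // 'A' <-> 'a'
}
```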
You my friend have done the impossible. You have actually made programming make sense.
Your videos are a blessing!
Man I love this channel so much, this would've been so helpful back when I was learning to do this kinda stuff lol
It reminds me about the college times! I really like this stuff, thank you!
Thank you so much, this was a question I had from some time ago. I would love to see the continuation of this video :)
ASCII allows for the use of a bitmask to get the number itself. The probably preferred way to convert these BCD numbers to an integer is reverse double dabble. There's a wiki article about it. This algorithm gets rid of expensive and area intensive (depending on your architecture, first for CPU, second for FPGA/custom silicon) multiplications and relies on fast/small shifts and add/sub operations.
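A rough sketch of reverse double dabble for up to 8 packed BCD digits, under my reading of the algorithm (shift the combined register right one bit at a time, and subtract 3 from any BCD nibble that ends up ≥ 8, which halves the BCD value decimally at each step) - no multiplications anywhere:

```c
#include <stdint.h>

// Convert packed BCD (up to 8 digits) to binary using only shifts and subtractions.
uint32_t reverse_double_dabble(uint32_t bcd) {
    uint64_t reg = (uint64_t)bcd << 32;   // layout: [ BCD (high 32) | binary (low 32) ]
    for (int i = 0; i < 32; i++) {
        reg >>= 1;                         // move one bit from the BCD side into the binary side
        for (int shift = 32; shift < 64; shift += 4) {
            uint64_t nibble = (reg >> shift) & 0xF;
            if (nibble >= 8)               // fix up the nibble so the BCD half stays decimal
                reg -= (uint64_t)3 << shift;
        }
    }
    return (uint32_t)reg;                  // low half now holds the plain binary value
}
```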
Nice, I will show my class this. Well explained.
dude, youre going to the moon, and i'm liking your videos all the way there
This is actually easy, the way I think about it:
Since "0" is 48, we first subtract 48 from each character to get the real value, then multiply by the correct power of 10. So once "1234" is input, turn the digits into binary - 1, 10, 11, 100 - then multiply and add (the computer does need to know what index to start with, which isn't so hard), and we get the number before the next one is input. This process happens so fast we cannot notice it
My man your videos are awesome. Can you do an explanation on how the clock is used to move the process forward from the transistor level? For example, how do transistor gates use the clock to take the next instruction into the instruction register at the right time?
Subscribed, wanna see the second part
Great content as always!
Great, that's a perfect illustration of what happens internally in the atoi() function. Ah, I noticed there is a minor difference between converting a numeric string to a binary integer vs converting a numeric string to a BCD number: multiplying by 10 vs shifting by 4 bits (since BCD represents each decimal digit in 4 bits).
I find it rather interesting that on the IBM mainframe there exists a single machine instruction (CVD) which can convert a numeric string (up to 31 digits) to a BCD number. Likewise, there's another instruction (CVB) which can convert these BCD numbers into integers.
your AI voice is fine. dont change it... GOLD content as always!
0:07 Yes... Just yes. Maybe this will be SUPER slow but yes)
I have this in mind:
1. Represent each character in string with 4-bit binary number (Using Unicode)
2. Make BCD number from all characters
3. Convert BCD to binary.
Now you have a number.
For example:
"532"
1. || "5" = 0101 || 3 = 0011 || 2 = 0010 ||
2. 0101 0011 0010
(BCD to Binary algorithm)
3. "532" = 1000010100
__________
Now I'll watch video)
----------------------------------
Ps. Subtracting 48 is a very clever solution!! Now we can do the same thing as I did.
But initially I just wanted to use a table to store the Unicode value and the number, like this:
| Unicode Number | Number in Binary |
And use this table to convert each symbol to a number, but yeah, we can just subtract the encoding of '0' to get the number!
Beautiful!!!! Thanks
This is just soo beautiful. 😍
Thanks for your video
Yes we need that too and don't forget to upload the remaining part of cpu episode
Hi, thanks for this video. What tools do you use for your animations? They are amazing.
Amazing! Thank you very much for doing this!
this channel is really good!
epic explanation
Simply awesome
Thank God I never thought about this before I saw the title of this video
I would like a future video about converting an int to a string, but I am more interested in the much more complicated process of converting a float to a string.
Nicely done, thank you ❤
Thanks again for this amazing content
This is actually easy, the way I think about it:
Since "0" is 48, we first subtract 48 from each character to get the real value, then multiply by the correct power of 10. So once "1234" is input, turn the digits into binary - 1, 10, 11, 100 - then multiply and add (the computer does need to know what index to start with, which isn't so hard), and we get the number before the next one is input. This process happens so fast we cannot notice it
I mean, we could even start backwards, just telling the computer how long the number is ourselves, but that means we have to pass the length as a parameter, so this way is better
Arigatouu keep em coming 🔥🔥🔥
Great video, as always. Got me curious to understand how the process works with negative numbers.
Well, actually, there is a limit for integer numbers (as well as floats), at least in C. And there are also negative numbers. So a more proper function is a little bit more complex.
I wrote mine like this:
int64_t StrToNum(char *Str) {
    int64_t Result = 0;
    uint32_t Index = 0;
    bool IsNegative = false;
    if (Str[0] == '-') {
        IsNegative = true;
        Index = 1;
    }
    while ((Str[Index] != '\0') && (Str[Index] >= '0') && (Str[Index] <= '9')) {
        Result = (Result * 10) + (Str[Index] - '0');
        Index++;
    }
    return IsNegative ? -Result : Result;
}
Underrated channel
"Shipping to Alaska, Hawaii, Puerto Rico, and International addresses is currently not available." -> pity I was actually looking for a new chair
Anyway, good video, it's nice to see easier topics now and then.
I had to learn this when making my own programming language and i wish i had found this video sooner .-.
Banger video once again!
Can you make a video about how to virtual memory works in OS? Thanks a lot. All of your videos are so useful.
please do explain the process from getting from an integer to "string"/output. Keep up the great work!
add 48 to it and convert to char
11:50 ...yes please! :)
My guy delivers the most random stuff when I actually needed the same stuff explained, mindblowing 😮
I work on a php application where someone in the past reimplemented the string to number conversion...
And if you have questions...
Yes, it involved a loop with a bunch of ifs to check each digit
Yes, they messed it up
Yes, changing the usages of the function to "(int)$value" fixed a lot of bugs
Yes, the person who did it (according to git blame) still works there, but was promoted to manager
No, we don't do code reviews or anything like that
11:55 spoiler: it's the double dabble. Look for Sebastian Lague's "visualizing data with displays" video
The conditionals you add at 11:06 are incorrect, the C code should have || instead of &&, and the Python code should have a ‘or’ and check both ends the same way the C code does; the way you wrote the C condition can never possibly trigger to raise the error you intend, because a character can't possibly be below 0 and above 9 at the same time, and the Python condition will behave completely differently than the way you intend, because first the “‘0’ < char” will evaluate to a boolean, and thus will never trigger the “char > ‘9’” because, just like in C, booleans are either 0 or 1. And even if the Python code behaved the way you intended, it's still missing a ‘not’, so it would trigger when the char IS numeric, not when it's NOT.
I believe it's also a better idea to return null in C in this case, because -1 is a valid integer and is thus much more difficult to detect as an error value.
Overall, still a great video! You explain the computer science concept very well, which is ultimately the value this video provides, and I'm perfectly happy to overlook erroneous code examples because this is not a programming tutorial. I've learned an incredible amount about computer science from your videos already, and this video has been no exception.
Yeah, I already pinned a comment referring to this. My apologies, thanks for the feedback.
The sequential method in the video also handles the case when the input string is something like '0987'
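A corrected version of the C check discussed in this thread might look like this (a sketch, assuming the function shape from the video - the name is made up):

```c
// Reject the string as soon as a non-digit appears.
// The fix from this thread: || instead of the mistaken && (which can never be true).
int str_to_int(const char *str) {
    int result = 0;
    for (int i = 0; str[i] != '\0'; i++) {
        if (str[i] < '0' || str[i] > '9')
            return -1;                       // error sentinel, as in the video
        result = result * 10 + (str[i] - '0');
    }
    return result;
}
```

As the comment points out, -1 is itself a valid integer, so an out-of-band signal (a NULL-style return or a separate error flag) is the more robust design.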
Before watching the response, this was the algorithm I came up with:
```
base = 10
str = "1030"
println(string_to_int(str, base))
fn string_to_int(str: string, base: int) {
let number = 0
each (index, char) of str {
let digit = lookup_from(char)
let exp = base ** (len(str) - index - 1)
number += digit * exp
}
return number
}
```
Please make a video about big and little endianness, I always forget the order and don't understand the order of bits itself in comparison to the byte order.
please continue!
Great video, and it is a very introductory version of the algorithm. However, it is not an efficient algorithm, because the ALU can't parallelize the multiplications and the additions. You should see Andrei Alexandrescu's lecture on this! It could be a cool continuation of this video.
Thanks for the advice, I'll take a look at the lecture as soon as I get some free time. I'm assuming it is related to SIMD but if not I'm sure I'll enjoy it anyways.
Amazing video
Thanks a lot❤❤
I would like you to explain and give an example of the end process that you asked about.
Please create a video explaining how CPUs handle floating-point numbers.
I would love to see an explanation of the reverse!
I think it's more intuitive to first multiply the digits by their magnitudes of 10 and then add them up. After that, I think the better algorithm you showed in the video would have been clearer.
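Both orderings give the same result; here is a small sketch contrasting the two (the second, Horner-style loop is effectively what the video's code does - function names are made up):

```c
// Intuitive version: multiply each digit by its explicit power of ten, then sum.
int via_powers(const char *s, int len) {
    int pow10 = 1, result = 0;
    for (int i = len - 1; i >= 0; i--) {   // walk from the least significant digit
        result += (s[i] - '0') * pow10;
        pow10 *= 10;
    }
    return result;
}

// Horner's method: left to right, multiply the running total by ten each step.
int via_horner(const char *s, int len) {
    int result = 0;
    for (int i = 0; i < len; i++)
        result = result * 10 + (s[i] - '0');
    return result;
}
```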
Love that sneaky "subscribe"❤.
Please please do a video explaining operating system
at 11:08 shouldn't we use || instead of && ?
Yes. The same mistake is in the python code on the bottom.
you are the best..
keep this up! good video ❤
You are the best
Kindly give a clue at the end of the video about when the next video will be released?
Please make a video about the reverse function, Binary to Numerical String.
I love your channel
How to convert a number to a string: the key instrument is integer division. Let's consider the number 4327. Dividing by 10, we obtain 432 with remainder 7. Now, we already know how to convert a single digit to its corresponding ASCII code: just add 48, or ord('0'). So in this one step we obtained the so-called least significant digit (7) and are left with 432. Now, we just have to repeat the same procedure until we are left with no more digits (when the last division yields 0 as the quotient).
PS: Integer division is just a single processor instruction and actually gives both the quotient and the remainder in one go so it's pretty fast.
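The procedure above, sketched in C (the digits come out least significant first, so the buffer is reversed at the end; the function name is made up, and the caller must supply a large enough buffer):

```c
#include <string.h>

// Build the decimal string for n, least significant digit first, then reverse.
void int_to_str(unsigned n, char *out) {
    int len = 0;
    do {
        out[len++] = (char)('0' + n % 10);  // remainder is the next digit
        n /= 10;
    } while (n > 0);                        // do-while so 0 still yields "0"
    out[len] = '\0';
    for (int i = 0; i < len / 2; i++) {     // the digits came out backwards
        char tmp = out[i];
        out[i] = out[len - 1 - i];
        out[len - 1 - i] = tmp;
    }
}
```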
can you do kernel vs OS
I've done string to float/double and int myself, but with a different approach. Stuff skipped in this video:
- Sign of a value: to apply the sign, multiply the output value by -1 if a '-' is found at the start of the string.
- Decimal parsing: the same way as string to int, but done in 2 phases - once a '.' is found, instead of multiplying the value, just divide each decimal digit by a growing power of 10 on each iteration.
- And check that the value is not too large.
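A sketch of that approach (sign first, then the integer part, then fractional digits scaled by shrinking powers of ten; no overflow or exponent handling, and the repeated multiplication by 0.1 accumulates rounding error - real strtod implementations are much more careful):

```c
// Parse an optional sign, an integer part, and an optional ".fraction".
double str_to_double(const char *s) {
    int sign = 1;
    if (*s == '-') { sign = -1; s++; }      // remember the sign, skip the '-'
    double value = 0.0;
    while (*s >= '0' && *s <= '9')
        value = value * 10.0 + (*s++ - '0');
    if (*s == '.') {
        s++;
        double scale = 0.1;
        while (*s >= '0' && *s <= '9') {
            value += (*s++ - '0') * scale;  // each step is one decimal place smaller
            scale *= 0.1;
        }
    }
    return sign * value;
}
```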
Yes please, make those 2 videos that you talked about in the video! Great job!! And may I give you a suggestion? Why don't you also make videos on DSA? Your animations are great! That way everyone will be able to understand completely. And one more thing, can you please make the next video on recursion?
If you mean Data Structures, I already posted a video about ArrayLists. More videos of that kind are already on my list.
@@CoreDumpped yeah yeah, I mean like binary trees and heaps, those advanced topics that are rare on YouTube
I'm guessing that in order to convert an integer to a string you have to do the reverse process: instead of multiplying, you divide the number, take the remainder, and add '0'
I've always found it rather beautiful that ASCII encodes decimal characters as 0x30 to 0x39 in hex, so mentally you can just remove 0x3 and know what the number is.
Dude u R goated