The little guys waving when you talk about them is very sweet
o/
o/
o/
o/
I SCROLLED TO READ THE COMMENTS AND JUST AS I DID THAT THE PART CAME BUT I MISSED IT CUZ I WAS NOT WATCHING BUT READING YOUR COMMENT BUT THEN I REALISED WHAT HAPPENED CUZ I HAD SEEN THE LABELS AND HEARD HIM SPEAK IT AND THEN I REWINDED VIDEO TO SEE IT
SO THANKS TO YOUR COMMENT I SAW WAVING
WITHOUT YOUR COMMENT MAYBE NO WAVING FOR ME
This is the first time I've seen an explanation of floats that also explains inf and NaN. Thank you.
There was another video I watched that talked about the reasoning that went behind the standard, and how they arrived at a solution that works best.
And the vid is called "How floating point works" by jan Misali. It's a good video
Something I never see anyone talk about is how they use the mantissa in NaN as unofficial "NaN codes" so you can track what caused the NaN. Because it's completely meaningless, it's not regulated by the IEEE so you're free to put whatever you want in there. It can be useful for debugging if the code isn't just "Attempted to perform arithmetic on a NaN"
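A minimal sketch of that idea in Python, packing a debug code into a double's NaN payload by hand (the bit masks are for IEEE 754 binary64; hardware and runtimes are free to canonicalize payloads, so treat this as illustration only):

```python
import math
import struct

def nan_with_code(code):
    # quiet NaN: sign 0, exponent all ones, quiet bit set, debug code in the low mantissa bits
    bits = 0x7FF8000000000000 | (code & 0x0007FFFFFFFFFFFF)
    return struct.unpack("<d", struct.pack("<Q", bits))[0]

def nan_code(x):
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]
    return bits & 0x0007FFFFFFFFFFFF

bad = nan_with_code(42)        # tag this NaN with "error code" 42
print(math.isnan(bad))         # True
print(nan_code(bad))           # 42
```

Arithmetic on the tagged NaN may or may not preserve the payload, which is exactly why this stays an unofficial trick.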
Always wondered about this. The first time I read about fixed-point and floating-point numbers, it got me intrigued about how floating-point numbers actually work. Thank you.
What a great video, it's funny how often people blame the programming language for weird precision quirks, when in almost all cases it's actually the IEEE 754 standard they're taking issue with.
Could be cool to do a discussion on how programmers handle currency in various languages, in order to avoid these precision errors.
It's funny how there's a programming language that makes #/0 = 0, and when bringing up how that breaks the IEEE 754 standard, the users just blamed JavaScript or Python for that.
But I think that most people see how that's not a great idea, and that programming language doesn't see much usage except in tiny, niche circles. The same language also gets rid of errors entirely, instead opting for undefined behavior that prevents faults, but causes much worse problems down the line.
Some languages limit precision to produce "correct" results.
0.1 + 0.2 in Python and most languages is 0.30000000000000004, which is correct for the IEEE standard.
PHP limits precision to 14 digits, which makes 0.1 + 0.2 display as 0.3.
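For anyone who wants to see it, a quick check in a Python console (the :.14g format here is just a stand-in for 14-significant-digit output, not how PHP actually formats):

```python
x = 0.1 + 0.2
print(x)            # 0.30000000000000004  (repr shows the shortest round-trip string)
print(x == 0.3)     # False
print(f"{x:.14g}")  # 0.3  -- rounded to 14 significant digits, the error disappears from view
```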
You should always handle currency with precision types like decimal or ints.
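A minimal sketch of the decimal-type approach in Python (the price and the 20% VAT rate are made-up example values; the point is that rounding only happens where you ask for it):

```python
from decimal import Decimal, ROUND_HALF_UP

price = Decimal("19.99")            # construct from strings, never from floats
vat_rate = Decimal("0.20")

net = price * 3                     # 59.97, exact
total = (net * (1 + vat_rate)).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(total)                        # 71.96 -- rounded to whole cents at a point you control
```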
For data formats with a fixed number of decimal places, always use integers for calculations then add a decimal point when printing
Binary-coded decimal was designed to represent decimal fractions exactly. However not every processor supports it.
You can't imagine how happy I was when I saw this pop up. Your videos are beautifully explained and animated, keep it up! :D
I have learned about floating point representation multiple times and this was the clearest explanation I have heard. I especially like how you explain some of the decisions behind the standard so they don't seem to come from nowhere.
0:44
Fun fact: 3.14 is π, 1.618 is golden ratio, 2.718 is e, 1.414 is √2, and 1.089 is a numerical value
It's the cosmological constant
Love this! Back in college in the 80's I learned assembly on a Cyber 750 with 60 bit words. One of our assignments was to write the floating point normalization algorithm that was built into the hardware. I remember making a strip of 60 squares out of masking tape on a table and moving pennies around to test my code. Not sure why I enjoyed that so much...
The Cyber 750 used one's complement representation, whereas most modern computers use two's complement representation.
There is another trick hiding in IEEE-754.
If you reinterpret it as a 32-bit integer and add 1 to it, the new bit pattern represents the next closest floating point value (except when the sign flips).
So you can iterate over all floats *in order* starting from 10000…0 to 01111…1 by adding 1 repeatedly.
Does this sequential order stability mean that greater than/less than comparisons can be done much more efficiently?
@@Delta0001-y Yes, that's actually by design, too. Just special-case NaN (all NaN comparisons must always return false: <, <=, >, >=, ==, while negated comparisons must always return true, so != and similar).
@Delta0001-y Yep, that is how float comparisons are made in hardware. (Kind of; it is always more complex.)
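Here's a small Python sketch of the trick, using 64-bit doubles since that's what Python floats are: reinterpreting the bits as an integer and adding 1 steps to the next representable value, and math.nextafter (Python 3.9+) agrees:

```python
import math
import struct

def bits(x):       # reinterpret a double as an unsigned 64-bit integer
    return struct.unpack("<Q", struct.pack("<d", x))[0]

def from_bits(b):  # and back again
    return struct.unpack("<d", struct.pack("<Q", b))[0]

x = 1.0
print(from_bits(bits(x) + 1))        # 1.0000000000000002
print(math.nextafter(x, math.inf))   # 1.0000000000000002 -- same thing
```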
I’m surprised that such a fundamental lesson of computer science has just been ignored by every computer science video or class I’ve ever seen.
Because you really don’t need to know this to learn how to code? Like all you need to know is “sometimes numbers with fractional portions are slightly different than what you think” and not explain floating and fixed point. Unless you want to make your own programming language, that is.
@ coding != computer science
@@jacob-4595 my bad
I've actually had this topic in a class in the first semester of computer science but it probably depends a lot on where you study.
@@ExzaktVid You don't even need to know how floats are represented to make your programming language today. For compiled languages you can just write a translator of a parse tree into LLVM IR, that already has f32 and f64 types available.
I am doing a class on C programming at my uni, and have been struggling to understand this topic lately. Brian, this is just in time. Thanks!
Wow, this is by far the best video explaining about floating point numbers. Thanks!
This was a fantastic visual guide
I'm currently working on a cpu planning to have fixed point decimals, but also floating point isn't out of the question
This video is great timing just for a refresher!
Real gold among all other videos. Awesome job 🎉
And for our next trick we will explain the fast inverse square root algorithm. Right?
lol. Thats how I got here.
those animations are incredible !
literally learned this in class today thank u
A really great work you do here!
Great succinct and clear explanation!
This video went a long way toward demystifying floating point numbers but one question will now haunt my dreams: why does IEEE specify that 127 be subtracted from the exponent instead of just using a signed 8- or 11-bit exponent? Two's complement has been known in computer science since the early days.
Thank you for this. Really concise explanation. Can you make a follow-up video about why this format was chosen? I vaguely remember it was because the representation is easy to compute with and implement in hardware, but I don't remember the exact reasoning.
Genuinely did not know float works like that until this. I've been using it with the thought that the point just moves automatically as you sum things up.
Amazing video quality showing the principles of floating-point numbers and the IEEE 754 standard.
Teaser for the next video: how is arithmetic performed on floating-point numbers?
+1 for explaining Infinity and NaN
When working with money in computer programs it's much easier to always use integers internally with the last two or three digits reserved for decimals. It's the closest thing to true numbers you can get while preserving precision at all times. You then calculate in sequences where you never do two calculations without rounding to the closest integer. This of course only works in isolated scenarios such as invoicing or accounting.
That's the simple way I've found to work with money. The other way is to use arbitrary precision that works with strings instead of integers or floating points, where the limitation is not architectural but in memory.
Numbers are never easy with computers.
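A rough sketch of that integer-cents workflow (the line items and 20% VAT are invented for the example; the key point is rounding back to whole cents after each step):

```python
lines_cents = [1999, 1999, 1999]               # three items at 19.99 each, stored as cents
VAT_TENTHS_OF_PERCENT = 200                    # 20.0%

subtotal = sum(lines_cents)                    # 5997
vat = (subtotal * VAT_TENTHS_OF_PERCENT + 500) // 1000   # 1199, rounded half up to a cent
total = subtotal + vat                         # 7196
print(f"{total // 100}.{total % 100:02d}")     # 71.96
```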
1:29 Why are the fractional bits not represented just like the integer bits? Wouldn't that make numbers like 2.8 representable?
The fractional bits are actually treated the same as integer bits; they're just scaled over 0 to 1 rather than 0 to 16. That's why fixed point numbers are often referred to as scaled integers, because that's basically all they are. You can take fixed point numbers and add them just like you would integers.
So even with what you're asking, using the values 0 to 16 still cannot represent the ratio 0.8. 13/16 is the closest you can get, which is 0.8125, the same value the video shows.
Edit: Maybe you're thinking about binary coded decimals?
Had the same doubt.
Instead of trying to represent the fraction 0.8, we would just use normal binary to represent the integer 8.
I am guessing we don't do this because, while the original decimal fraction can be recovered, operations like addition and subtraction don't work on that representation.
@@anomanderrake5434 We don't do that because it doesn't make sense. If integer 8 is implied to be 0.8 then how do you represent a value such as 0.08? Or any number smaller than 0.1 for that matter.
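A tiny Python sketch of a scaled-integer (fixed point) format with 4 fractional bits, showing the 13/16 ≈ 0.8125 result from the video and that addition is plain integer addition:

```python
FRAC_BITS = 4
SCALE = 1 << FRAC_BITS              # resolution is 1/16

def to_fixed(x):
    return round(x * SCALE)         # stored as an ordinary integer

def to_float(q):
    return q / SCALE

q = to_fixed(0.8)
print(q, to_float(q))                              # 13 0.8125 -- closest representable value
print(to_float(to_fixed(1.5) + to_fixed(1.25)))    # 2.75 -- addition is just integer addition
```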
Loved the explanation. Would have been great to have a few examples.
A suggestion: you could describe how this representation does not represent the entire range with equal granularity as would a fixed point format. That is, the spacing between representable numbers when the exponent is zero is small, and as the exponent goes up the spacing increases. I remember seeing a graph of this somewhere...
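To put some numbers on that, Python 3.9+ exposes the gap to the next representable double via math.ulp; the spacing grows with the magnitude of the number:

```python
import math  # math.ulp needs Python 3.9+

for x in (1.0, 1e6, 1e15):
    print(x, math.ulp(x))
# 1.0    2.220446049250313e-16
# 1e6    1.1641532182693481e-10
# 1e15   0.125   -- near 10^15 the neighbouring doubles are already an eighth apart
```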
IIRC in float you can accurately represent decimal numbers up to 2^53 - 1 / 10^308, after which you'll have to approximate.
Also, I've been wondering whether perhaps we should store rational numbers as two integers and let the software handle them the same way kids multiply fractions in school.
This definitely is an approach you can do, and it's been implemented (like the "fractions" module in Python)
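For example, with Python's fractions module the arithmetic stays exact (at the cost of memory and speed as numerators and denominators grow):

```python
from fractions import Fraction

a = Fraction(1, 10) + Fraction(2, 10)
print(a)                         # 3/10 -- exact, no rounding anywhere
print(a == Fraction(3, 10))      # True
print(Fraction(1, 3) * 3 == 1)   # True -- something no binary float can manage for 1/3
```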
I think you're mixing up two things:
A double can store *integers* accurately up to 2^53 − 1.
A double can store numbers (inaccurately) up to (2−2^−52) × 2^1023 ≃ 1.8×10^308.
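A quick check of both limits in Python (whose float is a 64-bit double):

```python
import sys

print(float(2**53) == 2**53)          # True  -- 9007199254740992 is stored exactly
print(float(2**53 + 1) == 2**53 + 1)  # False -- 9007199254740993 rounds back to 2**53
print(sys.float_info.max)             # 1.7976931348623157e+308, the largest finite double
```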
Floating point also breaks all arithmetic when you need to compare them to stuff.
In a Python console, try adding 0.1 to 0.2 and see if it equals 0.3.
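Which is why the usual advice is to compare floats with a tolerance rather than with ==, e.g.:

```python
import math

print(0.1 + 0.2 == 0.3)              # False
print(0.1 + 0.2)                     # 0.30000000000000004
print(math.isclose(0.1 + 0.2, 0.3))  # True -- relative tolerance of 1e-09 by default
```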
Nice! I'd always considered floating point as akin to witchcraft. Now I have a glimmer of understanding. Thank you!
Great video ❤
You never explicitly motivated the "floating" part of the name... how by changing the exponent the decimal point "floats around" back and forth in the value, such as:
1.2345 x 10^0 = 1.2345
1.2345 x 10^1 = 12.345
1.2345 x 10^2 = 123.45
1.2345 x 10^3 = 1234.5
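The same thing happens in binary; Python's math.frexp pulls out the significand and the power of two, so you can watch the exponent move while the significand stays in a fixed range:

```python
import math

for x in (1.2345, 12.345, 123.45, 1234.5):
    m, e = math.frexp(x)                 # x == m * 2**e with 0.5 <= m < 1
    print(f"{x} = {m * 2} x 2^{e - 1}")  # rescaled to the IEEE convention 1 <= significand < 2
```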
Literally posted right after the end of the lesson where I had an exam on this exact topic
Great explanation. Could you let us know what you used to create the beautiful animations?
0:30 I chose the same number as the code for opening my Minecraft house
Nice explanation!!! It took me a while to understand why the new 32bit floating point audio format had that huge dynamic range - mainly because I did not pay attention to the "floating point" part. Sound intensity perception is logarithmic - that's why we use dB to describe sound pressure and why we use log potentiometers in audio amps. 32bit floating point can represent an evenly spaced dB scale spanning more than 1500dB - whereas 32bit fixed point has a dynamic range of about 192dB (a quick back-of-the-envelope check of these figures follows after this comment).
Now ... could Laplace's Demon make a better audio format? The problem here is that even with all the rational numbers we still leave out an infinite number of irrational numbers - and even with floating point we end up leaving out an infinite number of rational numbers. According to the Existence and Uniqueness Theorem, only linear differential equation systems have algebraic solutions, and all we can do when we have nonlinear problems - besides trying to find algebraic solutions for the few cases that may have them - is to use numerical methods. Since there is always an intrinsic error in every measurement of the starting conditions, and since that error grows exponentially, we can't predict the future even if our system is an ideal double pendulum. That's why when I found a video titled "AI just solved the three body problem" I said "All in - show me your cards!" - and ... guess what ... I won everything! The system had improved the numerical solution (the prediction went a little further into the future) for the problem ... in two dimensions!
The reason why we believe we can create stable societies and find permanent solutions for all of humanity's problems comes from the fact that in the early days of Newtonian mechanics there were so many problems to solve, and even though computers were slow and a lot of linearization was needed, it gave impressive results. In the words of Elizabeth Anscombe:
The high success of Newton's astronomy was in one way an intellectual disaster: it produced an illusion from which we tend still to suffer. This illusion was created by the circumstance that Newton's mechanics had a good model in the solar system. For this gave the impression that we had here an ideal of scientific explanation; whereas the truth was, it was mere obligingness on the part of the solar system, by having had so peaceful a history in recorded time, to provide such a model. For suppose that some planet had at some time erupted with such violence that its shell was propelled rocket-like out of the solar system. Such an event would not have violated Newton's laws; on the contrary, it would have illustrated them. But also it would not have been calculable as the past and future motions of the planets are presently calculated on the assumption that they can be treated as the simple 'bodies' of his mechanics, with no relevant properties but mass, position and velocity and no forces mattering except gravity.
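The dynamic-range figures at the top of this comment can be checked in a couple of lines (numpy is assumed here just for the float32 limits; the ~1529 dB figure measures smallest normal to largest finite float32):

```python
import math
import numpy as np

fi = np.finfo(np.float32)
float_db = 20 * math.log10(float(fi.max) / float(fi.tiny))  # smallest normal to largest finite
fixed_db = 20 * math.log10(2.0 ** 32)                       # full range of 32-bit fixed point
print(round(float_db), round(fixed_db))                     # ~1529 dB vs ~193 dB
```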
0:44 Does anyone know the fourth number, top to bottom, left to right? I figured it's some kind of constant since the other ones are e, sqrt(2), pi, and the golden ratio.
Maybe it makes sense to call the first bit of the exponent the sign of the exponent -- equivalent, but easier to explain -- just as the first bit overall is the sign of the overall number, the first bit of the exponent is the sign of the exponent.
(Ok, maybe some people will not find it the same: the sign of the number is actually the sign of the mantissa, and is separated from the rest of the mantissa by the exponent.)
Sick video
I self-identify as a *supernormal number* and therefore cannot be represented by single-precision or double-precision. Without representation, I fortunately do not have to pay taxes.
Why is 1.089 placed next to other significant numbers? Is it a reference to the Gibbs phenomenon? 0:40
Yeah that's what I was confused by.
Loved it
Few Questions:
Why don't we just use the normal way of 2's complement to represent the exponent?
Similarly for Mantissa why don't we just use the usual binary representation, to avoid all the special cases?
Why not use a radix of 4 and then the exponent doesn't have to spill over into the next byte and you don't have to worry about an implied one or how to represent zero?
Always wondered how it worked but didn't want to read the full IEEE 754 spec
i love you man
Ah yes. I just did this in the exam today.
I always wondered why they chose this way of representation. Why not just use an integer and an exponent? For example, 0.402 would be stored as 402 and 3 (for 1/10^3). It would take more memory but it would be precise.
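That's essentially what decimal floating point does; Python's decimal module stores exactly that kind of coefficient-plus-exponent pair:

```python
from decimal import Decimal

d = Decimal("0.402")
print(d.as_tuple())          # DecimalTuple(sign=0, digits=(4, 0, 2), exponent=-3)
print(d + Decimal("0.1"))    # 0.502 -- exact, because everything stays in base 10
```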
Doubles are so good, once you learn about them you'll never want to go back to singles. In fact some programming languages understand this intuitively which is why they just use doubles by default. It's a total no-brainer: Doubles can represent objects many times larger than our universe and many times smaller than a particle, and more importantly doesn't think that 1+1=3
Perfect.
hey mr tree, i know you aint taking suggestions, but could we have a video on zero-knowledge proofs? thanks. nice vids btw, the lil bois are charming
0:50 I recognized the approximations for pi, e, phi and sqrt(2); but 1.089…? I couldn’t figure that one out…
Same
Cosmological Constant
Important to note that we can't even represent all Real Numbers using Rational Numbers - that's why it's impossible to predict the future of even deterministic non-linear systems.
7:35 √(-n) = i·√n (for n > 0)
Why don't we use more bits to make the numbers more accurate?
Great video, but I still have one question... What's 0.1 + 0.2?
It'd add the mantissas since their exponents are the same, hence 0.3
@@shkhamd In decimal scientific notation, sure. But how about in binary as floats? ;)
@@rzezzy1713 yeah, the binary form will have some error added to it by the conversion, so it will be something like 0.300000000xxxx where the xxxx digits are error from the mantissa conversion.
whats 1.089?
But what if the number is like 5.456788 and not 1.456788? Where does the 4 get stored?
You divide 5.456788 by 4, so you get
5.456788 = 1.364197 x 2^2
So the mantissa is 1.364197 and the exponent is 2.
For other values, you just divide by a power of 2 such that the mantissa ends up between 1 and 2.
And if the mantissa comes out less than 1, you multiply it by 2 and reduce the exponent by one.
It is important to notice that the exponents are powers of 2 and not 10, so each exponent increase multiplies by 2 and not 10, and vice versa.
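In Python, math.frexp does this normalization for you (it returns a significand in [0.5, 1), so one extra step rescales it to the IEEE-style [1, 2) range):

```python
import math

x = 5.456788
m, e = math.frexp(x)          # x == m * 2**e with 0.5 <= m < 1
mant, exp = m * 2, e - 1      # rescale so that 1 <= mant < 2
print(mant, exp)              # ~1.364197, exponent 2
print(mant * 2**exp == x)     # True -- scaling by powers of two is exact
```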
I'm making my own 16-uple, 32-uple, and 64-uple floats
Good video, but there is no answer to these questions: why are the sign, exponent and mantissa positioned that way, and why is the exponent not represented in 2's complement? Because with this representation, normalized floating point data can be compared (greater or less than each other) as if they were normal positive integers! There is no need for separate circuits for those operations.
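A quick empirical check of that claim in Python, for non-negative doubles (negative values additionally need a sign-bit trick, omitted here):

```python
import random
import struct

def bits(x):
    # reinterpret a double as an unsigned 64-bit integer
    return struct.unpack("<Q", struct.pack("<d", x))[0]

vals = [random.uniform(0.0, 1e12) for _ in range(10_000)]
assert sorted(vals) == sorted(vals, key=bits)   # integer order matches numeric order
print("bit-pattern order matches numeric order")
```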
0:50 Me looking at 1.089: Who are you? What are you doing with these celebrities?
Want to see the floating point scheme collapse?
Try to plot the following function in any software:
f(x) = (2/|x|)^(n+1/2) J_{n+1/2}(|x|)
where n is a parameter and J_n(x) is the Bessel function of the first kind.
The function f(x) looks a lot like a Gaussian, as you can check by picking n = pi... but let's see what happens when n = 4... surprise!
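If you want to try it, here's one way to evaluate it in Python (numpy and scipy assumed; whether and where the result degrades depends on how your library evaluates half-integer-order Bessel functions, so treat the commenter's claim as something to test, not a guarantee):

```python
import numpy as np
from scipy.special import jv

def f(x, n):
    ax = np.abs(x)
    return (2.0 / ax) ** (n + 0.5) * jv(n + 0.5, ax)

x = np.linspace(0.01, 20.0, 2000)   # avoid x = 0, where the prefactor overflows
print(f(x, np.pi)[:5])              # the smooth, Gaussian-looking case
print(f(x, 4.0)[:5])                # the case the commenter says goes wrong
```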
Why not get rid of the decimal point, and add it back at the end? (I'm sure they thought of that, and decided against it. But why?)
today I realised why it's called "floating" point
Are imaginary numbers handled at all?
Complex numbers are represented by two floating point numbers. Some programming languages have built-in routines for them making it possible to use them in expressions as with real numbers, in others both parts must be computed separately.
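Python is one of the languages with complex numbers built in; under the hood each one is just a pair of doubles:

```python
import cmath

z = 3 + 4j                        # stored as two floats: z.real and z.imag
print(z.real, z.imag, abs(z))     # 3.0 4.0 5.0
print(cmath.sqrt(-1))             # 1j
print((1 + 2j) * (3 - 1j))        # (5+5j) -- ordinary expressions just work
```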
nice
For a floating point explanation the fact that binary is used is unnecessary - if your goal is just to explain floating point without connecting it to computers.
:3
first before the bots :)
Does anyone else find the narrator's accent a little strange at the end of sentences?
I don't know, I'm just glad it was a real person.
Fixed point is better in every way.
End of story.
Floating point has ruined hardware and software specs.
Fix your lisp please
? What lisp?
I'd love it if you also made a video about Unums, because I don't understand them.
en.m.wikipedia.org/wiki/Unum_(number_format)