Fixed Point Decimal Numbers - Including Fixed Point Arithmetic

  • Published 28 May 2024
  • Floating point numbers are used a lot in computing, from 3D graphics to the latest AI models; they are everywhere! I want to make a video about floating point numbers, but before that I think it is important to cover fixed point numbers. So this video is about fixed point numbers, and the next one will be about floating point numbers!
    ---
    Let Me Explain T-shirt: teespring.com/gary-explains-l...
    Twitter: / garyexplains
    Instagram: / garyexplains
    #garyexplains
  • Science & Technology

COMMENTS • 29

  • @Chalisque • a month ago • +2

    An interesting historical example, which is still current in some sense, is how Knuth wrote TeX. If one reads "TeX: The Program", which takes you through the source (written using Knuth's idea of Literate Programming), you see that he gave special care to reproducibility, to the point that he wrote fixed point arithmetic routines requiring only a CPU with integer arithmetic. The idea is that any conformant implementation of TeX will produce exactly the same .dvi from the same source .tex, without relying on all CPUs doing the same floating point calculations the same way (consider that the old x87 used 80 bits internally, whereas most things now use 32 or 64 bits internally).
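
    A minimal sketch of the idea in C (illustrative only, not Knuth's actual routines, which take far more care over overflow and rounding): TeX-style dimensions are integers counting 1/65536 of a point, so products can be rescaled using pure integer arithmetic and are bit-identical on every platform.

```c
#include <stdint.h>
#include <stdio.h>

/* TeX-style "scaled" values: integers counting 1/65536 of a point. */
typedef int32_t scaled;
#define UNITY 65536 /* 2^16, the scaling factor */

/* Multiply two scaled values using only integer arithmetic.
   A 64-bit intermediate avoids overflow before rescaling. */
scaled mult(scaled a, scaled b) {
    return (scaled)(((int64_t)a * b) / UNITY);
}

int main(void) {
    scaled half = UNITY / 2;               /* 0.5  */
    scaled three_quarters = 3 * UNITY / 4; /* 0.75 */
    scaled p = mult(half, three_quarters); /* 0.375 */
    printf("%f\n", p / (double)UNITY);     /* prints 0.375000 */
    return 0;
}
```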

  • @shanehebert396 • a month ago • +2

    Back in the day, Carrier Command was 'famous' for having such smooth graphics (for its time) because it used fixed point arithmetic.

  • @sicnemelpor • a month ago

    I convert longints to ASCII decimal with a similar principle, but I first reserve a string buffer long enough and then loop in reverse, last digit to first, using the same principle of taking the remainder of division by 10 (on the Pico I use the hardware divider, which provides both quotient and remainder). Good video BTW!
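
    A portable sketch of that approach in C (the function name is made up; where the RP2040's hardware divider returns quotient and remainder together, plain C uses `/` and `%`):

```c
#include <stdio.h>

/* Convert an unsigned long to decimal ASCII by filling a buffer
   from the end: take the remainder of division by 10 to get the
   last digit, then divide and repeat. */
const char *ulong_to_ascii(unsigned long v, char *buf, size_t len) {
    char *p = buf + len;  /* start one past the end */
    *--p = '\0';
    do {
        *--p = (char)('0' + v % 10); /* emit last digit first */
        v /= 10;
    } while (v != 0 && p > buf);
    return p;
}

int main(void) {
    char buf[24];
    printf("%s\n", ulong_to_ascii(1234567890UL, buf, sizeof buf));
    return 0;
}
```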

  • @lesh4357 • a month ago • +1

    I remember, many years ago, delivering the bad news to a company who wrote their own in-house financial software.
    They had asked me why it would never tally correctly. I discovered they were using floating point.
    I told them about the rounding errors and the accuracy of computers in general (like the clock).
    They seemed shocked.
    I told them to decide on a lowest denomination (1 tenth of a penny, it turned out) and stick with it. Then use integers and adjust the displayed position of the decimal point accordingly.
    On division, they had to rewrite and use the algorithm you see on your bills, you know: "please pay £33.34 followed by two payments of £33.33", etc.
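
    A minimal sketch of that scheme in C (the helper name is hypothetical): money is held as an integer count of the lowest denomination, and division hands the leftover units out one per payment.

```c
#include <stdio.h>

/* Split a total held in whole pence into n payments.
   Integer division gives the base payment; the remainder is
   distributed one penny at a time, as on a utility bill. */
void split_bill(long total_pence, int n) {
    long base = total_pence / n;
    long extra = total_pence % n; /* pennies left over */
    for (int i = 0; i < n; i++) {
        long payment = base + (i < extra ? 1 : 0);
        printf("payment %d: %ld.%02ld\n", i + 1, payment / 100, payment % 100);
    }
}

int main(void) {
    split_bill(10000, 3); /* GBP 100.00 -> 33.34, 33.33, 33.33 */
    return 0;
}
```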

  • @chrisstott2775 • a month ago

    Fixed point is always used by accountants and the like because arithmetic operations are precise and controllable (especially in the case of division). Having worked in the foreign exchange field, I can say rounding issues are well understood and easy to deal with. Using floating point leads to funny looks from the bean counters when things do not add up.

  • @aleksandardjurovic9203 • a month ago • +1

    Thank you!

  • @jecelassumpcaojr890 • a month ago

    You skipped over the fact at 4:20 that the precision of the result of the multiplication is the sum of the precisions of the data (1 decimal place + 1 decimal place = 2 decimal places in this case, or 10 x 10 = 100 as the scaling factors)
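
    In C, with illustrative numbers: two scale-10 operands give a raw product at scale 100, so one divide by 10 restores the original scale.

```c
#include <stdio.h>

int main(void) {
    int a = 25;              /* 2.5 stored at scale 10 */
    int b = 12;              /* 1.2 stored at scale 10 */
    int raw = a * b;         /* 300: product is at scale 10 * 10 = 100 */
    int rescaled = raw / 10; /* 30: back at scale 10, i.e. 3.0 */
    printf("%d.%d\n", rescaled / 10, rescaled % 10); /* prints 3.0 */
    return 0;
}
```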

  • @godnyx117 • a month ago

    Is an 8-bit floating point number even possible, lol!
    Thanks for sharing knowledge Gary! You are awesome!

    • @GaryExplains • a month ago • +3

      It is indeed. It is 1 sign bit, 4 bits for the exponent, and 3 bits for the significand.

    • @godnyx117 • a month ago

      @GaryExplains What? Lol, 4 bits can only store up to "15". So, we are talking about 1 digit of precision and up to "1.5" and "-1.5" (if I understand how they work correctly, of course). Do you know any areas where this is practically used, and what is gained versus using 32 bits?

    • @GaryExplains • a month ago • +3

      It is used almost exclusively in AI. It is fast to process but gives just enough precision for neural networks. See developer.nvidia.com/blog/nvidia-arm-and-intel-publish-fp8-specification-for-standardization-as-an-interchange-format-for-ai/
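
      A sketch decoder in C for an IEEE-754-style 1-4-3 layout with exponent bias 7 (an assumption for illustration; the E4M3 format in the linked spec deviates at the top encodings, trading infinities for a maximum of ±448):

```c
#include <stdint.h>
#include <stdio.h>
#include <math.h>

/* Decode an 8-bit float: 1 sign bit, 4 exponent bits, 3
   significand bits, IEEE-754 style with exponent bias 7. */
double decode_fp8(uint8_t x) {
    int sign = (x >> 7) & 1;
    int exp  = (x >> 3) & 0xF;
    int frac = x & 0x7;
    double v = (exp == 0)
        ? (frac / 8.0) * pow(2, -6)             /* subnormal */
        : (1.0 + frac / 8.0) * pow(2, exp - 7); /* normal */
    return sign ? -v : v;
}

int main(void) {
    printf("%g\n", decode_fp8(0x38)); /* exponent field 7 -> 1.0 */
    printf("%g\n", decode_fp8(0x3C)); /* 1.5                     */
    printf("%g\n", decode_fp8(0x77)); /* 240, near the top range */
    return 0;
}
```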

    • @Chalisque • a month ago • +1

      You could do a 2-bit floating point number if you liked: 1 bit for the mantissa, one for the exponent. So you could represent e.g. 1, 1.5, 2 and 3 using those 2 bits. Completely useless, and no hardware would ever do this unless the AI guys come up with a use for numbers with so little precision. But floating point means you have a mantissa, needing at least one bit, and an exponent, again needing at least one bit.

    • @godnyx117 • a month ago • +1

      @GaryExplains You are awesome!!!! Thank you so much, Gary! Have a beautiful day!

  • @tonysheerness2427 • a month ago

    Thank you, I enjoyed that, as my maths is not that good.

  • @toby9999 • a month ago • +1

    Very nice.

  • @PaulSpades • a month ago

    Floating point was a mistake. Fixed point is always faster, because FP hardware has to be able to process both int and FP, and in most architectures you have additional int-only hardware.
    FP always has unreliable accuracy at all scales, and it gets worse the larger the magnitude, which makes equality checks very troublesome. FP is only useful when you have no idea of the scale of the numbers you're dealing with; otherwise you can always get more reliable precision using fixed point. FP is popular because it doesn't overflow easily.
    The examples in decimal are irrelevant for comparing fixed and floating point; the hardware ALU uses the binary encoding for both. The compiler and maths libraries display the binary number with whatever encoding scheme you're using, in whatever format you're after, with whatever precision is possible.
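
    The classic illustration of the equality problem, with the fixed point version alongside:

```c
#include <stdio.h>

int main(void) {
    /* 0.1 and 0.2 have no exact binary representation, so the
       "obvious" equality fails in floating point... */
    double sum = 0.1 + 0.2;
    printf("float: 0.1 + 0.2 == 0.3 ? %s\n", sum == 0.3 ? "yes" : "no"); /* no */

    /* ...but the same sum in fixed point (integer tenths) is exact. */
    int fixed_sum = 1 + 2;
    printf("fixed: 1 + 2 == 3 tenths ? %s\n", fixed_sum == 3 ? "yes" : "no"); /* yes */
    return 0;
}
```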

    • @axelBr1 • a month ago • +1

      I wouldn't say that floating point was a mistake. Until 32-bit numbers were easily used by a microprocessor (noting that mainframes always used 32 bits or more), no useful range of numbers could be stored as fixed point numbers; e.g. 16 bits would only give you a max of 64k. (Okay, a 16-bit IEEE float won't get you more than 64k, but you can get smaller numbers than 0.65536.)
      There have always been big-number numerical packages for handling large numbers with high precision, but presumably they use a lot of compute power, which in the old days was also limited. Floating point numbers are "good enough" for most people. But with 64 bits, maybe fixed point numbers will become popular, especially as IEEE 754 floating point numbers only terminate for rational numbers whose denominators are powers of 2; all other numbers, e.g. in our favourite base 10 (unless you insist on using Imperial units), are not representable to finite precision and are subject to rounding error.

    • @PaulSpades • a month ago • +1

      @axelBr1 Bit depth doesn't define how big a number you can represent, but what range and with what precision. Binary FP is based on IBM's implementation on their 360 machines. It was useful for scientific calculations at the time. Home computers had no need for it until graphics and GUIs needed more complex mathematical calculations - and porting professional software. Sadly this became entrenched after FP units were included in x86, but other architectures did well enough without it.
      The use of binary FP in programming has only increased since 3D graphics accelerators and 3D APIs - even though it's slower than fixed point. And since FP32 and FP16 are benchmarks for 3D acceleration/shader calculations, there's no way to get it the hell out of GPUs.
      Now, for scientific and business use we would have been better served with a decimal floating point encoding, but Intel was doing other things besides implementing acceleration for such a thing in hardware. Nvidia and ATI have taken turns including faster pipelines for int over the decades, but that doesn't translate to better performance when most software still uses FP.

  • @Garythefireman66 • a month ago

    Thanks professor. Mind blown 🤯

  • @johng7rwf419 • a month ago

    That took me back over 50 years!

  • @macko-dad • a month ago

    Just a quick production note.
    I don't know what editing app you use, but the green screen cut out is horrible.
    It's not rocket science and can be done more accurately. Otherwise great content.

    • @GaryExplains • a month ago

      Glad you liked the content.

    • @GaryExplains • a month ago

      Are you referring to the thumbnail or the video itself? If the former, then it isn't due to the green screen "cut out"; I intentionally put a green outline around myself to match the text.