I tried so hard recently to figure out how to round off numbers, so it's quite a coincidence that this video shows how to do that. For my purpose I just want three decimal places and an integer after some calculation. OpenGL wants a 32-bit floating point data type, but I want to use 16 bits because it will save some memory. In another example I need only 6 decimal places and an integer value between one and seven, and another data type with 6 decimal places and an integer value between 100 and 103, so this will save me some memory.
10:35 an observation I'd like to make is that 1111 is 15, and 0111.1000 is 7.5, which is exactly 15 / 2. Shifting right is akin to division by two, so it works out conveniently: you don't have to do any special math to do the division. 15 / 2 = 1111.0000 -> 0111.1000 = "7.5". Same for 2.1: 17 / 8 = 2.125. And 9.6: 1001101 = 9 x 8 + 5 = 77, and 77 / 8 = 9.625. If it's unclear where the / 8 is coming from, it's a shift right by 3, and 2^3 is 8.
4:02 IMO it would be clearer here to say that, since we're limited to 4 bits, we disregard carrying to the 4th (non-existent) bit -- rather than saying there is no carry. By the way, for those who don't fully understand signed numbers, what we're doing here is arithmetic modulo 2^(# of bits). The reason we choose the range [-8, 8) rather than (-8, 8] is because it allows the MSB to serve as a quick sign bit to tell you if the number is negative. We *could* choose 0000 to represent *any* multiple of 16, 0001 to represent any multiple of 16 plus 1, etc. With unsigned numbers, we've chosen everything to be zero times 16 plus the relevant amount; with the standard signed numbers we've chosen 0000-0111 to be the same, but 1000-1111 to be -1 * 16 + the relevant amount.
I worked out many of the things you show about fixed point math on my own as a kid aged 12 to 14 when I was doing Z80 machine code programming in the early '80s. I didn't know any of the proper terminology of course. I didn't do division though and I think I might've had some kind of more primitive biasing. Can't remember it all these days and I'm definitely less smart now (-:
Today's discovery was constexpr, though the lion kingdom doesn't know why the compiler can't discover that itself if the arguments are constant & it has the inline keyword we already had. Fixed point became a lot more relevant when the STM32F was discontinued.
about the plane-ray engine: if you have an upright viewpoint from the player only, i.e. upright planes, then you can use the 2D DDA grid to get the world-vertical-line 2.5D general-purpose 3D ray caster; but if you only use the plane-rays with a z-buffer (both versions) and entity sphere/ellipse bounding boxes, then you get an all-viewpoints tilt-camera 3D engine, assuming n entities
try this simple 3D (or 2D top-down): entity sphere bounding volume sorting (all entities kinda instanced); per-pixel ray casting intersection/culling per sphere bounding volume (sorted closest entity first, roughly in draw order); then the same point-sphere distance math for triangles (also any level of sphere bounding volume hierarchies, but it should be very simple & fast with a single-level entity bounding volume alone; maybe each level/arena/checkpoint with their own entity set, so that bloat is limited)
Great video. You've created a very simple Fixed Point class that does its job. Another cool thing to add would be a natvis file to see the proper floating point number in the VS debugger (by converting it to float for the debugger). I'm still impressed that the standard library doesn't have a fixed point class given how important that is for a deterministic cross-platform physics engine.
The problem is that most physics engines don't care about being cross-platform and deterministic, usually at best want to be cross platform. Not to mention the limited range and the decently problematic working space requirements mean higher cost multiplication.
@@donovan6320 Well, it's a trade off. Higher cost multiplication, but on the other hand, you can have cross-platform multiplayer games that only spam player inputs on UDP packets. Slightly more expensive physics in exchange for significantly less bandwidth and latency in certain types of multiplayer games.
@@h..h Fixed point faster than regular floats? As you can see in the video, fp division/multiplication isn't a single instruction like floats are, so they are slower. But, my experience with them is that they are not that much slower. And yes, you would want to use udp. Lost packets aren't a problem when you are constantly spamming them (one of them will arrive. If none arrive, then you got no internet connection). The idea of reliable-udp is that you keep spamming the same packet over and over again until the player gives you a confirmation that it was received. It uses more bandwidth, but latency is much smaller.
@@h..h I'm saying that fixed point math isn't a single instruction for multiplication/division. Floats are single-instruction. Both are accessing the registers. Fixed point math isn't faster than floating point, otherwise everyone would use it instead. And TCP is much slower than UDP because it needs a confirmation that the packet was received, which increases the latency unnecessarily. It's best to just keep sending non-stop, at least 15x per second, until the other user confirms the packet was received. This is a common networking strategy, and we have libraries like ENet and RakNet that do that for us.
@@h..h the difference is that on TCP we only send the packet once and wait for the confirmation. We only send it again after we confirm that the user didn't receive it, so there's a lot of delay between attempts. On reliable-UDP, we don't wait for the confirmation to send it again, so there's a lot less delay between attempts (in exchange for a lot more bandwidth being used, since we need to send the same packet lots of times).
back in the 80s I did this to write a 3D rendering engine on my BBC Micro (and yes, I was trying to see if I could write an Elite clone). At the time this was the only solution, because there was no floating point support and it was fast. It would be a great demonstration of just how good modern floating point is if you were to change your 3D code to use fixed point and do some timings. I know this is a lot of work because you would also have to write a trig library, but it might give the young players some insight into how much we take for granted these days. Great video btw.
The problem is that they're just not very good for most modern 3D applications. The sheer amount of register space and required temporary bit space makes it not worth it, and floats with hardware acceleration are pretty much as fast and more space efficient.
@@donovan6320 agreed, I was thinking in terms of an academic exercise. I remember at university, a post doctorate was working on a system of plug and play real number representations, so you could switch between hardware floating point, rational and interval etc. On the fly.
Back in the '80s when we used fixed point for this stuff we used lookup tables for trig. Radians instead of degrees. Normalize your input to the range of your table and always scale the result down, never up. Sin and Cos can use the same table of course. My stuff never needed Tan or any of the Arc* stuff.
@@andrewdunbar828 I am aware. The problem is that for modern games, those techniques, either in part or in whole, would often not work well or would be limiting. The kinds of things expected from 3D, even '99-era 3D, require higher precision in lighting.
Now I would like to see an application and discussion of this idea for the fast Fourier transform. And also dual numbers x + \eps y, to compute the derivative of algorithms and possibly the error.
Great videos. I just have one piece of advice: you could sync the audio coming from the mic near you with the videos where you appear, because the audio from the mic near you is better than the camera's mic.
Will you do floating point numbers next? Like the IEEE 754 standard, mantissa, etc? I studied this in college but never really understood the concept. Your videos really solidify these concepts. Thank you!
I think that everyone had this idea before me, but... I think I have a way to do the multiplication without needing a bigger type. Note that the following should work for 16_16, 2_30 or any other distribution, and should work for bigger and smaller types. Let's assume we store things in 32 bits. We want to multiply X and Y. Let's say Mx represents the 16 most significant bits of X (followed by 16 zeros), Lx the 16 least significant bits, My the 16 msb of Y (followed by 16 zeros) and Ly the 16 lsb of Y. Then:
X = Mx + Lx
Y = My + Ly
X * Y = (Mx + Lx) * (My + Ly) = MxMy + MxLy + LxMy + LxLy
Now we have 4 multiplications, but each component has only 16 bits with information. We can shift them so they occupy the 16 lsb before the multiplication. I didn't find (yet) an equivalent method for division.
Awesome video! I enjoyed watching it. Just wondering, what platform did you use to draw? Doesn't seem like a touch screen! Is it one of those Wacom sketch pads?
I love how you teach, I really appreciate it, and I'd like to learn from you how to create a calculator GUI in C++ with something like wxWidgets, using the empty Windows desktop app. Thank you so much, sir. You're my favourite.
I love fixed points; the simplicity of the concept is really cool to me. Every time I have an excuse to use them in a project I gladly do so. I wish they were supported as native types in programming languages instead of having to set them up. Right now I'm using them for an open world platformer where there are no load screens, so the coordinates for the player character can get very large. Basically I don't want floats getting less precise the further away from (0,0) the player is, since that might introduce wacky movement glitches that only happen in certain areas of the world.
Yes, in Minecraft there is the same problem. When you are at the edge of the world, you move like 1 meter every few seconds. It looks like you have 1 FPS.
I'm working on a library of real numbers, infinitesimally small and big, with SIMD and without SIMD. Without looking at how CS people and architects do it; I use only my mathematical package and the basics of C/C++.
@26:23 I always got a compile error when trying to declare a static function: "a static .. must be relative to a specific object". From this video, you are using a specific object. Congratulations!
would it not be more efficient to store a "signed" number as a struct containing a bool and an unsigned integer? then use logic to determine if we should swap the sign
How would you do this if you didn't have the double option? Would we create functions like FP(value in front of decimal, value behind decimal), where both get an integer? I am asking because the only reason I'm interested in this is cross-platform determinism.
12:10, something confuses me about this representation: if we've rounded 0.625 up 1 bit to represent 0.1, then how do we know this doesn't represent 0.125 instead? Having come close to a full bignum implementation using just pointers to arbitrarily sized arrays, I'm fully aware of the biased exponent, sign, NaN, Infinity, -0, and the mantissa that has the most significant bit chopped off since it's assumed to be there 9 times out of 10, but the rounding bit is where I struggled to get it right. Could you elaborate on this part please?
To be honest I think fixed point numbers are better than floating point numbers. With a big float, say something like 4,000,000,000,000, the next representable number is something like 4,000,000,000,004; whereas around 4, the next number is 4.0000000000000000525 or something like that. But when we are using fixed point numbers there is no such thing. You just set, for example, 4 bits for the fractional part: after 4, the next number is 4.0625, then 4.1250. For me there is more consistency.
Instead of using a user-defined literal, I would have gone with simply overloading operator* between fixed and double, or even between 2 fixed, and let the compiler implicitly convert the double.
Hi hi Dear Javidx9, I'm Sheng here, from Malaysia. You may not know me, but I super super super love your channel. I'm a dental technician, and in these few years I want to push myself to be better in the dental line. So I'm trying to learn C# and C++ to improve myself, and trying to make a small CAD software to help myself. Question 🙋♂️: I have no idea where and how to start a CAD software. May you give me some advice?? Thank you so much, but don't worry, just reply if you are truly free.
Hello David, a bit of an off topic for the video and the channel maybe, but given your coding experience, I'm curious to know what do you think of the functional paradigm? A lot of people (particularly in academia) think it's the "last programming paradigm" and will replace OO. Others think it'll only complement it.
Hey Javid! Your videos are the best! I have a question though, On your RayCastWorld video, I seem to have a problem when I try to texture the walls with sprites. It gives me an error message on the olcPixelGameEngine header file. It says, "Exception thrown: read access violation. THIS was nullptr." What do I do? Thanks!
that means the this pointer is pointing to null; the this pointer refers to the current object. Open the olcPixelGameEngine header file and navigate to the line where the exception is thrown. Probably you need to override some base class in your class implementation and assign it some values.
I struggle to think of a good use for IEEE floats... ever. High levels of precision: no, rounding errors. Large numbers: not really, rounding errors. Scientific computing: no, as above. Speed: no, software is slow, hardware is slower than fixed and massively bulks up the math unit, and slight rounding issues have been present per manufacturer. Aside from being convenient to use (excluding a broken ==) due to compiler support, I struggle to think of uses. It just feels like a case of everyone doing it and having to be compatible. Maybe 3D rendering, but high precision is still hard to do there, as math operations (like matrix multiplication) destroy precision and lead to funny bugs / clipping.
@@rsa5991 on one cpu yes, but across several the FPU actually behaves differently sometimes. If I remember correctly the game Trackmania uses fixed point because of this.
Discord rookie here... You said you were putting the code on the Discord, but alas I couldn't find it. Not sure if I am looking in the right place, so does anybody have any tips on finding it? Just looking to commit these concepts to permanent storage by storing the code example.
Hello friend. I was interested in modifying the textures for a PS2 game, The Getaway: Black Monday. According to what I researched, to do this I would have to extract the textures from the game and then modify them, but there are no videos explaining how to take the extracted and modified texture and import it back into the game. Well, I saw a comment on a video saying that the 010 Editor program is used to import the texture that was exported and modified. Is this true? If so, could you make a tutorial explaining how to import a PS2 game texture that was exported to be modified? I would even pay money for a tutorial of yours explaining how to do this with the game I wanted to modify.
can you please make a video on how to access low level hardware without including any library? like accessing the sound card, or the graphics card to write frames directly to the framebuffer
virtually impossible. any modern os will not allow you to do that; only a driver can directly access hardware. any other program must use an appropriate API like winapi, sdl, directx etc, which in turn will talk to the driver and do the operations. now, if you want to learn how to create a driver for a device, that's a whole different huge can of worms. but you will still rely on the os api's for other things you might need like memory allocation, threads, interrupts and stuff like that. stay with the os's API's.
It's quite difficult without actually writing an OS these days. Also many hardware drivers are still proprietary. If course it is possible, but very hard on a modern day desktop.
@@giornikitop5373 from watching Ben Eater videos, let me try to answer this. In theory I could say that in general you need to reverse engineer the source code of the driver. You ask yourself where the virtual memory address (page memory address) is, and then you do some mathematics to find the physical address of the hardware. Then figure out how many possible operations the output device/hardware can do. If you have an oscilloscope you could try to reverse engineer from there: first there is some sort of initial buffer that tells you what the manufacturer of the device is, etc. After that you need to distinguish the opcodes, data, and addresses. For example, if the graphics card can only do things like read and write, we can figure out its enumeration constants, maybe between 0 and 1. And then if you know that, you can basically find a C++ compiler for a particular CPU. Although I didn't do this yet, I really hope I can. It's probably much more work than this, I don't know.
@@abacaabaca8131 um, no. source code does provide lots of info, but there are always binary blobs/firmware that are proprietary and thus unknown. if there are gpu drivers that are 100% open source, then ok. now, for something simple, as @Just A cherry OnTop asked, like accessing the framebuffer, that might be enough. I haven't watched Ben Eater's videos but I'm guessing he reversed some simple device or cpu that was more or less already known, like the 6502, z80 etc. reversing a modern gpu from zero is a whole different level of pain. let me put it this way: unless you're a gpu engineer of that level, know what you're looking for and have high-tech equipment, your chances are practically zero.
29:38 your circle circumference is 35 and a bit in 4.4 format, but in 4 bits you cannot represent 35, is the compiler cheating and using a bigger type during constexpr evaluation?
Lol great observation! Absolutely gonna investigate this!
Aren't all the registers on modern CPUs at least 32 bits? So it wouldn't necessarily overflow.. and maybe the intellisense is not masking the bits so it reads it as a 32-bit int? 🤷
With the help of some nerdy discordians, it's been identified that intellisense may be playing a few tricks! I might do a follow up about this.
@@javidx9 thanks!
@@javidx9 Please do!
Thanks gigachad
My favourite explanation of 2's complement is treating the most significant bit as having a "minus" sign. In this video, instead of the weights 1,2,4,8, let's think of 1,2,4,(-8). Then 1001 becomes -8+0+0+1=-7 indeed. It's easy to see that -8 = 1000 is the biggest negative number in that representation and 7=0111 is the biggest positive number.
To add to this, my favorite way to count negative in binary, if I don't want to use the 2's complement algorithm (for example, I'm counting on my hands or just inspecting a number visually), is to assume all 1s (e.g., 1111) is -1, and then count down from there with zeros. So 1110 is -2, 1101 is -3, 1100 is -4, and so forth, up to 1000 which is -8. It's almost exactly like counting up in binary, except you start on -1 instead of 0.
@@NDAtheist An even nicer way, I think, is to figure that if you start with *any* number whose e.g. 16 lowest bits are all zeroes and subtract one from it, the result will have ones in the 16 lowest bits. If one starts with a number where the million lowest bits are all zeroes and subtracts one, one will get a number whose bottom million bits are all ones. If one starts with a number where an unbounded number of the lowest bits are zeroes and subtracts one, one will get a number with an unbounded number of ones in the lowest bits. The upper bit in e.g. 16-bit two's complement isn't "negative", but instead represents the state of that bit and all bits to its left.
A rare YouTube educational video with no BS and solid content - Thanks!
I absolutely love these basic dives, wrapping my head around conversions and number spaces was always tricky. I knew how to do it, but never really understood what I was doing. And that always doesn't sit right for me. Thanks!
This is great, they never taught me this in school. I ignored my professor's recorded lectures and learned C++ from your videos anyway.
Well give them a chance! There's not a great need for fixed point numbers in reality these days, so I can understand it not being on a syllabus. Cheers!
back when the Intel 80286 was the mainstream processor I wrote 32-bit fixed point math functions (in assembly language) and then implemented matrix math functionality on top of that - it was embedded into a very prominent desktop publishing software package and used to process and render graphical image objects. Turned out to be a superb solution for a time when hardware floating point support in CPUs could not be assumed (nor SIMD)
We adore you teacher you are the best no kidding
Thanks Kween!
I have learned so much from these type of videos on this channel, the way he breaks everything down and puts samples of code with it really teaches the fundamentals. If he did the best optimisation and everything it would be way too confusing and only the experts would be able to follow along. Please keep these videos coming!!!
I hark from the days of 8-bit processors that had no native ability at all to handle decimals, multiplication or anything much in the way of maths, so it was interesting to get an understandable explanation of how programmers might have got around such limitations, though I imagine for the most part they used various approximations to avoid decimals altogether, look-up tables, that sort of thing, it was amazing really what people were able to achieve. It blows my mind that modern processors can do these operations natively, millions of times a second.
Well that's exactly it. The ingenuity of the programmers before us to make the world work with integers approximating everything is incredible.
@@javidx9 I do know that the original Playstation didn't have a maths coprocessor and couldn't handle floating point, which is why the graphics constantly shifted around the way they did, because they were constantly using approximated/rounded numbers. As I was just a kid using a Commodore 64 I didn't really know much, but what I do remember doing is programming bouncing ball effects by generating a sine function from BASIC and poking the rounded values into memory and just using them as a lookup table (in Assembly) for the Y coordinates. Pretty sure that's still how demo scene creators do most of the really clever effects on the 8-bits - using pregenerated look-up tables, it's always going to be the fastest way.
Ah yes, I remember implementing long multiplication in assembler because my controller had no hardware multiplier nor enough memory to load the library's multiplication routine.
The PS1 issue you noticed is not because of fixed point math. It was because of the fact that the PS1 used a simple affine mapped texturing as opposed to perspective correct texture mapping.
@@richardericlope3341 that's why the textures shift and warp, but the 'vertex-snapping' he's referring to **is** because the GTE used fixed-point maths i'm pretty sure - or at least contributed to it.
i really appreciate this, currently learning x86 assembly and now I understand why division is handled differently.
Love these videos. There’s a lot of interest in coding for older hardware- which rarely has fast or accurate FPUs, if they have it at all. Good resources like this are priceless imo.
Fantastic approach to teaching binary, floating point errors and fixed point math. As a mentor myself, I try hard to find ways to approach technical subjects in order to teach them. I will be using your approach for this topic. Thanks.
THIS is what I expected to see in college when I started going to math lessons. Your brain, man, it's made to understand numbers.
Javid, you're really cool. I enjoy watching you explain concepts in C++.
I plan to use fixed point numbers for increased determinism when synchronizing variables across a multiplayer game, so that there is less drift and less server reconciliation needed. Pretty much the perfect video for what I need
As long as you're using the same code on both ends, floats should be completely deterministic.
@@rsa5991 And with rigid bodies, where the physics system does its thing?
@@scottalexgray As long as you do the same operations on the same values in the same order - the result should be exactly the same.
And if you don't do that - fixed won't help you.
@@rsa5991 Mmmm, my tests would say otherwise even when everything is in the exact same order (same function for determining the players next position)
@@scottalexgray That means something changes between the runs. Do you use a constant time delta, or calculate it from real elapsed time?
Afaik you could just write d*(1<<n)
You can get around using a twice as large integer type as an intermediary during multiplication if you do something like x*y = (((x>>n)*(y>>n))<<n) + (x>>n)*(y & m) + (x & m)*(y >> n) + ((x & m)*(y & m))>>n, where n is the number of bits after the decimal sign, and m is a mask looking like 0b00001111 that selects the bits after the decimal sign. This is useful if you want to use your largest integer type as the base type.
I'm gonna give you a premature thank you for kickstarting my software engineer career. I haven't gotten a job yet...but i'm tryin!
do some AI stuff
This is unlikely sorry! My academic background is Machine Learning, and I'm done with it.
@@javidx9 Ok thanks
Thank you for the video. I was looking at articles trying to understand what fixed point is and how it works, and this video was the best for understanding it. At least for me.
I love the way you explain, it just makes topics that would feel mundane after a while somehow interesting all the way through!
Oh god, I wait impatiently for these. I am new to C++ and the Back to Basics series is very helpful. Thank you
Glad to hear from you again!
Best teacher ever 🖤
Greetings from Brazil
Thanks Rodrigo!
@@javidx9 can you send me some tip of slopes in AABB collision? Like Megaman X
Also, I'm doing a Bomberman like with olc ^^
When going from the bigger format (like after multiplication) back to the standard one, it is preferable to round correctly. In the case of 4.4, which after multiplication becomes 8.8, that means adding binary 0.00001 (half of the target LSB) and only then truncating. That's not the famous banker's rounding, but it seems OK for all practical purposes.
It might seem strange to care that much about the last bit. Come on, if it really matters then you're using too few bits, so use 32 bits instead of 16, or 64 instead of 32.
But my experience shows a huge difference, not just 1 bit. The point is: proper rounding leaves us with random errors which tend to cancel out in lengthy computation, or at least give zero mean error plus some random noise. But truncating instead of rounding generates SYSTEMATIC errors which don't cancel out at all, and the final result is biased. Sorry for the "rant", but I found this out the hard way...
i tried so hard recently to figure out how to round off numbers..
it's such a coincidence that this video tells me how to do that..
for my purpose i just want three decimal places and an integer..
after some calculation i need 16 bits to represent my number..
opengl needs a 32-bit floating point data type, but i want to use 16 bits instead because it will save some memory..
in another example i need only 6 decimal places, with an integer value between one and seven..
and another data type with 6 decimal places and an integer value between 100 and 103..
so this will save me some memory
10:35 an observation I'd like to make is that 1111 is 15, and 0111.1000 is 7.5, which is exactly 15 / 2. Shifting right is akin to division by two, so it works out conveniently in that you would have to do no special math to do the division. 15 / 2 = 11110000 -> 01111000 = "7.5" Same for 2.1. 17 / 8 = 2.125. And 9.6, 1001101 = 9 x 8 + 5 = 77, 77 / 8 = 9.625. If it's unclear where / 8 is coming from, it's a shift right by 3, and 2^3 is 8.
Been subbed to you for a long time, and I'm just now seeing this video, 3 weeks later.
4:02 IMO it would be more clear here to say that, since were limited to 4 bits, we disregard carrying to the 4th (non-existent) bit -- rather than saying there is no carry.
By the way, for those who don't fully understand signed numbers, what we're doing here is arithmetic modulo 2^(# of bits). The reason we choose the range [-8, 8) rather than (-8, 8] is because it allows the MSB to serve as a quick sign bit to tell you if the number is negative. We *could* choose 0000 to represent *any* multiple of 16, 0001 to represent any multiple of 16 plus 1, etc. With unsigned numbers, we've chosen everything to be zero times 16 plus the relevant amount, and with the standard signed numbers we've chosen 0000-0111 to be the same, but 1000-1111 to be -1 * 16 + the relevant amount.
I worked out many of the things you show about fixed point math on my own as a kid aged 12 to 14 when I was doing Z80 machine code programming in the early '80s. I didn't know any of the proper terminology of course. I didn't do division though and I think I might've had some kind of more primitive biasing. Can't remember it all these days and I'm definitely less smart now (-:
Thank You David. Always Learn something new. I enjoy your videos. Back to basics.
I use the fixed point numbers a lot in my design of sound synthesizers and audio effects in C/C++. FP-s are really cool :)
Today's discovery was constexpr, though the lion kingdom doesn't know why the compiler can't discover that itself if the arguments are constant & it has the inline keyword we already had. Fixed point became a lot more relevant when the STM32F was discontinued.
about the plane-ray engine: if you have an upright viewpoint from the player only, i.e. upright planes, then you can use the 2d DDA grid to get the world-vertical-line 2.5d general-purpose 3d raycaster. but if you use the plane-rays with a z-buffer (both versions) and entity sphere/ellipse bounding boxes, then you get an all-viewpoints tilt-camera 3d engine, assuming n entities
try cube map global illumination plane-ray casting (6x pyramid cameras udlrbf-directions per pixel)
try this simple 3d (or 2d top-down): entity sphere boundary volume sorting (all entities kinda instanced), per-pixel ray casting intersection/culling per sphere boundary volumes (sorted closest entity first about in order draw), then same point sphere distance math for triangles (also any level of sphere boundary volume hierarchies, but it should be very simple & fast with entity single level boundary volume alone, maybe each level/arena/checkpoint with their own entity set, so that bloat is limited)
Great video. You've created a very simple Fixed Point class that does its job. Another cool thing to add would be a natvis file to see the proper floating point number in the VS debugger (by converting it to float for the debugger). I'm still impressed that the standard library doesn't have a fixed point class given how important that is for a deterministic cross-platform physics engine.
The problem is that most physics engines don't care about being deterministic and cross-platform; at best they usually want to be cross-platform. Not to mention the limited range, and the decently problematic working-space requirements mean higher-cost multiplication.
@@donovan6320 Well, it's a trade off. Higher cost multiplication, but on the other hand, you can have cross-platform multiplayer games that only spam player inputs on UDP packets. Slightly more expensive physics in exchange for significantly less bandwidth and latency in certain types of multiplayer games.
@@h..h Fixed point faster than regular floats? As you can see in the video, fixed-point division/multiplication isn't a single instruction like it is for floats, so it's slower. But my experience with fixed point is that it's not that much slower.
And yes, you would want to use udp. Lost packets aren't a problem when you are constantly spamming them (one of them will arrive. If none arrive, then you got no internet connection). The idea of reliable-udp is that you keep spamming the same packet over and over again until the player gives you a confirmation that it was received. It uses more bandwidth, but latency is much smaller.
@@h..h I'm saying that fixed-point multiplication/division isn't a single instruction, while for floats it is. Both are accessing registers. Fixed-point math isn't faster than floating point, otherwise everyone would use it instead.
And TCP is much slower than UDP because it needs confirmation that the packet was received, which increases latency unnecessarily. It's best to just keep sending non-stop, at least 15x per second, until the other user confirms the packet was received. This is a common networking strategy, and we have libraries like ENet and RakNet that do that for us.
@@h..h the difference is that with TCP we only send the packet once and wait for the confirmation. We only send it again after we confirm the user didn't receive it, so there's a lot of delay between attempts. With reliable UDP, we don't wait for the confirmation to send it again, so there's a lot less delay between attempts (in exchange for a lot more bandwidth being used, since we need to send the same packet lots of times).
back in the 80s I did this to write a 3d rendering engine on my BBC Micro (and yes I was trying to see if I could write an Elite clone). at the time this was the only solution because there was no floating point support and it was fast. it would be a great demonstration of just how good modern floating point is. if you were to change your 3d code to use fixed point and do some timings. i know this is a lot of work because you would also have to write a trig library. but it might give the young players some insight as to how much we take for granted these days. great video btw.
The problem is that they're just not very good for most modern 3D applications. The sheer amount of register space and required temporary bit space make it not worth it, and the fact that floats with hardware acceleration are pretty much as fast and more space efficient.
@@donovan6320 agreed, I was thinking in terms of an academic exercise. I remember at university, a post doctorate was working on a system of plug and play real number representations, so you could switch between hardware floating point, rational and interval etc. On the fly.
@@theforthdoctor7872 interesting, sounds rather neat.
Back in the '80s when we used fixed point for this stuff we used lookup tables for trig. Radians instead of degrees. Normalize your input to the range of your table and always scale the result down, never up. Sin and Cos can use the same table of course. My stuff never needed Tan or any of the Arc* stuff.
@@andrewdunbar828 I am aware; the problem is that for modern games, those techniques, in part or in whole, would often not work well or be limiting. The kinds of things expected from 3D, even '99-era 3D, require higher precision in lighting.
Now I would like to see an application and discussion of this idea for the fast Fourier transform. And also dual numbers x + εy, to compute the derivative of algorithms and possibly the error.
Need to watch it a few times and then implement it, Thank you very much !
King of magicians in the programmers' world, please accept my endless respect
Javid is real mega gigachad
Great videos. I just have one piece of advice: you could sync the audio from the mic near you with the shots where you appear on camera, because the audio from the mic near you is better than the camera's mic.
Good Job Mr. Lone Coder.
Hello, good content. A video about probabilities, random number generation / content generation would be great. Thanks!
Check out my Programming The Universe video, I create an entire universe out of random numbers
“I’m going to throw the code on the discord so that you can steal it for your homework.” 😂
Will you do floating point numbers next? Like the IEEE 754 standard, mantissa, etc.? I studied this in college but never really understood the concept.
Your videos really solidify these concepts. Thank you!
See this video by Simon Dev: ua-cam.com/video/Oo89kOv9pVk/v-deo.html
Best video ever explaining fixed point, big thanks! Have you thought about making a video explaining floating point?
I think that everyone had this idea before me but ... I think I have a way to do the multiplication without needing a bigger type.
Note that the following should work for 16_16, 2_30 or any other distribution, and should work for bigger and smaller types.
Let's assume we store things on 32bits. We want to multiply X and Y. Let's say Mx represents the 16 most significant bits (followed by 16 zeros), Lx the 16 least significant bits, My the 16 msb of Y (followed by 16 zeros) and Ly the 16 lsb of Y.
X = Mx + Lx
Y = My + Ly
X * Y = (Mx + Lx) * (My + Ly) = MxMy + MxLy + LxMy + LxLy
Now we have 4 multiplications, but each component has only 16 bits of information. We can shift them so they occupy the 16 lsb before the multiplication.
I didn't find (yet) an equivalent method for division.
Thank you very much for this video.
Yayyyyyyyy he’s backkkkkkk
OH MY HE STILL EXISTS!
Wish C had a widely adopted standard for fixed-point and saturation arithmetic.
I love C.... But ended up going back to Python haha
Thank you for your hard work!
Awesome video! I enjoyed watching it. Just wondering, what device did you use to draw? It doesn't seem like a touch screen! Is it one of those Wacom sketch pads?
Thanks! Yeah, a Wacom Intuos 4
Fixed Point Numbers are very useful if you are writing a DSP filter.
the man is back!
I love how you teach, I really appreciate it. I'd also love for you to show how to create a calculator GUI in C++, like with wxWidgets, using an empty Windows desktop app. Thank you so much, sir. You're my favourite
Thank you so much for the great content you're putting out.
God bless!
Really nice video, I would've never thought to learn about this otherwise. Thanks :)
I love fixed points, the simplicity of the concept is really cool to me. Every time I have an excuse to use them in a project I gladly do so. I wish they were supported as native types in programming languages instead of having to set them up. Right now I'm using them for an open-world platformer with no load screens, so the coordinates of the player character can get very large. Basically I don't want floats getting less precise the further the player is from (0,0), since that might introduce wacky movement glitches that only happen in certain areas of the world.
Yes, Minecraft has the same problem. When you are at the edge of the world, you move in jumps of about 1 meter. It looks like you have 1 FPS.
I also would like to have fixed point numbers as native.
@@MinecraftWitaminaPL Doesn't Minecraft reset the transform position every time it overflows, like every other game?
@@Unit_00 I just think Minecraft is a good example because the world has almost no boundaries and you can use the tp command to see the effect
Thank you so much!👍
That vimto can probably has its own ecosystem inside of it by now
I love Basics
still patiently waiting for olc codejam 2021 showcase
I'm working on a library for real numbers, infinitesimally small and large, with SIMD and without SIMD;
without looking at how CS and architecture people do it, I use only my mathematical background and the basics of C/C++.
@26:23
I always got a compile error
when trying to declare a static function..
"a static .. must be relative to a specific object"
from this video I see you are using a specific object
congratulations
Nice! Thank you
Could you cover casting at some point? I feel it's a bit of a strange and foreign subject to me when I try to work with this language.
would it not be more efficient to store a "signed" number as a struct containing a bool and an unsigned integer? then use logic to determine if we should swap the sign
The "then use logic" part of your question answers itself regarding efficiency 🙂
Hey man, what are your thoughts on Linux? Do you use it?
I use it occasionally. I prefer Windows.
how would you do this if you didn't have the double option? would we create functions like FP(value in front of decimal, value behind decimal), where both take an integer?
I am asking because the only reason i'm interested in this is because of cross platform determinism.
12:10, something confuses me about this representation: if we've rounded 0.625 up 1 bit to represent 0.1, then how do we know this doesn't represent 0.125 instead? Having come close to a full bignum implementation using just pointers to arbitrarily sized arrays, I'm fully aware of the biased exponent, sign, NaN, Infinity, -0, and the mantissa that has its most significant bit chopped off since it's assumed to be there 9 times out of 10, but the rounding bit is where I struggled to get it right. Could you elaborate on this part please?
Easy. You can't. You lose information (pigeon hole principle)
If you want to know this doesn't represent 0.125, you need more resolution on the decimals.
Long time pal
Can you do a video on floating point? That would nicely complement this video.
Very nice. If you made the T value a private member it would be perfect as far as I'm concerned.
What's the issue with having it public?
I have done those steps
Thanks
To be honest I think fixed-point numbers are better than floating-point numbers. With a big float, for example something like 4,000,000,000,000, the next representable number is 4,000,000,000,004, while next to 4 the next number is something like 4.0000000000000000525. With fixed-point numbers there is no such thing: you just set, for example, 4 bits for the fractional part, and after 4 the next number is 4.0625, then 4.1250. For me there is more consistency.
Instead of using a user-defined literal, I would have gone with simply overloading operator* between fixed and double, or even between two fixed and letting the compiler implicitly convert the double
Hi hi dear Javidx9
I'm Sheng, from Malaysia. You may not know me, but I super super super love your channel.
I'm a dental technician. In the last few years I've wanted to push myself to get better in the dental line, so I'm trying to learn C# and C++ to improve myself, and I'm trying to make a small CAD program to help myself.
Question 🙋♂️
I have no idea where and how to start a CAD software. Could you give me some advice?
Thank you so much, but don't worry, just reply if you are truly free.
unbelievable
Hello David, a bit of an off topic for the video and the channel maybe, but given your coding experience, I'm curious to know what do you think of the functional paradigm? A lot of people (particularly in academia) think it's the "last programming paradigm" and will replace OO. Others think it'll only complement it.
UHU :) ... always a pleasure to watch Your videos. Regards
would you consider any videos on SDL2? love the content!
Thanks! It's unlikely since I have my own graphics interface that I prefer to use. Nothing wrong with SDL2, just not for me.
@@javidx9 makes sense! thank you for the reply, im starstruck.
Thank U very much 🙏🙏🙏🙏
nice video!
Hey Javid! Your videos are the best!
I have a question though, On your RayCastWorld video, I seem to have a problem when I try to texture the walls with sprites.
It gives me an error message on the olcPixelGameEngine header file. It says, "Exception thrown: read access violation. THIS was nullptr."
What do I do?
Thanks!
that means the 'this' pointer is pointing to null.. the 'this' pointer refers to the current object... open the olcPixelGameEngine header file and navigate to the line where the exception is thrown..
probably you need to override some base class member in your class implementation and assign it some values
I sort of just started to program with C++ so I am not sure how to do that. But thank you!
Hop on the discord, much easier to post code examples than via YT. Though sounds like you are accessing something that doesn't exist.
WOW!!!
Ok, so I found out I put the image in the wrong file. So....that explains it.
Thank you so much!
awesome
I struggle to think of a good use for IEEE floats... ever.
High levels of precision: no, rounding errors.
Large numbers: not really, rounding errors
Scientific computing: no, as above
Speed: no, software is slow, hardware is slower than fixed and massively bulks up the math unit. Slight rounding issues have been present per manufacturer.
Aside from being convenient to use (excluding a broken ==) due to compiler support, I struggle to think of uses. It just feels like a case of everyone doing it and having to be compatible.
Maybe 3D rendering, but high precision is still hard to do there, as math operations (like matrix multiplication) destroy precision and lead to funny bugs / clipping
OMG, can you please write a book on C++ or make a very thorough series of lessons on modern C++?
a lot of these tutorials would be AMAZING content for books!
nice
A common use for fixed point is multiplayer games where the game physics have to be deterministic between users.
But floats are deterministic too.
@@rsa5991 on one cpu yes, but across several the FPU actually behaves differently sometimes. If I remember correctly the game Trackmania uses fixed point because of this.
@@rosen8757 Well, there is IEEE 754-2008, which can be enabled in the compiler, and then you can use floats deterministically
Discord rookie here... You said you were putting the code on the Discord, but alas I couldn't find it. Not sure if I'm looking in the right place, so does anybody have any tips on finding it? Just looking to commit these concepts to permanent storage by storing the code example.
I got someone on the discord chat to be kind enough to give me a link so got it, thank you.
Javid!!!!!
Please teach us about Quake's fast reverse sqrt algorithm
Hello friend. I was interested in modifying the textures for a ps2 game, The Getaway: Black Monday. Well, according to what I researched to be able to do this, I would have to extract the textures from the game and then modify the texture, but there are no videos explaining how to get the extracted and modified texture and put it in the game, importing the modified texture into the game . Well, I saw a comment on a video that the 010 editor, this program, is used to import the texture that was exported and modified. Well is this true? If so, could you bring a tutorial explaining how to import the texture of the ps2 game that was exported to be modified? I could even pay money for a tutorial of yours explaining how to do this with the game I wanted to modify.
can you please make a video on how to access low-level hardware without including any library? like accessing the sound card, or the graphics card to write frames directly to the framebuffer
virtually impossible. any modern os will not allow you to do that, only a driver can directly access hardware. any other program must use an appropriate API like winapi, sdl, directx etc, which in turn will talk to the driver and do the operations. now, if you want to learn how to create a driver for a device, that's a whole different huge can of worms. but you will still rely on the os's APIs for other things you might need, like memory allocation, threads, interrupts and stuff like that. stay with the os's APIs.
It's quite difficult without actually writing an OS these days. Also many hardware drivers are still proprietary. If course it is possible, but very hard on a modern day desktop.
@@javidx9 I understand, thanks for the reply 😊 but you said you would be explaining it in the future --> ua-cam.com/video/tgamhuQnOkM/v-deo.html at 6:45 😁🤓
@@giornikitop5373
from watching Ben Eater's videos,
let me try to answer this..
in theory, in general you need to reverse engineer the source code of the driver..
you ask yourself where the virtual memory address (page memory address) is, and then you do some arithmetic to find the physical address of the hardware..
then figure out how many possible operations the output device/hardware can do..
if you have an oscilloscope you could try to reverse engineer from there..
first there is some sort of initial buffer that tells you what the manufacturer of the device is, etc.; after that you need to distinguish the opcodes, data, and addresses..
for example, if the graphics card can only do things like read and write, then we can figure out its enumeration constants..
maybe between 0 and 1..
and then if you know that, you can basically find a C++ compiler for that particular cpu..
although i haven't done this yet..
i really hope i can do this..
it's probably much more work than this, i don't know
@@abacaabaca8131 um, no. source code does provide lots of info, but there are always binary blobs/firmware that are proprietary and thus unknown. if there are gpu drivers that are 100% open source, then ok. now, for something simple, as @Just A cherry OnTop asked, like accessing the framebuffer, that might be enough.
I haven't watched Ben Eater's videos but I'm guessing he reversed some simple device or cpu that's more or less already known, like the 6502, z80 etc. reversing a modern gpu from zero is a whole different level of pain. let me put it this way: unless you're a gpu engineer at that level, know what you're looking for and have high-tech equipment, your chances are practically zero.
i just still dont get the conversion from float to fixed gosh
8:42 Sorry but I have to point this out......1/6??
Hi David!! Let's have a video about IEEE 754
In the end it wasn't 4.4 but 8.8 ;-)