Correction: Tharo pointed out that the float registers are actually 64 bits in width! Though it seems like compilers at the time would always end up using them as 2 seperate 32 bit registers that combine them to a 64 bit register as described. This seems to be the same compiler shortcoming that I've described for the general purpose registers. Also a bunch of people pointed out that "64 bit" may refer to memory bus bandwidth. Well.... the RAMBUS on the N64 is 9bit. There is a 72 bit bus in there somewhere too which kind of looks like a 64 bit bus, so maybe that's what the "64" in "Nintendo 64" could have stood for,...
A CPU's 'bitness' is not determined by the size of its registers, otherwise the Mega Drive would be considered a 32-bit console. What is 16-bit in the Mega Drive's 68000 CPU is its data bus. Data bus size is important when designing a circuit which uses a CPU, hence why defining how many 'bits' a CPU is is necessary in the first place.
Fun fact Nintendo wouldn't make another 64bit console until the Switch, 20 years after the release of the N64. GameCube, Wii, and Wii U all used similar 32bit CPUs.
64 was pure marketing. We used the 64 bit R4300 because SGI owned MIPS and the recently developed R4300 CPU was available and had the power/performance/cost tradeoffs we needed. If there had been an equally performant 32 bit processor we could have use it, and as Kraze points out it would have worked just as well. A bigger consideration than the register size was the size of the bus, which dictates how much memory can be read or written to DRAM in one cycle. Memory bandwidth was the real bottleneck in the N64 (and is usually the bottleneck in a graphics system, even with modern GPUs). The R4200 has a 64 bit bus, but the R4300 in the N64 actually has a 32 bit bus. In contrast the RDP (N64's pixel processor) used a 1024 bit bus and 1024 bit vector registers. I always thought Nintendo should have called it the N1024. But in fact all the busses (the 32 bit CPU bus, the 1024 bit RDP pixel bus, the 16 bit audio) all funneled (via the memory controller in the MCP) through the Rambus high speed 8 (or 9*) bit bus to the RDRAM memory. * Fun fact: the N64 actually used the parity bit as a data bit. So the Rambus was actually a 9bit bus. The 9th bit was not used by the cpu, but the framebuffer was 18 bits (9+9) in R5 G5 B5 C3 format where the 3 C=Coverage bits helped with antialiasing. The z buffer was also 18 bits.
I'd actually bet that most of the N64's shenanigans were a consequence of the culture at SGI and where they were headed. They were not in the business of making game GPUs, but rather GPUs for business. Most of that knowledge would carry over yeah, but you'll get lots of weird stuff like high latency in the GPU, which for what businesses in the early 90s were using 3D graphics for wasn't as big of a deal. Likewise, the 64-bit CPU was probably because ALL their upcoming systems were 64-bit, so it was natural. Then N saw it and went OMG and hyped it to the moon.
@@kargaroc386 look up the video called oral history phil gossett. he explains a lot of the development of the chip. most of the problems were due to rambus, which was chosen by nintendo for cost. they were able to implement pretty high quality rendering for the time, under tight constraints.
I know some new game developers on modern hardware use 64 bits for large game worlds. Think of minecraft when you travel really far and the world gets funky. Anyway everything I said has nothing to do with the n64
@@pleasedontwatchthese9593 CPU bits have nothing to do with how large a game world can be. You can represent numbers far in excess of the largest register on a given CPU. Pac Man used a Zilog Z80, which was an 8 bit processor with some 16 bit features. The largest number it can represent is 65,536, but the highest score you can get in the game is 3,333,360. Game worlds are in the same boat. You could of course use a coordinate based system that would be tied to the maximum number the processor can fit in one of its registers, but you aren't solely limited to that. There are a few older games that could dynamically load the world on the fly and the player never saw a loading screen. The world size significantly exceeded the maximum CPU bit width of any register. Minecraft's infamous "far lands" are from a bug in the world generation code, it originally was never intended to generate worlds that large, and did tie the world coordinates to a fixed bit width. But while it did it that way, it didn't have to and could have been a truly infinite world if Notch had designed it that way from the start.
@@pleasedontwatchthese9593 yeah, Minecraft is the right example. On Bedrock, they still use 32 bit, so the world becomes unplayable at 2 million, not 30 like Java
Another thing to consider: Nintendo worked closely with sgi, who were a Big Deal in 3D modeling and animation. sgi machines were powerful workstations utilizing the 64 bit version of the MIPS architechture, and were the workstations of choice for Nintendo of that era. Familiarity with the architecture would have been a *huge* reason for choosing MIPS 64 over other, more purpose-efficient designs. It's also easier to run dev builds on machines that share the same architecture. In addition, it is usually cheaper to tweak an existing design than create a new one with new features (like 16-bit arithmetic acceleration).
"and were the workstations of choice for Nintendo of that era. Familiarity with the architecture would have been a huge reason for choosing MIPS 64 over other, more purpose-efficient designs." Sorry, no. Nintendo wasn't filled with SGI workstations as there was zero reason for them to be. A few people might have used them to model (and then drastically downscale) models, but they were programmers and thus had no familiarity with the MIPS architecture or how to program it. Please stop pretending you know about 1990s game development. You clearly do not.
@@Great-Documentaries But the peoples behind PilotWings64 were working on software that had to run on ꜱɢɪ workstation (commercial pilot training). Openɢl came from ꜱɢɪ.
@@Great-Documentaries Not saying either one of you is correct or not but... Nintendo did actually create and test at least SM64 on SGI workstations. There's literally a test build that runs on IRIX.
It wasnt done purely for marketing. SGi were world leaders in graphics and their big expensive workstations were 64bit. It made sense to cut that down to fit than to from scratch build a new cpu with custom instructions. That basically never happens. Nes and Snes had a cpu based on the popular 6502. There weren’t many choices in 1996. The 68k family of chip were aging and expensive. PowerPC really was just getting off the ground.
@@IncognitoActivado What cope, I don’t care what they called it. Look at the chips available in 1995. MIPS 64 bit, Sparc 64 bit (very expensive), Alpha 64 bit (super expensive), PPC 32bit (not available), Motorola 68030 32 bit (expensive and not fast replaced by PPC). Intel Pentium32 bit, hot and costs more than the entire console.
"PowerPC really was just getting off the ground." PowerPC was hardly just getting off the ground. It was based on the 1980s IBM RS/6000 workstation. The 68K family of chips were ANYTHING but expensive. And of course there was Intel. Nintendo had plenty to choose from and chose poorly.
They would have literally done what Sony did with the Playstation and just use a R3000A + custom GPU from Toshiba/Nvidia/ATI/etc (there were a lot of options in the 90s for GPU), the N64 was poorly balanced budget-wise all around, and many of the novelty ideas that SGi pushed (unified memory, etc.) were extremely poorly done. It wasn't even sold cheap, and the N64 ended up as a bad financial decision for Nintendo, hardware wise. They simply took bad decisions and bleeded out market share until the Wii+DS for it. However, bad hardware does actually make up the place for good games, ironically, as it forces creative solutions.
@@MsNyara The Wii wasn't even really successful. It was cheap compared to the other consoles. That's literally the only thing it had going for it. Look at the number of units sold vs the number of games sold. People bought one for their kids and one for grandma and grandpa to get them to move but it just sat there gathering dust. Pretty much the only titles that sold were first party titles... Meaning again, it was nothing more than a Mario/Zelda machine. It was an underpowered non-HD Mario machine and no serious gamer played it.
1:33 Note here that when we're talking about videos, 8 bits means 8 bits per color channel, so 24 bits in total. The alternative is 10 bits per channel, totalling 30 bits. For most use cases, 24 bits is perfectly fine.
In film, 24-bits is mostly only fine for final delivery formats. Many cameras capture 10 or 12 bit (per channel) footage in a Log color space for extra headroom in the color grading process. An alpha channel is also sometimes needed for transparency. Some cameras now actually have a 16-bit option, as do film scanners for those shooting analog. Modern HDR TVs and monitors output a 10-bit signal, with the lower 512 levels of brightness per channel representing the same dynamic range as the 256 of 8-bit, and the upper 512 being compressed depending on the max brightness limit of the display panel.
@@tyjuarez Oh, I definitely agree that when processing and editing video, the higher quality you have, the better. It's the same for photos or audio when it comes to that. "Most use cases" to me is when viewing non-HDR material on the average TV or monitor.
I think I notice a difference between h264 or hevc 8-bit and hevc 10-bit. I see more color banding (not sure if that's the right term) especially in dark scenes, like the gradation between two shades of a color isn't smooth enough with 8-bit. And I'm talking about noticing a difference with 8-bit and 10-bit SDR (of course HDR is different but I'm not even talking about that). So therefore I tend to prefer 10-bit color.
The hype was real - and went hand in hand with the first iteration of VR that wasn't consumer ready. Flat shaded real time polygons on a $250 home console was a BFD. Pilotwings 64 and Aerofighters Assault were developed by pioneers in VR simulation
@@mr.m2675 yes it existed in the 90s - but only as demos that you saw on TV or at trade shows- not a mass consumer product. The graphics were mostly colored polygons with few textures. Virtual Boy was riding that hype.
@@howilearned2stopworrying508 Virtual boy, while marketed wrong in the sense that it wasn't VR as we wanted. Was such a nice little machine. The fact that you had to be in a quiet area, the 32 Bit sound, (i think) and the whole experience on most the games are unlike anything that ever existed or will ever exist. Its underrated and a cool little device that is a bit of a cult classic now. Wario Land belongs in the top 50 best games ever made all things considered.
Same here, buddy. I thought it was talking about the number of bits on screen: 64 x 64 bits. The switch has a 1028-bit architecture. Each register can store 5.0706024e+30 different values.
Think 64 was simply a marketing term because at this time, I remember it well despite being so young, there was such a huge rapid growth in computers and tech, calling it a “64 bit console” despite anyone knowing what that meant, helped it ride a wave of tech obsession that pretty much captivated everyone. My dad was into building pcs at the time. So much that all his coworkers asked him to set them up with one, and I remember every year ram, disk space, all forms of memory were doubling. what you bought high end one year, was low end the next, cpus were getting faster and faster and the gpu race was just beginning. I remember the first nvidia gpus being released. I believe previously, the only thing that revolutionary was the 3dfx voodoo card. It was an amazing time for tech and video games and I think people who otherwise wouldn’t have cared at all about tech were learning terms like “64bit” it was really fascinating. He still has a cupboard full of dos floppy disks and old ms software. It was a really nostalgic wave of tech excitement that I don’t think will ever happen again. It was magical. I don’t feel that same magic when I look at today’s tech.
The annoying thing about the 3dfx voodoo cards is that a lot of the games that were built to use them are barely playable on modern systems. Elder Scrolls Adventures Redguard was released in 1998 so it should run like a piece of cake, but unless you have a 3dfx graphics card it will run at like 10fps.
For what it’s worth, most modern sound is still 16-bit 44.1kHz or 16-bit 48kHz. I say this having developed an audio player for a final project in uni.
It gets worse - the Jag doesn't even have a 64-bit CPU. It's just two 32-bit CPUs and the ad was born from Atari "doing the math." This isn't the first time that Atari tried to use bits to sell hardware though because the ST in Atari ST means Sixteen/Thirty Two and is in reference to the Motorola 68000 chip that the ST uses.
Technically, modern CPUs are 512-bit due to AVX512 SIMD instructions. The Dreamcast's superscalar architecture uses 128 bit SIMD and I think PS2 does a similar thing.
@@Kurriochi 128-bit vector co-processor integrated... which runs on 32-bit registers. Sitting effectively beside a classic 32-bit superscalar processor on the same die. There really is very little integration between them. The other 128-bit unit in the Dreamcast is the VRAM bus. Of course both are helpful to get Dremacast to the performance that it needed, but also... it could have conceivably been done differently with the same outcome. Bit wars are plain stupid.
@@Kurriochi No : the maximum size of an Integer inside a vector is still 64‒bits. Such instructions basically allows to use of several ᴀʟᴜ in the ᴄᴘᴜ but the ᴀʟᴜ are still 64‒bits. Even when you design cryptographic ᴀꜱɪᴄs, adders/multipliers are 64‒bits wide as adding 128 or 256‒bits integers directly requires more steps than doing it manually.
14:21 Not only did the Gamecube use a 32-bit processor. But the Wii, Wii U, and 3DS also did. The Switch was actually Nintendo's second 64-bit console. You could argue that Wii U should've been 64 bits. But that still means they were 16 years too soon to jump to 64 bits. The PS2 and Xbox were 32 bits as well.
@@mrmimeisfunny Yes and no. Actually the PowerPC 750 it's based on is from 1997, so it's even older. But that doesn't really matter as the Wii U's 3-core iteration went a long way from that basic design with far more cache, cores, clock speed and additional features. Another example: Intel went back after the Pentium 4 Netburst debacle and based their new core - architecture CPU's on their Notebook offerings, which themselves were based on the Pentium 3. So going back to an older design as a base can totally make sense, based on the given scenario. Changing the architecture at this point wouldn't have changed much, as Nintendo had no ambitions to build a technical competitive console anyways so they would just have swapped it for another low power alternative with additional costs in development both hardware and software tools.
@@mrmimeisfunny so? modern windows and linux PC's use x86_64 which is just the 64-bit version of x86 which was originally 16-bit and dates back to 1978. as long as the speed keeps being upgraded to keep up with the times it doesn't matter how old the architecture is
RE: 3:47 on the FPU registers - I worked a while on a decomp of Mischief Makers, and it actually was one of the few titles to compile into MIPS 1 instructions, meaning doubles were split in 2 32-bit registers. This was only for the game's code, libultra was still largely MIPS 2/3.
There's a bit of confusion around the floating point register modes. FR=0 selects the mode in which there are 16 64-bit floating-point registers, the even-numbered registers access the lower 32 bits of a double while the odd-numbered registers access the upper 32 bits of a double. FR=1 gives you 32 distinct *64-bit* floating-point registers, not 32 32-bit registers; this mode relies on having 64-bit GPRs and using dmtc1 to load double-precision values into the FPRs, the FR=0 mode was for older MIPS versions that did not have dmtc1 so required two mtc1 instead. The FR=1 mode crashes for you when using doubles as GCC cannot generate correct code for this mode on old ABIs for the VR4300, but this could conceivably be patched (not that there would be much benefit to using doubles)
ohh i had no idea, i definitely missunderstood this then. I thought the reason all the games don't use odd floats was because they were intended as the lower part of doubles. that's good to know.
So in theory using the fr=1 mode could theoretically still give better performance (because you have more registers to work with in that mode) as long as you can get it to play nicely with the compiler?
@@jlewwis1995 yeah, it was a 20us speedup in my case though it ran pretty unstable. i was thinking it ran unstable because of the doubles, but it seems like a compiler can circumvent that somehow. i'm no longer entirely confident what the underlying cause for the instability was there.
As of the floats issue. A quick tip, you can set some compiler directives to prevent the higher depth floats from ever being used. This will simply override the standards regarding the default float format. This is also the method to work around various .Net and Mono versions using different default integer and pointer formats in various 64 bit platforms breaking code. However, be careful, less depth in rendering does mean less precision. The larger the world and the further your view distance, the more likely you will hit this range. Still, with the low-poly N64 games, it just isn't the most likely thing to need to worry about. Notably on the Multimedia Co-processor, you can do 8 bit and 16bit SIMD operations. If you are looking to handle 8 bit and 16 bit math in bulk, this is a good option.
which directive are you suggesting? i've read through a lot of GCC stuff and never saw anything like that. rendering does not work via floats on the n64. the floats are purely a CPU thing. the GPU all works on fixed point integers. and you hit a lot of depth issues on the n64 due to the depth being brought down to an s16
While trying to replicate the snippet given, I noticed that GCC only seemed to emit the double-precision instructions when using the old 32-bit ABI. Using the new 32-bit ABI or the 64-bit ABI didn't seem to emit the double-precision instructions. Also I noticed that -fsingle-precision-constant appeared to have no effect regardless of ABI. Some notes about the "n32" ABI, it's essentially for 64-bit MIPS, but with 32-bit pointers and longs. (ILP32 as opposed to LP64.) It's basically just a 32-bit version of the 64-bit ABI, still using features exclusive to the 64-bit architecture, but without the memory cost of the larger pointers. It would almost certainly break existing code that assumes the older ABI, especially inline-assembly.
@@Dark_Mario_Bros.Except that it was "true" as the Megadrive's CPU was MUCH faster (but still less capable nonetheless) than the Super Nintendo's CPU
1. Coming from Saturn, having a 64-bit (really, 48 bit) register is extremely useful by itself. 2. What would have mattered more is the pulse width of the RAM bus, which other comments indicate the N64's bus was 9 bit. Hilarious; the Saturn had 32 and 16 bit busses. 3. Another thing which would have made a difference is the width of the instructions; the N64's instructions are 32-bits long. Imagine how much faster it could've been if the instructions were 16-bits long. (There are compromises to be made in doing so, but it definitely worked out for SuperH) 3. The reason Nintendo ended up with a 64-bit processor was almost certainly SGI who just kind of had it on-hand.
Thanks for the video Kaze, I'm still patiently waiting to play your elite optimized original N64 so I can experience my childhood in the best way possible! I've held off playing the game for so many years just to play yours, i can't wait to experience it!
Hey, 2 things: 1) 64-Bit colour is actually 16 bits per channel (Red, Green, Blue and Alpha) and it's only used for like, "green screened" stuff in editing if you know what I mean. 2) 16-Bit 44.1KHz audio is basically "CD quality" - that's probably why they specifically used it.
@@mirabilis Many audiophiles actually can't tell the difference. There's actually 32-bit float, and 32-bit float has a bunch of advantages when it comes to mixing
for anyone wondering, Atari Jaguar which came out 3 years prior to N64, marketed as 64-bit, had only a 64-bit object processor and blitter logic block, but the main CPU was 32-bit. a hard stretch to call 64-bit outside of marketing.
Sounds like that's what Nintendo should have done. Do the bare minimum to not get sued for false advertising when you market it as 64-bit, then actually just use a 32-bit CPU for everything for the performance safety advantages.
Actually, the main CPU is the same Motorola 68000 used in the Megadrive and Neo-Geo, which most people call 16 bit. Instead the Jaguar has two 32-bit RISC coprocessors, which maybe they thought should add up to 64. Bizzaro hardware design.
Imagine Dil's birth theme song playing irl during Kaze's birth. Which is an orchestral/cosmic remix of the Rugrats opening song. While also showing everything from the universe's beginning, to science, and history.
What's weird is that he develops for a retro console that was presumably in its prime when he was still a baby (if I understand correctly)? I mean, most of us have nostalgia for the stuff that was around in our _teens_ or something like that (give or take). But Kaze devotes his time to a game and system from the time when he was either _in the womb or an infant_ 😂
As somebody already pointed out, the r4300 was just a readily available product on the shelf, that SGI was selling for very cheap; their product sheet says it's intended for printers, routers, and other low-power devices. Buying an off-the-shelf product means Nintendo could reuse existing tools, like a compiler chain, already made for that product. The 64-bit integers could allow for 64-bit fixed-point calculations, if the developers ever wanted to use it; but the CPU lacked fixed-point instructions, so fixed-point would have to be made in software (and that kills all the speed advantage over floating-point.)
@@dkosmaripress release said $50 for samples, $35 for 100k, and other site implied a per-unit manufacturing cost of $15, so probably a bit less than that given their expected sales: I'm guessing $20-$25.
I work in 16 bit (and lower) floating point standards. There was no such thing as 16 bit (standard) floats until IEEE754-2008, which also brought us FMA instructions. As for 16 bit ints and uints, that’s a mips ISA thing
Fair note on the Gamecube and Wii, those *do* have 64-bit Float Registers, so it's still 64-bit in that way. They also use an extension of the instruction set which lets you hold two 32-bit Floats in a single register, which lets you do some fancy SIMD type stuff
For people who don't know, note that *nowadays* having larger registers can improve performance by a lot, even though it was worthless in 1996. Modern computers have SIMD instructions that allow us to do math on multiple values at once in a single (ish) instruction as long as you can stuff them all into a register. So for example, you could fit four 16 bit numbers into a 64 bit register and multiply 8 numbers in one instruction. Or even more! The newest x86 CPUs have registers as big as 512 bits, with AVX512 instructions. Also note though that most programs don't take advantage of the instructions at all, simd is a little situational and inconvenient to program in most languages. You usually see it used by the same breed of wackadoodle programmer as Kaze who sacrifice Yoshis to the vroom gods in exchange for incomprehensible powers of compilation.
rather than going for 64-bit to market it as a Silicon Graphics super computer, they should've thought about what a new era of game graphics would actually mean. They wouldnt have gone for CD but increasing the texture buffer sizes would've been nice. THAT and a rambus that goes vroom vroom!
@@davidmcgill1000 The PSX did fine with a similar amount of RAM, the problem was Nintendo/SGi going with an unified memory approach, which made coding memory use a real nightmare, as every operation bottlenecked the other, so you needed to over-optimize your code (aka see Kaze's work) to actually get to use your memory decently. Or also why N64's expansion pack (doubling total RAM) really just helped meaningfully toward the end. The other mistake was the 9-bit bus, which would bottleneck every other single part of the whole hardware, specially with unified memory making use of the tiny bus even more frequent.
Makes me think of this as a somewhat reversed situation with the Sega Genesis/Mega Drive. The Motorola 68000 CPU had 32-bit internal registers, but a 16-bit external bus, and was practically exclusively used as a 16-bit system. By marketing logic, Sega was underselling the console's performance on that basis. Then you got the IBM PC's 8088 CPU, which was 16-bit internally (and programmed as such), but the external data bus was 8 bits, yet I'd say the IBM PC is often seen more as a 16-bit system than 8-bit.
Yes. 68000 is slow in 32-bit operations, since it was designed to compete head on with 16-bit processors and couldn't dedicate much extra logic to it. I think this r4300 is similar in 64-bit being a second class citizen and being cut down in implementation cost as far as possible, since by all reason none of the systems using that era's r4000 series processors would really need 64-bit capability for the time being, but maybe in the distant future but also when the transistor budget and speed would be higher by then. The mixed 16/32-bit design gave the 68000 a massive advantage, in that the memory is flat 24-bit address space where you don't need to make special concessions for where in memory something is. True 16-bit processors used the more cumbersome page-offset memory access. Another advantage is forward compatibility with future true 32-bit systems, including software compatibility.
The SNES has pretty much the same situation too, its CPU is more or less just a souped up 6502 with a 16 bit register flag and 24-bit addressing. The data bus was still 8-bit which makes some 16 bit operations take almost twice as long.
68000 is a 32 bit processor through and through. But when people talk about 16 bit consoles they mean consoles with 16 bit like capabilities and that's all they mean.
It is kind of like car companies advertising the number of cylinders in an engine. It really doesn't tell you how the performance of the car is, but more cylinders sounds better.
@@rightwingsafetysquad9872 most modern fossil cars are using only the number of cylinders required and piping in sound to the cabin in order to meet both emissions requirements and customer satisfaction, especially the higher power rated engines
Indeed *fewer* cylinders is better as it means the engine is better optimised (outside of the cheap moped and cheap car market), with 0 being the true target
@ThePlayerOfGames Manufacturers doing something dumb with synthetic sound does not invalidate my point. 8 or 12 cylinders almost always sound better than 4 or 6. 5 and 10 are odd, 5 cylinders always sound better than 6, and 10 is peak engine music. Fewer cylinders do not necessarily mean an engine is smaller. The GM and Toyota 2.7L 4 cylinders are larger than many Ferrari V8s. But that doesn't matter either, a smaller engine does not at all mean it is more optimized. It just means it's smaller. First, what are you even optimizing for? Weight, manufacturing cost, fuel efficiency, noise, speed, tow capability, driver preference? Let's say you mean fuel efficiency. The most efficient engines are the Ford 2.5L, Toyota 2.4 and 2.0L. They regularly beat engines less than half their size. The most efficient truck engine is a GM 6.2L followed by a Foed 6.7L, both regularly do better than their own 2.7L engines. In motorcycles, BMW's 2-cylinder engines are very fuel hungry, even compared to something like Honda's 6-cylinder Goldwing engine.
Reminds me of that story where a chain of burger restaurants was marketing a third-of-a-pound burger (1/3) to combat McDonald's quarter-pound burger (1/4). It didn't catch on, because people thought the quarter pound burger was larger than the third-of-a-pound burger. As everyone knows 1/3 < 1/4. Never underestimate the stupidity of people.
Yeah, it should have been marketed using ounces. McDonald's measly 4 oz. patty (before cooked) compared to the monstrous 5 1/3 oz. slab of raw goodness. I'm pretty sure some of them still sell the 1/3 pound patties and I haven't met anyone besides children that think 1/3 is less than 1/4, but visually the odd 3 next to the almighty 4 is intuitively inferior before the brain even does the math. It's similar to why advertisements have only increased in quantity throughout the years despite the general public dislike of them, planting the seed in the animal brain will produce recognition and trust. We are incredible creatures with powerful minds but there is a lot of money to be made exploiting its weaknesses and it's present in nearly every industry, sometimes it brings results that many rely on for entertainment. We've gotten used to suspending reality so the rules that govern it have changed to reflect it, the 1/3 pound burger is for the slightly overweight thinking man who could have saved money and added a few extra heartbeats by eating food cooked at home.
I think the Sony breakup must have added fuel to the decision. Making a 32-bits CD based console would probably be the most effective solution, but since PlayStation was now on the table, they needed something to differentiate themselves. A 64-bit console was imminent (in the market perspective), so Nintendo took a shot.
The reason the N64 uses a "64 bit" processor is likely because it was fastest on paper rather than being optimised for that specific application. As the video even pointed out, when you want to do double or long long maths the speed gain from having instruction support for 64 bit types is massive over needing to emulate the calculations with multiple 32 bit instructions. This is what the CPU designers would have been looking at, since a lot of *industry* applications require the use of 64 bit types, and is one of the main driving reasons that current high performance CPUs offer instruction support for 64 bit types. The issue comes that outside basic operations, instructions on wider bit types are usually slower or use more hardware than their narrower bit type equivalents. As such when the additional precision is not required, such as for most of the logic in a *game*, you are usually better off going with the smaller types so do not see any of the benefit from the instructions for 64 bit types. The best modern example of this would be when using SIMD instructions such as AVX. With some implementations of AVX512 you could choose to perform an operation on 8x 64 bit values, or 16x 32 bit values, or even 32x 16 bit values, allowing significantly higher throughput at the cost of precision. Modern games would likely choose the 16x 32 bit values because render preparation still mostly deals with single precision floats. This does not really hold true for regular instructions on modern CPUs though, since they are pipelined to such an extent that the wider types usually at worst have longer *latency* but the throughput remains the same, being capable of executing multiple such instructions per cycle every cycle. The main reason to still use narrower types for such logic is for memory/cache density rather than execution performance, where they can still result in significant gains. Processor bit depths has little to do with allocteable memory. That is entirely determined by the processor data bus and accompanying support features. It was entirely possible for a x86 "32 bit" linux OS to address more than 4 GB or RAM due to the one of the x86 instruction extensions. 32 bit builds of Windows could have as well, but Microsoft choose not to support the feature widely, and instead push it as part of their x86-64 "64 bit" version of Windows. The x86-64 instructions would help process such bigger memory addresses more efficiently, but are not required to do so. This approach is how old 8 bit consoles such as the NES could access more than 256 bytes of RAM/ROM.
Considering the 64 bit instructions take about twice the time I'd guess that the CPU is basically doing 32 bit math like the software algorithms showed. A 16 bit mode would probably save a little bit of die space in some areas (registers mostly) and used more in others (cutting the 32 but registers in half) if it is implemented with two halves, otherwise if it is a real 32 bit ALU (arithmetic logic unit) then it would have saved a lot more of die space and been cheaper. I might be a little off here, but the gist should be about right.
Speaking of better suited processors, an SH-3 would be an interesting choice. A 32-bit CPU through and through, but it has condensed 16-bit instructions, reducing required memory bandwidth. When N64 was developed, it would reach 100 MHz with one instruction per cycle in Dhrrystone, ending up a little slower than the r4300 that they went with. But Hitachi wouldn't just give you the core and let you add coprocessors and periphery, you had to buy the processors from them as-is, while Nintendo used customised chips, licensing the core from MTI and letting NEC do the customisations. The way SEGA got their ideal (one could say dream) processor for the Dreamcast is by convincing automotive headunit designers in Japan that they needed the same exact chip with the same enhancements.
This sort of forward-compatible mixed design has popped up a number of times. The original 68000 is a 32-bit processor of the same complexity and intended to compete with typical 16-bit processors of the era... which would also nominally make Megadrive a 32-bit console if they chose to push that sort of angle? Which is stupid, i'm glad they hadn't done that. The 32-bit instructions are largely slow and not really intended to be used throughout, and the one day to day advantage the mixed design brings is flat addressing as opposed to segmented memory typical of 16-bit processors. I think you're right to suspect that 64-bit forward design is not being very wasteful in the MIPS core, that they kept the overhead to a minimum knowing it's going to compete in the 32-bit world, but it's also not very useful.
@@SianaGearz I would have been cool to have seen an upgraded Archimedes. Archimedes has no co-processors, so it does the pixels. With MMX instructions acting on 64 bit, this would be quite fast. Actually, ADD already works on vectors stored in 64 bit if you leave some 000s between the component (4x16 bit). Barrel-shift lets us mix and match components. I only miss a MUL instruction with built in shift and mask to manipulate brightness on RGB. 32x32->64 bit MUL could always be signed. The program would either sign extend one factor into 64 bit or not. Then we would use the same iMUL instruction for all products.
it's probably a lot to do with that was simply what SGI had. The MIPS cores had already migrated to 64bit some time before and the only powerful enough off the shelf budget core they had were that of the VR43-- range. To have made a 32bit only core would have taken even more development and to implement multiple 16bit more still. Given these available chips were already considerably more powerful than all used by the competitors and the system already massively undercutting the manufacturing costs of the available alternatives there was probably little need seen to change what they had.
14:02 Yep. That was me as a kid. I was a quiet kid too, but when I opened up that Christmas present... And the N64 played some role in leading me into doing embedded software as an adult.
Here's the thing: In the early 90's we were going from having significant limitations due to actual bitness - 8bit cpu's could access 64k ram for example to it being more important what the system as a whole could do. Having 64bit instructions was pointless in a system that was sprite based for example, or just didn't need 64bit etc. (most N64 code is 32bit) BUT having 64bit wide ram, and bus was critical when moving massive amounts of sprites and other graphics around. In the "32bit" era for instance, Intel CPU's were 32bit only all the way up to the Core 2 Duo (discounting other not so great cpus that didn't do so well) But those were totally fine, when paired with a super fast 3d accelerator card that had in some cases a 256 bit bus to ram. One of the Atari guys had a great quote back in the day "It is a 64bit 'system', meaning where it should be 64 bit it was, and where it didn't need to be, it wasn't" or something like that. And that is true now too. Your nice i9 whatever is 64bit, but your GPU, which needs to throw around massive amounts of data and huge computations, is likely far wider than 64bit.
One thing to keep in mind relating to the price of the silicon in the console is that by the time you are spinning your own chips with a bunch of customizations and other stuff like that the bits and registers are not what costs money. Maybe in the r&d stage it might change things a bit but no CPU manufacturer is charging its customers per register. Likely it was 64-bit because that's what was available even tho they had no real use for the functionality. Likely Nintendo went with 64 bits just because its more bits than the competition and had no real use case for it that made much sense or gave any real advantage. I can't imagine a few less instructions here and there was really making a big difference. If anything they needed to invest in a far better memory subsystem. That alone would have unlocked much more potential for the hardware.
I'm old enough and I can confirm marketing was strong. every kid wanted a nintendo or a ps1 and every new console was a real revolution. Great and interesting video
The R4300 in the N64 is just a commercial off the shelf part with some slight differences (pinout) to the standard NEC VR4300. NEC produced the R4300 in bulk for all sorts of low-cost applications (printers, networking, arcades), which probably made it an attractive (and inexpensive) solution over a custom developed part. The console did get a revised CPU, probably just a VR4310 CPU with better physical and electrical performance.
EVERYTHING ELSE from Silicon Graphics (GPU) was certainly NOT off the shelf, and cost Nintendo dearly. Nor were 4300s particularly cheap. Frankly it was a huge mistake to use THAT processor and a pseudo 64-bit architecture and then both starve it of RAM and not give it a CDROM for storage. Nintendo screwed up very badly.
@@Great-Documentaries Hindsight is always 20/20. At the time, nothing was set in stone, doing 3D right was hard, nobody really even knew what "doing 3D right" really meant (see e.g. the Saturn, 3DO, and nvidia's first PC GPU nv1 all using quad polygons rather than triangles). Using the expertise of a company that had lots of experience in making 3D hardware kind of made sense.
@@Great-DocumentariesEven if Nintendo went with CDs, they would’ve attempted to use a proprietary format since the company refuses to pay royalties. So CD playback may not be possible and would’ve been labeled as a pure gaming machine.
@@Great-Documentaries Going CD would've increased to BOM-costs, likely meaning they'd have to cut in other departments (like the PS1 did). In short, design is a matter of trade-offs.
@@sundhaug92 but the 16bit era already showed the potential of CDs, from the MegaCD to the PC Engine CD. Developers had so much more space to work on, and they were cheaper to produce.
As a digital logic developer, I would guess the register file being narrower wouldn't mean much in terms of chip cost. However it could possibly make timing closure easier which may have meant higher CPU clocks. However, since this system is memory bound due to the GPU sharing the bus, I think generally a 64 bit architecture makes sense because it means the internal data bus is 64 bit. Even though the external RAMBUS is 32 bit this still matters when moving data around the chip and getting it to the RAMBUS controller faster. In my opinion, 32 vs 64 bits is a tiny drop in the bucket as far as the overall chip design, there is so much more they could have done or not done, boiling it down to 32 vs 64 bits is like trying to compress 1000 design decision into a bool.
Agreed. I don't think it was so much that they said "we need a 64-bit system" but rather the vast majority of modern MIPS processors (at the time) were 64-bit and since the deal was with SGI, who uses MIPS for their workstations, there were very few options for 32-bit without using an outdated processor. Doesn't change the fact the marketing team "overhyped" the 64-bit aspect of the console but that's their job. The argument with 32-bit vs. 64-bit seems to be an allusion to the N64's faulty memory design. Choosing an UMA design was forward thinking but there may have been a miscalculation in the bandwidth demands of the CPU, RCP, and PI, alongside RDRAM's unusual (poor?) performance characteristics.
@@chucks9999 Yeah, there is definately some decisions around using a core that someone already has. Marketing will always just do almost whatever they want regardless.
PS2 had 128 bit registers that let you do bulk operations on 4 32bit floating point values which is the very basis of vector / matrix math which is why it was such a beast for things like physics simulation
In other words it had SIMD instructions, which were the new hotness of the late 90's in PCs. Gamecube had something similar where it could treat one 64 bit floating point register as two 32 bit floating point registers.
x86 also had 128-bit registers with four 32-bit floating point in SSE in 1999. And before that, in 1997 it had MMX which were 64-bit registers that can do bitwise 64-bit integers or operations on multiple 32-bit or 16-bit or 8-bit integers.
Boy, I was actually thinking this very same thing. I would love to see a breakdown of how many 64-bit instructions were actually used for each major N64 game. Honestly, they should've traded the 64-bit ALU for quadruple the texture cache.
I was a PC gamer in the 90s. I had a Genesis for a while, but that was it. I can’t describe to this day how strange it is to see PC gaming be so common and big. That simply wasn’t the case at the time. What it comes down to is that video gaming at the time was still a question of how much like Nintendo’s games something could be, which meant how simple the rules could be and still be entertaining. This is why you could build the technological masterpieces that were Doom, Quake, Half-Life, and Unreal, or the complex gameplay of Warcraft II, StarCraft, Daggerfall, Lands of Lore, or all the Space/Flight sims and so forth, but you still couldn’t compete at scale with something like Super Mario World, Zelda, or Mario 64. Gaming was expected to be much simpler. As a rule, this is still true today, with the Switch having have become what it did even though Nintendo was in trouble in the mid 2010s prior to the switch. The 90s was a time of technical growth, and it culminated with mp3s and the internet hitting full swing in the early 2000s. The 64 bits was just Nintendo latching on to that trend, even though their strength remained a question of pure simple gameplay. What matters more from a technical perspective is the instruction set, which is why video games of the mid 90s were so technologically advanced relative to anything on consoles. By the time of Unreal and Quake on PC, you could dedicate the CPU to the tasks of the gaming logic, and the GPU could be dedicated to graphical tasks because of cards like a Voodoo 1 or 2. So you could be playing something as complex and beautiful as Freespace on a PC machine, while the best that Nintendo could do was Starfox. Obviously, the playstation closed that gap, but it was still very wide. At the same time, Sony and Nintendo simply put out very good and entertaining games, like Final Fantasy VII, VIII, and IX, Spyro, Goldeneye, Crash bandicoot, etc. While these games weren’t as technically advanced as Unreal, Quake II, Descent Freespace, etc, their gameplay was still more inviting. To play any of those PC games required experience a mouse and keyboard. While PS and N64 games could be very inviting and enjoyed on a couch. The first game I ever saw to bridge the gap was obviously Halo. Not because the gameplay was very particular to the one or the other, but because the experience between the PC and console was very similar. To me, the last thing that the PC dominated technologically was network play. Again, all of these are questions of instruction sets, not register width. The network card on a PC has a controller with an instruction set, and the OS abstracts this for use by the game. The GPU does the same. And the CPU also has instructions that facilitates interactions of all of these from the perspective of the OS and therefore the game. That’s why games on the PC looked more realistic and were more capable for well over a decade on 32-Bit chipsets and memory. They simply had more instruction sets. And you could upgrade these easily because they came with a motherboard specifically designed to handle changing hardware and, therefore, more instruction sets.
Depends on where you lived during the 90s. In Germany everyone played on computers and consoles were seen as exclusively for little kids by the vast majority of people playing games. That only really changed by time of the Xbox 360 I'd say. People would still call others "console kiddies" when the PS2 was at its height.
@@tannhausergate7162 yes I remember seeing this! It was actually something I noticed growing up that made me feel Isolated hahaha. I grew up in the inner city of Los Angeles, so I didn’t have many people to talk to about PC games. But when I would go online and do stuff like IRC, I often found myself doing seeing people talk about games in other languages like German. I could understand the English, obviously, but that was it. It’s funny to me all these years later because I caught a lot of influences from that era, even though I grew up in a decidedly urban environment. So I was listening to rap and Spanish music, but I was also hearing really cool synth music in games like Unreal, or gleaning stuff here and there from some of the PC music making communities like Rebirth that I would come across in the other regional communities. All these years later, I can say that my preferred music when it comes to electronic is melodic German and French house hahaha.
At the time, leaning on bit count as a marketing schematic was important. The bit wars were certainly at their end pint around this time, but it proved to be nintendos folly here. Kaze proving how much better it would have been if nintendo just hadn't leaned into it, the N64 wouldn't have had as many limitations and complications as it did.
"bits" are an exceedingly dumb way to measure a computer's "power". Floating Point Operations Per Second is better, time to sort a list of 10,000 random integers by heap sort is better still, etc.
I was around for the N64 release and the jump to 3D was insane at the time. I don't think anyone cared that it was 64 bits or even knew what it meant, we just knew number was bigger and games were 3d now. It's kind of like how people thought the 1/3lb burger was smaller than the quarter pounder (1/4lb burger), obviously bigger number means more, right? It doesn't matter if it's actually true, as long as dumb people (the average person) thinks it is a certain way.
Just an aside to the N64 discussion here-- Castlevania: Legacy of Darkness actually used antialiasing in its Hi-res mode which equated to a horrible frame rate. You can disable the AA in Hi-Res mode using a Gameshark code and it actually improves the frame rate significantly. Its the only game so far Ive tested with such noticeable improvement.
Yay another technical video. Probably said this a lot already but you made me interested in low level programming and modding. I'm still a web dev jobwise but my hobby is assembly and C for modding old games and it's thanks to you I didn't gave up and got where I am right now
I feel Ninty realized halfway through the 64 Bits weren't necessary and probably but probably continued using it as they were fairly deep in dev of Mario 64, and continued not optimizing games as much out of fear of making Mario 64 look bad in comparison.
I like the way x86 handles it (before SSE/SSE2/x64) where all floating point values are loaded into a uniform format of eight 80-bit x87 registers (with 64 significant bits), which ensures that floating point remains precise, and MMX reuses the register space of x87 to store eight 64-bit vectorized registers that allow for 64-bit loads and stores as well as 64-bit bitwise and addition operations and there are operations on arrays of 32-bit, 16-bit, and 8-bit registers that can be used to perform same operation on multiple integers of the 64-bit MMX register.
Bro, you missed out by being a fetus in the run up to N64 launch. I still remember seeing an import N64 in New York a couple of months before it released here. That first glance at the Mario head attract screen will never stop being the most impressive thing I've ever seen with my eyes.
It wasn't just consumers that assumed 64-bit would automagically be better. From what I recall, one of the N64 developers/executives later went on to state that, they had made a misstep with the N64, by building it with the assumption that making the processor as big and as fast as possible would be the best choice, that the peak 1% high of performance was key; when in fact it was increasing the "cruising speed" they should have been targeting, that is, making the average operations per second faster. It wasn't really a corporate decision to knowingly make the console perform worse for marketing purposes. You have to keep in mind that, developers and engineers back then didn't have the firm grasp of what 3d gaming was supposed to look like, or how it was supposed to actually work. For as good as the N64 and Playstation did; it was still very much in the experimental stage of 3d graphics. God, what a time to be alive. We really thought that the technology from Ghost in the Shell and Blade Runner was just right around the corner.
A misstep by management maybe who have a naive understanding of the tech. I work as a software developer so this kind of uninformed technical decision making is kind of common. Any of their hardware engineers should've known what a 64-bit processor would implicate. So much so that even Bill Gates thought that going 64-bit was pointless (he didn't quite anticipate that RAM would get incredibly big and cheap which ultimately forced the 64-bit transition on PCs).
I think the real lesson is that higher-bit hardware isn't better just for doing more bits at once, it needs to do the operation in the same/similar time to actually perform better
hard to imagine that the entire videogame hardware industry throughout the 1990s was dictated by one dumb marketing decision by Sega when they released the Genesis in 1999, they could've picked any of a dozen metrics to compare it to the NES to. they could've picked the clock speed (7.67MHz on the Genesis vs 1.79MHz on the NES) they could've picked the amount of memory (64KB CPU RAM plus 64KB VRAM on the Genesis vs 2KB CPU RAM plus 2KB tile map on the NES) if they wanted to pick a metric that was actually _useful_ to compare video game console performance, they could've picked the number of simultaneous sprites (80 vs 16), parallax background layers (4 versus 1), or colors (the NES could only display 64 colors total, and only 4 per background tile) None of those metrics are particularly good measurements of how powerful a game console is, even back then, but any of them would've been a better indicator than the one they went with, which was the fucking CPU bus width. as shown in that video, that impacts absolutely nothing with regards to development, but for marketing reasons, every other console manufacturer at the time had to jump on the hype train, leading to dumb decisions like the 64-bit CPU in the N64 and the even dumber 64-bit graphics chip in the Atari Jaguar (which, by the way, came out 3 years before the N64)
Depth, not rate. Audio is sampled a certain number of times per second. Actually storing it requires quantizing those samples to some size. Usually 8-24 bits per sample, though you can certainly go lower or higher, and there have been many uses of lower depths historically. The difference between 8 and 16 bits per sample at a given sample rate is noticeable, and between 16 and 24 is hard to hear. Beyond that, you're not going to be able to tell, so it's only good for processing. 32-bit floating-point is basically the same accuracy as 24-bit, but easier to program.
The "64bit" designation really did come from the memory bus though. With the N64, the main processor is really the Reality Signal Processor - which did your animations, T&L, sound, and drove the rasterizer. It ran at 62.5MHz, and with 64bit registers interfacing with the unified memory bus that amounted to 500MB/s of bandwidth. Now the memory itself wasn't truly 64bit because it was serial RAMBUS technology. Instead, it was 9bit and clocked at 500 million transfers per second. However, the 9th bit was only accessible by the rasterizer - not by the RSP. It's thus not counted because it's reserved for a supporting processor, not the main one. So, the RSP has 500 million 8bit bytes of available bandwidth every second and which matches 64bits clocked at 62.5MHz. So you essentially had a 62.5MHz 'computer' with a 64bit main processor connected to 64 memory and that is where the designation comes from.
@@dacueba-games i would say by 1996, computers were starting to need a fan especially with things like the pentium and pentium pro requiring a fan and the older 486 dx 4 100 could be cooled passively, it was recommended you put a fan on there
Quick addition to the color bit depth in the beginning. Usually the bits are being labeld per color channel. Most monitors these days display 8bit per color channel (which is what's shown at 1:26 ), or if you multiply this by 3, since there are 3 color channels, we get a total of 24bits. With HDR and stuff there is a good reason to increase the bitdepth to 10bit or 12bit per color channel. 16bit per CC is completely overkill for pretty much everything.
We made this even more confusing for me is that generally when people talk about 64 bit architectures versus 32 bit architectures they generally are talking about the difference in the virtual address space size increasing beyond 4GB. Obviously the N64 has nowhere near 4GB of memory, but it DOES have virtual memory addressing! 😲
it's addressing space or instruction size. Mostly they are the same, but sometimes they are different. Fun fact: Most 32bit x86 CPUs could address more than 4GB since the late 90ies, due to their PAE unit. But if you were running Windows, you had to pay extra for the license to use this feature.
I agree that 64-bit was mostly hype but to be fair to the specific piece of marketing you pointed out, the speed difference is mostly attributed to cartridges vs. discs.
I've done a bunch of audio design lately, and one huge shortcoming for 32bit numbers turned out to be indexing an audio clip by sample, which is needed for some playback purposes. 32bit floating point numbers come out to about 350 seconds of audio (give or take a factor of two), so all sorts of factors are limited by that, including oversampled audio.
It's not a theory, it did. Each transfer to or from RAMBUS RDRAM is 64 bits read or write per RCP clock cycle, plus overheads for starting a new transfer. (Actually 72 bits because of the use of parity RAMs for antialiasing data, but for everything other than that part of the graphics pipeline, the extra 8 bits are invisible.)
@@forasago Yes, I'll spare you the details, but each pixel of the "16-bit" framebuffer is 18 bits, with 5 each for RGB and 3 for coverage, which has to do with antialiasing. Similarly each pixel of the Z buffer is 18 bits, with 14 for depth and 4 for the derivative of depth. To do this, they are using parity RAMs, which have 9 bits per byte instead of 8. Normally the extra bit is used as a checksum to detect memory corruption, but here it's used for the antialiasing.
I thought the bits of a CPU refers to the size of the ALU, how many bits it can add in one operation. For example the 68000 processor had 32-bit registers, but was a 16/32-bit CPU since it only had a 16-bit ALU.
It has 128bit SIMD registers, eg doing 4×32bit multiplications at once. Useful when you have a lot of colors (red, green, blue, and alpha) or vertex positions (x,y,z,w - where the w is used for perspective)
@@OliGaming-d1u depends what you mean. There's lots of different "bits* you can be. Intel chips have AVX-512 instructions, which operate on 512 bits, memory buses are up to kilobytes, and Nvidia's GPUs have "tensor cores" that operate on even more. But in general, day to day use CPUs are still occasionally in 32bit mode (though much less nowadays)
@@SimonBuchanNz Using the SIMD registers as the number of bits of one's CPU, rather than the general purpose registers of the ALU, is kinda cheating IMO. By that metric, the Pentium 1 MMX was a 64-bits CPU, the Pentium III was a 128-bits CPU and modern CPUs are 512bits. Which nobody really claims.
@@Liam3072 yeah, I would agree in a vacuum, but it does get weird as soon as you get outside of desktop CPUs. Like, what bitness is Nvidia's Ada Lovelace (40xx)? Each "SM" has hundreds of cores, including 32bit and 64bit floating point, 32bit int, matrix units supporting 8 to 64bit operands, and it's generally going to be invoking the same operation on 32 of those at once. The classic definitions of bitness fall apart here.
Almost no audio is 64bits. CDs are 16bit, as are MP3s and the audio from this UA-cam video. Your device's soundcard is very likely configured for either 16 or 24 bit. There's no reason to go any further for consumer gear, it's simply not needed. All it really does is lower the noise floor, which means that quieter sounds can be reproduced without being drowned out in noise. For 16bit that's already nearly imperceptible at normal listening volume. At 24bit, it is completely imperceptible. Higher bitdepths are only needed in studio recording and mixing scenarios, where a raw signal coming from an instrument or microphone might be at extremely low volume. You might need to amplify a source a considerable amount afterwards, which would also amplify the noise floor. That's where working with 32 or even 64bit files can make sense. In fact, most modern audio software internally mixes in 64bit float to avoid this exact problem.
@@gamecubeplayer While mp3s have an effective dynamic range of 20bits, in nearly all scenarios both the encoder and the decoder truncate that to 16bit PCM on the input/output, making mp3s 16bit (at best, not considering the possible signal degradation due to lossy compression). Same goes for AAC (in most cases).
Nintendo was kinda lucky the N64 worked out as a product. This could also have turned into the PS3 situation where developers are presented with hardware too powerful and complex for its own good. Sony's saving grace here was the tremendous success of the previous console generation. The CELL processor itself was ahead of its time and as such a disaster to develop for.
I like how 64 bits was basically something that developers only ever used on accident and the majority of the reason the expansion pak was used was due to some games breaking without it. Like, the entire console seems to be constantly facing this type of issue.
The Dk64 rumor was false. DK64 used it to enable more dynamic lighting. Other expansion required games used it just because they needed more ram for data.
It's a shame they didn't call it the original name of "Nintendo Ultra 64" due to some copyright or trademark issue or another. Heck I found an etsy page that makes a decent "Nintendo Ultra 64" jewel modelled after the original Nintendo Power advertising for the console (minus the curved surface, unfortunately). At least then the marketing would have been all about the "Ultra" part and not the "64" part. "Super Mario Ultra", "Kirby Ultra", and so on and so forth, just like all the "Super" names on the Super Nintendo/Famicom. Heck the massively loud "Ultra Combo!" from Killer Instinct was a not so subtle bit of marketing for that original name. So it seems what I'm hearing is yes it's a 64 bit console, no that isn't a good thing, but they'd run a few things in that mode purely to prevent bugs from accidental use of 64 bit floats. Now, I can't say whether or not Atari and Nintendo added that mode PURELY for marketing (I'm not sure if Nintendo of Japan marketed their consoles purely on bits at all), but in terms of manufacturing, it may actually have been cheaper to design the chips that way rather than custom design a purely 32 bit SGI chip with it's own instructions. Manufacturing costs MAY have been the likely culprit of this design decision. It's still funny to me that 64 bit processing didn't actually become something close to useful until PCs and later consoles actually started including more than 4GB of RAM.
Correction: Tharo pointed out that the float registers are actually 64 bits in width! Though it seems like compilers at the time would always end up using them as 2 seperate 32 bit registers that combine them to a 64 bit register as described. This seems to be the same compiler shortcoming that I've described for the general purpose registers.
Also a bunch of people pointed out that "64 bit" may refer to memory bus bandwidth. Well.... the RAMBUS on the N64 is 9bit. There is a 72 bit bus in there somewhere too which kind of looks like a 64 bit bus, so maybe that's what the "64" in "Nintendo 64" could have stood for,...
Are you saying that Nintendo could have called it the Nintendo 72? We would have had Super Mario 72 and Donkey Kong 72 instead?
Wow, can we use modern Gcc with a specific profile to fully use the N64 ISA?
9bit? Wth
A CPU's 'bitness' is not determined by the size of its registers, otherwise the Mega Drive would be considered a 32-bit console. What is 16-bit in the Mega Drive's 68000 CPU is its data bus. Data bus size is important when designing a circuit which uses a CPU, hence why defining how many 'bits' a CPU is is necessary in the first place.
@Clownacy wrong , the 68000 is a 32 bit processor hands down.
Fun fact Nintendo wouldn't make another 64bit console until the Switch, 20 years after the release of the N64. GameCube, Wii, and Wii U all used similar 32bit CPUs.
And the GameCube wasn't even a "128bit" console when it was just two 64 bit processors
64 bits. 32 bits.
@@alfombracitario Yeah, they're spitballing. GameCube is 32-bit.
@@alfombracitariobro didn't watch the video
Wait really why?
64 was pure marketing. We used the 64 bit R4300 because SGI owned MIPS and the recently developed R4300 CPU was available and had the power/performance/cost tradeoffs we needed. If there had been an equally performant 32 bit processor we could have use it, and as Kraze points out it would have worked just as well.
A bigger consideration than the register size was the size of the bus, which dictates how much memory can be read or written to DRAM in one cycle. Memory bandwidth was the real bottleneck in the N64 (and is usually the bottleneck in a graphics system, even with modern GPUs). The R4200 has a 64 bit bus, but the R4300 in the N64 actually has a 32 bit bus. In contrast the RDP (N64's pixel processor) used a 1024 bit bus and 1024 bit vector registers. I always thought Nintendo should have called it the N1024. But in fact all the busses (the 32 bit CPU bus, the 1024 bit RDP pixel bus, the 16 bit audio) all funneled (via the memory controller in the MCP) through the Rambus high speed 8 (or 9*) bit bus to the RDRAM memory.
* Fun fact: the N64 actually used the parity bit as a data bit. So the Rambus was actually a 9bit bus. The 9th bit was not used by the cpu, but the framebuffer was 18 bits (9+9) in R5 G5 B5 C3 format where the 3 C=Coverage bits helped with antialiasing. The z buffer was also 18 bits.
very good comment! thank you for the input.
I'd actually bet that most of the N64's shenanigans were a consequence of the culture at SGI and where they were headed. They were not in the business of making game GPUs, but rather GPUs for business. Most of that knowledge would carry over yeah, but you'll get lots of weird stuff like high latency in the GPU, which for what businesses in the early 90s were using 3D graphics for wasn't as big of a deal.
Likewise, the 64-bit CPU was probably because ALL their upcoming systems were 64-bit, so it was natural. Then N saw it and went OMG and hyped it to the moon.
Can you explain it to me like I'm 5?
Can I ask who are you? I assume someone who worked at SGI at the time based on your comment.
@@kargaroc386 look up the video called oral history phil gossett. he explains a lot of the development of the chip. most of the problems were due to rambus, which was chosen by nintendo for cost. they were able to implement pretty high quality rendering for the time, under tight constraints.
So what I understand is that Kaze is creating the first Nintendo 32 game because it saves him frames.
I know some new game developers on modern hardware use 64 bits for large game worlds. Think of minecraft when you travel really far and the world gets funky. Anyway everything I said has nothing to do with the n64
vroom vroom
@@pleasedontwatchthese9593 it also helps wuth drawing programs for planning where you can often run oom with 32 bit especially using multithreading.
@@pleasedontwatchthese9593 CPU bits have nothing to do with how large a game world can be. You can represent numbers far in excess of the largest register on a given CPU. Pac Man used a Zilog Z80, which was an 8 bit processor with some 16 bit features. The largest number it can represent is 65,536, but the highest score you can get in the game is 3,333,360.
Game worlds are in the same boat. You could of course use a coordinate based system that would be tied to the maximum number the processor can fit in one of its registers, but you aren't solely limited to that. There are a few older games that could dynamically load the world on the fly and the player never saw a loading screen. The world size significantly exceeded the maximum CPU bit width of any register.
Minecraft's infamous "far lands" are from a bug in the world generation code, it originally was never intended to generate worlds that large, and did tie the world coordinates to a fixed bit width. But while it did it that way, it didn't have to and could have been a truly infinite world if Notch had designed it that way from the start.
@@pleasedontwatchthese9593 yeah, Minecraft is the right example. On Bedrock, they still use 32 bit, so the world becomes unplayable at 2 million, not 30 like Java
Another thing to consider: Nintendo worked closely with sgi, who were a Big Deal in 3D modeling and animation. sgi machines were powerful workstations utilizing the 64 bit version of the MIPS architechture, and were the workstations of choice for Nintendo of that era.
Familiarity with the architecture would have been a *huge* reason for choosing MIPS 64 over other, more purpose-efficient designs. It's also easier to run dev builds on machines that share the same architecture. In addition, it is usually cheaper to tweak an existing design than create a new one with new features (like 16-bit arithmetic acceleration).
This happens in software, too. Skyrim mannequins, for one.
"and were the workstations of choice for Nintendo of that era.
Familiarity with the architecture would have been a huge reason for choosing MIPS 64 over other, more purpose-efficient designs."
Sorry, no. Nintendo wasn't filled with SGI workstations as there was zero reason for them to be. A few people might have used them to model (and then drastically downscale) models, but they were programmers and thus had no familiarity with the MIPS architecture or how to program it. Please stop pretending you know about 1990s game development. You clearly do not.
@@Great-Documentaries But the peoples behind PilotWings64 were working on software that had to run on ꜱɢɪ workstation (commercial pilot training). Openɢl came from ꜱɢɪ.
@@Great-Documentaries Not saying either one of you is correct or not but... Nintendo did actually create and test at least SM64 on SGI workstations. There's literally a test build that runs on IRIX.
@@Great-Documentarieswho are you and where did you work? Your comment is just wrong.
Funny to think that after N64, all Nintendo consoles until the Switch were 32 bit.
Switch 64 would have been a nice name XD
Except the later gameboy models of color/light Kappa
@@SuperM789I think he meant home consoles, not handhelds.
@@SuperM789The What
@@SuperM789 not sure where you got that from the gb and gbc were both 8bit, gba and gba sp were 32 bit
Because higher bit means more better
In 1996 it literally did mean that
More powerful LITERALLY meant better games until the 128 bit generation
It's got ALL the graphics you could ever need!
*more betterer
@@minecrafter3448 Watch the video.
It wasnt done purely for marketing. SGi were world leaders in graphics and their big expensive workstations were 64bit. It made sense to cut that down to fit than to from scratch build a new cpu with custom instructions.
That basically never happens. Nes and Snes had a cpu based on the popular 6502. There weren’t many choices in 1996. The 68k family of chip were aging and expensive. PowerPC really was just getting off the ground.
@@IncognitoActivado What cope, I don’t care what they called it. Look at the chips available in 1995. MIPS 64 bit, Sparc 64 bit (very expensive), Alpha 64 bit (super expensive), PPC 32bit (not available), Motorola 68030 32 bit (expensive and not fast replaced by PPC). Intel Pentium32 bit, hot and costs more than the entire console.
@@keyboard_g Not my problem.
"PowerPC really was just getting off the ground."
PowerPC was hardly just getting off the ground. It was based on the 1980s IBM RS/6000 workstation. The 68K family of chips were ANYTHING but expensive. And of course there was Intel. Nintendo had plenty to choose from and chose poorly.
They would have literally done what Sony did with the Playstation and just use a R3000A + custom GPU from Toshiba/Nvidia/ATI/etc (there were a lot of options in the 90s for GPU), the N64 was poorly balanced budget-wise all around, and many of the novelty ideas that SGi pushed (unified memory, etc.) were extremely poorly done. It wasn't even sold cheap, and the N64 ended up as a bad financial decision for Nintendo, hardware wise. They simply took bad decisions and bleeded out market share until the Wii+DS for it.
However, bad hardware does actually make up the place for good games, ironically, as it forces creative solutions.
@@MsNyara The Wii wasn't even really successful. It was cheap compared to the other consoles. That's literally the only thing it had going for it.
Look at the number of units sold vs the number of games sold.
People bought one for their kids and one for grandma and grandpa to get them to move but it just sat there gathering dust.
Pretty much the only titles that sold were first party titles... Meaning again, it was nothing more than a Mario/Zelda machine.
It was an underpowered non-HD Mario machine and no serious gamer played it.
1:33 Note here that when we're talking about videos, 8 bits means 8 bits per color channel, so 24 bits in total. The alternative is 10 bits per channel, totalling 30 bits. For most use cases, 24 bits is perfectly fine.
In film, 24-bits is mostly only fine for final delivery formats.
Many cameras capture 10 or 12 bit (per channel) footage in a Log color space for extra headroom in the color grading process. An alpha channel is also sometimes needed for transparency. Some cameras now actually have a 16-bit option, as do film scanners for those shooting analog.
Modern HDR TVs and monitors output a 10-bit signal, with the lower 512 levels of brightness per channel representing the same dynamic range as the 256 of 8-bit, and the upper 512 being compressed depending on the max brightness limit of the display panel.
@@tyjuarez Oh, I definitely agree that when processing and editing video, the higher quality you have, the better. It's the same for photos or audio when it comes to that. "Most use cases" to me is when viewing non-HDR material on the average TV or monitor.
@@DaVince21 Yep, same in audio, 24 or even 32 bits can be useful for audio edition. But for playback? 16 bits is plenty.
Old enough to remember when 16 bits images meant per pixel, not per component.
I think I notice a difference between h264 or hevc 8-bit and hevc 10-bit. I see more color banding (not sure if that's the right term) especially in dark scenes, like the gradation between two shades of a color isn't smooth enough with 8-bit. And I'm talking about noticing a difference with 8-bit and 10-bit SDR (of course HDR is different but I'm not even talking about that). So therefore I tend to prefer 10-bit color.
The hype was real - and went hand in hand with the first iteration of VR that wasn't consumer ready. Flat shaded real time polygons on a $250 home console was a BFD. Pilotwings 64 and Aerofighters Assault were developed by pioneers in VR simulation
vr, the actual vr with a head mounted display? (ofcource im not talking about vr we have now)
@@mr.m2675 yes it existed in the 90s - but only as demos that you saw on TV or at trade shows- not a mass consumer product. The graphics were mostly colored polygons with few textures. Virtual Boy was riding that hype.
@@howilearned2stopworrying508 i saw some arcade vr maschines on the internet but i didn't know consumers were able to have vr tech at home
@@howilearned2stopworrying508 Virtual boy, while marketed wrong in the sense that it wasn't VR as we wanted. Was such a nice little machine. The fact that you had to be in a quiet area, the 32 Bit sound, (i think) and the whole experience on most the games are unlike anything that ever existed or will ever exist. Its underrated and a cool little device that is a bit of a cult classic now. Wario Land belongs in the top 50 best games ever made all things considered.
@@howilearned2stopworrying508 VR was gonna be the next big thing like 92-93.
It was even in movies like Lawnmower Man and whatnot.
When I was a kid, I thought the GameCube had a 128-bit system and the Wii a 256 one because they should double every gen
Same here, buddy. I thought it was talking about the number of bits on screen: 64 x 64 bits.
The switch has a 1028-bit architecture. Each register can store 5.0706024e+30 different values.
Yeah me too. When I was a kid. NOT until I was 36. Ha ha.
@@sirbill_greebi3811 1028 is not a power of 10
Did you take it apart and count the bits to verify?
Nintendo doesn't want you to know this but the NES secretly has 5,000 extra bits that go unu-
Think 64 was simply a marketing term because at this time, I remember it well despite being so young, there was such a huge rapid growth in computers and tech, calling it a “64 bit console” despite anyone knowing what that meant, helped it ride a wave of tech obsession that pretty much captivated everyone. My dad was into building pcs at the time. So much that all his coworkers asked him to set them up with one, and I remember every year ram, disk space, all forms of memory were doubling.
what you bought high end one year, was low end the next, cpus were getting faster and faster and the gpu race was just beginning. I remember the first nvidia gpus being released. I believe previously, the only thing that revolutionary was the 3dfx voodoo card. It was an amazing time for tech and video games and I think people who otherwise wouldn’t have cared at all about tech were learning terms like “64bit” it was really fascinating. He still has a cupboard full of dos floppy disks and old ms software. It was a really nostalgic wave of tech excitement that I don’t think will ever happen again. It was magical. I don’t feel that same magic when I look at today’s tech.
The annoying thing about the 3dfx voodoo cards is that a lot of the games that were built to use them are barely playable on modern systems. Elder Scrolls Adventures Redguard was released in 1998 so it should run like a piece of cake, but unless you have a 3dfx graphics card it will run at like 10fps.
For what it’s worth, most modern sound is still 16-bit 44.1kHz or 16-bit 48kHz. I say this having developed an audio player for a final project in uni.
because it's enough for playback :)
That doesn't count as an excuse.
@IncognitoActivado ...I took their comment specifically as extra trivia for people who have zero knowledge in digital audio.
@@_SereneMangoLet's hope they are too young to use UA-cam with such a dumb attitude.
I'm curious, what was your uni major? It sounds interesting
Kaze: The N64 was the first 64 bit console.
Atari Jaguar: Am I a joke to you?
Literally everyone: Yes.
🙄
It gets worse - the Jag doesn't even have a 64-bit CPU. It's just two 32-bit CPUs and the ad was born from Atari "doing the math." This isn't the first time that Atari tried to use bits to sell hardware though because the ST in Atari ST means Sixteen/Thirty Two and is in reference to the Motorola 68000 chip that the ST uses.
@ShadowXeldron The bitter and object processor are 64 bit on the atari jaguar.
haha that am i a joke thing is such a funny joke you could keep saying it it gets funnier every time haha
but we are talking about the first one that actually sold lol
Wait till you look at the Dreamcast and PlayStation 2 where both were advertised as being 128 bits.
I wonder if there was ant benefit to that
Technically, modern CPUs are 512-bit due to AVX512 SIMD instructions.
The Dreamcast's superscalar architecture uses 128 bit SIMD and I think PS2 does a similar thing.
@@Kurriochi 128-bit vector co-processor integrated... which runs on 32-bit registers. Sitting effectively beside a classic 32-bit superscalar processor on the same die. There really is very little integration between them.
The other 128-bit unit in the Dreamcast is the VRAM bus.
Of course both are helpful to get Dremacast to the performance that it needed, but also... it could have conceivably been done differently with the same outcome.
Bit wars are plain stupid.
@@Kurriochi No : the maximum size of an Integer inside a vector is still 64‒bits. Such instructions basically allows to use of several ᴀʟᴜ in the ᴄᴘᴜ but the ᴀʟᴜ are still 64‒bits. Even when you design cryptographic ᴀꜱɪᴄs, adders/multipliers are 64‒bits wide as adding 128 or 256‒bits integers directly requires more steps than doing it manually.
Please direct me to an ad that says that.
14:21 Not only did the Gamecube use a 32-bit processor. But the Wii, Wii U, and 3DS also did. The Switch was actually Nintendo's second 64-bit console.
You could argue that Wii U should've been 64 bits. But that still means they were 16 years too soon to jump to 64 bits. The PS2 and Xbox were 32 bits as well.
Why? WiiU having 64 bit would only have mattered if it had more than 4GB RAM to go along with it.
@@Ashitaka0815 It's not really about the bits. More about how it still used an architecture from 2001.
@@mrmimeisfunny Yes and no. Actually the PowerPC 750 it's based on is from 1997, so it's even older. But that doesn't really matter as the Wii U's 3-core iteration went a long way from that basic design with far more cache, cores, clock speed and additional features.
Another example: Intel went back after the Pentium 4 Netburst debacle and based their new core - architecture CPU's on their Notebook offerings, which themselves were based on the Pentium 3. So going back to an older design as a base can totally make sense, based on the given scenario.
Changing the architecture at this point wouldn't have changed much, as Nintendo had no ambitions to build a technical competitive console anyways so they would just have swapped it for another low power alternative with additional costs in development both hardware and software tools.
@@mrmimeisfunny so? modern windows and linux PC's use x86_64 which is just the 64-bit version of x86 which was originally 16-bit and dates back to 1978. as long as the speed keeps being upgraded to keep up with the times it doesn't matter how old the architecture is
RE: 3:47 on the FPU registers - I worked a while on a decomp of Mischief Makers, and it actually was one of the few titles to compile into MIPS 1 instructions, meaning doubles were split in 2 32-bit registers.
This was only for the game's code, libultra was still largely MIPS 2/3.
It was fun to learn about the history of tech arms races and 1990s propaganda at the same time
The wanted poster for "DMCA Violation" is amazing
There's a bit of confusion around the floating point register modes. FR=0 selects the mode in which there are 16 64-bit floating-point registers, the even-numbered registers access the lower 32 bits of a double while the odd-numbered registers access the upper 32 bits of a double. FR=1 gives you 32 distinct *64-bit* floating-point registers, not 32 32-bit registers; this mode relies on having 64-bit GPRs and using dmtc1 to load double-precision values into the FPRs, the FR=0 mode was for older MIPS versions that did not have dmtc1 so required two mtc1 instead. The FR=1 mode crashes for you when using doubles as GCC cannot generate correct code for this mode on old ABIs for the VR4300, but this could conceivably be patched (not that there would be much benefit to using doubles)
ohh i had no idea, i definitely missunderstood this then. I thought the reason all the games don't use odd floats was because they were intended as the lower part of doubles. that's good to know.
So in theory using the fr=1 mode could theoretically still give better performance (because you have more registers to work with in that mode) as long as you can get it to play nicely with the compiler?
@@jlewwis1995 yeah, it was a 20us speedup in my case though it ran pretty unstable. i was thinking it ran unstable because of the doubles, but it seems like a compiler can circumvent that somehow. i'm no longer entirely confident what the underlying cause for the instability was there.
As of the floats issue. A quick tip, you can set some compiler directives to prevent the higher depth floats from ever being used. This will simply override the standards regarding the default float format. This is also the method to work around various .Net and Mono versions using different default integer and pointer formats in various 64 bit platforms breaking code.
However, be careful, less depth in rendering does mean less precision. The larger the world and the further your view distance, the more likely you will hit this range. Still, with the low-poly N64 games, it just isn't the most likely thing to need to worry about.
Notably on the Multimedia Co-processor, you can do 8 bit and 16bit SIMD operations. If you are looking to handle 8 bit and 16 bit math in bulk, this is a good option.
which directive are you suggesting? i've read through a lot of GCC stuff and never saw anything like that.
rendering does not work via floats on the n64. the floats are purely a CPU thing. the GPU all works on fixed point integers. and you hit a lot of depth issues on the n64 due to the depth being brought down to an s16
@@KazeN64 Maybe -fsingle-precision-constant ? This should fix atleast some of the issues I suppose.
While trying to replicate the snippet given, I noticed that GCC only seemed to emit the double-precision instructions when using the old 32-bit ABI. Using the new 32-bit ABI or the 64-bit ABI didn't seem to emit the double-precision instructions. Also I noticed that -fsingle-precision-constant appeared to have no effect regardless of ABI.
Some notes about the "n32" ABI, it's essentially for 64-bit MIPS, but with 32-bit pointers and longs. (ILP32 as opposed to LP64.) It's basically just a 32-bit version of the 64-bit ABI, still using features exclusive to the 64-bit architecture, but without the memory cost of the larger pointers. It would almost certainly break existing code that assumes the older ABI, especially inline-assembly.
So "64 Bits" was more or less Nintendo's own variant of "Blast Processing".
Yes, except it's real unlike the Genesis propaganda 😂
@@Dark_Mario_Bros. I would say it was. The M68000 of the Genesis was superior to the W65816 of the SNES.
@@Dark_Mario_Bros.I think Blast Processing was real too. I'm pretty sure I saw videos/forum threads proving it.
@@Dark_Mario_Bros.Except that it was "true" as the Megadrive's CPU was MUCH faster (but still less capable nonetheless) than the Super Nintendo's CPU
@@FunnyParadox one was better for some type of games and the same applies to the other.
1. Coming from Saturn, having a 64-bit (really, 48 bit) register is extremely useful by itself.
2. What would have mattered more is the pulse width of the RAM bus, which other comments indicate the N64's bus was 9 bit. Hilarious; the Saturn had 32 and 16 bit busses.
3. Another thing which would have made a difference is the width of the instructions; the N64's instructions are 32-bits long. Imagine how much faster it could've been if the instructions were 16-bits long. (There are compromises to be made in doing so, but it definitely worked out for SuperH)
3. The reason Nintendo ended up with a 64-bit processor was almost certainly SGI who just kind of had it on-hand.
Thanks for the video Kaze, I'm still patiently waiting to play your elite optimized original N64 so I can experience my childhood in the best way possible! I've held off playing the game for so many years just to play yours, i can't wait to experience it!
nah the fact he casually had a convo with DAN SALVATO kills me
I swear ddlc is in EVERY CORNER OF THE INTERNET
I mean, the guy's on twitter and twitch. Heck, I've seen him in streamers streams I watch. Chill guy.
Dan was one of the top Melee Link players and was a Project M developer/pro player. He's a Nintendo nerd.
@achunkofbutter i also remember hearing about him in a video about super mario sunshine speedrunning.
i knew who he was before DDLC, thanks to his old wii hacking tutorials!
Kaze, I can tell you, the marketing was insane. I reacted just like the N64 kid when we got it in Xmas 96.
At least the commodore 64 wouldnever scam me
Ultimate Wizard was a fantastic game
At least with the bread bin and its successor we know they referred to the internal memory.
A good portion of the RAM was dedicated to loading basic, so yeah, it did scam you. XD
@@dacueba-games ye but it is banked so you can replace it if you don't need it!
@@dacueba-games assembly games could bank that out tho and use all 64kB of memory
Hey, 2 things:
1) 64-Bit colour is actually 16 bits per channel (Red, Green, Blue and Alpha) and it's only used for like, "green screened" stuff in editing if you know what I mean.
2) 16-Bit 44.1KHz audio is basically "CD quality" - that's probably why they specifically used it.
And even the audiophiles usually only go to like, 24 bit 48khz because that's already overkill for the human ear, or any speaker on the market.
@@keiyakinsThere are 24bit/192kHz files. :) I have no idea why. Audiophiles must be bats.
@@mirabilis Many audiophiles actually can't tell the difference. There's actually 32-bit float, and 32-bit float has a bunch of advantages when it comes to mixing
@@JoeStuffzAlt I would say there's not a single person who can tell the difference between 96 and 192kHz. XD
@@mirabilis One issue is that people aren't probably going to buy both the 96 KHz and the 192 KHz version.
First Pannen's documentary, now Kaze blesses us as well
0:05 screams in atari jaguar
The MIPS processor was probably off the shelf and only came in 64-bit.
PlayStation 2 also used a MIPS processor
for anyone wondering, Atari Jaguar which came out 3 years prior to N64, marketed as 64-bit, had only a 64-bit object processor and blitter logic block, but the main CPU was 32-bit. a hard stretch to call 64-bit outside of marketing.
Sounds like that's what Nintendo should have done. Do the bare minimum to not get sued for false advertising when you market it as 64-bit, then actually just use a 32-bit CPU for everything for the performance safety advantages.
You could also use that same logic for 8bit and 16 systems because it often used an address register double it's standard register size.
Actually, the main CPU is the same Motorola 68000 used in the Megadrive and Neo-Geo, which most people call 16 bit. Instead the Jaguar has two 32-bit RISC coprocessors, which maybe they thought should add up to 64. Bizzaro hardware design.
@@Wuerfel21Genesis*
@@Wuerfel21 Most people, perhaps, but not Amiga people, who will point out that the 68000 is actually 32-bit under the hood.
"I was in my mother's womb at the time." Thanks for making me feel old. :D
Imagine Dil's birth theme song playing irl during Kaze's birth. Which is an orchestral/cosmic remix of the Rugrats opening song. While also showing everything from the universe's beginning, to science, and history.
Shut up. Lol
What's weird is that he develops for a retro console that was presumably in its prime when he was still a baby (if I understand correctly)? I mean, most of us have nostalgia for the stuff that was around in our _teens_ or something like that (give or take). But Kaze devotes his time to a game and system from the time when he was either _in the womb or an infant_ 😂
As somebody already pointed out, the r4300 was just a readily available product on the shelf, that SGI was selling for very cheap; their product sheet says it's intended for printers, routers, and other low-power devices. Buying an off-the-shelf product means Nintendo could reuse existing tools, like a compiler chain, already made for that product.
The 64-bit integers could allow for 64-bit fixed-point calculations, if the developers ever wanted to use it; but the CPU lacked fixed-point instructions, so fixed-point would have to be made in software (and that kills all the speed advantage over floating-point.)
EVERYTHING ELSE from Silicon Graphics (GPU) was certainly NOT off the shelf, and cost Nintendo dearly. Nor were 4300s particularly cheap.
@@Great-Documentaries How much did the 4300s cost Nintendo?
@@dkosmaripress release said $50 for samples, $35 for 100k, and other site implied a per-unit manufacturing cost of $15, so probably a bit less than that given their expected sales: I'm guessing $20-$25.
I work in 16 bit (and lower) floating point standards.
There was no such thing as 16 bit (standard) floats until IEEE754-2008, which also brought us FMA instructions.
As for 16 bit ints and uints, that’s a mips ISA thing
i was thinking of 16 bit instructions for the general purpose registers.
Fair note on the Gamecube and Wii, those *do* have 64-bit Float Registers, so it's still 64-bit in that way. They also use an extension of the instruction set which lets you hold two 32-bit Floats in a single register, which lets you do some fancy SIMD type stuff
For people who don't know, note that *nowadays* having larger registers can improve performance by a lot, even though it was worthless in 1996. Modern computers have SIMD instructions that allow us to do math on multiple values at once in a single (ish) instruction as long as you can stuff them all into a register. So for example, you could fit four 16 bit numbers into a 64 bit register and multiply 8 numbers in one instruction. Or even more!
The newest x86 CPUs have registers as big as 512 bits, with AVX512 instructions.
Also note though that most programs don't take advantage of the instructions at all, simd is a little situational and inconvenient to program in most languages. You usually see it used by the same breed of wackadoodle programmer as Kaze who sacrifice Yoshis to the vroom gods in exchange for incomprehensible powers of compilation.
rather than going for 64-bit to market it as a Silicon Graphics super computer, they should've thought about what a new era of game graphics would actually mean. They wouldnt have gone for CD but increasing the texture buffer sizes would've been nice. THAT and a rambus that goes vroom vroom!
TMEM lives on the other chip.
Hindsight is 20/20, though I agree
Could've always had double the RAM, and double the price. RAM was the bottleneck in a lot of ways in those years.
@@davidmcgill1000 The PSX did fine with a similar amount of RAM, the problem was Nintendo/SGi going with an unified memory approach, which made coding memory use a real nightmare, as every operation bottlenecked the other, so you needed to over-optimize your code (aka see Kaze's work) to actually get to use your memory decently. Or also why N64's expansion pack (doubling total RAM) really just helped meaningfully toward the end.
The other mistake was the 9-bit bus, which would bottleneck every other single part of the whole hardware, specially with unified memory making use of the tiny bus even more frequent.
Makes me think of this as a somewhat reversed situation with the Sega Genesis/Mega Drive. The Motorola 68000 CPU had 32-bit internal registers, but a 16-bit external bus, and was practically exclusively used as a 16-bit system. By marketing logic, Sega was underselling the console's performance on that basis. Then you got the IBM PC's 8088 CPU, which was 16-bit internally (and programmed as such), but the external data bus was 8 bits, yet I'd say the IBM PC is often seen more as a 16-bit system than 8-bit.
Yes. 68000 is slow in 32-bit operations, since it was designed to compete head on with 16-bit processors and couldn't dedicate much extra logic to it. I think this r4300 is similar in 64-bit being a second class citizen and being cut down in implementation cost as far as possible, since by all reason none of the systems using that era's r4000 series processors would really need 64-bit capability for the time being, but maybe in the distant future but also when the transistor budget and speed would be higher by then.
The mixed 16/32-bit design gave the 68000 a massive advantage, in that the memory is flat 24-bit address space where you don't need to make special concessions for where in memory something is. True 16-bit processors used the more cumbersome page-offset memory access. Another advantage is forward compatibility with future true 32-bit systems, including software compatibility.
The SNES has pretty much the same situation too, its CPU is more or less just a souped up 6502 with a 16 bit register flag and 24-bit addressing. The data bus was still 8-bit which makes some 16 bit operations take almost twice as long.
the main reason why people say the ibm pc was more 16 bit than 8 bit is because a lot of pc clones liked to use the 16 bit 8086
68000 is a 32 bit processor through and through. But when people talk about 16 bit consoles they mean consoles with 16 bit like capabilities and that's all they mean.
@@thewhitefalcon8539 the 68000 had a 24 bit address bus and the internal data bus was only 16 bit
It is kind of like car companies advertising the number of cylinders in an engine. It really doesn't tell you how the performance of the car is, but more cylinders sounds better.
But more cylinders usually literally sounds better.
@@rightwingsafetysquad9872 most modern fossil cars are using only the number of cylinders required and piping in sound to the cabin in order to meet both emissions requirements and customer satisfaction, especially the higher power rated engines
Indeed *fewer* cylinders is better as it means the engine is better optimised (outside of the cheap moped and cheap car market), with 0 being the true target
@ThePlayerOfGames Your comment is invalidated due to your PFP. Pls change or refrain from sharing what could be considered information
@ThePlayerOfGames Manufacturers doing something dumb with synthetic sound does not invalidate my point. 8 or 12 cylinders almost always sound better than 4 or 6. 5 and 10 are odd, 5 cylinders always sound better than 6, and 10 is peak engine music.
Fewer cylinders do not necessarily mean an engine is smaller. The GM and Toyota 2.7L 4 cylinders are larger than many Ferrari V8s. But that doesn't matter either, a smaller engine does not at all mean it is more optimized. It just means it's smaller. First, what are you even optimizing for? Weight, manufacturing cost, fuel efficiency, noise, speed, tow capability, driver preference?
Let's say you mean fuel efficiency. The most efficient engines are the Ford 2.5L, Toyota 2.4 and 2.0L. They regularly beat engines less than half their size. The most efficient truck engine is a GM 6.2L followed by a Foed 6.7L, both regularly do better than their own 2.7L engines. In motorcycles, BMW's 2-cylinder engines are very fuel hungry, even compared to something like Honda's 6-cylinder Goldwing engine.
Reminds me of that story where a chain of burger restaurants was marketing a third-of-a-pound burger (1/3) to combat McDonald's quarter-pound burger (1/4). It didn't catch on, because people thought the quarter pound burger was larger than the third-of-a-pound burger. As everyone knows 1/3 < 1/4. Never underestimate the stupidity of people.
Yeah, it should have been marketed using ounces. McDonald's measly 4 oz. patty (before cooked) compared to the monstrous 5 1/3 oz. slab of raw goodness. I'm pretty sure some of them still sell the 1/3 pound patties and I haven't met anyone besides children that think 1/3 is less than 1/4, but visually the odd 3 next to the almighty 4 is intuitively inferior before the brain even does the math. It's similar to why advertisements have only increased in quantity throughout the years despite the general public dislike of them, planting the seed in the animal brain will produce recognition and trust. We are incredible creatures with powerful minds but there is a lot of money to be made exploiting its weaknesses and it's present in nearly every industry, sometimes it brings results that many rely on for entertainment. We've gotten used to suspending reality so the rules that govern it have changed to reflect it, the 1/3 pound burger is for the slightly overweight thinking man who could have saved money and added a few extra heartbeats by eating food cooked at home.
Note that this is likely not true
the jaguar was "64" for its 32 bit video chip + 32 bit sound chip
I’m excited to see how making 3d games was a greater offense than the the virtual boy
Ah, so making it an N32 would've been more effective.
Or call it the Nintendo Ultra 3D
I think the Sony breakup must have added fuel to the decision. Making a 32-bits CD based console would probably be the most effective solution, but since PlayStation was now on the table, they needed something to differentiate themselves. A 64-bit console was imminent (in the market perspective), so Nintendo took a shot.
50% less effective
Bingo
The reason the N64 uses a "64 bit" processor is likely because it was fastest on paper rather than being optimised for that specific application. As the video even pointed out, when you want to do double or long long maths the speed gain from having instruction support for 64 bit types is massive over needing to emulate the calculations with multiple 32 bit instructions. This is what the CPU designers would have been looking at, since a lot of *industry* applications require the use of 64 bit types, and is one of the main driving reasons that current high performance CPUs offer instruction support for 64 bit types.
The issue comes that outside basic operations, instructions on wider bit types are usually slower or use more hardware than their narrower bit type equivalents. As such when the additional precision is not required, such as for most of the logic in a *game*, you are usually better off going with the smaller types so do not see any of the benefit from the instructions for 64 bit types. The best modern example of this would be when using SIMD instructions such as AVX. With some implementations of AVX512 you could choose to perform an operation on 8x 64 bit values, or 16x 32 bit values, or even 32x 16 bit values, allowing significantly higher throughput at the cost of precision. Modern games would likely choose the 16x 32 bit values because render preparation still mostly deals with single precision floats.
This does not really hold true for regular instructions on modern CPUs though, since they are pipelined to such an extent that the wider types usually at worst have longer *latency* but the throughput remains the same, being capable of executing multiple such instructions per cycle every cycle. The main reason to still use narrower types for such logic is for memory/cache density rather than execution performance, where they can still result in significant gains.
Processor bit depths has little to do with allocteable memory. That is entirely determined by the processor data bus and accompanying support features. It was entirely possible for a x86 "32 bit" linux OS to address more than 4 GB or RAM due to the one of the x86 instruction extensions. 32 bit builds of Windows could have as well, but Microsoft choose not to support the feature widely, and instead push it as part of their x86-64 "64 bit" version of Windows. The x86-64 instructions would help process such bigger memory addresses more efficiently, but are not required to do so. This approach is how old 8 bit consoles such as the NES could access more than 256 bytes of RAM/ROM.
Considering the 64 bit instructions take about twice the time I'd guess that the CPU is basically doing 32 bit math like the software algorithms showed. A 16 bit mode would probably save a little bit of die space in some areas (registers mostly) and used more in others (cutting the 32 but registers in half) if it is implemented with two halves, otherwise if it is a real 32 bit ALU (arithmetic logic unit) then it would have saved a lot more of die space and been cheaper.
I might be a little off here, but the gist should be about right.
Isn’t the RDP 2:54 RCP connection 32 Bit? And all the pointers ( Java: Reference) would be 64 Bit and stress RamBus.
@@ArneChristianRosenfeldt Vroom vroom.
Speaking of better suited processors, an SH-3 would be an interesting choice. A 32-bit CPU through and through, but it has condensed 16-bit instructions, reducing required memory bandwidth. When N64 was developed, it would reach 100 MHz with one instruction per cycle in Dhrrystone, ending up a little slower than the r4300 that they went with.
But Hitachi wouldn't just give you the core and let you add coprocessors and periphery, you had to buy the processors from them as-is, while Nintendo used customised chips, licensing the core from MTI and letting NEC do the customisations. The way SEGA got their ideal (one could say dream) processor for the Dreamcast is by convincing automotive headunit designers in Japan that they needed the same exact chip with the same enhancements.
This sort of forward-compatible mixed design has popped up a number of times. The original 68000 is a 32-bit processor of the same complexity and intended to compete with typical 16-bit processors of the era... which would also nominally make Megadrive a 32-bit console if they chose to push that sort of angle? Which is stupid, i'm glad they hadn't done that. The 32-bit instructions are largely slow and not really intended to be used throughout, and the one day to day advantage the mixed design brings is flat addressing as opposed to segmented memory typical of 16-bit processors. I think you're right to suspect that 64-bit forward design is not being very wasteful in the MIPS core, that they kept the overhead to a minimum knowing it's going to compete in the 32-bit world, but it's also not very useful.
@@SianaGearz I would have been cool to have seen an upgraded Archimedes. Archimedes has no co-processors, so it does the pixels. With MMX instructions acting on 64 bit, this would be quite fast. Actually, ADD already works on vectors stored in 64 bit if you leave some 000s between the component (4x16 bit). Barrel-shift lets us mix and match components. I only miss a MUL instruction with built in shift and mask to manipulate brightness on RGB. 32x32->64 bit MUL could always be signed. The program would either sign extend one factor into 64 bit or not. Then we would use the same iMUL instruction for all products.
imagine being the original m64 developers and thinking "ah this code is a little rushed, but who's really gonna be looking at it......"
it's probably a lot to do with that was simply what SGI had. The MIPS cores had already migrated to 64bit some time before and the only powerful enough off the shelf budget core they had were that of the VR43-- range. To have made a 32bit only core would have taken even more development and to implement multiple 16bit more still. Given these available chips were already considerably more powerful than all used by the competitors and the system already massively undercutting the manufacturing costs of the available alternatives there was probably little need seen to change what they had.
14:02 Yep. That was me as a kid. I was a quiet kid too, but when I opened up that Christmas present...
And the N64 played some role in leading me into doing embedded software as an adult.
The 64 bit refers to the 64 bits of doritos you have to push in through the vents as good luck charms.
Here's the thing:
In the early 90's we were going from having significant limitations due to actual bitness - 8bit cpu's could access 64k ram for example to it being more important what the system as a whole could do.
Having 64bit instructions was pointless in a system that was sprite based for example, or just didn't need 64bit etc. (most N64 code is 32bit)
BUT having 64bit wide ram, and bus was critical when moving massive amounts of sprites and other graphics around.
In the "32bit" era for instance, Intel CPU's were 32bit only all the way up to the Core 2 Duo (discounting other not so great cpus that didn't do so well)
But those were totally fine, when paired with a super fast 3d accelerator card that had in some cases a 256 bit bus to ram.
One of the Atari guys had a great quote back in the day "It is a 64bit 'system', meaning where it should be 64 bit it was, and where it didn't need to be, it wasn't" or something like that.
And that is true now too. Your nice i9 whatever is 64bit, but your GPU, which needs to throw around massive amounts of data and huge computations, is likely far wider than 64bit.
So conclusion. It's 64 Bit because they could. Not because they should. And as we all know, bigger number is bigger better.
One thing to keep in mind relating to the price of the silicon in the console is that by the time you are spinning your own chips with a bunch of customizations and other stuff like that the bits and registers are not what costs money. Maybe in the r&d stage it might change things a bit but no CPU manufacturer is charging its customers per register. Likely it was 64-bit because that's what was available even tho they had no real use for the functionality. Likely Nintendo went with 64 bits just because its more bits than the competition and had no real use case for it that made much sense or gave any real advantage. I can't imagine a few less instructions here and there was really making a big difference. If anything they needed to invest in a far better memory subsystem. That alone would have unlocked much more potential for the hardware.
Nintendo: 64 bits!
Consumers: Uh...
Sony: CD playback-
Consumers: Shut up and take my money!
It worked on me as a kid. :D
"64 is at least double the more of 32 so N64 better than Playstation hurdur"
Well guess whats more useful...
I'm old enough and I can confirm marketing was strong. every kid wanted a nintendo or a ps1 and every new console was a real revolution. Great and interesting video
The R4300 in the N64 is just a commercial off the shelf part with some slight differences (pinout) to the standard NEC VR4300. NEC produced the R4300 in bulk for all sorts of low-cost applications (printers, networking, arcades), which probably made it an attractive (and inexpensive) solution over a custom developed part. The console did get a revised CPU, probably just a VR4310 CPU with better physical and electrical performance.
EVERYTHING ELSE from Silicon Graphics (GPU) was certainly NOT off the shelf, and cost Nintendo dearly. Nor were 4300s particularly cheap. Frankly it was a huge mistake to use THAT processor and a pseudo 64-bit architecture and then both starve it of RAM and not give it a CDROM for storage. Nintendo screwed up very badly.
@@Great-Documentaries Hindsight is always 20/20. At the time, nothing was set in stone, doing 3D right was hard, nobody really even knew what "doing 3D right" really meant (see e.g. the Saturn, 3DO, and nvidia's first PC GPU nv1 all using quad polygons rather than triangles). Using the expertise of a company that had lots of experience in making 3D hardware kind of made sense.
@@Great-DocumentariesEven if Nintendo went with CDs, they would’ve attempted to use a proprietary format since the company refuses to pay royalties. So CD playback may not be possible and would’ve been labeled as a pure gaming machine.
@@Great-Documentaries Going CD would've increased to BOM-costs, likely meaning they'd have to cut in other departments (like the PS1 did). In short, design is a matter of trade-offs.
@@sundhaug92 but the 16bit era already showed the potential of CDs, from the MegaCD to the PC Engine CD. Developers had so much more space to work on, and they were cheaper to produce.
Nobody has encountered an explosive daisy and lived to tell the tale.
As a digital logic developer, I would guess the register file being narrower wouldn't mean much in terms of chip cost. However it could possibly make timing closure easier which may have meant higher CPU clocks. However, since this system is memory bound due to the GPU sharing the bus, I think generally a 64 bit architecture makes sense because it means the internal data bus is 64 bit. Even though the external RAMBUS is 32 bit this still matters when moving data around the chip and getting it to the RAMBUS controller faster. In my opinion, 32 vs 64 bits is a tiny drop in the bucket as far as the overall chip design, there is so much more they could have done or not done, boiling it down to 32 vs 64 bits is like trying to compress 1000 design decision into a bool.
Agreed. I don't think it was so much that they said "we need a 64-bit system" but rather the vast majority of modern MIPS processors (at the time) were 64-bit and since the deal was with SGI, who uses MIPS for their workstations, there were very few options for 32-bit without using an outdated processor. Doesn't change the fact the marketing team "overhyped" the 64-bit aspect of the console but that's their job. The argument with 32-bit vs. 64-bit seems to be an allusion to the N64's faulty memory design. Choosing an UMA design was forward thinking but there may have been a miscalculation in the bandwidth demands of the CPU, RCP, and PI, alongside RDRAM's unusual (poor?) performance characteristics.
@@chucks9999 Yeah, there is definately some decisions around using a core that someone already has. Marketing will always just do almost whatever they want regardless.
“It was the first 64-bit console”
I believe Atari has something to say about that…
PS2 had 128 bit registers that let you do bulk operations on 4 32bit floating point values which is the very basis of vector / matrix math which is why it was such a beast for things like physics simulation
In other words it had SIMD instructions, which were the new hotness of the late 90's in PCs. Gamecube had something similar where it could treat one 64 bit floating point register as two 32 bit floating point registers.
x86 also had 128-bit registers with four 32-bit floating point in SSE in 1999. And before that, in 1997 it had MMX which were 64-bit registers that can do bitwise 64-bit integers or operations on multiple 32-bit or 16-bit or 8-bit integers.
Psyduck needs HUGS
@@Margen67 psyduck has a headache and is full of angst
Boy, I was actually thinking this very same thing. I would love to see a breakdown of how many 64-bit instructions were actually used for each major N64 game. Honestly, they should've traded the 64-bit ALU for quadruple the texture cache.
Nintendo's biggest mistake was the naming of the Wii U
The year is 2067 and we're eagerly awaiting the release of a new update video on Kaze Emanuar's game and Mega Man X: Corrupted
I was a PC gamer in the 90s. I had a Genesis for a while, but that was it.
I can’t describe to this day how strange it is to see PC gaming be so common and big. That simply wasn’t the case at the time.
What it comes down to is that video gaming at the time was still a question of how much like Nintendo’s games something could be, which meant how simple the rules could be and still be entertaining.
This is why you could build the technological masterpieces that were Doom, Quake, Half-Life, and Unreal, or the complex gameplay of Warcraft II, StarCraft, Daggerfall, Lands of Lore, or all the Space/Flight sims and so forth, but you still couldn’t compete at scale with something like Super Mario World, Zelda, or Mario 64.
Gaming was expected to be much simpler. As a rule, this is still true today, with the Switch having have become what it did even though Nintendo was in trouble in the mid 2010s prior to the switch.
The 90s was a time of technical growth, and it culminated with mp3s and the internet hitting full swing in the early 2000s. The 64 bits was just Nintendo latching on to that trend, even though their strength remained a question of pure simple gameplay.
What matters more from a technical perspective is the instruction set, which is why video games of the mid 90s were so technologically advanced relative to anything on consoles.
By the time of Unreal and Quake on PC, you could dedicate the CPU to the tasks of the gaming logic, and the GPU could be dedicated to graphical tasks because of cards like a Voodoo 1 or 2.
So you could be playing something as complex and beautiful as Freespace on a PC machine, while the best that Nintendo could do was Starfox. Obviously, the playstation closed that gap, but it was still very wide.
At the same time, Sony and Nintendo simply put out very good and entertaining games, like Final Fantasy VII, VIII, and IX, Spyro, Goldeneye, Crash bandicoot, etc.
While these games weren’t as technically advanced as Unreal, Quake II, Descent Freespace, etc, their gameplay was still more inviting. To play any of those PC games required experience a mouse and keyboard. While PS and N64 games could be very inviting and enjoyed on a couch.
The first game I ever saw to bridge the gap was obviously Halo. Not because the gameplay was very particular to the one or the other, but because the experience between the PC and console was very similar.
To me, the last thing that the PC dominated technologically was network play.
Again, all of these are questions of instruction sets, not register width. The network card on a PC has a controller with an instruction set, and the OS abstracts this for use by the game. The GPU does the same. And the CPU also has instructions that facilitates interactions of all of these from the perspective of the OS and therefore the game.
That’s why games on the PC looked more realistic and were more capable for well over a decade on 32-Bit chipsets and memory. They simply had more instruction sets. And you could upgrade these easily because they came with a motherboard specifically designed to handle changing hardware and, therefore, more instruction sets.
Depends on where you lived during the 90s. In Germany everyone played on computers and consoles were seen as exclusively for little kids by the vast majority of people playing games. That only really changed by time of the Xbox 360 I'd say. People would still call others "console kiddies" when the PS2 was at its height.
@@tannhausergate7162 yes I remember seeing this! It was actually something I noticed growing up that made me feel
Isolated hahaha. I grew up in the inner city of Los Angeles, so I didn’t have many people to talk to about PC games.
But when I would go online and do stuff like IRC, I often found myself doing seeing people talk about games in other languages like German. I could understand the English, obviously, but that was it.
It’s funny to me all these years later because I caught a lot of influences from that era, even though I grew up in a decidedly urban environment. So I was listening to rap and Spanish music, but I was also hearing really cool synth music in games like Unreal, or gleaning stuff here and there from some of the PC music making communities like Rebirth that I would come across in the other regional communities.
All these years later, I can say that my preferred music when it comes to electronic is melodic German and French house hahaha.
At the time, leaning on bit count as a marketing schematic was important. The bit wars were certainly at their end pint around this time, but it proved to be nintendos folly here. Kaze proving how much better it would have been if nintendo just hadn't leaned into it, the N64 wouldn't have had as many limitations and complications as it did.
Just like the graphic wars are on its last legs nowadays.
"bits" are an exceedingly dumb way to measure a computer's "power". Floating Point Operations Per Second is better, time to sort a list of 10,000 random integers by heap sort is better still, etc.
@@Mrcake0103 FLOPS are also a dumb way to measure stuff as efficiency in how well they are used is way more important.
Oh my god, that train level your building shown at the end is beautiful!
Everyone knows the 64 stood for the amount of milliseconds of frametime
I was around for the N64 release and the jump to 3D was insane at the time. I don't think anyone cared that it was 64 bits or even knew what it meant, we just knew number was bigger and games were 3d now. It's kind of like how people thought the 1/3lb burger was smaller than the quarter pounder (1/4lb burger), obviously bigger number means more, right? It doesn't matter if it's actually true, as long as dumb people (the average person) thinks it is a certain way.
this reminds me of how sony used the cell for the ps3
"The PS3/cell can find the cure for cancer".
LMAO
The PS2’s Emotion Engine shares something in common with Cell: Both were difficult to program for.
Just an aside to the N64 discussion here-- Castlevania: Legacy of Darkness actually used antialiasing in its Hi-res mode which equated to a horrible frame rate. You can disable the AA in Hi-Res mode using a Gameshark code and it actually improves the frame rate significantly. Its the only game so far Ive tested with such noticeable improvement.
0:31 That looks familiar!
Nice video, I added it to the page 👾
oh that's your website!! that's pretty cool. i reference it a fair bit to explain stuff to people haha
Yay another technical video. Probably said this a lot already but you made me interested in low level programming and modding. I'm still a web dev jobwise but my hobby is assembly and C for modding old games and it's thanks to you I didn't gave up and got where I am right now
I feel Ninty realized halfway through the 64 Bits weren't necessary and probably but probably continued using it as they were fairly deep in dev of Mario 64, and continued not optimizing games as much out of fear of making Mario 64 look bad in comparison.
Didn't understand the talky parts, but ate some crayons and jammed to the background music
What a great day for SM64. PannenKoek and Kaze both uploaded major videos!!
I like the way x86 handles it (before SSE/SSE2/x64) where all floating point values are loaded into a uniform format of eight 80-bit x87 registers (with 64 significant bits), which ensures that floating point remains precise, and MMX reuses the register space of x87 to store eight 64-bit vectorized registers that allow for 64-bit loads and stores as well as 64-bit bitwise and addition operations and there are operations on arrays of 32-bit, 16-bit, and 8-bit registers that can be used to perform same operation on multiple integers of the 64-bit MMX register.
Bro, you missed out by being a fetus in the run up to N64 launch. I still remember seeing an import N64 in New York a couple of months before it released here. That first glance at the Mario head attract screen will never stop being the most impressive thing I've ever seen with my eyes.
The box of the N64 alone blew my mind. Got my imagination in hyperdrive with those SM64 pics.
I always found it funny how they went back to 32 bit with the wii and gamecube.
It wasn't just consumers that assumed 64-bit would automagically be better. From what I recall, one of the N64 developers/executives later went on to state that, they had made a misstep with the N64, by building it with the assumption that making the processor as big and as fast as possible would be the best choice, that the peak 1% high of performance was key; when in fact it was increasing the "cruising speed" they should have been targeting, that is, making the average operations per second faster.
It wasn't really a corporate decision to knowingly make the console perform worse for marketing purposes. You have to keep in mind that, developers and engineers back then didn't have the firm grasp of what 3d gaming was supposed to look like, or how it was supposed to actually work. For as good as the N64 and Playstation did; it was still very much in the experimental stage of 3d graphics.
God, what a time to be alive. We really thought that the technology from Ghost in the Shell and Blade Runner was just right around the corner.
Who is this "we" you're talking about? I never thought for a second back then that a Tachikoma was right around the corner for our reality.
A misstep by management maybe who have a naive understanding of the tech. I work as a software developer so this kind of uninformed technical decision making is kind of common. Any of their hardware engineers should've known what a 64-bit processor would implicate. So much so that even Bill Gates thought that going 64-bit was pointless (he didn't quite anticipate that RAM would get incredibly big and cheap which ultimately forced the 64-bit transition on PCs).
I think the real lesson is that higher-bit hardware isn't better just for doing more bits at once, it needs to do the operation in the same/similar time to actually perform better
Well, the Nintendo 32 just doesn't roll off the tongue as well, right?
hard to imagine that the entire videogame hardware industry throughout the 1990s was dictated by one dumb marketing decision by Sega
when they released the Genesis in 1999, they could've picked any of a dozen metrics to compare it to the NES to.
they could've picked the clock speed (7.67MHz on the Genesis vs 1.79MHz on the NES)
they could've picked the amount of memory (64KB CPU RAM plus 64KB VRAM on the Genesis vs 2KB CPU RAM plus 2KB tile map on the NES)
if they wanted to pick a metric that was actually _useful_ to compare video game console performance, they could've picked the number of simultaneous sprites (80 vs 16), parallax background layers (4 versus 1), or colors (the NES could only display 64 colors total, and only 4 per background tile)
None of those metrics are particularly good measurements of how powerful a game console is, even back then, but any of them would've been a better indicator than the one they went with, which was the fucking CPU bus width.
as shown in that video, that impacts absolutely nothing with regards to development, but for marketing reasons, every other console manufacturer at the time had to jump on the hype train, leading to dumb decisions like the 64-bit CPU in the N64 and the even dumber 64-bit graphics chip in the Atari Jaguar (which, by the way, came out 3 years before the N64)
64 bit audio sample rate might as well be infinite
Depth, not rate.
Audio is sampled a certain number of times per second. Actually storing it requires quantizing those samples to some size. Usually 8-24 bits per sample, though you can certainly go lower or higher, and there have been many uses of lower depths historically. The difference between 8 and 16 bits per sample at a given sample rate is noticeable, and between 16 and 24 is hard to hear. Beyond that, you're not going to be able to tell, so it's only good for processing. 32-bit floating-point is basically the same accuracy as 24-bit, but easier to program.
IMG! Imagine the size of a 64 bit mp3!
The "64bit" designation really did come from the memory bus though. With the N64, the main processor is really the Reality Signal Processor - which did your animations, T&L, sound, and drove the rasterizer. It ran at 62.5MHz, and with 64bit registers interfacing with the unified memory bus that amounted to 500MB/s of bandwidth. Now the memory itself wasn't truly 64bit because it was serial RAMBUS technology. Instead, it was 9bit and clocked at 500 million transfers per second. However, the 9th bit was only accessible by the rasterizer - not by the RSP. It's thus not counted because it's reserved for a supporting processor, not the main one. So, the RSP has 500 million 8bit bytes of available bandwidth every second and which matches 64bits clocked at 62.5MHz. So you essentially had a 62.5MHz 'computer' with a 64bit main processor connected to 64 memory and that is where the designation comes from.
The fact the N64 was passive cool'd just mindblows me still
Everything was passive cooled back then...
console gpus didn't consume enough power back then to cause overheating.
*passively cooled
@@dacueba-games i would say by 1996, computers were starting to need a fan especially with things like the pentium and pentium pro requiring a fan and the older 486 dx 4 100 could be cooled passively, it was recommended you put a fan on there
Quick addition to the color bit depth in the beginning. Usually the bits are being labeld per color channel. Most monitors these days display 8bit per color channel (which is what's shown at 1:26 ), or if you multiply this by 3, since there are 3 color channels, we get a total of 24bits. With HDR and stuff there is a good reason to increase the bitdepth to 10bit or 12bit per color channel. 16bit per CC is completely overkill for pretty much everything.
We made this even more confusing for me is that generally when people talk about 64 bit architectures versus 32 bit architectures they generally are talking about the difference in the virtual address space size increasing beyond 4GB. Obviously the N64 has nowhere near 4GB of memory, but it DOES have virtual memory addressing! 😲
it's addressing space or instruction size. Mostly they are the same, but sometimes they are different.
Fun fact: Most 32bit x86 CPUs could address more than 4GB since the late 90ies, due to their PAE unit.
But if you were running Windows, you had to pay extra for the license to use this feature.
I agree that 64-bit was mostly hype but to be fair to the specific piece of marketing you pointed out, the speed difference is mostly attributed to cartridges vs. discs.
Danke Kaze wieder was neues Gelernt
gelernt wird hier kleingeschrieben!
Jetzt hast du noch mehr gelernt!
@@obinator9065 niemand mag Klugscheißer Obi, aber danke!!
I've done a bunch of audio design lately, and one huge shortcoming for 32bit numbers turned out to be indexing an audio clip by sample, which is needed for some playback purposes. 32bit floating point numbers come out to about 350 seconds of audio (give or take a factor of two), so all sorts of factors are limited by that, including oversampled audio.
Finally a new Kaze video dropped
Very interesting, thank you for this!
Also looking forward to your game a lot!
What about the theory, that the GPU of the N64 maybe used a 64-bit memory bus?
It's not a theory, it did. Each transfer to or from RAMBUS RDRAM is 64 bits read or write per RCP clock cycle, plus overheads for starting a new transfer. (Actually 72 bits because of the use of parity RAMs for antialiasing data, but for everything other than that part of the graphics pipeline, the extra 8 bits are invisible.)
@@Sauraen Nice try, dude. XD
@@IncognitoActivado What?
@@Sauraen antialiasing data??
@@forasago Yes, I'll spare you the details, but each pixel of the "16-bit" framebuffer is 18 bits, with 5 each for RGB and 3 for coverage, which has to do with antialiasing. Similarly each pixel of the Z buffer is 18 bits, with 14 for depth and 4 for the derivative of depth. To do this, they are using parity RAMs, which have 9 bits per byte instead of 8. Normally the extra bit is used as a checksum to detect memory corruption, but here it's used for the antialiasing.
I thought the bits of a CPU refers to the size of the ALU, how many bits it can add in one operation. For example the 68000 processor had 32-bit registers, but was a 16/32-bit CPU since it only had a 16-bit ALU.
Sony is STILL advertising the PS2 as a 128-bit machine in Astro's Playroom -_-
It has 128bit SIMD registers, eg doing 4×32bit multiplications at once. Useful when you have a lot of colors (red, green, blue, and alpha) or vertex positions (x,y,z,w - where the w is used for perspective)
@@SimonBuchanNz I thought we never actually went past 64 bits. Modern PCs are 64 bits after all, or am I wrong?
@@OliGaming-d1u depends what you mean. There's lots of different "bits* you can be. Intel chips have AVX-512 instructions, which operate on 512 bits, memory buses are up to kilobytes, and Nvidia's GPUs have "tensor cores" that operate on even more. But in general, day to day use CPUs are still occasionally in 32bit mode (though much less nowadays)
@@SimonBuchanNz Using the SIMD registers as the number of bits of one's CPU, rather than the general purpose registers of the ALU, is kinda cheating IMO. By that metric, the Pentium 1 MMX was a 64-bits CPU, the Pentium III was a 128-bits CPU and modern CPUs are 512bits. Which nobody really claims.
@@Liam3072 yeah, I would agree in a vacuum, but it does get weird as soon as you get outside of desktop CPUs. Like, what bitness is Nvidia's Ada Lovelace (40xx)? Each "SM" has hundreds of cores, including 32bit and 64bit floating point, 32bit int, matrix units supporting 8 to 64bit operands, and it's generally going to be invoking the same operation on 32 of those at once. The classic definitions of bitness fall apart here.
It's fun to notice that modern GPUs such as Nvidia RTX series only have 16 and 32 bit cores.
Almost no audio is 64bits. CDs are 16bit, as are MP3s and the audio from this UA-cam video. Your device's soundcard is very likely configured for either 16 or 24 bit. There's no reason to go any further for consumer gear, it's simply not needed. All it really does is lower the noise floor, which means that quieter sounds can be reproduced without being drowned out in noise. For 16bit that's already nearly imperceptible at normal listening volume. At 24bit, it is completely imperceptible.
Higher bitdepths are only needed in studio recording and mixing scenarios, where a raw signal coming from an instrument or microphone might be at extremely low volume. You might need to amplify a source a considerable amount afterwards, which would also amplify the noise floor. That's where working with 32 or even 64bit files can make sense. In fact, most modern audio software internally mixes in 64bit float to avoid this exact problem.
yeah agreed, same for pixel depth. i just wanted to cover a bunch of different measures and bit counts so give the system a good look.
mp3s aren't 16bit because they're lossy just like this video
@@gamecubeplayer While mp3s have an effective dynamic range of 20bits, in nearly all scenarios both the encoder and the decoder truncate that to 16bit PCM on the input/output, making mp3s 16bit (at best, not considering the possible signal degradation due to lossy compression). Same goes for AAC (in most cases).
Nintendo was kinda lucky the N64 worked out as a product. This could also have turned into the PS3 situation where developers are presented with hardware too powerful and complex for its own good. Sony's saving grace here was the tremendous success of the previous console generation. The CELL processor itself was ahead of its time and as such a disaster to develop for.
I like how 64 bits was basically something that developers only ever used on accident and the majority of the reason the expansion pak was used was due to some games breaking without it.
Like, the entire console seems to be constantly facing this type of issue.
The Dk64 rumor was false. DK64 used it to enable more dynamic lighting. Other expansion required games used it just because they needed more ram for data.
It's a shame they didn't call it the original name of "Nintendo Ultra 64" due to some copyright or trademark issue or another. Heck I found an etsy page that makes a decent "Nintendo Ultra 64" jewel modelled after the original Nintendo Power advertising for the console (minus the curved surface, unfortunately).
At least then the marketing would have been all about the "Ultra" part and not the "64" part. "Super Mario Ultra", "Kirby Ultra", and so on and so forth, just like all the "Super" names on the Super Nintendo/Famicom. Heck the massively loud "Ultra Combo!" from Killer Instinct was a not so subtle bit of marketing for that original name.
So it seems what I'm hearing is yes it's a 64 bit console, no that isn't a good thing, but they'd run a few things in that mode purely to prevent bugs from accidental use of 64 bit floats. Now, I can't say whether or not Atari and Nintendo added that mode PURELY for marketing (I'm not sure if Nintendo of Japan marketed their consoles purely on bits at all), but in terms of manufacturing, it may actually have been cheaper to design the chips that way rather than custom design a purely 32 bit SGI chip with it's own instructions. Manufacturing costs MAY have been the likely culprit of this design decision.
It's still funny to me that 64 bit processing didn't actually become something close to useful until PCs and later consoles actually started including more than 4GB of RAM.
thank goodness I can get that extra 4 GB of RAM in my N64
4 MB :---)