I heard it too. I forgot he had no sound, and I swore there was Doom's original sound in it till I saw your comment. Replayed it and sure enough, no Doom sounds, just some other background noise.
And let's not forget just how long a legacy software can have! There are grandparents around today who are younger than the software being used by big banks.
Beat me to it by several weeks. I've worked in places that run very expensive manufacturing equipment that, at its core, is running some old version of DOS, plus proprietary software to run the machine. If that old processor goes out, they will spend the money to put another processor in, not replace a piece of equipment with a six- or seven-digit price tag.
@TheRealScooterGuy The oilfield is that way too. I've seen some pre-PC stuff still running with state-of-the-art gear right next to it... it's kinda cool to see, tbh
idk what's surprising about it; all these CPUs are compatible at the instruction-set level. Most x86-64 CPUs should work just fine as long as the software only uses the common instructions and avoids newer specialized ones.
FreeDOS does not share any code with MS-DOS or older DOS systems. FreeDOS was written from scratch to support both newer and older hardware, which kinda makes it less impressive. It's still cool though.
6:16 No, it doesn't waste space; this is a myth. The chip runs microcode, and the actual microarchitecture is very different from an actual 8086. It only wastes a bit of ROM for a couple of instructions, which is almost nothing; most of the die goes to cache, and there's plenty of room for small things like a fixed ROM made of simple diodes. The old instructions can be perfectly emulated/interpreted on modern CPUs via the microcode mechanism, without wasting any extra space on logic. There's absolutely no reason to remove them. The source for this is Jim Keller; he said that.
And just as importantly, even if it wasn't, putting an entire 386 (275,000 transistors) on a modern CPU (with _billions_ of transistors) barely even qualifies as a rounding error.
I am on an Android tablet with an ARM CPU (no FPU) and a DOSBox app installed, and the emulation of the 80386/80387 CPU/FPU works well. But I miss graphics emulation of a VBE 3 BIOS. The S3 SVGA emulation has only a mixture of VBE 1.x/VBE 2 BIOS and not many mode numbers for higher resolutions. On my last DOS PC I used a Radeon 7950 PCIe card with a VBE 3 BIOS that had a mode number for widescreen 16:10, 1920x1200x32 resolution on a 28" LCD.
It is wasted space, but not in terms of silicon space. Many modern x86 instructions rely on VEX or REX prefixes to be recognized, while many of the original 1-byte 8086 instructions are all but completely deprecated (looking at you, x87). Simple coding theory would tell you that this is a wasteful encoding scheme. It wastes the abstract 'code space' for the instructions, which in turn makes the instructions longer, which wastes the precious L1 instruction cache. CISC instruction sets are often credited as having better encoding efficiency, but this waste is so significant that RISC-V with C extension would often beat x86-64 in code density.
Agreed. The x64 registers are mostly 64-bit versions of the 32-bit registers, which are 32-bit versions of the 16-bit registers. This also means the 16-bit operations will often have a 64-bit version. For the most part (there are always edge cases in computing), you can use the low 16 or 32 bits of a 64-bit register very similarly. An operation like ADD is going to behave very similarly whether it's done in 16-bit or 64-bit. Jim Keller also mentioned that they don't need to optimize every single part of the CPU, just the parts people mostly use. A Ryzen CPU will already be ridiculous for decades-old 16-bit DOS programs.
@@throwaway6478 _Barely even qualifies_ as a rounding error? I'd say proportionally, it's so small the appropriate phrase would be "doesn't even qualify". If you're dealing with several billion in total, anything less than several million might as well be free. You'd probably have to get up to the Pentium before you hit "rounding error" territory.
It was so funny to me when I found out that, DOOM being a DOS game, a modern computer technically cannot run it... and now I find out that, in fact, it can...
@@cryomaniac-tm5mg This can be done, it's possible to build logic gates in Doom and, in theory, build a full processor. People have gotten calculators working. Decino talks about it in his video on voodoo dolls, iirc.
DOS is fully usable on new hardware, but when you start getting into drivers, you run into problems. Same with old Windows versions. In most cases you won't have drivers at all, unless random people online choose to write new ones for the more commonly used parts. AC97, for example: every computer I've built within the past 30 years has had internal AC97 sound. If there isn't a DOS driver for it yet, someone should have worked on one at some point.
Only if your BIOS/UEFI firmware still has a legacy mode or the BIOS interrupts needed by DOS. Two of my laptops can't run anything before protected mode (and even then, it causes issues).
@@parad0xheart FreeDOS is the way to go. It doesn't rely on the old BIOS calls, and there are plenty of soft/emu solutions for almost anything. Still, to play around, I prefer using PCEmu to emulate hardware I had long ago. It's kinda weird 🙃
In the 80s I handled equipment that was programmed under CP/M, and it was fun. Once a REAL kid on a FreeDOS forum told me that he found it interesting that an old man wanted to know the internals of DOS.
@@AmaroqStarwind Good thing OS/2 doesn't run in 64-bit long mode, then. It's a 32-bit protected mode OS (retroactively named "legacy mode" in the AMD64 docs), with all 4 rings at your disposal. I suspect you're mixing this up with Intel's monstrous X86-S proposal, which _won't_ have IOPLs 1 and 2 at the hardware level, by "virtue" of not having a legacy mode. But nothing will stop you emulating it with a sufficiently intelligent hypervisor if you _must_ run it on one of these unnecessarily crippled CPUs.
Backwards compatibility is what makes x86 processors the definitive choice: x86 users never miss out on MMX, SSE, or SSE2, even when they get the latest processor.
@@lordofhyphens There are always ways to detect whether an extension is supported. And a compatibility hierarchy is maintained: processors supporting SSE2 implicitly support SSE, which in turn implies MMX, which in turn implies x87.
What's cool: you can easily get MS-DOS to boot into any other program by creating an Autoexec.bat file. You can even make a simple menu, to let the user select what program they'd like to use.
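A minimal AUTOEXEC.BAT menu along those lines might look like this (program names and paths are made up for illustration; CHOICE shipped with MS-DOS 6.x, and ERRORLEVEL must be tested highest-first):

```bat
@ECHO OFF
ECHO 1. WordPerfect
ECHO 2. Doom
ECHO 3. Exit to DOS prompt
CHOICE /C:123 Pick a program
IF ERRORLEVEL 3 GOTO END
IF ERRORLEVEL 2 GOTO DOOM
IF ERRORLEVEL 1 GOTO WP
:WP
C:\WP51\WP.EXE
GOTO END
:DOOM
C:\DOOM\DOOM.EXE
:END
```

On versions before CHOICE existed, people did the same thing with small third-party menu utilities or by just launching one program directly from the last line of AUTOEXEC.BAT.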
@@michigandersea3485 Crafting the perfect CONFIG.SYS and AUTOEXEC.BAT was absolutely mandatory until people really started moving to Win95 and that first MB of RAM no longer required all the babysitting.
It looks like at least two things are needed for a modern Intel- or AMD-based computer to run old MS-DOS: the firmware needs to offer a legacy BIOS mode and a legacy boot mode, and the graphics card needs a legacy mode that can be enabled. It's amazing that this backwards compatibility has been maintained for so long, especially with the planned obsolescence of so many other products.
@@demonmonsterdave Removing it doesn't generate money. Correct. But removing it would save a lot of money in design time and give a lower rate of defective chips. That being said, the reason they decided to keep it is because backward compatibility was proven to be a big boost to sales. IBM became top dog in the industry for several decades because of it.
It's how Microsoft enforced its monopoly. Intel's not a monopoly, but it can still be "anticompetitive". As long as there's a substantial segment that demands backward compatibility, it pays to cater to them. When they don't, it gives purchasers more incentive to go to another vendor. For example, if you don't need Win32, you might as well get ARM, maybe an ARM Chromebook. They would rather you not do that!
Apart from some extremely out-of-date enterprise software, pretty much everybody else is using some form of DOSBox, DOS running in a VM, or FreeDOS for this sort of thing. And even the enterprise stuff would probably be better run that way, but it can be a massive headache to make sure it works properly in the new environment and to move it over to the new setup. At which point, you might as well have just fixed the program to work with modern OSes anyway.
I'm impressed you didn't have any speed problems with these old games. I don't know when they started programming games to use timers rather than just clock cycles, but I remember a few times in the 90s when I used a Pentium to play 286 games and they were crazy fast. Cool vid!
That's mostly because John Carmack was smart enough not to assume that the processor's ability to pump out frames would be a reasonable method of timing the game. There were other games during the early '90s that would run weirdly if there was too much processing power. I remember one of the King's Quest games (I want to say IV) didn't have sound for me unless I dropped my Pentium from 75 MHz to whatever it ran at with the turbo button in the right state.
Intel will never be able to get rid of the older x86 instructions, simply because the commercial/industrial landscape won't allow it for at least another 20 years or so.
Perhaps not get rid of them completely... but they could create a series of chips without the older instruction set for datacenter/scientific use, and keep a gaming/consumer chip with full compatibility. In 20 years I hope to see RISC-V as the main architecture.
I don't know, they went UEFI-only (that is, removed CSM support) with a bit of wailing and gnashing of teeth, but no actual impact to their bottom line (which is what really matters - revealed preference and all that).
Not quite how it works, nor is there any particular reason to want to get rid of x86 instructions. There have been many "replace the PC with THIS!" projects that have come and gone through the years, and their failure has nothing to do with industrial use or what I think you mean by commercial. They have no software, no hardware, no users. Boom. They offer nothing the x86 PC doesn't already do well enough for it to be a non-issue. There are plenty of single-purpose computers out there, but the fun bit is that even they are mostly x86, because of standards and IDEs, plus of course support base, name recognition and so on and so forth, but primarily device add-on standards, IDEs, and to a large degree code portability. We humans make everything to fit hands. Hands are the standard. There are alternatives that might be better suited for some purposes, but we won't genetically select for them. Think about it.
Going UEFI-only is on the system manufacturers and the firmware they use, not on the CPU manufacturers. The CPU couldn't give less of a damn about what firmware it runs at startup, be it a traditional BIOS or UEFI, with CSM or without.
You bought a NAS to play Doom..... I approve 😆 Side note: Meanwhile Mac users have been forced into emulation to enjoy anything made before 2007. Kind of a shame, all my favorite games growing up rely on PowerPC architecture ☹️ Edit: Antonyms are fun! 😀
You might be able to shoehorn macOS into running on the IBM power series of servers that use a ppc architecture to this day, but the driver situation will likely be a nightmare
@@oggilein1 Also the Wii U is PPC! ...then again I don't think it's worth the hassle to make it run actual PC software, seeing as it's pretty much just a Wii but more Wii 😭
Once upon a time, editorials were published complaining that the latest x86 chips were "code museums" wasting much of their transistors to support backward compatibility. Today, the _vast_ majority of the complexity is the logistics of dealing with superscalar and out-of-order execution, with a surprisingly small part of it being the Execution Ports that do the actual work, and a tiny sliver being the front-end that parses the instruction set. So today, supporting old instructions is not significant; the entire chip is abstracted away from the published instruction set anyway.
Running WordStar or WordPerfect, it should easily keep up with your typing speed. No virus checking or dozens of background processes causing little pauses, which is important for music production. Software back then was usually sold outright with a key, not on annual licences.
I consider myself fortunate to have seen, and worked on, computer technology from the 70s: 8-inch floppy disks, 9.6 kbps data rates, punch cards, DEC VAX minicomputers, PC DOS, CGA graphics, OS/2, IBM super mainframes, and all the way to current state-of-the-art gaming rigs. My home is connected to gigabit fiber, which would have been unthinkable just a couple of decades ago. Such great memories!
I recall fondly my first PC. A Philips 80286 with 1MB of RAM and 14" colour display. I used the keyboard of that computer until 2023 when the left shift key finally gave up.
If it's a mechanical keyboard, the switch can be replaced separately. In 286 times, most keyboards were expensive and mechanical. Just don't throw away a mechanical keyboard; it can almost always be fixed to (almost) 100% condition.
"Quite literally everyone and his dog had an IBM PC" Um...most people from 1982-1992 did not have a home computer and had little reason to spend the money on one. In 1990 for instance, only 15% of households had a computer.
Yeah, and most people who did have home computers during that time had 8-bit machines from companies like Commodore, Tandy, Atari, or Apple. Or TI if they were unlucky. That said, if you had a computer at work during that time period, it was probably a PC.
@@jeffspaulding9834 28.1% of the adult population, reported they use a computer either at home, at work, or at school in 1989. As I remember, a lot of people at the time, who had PCs at work, bought a "clone" and pirated a lot of the software, particularly word processors and spreadsheets. I'm pretty sure Microsoft made this pat of their marketing plan. There was a lot less hassle about copying apps back then, and I think it was on purpose, to achieve monopoly power.
@@squirlmy An IBM PC back in those days would set you back at least a couple grand (in 80s money) for a minimal system. A "complete" system would cost you closer to $4-6k. Clones were usually cheaper, but the price point was still way above what you'd see on an 8-bit model like the Commodore. Back in the 80s I knew several people with 8-bit machines, but only two with PCs (my family had one for my step father's work). Software was readily available at your nearest mall, and the PC section was fairly small (and mostly focused on business apps). Magazines like BYTE catered mostly to the 8-bit user. It didn't help that the graphics on the PC were horrible - tiny, expensive, usually monochrome monitors. Many of the 8-bit machines had color and could use a TV for a monitor. Because of this, you saw a ton of games for the 8-bit machines, which also appealed to the home user. It really wasn't until the 486 era that the PC really took off for home users. By that time the prices had come down, graphics had improved, more games were available, and more people were using them at work with programs that required more than 64k. (Edit: typo)
@5:30 You're correct that the built-in BASIC is not available like it was back on those machines, but don't forget that you can (almost) always grab BASICA and BASICB and run those old basic programs by dropping those onto the system and using them to run 'em. On that note, QBASIC won't work with those much older basic applications, but BASICA and BASICB usually do the job when run with an OS that can run said BASICA and BASICB.
My PC might still be able to run DOS, but it also sadly fails at Windows 95, as it's way too fast, which makes the (unpatched) security code fail horribly. 😉
@@randomgamingin144p It'd be a waste of money and kind of silly to do that, but it sounds like exactly the kind of thing I would do just for the fun of it. I once forced a 2006-ish laptop to run Windows 98 including adding a compatible soundcard on a breakout board from its ExpressCard slot and wired in a voltage regulator to pump the otherwise missing 12V into the bus. It did work! And it was completely stupid, but that's beside the point. Realistically though, if you really want to run Windows 95, there's practically unlimited choice of various Pentium clone machines on eBay. And even at the price levels, it might be cheaper to just buy period-correct hardware than one of these specialized adapters plus requisite hardware. Whereas you could spend somewhere around $75-$200 and just get a whole period system suited for the task directly. And be way less "messy." Or, of course, just run a VM and be done with it.
Fascinating video. I just booted Windows 95 SE on my 24-core dual-Xeon workstation using an external USB DVD drive with the original disk. My 4K/60fps GPU came up in 640x480. A USB wired keyboard and a 2.4G wireless mouse were fine. [My wireless keyboard was Bluetooth.] I also plugged in a USB Sound Blaster II clone, which gave me stereo sound. A USB FDD also worked. It doesn't have serial or parallel (printer) ports for me to test. No networking. No internal audio. No audio over HDMI. What fun.
I get this all the time with my old computers using Windows 98/98SE/2000. These operating systems *hate* wireless keyboards. I've had computers not boot with wireless devices, if the USB receiver is in the port. Many allow for wireless keyboard usage if you insert the thing when in windows, just not on startup.
@@warrax111 With good reason, most people couldn't play Doom if the FPS was too high, and a benchmark that is capped would be pretty useless once computers got past a certain point in terms of power.
@@SmallSpoonBrigade also frame cap is sparing electricity, as computer (and later 3D graphic card) doesn't have to compute too much, but can rest between frames. So it's even reasonable to introduce frame cap, rather then let it compute 300 frames every second.
I love your vid! Gives me real old-school vibes. This is also quite fascinating to me. I remember running Windows 98 SE on a 2012 Ivy Bridge laptop, but I would never have thought that something as recent as a 2016 Intel CPU would be able to run an operating system as old as DOS!
The x86 _does not_ have backwards compatibility with the 8080 and its derivatives (the 8085 and Z80). Instead, it has a sufficiently similar opcode set that relatively simple binary translation is _almost_ the entirety of what's needed to move from 8080 to x86.
Interestingly enough, this is true with almost any modern RISC-style core as well. The complex address modes need to be translated to more than one instruction, but overall it's a very simple, mechanical process. 8080/Z80 code runs natively very well after binary translation, since it typically fits in the L1 cache, along with the data. Getting 1GIPS equivalent Z80 instruction rates on current x64 is common. Not that there's a lot of utility for it, but hey, old Mandelbrot programs run at 60FPS+ :)
True for code. However, the hardware environment was backward compatible to a large extent and used the same peripheral chips. NOTHING was backward compatible with the 8008/4004!
Nice video! If my own memory isn't faulty, BASIC was in the DOS directory, which should've been in the path, so not in ROM. I'm not sure if it came with PC-DOS, but I'm pretty sure it came with most versions of MS-DOS. Heck, even the C=64 came with a version of Microsoft BASIC. You forgot Cyrix as a competitor, but I suppose that's a subject for another video.
Sorry but your memory is a bit off. The original IBM PC and PC/XT had real ROM chips. I still have the HEX file of its contents. However most every PC "clone" had BASIC as a COM file - I think for licensing reasons. Cyrix chips were NOT QUITE X86 compatible. I used to build PCs for a living and it was always necessary to include minor patches to keep Windows stable - especially at clock speeds over 66MHz. But they were cheap !
@@adrianandrews2254 It wouldn't surprise me if my memory were a bit off. Thanks for the correction. I also used to build systems, and Cyrix chips were very cheap. I do remember they weren't 100% compatible, and yes, they did present issues at times. I think that's kind of why they disappeared. They were cheap, but also sucked at the same time. AMD struck that sweet spot of a cheaper chip with compatibility.
I am not sure if it's possible to start with a UEFI BIOS in 64-bit graphics mode, load an IBM-compatible BIOS from a file into memory, and then switch to 16-bit text mode to boot MS-DOS. I don't know if an IBM-compatible BIOS could drive the mainboard functions of those mainboards without UEFI. The Intel i7 architecture and the graphics cards can switch to 16-bit and text mode; the problem is the missing IBM-compatible BIOS.
Some instructions have been removed; for example, Intel dropped AVX-512 support over time (because they couldn't fit it into their efficiency cores), while AMD made their equivalent cores more efficient by removing cache etc.
The earliest hybrid Intel processors supported AVX-512 on the performance cores but not the E-cores. I think it had to be disabled/removed due to complications with the OS-level scheduler lacking the sophistication to run threads that require AVX-512 on P-cores only.
He seems blissfully unaware that PC-DOS did not need BASIC on disk because it was built-in on IBM's (ONLY). Clones used MS-DOS, which was NOT in ROM and had GW-BASIC (Gee Whiz Basic) on disk. IBM did offer BASICA (advanced BASIC) on disk. Any PC compatible that HAD put IBM's BASIC in ROM would have been sued instantly. You can still be considered 100% compatible without having BASIC in ROM (which was pretty useless anyway since you need a DOS to store programs!). 🤡🤡🤡
You can write BASIC programs using just the one supplied in ROM, but you cannot save them. To do that, you need BASICA from PC-DOS, which added the ability to save programs. PS: ROM BASIC was also what an IBM PC dropped into when no operating system was found.
Hi. Great video! Another use for single board computers like this is to run lightweight Linux distributions such as DietPi. You could even use Batocera as well!
The cool kids got PCs with 8086, rather than the bottlenecked 8088. I had an Amstrad[!] with an 8086 and the full 640K of RAM. The good old days. And any game that relies on the intrinsic CPU speed for timing (lazy programmers!) will be quite interesting to run.
It wasn't lazy programmers... well, it kind of was; but there was a very good reason programmers chose to use loops rather than the 8253: the 8253 wasn't guaranteed to be there. In fact, they couldn't count on any hardware being the same across systems back then. Standards? There were none. No APIs. Don't forget this is an era where your version of DOS had to be customized by the OEM, because standards for what a motherboard was didn't exist. It was necessity. Using the 8253 would have limited the software to just IBM PCs and anyone who directly cloned them. By using lowest-common-denominator methods, they ensured it worked on more hardware.
@@dewdude IBM didn't even name their ISA bus. Compaq and a group of clone makers retroactively named it later, along with their future plans for EISA (which soon gave way to PCI).
1:05 "Making up over 80% of all new home computers sold by 1989": well, maybe in the US, but in the UK the ZX Spectrum basically had the most home* market share during the 80s, with the C64 coming 2nd, the Atari STe/Amiga 500 3rd, and the PC bringing up the rear. Apple really WAS NOT A THING in the UK until 1997. *Schools tended to have BBC Micros/Acorn Archimedes before changing to PCs in the early-to-mid 90s.
DOS was an ideal OS for gaming precisely because it did so _little_ to help the programmer. It offered exactly ZERO aid for things like sound, graphics, etc. The BIOS helped somewhat, but no professionally written game would rely on BIOS routines for graphics. DOS's strength was that it didn't get (much) in the way of the programmer, if we exclude the horribly complicated memory management standards.
Of course ARM CPUs have a long and storied heritage of their own, and if you get the right ARM single-board computer you can even run the original RISC OS, originally designed by Acorn Computers for use in their first-generation ARM workstations.
None of the mainstream ARM chips can run ARM1 or ARM2 code, so the RISC OS binaries from Acorn won't work. Of course, if you have a binary built for something newer, it will work. But original RISC OS runs on original hardware and emulators only.
I ran DOS on my laptop on bare metal back in 2014. I very, very quickly discovered that I liked having USB drivers. Frankly, DOSBox on top of something modern will give you a better experience. Like, I can use my USB MIDI keyboard (which DOS would have no drivers for) as an audio output device for Doom. Or a virtual synth like FluidSynth. If you want the games of yesteryear to survive, pin your hopes on emulation. It will give you a much better experience, I swear…
My very first computer was a Heathkit that my father and I built together. It ran CP/M (MS-DOS was really a copy of CP/M). When I was in university, for a short time, I worked in IT (I am no IT expert). We had a UNIX machine. It was command line and a great computer! I moved to Macintosh which, to this day, still uses UNIX (I am sad they took away the ability to log in as root). But on a current Mac you can still get to the command line, if you really want, and bypass the GUI. (I think the main reason Mac stopped allowing people to log in as root was because idiots could use the rm -r command and, with no checks, would just erase everything!) But today's UNIX will still run every old UNIX program, no questions asked.
There are LGA1700 platform boards that take Alder Lake or Raptor Lake cpus that have a PCI slot. And it's possible on some motherboards to break out a fully functional ISA bus over the TPM connector. So you know what that means right?
@@AshBashVids The motherboard still has to support DDMA if you want PCM sound over the PCI bus under DOS. A lot of boards with PCI slots don't, but then you find one that does and it's golden; usually industrial boards these days.
I think it's done through the LPC header, if the board is old enough, not the TPM. But even so, at some point in time the way DMA and IRQs are handled by these ports (a.k.a. the chipset) fundamentally changed, so DOS sound etc. cannot work even if you managed to break out an ISA slot. You cannot use PCI because it works completely differently.
I am 50 years old. It is so fun to show small kids that their new PC can still load DOS; they're surprised at what an ancient being is summoned LOL! They think it's black magic.
This is a big reason why I'm a bit of an x86 fanboy. Yes, the syntax for x86 asm is bad, but it's rare to write x86 by hand anymore. The compatibility is beautiful. I often bring up embedded ARM systems (usually Zynq or ZynqMP), and specifying devices in device trees and configuring U-Boot is tiresome. This is why, for instance, with OpenWrt you see one build for x86 and 50 others for variants of ARM/MIPS devices.
Yeah, compatibility is truly its superpower, and by the looks of it AMD is getting even more serious about power efficiency with Strix and Intel is with Lunar Lake. There's lots of doomsaying about x86, but if history has proven anything, it is one of the most adaptable architectures ever, and clever engineers always find a way to break the mold with it.
Technically most CSM legacy support is emulation; API emulation, that is. They use a combination of small stubs with virtualization, IOMMU and/or ACPI functions to create a classic interrupt BIOS that calls the newer UEFI/ACPI functions. Because UEFI functions must run in either 32-bit or 64-bit mode, they need to use at minimum virtual 8086 mode for a portion of their CPU-side function. It doesn't fully hide itself: if a DOS program traces down the calls, it will see how they execute. It honestly won't care and will consider it like any other 32-bit extended BIOS. This basically puts it more in the family of the Wine Win32/Win64 stack or the CP/M-for-DOS stacks than native stacks like WinPR and MinGW.
USB floppy drives are bootable so long as the device can boot from that USB port, and WAAAAAY faster than the floppy interface. DOOM is what I still use a DOS P4-3GHz for. With ISA slots.
haven't you run into incompatibilities with the usb floppy drives? And more specifically modern ones, not the few IBM and other big makers put out back in the day. I was really shocked at the poor construction and lack of support. Maybe if all you do is boot FreeDOS it's all right. Actually moving data and updating firmware is a nightmare.
@@sundhaug92 They should have done it years ago... there is a ton of legacy stuff that could be removed, and it would also simplify the boot process and the operating systems (basically because the OS wouldn't have to start from 16-bit real mode and set up all the machinery to get to 64-bit mode).
@@alerighiyeah, removing that tiny bit of code that is already written and working will save so much. I also bet that removing the old instructions will save hundreds of transistors.
@@alerighi You'd be surprised how many niche things still depend on 32-bit kernels. Also, modern kernels don't have to start in 16-bit, either because the bootloader gets them up to that point or because of UEFI
@@harrkev Removing old instructions won't save transistors, because they're just microcode in ROM; it'll save a couple of diodes, so there's no reason to do it. They could remove real mode, though, since that can be emulated in user mode by the operating system.
Intel is quietly working to drop 16-bit and 32-bit compatibility from future processors, which will let them squeeze out more performance in future chips. So, this is a feature that probably won't last another decade.
@@aligokcen5908 For an 80286 mainboard I used a floppy disk with a BIOS setup program to enable devices like the HDD, because the mainboard had no setup in its BIOS chip.
The biggest issue is really the driver situation. You may have gadgets like CSM, SATA compatibility mode and SBEMU, but your first hurdle is probably lack of support for NVMe SSDs - only the first mainstream Samsung drives shipped with a legacy option ROM to allow booting from them. Thankfully USB sticks are well supported instead.
Intel microprocessors start in "real mode" at boot. Windows then switches to "protected mode", which is its working environment. DOS stays in "real mode" (DOS was made before Intel had any idea about "protected mode"). In essence, DOS can still run on i9 computers, but it's confined to 1 MB of memory. Additional drivers can be used to reach the rest of the memory space, but always via a swap mechanism.
Itanium lost not because it lacked compatibility with earlier processors but because its theoretical power never showed up in real life. Itanium required new software tools, and developers targeting it needed to understand the architecture, which meant a dozen years of development. It also became clear pretty quickly that the die area of the part of the chip working simultaneously on the same code matters: it's better to have an asynchronous chip doing small things in parallel than a more organised one that does its job better in theory but needs more space to do it in practice. Some of the evolution of multi-core and HT came out of the work on Itanium. I'm not fluent enough in English, and I read about the problem with the size of the gate structures needed to maintain Itanium's organisation decades ago, so my attempt to summarise it here may not be great. More or less, the gist of what I read was that RISC and Itanium had one major problem: the die area needed to work in parallel. Intel, by contrast, could spread operations more flexibly and sync from time to time, ending up with a slightly chaotic but easier-to-maintain layout of gates.
To clarify, Itanium was built around the idea that certain optimizations could be made by the compiler in order to achieve the full performance of the processor. The problem was that those optimizations turned out to be impossible, so the chip was never able to reach its full potential. It never compared well to other 64 bit architectures such as SPARC64, POWER, or Alpha. The only reason HP adopted it was because they had stopped developing PA-RISC to help Intel develop the Itanium; by the time they realized it wasn't a great chip, it was too late to turn back. But what really killed it was the x86. Itanium was a great idea in 1994 when they first started working on it. By the time Itanium came out, x86 was eating into the high performance computing and business markets that used to be dominated by IBM, DEC, Sun, and HP. You didn't need a $20k workstation to run CAD anymore and you didn't need a $200k server to run your business logic. Commercial UNIX was dying, its business customers fleeing to Windows and its Internet customers fleeing to Linux. The only use HP found for it was for their legacy HP-UX customers - hardly anyone was developing new systems on HP-UX. Intel had made the x86 too good, and AMD brought it into the 64 bit world.
Trust me. The code that was written back in the 1960s does not waste as much space as you think. People back then didn't have the luxury of abundant RAM, storage, and processing power. That forced them to make their software incredibly optimized, and their quality of code is unmatched by today's standards. What is a problem, though, is modern hardware and unoptimized games and programs whose developers don't bother optimizing because "eh, it runs well on my $10,000 gaming rig". No one makes their own software anymore. They just pick the easiest or most popular option, no matter how abstract it is, and put a more expensive GPU/CPU on their software requirements page.
Actually I was mostly running ARM in the 1990s, after running mostly 6502 in the 1980s. The ARM back then was mostly backwards compatible with the 6502 in a number of ways, largely because of how it came about, though the RISC structure of the ARM did eventually make this somewhat harder to do. Of course ARM has come a long way since those days... So NO, not all of us went overboard for the 8086 back then.
"ARM back then was mostly backwards compatible with the 6502" That's simply not true. Nothing about ARM 1 and ARM 2, and in fact any later ARM architecture, had anything to do with 6502. ARM 1 could run 6502 code very well after binary translation, but it was a waste of a 32-bit processor for the most part. Binary translators with good peephole optimizers could detect common 6502 instruction sequences like long addition/subtraction and translate them to fewer ARM instructions. Today you can do excellent binary translation from pretty much any retro processor to x64, RV32/64 and A32 (modern "desktop" ARM) instruction set. That's because there's a ton of very expensive software that is open source, like theorem provers.
The Celeron N3450 is anything but new. Is CSM mode still included in new motherboards? I modded my laptop's BIOS to unlock all options and it doesn't seem to have CSM support (it's an 11th gen Intel).
My Asus laptop doesn't have CSM support, but I do have an old 2nd-gen Core i3 laptop whose UEFI is locked to BIOS-only mode (it shipped with Win7) and an 8th-gen Core i3 laptop whose UEFI supports CSM. If someone made an EFI bootloader that emulated CSM/BIOS, it would allow you to boot older OSes on newer hardware.
@@Flopster101 I remember many moons ago using TianoCore to emulate UEFI on my BIOS-only machines. Now I have to use Clover to emulate BIOS on my UEFI-only machines. 🤣
Great stuff. Funny bit about the missing floppy. 🤣 Strictly speaking, saying DOS provides a HAL isn't correct. A HAL is an internal OS layer meant to support porting it across more than one processor architecture. DOS, of course, is a one-trick pony. Windows NT was the first MS OS with a HAL.
IA-64's x86 compatibility was 32-bit only, never a huge priority, and didn't provide full compatibility. Intel at the time thought native IA-64 code would quickly obsolete x86, so spending a lot of engineering and die area on x86 compatibility would have been wrong. Meanwhile AMD did their x86-64 thing and the market decided. One of the early adopters of x86-64 was the Linux community, which at the time already supported several other 64-bit architectures including Alpha, SPARC64, MIPS, and PPC64, so x86-64 was really just a formality. IA-64 had some niche success in supercomputing and highly available systems (IA-64 did support master-checker, afair) - not enough to tip the scale in favor of Intel.
Don't forget Itanium was created by Hewlett-Packard; Intel only joined development later. And HP took a pretty big hit financially from the failure. I don't think the Linux community was even a factor, because they ported to everything - they would adapt to any 64-bit tech, as you note yourself. A big factor was cost. It cost the market (as a whole) a lot less to hold on to Win32 and be discretionary about what software to port (and/or buy) to 64-bit. Yes, the market decided, but it was short-term (maybe even quarterly) budgets vs. upfront costs for moving to an entirely new architecture.
@@squirlmy The Linux community in that case was a whole bunch of companies including HP, Intel, SGI and others, plus various Linux vendors - the usual suspects, including smaller ones such as Turbolinux - and contractors. SGI was among those early adopters because they wanted to get rid of their in-house processor development. Which arguably is what IA-64 did best - it resulted in the end of the Alpha, HPPA aka PA-RISC, and the high-end variants of MIPS. SGI's traditional customer base controls their own software, which was already running on 64-bit, so the 32-bit compatibility was never of much interest. In fact, due to memory sizes, 32-bit was no practical option even back then. Afair the first SGI Itanium system was 64-bit only. HP's customer base was probably to a large degree identical, so I don't think they had much interest in 32-bit either. Intel was making a bunch of IA-64 systems such as the Big Sur, but I'm not sure they were ever planning to become a big mainboard or system vendor. Microsoft and IA-64 was an entirely different matter. Yes, the market did decide. Itanic took Intel a lot longer to finish than they were planning for. At the same time Itanium 2 aka McKinley was looking like it would be faster than promised and arrive earlier, so SGI eventually decided to cancel their Merced product in favor of McKinley - which, however, meant their first IA-64 product was delayed yet again. Meanwhile AMD's x86-64 was looking much more promising and became available on the mass market at interesting prices. This left a few niche markets for McKinley-based systems. Floating-point-heavy applications gained tremendously. Also, initially, systems with massive I/O and memory were better off with McKinley than x86-64, since the heavy hitters HP and SGI had been working on them. Another niche was HP's Tandem division. Tandem was building fault-tolerant systems.
The sort that doesn't crash just because you yank out a few parts or it gets hit by the Death Star. Tandem was using MIPS; an attempted migration to Alpha had failed, so eventually they migrated to Itanium. Meanwhile there was no support in x86-32/x86-64 for that sort of fault tolerance, so they were stuck with Itanium for a long time. These were niches. x86-64 was meanwhile selling like hotcakes. By the time Windows Vista - the first mainstream version of Windows with 64-bit support out of the box - hit the market, x86-64 was already widespread. What also helped sell x86-64 was that it was a better architecture design than i386 aka IA-32. For every other architecture, 64-bit code ran slower than 32-bit because it was bigger, resulting in a lower cache hit rate. Not in the case of x86-64. Meanwhile Intel themselves appeared to have lost interest in IA-64 a bit. I think one reason was that IA-64's EPIC (remember, EPIC is the new VLIW...) was hard to generate efficient code for. That issue held compilers back; the achieved performance of compiled code was well below what the hardware was actually capable of. And some features of IA-64, such as giant register sets and register windows, made for inherently large dies and complicated designs without achieving the original design goals. Then the unimaginable happened: Intel took an x86-64 license, had to catch up with AMD's x86-64 implementation, and at the same time had that boat anchor IA-64 tied around their neck, which they couldn't drop because they already had customers.
Actually, somewhere around the Nehalem generation and with the availability of EFI 1.0 (I have a Xeon workstation from 2008 with EFI 1.0, the predecessor to the UEFI 2.0 we all use today), PC CPUs effectively stopped spending any time in 16-bit mode at startup - the firmware switches to full 64-bit mode almost immediately. You could force them back into 16-bit mode, but that was about it. Besides that, all that old stuff - 16-bit, 32-bit, protected mode, real mode, etc. - uses around 100,000 bits of microcode inside a current CPU. As modern CPUs have transistors in the tens of billions, this is less than 0.01% of the die space. And btw, around the Intel 10th generation, support for CSM/16-bit was dropped from pretty much every BIOS/UEFI. Finding boards that still support it is... hard.
I would love to see compilers do more of the work than the CPU. Stuff like branch prediction should be coded, tested, registers loaded based on results, and then executed. You can put other commands between these. Also, the very time-consuming indirect load should be multiple commands. These would decrease the complexity of the pipeline. There will still be latencies on cache misses, but the proper set of commands would reduce wait cycles.
I think they tried that with Itanium; it didn't go well. The compiler had to calculate the latencies precisely and put NOPs in the proper places, and even do the branch prediction.
I don't think backwards compatibility is necessarily a bad thing. Remember, when Intel tried to introduce an entirely new architecture (Itanium), it was a total flop. But ARM is not a new architecture either. In fact, you can do this on a Raspberry Pi: Install RISC OS, and you can play games made for the Acorn Archimedes on bare metal.
Apple gave up hardware backward compatibility when they switched from 68000 to PPC, from PPC to x86_64, and from x86_64 to ARM. They gave up software backward compatibility when they switched from classic Mac OS to Mac OS X, and partially when they dropped support for 32-bit software in macOS altogether. That's one of the things I truly love about Apple. They step forward, cut off old tails, get rid of legacy crap. There's always a smooth transition (classic Mac OS could run on Mac OS X for quite a while, PPC code could run on x86_64 CPUs, and x86_64 code can currently run on ARM CPUs), but after a while they just move on. If you truly want to run 45-year-old software, just use an emulator. Not only can it emulate any hardware you like, including hardware you haven't been able to buy for ages, it can also fix issues that hardware had; it can make software run even better than on the original hardware. Some games today are way more playable in an emulator than they ever were on true hardware. Some people may think cutting off backward compatibility is a bad thing, but they would probably not think that way if they knew that x86 CPUs could be faster, more efficient, and cheaper today, and Windows could be a lot smaller, more stable, and less resource-hungry, if they only dropped a bit of backward compatibility at least once every decade. You are paying a high price for compatibility that 99.999% of all users will never have any use for. Quite often the answer to "Can we do that?" is "Sure, we could, and it would be awesome, but it would also break backward compatibility, so unfortunately no, we cannot do that". That happens in software dev, that happens in hardware design, and it happens all the time.
"That's one of the things I truly love about Apple. They step forwards, cut off old tails, get rid of legacy crap. " ... and their market share showed the result of such decision 😏
@@gorilladisco9108 Their market share has been constantly rising for the last 20 years, and the main reasons more people don't buy Apple products are the price and the fact that they cannot run their Windows programs on them. That has nothing to do with backward compatibility: even if Apple were backward compatible with their systems of 30 years ago, people still could not run their Windows software on them.
It's good that Intel has such backwards compatibility. And with increasingly faster and more powerful processors, there really shouldn't be *that* much wasted space for such compatibility. And it's better than Microsoft's lack of backwards compatibility for DOS and 16-bit Windows programs.
The backwards compatibility is not just for the sake of running DOS. 16-bit real mode is also how the CPU and BIOS communicate with each other; the BIOS then builds the tables that Windows will read to grab the details of all attached devices, PCI cards, etc. A lot of hardware is best talked to in 16-bit real mode for simplicity. During boot-up, the two-wire serial interface near the CMOS can be used to issue commands to the BIOS while in 16-bit real mode, meaning you can adjust registers and modify read/writable bits. You can also do it in 64-bit mode, but the bitness changes and so do a lot of the addresses. I like to poke around with hardware-probing tools like RW Everything and the like. It's also why most drivers are written in low-level languages: to maximize speed and efficiency when polling hardware devices along the PCI/USB/SATA bus. You could essentially run 16-bit programs by pointing to them, or by switching hardware registers to the required CPU execution model - meaning you could theoretically run programs directly from the BIOS. I've never tried it, but it's interesting to experiment with. You can also probe the clock generator on a lot of motherboards and get back raw hexadecimal data with the details of the clock gen, allowing you to overclock and change normally inaccessible settings. Sometimes chips are internally write-protected, and there's no way to modify the signals without messing with the continuity of the wire leading to the write-protect pin on the chip.
Bro, it's not backwards compatibility - it's x86 architecture all the way through. It will even run 8-bit applications if you boot to DOS. But those 8-bit and some earlier 16-bit games all run way too fast.
That's why we have ubiquitous (and reliable) slowdown TSRs these days. If you gobble up 9,999 out of 10,000 CPU cycles with a busy loop, even a firebreathing 7950X will do a convincing impression of a 286. 🤣
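A slowdown TSR is conceptually just a duty cycle. As a toy model in Python (the function name and numbers are mine, purely illustrative):

```python
def useful_ticks(total_ticks: int, duty: int = 1, period: int = 10_000) -> int:
    """Model of a slowdown TSR: of every `period` CPU ticks, the hogging
    busy-loop leaves only `duty` ticks for the actual program."""
    return sum(1 for t in range(total_ticks) if t % period < duty)

# A CPU burning 9,999 of every 10,000 cycles in the busy loop behaves,
# as far as the program can tell, like a machine 10,000x slower:
print(useful_ticks(1_000_000))  # 100
```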
The different operation modes of the x86 microprocessor have different instruction encodings. The same "mov ax, bx" will be translated to a different bit pattern by the compiler/assembler depending on the intended operation mode. As for old games running too fast: the reason is that they use software counters to time their movements. With computers running faster and faster, the 1990s kids had a challenge: "play Tetris on a new computer". DOSBox has speed-up and slow-down keys for exactly that purpose.
The 8086 was assembly code and register compatible with the 8080 and 8008. It was NOT binary opcode compatible with them. The register set was a superset of the 8080 and 8008.
Naming a computer Zima is probably just going to have people associate it more with the failed alcoholic beverage more than I guess the intended cool operating temperature of a Russian winter. :p
"new" is questionable since you're running a 2016 chip. that said Skylake (which I think the N3450's Goldmont was based on) was the first CPU to begin trashing backwards compatibility. maybe Goldmont kept more of what Skylake dumped.
You may think you chose the 'No sound' option for Doom, but I could still hear every sound effect perfectly....
I can hear the Doom sounds even while I'm reading your comment. :-)
I heard it too. I forgot he had no sound, and I swore there was Doom's original sound in it till I saw your comment. Replayed it and sure enough, no Doom sounds, just some other background noise.
@@chrism3784 me too, same thing.
We all have played Doom so often that our brains are filling in the missing sound effects and music. ;-)
I don't know about you guys, but I heard Mick Gordon's Meathook playing in the background
memes aside, the ability to run legacy software is key to a lot of infrastructure
Most have 0 clue...😂
And let's not forget just how long a life legacy software can have! There are grandparents around today who are younger than the software being used by big banks.
I'm finding Starglider a bit hard to play, though...
Beat me to it by several weeks. I've worked in places that run very expensive manufacturing equipment that, at its core, is running some old version of DOS, plus proprietary software to run the machine. If that old processor goes out, they will spend the money to put another processor in, not replace a piece of equipment with a six- or seven-digit price tag.
@TheRealScooterGuy The oilfield is that way also. I've seen some pre-PC stuff still running right next to state-of-the-art gear... it's kinda cool to see, tbh.
I once managed to run FreeDOS on a Xeon E-2224 machine just for fun. So yeah, it’s totally possible
idk what's surprising about it
all these CPUs are compatible with their instruction sets
most x86-64 CPUs should work just fine as long as they only use the common instructions and avoid newer specialized instructions
And a lot of laptops that are sold without a windows license come with a functional freedos installed
@agladkyi FreeDOS does have more modern extended memory drivers too?
FreeDOS does not share any code with MS-DOS or older DOS systems. FreeDOS was written from scratch to support both newer and older hardware, which kinda makes it less impressive. It's still cool though.
FreeDOS is not an old OS, and of course it is not MS-DOS
6:16 no, it doesn't waste space, this is a myth.
Because the chip runs microcode, the actual microarchitecture is very different from an actual 8086. It only wastes a bit of ROM for a couple of instructions - that's almost nothing. Most of the die space goes to cache, and there's plenty of room for small things like a fixed ROM made out of simple diodes.
The old instructions can be perfectly emulated/interpreted by newer CPUs without wasting any extra space on logic, via the microcode mechanism. There's absolutely no reason to remove them.
The source for this is Jim Keller, he said that.
And just as importantly, even if it wasn't, putting an entire 386 (275,000 transistors) on a modern CPU (with _billions_ of transistors) barely even qualifies as a rounding error.
I am on an Android tablet with an ARM CPU (no FPU) and a DosBox app installed, and the emulation of the 80386/80387 CPU/FPU works well. But I miss emulation of a VBE 3 graphics BIOS. The S3 SVGA emulation has only a mixture of VBE 1.x/VBE 2 and not many mode numbers for higher resolutions. On my last DOS PC I used a Radeon 7950 PCIe card whose VBE 3 BIOS had a mode number for widescreen 16:10 aspect ratio at 1920x1200x32 on a 28" LCD.
It is wasted space, but not in terms of silicon space. Many modern x86 instructions rely on VEX or REX prefixes to be recognized, while many of the original 1-byte 8086 instructions are all but completely deprecated (looking at you, x87). Simple coding theory would tell you that this is a wasteful encoding scheme. It wastes the abstract 'code space' for the instructions, which in turn makes the instructions longer, which wastes the precious L1 instruction cache. CISC instruction sets are often credited as having better encoding efficiency, but this waste is so significant that RISC-V with C extension would often beat x86-64 in code density.
Agreed. The x64 registers are mostly 64-bit versions of the 32-bit registers, which are 32-bit versions of the 16-bit registers. This also means the 16-bit operations will often have a 64-bit version. For the most part (there are always edge cases in computing), you can use the low 16 or 32 bits of a 64-bit register very similarly. The operation of something like ADD is going to be very similar whether it's done in 16-bit or 64-bit.
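That register nesting can be modeled in a few lines of illustrative Python (the function name is mine; the sketch mirrors the real x86-64 rule that 8/16-bit writes leave the upper bits of the register untouched, while 32-bit writes zero-extend):

```python
MASK16 = 0xFFFF  # AX is the low 16 bits of RAX

def add_ax(rax: int, value: int) -> int:
    """Model `add ax, value`: the add happens in the low 16 bits (AX),
    wraps at 65536, and leaves the upper 48 bits of RAX untouched."""
    ax = ((rax & MASK16) + value) & MASK16
    return (rax & ~MASK16) | ax

rax = 0x123456789ABCFFFF
print(hex(add_ax(rax, 1)))  # 0x123456789abc0000 -- AX wrapped, rest intact
```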
Jim Keller also mentioned that they don't need to optimize every single part of the CPU, just the parts that people mostly use. A Ryzen CPU will already be ridiculous for decades-old 16-bit DOS programs.
@@throwaway6478 _Barely even qualifies_ as a rounding error? I'd say proportionally, it's so small the appropriate phrase would be "doesn't even qualify". If you're dealing with several billion in total, anything less than several million might as well be free. You'd probably have to get up to the Pentium before you hit "rounding error" territory.
I love that we've come full circle and gotten doom to run on a computer
It was so funny to me when I found out that, being a DOS game, the modern computer, technically, cannot run DOOM... and now I find out that, in fact, it can...
DOS -> old ass consoles -> Windows 95+ -> modern consoles -> random shit -> DOS but on a modern PC
we now gotta run a computer in doom
@@cryomaniac-tm5mg This can be done, it's possible to build logic gates in Doom and, in theory, build a full processor. People have gotten calculators working.
Decino talks about it in his video on voodoo dolls, iirc.
@@AdreKiseque thats wild Im gonna search about it
DOS is fully usable on new hardware, but when you start getting into drivers, you run into problems. Same with old Windows versions. In most cases you won't have drivers at all, unless random people online chose to write new ones for the more commonly used parts. AC97, for example: every computer I've built within the past 30 years has had onboard AC97 sound, so if there isn't a DOS driver for it yet, someone should have worked on one at some point.
There is SBEMU which allows you to emulate sound blaster on AC97. I've tested it on my own PC running FreeDOS and it works great!
Only if your BIOS/UEFI firmware still has a legacy mode or the BIOS interrupts needed by DOS.
Two of my laptops can't run anything before protected mode (and even then, it causes issues).
@@parad0xheart FreeDOS is the way to go. It doesn't rely on the old BIOS calls, and there are plenty of soft/emu solutions for almost anything.
Still, to play around I prefer using PCEmu, emulating the hardware I had long ago. It's kinda weird 🙃
@@parad0xheart My newest motherboard I bought last year came with AC97
Some BIOSes have native AC97 support.
As someone born in 1981 who grew up playing games on a friend's 386 and later a 486, this is SO cool!
Thank you for the trip down memory lane
In the '80s I handled equipment that was programmed in CP/M, and it was fun. Once a REAL kid on a FreeDOS forum told me he finds it interesting that an old man wants to know the internals of DOS.
RIP OS/2 compatibility in x86-64 processors
This is news to me. I knew early Ryzens had trouble running it, but it was errata that was fixed in the AF revisions.
@@throwaway6478 64-bit long mode only has two of the four rings needed to run OS/2, which makes even emulation/virtualization difficult
@@AmaroqStarwind Good thing OS/2 doesn't run in 64-bit long mode then. It's a 32-bit protected mode (retroactively named "legacy mode" in the AMD64 docs) OS, with all 4 rings at your disposal. I suspect you're mixing this up with Intel's monstrous x64-S proposal, which _won't_ have IOPLs 1 and 2 at the hardware level, by "virtue" of not having a legacy mode - but nothing will stop you emulating it with a sufficiently-intelligent hypervisor if you _must_ run it on one of these unnecessarily-crippled CPUs.
@@throwaway6478 It also makes some of OS/2’s functionality harder to replicate for future operating systems.
@@AmaroqStarwind That I can agree with - but as I said, nothing that a sufficiently-advanced hypervisor won't fix.
Brilliant. This kind of stuff shows how we walk on the shoulders of giants whenever we do anything with computing.
The backwards compatibility makes x86 processors always definitive. x86 users aren't missing out on MMX, SSE, and SSE2, even when they get the latest processor.
Except when they are (AVX512), but yeah.
@@lordofhyphens There's always ways to detect whether it's supported. And a compatibility hierarchy is maintained, with processors supporting SSE2 implicitly supporting SSE, which in turn is implicitly supporting MMX as well, which implicitly is supporting x87.
It definitely helps that all x64 CPUs are required to implement SSE2.
Don’t forget about x87 floating-point math.
@@MaddTheSane Can you use x87 in Long Mode? Or only in 32-bit processes?
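That feature detection is done with the CPUID instruction. The bit positions below are the real flags in CPUID leaf 1's EDX word, though the decoder and the sample EDX value are just an illustrative Python sketch:

```python
# CPUID leaf 1, EDX feature bits (positions as documented by Intel/AMD)
FEATURE_BITS = {"fpu": 0, "mmx": 23, "sse": 25, "sse2": 26}

def has_feature(edx: int, name: str) -> bool:
    """Check one feature flag in a CPUID leaf-1 EDX word."""
    return bool((edx >> FEATURE_BITS[name]) & 1)

# A hypothetical EDX word from an SSE2-capable CPU. In practice the hierarchy
# holds: SSE2 parts also report SSE, MMX, and x87, and every x86-64 CPU is
# required to implement SSE2.
edx = (1 << 0) | (1 << 23) | (1 << 25) | (1 << 26)
print(all(has_feature(edx, f) for f in FEATURE_BITS))  # True
```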
What's cool: you can easily get MS-DOS to boot into any other program by creating an Autoexec.bat file. You can even make a simple menu, to let the user select what program they'd like to use.
Setting programs to load on boot with autoexec.bat was fun as an 8 year old kid.
@@michigandersea3485 Crafting the perfect config.sys and autoexec.bat was absolutely mandatory until really people started to move to Win 95 and that first MB of RAM didn't require all the babysitting.
That's how Novell NetWare servers started. However, the OS/2 version ran under OS/2.
It looks like at least two things are needed for a modern Intel- or AMD-based computer to run old MS-DOS. One is that the computer's firmware needs to offer a legacy BIOS/boot mode (CSM). The other is that the graphics card needs a legacy mode that can be enabled. It is amazing that this backwards compatibility has been maintained for so long, especially with the planned obsolescence of so many items.
Planned obsolescence is a hoax. ¯\_(ツ)_/¯
There's no money in removing it.
@@demonmonsterdave Removing it doesn't generate money, correct. But removing it would save a lot of design time and reduce the rate of defective chips.
That being said, the reason they decided to keep it is that backward compatibility was proven to be a big boost to sales. IBM became the top dog in the industry for several decades because of it.
@@gorilladisco9108 That being said, it is what it is, having said that, to be honest.
It's how Microsoft enforced its monopoly. Intel's not a monopoly, but it can still be "anticompetitive". As long as there's a substantial segment that demands backward compatibility, it pays to cater to them. When they don't, it gives purchasers more incentive to go to another vendor. For example, if you don't need Win32, you might as well get ARM, maybe an ARM Chromebook. They would rather you not do that!
Backward compatibility just pays for itself in so many ways. Old software has tremendous value.
Apart from some extremely out of date enterprise software, pretty much everybody else is using some form of DOSBox, DOS running in a VM or FreeDOS for this sort of thing. And even with the enterprise stuff, it probably would be better to be run in that fashion, but it can be a massive headache to make sure it works properly in the new environment and move it to the new set up. At which point, you might as well have just fixed the program to work with more modern OSes anyways.
I'm impressed you didn't have any speed problems with these old games. I don't know when they started programming games to use timers rather than just clock cycles, but I remember a few times in the '90s when I used a Pentium to play 286 games and they were crazy fast. Cool vid.
That's mostly because John Carmack was smart enough not to assume that the processor's ability to pump out frames would be a reasonable method of timing the game. There were other games during the early '90s that would run weirdly if there was too much processing power. I remember one of the King's Quest games (I want to say IV) didn't have sound for me unless I dropped my 75 MHz Pentium to whatever it was with the turbo button in the right state.
I think that pretty much all DOS games made after like 1993 should already run in correct speed on faster CPU.
Bouncing Babies (game) ran fine on 4.77Mhz, but toggling to 8.00Mhz... was way too fast. :)
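The difference between those two eras of games is delay-loop pacing versus wall-clock pacing. A sketch of both approaches in Python (all names and numbers are mine, purely illustrative):

```python
def busyloop_game_speed(iters_per_frame: int, cpu_iters_per_sec: float) -> float:
    """Early style: burn a fixed number of loop iterations per frame.
    Frame rate -- and thus game speed -- scales directly with the CPU."""
    return cpu_iters_per_sec / iters_per_frame

def timer_based_position(elapsed_sec: float, units_per_sec: float) -> float:
    """Later style: movement scales with wall-clock time, so the game plays
    the same on a 4.77 MHz 8088 and a modern CPU."""
    return elapsed_sec * units_per_sec

# Doubling the CPU doubles a busy-loop game's speed...
print(busyloop_game_speed(1000, 2e6) / busyloop_game_speed(1000, 1e6))  # 2.0
# ...but a timer-based game's world advances at the same rate regardless
# of the CPU (Doom, for instance, runs its world at a fixed 35 tics/sec):
print(timer_based_position(1.0, 35.0))  # 35.0
```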
Intel will never be able to get rid of the older x86 instructions, simply because the commercial/industrial landscape won't allow it for at least another 20 years or so.
Perhaps not get rid of them completely... but they could create a series of chips without the older instruction set for datacenter/scientific use, and keep a gaming/consumer chip with all the compatibility.
In 20 years I hope to see RISC-V as the main architecture.
I don't know; they went UEFI-only (that is, removed CSM support) with a bit of wailing and gnashing of teeth, but no actual impact on their bottom line (which is what really matters - revealed preference and all that).
Not quite how it works, nor is there any particular reason to want to get rid of x86 instructions. There have been many "replace the PC with THIS!" projects that have come and gone through the years, and they don't fail on anything related to industrial use, or what I think you mean by commercial.
They have no software, hardware, or users. Boom. They offer nothing that x86 PCs don't already do well enough for it to be a non-issue. There are plenty of single-use computers out there, but the fun bit is that even they are mostly x86 - because of standards and IDEs, plus support base, name recognition and so forth, but primarily device add-on standards, IDEs, and to a large degree code portability.
We humans make everything to fit hands. Hands are the standard. There are alternatives that might be better suited for some purposes, but we won't genetically select for them. Think about it---
@@IvnSoft They did. It was called Itanium. Note my use of "was" past tense. Look it up 😉
going UEFI-only is on the system manufacturers and the firmwares they use, not on the CPU manufacturers
the CPU couldn't give less of a damn about what firmware it runs on startup, be it a traditional BIOS or UEFI with CSM or without
You bought a NAS to play Doom..... I approve 😆
Side note: Meanwhile Mac users have been forced into emulation to enjoy anything made before 2007. Kind of a shame, all my favorite games growing up rely on PowerPC architecture ☹️
Edit: Antonyms are fun! 😀
don't you mean made BEFORE 2007?
You might be able to shoehorn macOS into running on the IBM power series of servers that use a ppc architecture to this day, but the driver situation will likely be a nightmare
Before 2007 - and also before 2021, since Apple changed from x86 to ARM then.
@@Pwnz0rServer2009 Welp, teach me not to comment on 6 hours of sleep 😆
@@oggilein1 Also the Wii U is PPC! ...then again, I don't think it's worth the hassle to make it run actual PC software, seeing as it's pretty much just a Wii but more Wii 😭
You don't need a ROM BASIC chip to run BASIC programs, you just need a local BASIC interpreter like Q-Basic or GW-Basic.
IBM PC DOS 1.0 needed the ROM BASIC, though.
I remember people making copies of the BASIC ROM to put in their PC/XT clones.
2:22 Saying this while Intel CPUs are literally cooking themselves to death is wild!
intel cpus literally rebelling against the concept of existing in a functional state
Once upon a time, editorials were published complaining that the latest x86 chips were "code museums" wasting much of their transistors to support backward compatibility.
Today, the _vast_ majority of the complexity is the logistics of dealing with superscalar and out-of-order execution, with a surprisingly small part of it being the Execution Ports that do the actual work, and a tiny sliver being the front-end that parses the instruction set. So today, supporting old instructions is not significant; the entire chip is abstracted away from the published instruction set anyway.
Running WordStar or WordPerfect should be able to keep up with typing speed. No virus checking or dozens of background processes causing little pauses, important for music production. Software back then was usually sold outright with a key, not annual licences.
... which was then pirated 😅
I consider myself fortunate to have seen, and worked on, computer technology from the 70s. 8in floppy disk, 9.6kbps data rate, punch cards, DEC/VAX minicomputers, PC DOS, CGA graphics, OS/2, IBM super mainframes and all the way to current state-of-the art gaming rigs. My home is connected to gigabit fiber which would have been unthinkable just a couple of decades ago. Such great memories!
I recall fondly my first PC. A Philips 80286 with 1MB of RAM and 14" colour display. I used the keyboard of that computer until 2023 when the left shift key finally gave up.
If it's a mechanical keyboard, the switch can be replaced (separately).
In 286 times, most keyboards were expensive and mechanical.
Just don't throw out a mechanical keyboard; it can almost always be fixed to (almost) 100% condition.
"Quite literally everyone and his dog had an IBM PC" Um...most people from 1982-1992 did not have a home computer and had little reason to spend the money on one. In 1990 for instance, only 15% of households had a computer.
Yeah, and most people who did have home computers during that time had 8-bit machines from companies like Commodore, Tandy, Atari, or Apple. Or TI if they were unlucky.
That said, if you had a computer at work during that time period, it was probably a PC.
@@jeffspaulding9834 28.1% of the adult population reported they used a computer either at home, at work, or at school in 1989. As I remember, a lot of people at the time who had PCs at work bought a "clone" and pirated a lot of the software, particularly word processors and spreadsheets. I'm pretty sure Microsoft made this part of their marketing plan. There was a lot less hassle about copying apps back then, and I think it was on purpose, to achieve monopoly power.
@@squirlmy An IBM PC back in those days would set you back at least a couple grand (in 80s money) for a minimal system. A "complete" system would cost you closer to $4-6k. Clones were usually cheaper, but the price point was still way above what you'd see on an 8-bit model like the Commodore.
Back in the 80s I knew several people with 8-bit machines, but only two with PCs (my family had one for my step father's work). Software was readily available at your nearest mall, and the PC section was fairly small (and mostly focused on business apps). Magazines like BYTE catered mostly to the 8-bit user. It didn't help that the graphics on the PC were horrible - tiny, expensive, usually monochrome monitors. Many of the 8-bit machines had color and could use a TV for a monitor. Because of this, you saw a ton of games for the 8-bit machines, which also appealed to the home user.
It really wasn't until the 486 era that the PC really took off for home users. By that time the prices had come down, graphics had improved, more games were available, and more people were using them at work with programs that required more than 64k.
(Edit: typo)
I got my first computer, an IMSAI 8080, in 1976.
@@jeffspaulding9834 I used to play with (and maintain) VAX 11/780s. We had 7 of 'em where I worked.
@5:30 You're correct that the built-in BASIC is not available like it was back on those machines, but don't forget that you can (almost) always grab BASICA and BASICB and run those old basic programs by dropping those onto the system and using them to run 'em. On that note, QBASIC won't work with those much older basic applications, but BASICA and BASICB usually do the job when run with an OS that can run said BASICA and BASICB.
My PC might still be able to run DOS, but it also sadly fails at Windows 95, as it's way too fast, which makes the (unpatched) security code fail horribly. 😉
And even if you can technically get it to run, it's unlikely you'll find drivers for most of your hardware
@@CaptainSouthbird pcie to pci adapter, modern motherboards with isa slots:
@@randomgamingin144p It'd be a waste of money and kind of silly to do that, but it sounds like exactly the kind of thing I would do just for the fun of it. I once forced a 2006-ish laptop to run Windows 98 including adding a compatible soundcard on a breakout board from its ExpressCard slot and wired in a voltage regulator to pump the otherwise missing 12V into the bus. It did work! And it was completely stupid, but that's beside the point.
Realistically though, if you really want to run Windows 95, there's practically unlimited choice of various Pentium clone machines on eBay. And even at the price levels, it might be cheaper to just buy period-correct hardware than one of these specialized adapters plus requisite hardware. Whereas you could spend somewhere around $75-$200 and just get a whole period system suited for the task directly. And be way less "messy."
Or, of course, just run a VM and be done with it.
It's fine to install the patch. I wonder why YouTube deletes comments with its name, however.
VMWare Workstation must inject delays to make Windows 95 work.
Fascinating video.
I just booted Windows 95 SE on my 24-core dual Xeon workstation using an external USB DVD drive with the original disk. My 4K/60fps GPU came up in 640*480. A USB wired keyboard and a 2.4G wireless mouse were fine. [ My wireless keyboard was Bluetooth. ] I also plugged in a USB Soundblaster II clone, which gave me stereo sound. A USB FDD also worked. It does not have serial or parallel (printer) ports for me to test. No networking. No internal audio. No audio over HDMI. What fun.
I get this all the time with my old computers using Windows 98/98SE/2000. These operating systems *hate* wireless keyboards. I've had computers not boot with wireless devices, if the USB receiver is in the port. Many allow for wireless keyboard usage if you insert the thing when in windows, just not on startup.
AKA how to run classic Doom at 35,000 frames per second.
It's actually capped at 35.
Doom has an FPS limit at 35 FPS.
But the Doom benchmark doesn't! The benchmark is actually uncapped.
@@warrax111 With good reason, most people couldn't play Doom if the FPS was too high, and a benchmark that is capped would be pretty useless once computers got past a certain point in terms of power.
@@SmallSpoonBrigade A frame cap also saves electricity, as the computer (and later the 3D graphics card) doesn't have to compute as much and can rest between frames.
So it's even reasonable to introduce a frame cap rather than let it compute 300 frames every second.
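A minimal Python sketch of the arithmetic behind that: at a 35 FPS cap the frame budget is about 28.6 ms, so a fast machine spends almost all of it idle. The render times below are invented numbers for illustration, not measurements.

```python
# Sketch: why a 35 FPS cap leaves a fast CPU mostly idle.
# Doom's logic runs at 35 tics/sec (half the 70 Hz VGA refresh);
# the render times used below are made up for illustration.

TICRATE = 35  # Doom logic tics per second

def frame_budget_ms() -> float:
    """Time budget per frame at the capped rate (~28.6 ms)."""
    return 1000 / TICRATE

def idle_ms(render_ms: float) -> float:
    """How long a frame limiter would rest after rendering (0 if late)."""
    return max(0.0, frame_budget_ms() - render_ms)

# A modern CPU finishing a frame in 1 ms rests ~27.6 of every ~28.6 ms,
# which is why the cap saves so much power on fast hardware.
print(round(idle_ms(1), 1))   # fast machine: almost the whole budget idle
print(idle_ms(40))            # slow 386-class machine: never rests
```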
I love your vid! Gives me real old-school vibes. This is also quite fascinating to me: I remember running Windows 98 SE on a 2012 Ivy Bridge laptop, but I never would have thought that something as recent as a 2016 Intel CPU would be able to run an operating system as old as DOS!
The x86 _does not_ have backwards compatibility with the 8080 and its derivatives (the 8085 and Z80). Instead, it has a sufficiently similar opcode set that relatively simple binary translation is _almost_ the entirety of what's needed to move from 8080 to x86.
Interestingly enough, this is true with almost any modern RISC-style core as well. The complex address modes need to be translated to more than one instruction, but overall it's a very simple, mechanical process. 8080/Z80 code runs natively very well after binary translation, since it typically fits in the L1 cache, along with the data. Getting 1GIPS equivalent Z80 instruction rates on current x64 is common. Not that there's a lot of utility for it, but hey, old Mandelbrot programs run at 60FPS+ :)
True for code. However the hardware environment was backward compatible to a large extent and used the same peripheral chips. NOTHING was backward compatible to the 8008/ 4004 !
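To illustrate the "mechanical translation" point above, here is a toy Python sketch with a tiny hand-picked opcode table. The register mapping follows Intel's published 8080-to-8086 conversion (A→AL, B→CH, C→CL, HL→BX); a real translator covers the whole instruction set plus flag and addressing quirks.

```python
# Toy sketch of mechanical 8080 -> x86 binary translation.
# Only a few instructions are mapped, purely for illustration.

# 8080 mnemonic -> rough 16-bit x86 equivalent, using Intel's own
# 8080-to-8086 register mapping: A->AL, B->CH, C->CL, H->BH, L->BL, HL->BX.
TABLE = {
    "MOV A,B": "mov al, ch",
    "ADD C":   "add al, cl",
    "INX H":   "inc bx",
    "JMP":     "jmp",          # jump targets get fixed up separately
}

def translate(program):
    """One-to-one rewrite: no interpreter loop needed at runtime."""
    return [TABLE[insn] for insn in program]

print(translate(["MOV A,B", "ADD C", "INX H"]))
# -> ['mov al, ch', 'add al, cl', 'inc bx']
```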
EPIC
Love how much you've grown; I've been subbed for a long time.
Nice video! If my own memory isn't faulty, BASIC was in the DOS directory, which should've been in the path, so not in ROM. I'm not sure if it came with PC-DOS, but I'm pretty sure it came with most versions of MS-DOS. Heck, even the C=64 came with a version of Microsoft BASIC. You forgot Cyrix as a competitor, but I suppose that's a subject for another video.
Sorry but your memory is a bit off. The original IBM PC and PC/XT had real ROM chips. I still have the HEX file of its contents. However most every PC "clone" had BASIC as a COM file - I think for licensing reasons.
Cyrix chips were NOT QUITE X86 compatible. I used to build PCs for a living and it was always necessary to include minor patches to keep Windows stable - especially at clock speeds over 66MHz. But they were cheap !
@@adrianandrews2254 It wouldn't surprise me if my memory were a bit off. Thanks for the correction.
I also used to build systems, and Cyrix chips were very cheap. I do remember they weren't 100% compatible, and yes, they did present issues at times. I think that's kind of why they disappeared. They were cheap, but also sucked at the same time. AMD struck that sweet spot of cheap chips with compatibility.
We need an i7-8086K running DOS!
I am not sure if it is possible to start with a UEFI BIOS in 64-bit graphics mode, load an IBM-compatible BIOS from a file into memory, and then switch into 16-bit text mode to boot MS-DOS. I don't know if an IBM-compatible BIOS would support the mainboard functions of those mainboards without UEFI. The Intel i7 architecture and the graphics cards can switch to 16-bit text mode. But the problem is the missing IBM-compatible BIOS.
Can someone get on this ASAP.
This is a bit of a nitpick, but:
Love how the mic quality has gotten progressively worse
soon it'll be period correct
It's running dos.
oh, hey koutsie. fancy seeing you again
Some instructions have been removed, for example intel removed some AVX-support over time (because they couldn't fit it in their efficiency-cores) while AMD made their equivalent cores more efficient by removing cache etc
The earliest of Intel's hybrid processors supported AVX-512 on the performance cores but not the E-cores. I think it had to be disabled/removed due to complications with the OS-level scheduler lacking the sophistication to run threads that require AVX-512 on P-cores only.
In the 16-bit mode of MS-DOS we can use 64-bit MMX instructions and 128-bit SSE instructions, but not AVX.
LOL you totally got me with Last ninja! Didnt expect commodore 64 reference :D
He seems blissfully unaware that PC-DOS did not need BASIC on disk because it was built-in on IBM's (ONLY). Clones used MS-DOS, which was NOT in ROM and had GW-BASIC (Gee Whiz Basic) on disk. IBM did offer BASICA (advanced BASIC) on disk.
Any PC compatible that HAD put IBM's BASIC in ROM would have been sued instantly. You can still be considered 100% compatible without having BASIC in ROM (which was pretty useless anyway since you need a DOS to store programs!). 🤡🤡🤡
You can write BASIC programs just by using the one supplied by the ROM, but you cannot save it. To do that, you'll need BASICA from PC-DOS, which added the capability to save the program.
ps. IBM BASIC was the default mode for "operating system not found" in IBM PC.
Hi. Great video! Another use for single board computers like this is to run lightweight Linux distributions such as DietPi. You could even use Batocera as well!
I have honestly done this so many times, but I never knew that it was actual DOS, I thought rufus gave you an emulator, thanks for the info!
The cool kids got PCs with 8086, rather than the bottlenecked 8088. I had an Amstrad[!] with an 8086 and the full 640K of RAM. The good old days.
And any game that relies on the intrinsic CPU speed for timing (lazy programmers!) will be quite interesting to run.
It wasn't lazy programmers... well, it kind of was; but there was a very good reason programmers chose to use loops rather than the 8253: the 8253 wasn't guaranteed to be there. In fact, they couldn't count on any hardware being the same across systems back then. Standards? There were none. No APIs. Don't forget this was an era where your version of DOS had to be customized by the OEM, because standards for what a motherboard was didn't exist.
It was necessity. Using the 8253 would have limited the software to just IBM PCs and anyone who directly cloned it. By using "lowest common" methods, they ensured it worked on more hardware.
@@dewdude IBM didn't even name their ISA bus. It was later Compaq and a group of clone makers retroactively named it, along with their future plans for E-ISA (which soon gave way to PCI)
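The "speed loop" problem above is easy to sketch in Python: a busy loop calibrated on the developer's 8088 finishes almost instantly on a faster machine. The iteration counts and speeds here are made-up illustrative numbers, not measurements.

```python
# Sketch of why CPU-speed timing loops break: a delay loop tuned for a
# 4.77 MHz 8088 takes almost no time on a faster CPU. All numbers below
# are invented for illustration.

LOOP_ITERATIONS = 10_000  # tuned so the delay is ~50 ms on the dev machine

def delay_ms(iterations_per_ms: float) -> float:
    """Wall-clock time the busy loop takes on a given machine."""
    return LOOP_ITERATIONS / iterations_per_ms

dev_8088 = delay_ms(200)        # ~200 loop iterations/ms on the dev 8088
modern   = delay_ms(2_000_000)  # a modern core runs the same loop ~10,000x faster

print(dev_8088)   # 50.0 ms, as the programmer intended
print(modern)     # 0.005 ms -> the game runs absurdly fast
```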
1:05
Making up over 80% of all new home computers sold by 1989? Well, maybe in the US, but in the UK the ZX Spectrum basically had the most HOME* market share during the '80s, with the C64 coming 2nd, then the Atari STe/Amiga 500 3rd, with the PC bringing up the rear. Apple really WAS NOT A THING in the UK until 1997.
*Schools tended to have BBC Micros/Acorn Archimedes machines before changing to PCs in the early-to-mid '90s
DOS was an ideal OS for gaming precisely because it did so _little_ to help the programmer. It offered exactly ZERO aid for things like sound, graphics etc. The BIOS helped somewhat, but no professionally written game would rely on BIOS routines for graphics. DOS's strength was that it didn't get (much) in the way of the programmer, if we exclude the horribly complicated memory management standards.
I still use DOS regularly for data recovery and other low level stuff. DOS is still used in niche applications to this very day.
Thanks for the fun video. I actually used DOS 1 and, over time, about 12 versions on those old chips.
i don’t remember subscribing but i sure am glad i did, this is so interesting
Of course ARM CPUs have a long and storied heritage of their own, and if you get the right ARM single-board computer you can even run the original RISC OS, originally designed by Acorn Computers for use in their first-generation ARM workstations.
None of the mainstream ARM chips can run ARM 1 or ARM 2 code. So the RISCOS binaries from Acorn won't work. Of course if you have a binary build for something newer it will work. But original RISCOS runs on original hardware and emulators only.
I ran dos on my laptop on bare metal back in 2014. I very very quickly discovered that I liked having USB drivers. Frankly dosbox on top of something modern will give you a better experience. Like, I can use my USB midi keyboard (which DOS would have no drivers for) as an audio output device for doom. Or a virtual synth like fluidsynth. If you want the games of yesteryear to survive, pin your hopes on emulation. It will give you a much better experience I swear…
The most compatible OS? Daddy's Old System!
My very first computer was a Heathkit that my father and I built together. It ran CP/M (MS-DOS was really a copy of CP/M). When I was in university, for a short time, I worked in IT (I am no IT expert). We had a UNIX machine. It was command line and a great computer! I moved to Macintosh which, to this day, still uses UNIX (I am sad they took away the ability to log in as root). But on a current Mac, you can still get to the command line if you really want and bypass the GUI. (I think the main reason why Mac stopped allowing people to log in as root was because idiots could use the rm -r command and, with no checks, would just erase everything!) But today's UNIX will still run every old UNIX program, no questions asked.
Nice idea to run DOS on a ZimaBlade. You got a new subscriber. I review ARM SBCs. Cheers.
A lot of new laptops don't have any CSM mode so you can't run DOS at all...Soon desktops won't have it either.
I'm at an impasse on your comment:
1) I like it because you are correct.
2) I absolutely hate it because you are correct.
That's exactly why I believe that DOS is the innate operating system of the x86 architecture
There are LGA1700 platform boards that take Alder Lake or Raptor Lake cpus that have a PCI slot. And it's possible on some motherboards to break out a fully functional ISA bus over the TPM connector. So you know what that means right?
PCI-e can easily use PCI cards via adapters too
@@AshBashVids The motherboard still has to support DDMA if you want PCM sound over the PCI bus under DOS. A lot of boards with PCI slots don't, but then you find one that does and it's golden; usually industrial boards these days.
I think it's done through the LPC header, if the board is old enough, not the TPM. But even so, at some point in time the way DMA and IRQs are handled by these ports (a.k.a. the chipset) fundamentally changed, so DOS sound etc. cannot work even if you managed to break out an ISA slot. You cannot use PCI because it works completely differently.
@@giornikitop5373 TheRasteri got it working, but I don't think every board may still have the full ISA protocol intact.
@@tek_lynx4225 More thinking if you want to use a Voodoo card. You can get sound working via ISA if you have a TPM/LPC header.
Thank you for the memories. ❤
Thank you so much for this video! So many people do not know that DOS exists, because they work with Windows.
I am 50 years old. It is so fun to show small kids that their new PC can still load DOS; they are surprised at what an ancient being is summoned, LOL! They think it is black magic.
This is a big reason why I'm a bit of an x86 fanboy. Yes, the syntax for x86 asm is bad, but it's rare to write x86 by hand anymore. The compatibility is beautiful. I often bring up embedded ARM systems(usually zynq or zynqmp) and specifying devices in device trees, configuring u-boot is tiresome. This is why for instance with OpenWRT you see one build for x86 and 50 others for variants of ARM/MIPS devices.
Yeah, compatibility is truly its superpower, and by the looks of it AMD is getting even more serious about power efficiency with Strix, and Intel is with Lunar Lake. There's lots of doomsaying about x86, but if history has proven anything, it is one of, if not the most, adaptable architectures in history, and clever engineers always find a way to break the mold with it.
Technically most CSM legacy support is emulation. API emulation that is. They use a combination of small stubs with Virtualization, IOMMU and/or ACPI functions to create a classic Interrupt BIOS that calls the newer UEFI/ACPI functions. Because UEFI functions must run in either 32bit or 64bit mode, they do need to use a minimum of Virtual 8086 mode for a portion of their CPU-Side function.
It doesn't fully hide itself. If a DOS program traces down the calls it will see how they execute. They honestly won't care and will consider it like any other 32bit mode extended BIOS.
This basically makes it more in the family of the Wine Win32/Win64 stack or CP/M-for-DOS stacks than native stacks like WinPR and MinGW.
Casually drops the fact that Windows ME's DOS was extracted and running perfectly
You don't need BASIC in ROM. You need a program to load the files into: GW-BASIC, QBasic, etc.
He was talking about BASIC which was originally present in PC DOS.
Of course, it is possible to use versions, which don't rely on ROM code.
Some TPM ports can be used to effectively get a legacy ISA socket. Saw a legacy sound card attached to a semi-modern machine by this method.
USB floppy drives are bootable so long as the device can boot from that USB port, and WAAAAAY faster than the floppy interface.
DOOM is what I still use a DOS P4-3GHz for. With ISA slots.
haven't you run into incompatibilities with the usb floppy drives? And more specifically modern ones, not the few IBM and other big makers put out back in the day. I was really shocked at the poor construction and lack of support. Maybe if all you do is boot FreeDOS it's all right. Actually moving data and updating firmware is a nightmare.
Wait! How do you even play Doom anyways???? Heresy! 😂 Great vid btw
I used to play The Last Ninja on the Wii as a kid, thanks for bringing up a good memory!
How did you solve the problem of old games running too fast?
modern processors are optimized for 64bit instructions
@@yuan.pingchen3056 🤦♀️
Did you even watch the video?!
@@yuan.pingchen3056 BOT detected, opinion rejected!
Turbo button.
The underclocking of a lifetime.
Intel is planning to make x86 a 64-bit-only architecture through the x86-S specification
x86-S supports 32-bit apps but not 32-bit kernels. It also removes stuff like ring 1 and 2 (which almost nobody used)
@@sundhaug92 They should have done it years ago... there is a ton of legacy stuff that can be removed. It would also simplify the boot process and operating systems (basically because the OS wouldn't have to start in 16-bit real mode and set up all the stuff to get to 64-bit mode).
@@alerighiyeah, removing that tiny bit of code that is already written and working will save so much. I also bet that removing the old instructions will save hundreds of transistors.
@@alerighi You'd be surprised how many niche things still depend on 32-bit kernels. Also, modern kernels don't have to start in 16-bit, either because the bootloader gets them up to that point or because of UEFI
@@harrkev removing old instructions won't save transistors because they're just microcode in the ROM, it'll save a couple of diodes, no reason to do that.
they could remove the real mode because that can be emulated in user-mode by the operating system.
Intel is quietly working to drop 16-bit and 32-bit compatibility from future processors, which will let them squeeze out more performance in future chips. So, this is a feature that probably won't last another decade.
We will see which runs DOSBox emulation faster: ARM or the newest 64-bit Intel. 😂
Corncob 3D was my favorite shareware game before I discovered Descent.
Shorter video than usual but pleased!
There is a big difference. In the old days, DOS PCs always asked for the date and time setting when powered on. There was no RTC battery and memory.
I use DOSBox on Android and it has no date and time commands. 😂
@@maxmuster7003 It was 1980s 8088/8086 hardware, when I was at university. Not later. ;)
@@aligokcen5908 For an 80286 mainboard I used a floppy disk with a BIOS setup program to enable devices like the HDD, because the mainboard had no setup in its BIOS chip.
The biggest issue is really the driver situation. You may have gadgets like CSM, SATA compatibility mode and SBEMU, but your first hurdle is probably lack of support for NVMe SSDs - only the first mainstream Samsung drives shipped with a legacy option ROM to allow booting from them. Thankfully USB sticks are well supported instead.
DOS is going to make a big comeback some day. Invest in it now, while you can.
"No floppy drive" is indeed an oversight.
Makes me want to buy an IBM PC Jr. from ebay and relive my childhood.
Yay! Guess time to mod my PC now!
Intel microprocessors start in "real mode" at boot-up. Windows then switches to "protected mode", which is the working environment for Windows. DOS stays in "real mode" (DOS was made before Intel had any idea about "protected mode").
In essence, DOS can still run on i9 computers, but it's confined to 1 MB of memory. Additional drivers can be used to reach the rest of the memory space, but they will always use a swapping/banking mechanism.
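The 1 MB confinement falls straight out of real-mode address arithmetic, which a short Python sketch can show (physical address = segment * 16 + offset):

```python
# Real-mode addressing: physical = segment * 16 + offset, which is why
# DOS in real mode sees (roughly) 1 MB of address space.

def phys(segment: int, offset: int) -> int:
    """Physical address from a 16-bit segment:offset pair."""
    return (segment << 4) + offset

# The classic BIOS data area at 0040:0000:
print(hex(phys(0x0040, 0x0000)))   # 0x400

# The highest reachable address, FFFF:FFFF, lands slightly above 1 MB;
# this overflow is the origin of the A20-line / HMA trickery.
print(hex(phys(0xFFFF, 0xFFFF)))   # 0x10ffef
```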
This chiptune at the end is giving Lampshade vibes
Itanium lost not because of missing compatibility with earlier processors, but because its theoretical power never showed up in real life.
Itanium required new software tools, and developers targeting it had to understand the architecture: a dozen years of development.
Moreover, it became clear pretty quickly that the size of the part of the chip working simultaneously on the same microcode matters, and it is better to have an asynchronous chip doing small things in parallel than a more organised one doing its job better in theory but needing more space to do it in reality.
Some of the evolution of multi-core and HT came out of the work on Itanium.
I'm not fluent enough in English, and I read about the problem with the size of the gate structures needed to maintain Itanium's organisation decades ago, so most likely my attempt here to summarise the results is not too good.
More or less the gist of what I read was that RISC and Itanium had one major problem: the die size needed to work in parallel. Whereas Intel could spread operations more flexibly and sync from time to time, ending with a slightly chaotic but easier-to-maintain (in the real world) layout of gates.
To clarify, Itanium was built around the idea that certain optimizations could be made by the compiler in order to achieve the full performance of the processor. The problem was that those optimizations turned out to be impossible, so the chip was never able to reach its full potential. It never compared well to other 64 bit architectures such as SPARC64, POWER, or Alpha. The only reason HP adopted it was because they had stopped developing PA-RISC to help Intel develop the Itanium; by the time they realized it wasn't a great chip, it was too late to turn back.
But what really killed it was the x86. Itanium was a great idea in 1994 when they first started working on it. By the time Itanium came out, x86 was eating into the high performance computing and business markets that used to be dominated by IBM, DEC, Sun, and HP. You didn't need a $20k workstation to run CAD anymore and you didn't need a $200k server to run your business logic. Commercial UNIX was dying, its business customers fleeing to Windows and its Internet customers fleeing to Linux. The only use HP found for it was for their legacy HP-UX customers - hardly anyone was developing new systems on HP-UX. Intel had made the x86 too good, and AMD brought it into the 64 bit world.
Trust me, the code that was written back in the 1960s does not waste as much space as you think.
People back then didn't have the luxury of an abundance of RAM, storage, and processing power. That forced them to make their software incredibly optimized, and their quality of code is unmatched by today's standards.
What is a problem, though, is modern hardware and unoptimized games and programs that developers don't bother optimizing because "eh, it runs well on my $10,000 gaming rig".
No one makes their own software anymore. They just pick the easiest or most popular option, no matter how abstract it is, and put a more expensive GPU/CPU in their software requirements page.
Standardization is the only key to success in everything. This is the only reason why we still have PCs that work and make the world go round.
Actually I was mostly running ARM in the 1990s after running mostly 6502 in the 1980s. The ARM back then was mostly backwards compatible with the 6502 in a number of ways mostly because of how it came about though the RISC structure of the ARM did eventually make this somewhat harder to do. Of course ARM has come a long way since those days...
So NO, not all of us went overboard for 8086 back then.
"ARM back then was mostly backwards compatible with the 6502" That's simply not true. Nothing about ARM 1 and ARM 2, and in fact any later ARM architecture, had anything to do with 6502. ARM 1 could run 6502 code very well after binary translation, but it was a waste of a 32-bit processor for the most part. Binary translators with good peephole optimizers could detect common 6502 instruction sequences like long addition/subtraction and translate them to fewer ARM instructions. Today you can do excellent binary translation from pretty much any retro processor to x64, RV32/64 and A32 (modern "desktop" ARM) instruction set. That's because there's a ton of very expensive software that is open source, like theorem provers.
I think they should get rid of that 16-bit legacy compatibility; it would save transistors on the die
The Celeron N3450 is anything but new. Is CSM mode still included in new motherboards?
I modded my laptop's BIOS to unlock all options and it doesn't seem to have CSM support (it's an 11th gen Intel).
Only on Desktop PCs now
My Asus laptop doesn't have CSM support, but I do have an old 2nd-gen Core i3 laptop that has a UEFI that's locked to BIOS-only mode due to shipping with Win7, and an 8th-gen Core i3 laptop whose UEFI supports CSM.
If someone made an EFI bootloader that emulated CSM/BIOS, that would allow you to boot older OSes on newer hardware.
@@pankoza That sucks.
@@Izanami95 I think Clover Bootloader can boot Legacy on UEFI, I never used it for that tho.
@@Flopster101 I remember many moons ago using TianoCore to emulate UEFI on my BIOS-only machines. Now I have to use Clover to emulate BIOS on my UEFI-only machines. 🤣
Great stuff. Funny bit about the missing floppy. 🤣 Strictly speaking, saying DOS provides a HAL isn't correct. A HAL is an internal OS layer meant to support porting it across more than one processor architecture. DOS, of course, is a one-trick pony. Windows NT was the first MS OS with a HAL.
IA-64's x86 compatibility was only 32-bit, never a huge priority, and never complete. Intel at the time thought native IA-64 code would quickly obsolete x86, so putting a lot of engineering and die area into x86 compatibility would be wrong. Meanwhile AMD did their x86-64 thing, and the market decided. One of the early adopters of x86-64 was the Linux community, which at the time supported several other 64-bit architectures including Alpha, SPARC64, MIPS, and PPC64, so x86-64 was really just a formality. IA-64 had some niche success with supercomputing and highly available systems (IA-64 did support master-checker, AFAIR), but not enough to tip the scale in favor of Intel.
Don't forget Itanium was created by Hewlett-Packard; Intel only joined development later. And HP took a pretty big hit financially from the failure. I don't think the Linux community was even a factor, because they ported to everything; they would adapt to any 64-bit tech, as you note yourself. A big factor was cost. It cost the market (as a whole) a lot less to hold on to Win32 and be discretionary about what software to port (and/or buy) to 64-bit. Yes, the market decided, but it was short-term (maybe even quarterly) budgets vs. the upfront cost of moving to an entirely new architecture.
@@squirlmy The Linux community in that case was a whole bunch of companies including HP, Intel, SGI and others, plus various Linux vendors, the usual suspects including smaller ones such as Turbolinux, and contractors. SGI was among those early adopters because they wanted to get rid of their in-house processor development. Which arguably is what IA-64 did best: it resulted in the end of the Alpha, HP-PA aka PA-RISC, and the high-end variants of MIPS. SGI's traditional customer base controls their own software, which was already running on 64-bit, so the 32-bit compatibility was never of much interest. In fact, due to memory sizes, 32-bit was no practical option even back then. AFAIR the first SGI Itanium system had 64-bit processors. HP's customer base was probably to a large degree identical, so I don't think they had much interest in 32-bit either. Intel was making a bunch of IA-64 systems such as the Big Sur, but I'm not sure if they were ever planning to become a big mainboard or system vendor. Microsoft and IA-64 was an entirely different matter.
Yes, the market did decide. Itanic took Intel a lot longer to finish than they were planning for. At the same time, Itanium 2 aka McKinley was looking like it was going to be a faster processor than promised, and arriving earlier. SGI eventually decided to cancel their Merced product in favor of McKinley, which however meant their first IA-64 product was delayed yet again. Meanwhile AMD's x86-64 was looking much more promising and became available on the mass market at interesting prices.
This left a few niche markets for McKinley-based systems. Floating-point-heavy applications did tremendously gain. Also initially systems with massive I/O and memory were better off with McKinley than x86-64 since the heavy hitters HP and SGI had been working on them. Another niche was HP's Tandem division. Tandem was building fault tolerant systems. The sorts that doesn't crash only because you yank out a few parts or is hit by the death star. Tandem was using MIPS; an attempted migration to Alpha had failed so eventually they migrated to Itanium. Meanwhile there was no support in x86-32 / x86-64 for that sort of fault tolerance, so they were stuck with Itanium for a long time.
These were niches. x86-64, meanwhile, was selling like hotcakes. By the time Windows Vista, the first version of Windows with 64-bit support out of the box, hit the market, x86-64 was already widespread.
What also helped sell x86-64 was that it was a better architecture design than the i386 aka IA-32 architecture. On every other architecture, 64-bit code ran slower than 32-bit because it was bigger, resulting in a lower cache hit rate. Not in the case of x86-64.
Meanwhile Intel themselves appeared to have lost interest in IA-64 a bit. I think one reason was that IA-64's EPIC (remember, EPIC is the new VLIW ...) was hard to efficiently generate code for. That issue held compilers back; the performance actually achieved by compiled code was well below what the hardware was capable of. And some features of IA-64, such as giant register sets and register windows, made for inherently large dies and complicated designs without achieving the original design goals. Then the unimaginable happened: Intel took an x86-64 license, had to catch up with AMD's x86-64 implementation, and at the same time had that boat anchor IA-64 tied around their neck, which they couldn't drop because they already had customers.
Actually, somewhere around the Nehalem generation and with the availability of EFI 1.0 (I have a Xeon workstation from 2008 with EFI 1.0, the predecessor to the UEFI 2.0 we all use today), PC CPUs no longer started in 16-bit mode but always in full 64-bit mode. You could force them to switch to 16-bit, but that was about it. Besides that, all that old stuff (16-bit, 32-bit, protected mode, real mode, etc.) uses around 100,000 bits of microcode inside a current CPU. As modern CPUs have transistors in the tens of billions, this is less than 0.01% of the die space.
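The back-of-the-envelope math there checks out; here's a quick sketch, treating each microcode bit as roughly one storage cell (an assumption, since bits and transistors aren't 1:1, and both input figures are just the estimates from the comment above):

```python
# Rough estimate of how much of a modern die the legacy-mode
# microcode occupies. Both numbers are assumptions taken from the
# comment above: ~100,000 bits of microcode vs. a CPU with
# transistors in the "tens of billions".
microcode_bits = 100_000
die_transistors = 10_000_000_000  # low end of "tens of billions"

fraction = microcode_bits / die_transistors
print(f"{fraction:.4%}")  # prints 0.0010%
```

Even at the low end of the transistor estimate, the legacy microcode comes out an order of magnitude below the 0.01% ceiling claimed above.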
And btw, around the Intel 10th generation, support for CSM/16-bit boot was dropped from pretty much every BIOS/UEFI. Finding boards that still support it is... hard.
I would love to see compilers do more of the work instead of the CPU.
Stuff like branch prediction should be explicit in the code: test, load registers based on the result, then execute, and you could put other commands in between.
Also, the very time-consuming indirect load should be split into multiple commands.
These would decrease the complexity of the pipeline. There would still be latencies on cache misses, but the proper set of commands would reduce wait cycles.
Compilers attempt a lot of stuff like that.
When they're not trying to make a rival CPU's benchmarks worse, anyway
I think they tried that with Itanium; it didn't go well. The compiler had to precisely calculate the latencies and put nops in the proper places, and even do the branch prediction.
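That scheduling burden can be sketched with a toy model. Nothing below is real IA-64; it's a hypothetical in-order machine where a load's result arrives two cycles late, so the "compiler" has to pad with nops, which is the kind of bookkeeping Itanium compilers were stuck doing:

```python
# Toy static scheduler: on an in-order machine with a 2-cycle load
# latency, the compiler (not the hardware) must insert nops so a
# dependent instruction doesn't issue before its input is ready.
def schedule(program, load_latency=2):
    out = []
    ready_at = {}  # register -> cycle when its value becomes available
    cycle = 0
    for op, dst, src in program:
        # pad with nops until the source operand is ready
        while ready_at.get(src, 0) > cycle:
            out.append("nop")
            cycle += 1
        out.append(op)
        cycle += 1
        # a load's destination isn't usable for load_latency cycles
        ready_at[dst] = cycle + (load_latency - 1 if op == "load" else 0)
    return out

# a load immediately followed by a dependent add needs one nop
print(schedule([("load", "r1", "mem"), ("add", "r2", "r1")]))
# prints ['load', 'nop', 'add']
```

On a real out-of-order x86, the hardware hides that latency at runtime; with a statically scheduled design, every mispredicted latency turns into wasted issue slots baked into the binary.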
I don't think backwards compatibility is necessarily a bad thing. Remember, when Intel tried to introduce an entirely new architecture (Itanium), it was a total flop. But ARM is not a new architecture either. In fact, you can do this on a Raspberry Pi: Install RISC OS, and you can play games made for the Acorn Archimedes on bare metal.
Very last line *chefs kiss*
🤣
"Nothing says «an act of rebellion» like an intel cpu" LMFAO 💀
Apple gave up backward compatibility in hardware when they switched from 68000 to PPC, from PPC to x86_64, and from x86_64 to ARM. They gave up software backward compatibility when they switched from classic Mac OS to Mac OS X, and partially when they dropped support for 32-bit software in macOS altogether. That's one of the things I truly love about Apple. They step forward, cut off old tails, get rid of legacy crap. There's always a smooth transition (classic Mac OS could run on Mac OS X for quite a while, PPC code could run on x86_64 CPUs, and x86_64 code can currently run on ARM CPUs), but after a while they just move on.
If you truly want to run 45-year-old software, just use an emulator. Not only can it emulate any hardware you like, including hardware you haven't been able to buy for ages, it can also fix issues that hardware had; it can even make software run better than on the original hardware. Some games today are far more playable in an emulator than they ever were on real hardware.
Some people may think cutting off backward compatibility is a bad thing, but they would probably not think that way if they knew that x86 CPUs could be faster, more efficient, and cheaper today, and Windows could be a lot smaller, more stable, and less resource-hungry, if they only dropped a bit of backward compatibility at least once every decade. You are paying a high price for compatibility that 99.999% of all users will never ever have any use for. Quite often the answer to "Can we do that?" is "Sure, we could do that and it would be awesome, but it would also break backward compatibility, so unfortunately no, we cannot do that". That happens in software dev, that happens in hardware design, and it happens all the time.
"That's one of the things I truly love about Apple. They step forwards, cut off old tails, get rid of legacy crap. "
... and their market share showed the result of those decisions 😏
@@gorilladisco9108 Their market share has been steadily rising over the last 20 years, and the main reasons more people don't buy Apple products are the price and the fact that they cannot run their Windows programs on them. But that has nothing to do with backward compatibility: even if Apple were backward compatible with their systems of 30 years ago, people still could not run their Windows software on them.
It's good that Intel has such backwards compatibility. And with increasingly faster and more powerful processors, there really shouldn't be *that* much wasted space for such compatibility. And it's better than Microsoft's lack of backwards compatibility for DOS and 16-bit Windows programs.
Even 32-bit support is iffy on Windows. I've heard the best way to run old 32-bit games on a PC is to use Linux running Wine.
The backwards compatibility is not just for the sake of running DOS. 16-bit real mode is also how the CPU and BIOS communicate with each other; the BIOS then builds the tables that Windows will read to grab the details of all attached devices, PCI cards, etc. A lot of hardware is really only understood in 16-bit real mode, or at least it's best to talk to it that way for simplicity. The two-wire serial interface near the CMOS can be used to issue commands to the BIOS during 16-bit real mode on bootup, meaning you can adjust registers and modify read/writable bits while in real mode. You can also do it in 64-bit mode, but the bitness changes and so do a lot of the addresses. I like to poke around with hardware probing tools like RWEverything. That's also why most drivers are written in low-level languages: to maximize speed and efficiency when polling hardware devices along the PCI/USB/SATA bus. You could theoretically run 16-bit programs by pointing to them or by switching hardware registers to the required CPU execution mode, meaning you could run programs directly from the BIOS; I've never tried it, but it's interesting to experiment with. You can also probe the clock generator on a lot of motherboards and get back raw hexadecimal data, letting you overclock and change normally inaccessible settings. Sometimes chips are internally write-protected and there's no way to modify the signals without messing with the continuity of the wire leading to the write-protect pin on the chip.
That Doom install screen was nostalgia
I expected to find a video about a 45 years old denial of service attack (DOS attack) targeting Intel CPUs. 😀
DoS is not DOS.😂
Bro, it's not backwards compatibility; it's x86 architecture all the way through. It will even run 8-bit applications if you boot to DOS. But those 8-bit and some early 16-bit games all run way too fast.
That's why we have ubiquitous (and reliable) slowdown TSRs these days. If you gobble up 9,999 out of 10,000 CPU cycles with a busy loop, even a firebreathing 7950X will do a convincing impression of a 286. 🤣
The different operation modes of the x86 microprocessor have different instruction encodings. The same "mov ax, bx" will be translated into a different bit pattern by the compiler/assembler depending on the intended operation mode.
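A concrete, hand-assembled illustration: in 16-bit mode the default operand size is 16 bits, so `mov ax, bx` encodes without a prefix, while in 32-bit mode those same bytes mean the 32-bit move and the 16-bit form needs the 0x66 operand-size override in front. These are the standard x86 encodings:

```python
# Hand-assembled encodings of the same register-to-register move.
# In 16-bit code, "mov ax, bx" is just opcode 0x89 + ModRM 0xD8.
# In 32-bit code, those same two bytes decode as "mov eax, ebx",
# and the 16-bit form requires the 0x66 operand-size prefix.
encodings = {
    ("16-bit", "mov ax, bx"):   bytes([0x89, 0xD8]),
    ("32-bit", "mov eax, ebx"): bytes([0x89, 0xD8]),
    ("32-bit", "mov ax, bx"):   bytes([0x66, 0x89, 0xD8]),
}

for (mode, asm), machine_code in encodings.items():
    print(f"{mode:>6}  {asm:<13} -> {machine_code.hex(' ')}")
```

So the CPU doesn't carry three separate instruction sets; the current mode just changes how the decoder interprets the same byte stream.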
About old games running too fast: the reason is that they use a software counter to time their movements. With computers running faster and faster, 1990s kids had a challenge: "play Tetris on a new computer". DOSBox has cycle speed-up and slow-down hotkeys for exactly that purpose.
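The modern fix, and roughly the effect DOSBox's cycle throttling fakes for old binaries, is to pace the loop against the wall clock instead of counting raw iterations. A minimal sketch (the function name and frame budget are made up for illustration):

```python
import time

def run_frames(n_frames, frame_time):
    """Pace a game loop to wall-clock time instead of a raw software
    counter, so it runs at the same speed on any CPU."""
    start = time.perf_counter()
    for frame in range(n_frames):
        # ... one frame of game logic would go here ...
        deadline = start + (frame + 1) * frame_time
        while time.perf_counter() < deadline:
            pass  # busy-wait, much like a slowdown TSR burning cycles
    return time.perf_counter() - start

elapsed = run_frames(10, 0.01)  # 10 frames at 10 ms each
print(f"{elapsed:.3f}s")        # ~0.100s regardless of CPU speed
```

A loop that instead counted to some fixed number per frame, as the old games did, would finish almost instantly on a modern CPU, which is exactly the "Tetris on a new computer" problem.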
"Nothing says rebellion like x86." You a funny guy, says Charlie Chan.
The 8086 was assembly-code and register compatible with the 8080 and 8008. It was NOT binary opcode compatible with them. Its register set was a superset of the 8080's and 8008's.
Naming a computer Zima is probably just going to make people associate it with the failed alcoholic beverage more than, I guess, the intended cool operating temperature of a Russian winter. :p
"New" is questionable since you're running a 2016 chip. That said, Skylake (which I think the N3450's Goldmont was based on) was the first CPU to begin trashing backwards compatibility. Maybe Goldmont kept more of what Skylake dumped.
Well, that's one way to say I'm old: 45-year-old DOS