It's worth noting that, even given the original theme of this video, a 6502 at ~1 MHz vs a 286 at ~1 MHz is still not really comparable. Even when it got down to sub-1 MHz, the 286 is still moving more data and working more efficiently per cycle than a 6502 could ever hope to do at the same speed. It's an important lesson in how clock speed is only one factor when it comes to CPUs.
Fun fact about timings on IBM PC compatible machines: Windows 95 will crash during boot on a sufficiently fast computer. It performs an operation and measures how many time intervals it takes to complete the operation. If the computer is fast enough, it takes _zero_ time intervals, Windows divides by zero, and crashes.
Neat experiment! It's possible that using a sinewave at those low frequencies may have caused some issues with the logic circuits. You could try that again using a square-wave to see if the super-low slew rate was causing the system crash.
@16:00 Timer-based games just worked; some languages exposed a function to calculate the delay loop duration for you. In particular, Turbo Pascal 6.0 and below (iirc) offered a function to do this, and it did that by running the slowdown loop once, checking the timer to see how long it took to run, and then running it for the desired time divided by that measured time. Which worked, until you got to ~233 MHz machines, where the calibration loop ran in less than one timer tick, making it divide by zero.
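To make that concrete, here is a rough C sketch of that calibration pattern; it is not Borland's actual code, just the general shape, reading the BIOS tick counter at 0040:006C the way a Turbo C style DOS compiler would (MK_FP from dos.h):
#include <dos.h>    /* MK_FP, for a far pointer into the BIOS data area */
/* The BIOS keeps an 18.2 Hz tick count at segment 0x0040, offset 0x006C. */
static unsigned long bios_ticks(void)
{
    return *(volatile unsigned long far *)MK_FP(0x0040, 0x006C);
}
static unsigned long loops_per_tick;
/* Run the slowdown loop a fixed number of times, measure how many ticks
   elapsed, then divide.  On a fast enough CPU zero ticks elapse and the
   division blows up, which is the failure mode described above. */
void calibrate_delay(void)
{
    volatile unsigned long i;
    unsigned long start, elapsed;
    start = bios_ticks();
    for (i = 0; i < 100000UL; i++)
        ;                                    /* the do-nothing slowdown loop */
    elapsed = bios_ticks() - start;
    loops_per_tick = 100000UL / elapsed;     /* divides by zero if elapsed == 0 */
}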
Excellent video. It reminds me of running a Nascom 2 (early British micro from circa 1979) in 1980 at about 40 kHz. The Nascom ran at 2 or 4 MHz, but a friend and I were interested in looking at the read/write signals etc. and we only had a crap 1960s 100 kHz scope. You could connect the CPU to an external clock (the video circuitry continued to run at its normal clock). We connected the cassette clock (about 48 kHz) to the external clock. Everything worked (including the keyboard) but super slow. There was 1K of static workspace RAM (and the video RAM was also static), so we could poke in a little loop (a jump instruction back to itself, 3 bytes). On the scope we could see the read signal for each byte as well as the M1 for the instruction fetch. Very instructive to our young minds. Interestingly, the DRAM (32K of 4116s) in the system worked at the low speed. Later experiments with DRAM showed we could delay refresh for hundreds of milliseconds with no problems.
I, for one, used to wonder this all the time growing up. Though my curiosity was around the 486 DX/50 MHz we had at home vs. the 16 MHz Macintosh LC-II machines in our computer labs at school.
Regarding timing: the original PC and XT had one clock crystal that everything used for timing. This caused problems when faster computers were used, as the clock and interval timing circuits also ran on this crystal, so even games that tried to use the timing circuits for delays ran into problems. These computers often had a "game mode" that slowed down the PC. And sometimes games could use the video card (which had its own crystal) to set delays. The PC AT had a second clock crystal to run the ISA bus, which allowed faster computers while running the bus at a slower speed (originally 6 MHz, later 8 or 12 MHz). This second fixed-speed crystal also ran the real-time clock and interval timing circuits. This fixed frequency and real-time clock provided an easy way to synchronize games across different-speed processors.
For the keyboard controller: feeding it a proper TTL signal might help, so use a Schmitt-trigger discriminator like the 74LS14, and perhaps a divider circuit for the lowest frequencies.
In Atari ST and Amiga times you could "enjoy" working on a 1 MHz XT MS-DOS machine with the quite commonly found PC-Ditto software emulator. The 68000 was not powerful enough to emulate the 8088 in real time, and the result was a machine that only managed an SI score of about 0.3. Later, with hardware emulation, it was a whole other story. I later had a PC-Speed emulator in my Atari Mega ST, and this emulated a nicely equipped XT with an SI of almost 2.
I believe PC-Ditto was slower than a 1 MHz XT (closer to 0.6 MHz), but it emulated a NEC V20 CPU, so an 80186 instruction set. I still have one STfm with a Vortex AT-ONCE modern replica. It is a 286 as the name suggests, but runs really slowly; the video (CGA) emulation in particular is crappy.
The 80286 has a major advantage in cycles per instruction over the 8086/8088, so the results you get are different from what you would have gotten from a PC XT type board.
Interesting video, thank you as always. One thing you could do in a follow-up video, if you get some comparable machine code to compare, is enable and disable video BIOS shadowing to demonstrate the difference, as the Apple II didn't have that capability. Keep the great videos coming.
I won't lie... at the beginning of the video I didn't see the point of your experiment - 1 hour and 12 minutes later I am thoroughly entertained! Great video!
It really floats my boat. I would have so many ideas for more tests, e.g. using a clock divider to make the signal more stable at low frequency, or using wait states instead of reducing clock speed.
Absolute
I'm 34 min in. This is the weirdest benchmark I never even knew was a thing to compare 😂
I learned a lot of stuff along the way so far tho.
Only Adrian Black would run an IBM PC at ENIAC clock speeds, and we all love it.
It would be fairer to pit a 65816 against the Intel chip, because then they are both multiplexing for memory access as well, and it's 16-bit too, so it's a fairer test. The Apple IIGS had one inside it.
Last statement of your video: Useless video?
I deny this. I think it is really valuable. For me as a developer it is interesting to see how things work together, hardware to software. You do not need a doctor's title to show your knowledge. Honestly. There are plenty of things I learned from your video which will influence my work in the future. Stay healthy. Keep up the good work.😊
Glad you enjoyed it! Thanks for the super thanks. :-)
I was writing video games for 8 bit home computers in 1989 using PCs running PDS which compiled 8 bit assembler and "squirted" it to the target machine with a parallel interface. I remember the day the first 286 showed up and how much faster it was than a 10Mhz 8088/8086, it was crazy faster.
PDS was a game changer. At the time, I used it to develop for C64s. Being able to use the target as a "dev kit" made things so much easier for development.
It was so fast that some things that relied on execution delay became unplayable or unusable even with the 4.77MHz/8MHZ “non turbo” speeds
I love it at the end with the on-screen seconds counter, where it was still counting the time correctly despite not being able to draw the clock every second, so it was refreshing the clock every few seconds. Very funny.
I would have loved it to have a power meter, to measure how many watts it consumes and find a sweet spot for the downclocking.
@@geografiainfinitului It doesn't help much, as evidenced by the hot CPU. Saving power based on frequency is more of a CMOS thing.
Basically an 8088 would start out with the same handicap that an 8080 or Z80 would, in that a basic clock cycle (T-state) does less. The 8080 (and Z80) needed 4MHz to do what a 1MHz 6502 could do. But that doesn't mean that it couldn't benefit from a pre-fetch queue. A 286 is probably more efficient per clock, and it can read more bytes at a time. I think the 286 was a much more interesting choice for this experiment than an 8088 would have been.
And I never realized how much I would enjoy seeing a variable clock speed knob on a PC.
The 8088 has more registers, mul/div instructions and rep movsb. A 4mhz 8088 should be able to beat a 1mhz 6502 for complex tasks.
@@phill6859 Not necessarily. The 8088 has a big handicap in accessing memory and also longer clock cycle counts for common instructions. The 6502 manages the register issue with zero page, but you are correct that the ALU is very basic. The original IBM PC was not far off other 8-bit machines of the time.
@@MarianoLu In modern speed tests focusing on complex operations, with code optimized for each CPU, the 6502 only came out at about 50% faster per MHz. For complex tasks the 4.77 MHz 8088 outperformed even the 2 MHz 6502s. This was the conclusion of one of the tests conducted a few years ago:
"A 1.77-MHz MOS 6502 in Atari 800XL/XE (PAL) required about 66 CPU cycles to go through a single inner-loop step. If graphics output was disabled during the test, this decreased to just 49 CPU cycles. A 4.77-MHz Intel 8088 needed about 73 CPU cycles to do the same. Thus, 6502 is faster if running on the same clock. On the other side, the original IBM PC is clocked 2.7x higher than the Atari and 4.7x higher than other important 6502 machines (Apple IIe, Commodore 64). Thus, IBM PC was at least twice as fast in this type of tasks (similar to compiling, XML parsing…). I’m not surprised, but it is nice to see the numbers."
@@dgmt1 Yeah, as I said, the ALU sucks in the 6502 for calculations, and a prime sieve test is a great way to highlight that. I'm surprised by the results, since I would have expected the 8088 to be much closer to a 6502 in this type of test. That is a great article from Swarmik on Tumblr, thanks for the reference; it shows the advantage being driven only by the clock speed, and cycle for cycle they are neck and neck. Just downloaded the code and will take a look to see if I can mod it to run on an Apple or C64, to send it to Adrian to try on his under-clocked 286 to see the difference. As a side note, I found it funny he mentioned XML parsing; XML came in the mid-nineties, so it would be really fun to see 70s and 80s computers trying to do that.
By the way, Dave's Garage just put up a video of his PDP-11 running a prime sieve, and he compares it to a Ryzen Threadripper; a nice one to watch.
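For anyone who wants to try that kind of comparison themselves, the benchmark being discussed is basically a sieve of Eratosthenes. Here is my own minimal C sketch of it (not Dave's or Swarmik's code, and the 8191 limit is just an arbitrary small size that fits a 16-bit compiler's data segment):
#include <stdio.h>
#include <string.h>
#define LIMIT 8191U
static unsigned char flags[LIMIT + 1];
int main(void)
{
    unsigned i, j, count = 0;
    memset(flags, 1, sizeof flags);           /* start by assuming everything is prime */
    for (i = 2; i <= LIMIT; i++) {
        if (!flags[i])
            continue;
        count++;                              /* i survived all marking, so it is prime */
        for (j = i + i; j <= LIMIT; j += i)   /* the inner loop whose cycle cost gets compared */
            flags[j] = 0;                     /* mark multiples as composite */
    }
    printf("%u primes up to %u\n", count, LIMIT);
    return 0;
}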
Hi Adrian! I know you get 20 billion messages and you might never read this but just wanted to say that you rock, love your stuff, you seem like a great guy. Thanks for the great videos!
You're right, it's not an Apples to Apples comparison.
It's an Apples to IBM compairison.
i just had the same thought LOL
@@Club_Michas same!
In the cellphone field back in the day they used to compare Apples to Blackberries!
I am loving the longer-form videos of late. More is better.
I second this
yup more plz :-D
@@IntegerOfDoom I guess these small clips are for the fast-food young generations
That ending though...... still processing below 100 kHz! Amazing!
FWIW, my first professional job right out of college was in the design verification team for a GPU company. We used PCs for our testing, and the AGP card we used as part of our test setup could not run at full AGP speed, so my first job, on my first day, was to desolder the crystal on a P2 class motherboard and replace it with half the speed. The whole machine worked fine, but a regular keyboard (PS2 style) would not, so we simply replaced the crystal in the keyboard with half the speed and it worked fine.
Obviously we never went down to 100kHz... :)
Great job!
Love the Elephant Memory Systems shirt!
Unforgettable.
I adored their ads but never saw their products in any store.
I remember stopping by the computer store every day on my way home to check if they had any floppies in stock.
The shirt is great. Those floppy media ads were full-page on the back of big magazines in the early 80s. "Elephants never forget"
Proof of the old adage - the ISA bus computer is I/O bound, not processor bound. This was very interesting!
NMOS processors use roughly the same power independent of the clock speed. The pull up loads are passive, so at any moment in time around half of the transistors are conducting. When the next clock comes you will still have around half of the transistors conducting, just not the same ones. CMOS processors (and AMD did make some CMOS Am286 later on) only conduct current at the clock edges. Speaking of clocks, the design style of the time was two non overlapping clock phases. Motorola and MOS processors had these phases coming in directly into separate pins while Intel and Zilog processors used a single clock pin with a higher frequency that was then divided down internally to get the two phases. This makes direct comparisons more complicated.
I found this video really interesting because I would have never thought to try lowering the clock speed until the computer stopped working. Your videos are always a pleasant surprise 🙂
Landmark should play "Daisy Daisy" while doing this...
"sorry, I can't do this, Dave!"
I'm losing myself, Dave, I'm slipping away...
Just as a quick addition to your talk about tying clock speeds to TV frequencies, the SNES also ties almost everything to the same 3.58 MHz clock. (The exception is the sound system, which is an independent coprocessor with its own clock signal.) Overclocking a SNES is therefore pretty much out of the realm of practicality for most people. The Genesis, on the other hand, provides an independent clock signal to the 68000 CPU, making it super-easy to overclock with just a crystal swap. I've gotten an otherwise stock model 2 Genesis running reliably at 10 MHz, but more daring hobbyists have replaced the 8 MHz 68k chip with ones that will run all the way to at least 20 MHz. Not many games benefit from the faster CPU, but a few really do.
Getting back to the SNES, it uses a customized version of the 65C816, itself a 16-bit descendant of the 65C02. It's also known to be incredibly efficient for its time; somewhere around twice as fast clock-for-clock as the 68000 in most operations.
The downside is that the 65816 required a roll-your-own approach, much as the 6502 did. Not as bad, mind you. But the Intel crap and the 68K had far broader and more useful instructions, and the 68K was pretty much fully orthogonal. I seem to recall there is a two-part video on YouTube that looks at the myth of the 65816 being faster than the 68K at the same clock speed. It's always hard to make direct comparisons, but the 68K was a hell of a lot easier to program than any of those other chips. You could pretty much use any 68K addressing mode with any instruction. I don't know if a real comparison is possible, but it would be fun to see it.
This was just straight up exciting to watch. You looked like you were having a blast this entire video, and I would be too if I were at the controls there. Fascinating stuff.
"Dai-sy Dai----sy, give.... me........your...........an-----swer........do..." - HAL
So Dave could have just dialed down the frequency clock to get same result
I read somewhere that the Jetkey is a fixed function ASIC rather than a microcontroller which might be why it works at much lower clocks.
I assumed this exact thing before he even ran it and that would 100% explain the stability. I'd even bet the limit was the keyboard.
@@CommodoreGreg Bingo. The PC keyboard protocol is clocked by the keyboard, so the microcontroller failed to keep up while the ASIC could simply use that clock directly. Once the microcontroller was too slow to stretch the clock, it started missing bits. Downclocking the keyboard too could compensate, but make it less responsive.
In my youth, I wrote assembler code for both Apple II+ and IBM PC (the 8088).
6502 was easy to learn but you had to write your own multiply/divide routines if you needed them. Doing anything with 16 bit numbers was a pain. The processor stack was only 256 bytes so forget about recursion :-).
The 8088 was harder to get into because you had to be aware of the segment registers that implicitly controlled where instructions read or wrote data to, and the whole notion of using software interrupts to talk to the OS took a bit of getting used to at the time, but having 16 bit registers made coding much easier.
Mov ax, bx
Yeah once you go 16 bit numbers on the 6502 you’ll need a lot more instructions to do everything. 😢
I loved the part about testing in the KHz range! Great video!
My impression of the era is that what limited performance in the day was mostly memory bandwidth, combined with designs which operated the CPU and RAM synchronously. Obviously, 8-bit micros suffered from sharing that bandwidth with the video generator.
One of the big wins with ARM in 1985 was that it was designed to take advantage of contemporary DRAM architecture with full 32-bit access cycles.
PCs saw a huge improvement with the introduction of high speed SRAM CPU caches.
Imagine if someone had launched a computer with a small but high speed SRAM scratchpad?
A 6502 would have been able to do some wild shyt with that in zero page mode.
The VIC, Atari and Famicom could run faster exactly because the bandwidth wasn’t compromised by video access. Either because of simply fewer bits or because they had separate VRAM.
A machine with 16 KB or 32 KB of VRAM, two 6502s working boxer-style on alternating cycles on a small chunk of SRAM, and a simple MMU to RAM would have been economically realistic and would have killed anything in the first half of the 80s WRT power (probably the latter half too).
@@Frisenette The Ataris were kind of hobbled vs the VIC-II. The C64 KERNAL and BASIC were ported to the Atari 800; see the 8-Bit Show and Tell channel. It was not a 78% increase in speed as you might expect, it was only 17-20%. The reason being that the VIC-II, the 6510, and the 2 MHz C64 RAM did an efficient dance; the Ataris, not so much. I bet they botched that at CBM and the C128 probably lost that at 2 MHz. Would be fun to benchmark a C128 vs an Atari. I bet the C128 would win, but not by that much.
“Adrians Analog Basement” would be a good second channel title.
I love this video, one of the coolest I have seen from Adrian. By no means useless, manipulating the clock is an amazing and unique idea and makes me wish I had tried this before. So I will keep this idea in mind, very cool! There is a way to keep the keyboard functional even at the lowest speeds if you can separately clock the keyboard controller at 8Mhz from some oscillator. It's pretty tolerant and doesn't need the clock to be synced to anything. You could also try dividing OSC from the ISA slot by two using a 74LS74 and see if the keyboard controller works on that.
Very cool video! Never knew a PC would be usable running at that low of a clock speed. Cool to see some Infocom games in your tests. I actually work next to one of the founders of Infocom!
An alternative to reducing clock speed: throw in wait states via the ISA slot, so that you can freeze or single-step code. This is what was done to break copy-protection dongles when the code was protected against software debuggers like SoftICE.
Better signal integrity is achieved by placing a 50 ohm resistor, cut to fit, into the clock and ground pins of the oscillator socket. Then you clip your signal generator, configured for 50 ohm operation, through a coax to that 50 ohm resistor in the socket. At those speeds, you could probably still get away with a BNC-to-leaded-clips adapter at the socket.
@AdriansDigitalBasement
This is next level tinkering! Well done!
I had no idea there was so much to consider when configuring a PC for a test like this. I'm amazed it works so well and in fact at all. For kicks, I ran your BBC BASIC fractal program using Bobbi Manners' Applecorn implementation of Acorn MOS to run BBC BASIC on a 1 MHz Apple IIe Enhanced. It completed it in 489 seconds, the same as your Applesoft BASIC measurement.
OK, something was bugging me about that AppleWin emulator run. I tried a real 1 MHz Apple IIe Enhanced (NTSC). It completed the fractal in 447 seconds, just under twice the time of a 2 MHz BBC Micro.
I'm only 12 min in but I'm going to comment: Great explanation of clock dividers, wait states, and early PC history. This is why I come here. It's not Adrian prattling on! It's exposition.
Just finished and what a fun video! I am going to seek out some PC hardware to play with. I am inspired by how flexible it is and your showing off some fun hardware techniques was very educational! IMHO, this is one of your best Adrian. Well done!
That crystal oscillator is a square wave output, but your signal generator is set to sine wave.
But it looks like a square wave on the oscilloscope. So that’s all the hardware cares about - rising and falling edges.
that was my "yelling at screen" moment at the beginning. I then looked in awe watching it running with sine input.
@@markmuir7338 True for what receives that divided signal, but possibly not everything does. And the divider may have a minimum slew rate which would be satisfied by a square wave but not sine wave.
This is funny, cause the slowed down XT performance (like DIR listing) reminds me a LOT of my PCJR, even with a V30 chip!
My work laptop (8th gen i7) was overheating so bad that it was running at 200MHz. I thought THAT was bad! Impressive running the computer at 100kHz!
Why not repaste that machine?! And clean the heatsink. Edit: I'm blind, it's a work laptop. Escalate to IT!!!!!!
I wrote games on an Apple ][+ in assembly language in the 80s, and it was definitely necessary to count clock cycles, and to use tricks like precomputed address tables for accessing rows on the hi-res buffers, self-modifying code to use instructions with faster address modes, unrolling loops, etc, in order to wring the most performance out of my code. So it was very much a case of needing to understand the architecture of the computer and the 6502.
The clock itself is treated a bit differently between the 6502 and Intel... I am not aware of Intel treating "half periods" specially, whereas the 6502 does. To some extent (not entirely) the 6502 gets twice as many cycles as the clock frequency suggests, making the comparison even harder. That said, the only fair comparison is to look at whatever common parts or maximums were available at any one time and compare those.
First, I love your videos. Second, a tangential comment about my Apple //c in 1990, where I tried to increase performance. I had switched to a Mac by 1990, so I was willing to take my //c apart. I installed a RAM disk and loaded the word processor and spelling checker into the RAM disk, and it was faster than needing to seek information from the floppy disk. I tried a 4 MHz processor, but it was incompatible with the RAM disk. So, in my 1990 project, the RAM disk alone versus the faster CPU alone was the most efficient for my word processing tasks. Besides a dumb terminal with modem, I never used the //c for anything else. But I loved my //c and transported it between office and home (2 monitors at each location).
I love to watch these long-form videos. It's like a series that I watch over a few days. But this one took me only 2 days. Fantastic work. I want more!
As soon as you do anything that demands 16-bits (or more), instead of just 8, the 286 is at least ten times as clock cycle efficient as the 6502. The 286 can do a 16-bit addition in 2 clocks, while the 6502 needs at least 20 clock cycles for the same thing.
So, with 18/20 cycles on a 6502 you can do 16-bit addition from memory to memory:
CLC ; 2 cycles, instruction which you don't need if you know your C flag is already cleared
LDA $00 ; 3 cycles
ADC $02 ; 3 cycles
STA $04 ; 3 cycles
LDA $01 ; 3 cycles
ADC $03 ; 3 cycles
STA $05 ; 3 cycles
In order to do that on a 286, you will have to do MOV, MOV, ADD, MOV, that is 3+3+2+3 = 11 cycles. Even if you were to add by mutating one of the operands, you would need a MOV and an ADD, at 3+7 cycles; yes, the "ADD mem, reg" instruction takes 7 cycles, so it would be 10. Correct me if I'm wrong, but the spec for the 80286 clearly mentions that it takes 7 cycles if a memory operand is used; and "ADD mem, mem" obviously doesn't exist.
I agree with you that it would be faster (twice as fast), but not 10 times faster. And btw, the 286 doesn't have that many registers either, so unless you have a very particular algorithm able to use it and a very optimized implementation, it's not going to be any faster than that.
@@franciscocastro4017 You are right in a sense. However, most critical algorithms could be performed mainly in registers on the 286. Say the inner loops of transcendental functions in a compiler or interpreter. So it can indeed be 20 times as clock cycle efficient as the 6502, i.e. when hand optimized as hard as code typically has to be on the 6502. (And it's probably about the same ratio in many cases when code is not very optimized on either...)
Yes it takes 5 additional clocks to address memory in an ALU operation. But that memory operand can use an indexed address, still at the same speed. Such as for accessing a parameter or local variable from a stack frame. So for high level languages, the 286 is very efficient compared to the 6502 (that doesn't really have a properly sized stack, or efficient instructions for indexes/offsets in it).
The 6800/6502 family were interesting minimalistic processors. But the alleged efficiency of the 6502 is largely a myth in a "hardware" aspect too. The main reason it can cope with relatively few clock cycles is that it uses a slow, low-resolution clock, a design principle that poses several times higher requirements on memory timings than did the contemporary "arch rival" Z80 (for instance).
The 6502 allowed less than half a clock cycle for memory access, while the Z80 allowed two full cycles for memory to respond. That's why the Z80 could be run at 5-6 MHz using about the same speed memories as the 6502/10 needed at around 1 MHz (i.e. 300-250ns).
With memory speed and prices being the primary limiting factor for small computers in the 1970s and early 80s, that was a significant difference.
An upside of the 6502 design, though, was that it was easier to build video hardware around it (having the video circuits simply use these free half-cycles when memory is never accessed by the 6502).
It would be more reasonable to compare a 286 to the 65816, not the 6502.
Alley cat and zaxxon totally jogged a memory from my youth I could barely remember. I must have played those for hundreds of hours over several years.
This was very entertaining. I love these kinds of videos, where it's just playing around with things and seeing what happens. It's always amazing to me how these older computers worked, even if they're not too different from modern ones. Seeing the way all the "pieces of the puzzle" interact with each other, and swapping one out or pulling at one really highlights that. ^_^
This is one of my favorite channels on YouTube. I can't get enough. ^_^
I knew it was possible because I watched a video on someone breadboarding a 386 (or maybe it was a 486.) They got it running in the kilohertz range. But seeing it on an entire motherboard is just different and fun. Thanks!
NMOS chips have a very consistent static power draw, because only transistors pulling low to zero are switching, while the transistors pulling high to 1 are always weakly on acting very much like pull-up resistors.
love the lil heatsink on that 286! glued and all in a plcc socket!
35:40 Yes, that generation of 286 processors was still built in NMOS. Hence mostly static power dissipation, independent of the clock or switching frequency (unlike CMOS). The heat stems from all the depletion-mode transistors used as pull-up "resistors".
Wow this brought back some memories. I printed off our wedding invitations on Print Shop in 1985. Thanks for this video. I love watching your videos
That was a really fun video Adrian!
That shirt really brings back memories! Is that an original? Still have plenty of those //e disks. Great video. I like where your head is at. “I wonder…”. YES!!!🙌
As far as I know, in the original PC and PC/XT the PIT output frequencies rely on the main clock (or the 1.19MHz derived from 4.77MHz). So there was no independent main clock timer on the original PC and PC/XT. It was added later, on PC/AT, where it was based on RTC (e.g. INT 15h/86).
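For anyone who hasn't seen the divider chain spelled out: on the PC/XT everything falls out of the single 14.31818 MHz reference crystal, divided by 3 for the CPU and by 12 for the 8253 PIT, whose channel 0 defaults to a divisor of 65536. A throwaway C sketch of just the arithmetic:
#include <stdio.h>
int main(void)
{
    double xtal = 14318180.0;       /* 14.31818 MHz reference crystal */
    double cpu  = xtal / 3.0;       /* ~4.77 MHz CPU clock on the PC/XT */
    double pit  = xtal / 12.0;      /* ~1.193 MHz input to the 8253 PIT */
    double tick = pit / 65536.0;    /* channel 0 at its default divisor */
    printf("CPU clock : %.4f MHz\n", cpu / 1e6);
    printf("PIT input : %.4f MHz\n", pit / 1e6);
    printf("Timer tick: %.2f Hz\n", tick);   /* ~18.2 Hz */
    return 0;
}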
I'm surprised at how well Alley Cat was developed. It could run at any speed and still be a nice-looking game.
That's why I did it for the zx spectrum
Victor Frankenstein's Digital Basement. One of the most fascinating retro computing videos I've seen anywhere, it's such a 'thinking outside the box' / mad scientist idea. Loved this.
Great video! I ported the PC version of Attack of the Petscii Robots, so that was fun to see that it could run at lower clock speeds just fine. A lot of the code was written in C, with assembly routines used for writing to the video/sound card.
I used Word Perfect 5 a lot in the early 90s. F1 brings up a help screen and allows use without a keyboard template. Great video.
Maybe a 286 overclocking video would be a cool next project. To see how far you can push it.
Speaking of slow clock, chips from this era were based on NMOS process which does not implement fully static design.
Simple flip-flops implemented with pass transistors and a dual phase clock may not work correctly due to insufficient parasitic capacitance to hold the charge between two cycles.
Ken Shirriff's blog gives detailed explanations on the subject.
3:55 Obviously it is an Apples to IBMs comparison
🙄
😂 And don't forget the obvious basket 🧺 of disks
This was extremely interesting! Not the results I expected at all!
The other problem in comparing an Apple ][ to an IBM PC isn’t just the CPU chips, it’s the architecture of the boards themselves. The Apple used video circuitry which directly had access to specific addresses in main RAM, reading them in the off-cycles when the 6502 was doing internal stuff. The IBM PC and clones needed an external card to do even CGA or monochrome; there was no on-board video generation. So to display a character in text mode, either the 8088/86 had to talk to the video card, or the card had to do DMA to fetch data from the main RAM, and the CPU still had to tell the card what video mode to be in so it’d know what to do with the data it got. IIRC, this went through a BIOS routine that corresponded to the C language function, putch().
PC video cards had their own RAM, and that RAM was in one of two locations based on what mode you used. So software generally did just poke the screen. The original CGA card had RAM that was only fast enough for the display, so writing to the RAM needed to be done during vertical blank. But by the late 80s nobody cared, as RAM was fast enough. The BIOS was still used for switching modes, but there were times when this was skipped to allow more advanced modes (like Mode X).
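In case anyone wants to see what "poke the screen" looks like in practice, here is a minimal sketch for a 16-bit DOS compiler (Turbo C style, assuming MK_FP from dos.h); segment 0xB800 is the standard color text mode location, and each cell is a character byte followed by an attribute byte:
#include <dos.h>   /* MK_FP, for building the far pointer */
void put_char_at(int row, int col, char ch, unsigned char attr)
{
    /* Color text mode video RAM sits at segment 0xB800 (MDA uses 0xB000).
       80 columns per row, two bytes per cell. */
    unsigned char far *video = (unsigned char far *)MK_FP(0xB800, 0);
    unsigned offset = (unsigned)(row * 80 + col) * 2;
    video[offset]     = ch;                  /* the character itself */
    video[offset + 1] = attr;                /* e.g. 0x07 = light grey on black */
}
Something like put_char_at(0, 0, 'A', 0x07); drops an 'A' in the top-left corner with no BIOS call involved.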
Amazing video. I would love to see the same thing on an XT 8088 running at 1mhz instead of the 286 which I assume has several architectural advantages
That C&T SCAT motherboard was the one used when I had a 286 computer starting in 1990 or so. It was, in my opinion, a really nice 286 board, alright. It seemed to outperform all of the others I had tried out. I still have it in a box somewhere, and now I feel motivated to get it up and running again. Usually I'm just into 8-bit computers, but these old PCs were pretty much contemporaries, and I might get back into them someday soon.
I used that BBC BASIC to write a lot of stuff for PCs in the 1980s and 90s. The person who wrote it is still active and has ported it to the Raspberry Pi. There is also a version for the Raspberry Pi Pico microcontroller.
It was fun and educational at the same time. Thanks for going along with this idea. To extreme lengths :D
Just a note: "Without further ado" does not mean "after the intro video and music". Maybe "After this ado" would be better?
Perfectly demonstrates how having the video card able to do its own thing without relying on the CPU's clock cycles can have a big advantage in overall system speed.
One of the cool things about the CMOS Z80 is that it's fully static -- you can completely stop the clock, or run it at 1Hz if you want. It's a neat way to actually watch the opcodes and such being fetched and executed.
I would have lost that bet (will the apps run?). Great seeing WP 5, though I think 5.2 is the version I used and really got good mileage out of (macros, embedded printer codes, etc.). And let's not forget Reveal Codes! Great research and presentation, Adrian. See you at MW next week. (Michael)
Back in the day, we had computer class once a week at school.
I never had a PC, so I used an AT emulator on an unexpanded Amiga 500 to study at home.
It was roughly equivalent to a 0.5 MHz 8086, and DOS and GW-BASIC worked fine.
This is probably the first time since 1984 that someone has tried to slow down a 1980s PC! :D Fun video! Alley Cat also ran quite slowly; it is a surprisingly responsive game if I recall correctly, but 35-year-old recollections are not reliable...
So the reason early games on the PC rarely reprogrammed the interrupt timer is that a lot of people were working in high-level languages like BASIC, Pascal, and C, and those languages didn't offer a direct way to modify the timer; the only timer you could be sure was in place ran at 18.2 Hz. You either had to learn how to reprogram the timer from someone else or from a book, which was a much more low-level process, or you had to just make do. So if you wanted your game to run faster than 18.2 FPS, you might just let it go as fast as it could, often with a manual speed adjustment the player could use, which simply introduced loops that did nothing but count repeatedly to burn a little extra CPU time every frame. It's really only in the past couple of decades that the majority of games have been timed more appropriately. :P
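For the curious, reprogramming the timer wasn't much code once you knew the ports; the hurdle was finding that out in the pre-internet era. A minimal sketch, assuming a Borland-style DOS compiler for outportb() (the PIT ports 0x40/0x43 and the ~1.193 MHz input clock are standard PC hardware); a real game would also hook INT 8 and chain the old handler at 18.2 Hz so the BIOS time-of-day count stays correct:

```c
/* Sketch of reprogramming PIT channel 0 to fire faster than the stock
   18.2 Hz. outportb() and <dos.h> assume a Borland-style DOS compiler. */
#include <dos.h>

void set_timer_rate(unsigned int hz)
{
    /* The PIT input clock is ~1,193,182 Hz; the counter divides it down.
       hz must be at least 19 or so for the divisor to fit in 16 bits. */
    unsigned long divisor = 1193182UL / hz;   /* e.g. hz = 70 -> ~17045 */

    outportb(0x43, 0x36);                     /* channel 0, lo/hi byte, mode 3 */
    outportb(0x40, (unsigned char)(divisor & 0xFF));        /* low byte  */
    outportb(0x40, (unsigned char)((divisor >> 8) & 0xFF)); /* high byte */
}

void restore_timer_rate(void)
{
    outportb(0x43, 0x36);
    outportb(0x40, 0x00);   /* a divisor of 0 means 65536 -> the stock 18.2 Hz */
    outportb(0x40, 0x00);
}
```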
Thanks for the explanation ADG dude!
One example of a game that didn't work on a faster system was Wing Commander. I remember it was great on a 386, but when I tried running it on a 486DX/66 it was so fast it was unplayable.
Indeed. Another from my recollection is the PC version of Bubble Bobble. It works fine on 286 machines but really doesn't work on 486 machines anymore; it has all sorts of speed and performance issues. If I recall correctly, the timing goes the wrong way and it's actually too slow on a fast machine.
I'm 35. I absolutely remember that. Do you also remember that if you shot at your own mothership enough, it would explode?
Some versions of WC2 are sort of playable on a Pentium, except for the animations; things move too far per frame, so in the launch sequence only the front of the ship is shown.
You could play WC1 on an 8 MHz EGA machine, though it was a bit painful. A 386 is the sweet spot. A lot of games were unplayable on a 386 too, but after that games got frame limiters. A lot of the shareware CDs came with some slowdown program so you could play.
@@lassikinnunen WC2, from what I recall, was quite playable on a 5x86/133 (which is a 486DX5, not a Pentium-class machine but with integer performance comparable to a Pentium/75). But WC1, not so much.
As a kid I never understood why computers had turbo buttons. I always wanted to go fast. I never considered that older games would be too fast.
I love watching videos about overclocking, seeing how fast computers can go.
I've never paid much attention to underclocking, but this was much more entertaining.😂
This was even better than the OVERCLOCKING video! 👍
Good to see Adrian doing some totally normal comp.....oh wait, that's a different channel.
Not all PC keyboard controllers are microcontroller based. The VIA keyboard controllers are implemented using all discrete logic gates. They don't require programming.
Dave Plummer probably has the assembly for a prime sieve for both already hashed out
It's worth noting that, even within the original theme of this video, a 6502 at ~1 MHz vs a 286 at ~1 MHz is still not really comparable. Even down at sub-1 MHz, the 286 is still moving more data and working more efficiently per clock than a 6502 could ever hope to at the same speed. It's an important lesson that clock speed is only one factor when it comes to CPUs.
Love the Elephant Memory System tshirt. Looks brand new! Where did you get it?
This was very interesting and makes me want to experiment with my own old PCs and function generator.
@1:06:00 get that machine an espresso STAT! This was so informative as always.
Sir Adrian, you R the "Geek Whisperer". And we love it
Meanwhile, I think this is neat; I now look at 2 GHz much like it's 0.2 MHz! Subbed!
Fun fact about timings on IBM PC compatible machines: Windows 95 will crash during boot on a sufficiently fast computer. It performs an operation and measures how many time intervals it takes to complete the operation. If the computer is fast enough, it takes _zero_ time intervals, Windows divides by zero, and crashes.
Windows 95 was such a disaster of an OS lol
Neat experiment! It's possible that using a sinewave at those low frequencies may have caused some issues with the logic circuits. You could try that again using a square-wave to see if the super-low slew rate was causing the system crash.
I would love to see the CGA games "Digger" and "Paratrooper" on this. 😅
This video was great fun.
Thank you very much.
Underclocking in its perfection...
@16:00 Timer-based games just worked; some languages even calculated the delay loop duration for you. In particular, Turbo Pascal 6.0 and below (iirc) offered a function to do this, and it worked by running the slowdown loop once at startup, checking the timer to see how long that took, and then running the loop for the requested delay divided by that measured duration. Which worked, until you got to ~233 MHz machines, where the calibration loop ran in less than one timer tick, making it divide by zero.
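For illustration only, here is that calibration pattern sketched in C rather than Pascal; this is not Borland's actual CRT code, just the same failure mode it describes. The BIOS tick count at 0040:006C and the MK_FP macro are the only machine-specific pieces, and busy_loop() is a made-up stand-in for the reference delay loop:

```c
/* Hedged sketch of timer-based delay calibration and its divide-by-zero
   failure mode. The BIOS maintains an 18.2 Hz tick count as a dword at
   0040:006C; MK_FP assumes a Borland-style DOS compiler. */
#include <dos.h>

#define BIOS_TICKS (*(volatile unsigned long far *)MK_FP(0x0040, 0x006C))

static void busy_loop(unsigned long iterations)
{
    volatile unsigned long i;
    for (i = 0; i < iterations; i++)
        ;                         /* burn CPU time, nothing else */
}

/* Returns roughly how many loop iterations fit in one BIOS tick (~55 ms). */
unsigned long calibrate(void)
{
    unsigned long start, elapsed;

    start = BIOS_TICKS;
    busy_loop(100000UL);          /* the reference run being timed */
    elapsed = BIOS_TICKS - start;

    /* On a fast enough CPU the reference run finishes inside a single
       tick, elapsed is 0, and this division blows up, which is the same
       failure mode as the fast-machine runtime errors described above. */
    return 100000UL / elapsed;
}
```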
Excellent video. It reminds me of running a Nascom 2 (an early British micro from circa 1979) at about 40 kHz in 1980. The Nascom ran at 2 or 4 MHz, but a friend and I were interested in looking at the read/write signals etc., and we only had a crap 1960s 100 kHz scope. You could connect the CPU to an external clock (the video circuitry continued to run at its normal clock), so we connected the cassette clock (about 48 kHz) to the external clock input. Everything worked (including the keyboard), just super slowly. There was 1K of static workspace RAM (and the video RAM was also static), so we could poke in a little loop (a jump instruction back to itself, 3 bytes). On the scope we could see the read signal for each byte as well as M1 for the instruction fetch. Very instructive to our young minds. Interestingly, the DRAM (32K of 4116s) in the system worked at the low speed. Later experiments with DRAM showed we could delay refresh for hundreds of milliseconds with no problems.
This is just an amazing motherboard. I wish I could find one.
Cool vid! Loved the MHz cycling.
This was way more interesting than it should have been.
I, for one, used to wonder this all the time growing up. Though my curiosity was around the 486 DX/50 MHz we had at home vs. the 16 MHz Macintosh LC-II machines in our computer labs at school.
I had a blast with this episode! I was like a child smiling and enjoying all the experiment. Very FUN! Thank you for a wonderful Saturday! 😉🌞👍
Regarding timing: the original PC and XT had one clock crystal that everything used for timing. This caused problems when faster computers were used, as the clock and interval-timing circuits also ran off this crystal, so even games that tried to use those timing circuits for delays ran into problems.
These computers often had a "game mode" that slowed the PC down.
And sometimes games could use the video card (which had its own crystal) to set delays.
The PC AT had a second clock crystal to run the ISA bus, which allowed faster computers while running the bus at a slower speed (originally 6 MHz, later 8 or 12 MHz). This second, fixed-speed crystal also ran the real-time clock and interval-timing circuits.
This fixed frequency and real-time clock provided an easy way to synchronize games across processors of different speeds.
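As a rough illustration of that last point, a game could pace itself off the fixed-rate 18.2 Hz BIOS tick counter via INT 1Ah and get the same frame rate on any CPU. A minimal sketch, assuming a Borland-style DOS compiler for int86() and <dos.h>:

```c
/* Sketch of pacing a game loop off the fixed-rate BIOS tick counter,
   read through INT 1Ah, function 0 ("read system clock counter"). */
#include <dos.h>

/* Read the BIOS tick counter (incremented 18.2 times per second).
   The 32-bit count comes back in CX:DX. */
static unsigned long read_ticks(void)
{
    union REGS r;
    r.h.ah = 0x00;
    int86(0x1A, &r, &r);
    return ((unsigned long)r.x.cx << 16) | r.x.dx;
}

/* Block until the counter changes: one call per game frame gives a
   steady ~18.2 fps whether the CPU runs at 1 MHz or 25 MHz. */
void wait_for_next_tick(void)
{
    unsigned long start = read_ticks();
    while (read_ticks() == start)
        ;   /* spin; the count advances on each timer interrupt */
}
```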
For the keyboard controller: feeding it a clean TTL signal might help, so run the clock through a Schmitt-trigger like the 74LS14, and perhaps add a divider circuit for the lowest frequencies.
In Atari ST and Amiga times you could experience working on a roughly 1 MHz XT MS-DOS machine with the quite commonly found PC-Ditto software emulator. The 68000 was not powerful enough to emulate the 8088 in real time, and the result was a machine that only managed an SI score of about 0.3. Later, with hardware emulation, it was a whole other story: I had a PC-Speed emulator in my Atari Mega ST, and that gave me a nicely equipped emulated XT with an SI of almost 2.
I believe PC-Ditto was slower than a 1 MHz XT (closer to 0.6 MHz), but it emulated an NEC V20 CPU, so an 80186 instruction set. I still have one STfm with a modern replica of the Vortex AT-ONCE. It is a 286, as the name suggests, but it runs really slowly; the video (CGA) emulation in particular is crappy.
In the end you could hear the computer saying "I'm sorry Dave, I'm afraid I can't let you do that..."
The 80286 has a major advantage in cycles per instruction over the 8086/8088, so the results you get are different from what you would have gotten from a PC XT type board.
Interesting video, thank you as always. One thing you might do in a follow-up video, if you get some comparable machine code to compare, is enable and disable video BIOS shadowing to demonstrate the difference, as the Apple II didn't have that capability. Keep the great videos coming.