This has turned into the best introduction to digital electronics I've ever seen. A great achievement and Ben should be assured that many people really appreciate the huge amount of effort he's put into these videos.
I for one would love to see him do something like this for analog electronics of some sort; not sure what, perhaps a ham radio or something else.
@@unclefreddy2009 As an amateur radio and Ben Eater enthusiast I would love that!
I learned far more here in hours than I did in four years at college, though it was for CIS not engineering/science.
Ben is the man.
Back in the day, projects like this were hardcover books, often from TAB. And while the basic logic series chips were affordable to a kid, the "interesting" chips like RAM were not.
@@JohnDlugosz There's a story by The 8 Bit Guy, I think, on here about the Commodore 64 predecessor (where all the tasty support chips were developed and which 'wanted' to be a real thing)... They were determined to use SRAM and have ROM cartridges make up the deficit in program space. The software engineers wanted 6kB, the EEs wanted to give them 4kB, and management chopped them both down to 2kB (just fine for an Arduino, I guess!) But comparative prices on today's Mouser in the UK are:
GBP 0.40 - 16-DIP dual JK-FlipFlop (CD4027BE by TI) ;
GBP 1.17 - 1/20" pitch (32 legs dual-in-line) SMD 1 Mbit SRAM [10ns access] (IS61WV1288EEBLL-10TLI by ISSI);
GBP 1.33 - 0.8mm pitch QuadFlat (32-leg) ARM Cortex M0+ 64MHz MCU: 128kB flash, 36kB RAM (STM32G070KBT6, by ST Microelectronics).
Apart from a few SMD-to-DIP (or pin grid) adapters for the latter chips (perhaps on the slow boat), that is a massive ratcheting-up of value. When you do (or might possibly) need discrete logic these days you really end up paying for it, unless you know of a bargain bucket and really know what might come in useful ahead of time.
Massive props for selling a product with “world’s worst” written right on the box.
lol
I cackled when I saw that, and immediately subscribed. Big "Rotting Turtle" energy.
It doesn't matter what you title it; if it's DIY, people will buy
@@averagejoe9040 In this case, he isn't lying about its quality... But VGA is dated, so it can use dated hardware, so this graphics card probably was the *best* in its time. Same with the computer.
@@cxpKSip unlikely. Even for when these components came out, this wouldn't have been a good graphics card.
🤯
Time to buy myself some kits
Would love to see your video on that topic.
Smart, are we talking crossover?
Collab
@@Ralfidogg They could collab and make a rocket guidance system from scratch.
That's why this dude gets smarter EVERY day :)
I feel like with every video the inevitable engineering question comes closer.
"But can it run Doom?"
If the cpu can handle it, hopefully yes
It would run at 1 fps if it ran
It probably could, but not at a playable framerate.
@@nikkiofthevalley Somehow that makes it sound even better, to drive home the "world's worst" label for laughs if nothing else.
@@starcrashr I'm half-tempted to try to calculate the framerate you would get, but I'm pretty sure that's not possible.
Doing stuff for 25 minutes straight without testing is the single most terrifying and brave thing I've witnessed
Don't worry, he tested plenty before he made the video
He tested at least 1000 times and went through trial and error at least another 1000. He spent at least 50 hours on this.
There's plenty of editing as well.
There's editing and so. But I don't doubt Ben's capability.
Imagine making a Minecraft computer without testing it till it's done
The lengths we have to go to because Nvidia can't stock enough graphics cards.
Ray Tracing support when?
🤣🤣👌
rtx on 1 ray per month
You can do ray tracing on it. It may take years to render a single frame, but you *can* do it.
@@ze_rubenator
yes exactly
it is possible
you just need to do the tracing in software as there is obviously no hardware raytracer installed
You can probably make a raycaster with it at least LOL
I’ve been teaching myself EE and programming the past couple years, and am a 15+yr IT professional and current virtualization/SAN architect for the government... and every time I accomplish something big personally, say write a Pong game in a new language or build a virtual 6502 in Minecraft or a new gadget from parts, I look at your videos and still aspire to be anywhere close to your level of badassery one day. You are my hero, Ben. More so than any other creator on YouTube, and there are some geniuses on this platform. But you’re the cream of the crop, seriously.
You’re my hero, a dude doing his best
I mean, after watching this video I feel like the fact that I can just go out and buy a random graphics card and plug it into a random motherboard and expect them to work is pretty amazing.
universal plug-and-play baby :D
On the flip side I wonder what innovations are being discarded because they are incompatible with the standard.
fr tho
Lies, you can't buy GPUs nowadays /s :D
This hits different now.
The fact that you had to make both the computer _and_ video card even worse for them to work together is hilariously fitting.
"Instead I'm going to do something . . . fairly terrible."
*Does the 80s and 90s industry standard*
I was gonna say, Gameboy graphics does this method exactly
@@sydneybiscuit Gameboy graphics has its own VRAM, separate from the CPU's RAM, so it doesn't disable the CPU when it's reading VRAM, it simply disables the CPU's access to the VRAM at those times. This allows the CPU to do processing while the graphics are being read, but it does mean the CPU needs to use VBLANK interrupts and the like to ensure it only ever accesses the VRAM while the graphics are idle.
@@weardanaether5539 ah, TIL! Thanks for the info. I appreciate the insight
Elaborate on that?
@@weardanaether5539 Absolutely the case for many of the devices that use blanking periods for computation, but from the performance perspective of shutting of the CPU (or redirecting its time to outputting the video signal) the NES, SNES, I think even the C64 and similar systems all process only while the blanking intervals are active. The cunning examples are things like Mario Kart that actually have additional blank lines in the UI to allow more compute time.
If the bottleneck is due to RAM access or the CPU being busy with the display, it's still only doing non-display work during a blanking period and the overhead remains the same.
So the hardware reasons are different, but the timing and technique are functionally similar.
*time slows dramatically and voice slows down as I lunge towards Ben's breadboard*
"Noooooo, Ben! Use an opposite phase clock and interleaved memory reeeeaaaadddddssssss....."
*time freezes as Ben connects to RDY with an ominous thunk.*
We need to convince Ben to make the second-worst video card ever
Primitive Technology: Advanced Edition
Primitive Technology: Teaching Rocks to Think
@@jafizzle95 Don't oversimplify, first we had to flatten the rocks and put lightning in them
This is what aliens watch when they cannot sleep at night
@@prathameshdighe1485 guess I'm an alien
Primitive Technology: +3000 years
*he finally did it! he finally connected the video card to a computer!*
"and that's what I'll do in the next video!"
saddest words in all of history.
This counts as part of my engineering degree I guess... I'm great at making excuses not to study
It *is* valid real-world learning.
Same here
This sort of thing is my Achilles heel. Stuff like this fascinates me, but filling out a lab report on the basics of resistance is like pulling teeth.
Same here, I'll say this counts as study for my electronics engineering degree lol
@@toahero5925 agreed
School looking for student computers: "ILL TAKE YOUR ENTIRE STOCK"
"I've been waiting so long for this day to come." "The elections ending?" "No, Ben installing the world's worst video card"
We must listen to people that actually know what they are saying
Love the "some assembly required" on your kit's box LOL
This is the greatest crossover since the avengers first met
You are still the best teacher of computers and just electronics in general I have ever encountered. So glad you take the time to make these videos and share your knowledge with others.
Keep up the awesome work Ben!
Hey, Ben. For my Senior Project to graduate in Computer Science, I had a Sponsor that used your 8-bit computer series for inspiration. We ended up with a 20-bit, Turing complete computer. There were deviations from your original project to achieve this. Anyways, I finally did one of your projects; wanted to for over two years. My brain is oatmeal after doing it in about two months. Cheers.
Why 20 bits? Wouldn't that be more difficult than a power-of-2 bit width?
@@Zeitiah Typically a computer's bit width refers to the width of its data bus. The 6502 is 8-bit, so in their example the data bus would be 20 bits wide. If they used a 6502 they likely built a large amount of bus circuitry to extend it, so that with multiple 8-bit reads/writes you could push and pull 20 total bits on and off the bus, assuming they didn't replace it with a 24/32-bit processor.
@@Jackpkmn Yeah, but wouldn't it be harder to have 20-bit-wide busses compared to a bit-width that's a power of 2?
@@Zeitiah Yup. Unless you used a 24 or 32bit processor then it would just be you aren't using all its capability.
@@Zeitiah It's technically 16-bit with a 4-bit flags register. 4 bits of the overall 24 bits are unused.
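A minimal sketch of the multiple-transfer idea from a few replies up, assuming three 8-bit bus cycles per 20-bit word (the byte ordering is illustrative, not anything from their project):

```python
# Pack a 20-bit word into three 8-bit bus transfers and back.
def split20(value):
    # low, middle, high bytes; the top nibble of the high byte is unused
    return [(value >> shift) & 0xFF for shift in (0, 8, 16)]

def join20(parts):
    lo, mid, hi = parts
    return ((hi << 16) | (mid << 8) | lo) & 0xFFFFF  # keep only 20 bits

assert join20(split20(0xABCDE)) == 0xABCDE
```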
This series has taught me so much more about how digital logic and especially assembly code work than my years at college did. All they taught us at college was how to increment, decrement, read and write registers, and add binary. That's great. But they never taught us why, or how it would be useful, which to me made it difficult to understand. This series makes you appreciate how it works and, more importantly, why it works and how to use those functions! So much more intuitive! Fantastic series!
I'm surprised you didn't mention the clock phase system that most 6502 computers implemented for shared RAM access.
It's the whole reason the CPU's clock pin is labeled PHI2 (i.e. Phase 2, like the video circuitry gets Phase 1).
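A toy model of that interleave, purely as a sketch (real hardware does this with clock-driven bus multiplexing, not code):

```python
# Phase-interleaved RAM sharing: video owns the bus while the clock is
# low (phase 1), the CPU owns it while the clock is high (phase 2), so
# neither ever has to wait. Addresses here are made up.
ram = list(range(16))
pixels = []

for cycle in range(8):
    pixels.append(ram[cycle % 16])   # PHI1: video fetches a pixel byte
    ram[(cycle + 8) % 16] = cycle    # PHI2: CPU reads or writes freely

# The 6502 only performs its memory access while PHI2 is high anyway,
# which is what makes this trick work on machines like the Apple II.
```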
But won't the GPU be running way faster than the 6502? Like, 10 MHz vs only 1 MHz. And you can't just halt the GPU for some time, because it'll cripple pixels. Can you elaborate on this one please?
@@MrJake-bb8bs Yes, for this to work both the GPU and the 6502 need to run on the same clock. And no, halting the GPU won't cripple pixels. The only problem is that because the 6502 is using half of the cycles, you'll need the GPU to run at twice the speed to compensate.
And you don't need crazy fast speeds to run at 100x75. Even 1 MHz should be sufficient; 10 MHz would be plenty.
@@mustafacanelmaci I think achieving 10 MHz on a breadboard will be fairly difficult. And by "cripple" I mean cripple the image being displayed, at least in the visible frame. I'm thinking about doing something with the interrupts. For example, use a buffer and write the data to the VGA memory when it's in a blanking time. But my calculations give me only about 740 cycles in the vertical blanking time (1 MHz, 800x600).
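For anyone wanting to check that figure, a quick calculation assuming the standard 800x600@60 Hz VGA timing (40 MHz pixel clock, 1056x628 total raster):

```python
# CPU cycles available during vertical blanking at 1 MHz.
pixel_clock = 40_000_000            # Hz, standard for 800x600@60
total_cols, total_rows = 1056, 628  # full raster including blanking
visible_rows = 600
cpu_clock = 1_000_000               # 1 MHz 6502

line_time = total_cols / pixel_clock                  # seconds per scanline
vblank_time = (total_rows - visible_rows) * line_time
print(round(vblank_time * cpu_clock))                 # ~739 cycles
```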
There is a problem with drawing from a 1 MHz CPU. We're working with a 100x64 buffer. Each pixel is addressed as a byte. The STA instruction (write a byte to memory) takes 4 cycles. 64*100 writes * 4 cycles is 25600 cycles to fill the buffer. 1 MHz is 1,000,000 cycles per second. That's 0.0256 seconds to fill the buffer (ONLY writing, no other computation, which is only possible if we're filling the whole buffer with a single color value).
One frame at 60 Hz is 0.016 seconds. The vertical address range is 128, so vblank is 28 / 128 of the frame time, which is 0.0036 seconds.
So unless we're slowing down the effective frame rate (with some kind of buffering) we've got a problem. I'm interested to see how he's going to work around it. Maybe by only writing a few pixels per frame or something. What I would do is implement a double-buffer system, where the CPU would be writing to one RAM bank while the GPU is reading from another, then every Nth vblank signal (or even a single arbitrary signal from the CPU) the bank access pattern switches so the GPU and CPU would be operating on the opposite banks than they were previously.
That way you could draw at 10fps or something and then swap banks and start on the next frame.
The approach used by Nintendo (the NES and SNES used 6502 derivatives) was to have the graphics hardware do the drawing based on tiles, tilemaps, palettes, and sprites which are all pre-loaded into VRAM when the scene loads. During blanking the CPU was allowed to upload instructions about what the tilemap x/y scrolling was and tiny structs saying where the sprites were to be drawn and what palettes they were using. That way the graphics hardware could figure out the pixel values in real time while the CPU was performing game logic.
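A rough sketch of that budget in code, using the numbers from this comment (the 4-cycle STA and the 28/128 vblank fraction are its assumptions):

```python
# How many frames of vblank time does one full repaint cost?
cpu_clock = 1_000_000
fill_cycles = 100 * 64 * 4              # one 4-cycle STA per pixel
fill_time = fill_cycles / cpu_clock     # 0.0256 s of pure writing

frame_time = 1 / 60                     # ~0.0167 s per frame
vblank_time = frame_time * 28 / 128     # ~0.0036 s of safe CPU time

print(f"~{fill_time / vblank_time:.0f} frames of vblank per repaint")  # ~7
```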
The BBC micro back in the 80s used a 6502 and ran the video memory at 4MHz and the "GPU" and CPU at 2MHz each in an interleaved fashion so that the CPU and GPU never interfered with each other. The ZX Spectrum using a Z80 would "clock stretch" the CPU clock on occasions when the CPU tried to access the video memory until the video ULA had done with it. The ZX81 (Z80 again) worked in a similar way to this video implementation but the Z80 was actually used to implement the addressing for the video ram during the display frame time so it was only available for "computational" processing during blanking. And yes the ZX Spectrum was way faster than the ZX81!
I still have a BBC B, a Master and a ZX Spectrum!
It's impressive that, although I'm just a computer enthusiast and I don't have any training in electronics, I can still follow his explanation. That's a guy who can definitely explain stuff.
You can't fool me, the DDR on the RAM clearly stands for dance dance revolution
I thought it was double deer run over
It's obviously Deutsche Demokratische Republik
@@jegkompletson1698 the based germany
Oh my, you are literally the best. You manage to teach the details of computers through practical builds. Love it! Learned more from you during my Computer Systems course than I did from our lectures...
Thanks so much for making these videos! As someone working in this field, I still love watching! Cheers!
I wonder how you commented 3 hours ago on a video which was uploaded 5 minutes ago.
@@deepak_00 I bet he has this magical power called "Patreon". Just a guess though.
@@mrt1r Yup. Patreon.
This video (in a slightly different form) was released about a week ago.
@@m1geo If you don't mind my asking, what company do you work for?
@@mrt1r Arm Ltd.
At 0:49: I have no words to describe my feeling when someone makes a kit and names it "worst".
Something so satisfying about hearing Ben say “FFF”
Talk HEXy to me
@@jull1234 0xFACE 0XEA8 0xB16 0xB00B135
0xCAFEBABE
If you flip it horizontally, it's quite funny.
0xDEADBEEF
My guess: After he's finished describing this interface, Ben will go on to describe how to create a "dumb terminal" style video display with an asynchronous interface so that the CPU doesn't have to wait for the video. The trade-off will be a display which updates a bit more slowly.
So basically the display can only run 75p50 or smth like that?
"So that's what I will do in the next video"
Why Ben why? :(
On a more serious note, thank you Ben for making these videos, you're the man! I've learnt so much from this series! More power to you; I hope I can donate one day when I'm earning.
The video card kit box made me chuckle, '1 frame per second'. This is fantastic stuff Ben, thank you for sharing and for continuing with this project. It is fascinating, educational and entertaining. Would it be worth adding some storage next? More space than ROM, and you will have more flexibility with what you could display.
"Maybe 71% CPU overhead isn't something you want in a graphics card," had me laughing out loud. What a good chuckle, thank you.
Instead, think of it as 71% CPU power consumption reduction.
@@origamiscienceguy6658 ooo, talk about efficient lol. Who needs water cooling when 71% of the time the CPU is napping?
I finally caught up to current. I started watching from "How Semiconductors Work" a couple months ago, and now we're here. Absolutely incredible series. The best thing I've seen on YouTube. This is uniquely phenomenal, peerless teaching.
Possibly weirdly, I'm a fan of this design being suboptimal. It motivates me to alter the design to make it more elegant in my opinion. Which means more learning opportunities. Funny how in college I always hated when stuff was left as "an exercise for the student".
I just lean towards overthinking and overdoing when something is left for me.
Some ideas:
1. Get rid of the blank space in the image: a linear bit counter instead of x,y
2. Use a register to point to the memory address of the frame buffer. This would allow display of multiple images. This requires a 16-bit register and a 16-bit counter. The counter is enabled during the display time and resets (loads from register) at the end of the vertical sync.
3. Indexed colors as used in the original Amiga. 256x12-bit fast RAM to hold the palette; see the sketch after this list. (I can't find a data sheet for SN74LS20MR1, which seems appropriate.)
4. Text mode.
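A toy model of idea 3, assuming the 256-entry, 12-bit palette described above (all names are illustrative):

```python
# Indexed color: the framebuffer stores 8-bit palette indices, and a
# small fast palette RAM expands each index to 12-bit RGB for the DACs.
palette = [0x000] * 256      # 256 entries x 12 bits (4 per channel)
palette[1] = 0xF00           # index 1 -> pure red

def pixel_to_rgb(index):
    rgb = palette[index & 0xFF]
    return (rgb >> 8) & 0xF, (rgb >> 4) & 0xF, rgb & 0xF

print(pixel_to_rgb(1))       # (15, 0, 0)
```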
Me too, I want to build one that uses SRAM as video memory.
@@eternalskywalker9440 nice! What's a SN74LS20MRI? Looks more like a NAND gate.
@@eternalskywalker9440 Blank every other VGA line to give cycles back to the CPU. It will just look like TV scanlines.
The greatness of this series is that you can build it and get it working. Possible improvements are then up to you. For example, adding a separate video memory block that is isolated from the CPU, and only stops it when being accessed. Or mapping the separate x+y video scan address into a linear memory address.
It really hurts my old 8-bit programmer's "squeeze everything into as little space as possible" mentality to see that memory map, knowing that so many bits are going to be totally unusable... It really itches in me to take the design and "optimize it" with some crazy advanced circuitry. Thank you for tickling that part of my brain! Hasn't been used for ages!
Why not add another 13-bit counter to do the actual addressing. Reset it at the start of each frame, and increment it when inside the display region. If you invert the low 13 address bits you could put the display at the top of RAM and you wouldn't have to rearrange the rest of your memory usage around it.
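A sketch of that mapping, with a hypothetical 8K video block at $2000-$3FFF (the actual layout in the video may differ):

```python
# A 13-bit counter resets each frame and increments inside the display
# region; inverting its bits maps the first pixel to the *end* of the
# block, putting the framebuffer at the top of RAM.
RAM_BASE = 0x2000                         # hypothetical 8K video block

def video_address(counter):
    return RAM_BASE | ((counter & 0x1FFF) ^ 0x1FFF)  # invert low 13 bits

print(hex(video_address(0)))      # 0x3fff: first pixel, top of RAM
print(hex(video_address(8191)))   # 0x2000: last pixel, bottom of block
```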
@@gdclemo Even better. You won't need a decimal multiplier anymore.
It's especially weird because he isn't using the bottom half of the I/O space, which is directly adjacent to the video RAM.
The Apple II did some funny stuff with RAM. It interleaved the dynamic RAM so that each bank could refresh as a different bank was being drawn. The result was a really messed-up memory map.
Didn't that result in sometimes getting garbage on the screen from accidentally reading data as if it was a framebuffer, or am I thinking of a different brand/model?
@@starcrashr I think that was the Commodore PET that had that "fuzz" problem. The Apple turned off the processor except during screen blanking.
"Mom, can we have RISC-V?"
"We have RISC-V at home."
The RISC-V at home:
Let's go to download it on the Internet
It's not RISC-V, it's a 6502; those are completely different ISAs
@@-argih The joke is that both are free of binary blobs, but one's dev board costs thousands of dollars on a mailing list, the other costs an elec engineer degree.
@@-argih r/whoooosh
Damn, this comment made me recall my classes with RISC-V, I still have that book somewhere, with the Green Card that was like my bible when doing the actual project. a 16 bit "computer" lol
Another thing to consider is the C64 trick which had RAM even behind the ROM addresses. Reads come from ROM but writes at the same address go to the RAM.
Also, if you read+write the whole range there's a hardware bit you can set to make reads come from RAM for that range also. Turns the entire addressable range into RAM that's initialized from ROM.
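A toy model of that trick (heavily simplified; the real C64 banks per region, controlled through the 6510's port at $0001):

```python
# Reads hit ROM, writes fall through to the RAM underneath, and a
# control bit can later switch reads over to that RAM too.
rom = [0xAA] * 0x2000        # pretend ROM contents
ram = [0x00] * 0x2000        # the RAM hiding underneath it
read_from_ram = False

def read(addr):
    return (ram if read_from_ram else rom)[addr]

def write(addr, value):
    ram[addr] = value        # writes always land in RAM

for a in range(len(rom)):    # read+write the whole range...
    write(a, read(a))
read_from_ram = True         # ...then flip the bit: RAM initialized from ROM
```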
when he says "what I'm going to do is actually going to be pretty terrible" and then starts explaining how there's a lapse of time when the GPU doesn't need to access RAM, I was like "oh shit. you are *not* going to do what I think you're going to do? please don't. ah crap. he's doing it..."
@oH well,lord! I mean, that's basically how modern PCIe GPUs work too so...
@@MateoConLechuga Do they really halt the cpu? Anyway, I was always interested in how PCI(e) works, so I'd really appreciate a link to some explanation article or blog post or something.
@@gregorykhvatsky7668 They don't halt the CPU, but they do share the same buses. PCIe has access to the entire system bus, and packets (known as TLPs) are routed by a network of switches to different components, such as memory. The bus interconnect logic essentially "starves" the CPU from accessing memory while the PCIe is reading or writing from it; however the CPU can continue to execute out of caches if it wants. Wikipedia is always a good place to get a brief overview of things: en.wikipedia.org/wiki/PCI_Express
@@MateoConLechuga But the system memory is not on the PCI(e) bus; the GPU has its own local memory which most of the time it will access and the CPU will not (the CPU can access it, or at least a part of it, as the memory window is limited in size). PCI(e) devices can indeed bus master and access system memory, but I don't think GPUs really do this (other than maybe for transfer operations like texture uploads).
@@randomfish42 Yes, that's all correct. PCIe GPUs also use their local memory because DDR and GDDR have different performance characteristics for the most part. Integrated graphics are a special case though, where they can share system memory, but they still use either the QPI or UPI bus interconnects to arbitrate. Honestly, at the end of the day I feel either the CPU or GPU is waiting on data from something, so it seems almost impossible to not stall one or the other. But who knows what crazy hardware is out there :P
Expectation: Trying to hook this up into a modern computer/motherboard.
Reality: "We're hooking this baby up to the computer we made, boys."
One of the first home computers used this exact strategy for managing the display. I think it was the VIC20? The cpu literally shuts down while the display renders.
the ZX-80 and -81 did this, except the processor was actually active the whole time - it was driving the display during its interrupts, and running the user program in the blanking interval. the ZX-81 added “FAST” mode in BASIC which would shut off the video during calculations!
@@krallja yup yup yup.... I came here to say that... but you've already done it for me.
The ZX80 was in FAST all the time... the great "innovation" of the '81 was the introduction of SLOW mode.
The C128 also had a 2 MHz fast mode. The 8502 would run at 2 MHz while the VIC-IIe shut down.
@@edgeeffect Don't you love these communities where you come in to comment but only have to check a few rows down and throw a thumb?
No. The VIC chip in the Vic20 can manage the display entirely with its half of the clock cycle. The c64 with its higher resolution VIC-II chip did require stealing some cpu cycles.
I've been watching YouTube religiously for 10 years, but this is my all-time favorite series. I am thoroughly impressed by your slow but steady tutorials and descriptions.
Great work!
I was going to clean up the house today, but I guess I'm going to be playing with my breadboard computer watching your videos instead. Thank you for the reprieve!
It's always great to get new content from you.
I feel like i have been waiting for this video for a long time, but i KNOW you probably spend insane amounts of time to set up each video! thank you for being so extremely detailed, its super amazing content!
Oh yesss! the follow up I was waiting for! :D
Agreed
Love watching you! You're one of the people that has truly helped spark my computer curiosity.
Another possibility would've been double-buffering. Having 2 framebuffers on 2 ram chips and having 1 accessible by the GPU and one by the CPU and swapping.
That's an excellent idea!
So sort of like using the outputs almost like a linked list that maps onto a different RAM chip with special instructions for how to read and write the underlying data, sort of like the low-level implementation of how high-level languages use classes?
@@evannibbe9375 I don't think I completely understand what you mean, sorry. Could you please elaborate?
The “some assembly required” sent me 💀
How about using 2 memory chips and the processor shuffles between these 2 chips, whenever the video frame needs to be changed.
Would that help or would the CPU spend 70% of it's time moving data around instead of being halted?
Double buffering. You are thinking ahead, my dude !
@@johnm2012 CPU wouldn't have to halt at all in the case that I mentioned.
Imagine there are 2 frames "A" and "B".
When the video controller is showing and iterating over frame "A", the CPU could make frame "B" ready.
When the CPU is done, it could wait for the Vsync signal (as an interrupt) to swap the frame that the video controller is seeing.
Now the CPU has frame "A" and the video controller is iterating over frame "B". The CPU now has to perform the same transformation as performed earlier plus a new transformation that it wants to show next. (Or we can maintain a queue for each frame, and for every transformation we want to perform, push to both queues and pop only when a transformation is completed for a frame, but this is all additional in-software-manageable complexity for the future.)
Swapping of the frames can be done by using a separate address: writing to it selects the active bank of memory. (This would be tricky since it requires us to choose such an address, wire up a separate register, and use its outputs to drive the bus signals that swap the banks.)
An obvious drawback of this approach is that every computation has to happen for both frames (effectively twice) unless we cache the resultant frame somewhere else (which seems unlikely on such a small system).
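A sketch of that scheme (everything here is hypothetical stand-in code, not the real hardware interface):

```python
import time

def wait_for_vsync():                 # stand-in for the Vsync interrupt
    time.sleep(1 / 60)

front, back = bytearray(6400), bytearray(6400)   # two 100x64 banks

def draw_frame(buf, color):
    for i in range(len(buf)):         # CPU renders into the back bank
        buf[i] = color

for color in (0x07, 0x38, 0xC0):      # three frames, three fills
    draw_frame(back, color)
    wait_for_vsync()                  # only swap during vertical blank
    front, back = back, front         # models one write to a bank-select register
```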
@@rohandvivedi I see what you mean now. Switching between two banks of memory. Initially, you wrote "shuffle" which led me to believe you meant copying data from one bank to another.
The NES, which used a 6502-derived CPU, had two sets of RAM. The PPU used its RAM to index into the cartridge's ROM for the tile images. There was no framebuffer at the pixel level. Here's a nice white paper: web.mit.edu/6.111/www/f2004/projects/dkm_report.pdf
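A toy version of that tile indirection with NES-ish sizes (a 32x30 name table of 8x8-pixel tiles); the data here is made up:

```python
# The name table holds tile numbers; each tile's pixels live in CHR ROM.
CHR_ROM = {0: [[0] * 8] * 8,          # tile 0: all color 0
           1: [[3] * 8] * 8}          # tile 1: all color 3
name_table = [[(x + y) % 2 for x in range(32)]
              for y in range(30)]     # checkerboard of tile numbers

def pixel(px, py):
    tile = name_table[py // 8][px // 8]   # which tile covers this pixel
    return CHR_ROM[tile][py % 8][px % 8]  # index into its pattern

print(pixel(0, 0), pixel(8, 0))       # 0 3
```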
I'm happy I stumbled on your channel! I find this very interesting. I was trained as a technician to fix radios, but since being qualified I haven't had the privilege of playing with components and getting a deeper understanding of them. So I want to thank you for opening this door for me and people like me! :D
I'm looking forward to ordering some kits!
This guy still be making money in a saturated hardware industry
When i first found these videos i went and bought a bunch of breadboards and various parts myself to do some of these projects, I'm very glad he's selling these kits.
I would say that the industry is far from being saturated; companies are being bought out left, right and center
Maxim? Now Analog Devices
ARM? Now NVIDIA.
Xilinx? Now AMD.
None of those were small or irrelevant companies either
@@MrPhilip796 is that why it's harder to find Maxim supercapacitors, I wonder
@@oliverer3 I haven't heard of Maxim making supercaps? Googling it doesn't even show any sign of their existence? There's just back up regulators and charge controllers
@@MrPhilip796 probably talking about "maxwell" ultracaps. I think Tesla bought them not too long ago.
Brilliant stuff! I haven’t done much of this since I did 8-bit microcontroller programming and hardware integration during my degree, 28 years ago. Watching your excellent videos brought it all flooding back! Once my children are older I’m definitely going to introduce them to this stuff - it’s fascinating. Thank you for bringing it to life in such a clear and accessible way.
I have been waiting for this xD
I think everyone has in fact
Yep
Yes
nah i have been waiting even more for the "playing video with the world's worst video card from computer input"
Man, I'm a computer engineering student, and your entire channel gave me so much knowledge of how the hardware works, and you do it on breadboards, man. You deserve all the success in the world. I learned so much from this channel, thank you man.
"This is the worst video card I have ever heard of."
"But you have heard of me."
This is a great channel, you're really good at laying these things out in layman's terms.
Ben: Who are you?
Ben Eater: Death.
me: I need a video card.
wife: you have a bunch of old video cards.
me: no no. this one is the worst.
Ben, you built a video card. That in itself is amazing!
Yes! Another video from the gods of computer teaching! I love all of your stuff, binge watched your entire 8-bit computer playlist a couple weeks ago and ran out of videos to watch. You're awesome!
Next: 1. world's worst sound card; 2. world's worst WiFi card...
That would actually be fantastic! Making a world's-worst PC, ahahah
You know, you could do a lot of audio with a 555. The WiFi . . . I mean, there are a lot of "worsts", and as much as I'd love to see a logic gate transceiver and modem, what I'd really love to see is a spark gap WiFi card. The FCC would burst into angry flames.
Wi-Fi is impossible with circuitry like this, unfortunately. Sound is certainly doable, though.
One easy way to get it is to remove the LCD and then hook a simple DAC up to the now-unused pins on the 6522. Hardware-wise, that will work. Then write code to update the sample at the desired rate; you can use the 6522's timer to help with this.
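A quick sketch of the timer math for that, assuming a 1 MHz system clock and the 6522 T1's free-run period of N+2 cycles (double-check the +2 against the datasheet):

```python
# Pick a T1 latch value for a desired sample rate.
cpu_clock = 1_000_000
sample_rate = 8_000                        # hypothetical 8 kHz playback

latch = round(cpu_clock / sample_rate) - 2
print(latch)                               # 123 -> interrupts at ~8 kHz
```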
@@eDoc2020 You could use any number of radio standards to implement a serial connection. Sure that's not connecting to a home router, but SSTV style data over local RF is very possible.
@@BetweenTheBorders Very true but I assumed we were talking about something which would work with "normal" equipment. If we include obscure possibilities like SSTV we get lots of easier options. We could probably bitbang SSTV video in software, eliminating all the video hardware, but that's almost cheating in my mind.
What I'd say is go back in time 20 years. Then you can build a basic dial-up modem and connect to the Internet like you would with any then-modern system.
"But you know I also built this computer" Casually pulls out complex custom built computer
14:30 I'm really happy seeing I'm not alone having a hard time plugging groups of jumper cables!
EDIT: Great video & very clear explanation; thanks! Now, I'll have to wait patiently for the rest ^.^
uff yeah.... they're tough to get in all at the same time and they tend to pop out
This is fantastic! Thank you so much for making these videos and kits. :D
Me: spending $120 on a breadboard VGA graphics card
My brain: but but but.... you need a graphics card to actually run Minecraft
Do you!? It hardly touches mine!
I guess Ben Eater will upload a video soon explaining how to install and play Minecraft on a breadboard using this, the world's worst video card.
I get around ~200 fps on my shitty Radeon Vega 11 integrated graphics so say about that what you will
@@simeondermaats how did you get it to use GPU? I have an Nvidia (dunno which one) that it hardly touches!
Just add a metric ton of DSPs until you have the performance you need.
This way of creating a video signal is similar to the ZX Series by Sinclair.
On the ZX Spectrum the Z80 clock is stopped when the ULA wants to read from the framebuffer. On the ZX80/81 the video signal is generated by software.
I like where this is going. I'd love to get this machine to a place where it can put text on a monitor and implement some form of BASIC.
Old-school 8K BASIC should fit nicely... but he needs keyboard input. This is why things like the Altair had a serial port: let the terminal handle all that, and just send characters in and out of the computer.
@@AllenKll good idea. Using serial ports for that is outside my experience so it didn't occur to me. Everything I used either had a keyboard or you could telnet into.
I'm sure we had a BASIC at school that could fit into 4K... maybe even 2K.
I bet he's going to do just that. just a couple more videos down the line... Ben's going to single handedly reliving Bill Gates' old days of creating a BASIC interpreter from scratch!
@@mfaizsyahmi Hope so!
I was happy that there wasn't a part number in the title; I thought it was a full video. The excitement got me... until you said it at the end 😫😫😭😭. Silly trick from you, Ben 😫. Now I can't wait for the next part.
You explain things so well, man; I can tell you're an expert.
I love the feeling of how he turned his passion into a business so he could share his passion. It's a breath of fresh air from the constant barrage of shills peddling products they have no connection to other than getting paid.
You're a madman, I love it!
If you were to bring this card to 1960-1980, everyone would be amazed at its power.
Maybe not in 1980 but 1960 for sure
it's using (more modernized versions of) 80s tech, not too impressive for back then, 60s-mid 70s would be more impressed
@@burp2019 yeah, that's what I said too; this "PC" runs on a 6502, a processor used in basically everything from the 80s.
So, Conway's Game of Life?
This. Yes!
Eventually: "Making a colorful command-line computer using only breadboards and the world's worst video card"
Yeah imagine if the 6502 computer and this video card were put together
That title is what I clicked immediately.
I have been following this and I really like how he shows the build from discrete components to an image on screen.
Your kit is literally worth more than my graphics card. Still worth it.
Well, this was a pleasant surprise. I didn't think we'd see TWWVC again since it's been about a year and preceded the videos for the 6502. Coincidentally, in a desire for more of your videos, in part as an escape from current events, last night (11/6/2020) I re-watched "The world's worst video card" 1 and 2. Thank you for the video.
I'm a simple man, I see a Ben Eater video, I click!!
Intolerance.
Yes! I've been waiting for another installment on the video card. Most interesting project of this whole channel.
Haha! He's actually doing it. The Mad lad is actually doing it!
I knew it!
@@vaguebrownfox the whole process seemed surprisingly straightforward when everyone was talking about it in the comments section in his earlier videos. It would have been absolutely bizarre if he didn't at least address it in passing at some point.
Ben Eater, you are the only YouTuber for whom I don't speed up the playback :D It is already as fast as 1.5x.
thumbs up for your awesome project!
That's how universities should teach their students, especially engineers. I wish I had you as a teacher at my uni.
This time slicing is what the Sinclair ZX80 & ZX81 did in "Slow Mode" except they executed dummy instructions during the display period to index the RAM to feed the video.
The sheer scope of this project is giving me anxiety.
Fun video! The CPU and video sharing the same memory was pretty much standard procedure in 8-bit micros from the 70's and 80's. That's why you see a blank video line every other line on many computers of the era. The blank line is where the CPU is accessing memory.
I’m expecting the next video to be “running Doom on the computer I built”
This has been one of the best series of videos I have ever watched - but I used to fix 36bit DECSystem 1090 Computers to chip level... :) Thank you Ben, there's a lot of nostalgia for me in these videos, they brought back some really good memories.
3:21 For the curious, this is called a quaternion.
Finally you are building this project... Your Reddit page was filled with requests for this crossover... :D
This series is going to be the best...
Guys, I'm calling it now: in a few episodes he will use it to run Doom.
Hey Ben, Fantastic video! I think a small thing that could add a lot of value in the future is having a pinout diagram of the component up while you're wiring them in the video in the top right corner.
If this were 1990, you'd be starting a video card company! Don't downplay your creations, Benjamin; it's not like 99.99% of people could do a better job.
1980*
@@tauon_ I agree that 1980 would have been a better time, yes. Get in during the CGA/EGA days.
I've been waiting for this for so long! Thank you for what you do!
Mr. Eater, I have to ask you: are you working as a professional in computer HW design? Because your videos are just AWESOME!
You have an amazing ability to sound like you're making it up as you go, while actually having a full understanding of what's going on.
the saga continues
I need to find the time and money to get those kits and build all of your projects. I just love this.