Definitely throw Win 2k on that thing. 2k fixed so many little issues with NT4 and you should see a lot better compatibility with games, and iirc most BP6 owners swapped to 2k as soon as it was available.
been trying win 2k on my single 1ghz p3 machine and (saying this as someone who grew up with win xp in the late 2000's early 2010's) it freakin kicks ass! orders of magnitude better than win98se in practically every way.
Agree have used win2000 on dual cpu setups. i had custom workstation with two pentium 3 ( it was in 2002) it was my "in" movable data server. Used for client data copying and storage! And it was a Dell workstation motherboard cramed into a smaller case but because of 6 scsi disks thirtysomething gigs eatch it was heavy!
The amount of times this guy will say "I can't make a video about a PC, all they do is verify DMI pool data, boot DOS, and you type dir" and then still make videos about PCs (even if they're unique) is crazy. Don't stop now!
I owned this board when it was brand new. Pair of Celeron 366’s at 550. And you missed the reason why this board was so important. The Celeron wasn’t supposed to be capable of being multiprocessor capable. Then when the Celeron went from being slot based to socket based, they made “slocket “ adapters, this was a competitive market and the better slockets found a way to work around Intel’s SMP limitations with the Celerons. Turned out it was just a pinout difference. Abit, wanting to make a name for themselves built a motherboard with this mod built into it, and built this dual socket board. Not only were we taking Intel’s budget CPU and overclocking it 50% making it faster than the laggy Pentium II CPU’s, now we were able to build an epic machine that could run TWO of them for less than the price of ONE Pentium II. We were sticking it to the man! And Abit was our hero for taking them on. Also, your pair of 500’s are on a slow 66MHz bus. The system really shines when you can run a 300 or preferably a 366 on a 100MHz bus at 450 or 550.
Bingo. Being able to build an SMP system for so little was a huge novelty in of itself. Intel wanted SMP to be a premium feature, even though it was functional throughout the line if not artificially constrained.
OMG I absolutely had a build based on this motherboard. Loved it. After it ran its course as my main machine, i moved the internals to a much taller tower, installed a ton of drives with a RAID hardware card, and turned it into the file server for our FTP sharing group (shouts out to the rest of ANiVCD!) and it lived under my desk and then in my closet. Ah, memories!
Specifically, they're the backgrounds for the results screen, for a terran defeat, and a zerg victory (though, I think the zerg victory one was also just used as key art in a number of places).
@@mrquicky Meanwhile, the potential performance gains from doing an FSB OC on early P4s is scarcely known even today. Let's just say if more people knew about it, there would be no late PIII/early P4 debate and intel is guilty of an extreme overabundance of caution when releasing the first generation of P4 CPUs. They set Netburst up to fail from the start. This is a night and day difference compared to intel in 2024, where they push their CPUs far beyond what is reasonable for the tired, old P6 microarchitecture from 1995 they are still using. They've had to drop HT purely because of their wheezy old Pentium Pro on steroids needing more power than the silicon can handle. They need a new microarchitecture and should have started developing one when Ryzen launched and, in the near future, Pat Gelsinger will be known as the idiot who destroyed intel. Current intel CPUs are the equivalent of trying to drive a Ford Model T at 200mph....then there is the fact that Intel dGPU market share is so low, it is rounded down to zero percent. They failed to take note of how hard AMD has to work just to stay above 10% market share.
Hey Gravis. I can remember when you asked on Patreon a few months back, what we would like to see more. I think you're literally killing it. I am too young to would have understood that era. I have no intent to ever buy a retro gaming pc for myself. I will never even try to use any of these old pieces of software. But I wtached the entire video without a pause. I love the little guy series. I love your insights from e-waste. Back when you were going full on youtube and quitting your day job I was actually a bit worried. But this was obviously just the fear of change. Wish you the best, greetings from Germany.
The reason for the CPU pingpong of processes was to make more effective use of the caches. Every time you had a TXT page fault it would switch cores. There was a non-zero probability that the cache on the other core hadn't evicted the page you wanted. It also meant that when you went above 50% core utilisation and a process got evicted, it would be landing on a core that probably already had it partially cached. The cost of context switching was less than a millionth of the page fault on rotating rust, so it wasn't a costly gamble.
I also wondered if it was to balance usage over time or to not overheat one while the other's running cold. Both running at a mildly high temp has got to be better than one burning itself out, right? I don't know if that would happen with just a bunch of basic windowed applications open, I am not really familiar with old Windows, but if it did it seemed like a reasonable thing to implement to me.
The oldest Windows I remember using genuinely was probably XP, but I was young and the computer was already outdated then. It's funny, I probably a little too young for XP and VHS and CRT TVs but I had all of them because my dad was a cheapskate, I think. I mean, if it ain't broke and all that. I also didn't get a phone until I was a lateish teenager, and it wasn't new, it was a hand me down. Awkward story ahead, here be dragons: One time, when I was really young, I took a picture of my, uh, willy with someone's phone, not mine but a family member, but then got really scared and couldn't figure out the interface to delete it, it was probably Android 2 or something. Core memory, I was very stupid.
@@yuribacon Yikes! That reminds me of this one time when my oldest sister told me about some kid using nail clippers to do the same... That's messed up! Why did she do that? Core memory, but not my fault this time.
This channel is proof that no matter how many tech videos exist on YT, there is always room for the nice personality + well-researched video. By simply following your own interests, Gavin, you’ve got a wonderful thing going here.
Tech Tangents made a video on this same board but mostly to document replacing bad capacitors. It might be worth checking your board's caps to see if they're still good.
Rather, swapping the caps makes sense before they die. If the caps go bad the board will stop working, varying from just black screen with no POST to magic smoke and dead board. Mostly they start going flaky, it'll POST and run normally then freeze, and then won't post. Wait 30min, and it'll POST, boot and run again for a random amount of time. Best is really to check the cap brand against the list, or the board model, order replacements, roll up sleeves while soldering iron warms up and go to work on it. If one waits until the caps go to shit, there is a non trivial possibility the board will stay dead even once caps are replaced, ideally it should be done on a working board.
This is a very high quality dive into the system and provides great context. I am the proud owner of the motherboard's little brother, the VP6, with new capacitors and dual 1GHz Coppermines. I've been building it and "enjoying" the process of making newer hardware work on an old motherboard. Ever since I heard about SMP back in the day, I had to have it. Thanks for this awesome retrospective!
Hey, if you need help with telling the Stories (1:01:47) in the future, let me know. I'm part of a pretty big community (The Retro Web) who are probably a bit too knowledgeable/obsessed about that era of PCs for our own good. I (and many others in the community) would be happy to help connect any missing dots. I myself have what may have been one of the low cost (read: sketchy) direct competitors to the BP6 -- a dual 440LX Slot 1 board from everyone's "favorite" brand, PCChips (though it's branded Amptron). It's a.... very strange board.
@@ProtoMan0451 If I ever get back to video production, that board will definitely get its time on camera. In the mean time... I can't post a link because UA-cam, but go to The Retro Web (easy to find) and look up the "Amptron PII-2200 V3.1"
I adore this show concept. The best part of it is that you talk about everything, so even newbies can understand it. Explanation of multithreading and multitasking is fantastic! Anyway, very good topic to go over. SMP was stong pain in the early days and I kind of encountered this myself. About half a year ago I've build dual socket 370 computer with 1GHz Coppermine chips and fast SCSI drives. Game performance... well.. it's nearly identical to single chip, but multitasking is something other. It's running under Windows 2000 and it basically shreds everything you throw at it. Also, two servers you've shown are fantastic pieces of hardware, I'd love to get my hands on one of these, just to play around with networks and such.
Absolutely loved mine. Beige box with a window cut in it myself. Dual Celeron 400's running at 550mhz with water cooling and CCFL lighting. Had a 500 watt car amp bolted to the right side of the case and a second PSU with a half farad cap, and i DJ'd parties with it!
I loved that era of computing. Everything you did was fully custom. You could browse posts on forums and get cool ideas. I had a K6-3 400 that I could only overclock to 450 and stay stable, and I was jealous of the crazy overclocks the celerons could get. I think I remember the Celeron 300a being notorious for effortlessly overclocking past 500mhz.
In 2002 I needed a new case because my previous one was AT format, not ATX. This was of course when everything was beige, so I knew I'd have to paint it and every other component I ever put in it. What I didn't know was that black optical drives would become common, or I probably would have taken that route. Instead I went with metallic dark blue. And that's why I was in the back yard the other afternoon spray painting a DVD burner with the very same can of paint I used 22 years ago, because I'm still using that case.
as a 64 years old man and a PC technician. I still use on my 13th gen Intel wıth that case (only PS changed through out the year)s. Its nice to see a smart guy who took me back those good old days.Thank you. I sold a lot of Intel's "server board" with dual pentium III cpus in it. between 2000 and 2001. Thank you.
I totally remember the celeron hotness. My girlfriend at the time was absolutely bonkers about hers, lol. Also, unless it's changed since I was in grad school, most dictionary-based compression algorithms are impossible to parallelize. The state of the "dictionary" changes after every token is processed, so it's not possible to know what the state of the dictionary will be at any point in the file other than where the "head" is.
Yep, back then I was constantly on the overclocing forums and being envious of all the wild Celeron overclocks. I think I remember the Celeron 300a was super popular because you could pretty effortlessly get 500+ Mhz out of it.
The way to parallelize it would be by decompressing two files at the same time - this actually *should* be an area where zip has a huge advantage over 'monolithic' archive formats like .tar.gz, since each file within the archive is an independently compressed stream, something that's historically been regarded as a disadvantage since it of course makes the compression itself less effective than it otherwise might be.
The only way i can think of parallelise compression is ti split one file into multiple smaller ones. This is what i did sometimes. instead of "onebigfile zip" i had 4x "partofonebigfile zip" which were then merged together after extraction. But this felt kinda whacky at the time and was a manual process, i have to admit, to cut a single file in pieces. It was buggy too sometimes, espcially when receiving cut pieces from someone else who may have used a slightly different algorithm or settings. Thankfully, most mainstream archiving programs can do it automatically nowadays in one simple setting. multicore dual cpu workstations were a different hassle too with the NUMA nodes. Virtualisation was a must. I remember having a dual xeon workstation with the core2 architecture. Was the same issue just with more cores. //typo
This was indeed a fun video. As good as the summer of bench vids has been, it's nice to have you back in the studio. Looking forward to part 2! I think it would be cool to learn more about NT in particular. I know there's probably plenty of existing resources out there but any time I look it up, I'm confused about why it existed (beyond the technical improvements made to Windows) as a consumer product.
@@EvilCoffeeInc In a nutshell, NT existed because Microsoft recognized the inability of the legacy Windows code base to accommodate the needs of businesses. Windows 3, while pretty impressive for it's time and target audience, was simply not sophisticated enough to power a machine that needed to have 24/7 uptime. It was decently stable on its own, but the kernel did not leverage protected mode as much as it needed to in order to protect the OS from misbehaving apps. NT was a clean slate redesign that was intended to give Microsoft something on par with Unix in terms of stability and scalability, while still allowing the use of existing windows software. The way they went about this, from what I understand, is they hired a guy who used to work on VMS, a legendary big iron OS, and told him to just go nuts and make a kernel that could stand up to the rigors of industry. They then bolted the Windows 3 interface and APIs on top of that, which produced something that by and large could be used the same as their existing consumer OS, but still had a lot of rough edges that kept them from just switching the entire product line over to NT immediately.
@@CathodeRayDude Also let's not forget that NT was portable across CPU architectures from the get go. Even the earliest release of NT supported DEC Alpha and MIPS in addition to x86, while 3.51 and 4.0 added support for PowerPC. In hindsight, all those RISC ports were little more than a curiosity, and got dropped with the release of Windows 2000, but I think this architectural investment in portability was important once the switch to 64 bits happened (also remember that the first 64-bit version of Windows was actually for Itanium!) and more recently with the ARM ports. And they were truly committed to portability. On Dave Plummer's channel you can hear his stories about how he ported the 3D Pinball to NT of all things, which was a demanding task because much of the original codebase was in x86 assembly, which was a no-go for NT. It had to ship for all supported architectures after all, and there were 4 of them at the time, so he needed to rewrite all the assembly code into C. Or C++, I don't remember. One of the two.
@@CathodeRayDude Also let's not forget that NT was built for portability from the start. Even the first releases of NT supported Alpha and MIPS in addition to x86, and they later added support for PowerPC as well. Those RISC ports ended up being little more than a curiosity in hindsight, but they paved the way for why it was relatively easy to port Windows to 64-bit platforms (including Itanium) and later ARM. And they were serious about it from the get go. Dave Plummer has a video in which he explains how he ported the 3D Pinball of all things from x86 assembly it was originally written in, to portable C or C++ because it had to run on all the supported architectures for the NT version.
As I understood it at the time, pre-NT Windows was basically two OSes stacked one on top of the other, with Windows employing a lot of hacks to overcome the various limitations of DOS. There's only so far you can go with that, especially since MS-DOS was last updated in 1994. Eventually you have to rewrite the kernel to keep up with newer hardware, and when they did, NT was what they came up with.
I needed this so bad tonight. Thank you Gravis.The place I worked at for past 12 years apparently was sold and we just found out today our last day is tomorrow, I got a baby on the way mortgage due etc etc and we were given no heads up so we could arrange our finances. Stressed isn’t the word. Thank you for always posting content right when I need it to decompress.
This episode was such a nostalgia fest for me. The first PC I built was from this era, and I was deep into some of this stuff. For what it's worth, I enjoyed the somewhat long journey you took us on to get to the actual motherboard. It had an Abit mobo and a Slot 1 Pentium 3 that I overclocked from 700 MHz to 900 MHz. To achieve that, I had a bunch of high flow fans, and I even did stuff like sand and polish the heatsink (which I'm pretty sure is useless now, but I had too much free time then). It was in a variant of the Enlight case that I rattle-canned metallic blue. I also dremeled a couple holes into the side and installed fans for extra cooling for the GPU and processor. All the fans made it sound like a jet spooling up when you turned it on. For its power light, it had one of the early blue LEDs. It was obnoxiously bright, enough to light up a room. I lugged it and a painfully heavy 19 inch CRT to a lot of LAN parties, and played a lot of Counterstrike and UT. It was my first DVD player, thanks to an MPEG card, and the first DVD I watched on it was The Matrix.
I ended up getting a 21" Sony crt that probably weighed close to 50 lbs or so from a cousin that got two of them. I lugged it around just like you said, we lanned almost every weekend in high school lol
The Enlight 7237 is the case I have built the most PCs in. My dad's company threw away tons of them and I put everything from Pentiums to Core i boards in them.
@@CathodeRayDude Awesome. That's a story that has definitely not been told, at least not on UA-cam. You've been spending a lot of time in old forums haven't you?
@@lemagreengreen Sounds like that one time intel forgot to disable base clock modification on non-k CPUs, allowing you to buy cheaper "non overclockable" CPUs, and overclock them via base clock.
@@hikkamorii Is that just like FSB overclocking? I admit it has been many years since I did anything like that but we used to always bump FSB up a bit. On many "multiplier locked" CPUs of the late 90s/early 00s there were tricks to enable multiplier control, Athlon first generation had pins on the cartridge board that we plugged dip switches in to control ,multiplier, Athlon Thunderbird/Duron had traces on CPU package we connected with conductive paint to set multiplier etc.
Really nice description of multitasking and multithreading. I mean, I already understand this stuff intuitively but this is probably the best explanation I've heard of how this works. Like, you didn't dumb it down to the point of being an abstraction, but you did make it simple enough to understand (I think) for someone who doesn't already understand this stuff intuitively.
I wanted a DayStar Genesis Mac clone in 1996 for no other reason than bragging rights. Almost nothing I used would take advantage of the extra PowerPC CPUs, but _damn_ those things were dope.
Classic MacOS itself had no concept of multiprocessing, you needed a special system extension to enable bare minimum support for it. Even with that extension, it only supported round robin SMP. Since Classic MacOS also had no concept of memory protection (it was a cooperatively multitasked OS like Windows 9x), it just made the system more unstable and more prone to crashing. Had Apple kept up development of A/UX when the first 60x MP Macs came around, they would have had a far more powerful market position, and not be stuck with crippled System 7 and bolted on 3rd party acquisitions until 1999/2000 when OS X came out.
@@Desmaad 9x had preemptive-ish multitasking in the same way Amiga did - which is to say, no wider protection from programs overwriting each other's memory spaces or crashing the entire system; but at least a program couldn't hog all the CPU time.
I'm only a quarter of the way through this video and as a first time viewer you have my sub. Your narration and diction is great and you really understand the nuances of the point you are trying to convey. Look forward to waking up at 3am with youtube jammin all night and having some oddball dream about overclocked Pentiums.
I was in retail selling computer parts when this board came out. You reminded of so much of my time then with just one video. I can't wait till part two. Thank you.
Overclock scene nowadays is a joke. The point of Overclock was to make slow mainstream cpu /gpu to go fast at a reasonable cost. Nowadays it means to make already very fast components to go even faster. Overclock became restrict to expensive cpu and high profile mainboards.
Kind of the reason why overclocking is a joke is that modern CPUs are either squeezed so much for every ounce of performance, binned into a new SKU entirely, or have pieces of silicon disabled.
Yeah, with cheap CPUs sometimes coming from artificially crippled high end chips, or at it's peak where you could literally unlock more cores in some AMD CPUs it was essentially chip lottery. Of course it was exciting! Nowadays there is none of that, "normal" components become unstable at anything other than stock bus speeds so you can no longer "literally overclock your entire PC at once" and "unlocked" chips you pay extra for performance that you may possibly get if you come prepared and lucky. I was big overclocker when I was running Core 2 Duo for way longer than I should and now I almost forget I ever did that... I'd just buy faster "normal" chip nowadays. I don't even overclock my GPU, it's just not worth it. This is what happens when corporations figure out their product.
I had one of these with Celeron 533s. Having an SMP machine in your house on a college student budget was a game changer. I ran Linux, FreeBSD, and BeOS 5 on it at various points. I could test multithreaded code projects without needing to log into to college lab server. And as a daily driver, running multiple apps was so smooth.
The BP6 can be credited with getting a lot of enthusiasts excited about getting into SMP. But sadly it also quickly garnered a reputation for being a quirky beast. The Highpoint ATAPI controller was particularly hated. I went with an Epox dual slot 1 board instead with Celerons in 'slotket' boards. Still loved following the BP6 community.
I still have that same Epox kp6-bs with the slockets, it was a beast for back then (though it has PIIIs in it now, i often think of sticking the celerons back in it for originality sake).
Yeah, the Highpoint was a plague on my board. I was planning to get the thing de-soldered but the board gave up the ghost before I had the chance. Was out of warranty by then and much faster CPUs were out, so it sadly went into e-waste. :(
It has a particularly special place in my heart. Used to have a hardware review site back in this era as a teenager, and the BP6 was the first board we were sent for review. These days I have BNIB one that I plan to recap and put into service as a FreeBSD retro web server.
20:11 WHAT!?! No backplate!?!?! 😂 I kid… But man, you’re taking me back on the nostalgia express! I’m so thankful you have this channel! I had one of these with overclocked Celerons, in a full tower case with casters on the bottom and three ultra wide SCSI drives, with a pair of 3-D Voodoo video cards running in whatever they called SLI. Please keep making these videos, don’t ever quit!
Also as a heads up, the reason we bought Celeron processors was due to the fact that you could overclock the hell out of them. You could not nearly overclock the pentium version. Not only because it would be unstable, but also due to the heat. The celerons gave you so much more performance when overclocking they blew past most of the pentiums that I knew about.
They're getting REALLY hard to find now -- even the early ones that all had exactly the same layout, with optional LAN above the USB, and optional sound.
>Vooodoo cards running in whatever they called SLI ...they called it SLI. nVidia got the trademark when they bought 3dfx's corpse. Though under 3dfx it was an acronym for "Scan-line interleave" while nVidia uses the much less specific "Scalable link interface"
@@Jay-ik1pt when did Nvidia make that change? I still recall everybody saying it stood for scan-line interleaving circa 2005-2010. (2011 is about when I stopped lusting after SLI.)
@@kaitlyn__L AFAIK nVidia bought the rights to the name and changed what it meant immediately as soon as they started using the acronym for marketing their own cards but my memory isn't THAT good to remember for sure. What I do remember, from reading about it back when nVidia's SLi first came about is that 3dfx's SLI and nVidia's SLi had little or nothing in common besides the name and concept, supposedly the actual technology behind it was quite different. But, again, going on old memories based on info from people who were journalists in the industry at the time, themselves relaying information from nVidia's marketing rather than the actual engineers.
I'm really excited for the next video. I recently picked up my first mid-90s PC, a HP Vectra XU5/133C, and it's been fascinating to figure out what its place was in history. It's a dual-socket system, but HP never sold it with 2 CPUs as far as I can tell. NT existed in 1995 when my Vectra was built, but it was weird and exotic even relative to the NT4/W98 days. It's been loads of fun to use it and learn what computing was like before I started to learn about it as a kid.
Fantastic story tell! I kinda love that most videos start with an obvious question that isn't answered till the end. It's usually pretty left field and a joy to finally get the answer!
Thank you for making this video. So much of it resonated with my personal experience around that time, when I was in late high school/early college, gaming and video editing. This was the experience that make me swear off single-thread CPU systems forever, and I didn't replace my BP6 running dual Celeron 300As (running at 464, I couldn't get my two to run at 504MHz, which was 112 FSB at 4.5x) until the Pentium 4 added Hyper-threading three-ish years later. I hope you are able to tell the story of Slotkets, the SL2W8 (the Overclocker's dream Pentium 2), and a time when overclocking was all about getting great performance out of cheap parts, and not the way things are now where it's about taking the most expensive parts and making them even faster while being less efficient. It was truly a remarkable time, where if you were "in the know" and willing to roll the dice, you could get a machine FASTER than the fastest "official" PCs, at half the price. What a ride it was.
Also - hopefully in your subsequent video on video acceleration that you teased here, you talk about the Pinnacle DC10/DC30 which was another wonderful example of getting more for your money!
There's not many legendary motherboards but BP6 is one of them. This thing with dual overclocked celerons, as much ram as you could afford and a copy of Windows 2000 provided serious epeen at the time, a legit PC workstation that many teenagers could afford. Of course it got absolutely thrashed by an Athlon released in the same year but meh, dual processors. It's still cool to have more than one processor and at least some FPS games could actually take advantage.
I don't recall about the Socket 370 ones, but the Slot 2 generation was significantly more overclockable than the flagship Pentium IIs because of their reduced onboard cache. Whereas the full-blooded P2s were running around 233/266 MHz, people were easily overclocking the Celerons to speeds over 400 MHz.
The Celeron A300 @100 MHz FSB. You paid the price for a Pentium II 233, but you got the performance of a PII 450. Half the CPU cache, but running at the full 450 MHz. Next to the 2600K one of the best Intel processors.
I'll chime in with the same comment many others have made. Win2k Pro is the OS you want to run on a BP6. It fixed so many of the issues NT4 had, especially driver support and better game compatibility. Even on a single CPU system back then I jumped straight from Win98 to Win2k as soon as I could and stayed there until after SP2 for XP had been released. Back then I really wanted a BP6 (and later a VP6) but I didn't have the spare money and the BP6's limitation of only (unless you did a hardware mod which came later) working with Mendocino core Celeron's (533 Mhz max stock speed) put a damper on my BP6 enthusiasm. At about the time I was going to pull the trigger on a VP6 board with dual Coppermine-128 celeron's in it in 2001 AMD released the original Athlon XP chips in October 2001. I attended an early morning AMD pop-up roadshow event in Chicago and won a top of the line for the time Athlon XP 1800+ and a MSI KT266 (not A) chipset based board. So I stuck with that for a while and put the thought of a overclocked VP6 rig aside. Those were the days though! Good times!
NT5/Windows 2000 Pro was what I ran on systems like this. Runs almost all W98 software since it had working Direct X 7 and even got DX 9 support a bit later. The UI was also pretty much exactly like XP but with a 98 skin on it.
@@GenOner Yes and no. The licensing between them was very different. 2kPro was much more restrictive with CPU count because it was never marketed for dual core and up CPUs. 2K also ran better on low memory/low power systems because it totally lacked the background windows update and activation infrastructure that XP got. Pretty sure the XP kernel also had some fairly significant changes too. Regardless, I skipped XP and Vista mostly, other than playing around with them on other systems out of boredom. I actually used XP-64 more than vanilla XP.
@@karathkasun I stuck with 2K as long as I could, but there came a day when it was no longer supported by Citrix and I couldn't log into work remotely (yes we did that before the pandemic) so I ended up getting upgraded to XP on the company dime. By that time, XP was in pretty good shape and the transition was pretty easy. I think I had one or two extremely old (Windows 3.0) apps that didn't work under XP, both of which had better replacements available for free, and I think I had to get patches for a couple other programs. Not surprisingly, those same programs broke again going from XP to 64-bit anything.
26:45 The way you started talking about the OS options just reminded me of The Hitchhiker's Guide to the Galaxy's talk about currency. "In fact there are three freely convertible currencies in the Galaxy, but none of them count. The Altairian Dollar has recently collapsed, the Flanian Pobble bead is only exchangeable for other Flanian Pobble Beads, and the Triganic Pu has its own very special problems. Its exchange rate of eight Ningis to one Pu is simple enough, but since a Ningi is a rubber coin six thousand eight hundred miles along each side, no one has ever collected enough to own one Pu. Ningis are not negotiable currency, because the Galactibanks refuse to deal in fiddling small change. From this basic premise it is very simple to prove that the Galactibanks are also the product of a deranged imagination."
2 місяці тому+9
The "celerons" story the missing bit is that Celeron CPUs were not supposed to run SMP, while those were obviously stripped down Pentums, this feature was blocked. But the lock was broken and this board was able to bypass the restriction.
IIRC, it had something to do with the BX chipset. It would not allow SMP on the Pentium III, to avoid cannibalizing the new Xeon market. But, it didn't stop you trying SMP on Celerons.... That, combined with the low cost and massive overclockability of the 300a ... voila. BP6's perfect niche.
2 місяці тому
@@nickwallette6201 Not sure on if/how this relates to BX chipset, but this was the thing on Pentium II Celerons already. PII was running SMP just fine. TL;DR is that there were two pins needed to boot CPU in SMP configuration - Intel nerfed one of them on Celerons so it never could be selected... but the failed to do so and by doing an improper power on sequence it was possible to bypass this limitation. For detailed reasons look for "celeron smp" ars technica article :) I guess they went further when s370 CPUs come along, but since 440BX started as PII chipset with no chipset-side restrictions on SMP/CPU support... I guess that was the next chapter in the story.
The BP6 came in at a very specific crossroads that made it "also consumer interesting" as opposed to "only enthusiast interesting". To address all points at the same time, previous closest thing would have been a dual slot A with a pair of 300A's running NT4. With pins drilled by hand (that's what the "unmodified Intel Celeron" means on the BP6). The BP6 could take (un-drilled) Celerons, Windows 2K was coming out with better Direct X and desktop feel and memory was becoming relatively cheap. All that together made it appealing to the consumer segment that wanted a bit more oomph but was budget limited. The next step on the "budget/consumer SMP" would have been the Dual Socket A with dual Duron's, that came out a bit later, ~2001 IIRC. p.s. you're totally right about the "doing other stuff while x is running". That's what made A64X2's and C2D's such a game change with Windows XP. Compared to previous P4/AXP boxes, the new ones with that extra core meant you could do other stuff while the "loaded core" was churning away. And after that, the Core iX with HT, that while not being full threads per core, DID make things A LOT smoother.
Yea I had a slot board with an adaptor for this reason. Got lucky as the board was able to be firmware upgraded to P3 and supported the whole Celeron overclock. I really did feel like I was sticking it to the man at that time.
@@warlockd Well, we sort of were ;) Think especially the 300A was a huge FU to Intel. And AMD's "wink and a nod" to the enthusiasts by laser cutting the bridges that you could easily restore with a pencil was another way to stick the knife in Intel's back. It was like "we know who you are, we know you're on a budget, we have to pretend we care but we really don't, knock yourselves out". And look were we are, Intel still nickel'n'dimming and AMD still not caring and still pulling ahead... If they could bring their 8/16's down to 6/12's price range, Intel would pack the back and call it a day.
You're spot on with the "smooth feel" thing. I remember when we went from high-end singlecore (Athlon XP) to relatively early multicore CPUs (Brisbane Athlon 64 X2), it was astonishing how smooth the things felt. It was a jump in usability that was only trounced by the introduction of SSDs later on. We were on windows XP by that time, so software was ready. Before, when you launched say a browser you couldn't also open task menu. You were used to it, it had always been that way. With two cores you could use the machine even while it was "thinking". It was revolutionary, we just got used to that new normal so fast.
I loved that board. It was my first watercooling rig. I had to machine my own waterblocks and used a Ford Aerostar transmission oil cooler as a radiator with an aquarium pump and was rocking dual 366mhz Celerons OC'd to 600mhz. I was essentially rocking 1.2 ghz of processing power when 800mhz cpu's were about the fastest you could get. It was also significantly cheaper. That is what was so special about the BP6.
some info about NT4 and why it does not have some features of W95/98... Inside MS work was going on for NT 4 (Cario) to upgrade NT3.51 (Daytona) Win95 hit the market and the big business customers started wanting Win95 interface on NT 3.51..there were lots of arguments internally Cairo was not done so basically MS was at least 1-2 years out for NT 4 Cairo..well those big businesses carry alot of weight they buy things by the truck load...so NT 4 Cairo was put on hold (or at least planed for NT5) ..MS needed something fast..so they basically took NT 3.51 and put Win95 Chicago skin on it (there are a couple of dialog boxes deep inside NT 4 that are actual NT 3.51 boxes)..and released it as NT4 ...hated by overwhelming amount of people at MS...sold like crazy to businesses as an upgrade...work continued on NT5 Cairo but due to lots of issues a good portion of it had to be scrapped (i.e. Hermes etc) so what was left was now NT 5 Alpha (which would become Win 2000) ..this should explain the issues you had and also wondering how far back Win 2000 went... W2K Alpha would go back to late 1997 and Beta 1 would have been mid 1998... Internally MS was pretty proud of Win 2K and wanted to forget about NT4 NOT RELAVANT: i don't know how much is known about Hermes project i got to see it in action (in early Alpha) while i was at MS and play with it...one of the many features was that you had a master Admin screen for your whole company and you could configure it in many ways by department...i.e. when a machine was booted it could re-format the drive go out and get that departments disk image and install it automatically..(assuming large LAN pipe) any viruses or software the employees put on it would be gone..you could also manually trigger this and have the machine boot normally unless triggered ..all controlled by the master Admin screen..the part that was being worked on when it was stopped was in Admin screen you could drag and drop software packages and it would create a disk image for the department.. a part of Hermes became Systems Management Server (SMS)
Intel distributed a version of the dual-Pentium-2 Precision 410/420 (with their own branding instead of Dell's) as an i440BX chipset reference system. One such system (with a POST card installed) known as the "Crash Box" has been in use at Carnegie Mellon University for testing student OS projects for decades. I have fond memories of watching my kernel boot on this machine for the first time.
Biggest Issue with NT4, was lack of USB support. No USB flight stick\gamepad for you. Direct X support only up to 3.0 another big hindrance to gaming. PCI sound cards also an issue, you probably only would of had luck with a SBLIVE! and only the early versions of it too. Windows 2000 is where SMP systems got alot better for gamers running such a system.
Thanks for putting this out when you did. Throwing it on with a burrito and a beer was a great way to wind down from processing emotions about Cohost closing. Looking forward to Part 2!
I have watched so many bench videos that this - proper - video absolutely blew me away. Well done! Not gonna lie, I was thinking about turning it off during the Benchmarks chapter, but I'm glad I didn't.
SMP is a term used at least within OS development generally for all systems that have multiple cores with the same capability (speed, instruction set, memory protections), so it's inaccurate to say it disappeared with multicore SoCs - it rather took off and is now the norm. The opposite of SMP is things like Intel's big&little cores, the same thing on ARM phone SoCs, and microcontrollers like the ESP32-C6 that has extra low-power cores with different instruction sets.
Interesting. Now I have to figure out how to read up on that. I figured "E"fficiency cores were just for doing tasks that are somehow designated as requiring little processor time, with very low probability of wrong predictions. But that was just a guess, an inference from the name. And I had no idea about that last processor you mentioned.
I never owned one but my recollection of what made the BP6 hot at the time was specifically the fact it worked on Celeron processors, which was not supposed to be possible or maybe even allowed by Intel. Also its release happened at the around the same time as Windows 2000 (beta and RC builds were widely "available" in 1999) which was a great desktop OS compared to Windows 98, being able to run most games without issues too and it supported SMP. I have some recollection of my friend with a BP6 demonstrating how smoothly he could browse the internet while having other stuff running on the background.
Bravo! I felt the same as you with celerons, of course, but I knew looking back, there was something going on. Then you explained it and it made perfect sense again. The two of them explaining the CPU work, made a lot more of it click for me. And then hit us with another part, going more in-depth. Perfetto! Love to see this direction you’ve realigned with.
every time anyone mentions Win NT... I am reminded via nostalgia that NT is a fork of OS/2... Then I am reminded via more nostalgia that Win2k solved most of, if not all the problems that NT had and was limited by.
My first dual-core CPU was an AMD Athlon 64 X2 3800+ which I put in my second build when I was a kid. The first thing I did when it was all up and running was play Counter Strike: Source while running a full-disk scan with AVG antivirus. I was instantly convinced that it was a game-change. What a time. Great video. I love your long-form content and I eagerly await part 2.
I finished my engineering degree in 2002, and had daily driver of Windows 2000 with dual boot of Windows ME (which worked better for me than 98 SE, which other preferred). I worked on a dual core machine which was part of my final degree project (it utilised multi threading for data processing). I remember thinking how I barely even needed ONE CPU to perform the task, and just using the dual processor config just so I can write this in the project description and justify buying the data, lol. It kinda made the system more stable of course, in case a background process would hog one CPU, the other one was running my code fine ("real time" data streaming in).
As a BP6 polymod owner, I loved every minute of the video, even if people pointed a few caveats. I do have 333s on mine, hope I can bump the FSB to 100. btw as far as Award BIOS is concerned, the earliest iteration similar to yours was around 1993-1994.
I had a Precision 420! I had no idea the PIII worked with RAMBUS, I coulda sworn that was just a P4 thing. I also was completely unaware that you could do dual slot PIII. I thought that was just a Xeon exclusive thing at the time. Such a weird PC.
The memory controllers were on the chipset instead of on the CPU back then, so in theory you could pair any kind of memory with any kind of CPU. Intel themselves proved that when the initial Pentium 4 chipsets only supported Rambus and they claimed it's essential for P4s to work, but then a year later they caved in and released a chipset with regular SDRAM (SDR and DDR) support.
This brings back so many memories. The struggles of early multiprocessor machines. The lack of multithreading in software. The scheduler not really being the best for it, although, Windows 2000 and XP were a world of difference for SMP support. Then getting into things like processor affinity if you really wanted to fine tune a system. And then there was being in school learning about OS concepts. Threading, preemptive and cooperative multitasking and how preemptive is the only way to go. Windows 3.1 is a stark reminder of why cooperative multitasking doesn't work - one process breaks the rules and it's all over. I can say, adding threading to an application can be very complex, I've done it before but you really hit the nail on the head. It can be very hard to do and it needs to be done where it makes sense. It doesn't always make sense. Games, to your point are heavily multithreaded. Things like music, physics, graphics code, disk access, net code, etc.; are often all broken into threads. However, those threads may not be further subdivided so it can give the appearance of single threading where there's slow downs when a lot happens all at once and overall CPU usage is relatively low. Of course, synchronization across processors back then was a huge pain which is why a lot of old games struggle running on SMP or multicore machines. So it was all effectively single threaded even if there were some parts split into other threads. I think this video really did a great job at explaining a lot of these complexities without going into the mindbending realities of how threading is accomplished and controlled. Thank you for making this!
I had incredibly early broadband in 1997 with an INSANE speed of 300 kBit/sec. Back then I pulled ethernet cable to share the connection with my brother through a wall and just twisted the appropriate copper wires back together. Even piggy-backed ISDN over the unused pairs. Some scotch tape, tip top! You young people with your "crimping". pfft!
This brings back soooooo many memories, I worked in IT back then, so I remember working on the HUGE, weird servers like that big boy you showed, as well as the "small" servers like that Dell. I remember running NT 4, 98, Red Hat, BeOS, AND OS/2 all on the same machine just to prove to myself it could be done. I remember struggling to get games to run under NT because of its limited support for DirectX. And yes, I remember using jumpers and DIP switches to set up clock speeds, IRQs, and the like, as well as working with a rat's nest of cables and trying to route everything around those damn ribbon cables that took up so much room, and forcing reluctant Molex connectors to plug in because I am the old. I love your channel and I love to see all of the old hardware that I remember from my youth. Thanks for resparking my memories.
Yes, the BP6 was the first and only Socket 370 MENDOCHINO SMP System... Pentium 3 weren't supported... THough there were earlier Dual Slot1 BOards, such as the ASUS P2B-D or others, but nothing that was made for Celerons.
This board is a couple of years before my time, I started building in 2003 and I went with an Abit NF7-S, it had the same asthetics as this board. Awesome video man, above and beyond.
I too have been having problems with PCI sound cards. Also just later 90's sound cards in gereral. They all work prettey much perfactly in Windows 98, but then I switch to DOS and the only soundcard that does anything my SB AWE64. I use usisound in DOS btw.
a lot of later pci soundcards only work in dos with weird drivers, i've had good luck with emu10k1 based sb live values out of old dells but it's been a while and mostly i just use my awe64's
I am *so* looking forward to part two! Never stop being yourself and going completely overboard in your research, it's so amazing as I wonder in very similar ways =)
You had three different types of people using those things back in the day and only two did actually benefit from it: Full on Linux nerds (Compile jobs etc), Game-server hosts and those with too much money that just wanted "the best". Kinda like the dorks that went with Pentium Pro and discovered that windows 9x and its apps don't run well due to slow 16-bit support in that processor 😂
You know personally any of those "dorks" or it's just some urban legend? The Pentium Pro was a professional grade processor and unavailable for normal people. No one in my circle had it back then.
@@lordwiadro83 even worse. I were one 😂 They really, REALLY struggle with 16-bit code so w9x runs like molasses. I think you can even find benchmarks and videos these days if you want to see for yourself
59:21 Penium4 era Celerons really ruined the whole line. Imagine having a ~800Mhz P3-based Celeron, spending a good amount of money to "upgrade" to a ~2000MHz P4-based one a couple years later and finding out that programs don't run that much faster. What a fall from grace that was the line that had greats like the Celeron 300A.
I got super excited when I saw that Dell Precision! As a kid, my friend's dad offered me an old server and I gladly accepted. It was a Dell Precison that looked exactly like that one on the outside, but was slightly different on the inside. On mine, the rambus and cpu locations were swapped. The rambus was installed on two daughterboards that went in together and had these black wing handles that flipped out for removal. The CPUs were two black hunks of heatsink metal with a holographic Pentium Xeon sticker on each. It was the coolest thing I owned for several years. Never did much with it, I don't think the 10,000 rpm SCSI hard drive worked. I got Ubuntu running off a disc, but I didn't get far as a kid with limited internet access. Many years later I stripped it down and gave some of the parts away. Thanks for reminding me of a great childhood nerd memory.
The overclocking by far was the reason you got this setup, and why it became legend. Also had a BP6 with a pair of Celeron 400s (and GeForce 256) in 99, and I can't think of another CPU or even computer component over the last 30 years where you could not only get them at a semi affordable price, but increase their throughput by 70%+ via some small tweak in the BIOS (and a pair of FEP32 coolers), and at the very early stage when software was starting to support multi-process, get two of that same crazy value to work together. Those that were able to pick up a BP6 and a pair of Celeron 300As got just an incredible amount of value, I just don't think that has been seen since (would love to hear of other examples). And then Win2k became easily.. accessible and was still able to game, so was another win for this setup. To give people a modern equivalent for how much value these offered would be similar now to getting a being able to buy the cheapest Ryzen 3 and having a software menu to switch it to a 7800X3D for gaming, and a 7950X for productivity. When things like that come around, they become legend, because it is just so rare.
Oh Abit. I had a BX6 back in the day and it was a beast. Such great products made by an absolutely incompetently managed company that drove itself into the ground almost as fast as it grew.
@@meramsey It was indeed! I worked at a computer store at the time and we sold that exact combo to a bunch of enthusiasts. We kept telling the owner he should make a bundle out of it.
Man. It's like you made this video about my evil twin. I ran the Asus dual slot 1 P2B-DS board back then. Bought it with a couple of Celeron 300 and had to run the extra traces and drill out the back of the package to make them work. Then moved to dual P2 350 overclocked to 412Mhz. Then from that to P3 850. Ran SCSI for my optical drives and a 6 channel Promise IDE raid controller. 3Com NIC. SoundBlaster AWE64 Gold for sound card. Matrox Millennium 2 for 2D graphics and Orchid Righteous 2 3DFX for 3D until I upgraded. I was literally the dude at the LAN parties you were talking about. And yes. It was all under NT5/Windows 2000. Didn't upgrade until games insisted on Windows XP. This was an epic trip down memory lane. And yes I have most of that hardware here still :)
I had that same rig, minus the modified Celerons! Bought at an auction because my boss wanted the ATX case and power supply, I got to keep the case and the motherboard with dual Pentium II 400's, which was a huge step up from my overclocked Pentium MMX (200 running at 250) once I got a power supply (re-wired from an Apple, if I remember right) and eventually a case. That was my first Linux machine, since I didn't have access to Windows 2000 and Linux Journal came with distros, I could build an SMP kernel with optimized audio processing.
OS developers couldn't care less about that. It will switch task to whatever core is available if task was paused and a different core becomes available before the last one becomes available again. When it comes to performance you don't want any thread to change core's ever unless absolutely necessary. Spreading thermal load is purely by accident.
@@araarathisyomama787 modern OS schedulers tend to switch to whichever CPU core has the lowest utilisation rather than whichever is the first-available in a round-robin type way. Plus of course with modern Big-little architectures, the schedulers also try to keep them on the same type of core even when they are switching around.
Oh man, I went to college in the fall of '99, and a guy on the 5th floor of my dorm had a BP6 with two 366 Celerons overclocked to 500mhz, and he was the BADDEST DUDE on the block. I thought I was cool with my K6-2 450, and there was another guy on the 5th floor that had a slot1 P3 550, but we were all just bugs on his windshield! I had always heard that one of the things about the BP6 was that you weren't supposed to be able to run Celerons in SMP, but this board let you (hence the use of "unmodified Celerons" in the Wikipedia first paragraph). And there was something about a "pencil trick" to unlock the potential of Celerons in this era too. It's just ... fuzzy in the mists of time, and I guess I'll have to wait for part 2 to jog my memory.
53:47 The BP6 didn't have an APIC, so interrupts were still "slow". DMA would have helped, but DMA was still poorly supported, even in 1999. Even so, without an APIC it's a real limitation. This means hardware I/O would have been a real kicker for SMP efficiency, regardless of the caching in those two celeries. Celery chips were also starved for cache compared to what we would consider normal for SMP chips now, or even around that time. Still, that board made for an awesome budget linux server. Ahh, good times.
@@rasz Celery chips back then didn't have a local APIC, so in the end it was essentially like having a legacy PIC. You're correct that DMA existed long before the BP6, but support both in hardware and software was often flakey, with the perfect example being that HighPoint controller. Its DMA support is what gave it the bad reputation. There were plenty of examples of disk controllers back then were if certain DMA modes were enabled, performance and reliability would decrease, like UDMA/66 on that HighPoint controller. In these cases, a lot of controllers dropped back to PIO as a safe fallback, rather than a lesser DMA mode, which resulted in high CPU usage.
@@rasz UDMA for an ATA controller requires advanced DMA modes on the motherboard for bursting, so it does it with bus mastering, double-buffering, pipelining, double data rate, etc. but I guess you knew that. Everyone back in the day. It's still DMA (direct memory access) just not the way it would be done via an Intel 8237, but I guess you knew that and you're just being argumentative for the sake of pedantry. The Mendocino core celery chips definitely did not have a local APIC, or at least there was no way to activate it. This was surely part of Intel's cost-cutting measures back then. Later celery chips surely had a local APIC, probably when they were updated to be based on the Pentium-4 architecture, but none of those can be used on a BP6. The early celery chips were super cut down. The other big cut from the Pentium line in creating the celery chips was the L2 cache, although it was internal and ran at CPU speed, so... meh. There was also no L3 cache, they didn't support ECC RAM, they had a much lower FSB speed, and they had a crippled SIMD instruction set. It's all fairly obvious given the price point they were going for. Without a local APIC, windows NT or 2k, or Linux, wasn't able to properly control the affinity of interrupts. If I remember correctly, the BP6 did have an I/O APIC but without support in the CPU it's moot since everything would have to go through the legacy PIC anyway. With the legacy PIC, the interrupts couldn't be smartly routed - the OS cannot control the affinity - so in heavy I/O, one CPU may be hammered with interrupt requests, especially if the storage controller is crappy and keeps dropping down to PIO mode. I ran a couple of these boards as servers on the cheap and tried to squeeze every last bit of out them, which taught me a lot about the right way and wrong way to do SMP. For certain loads, the dual-CPU isn't much better than a single-CPU, but it was a very good board for running websites on Apache with CGI scripts, so long as the network I/O wasn't too heavy. This is 25 year old knowledge, and I think it doesn't matter what I say to you, as you will continue to argue :)
@benespection Yes this is 25 year old knowledge so im surprised you keep arguing about it :) >UDMA for an ATA controller requires advanced DMA modes on the motherboard for bursting all in the Chipset and fully supported on 440BX/ICH. >double data rate no DDR involved in UDMA > Mendocino core celery chips definitely did not have a local APIC Linux bootlog: >>mapped APIC to ffffd000 (010c1000) >>Initializing CPU#0 >>CPU: Intel Celeron (Mendocino) stepping 05 There is a problem with interrupts on this board, but its this board specific and has nothing to do with Celerons. You can find old thread on Linux mailing list titled "The buggy APIC of the Abit BP6" for more details. Something about errors under higher interrupt load and missing IPI messages. >they didn't support ECC RAM because CPUs at the time didnt support any ram :) Ram controller was in the chipset, BP6 does support ECC ram. >crippled SIMD instruction set Same MMX as normal P2 of same generation :p Even running with NOAPIC you only lose performance in server workloads.
What a fantastic idea for a new series. Thank you for all your videos Gravis. I really appreciate the unique lens you bring to these topics. Can’t wait for part II.
Exactly! And the 80 wire IDE cables starting to come with ATA/66+ were nice and stiff for keeping folds. Keep in place with double sided sticky pads or masking tape, and don't think how much of a mess that makes when you crack it open again a few years down the road.
It's been 25 years since I built that BP6 /w 2 Celeron 300As? They all dropped in and ran at 450... OMG Leo! I listen to TWIT every week! Thanks for the nostalgia!
Whoever gets mad at your explanations going longer than 10 minutes, they're wrong. Love listening to the long winded and rabbit hole filled explanations.
Definitely throw Win 2k on that thing. 2k fixed so many little issues with NT4 and you should see a lot better compatibility with games, and iirc most BP6 owners swapped to 2k as soon as it was available.
Windows 2000 is so underrated. It was basically XP with the Win9x/NT4 UI before XP even existed.
been trying win 2k on my single 1ghz p3 machine and (saying this as someone who grew up with win xp in the late 2000's early 2010's) it freakin kicks ass! orders of magnitude better than win98se in practically every way.
I kept Win2K on an SGI VWS320 and used it as a file server for years until shitty rural power killed the power supply in 2011.
Finest version of windows ever made and nobody used it.
Agree have used win2000 on dual cpu setups. i had custom workstation with two pentium 3 ( it was in 2002) it was my "in" movable data server. Used for client data copying and storage! And it was a Dell workstation motherboard cramed into a smaller case but because of 6 scsi disks thirtysomething gigs eatch it was heavy!
The amount of times this guy will say "I can't make a video about a PC, all they do is verify DMI pool data, boot DOS, and you type dir" and then still make videos about PCs (even if they're unique) is crazy. Don't stop now!
I owned this board when it was brand new. Pair of Celeron 366’s at 550. And you missed the reason why this board was so important. The Celeron wasn’t supposed to be capable of being multiprocessor capable. Then when the Celeron went from being slot based to socket based, they made “slocket “ adapters, this was a competitive market and the better slockets found a way to work around Intel’s SMP limitations with the Celerons. Turned out it was just a pinout difference. Abit, wanting to make a name for themselves built a motherboard with this mod built into it, and built this dual socket board. Not only were we taking Intel’s budget CPU and overclocking it 50% making it faster than the laggy Pentium II CPU’s, now we were able to build an epic machine that could run TWO of them for less than the price of ONE Pentium II. We were sticking it to the man! And Abit was our hero for taking them on.
Also, your pair of 500’s are on a slow 66MHz bus. The system really shines when you can run a 300 or preferably a 366 on a 100MHz bus at 450 or 550.
Intel should make the same “mistake” today
I couldn't have said it any better!
I ran two 400 MHz Celerons at 600 MHz on BP6. First time I booted, made my head spin.
I had the exact same setup
Bingo. Being able to build an SMP system for so little was a huge novelty in of itself. Intel wanted SMP to be a premium feature, even though it was functional throughout the line if not artificially constrained.
OMG I absolutely had a build based on this motherboard. Loved it. After it ran its course as my main machine, i moved the internals to a much taller tower, installed a ton of drives with a RAID hardware card, and turned it into the file server for our FTP sharing group (shouts out to the rest of ANiVCD!) and it lived under my desk and then in my closet.
Ah, memories!
The tiny bit I can contribute: those Starcraft images were splash screens.
Specifically, they're the backgrounds for the results screen, for a terran defeat, and a zerg victory (though, I think the zerg victory one was also just used as key art in a number of places).
Clocking the very cheap 300 mhz celerons to 450 mhz by raising the bus speed from 66 to 100 mhz was a dream steal at the time.
@@mrquicky Meanwhile, the potential performance gains from doing an FSB OC on early P4s is scarcely known even today. Let's just say if more people knew about it, there would be no late PIII/early P4 debate and intel is guilty of an extreme overabundance of caution when releasing the first generation of P4 CPUs. They set Netburst up to fail from the start. This is a night and day difference compared to intel in 2024, where they push their CPUs far beyond what is reasonable for the tired, old P6 microarchitecture from 1995 they are still using. They've had to drop HT purely because of their wheezy old Pentium Pro on steroids needing more power than the silicon can handle. They need a new microarchitecture and should have started developing one when Ryzen launched and, in the near future, Pat Gelsinger will be known as the idiot who destroyed intel.
Current intel CPUs are the equivalent of trying to drive a Ford Model T at 200mph....then there is the fact that Intel dGPU market share is so low, it is rounded down to zero percent. They failed to take note of how hard AMD has to work just to stay above 10% market share.
Hey Gravis. I can remember when you asked on Patreon a few months back, what we would like to see more. I think you're literally killing it. I am too young to would have understood that era. I have no intent to ever buy a retro gaming pc for myself. I will never even try to use any of these old pieces of software. But I wtached the entire video without a pause.
I love the little guy series. I love your insights from e-waste. Back when you were going full on youtube and quitting your day job I was actually a bit worried. But this was obviously just the fear of change.
Wish you the best, greetings from Germany.
The reason for the CPU pingpong of processes was to make more effective use of the caches. Every time you had a TXT page fault it would switch cores. There was a non-zero probability that the cache on the other core hadn't evicted the page you wanted. It also meant that when you went above 50% core utilisation and a process got evicted, it would be landing on a core that probably already had it partially cached. The cost of context switching was less than a millionth of the page fault on rotating rust, so it wasn't a costly gamble.
I also wondered if it was to balance usage over time or to not overheat one while the other's running cold. Both running at a mildly high temp has got to be better than one burning itself out, right? I don't know if that would happen with just a bunch of basic windowed applications open, I am not really familiar with old Windows, but if it did it seemed like a reasonable thing to implement to me.
The oldest Windows I remember using genuinely was probably XP, but I was young and the computer was already outdated then. It's funny, I probably a little too young for XP and VHS and CRT TVs but I had all of them because my dad was a cheapskate, I think. I mean, if it ain't broke and all that. I also didn't get a phone until I was a lateish teenager, and it wasn't new, it was a hand me down. Awkward story ahead, here be dragons: One time, when I was really young, I took a picture of my, uh, willy with someone's phone, not mine but a family member, but then got really scared and couldn't figure out the interface to delete it, it was probably Android 2 or something. Core memory, I was very stupid.
@@trashtrash2169 Well, at least you didn't try to cut the darn thing off. That wasn't me thankfully, but it was my oldest brother!
this channel has the best comments
@@yuribacon Yikes! That reminds me of this one time when my oldest sister told me about some kid using nail clippers to do the same... That's messed up! Why did she do that? Core memory, but not my fault this time.
This channel is proof that no matter how many tech videos exist on YT, there is always room for the nice personality + well-researched video. By simply following your own interests, Gavin, you’ve got a wonderful thing going here.
Tech Tangents made a video on this same board but mostly to document replacing bad capacitors. It might be worth checking your board's caps to see if they're still good.
Rather, swapping the caps makes sense before they die. If the caps go bad the board will stop working, varying from just black screen with no POST to magic smoke and dead board. Mostly they start going flaky, it'll POST and run normally then freeze, and then won't post. Wait 30min, and it'll POST, boot and run again for a random amount of time. Best is really to check the cap brand against the list, or the board model, order replacements, roll up sleeves while soldering iron warms up and go to work on it. If one waits until the caps go to shit, there is a non trivial possibility the board will stay dead even once caps are replaced, ideally it should be done on a working board.
This is a very high quality dive into the system and provides great context. I am the proud owner of the motherboard's little brother, the VP6, with new capacitors and dual 1GHz Coppermines. I've been building it and "enjoying" the process of making newer hardware work on an old motherboard. Ever since I heard about SMP back in the day, I had to have it. Thanks for this awesome retrospective!
Nice and congrats!
Hey, if you need help with telling the Stories (1:01:47) in the future, let me know. I'm part of a pretty big community (The Retro Web) who are probably a bit too knowledgeable/obsessed about that era of PCs for our own good. I (and many others in the community) would be happy to help connect any missing dots.
I myself have what may have been one of the low cost (read: sketchy) direct competitors to the BP6 -- a dual 440LX Slot 1 board from everyone's "favorite" brand, PCChips (though it's branded Amptron). It's a.... very strange board.
id actually be interested to see that thing
Pcchips that brings back memories.
@@ProtoMan0451 If I ever get back to video production, that board will definitely get its time on camera. In the mean time... I can't post a link because UA-cam, but go to The Retro Web (easy to find) and look up the "Amptron PII-2200 V3.1"
Remember the PC-Chips Lottery website? The martyr that ran that webpage should have been knighted for his efforts.
I adore this show concept. The best part of it is that you talk about everything, so even newbies can understand it. Explanation of multithreading and multitasking is fantastic!
Anyway, very good topic to go over. SMP was stong pain in the early days and I kind of encountered this myself. About half a year ago I've build dual socket 370 computer with 1GHz Coppermine chips and fast SCSI drives. Game performance... well.. it's nearly identical to single chip, but multitasking is something other. It's running under Windows 2000 and it basically shreds everything you throw at it.
Also, two servers you've shown are fantastic pieces of hardware, I'd love to get my hands on one of these, just to play around with networks and such.
Absolutely loved mine. Beige box with a window cut in it myself. Dual Celeron 400's running at 550mhz with water cooling and CCFL lighting. Had a 500 watt car amp bolted to the right side of the case and a second PSU with a half farad cap, and i DJ'd parties with it!
I loved that era of computing.
Everything you did was fully custom. You could browse posts on forums and get cool ideas.
I had a K6-3 400 that I could only overclock to 450 and stay stable, and I was jealous of the crazy overclocks the celerons could get.
I think I remember the Celeron 300a being notorious for effortlessly overclocking past 500mhz.
In 2002 I needed a new case because my previous one was AT format, not ATX. This was of course when everything was beige, so I knew I'd have to paint it and every other component I ever put in it. What I didn't know was that black optical drives would become common, or I probably would have taken that route. Instead I went with metallic dark blue. And that's why I was in the back yard the other afternoon spray painting a DVD burner with the very same can of paint I used 22 years ago, because I'm still using that case.
as a 64 years old man and a PC technician. I still use on my 13th gen Intel wıth that case (only PS changed through out the year)s. Its nice to see a smart guy who took me back those good old days.Thank you. I sold a lot of Intel's "server board" with dual pentium III cpus in it. between 2000 and 2001. Thank you.
I totally remember the celeron hotness. My girlfriend at the time was absolutely bonkers about hers, lol. Also, unless it's changed since I was in grad school, most dictionary-based compression algorithms are impossible to parallelize. The state of the "dictionary" changes after every token is processed, so it's not possible to know what the state of the dictionary will be at any point in the file other than where the "head" is.
Yep, back then I was constantly on the overclocing forums and being envious of all the wild Celeron overclocks.
I think I remember the Celeron 300a was super popular because you could pretty effortlessly get 500+ Mhz out of it.
The way to parallelize it would be by decompressing two files at the same time - this actually *should* be an area where zip has a huge advantage over 'monolithic' archive formats like .tar.gz, since each file within the archive is an independently compressed stream, something that's historically been regarded as a disadvantage since it of course makes the compression itself less effective than it otherwise might be.
450 was a given, 504 was common but not always
@@krz8888888504 on a cold night, backyard slider door open, case next to door
The only way i can think of parallelise compression is ti split one file into multiple smaller ones. This is what i did sometimes. instead of "onebigfile zip" i had 4x "partofonebigfile zip" which were then merged together after extraction. But this felt kinda whacky at the time and was a manual process, i have to admit, to cut a single file in pieces. It was buggy too sometimes, espcially when receiving cut pieces from someone else who may have used a slightly different algorithm or settings. Thankfully, most mainstream archiving programs can do it automatically nowadays in one simple setting.
multicore dual cpu workstations were a different hassle too with the NUMA nodes. Virtualisation was a must. I remember having a dual xeon workstation with the core2 architecture. Was the same issue just with more cores.
//typo
Thank you so much for this trip down memory lane! Had a BP6 and absolutely loved it!
And I had regular Pentium 3 in mine.
This was indeed a fun video. As good as the summer of bench vids has been, it's nice to have you back in the studio. Looking forward to part 2!
I think it would be cool to learn more about NT in particular. I know there's probably plenty of existing resources out there but any time I look it up, I'm confused about why it existed (beyond the technical improvements made to Windows) as a consumer product.
@@EvilCoffeeInc In a nutshell, NT existed because Microsoft recognized the inability of the legacy Windows code base to accommodate the needs of businesses. Windows 3, while pretty impressive for it's time and target audience, was simply not sophisticated enough to power a machine that needed to have 24/7 uptime. It was decently stable on its own, but the kernel did not leverage protected mode as much as it needed to in order to protect the OS from misbehaving apps. NT was a clean slate redesign that was intended to give Microsoft something on par with Unix in terms of stability and scalability, while still allowing the use of existing windows software. The way they went about this, from what I understand, is they hired a guy who used to work on VMS, a legendary big iron OS, and told him to just go nuts and make a kernel that could stand up to the rigors of industry. They then bolted the Windows 3 interface and APIs on top of that, which produced something that by and large could be used the same as their existing consumer OS, but still had a lot of rough edges that kept them from just switching the entire product line over to NT immediately.
Gotta love the unfettered willingness to pen a spiel in the UA-cam replies. CRD is the last bastion of real UA-camrs, the 'You' has meaning after all.
@@CathodeRayDude Also let's not forget that NT was portable across CPU architectures from the get go. Even the earliest release of NT supported DEC Alpha and MIPS in addition to x86, while 3.51 and 4.0 added support for PowerPC. In hindsight, all those RISC ports were little more than a curiosity, and got dropped with the release of Windows 2000, but I think this architectural investment in portability was important once the switch to 64 bits happened (also remember that the first 64-bit version of Windows was actually for Itanium!) and more recently with the ARM ports.
And they were truly committed to portability. On Dave Plummer's channel you can hear his stories about how he ported the 3D Pinball to NT of all things, which was a demanding task because much of the original codebase was in x86 assembly, which was a no-go for NT. It had to ship for all supported architectures after all, and there were 4 of them at the time, so he needed to rewrite all the assembly code into C. Or C++, I don't remember. One of the two.
@@CathodeRayDude Also let's not forget that NT was built for portability from the start. Even the first releases of NT supported Alpha and MIPS in addition to x86, and they later added support for PowerPC as well. Those RISC ports ended up being little more than a curiosity in hindsight, but they paved the way for why it was relatively easy to port Windows to 64-bit platforms (including Itanium) and later ARM.
And they were serious about it from the get go. Dave Plummer has a video in which he explains how he ported the 3D Pinball of all things from x86 assembly it was originally written in, to portable C or C++ because it had to run on all the supported architectures for the NT version.
As I understood it at the time, pre-NT Windows was basically two OSes stacked one on top of the other, with Windows employing a lot of hacks to overcome the various limitations of DOS. There's only so far you can go with that, especially since MS-DOS was last updated in 1994. Eventually you have to rewrite the kernel to keep up with newer hardware, and when they did, NT was what they came up with.
I needed this so bad tonight. Thank you Gravis.The place I worked at for past 12 years apparently was sold and we just found out today our last day is tomorrow, I got a baby on the way mortgage due etc etc and we were given no heads up so we could arrange our finances. Stressed isn’t the word. Thank you for always posting content right when I need it to decompress.
This episode was such a nostalgia fest for me. The first PC I built was from this era, and I was deep into some of this stuff. For what it's worth, I enjoyed the somewhat long journey you took us on to get to the actual motherboard.
It had an Abit mobo and a Slot 1 Pentium 3 that I overclocked from 700 MHz to 900 MHz. To achieve that, I had a bunch of high flow fans, and I even did stuff like sand and polish the heatsink (which I'm pretty sure is useless now, but I had too much free time then). It was in a variant of the Enlight case that I rattle-canned metallic blue. I also dremeled a couple holes into the side and installed fans for extra cooling for the GPU and processor. All the fans made it sound like a jet spooling up when you turned it on. For its power light, it had one of the early blue LEDs. It was obnoxiously bright, enough to light up a room.
I lugged it and a painfully heavy 19 inch CRT to a lot of LAN parties, and played a lot of Counterstrike and UT. It was my first DVD player, thanks to an MPEG card, and the first DVD I watched on it was The Matrix.
my back just winced at the thought of that crt, i used to lug a 19" hitachi crt around to those.
I ended up getting a 21" Sony crt that probably weighed close to 50 lbs or so from a cousin that got two of them. I lugged it around just like you said, we lanned almost every weekend in high school lol
"This thing is boring"
Proceeds to make a hour long vid that grips me from beggining to end
You did it again
The Enlight 7237 is the case I have built the most PCs in. My dad's company threw away tons of them and I put everything from Pentiums to Core i boards in them.
Are you gonna talk about that mistake intel made that made this board usable for consumers in part 2?
yup
@@CathodeRayDude of course.
You never disappoint, Gravis. I was a little scared on that last part, but you have me relieved.
@@CathodeRayDude Awesome. That's a story that has definitely not been told, at least not on UA-cam. You've been spending a lot of time in old forums haven't you?
@@lemagreengreen Sounds like that one time intel forgot to disable base clock modification on non-k CPUs, allowing you to buy cheaper "non overclockable" CPUs, and overclock them via base clock.
@@hikkamorii Is that just like FSB overclocking? I admit it has been many years since I did anything like that but we used to always bump FSB up a bit.
On many "multiplier locked" CPUs of the late 90s/early 00s there were tricks to enable multiplier control, Athlon first generation had pins on the cartridge board that we plugged dip switches in to control ,multiplier, Athlon Thunderbird/Duron had traces on CPU package we connected with conductive paint to set multiplier etc.
Really nice description of multitasking and multithreading. I mean, I already understand this stuff intuitively but this is probably the best explanation I've heard of how this works. Like, you didn't dumb it down to the point of being an abstraction, but you did make it simple enough to understand (I think) for someone who doesn't already understand this stuff intuitively.
Brooo, can't wait for part 2!!!
This was pretty epic. I've always found multiprocessor PCs kinda fascinating.
I wanted a DayStar Genesis Mac clone in 1996 for no other reason than bragging rights. Almost nothing I used would take advantage of the extra PowerPC CPUs, but _damn_ those things were dope.
Classic MacOS itself had no concept of multiprocessing, you needed a special system extension to enable bare minimum support for it. Even with that extension, it only supported round robin SMP. Since Classic MacOS also had no concept of memory protection (it was a cooperatively multitasked OS like Windows 9x), it just made the system more unstable and more prone to crashing.
Had Apple kept up development of A/UX when the first 60x MP Macs came around, they would have had a far more powerful market position, and not be stuck with crippled System 7 and bolted on 3rd party acquisitions until 1999/2000 when OS X came out.
@@GGigabiteM All true 🤷♂️
@@GGigabiteM I thought Win 9x had preemtive multitasking; not that it mattered much since that generation of Windows was notoriously flimsy.
@@Desmaad 9x had preemptive-ish multitasking in the same way Amiga did - which is to say, no wider protection from programs overwriting each other's memory spaces or crashing the entire system; but at least a program couldn't hog all the CPU time.
@@kaitlyn__L Badly written programs could definitely eat all of the CPU time. Two examples would be Sim City 2000 and Yoot Tower.
I'm only a quarter of the way through this video and as a first time viewer you have my sub.
Your narration and diction is great and you really understand the nuances of the point you are trying to convey. Look forward to waking up at 3am with youtube jammin all night and having some oddball dream about overclocked Pentiums.
We had a dual Pentium Pro system at work back in the day. We used it to live stream Princess Diana's funeral to the whole country. Simpler times.
Quad Pentium Pro systems were for the fancy folk.
I was in retail selling computer parts when this board came out. You reminded of so much of my time then with just one video. I can't wait till part two. Thank you.
Overclock scene nowadays is a joke.
The point of Overclock was to make slow mainstream cpu /gpu to go fast at a reasonable cost.
Nowadays it means to make already very fast components to go even faster. Overclock became restrict to expensive cpu and high profile mainboards.
Kind of the reason why overclocking is a joke is that modern CPUs are either squeezed so much for every ounce of performance, binned into a new SKU entirely, or have pieces of silicon disabled.
@@No-mq5lw don't forget overpriced motherboards to even allow you to atempt to get the 200mhz more
Yeah, with cheap CPUs sometimes coming from artificially crippled high end chips, or at it's peak where you could literally unlock more cores in some AMD CPUs it was essentially chip lottery. Of course it was exciting!
Nowadays there is none of that, "normal" components become unstable at anything other than stock bus speeds so you can no longer "literally overclock your entire PC at once" and "unlocked" chips you pay extra for performance that you may possibly get if you come prepared and lucky.
I was big overclocker when I was running Core 2 Duo for way longer than I should and now I almost forget I ever did that... I'd just buy faster "normal" chip nowadays. I don't even overclock my GPU, it's just not worth it. This is what happens when corporations figure out their product.
@@No-mq5lw My 5600x says 4,2 on the box, I runt it at just shy of 4,8 on all cores. It's well enough to make a noticeable difference when gaming.
Uhm... eh... Maybe you should try AMD?
I had one of these with Celeron 533s. Having an SMP machine in your house on a college student budget was a game changer. I ran Linux, FreeBSD, and BeOS 5 on it at various points. I could test multithreaded code projects without needing to log into to college lab server. And as a daily driver, running multiple apps was so smooth.
The BP6 can be credited with getting a lot of enthusiasts excited about getting into SMP. But sadly it also quickly garnered a reputation for being a quirky beast.
The Highpoint ATAPI controller was particularly hated. I went with an Epox dual slot 1 board instead with Celerons in 'slotket' boards. Still loved following the BP6 community.
I still have that same Epox kp6-bs with the slockets, it was a beast for back then (though it has PIIIs in it now, i often think of sticking the celerons back in it for originality sake).
Yeah, the Highpoint was a plague on my board. I was planning to get the thing de-soldered but the board gave up the ghost before I had the chance. Was out of warranty by then and much faster CPUs were out, so it sadly went into e-waste. :(
It has a particularly special place in my heart. Used to have a hardware review site back in this era as a teenager, and the BP6 was the first board we were sent for review. These days I have BNIB one that I plan to recap and put into service as a FreeBSD retro web server.
20:11 WHAT!?! No backplate!?!?! 😂 I kid… But man, you’re taking me back on the nostalgia express! I’m so thankful you have this channel! I had one of these with overclocked Celerons, in a full tower case with casters on the bottom and three ultra wide SCSI drives, with a pair of 3-D Voodoo video cards running in whatever they called SLI. Please keep making these videos, don’t ever quit!
Also as a heads up, the reason we bought Celeron processors was due to the fact that you could overclock the hell out of them. You could not nearly overclock the pentium version. Not only because it would be unstable, but also due to the heat. The celerons gave you so much more performance when overclocking they blew past most of the pentiums that I knew about.
They're getting REALLY hard to find now -- even the early ones that all had exactly the same layout, with optional LAN above the USB, and optional sound.
>Vooodoo cards running in whatever they called SLI
...they called it SLI. nVidia got the trademark when they bought 3dfx's corpse. Though under 3dfx it was an acronym for "Scan-line interleave" while nVidia uses the much less specific "Scalable link interface"
@@Jay-ik1pt when did Nvidia make that change? I still recall everybody saying it stood for scan-line interleaving circa 2005-2010. (2011 is about when I stopped lusting after SLI.)
@@kaitlyn__L AFAIK nVidia bought the rights to the name and changed what it meant immediately as soon as they started using the acronym for marketing their own cards but my memory isn't THAT good to remember for sure.
What I do remember, from reading about it back when nVidia's SLi first came about is that 3dfx's SLI and nVidia's SLi had little or nothing in common besides the name and concept, supposedly the actual technology behind it was quite different. But, again, going on old memories based on info from people who were journalists in the industry at the time, themselves relaying information from nVidia's marketing rather than the actual engineers.
I'm really excited for the next video. I recently picked up my first mid-90s PC, a HP Vectra XU5/133C, and it's been fascinating to figure out what its place was in history. It's a dual-socket system, but HP never sold it with 2 CPUs as far as I can tell. NT existed in 1995 when my Vectra was built, but it was weird and exotic even relative to the NT4/W98 days. It's been loads of fun to use it and learn what computing was like before I started to learn about it as a kid.
This finally happened to me, a CRD upload just as I open YT after long day at work. Thanks for the video as always
Fantastic story tell! I kinda love that most videos start with an obvious question that isn't answered till the end. It's usually pretty left field and a joy to finally get the answer!
2:46 "hanging on by a thread" lol good one
Or at least two threads
@@LtMoochsingle core. Dual thread hah
Thank you for making this video. So much of it resonated with my personal experience around that time, when I was in late high school/early college, gaming and video editing. This was the experience that make me swear off single-thread CPU systems forever, and I didn't replace my BP6 running dual Celeron 300As (running at 464, I couldn't get my two to run at 504MHz, which was 112 FSB at 4.5x) until the Pentium 4 added Hyper-threading three-ish years later. I hope you are able to tell the story of Slotkets, the SL2W8 (the Overclocker's dream Pentium 2), and a time when overclocking was all about getting great performance out of cheap parts, and not the way things are now where it's about taking the most expensive parts and making them even faster while being less efficient. It was truly a remarkable time, where if you were "in the know" and willing to roll the dice, you could get a machine FASTER than the fastest "official" PCs, at half the price. What a ride it was.
Also - hopefully in your subsequent video on video acceleration that you teased here, you talk about the Pinnacle DC10/DC30 which was another wonderful example of getting more for your money!
There's not many legendary motherboards but BP6 is one of them. This thing with dual overclocked celerons, as much ram as you could afford and a copy of Windows 2000 provided serious epeen at the time, a legit PC workstation that many teenagers could afford.
Of course it got absolutely thrashed by an Athlon released in the same year but meh, dual processors. It's still cool to have more than one processor and at least some FPS games could actually take advantage.
I don't recall about the Socket 370 ones, but the Slot 2 generation was significantly more overclockable than the flagship Pentium IIs because of their reduced onboard cache. Whereas the full-blooded P2s were running around 233/266 MHz, people were easily overclocking the Celerons to speeds over 400 MHz.
The Celeron A300 @100 MHz FSB. You paid the price for a Pentium II 233, but you got the performance of a PII 450. Half the CPU cache, but running at the full 450 MHz. Next to the 2600K one of the best Intel processors.
Yup, came here to mention that…also seem to remember it’s heat sink on it was much bigger than it really needed so stayed cool
I was running Windows 2000 pre release versions on my machine in 1999. It was still tons better than NT.
Me too, 2000 was the best for bp6
Agreed. it was rock solid and had a better user interface than NT4. It also had very good support for DirectX.
Had a older friend of my mom give me his msdn discs nt5 beta 2 i belive.
2k was the 💡 moment. NT architecture with slightly improved stability, a massive driver library and support for older systems..
By XP & Server 2k3 - the support for older machines was a big middle finger.. but there were good reasons.
I'll chime in with the same comment many others have made. Win2k Pro is the OS you want to run on a BP6. It fixed so many of the issues NT4 had, especially driver support and better game compatibility. Even on a single CPU system back then I jumped straight from Win98 to Win2k as soon as I could and stayed there until after SP2 for XP had been released.
Back then I really wanted a BP6 (and later a VP6) but I didn't have the spare money and the BP6's limitation of only (unless you did a hardware mod which came later) working with Mendocino core Celeron's (533 Mhz max stock speed) put a damper on my BP6 enthusiasm. At about the time I was going to pull the trigger on a VP6 board with dual Coppermine-128 celeron's in it in 2001 AMD released the original Athlon XP chips in October 2001. I attended an early morning AMD pop-up roadshow event in Chicago and won a top of the line for the time Athlon XP 1800+ and a MSI KT266 (not A) chipset based board. So I stuck with that for a while and put the thought of a overclocked VP6 rig aside.
Those were the days though! Good times!
Coppermine Celerons do not support SMP.
NT5/Windows 2000 Pro was what I ran on systems like this. Runs almost all W98 software since it had working Direct X 7 and even got DX 9 support a bit later.
The UI was also pretty much exactly like XP but with a 98 skin on it.
Windows 2000 was NT 5 and xp was NT 5.1, they were very similar in function just xp looked nicer and was updated and maintained longer
@@GenOner Yes and no. The licensing between them was very different. 2kPro was much more restrictive with CPU count because it was never marketed for dual core and up CPUs.
2K also ran better on low memory/low power systems because it totally lacked the background windows update and activation infrastructure that XP got. Pretty sure the XP kernel also had some fairly significant changes too.
Regardless, I skipped XP and Vista mostly, other than playing around with them on other systems out of boredom. I actually used XP-64 more than vanilla XP.
@@karathkasun I stuck with 2K as long as I could, but there came a day when it was no longer supported by Citrix and I couldn't log into work remotely (yes we did that before the pandemic) so I ended up getting upgraded to XP on the company dime. By that time, XP was in pretty good shape and the transition was pretty easy. I think I had one or two extremely old (Windows 3.0) apps that didn't work under XP, both of which had better replacements available for free, and I think I had to get patches for a couple other programs. Not surprisingly, those same programs broke again going from XP to 64-bit anything.
26:45 The way you started talking about the OS options just reminded me of The Hitchhiker's Guide to the Galaxy's talk about currency.
"In fact there are three freely convertible currencies in the Galaxy, but none of them count. The Altairian Dollar has recently collapsed, the Flanian Pobble bead is only exchangeable for other Flanian Pobble Beads, and the Triganic Pu has its own very special problems. Its exchange rate of eight Ningis to one Pu is simple enough, but since a Ningi is a rubber coin six thousand eight hundred miles along each side, no one has ever collected enough to own one Pu. Ningis are not negotiable currency, because the Galactibanks refuse to deal in fiddling small change. From this basic premise it is very simple to prove that the Galactibanks are also the product of a deranged imagination."
The "celerons" story the missing bit is that Celeron CPUs were not supposed to run SMP, while those were obviously stripped down Pentums, this feature was blocked. But the lock was broken and this board was able to bypass the restriction.
IIRC, it had something to do with the BX chipset. It would not allow SMP on the Pentium III, to avoid cannibalizing the new Xeon market. But, it didn't stop you trying SMP on Celerons....
That, combined with the low cost and massive overclockability of the 300a ... voila. BP6's perfect niche.
@@nickwallette6201 Not sure on if/how this relates to BX chipset, but this was the thing on Pentium II Celerons already. PII was running SMP just fine.
TL;DR is that there were two pins needed to boot CPU in SMP configuration - Intel nerfed one of them on Celerons so it never could be selected... but the failed to do so and by doing an improper power on sequence it was possible to bypass this limitation.
For detailed reasons look for "celeron smp" ars technica article :)
I guess they went further when s370 CPUs come along, but since 440BX started as PII chipset with no chipset-side restrictions on SMP/CPU support... I guess that was the next chapter in the story.
@@nickwallette6201 makes Dell's dual P3 from the start even more interesting.
@@kaitlyn__L Different chipset, I assume? Or maybe I heard/remembered wrong. :-)
@@nickwallette6201 I’m sure it’s a different chipset :)
Thanks - while I was on my way to bed wile I saw this, i didn't fall asleep. Thanks for reminding an old tech girl of her young years :3.
The BP6 came in at a very specific crossroads that made it "also consumer interesting" as opposed to "only enthusiast interesting". To address all points at the same time, previous closest thing would have been a dual slot A with a pair of 300A's running NT4. With pins drilled by hand (that's what the "unmodified Intel Celeron" means on the BP6). The BP6 could take (un-drilled) Celerons, Windows 2K was coming out with better Direct X and desktop feel and memory was becoming relatively cheap. All that together made it appealing to the consumer segment that wanted a bit more oomph but was budget limited. The next step on the "budget/consumer SMP" would have been the Dual Socket A with dual Duron's, that came out a bit later, ~2001 IIRC.
p.s. you're totally right about the "doing other stuff while x is running". That's what made A64X2's and C2D's such a game change with Windows XP. Compared to previous P4/AXP boxes, the new ones with that extra core meant you could do other stuff while the "loaded core" was churning away. And after that, the Core iX with HT, that while not being full threads per core, DID make things A LOT smoother.
Yea I had a slot board with an adaptor for this reason. Got lucky as the board was able to be firmware upgraded to P3 and supported the whole Celeron overclock. I really did feel like I was sticking it to the man at that time.
@@warlockd Well, we sort of were ;) Think especially the 300A was a huge FU to Intel. And AMD's "wink and a nod" to the enthusiasts by laser cutting the bridges that you could easily restore with a pencil was another way to stick the knife in Intel's back. It was like "we know who you are, we know you're on a budget, we have to pretend we care but we really don't, knock yourselves out". And look were we are, Intel still nickel'n'dimming and AMD still not caring and still pulling ahead... If they could bring their 8/16's down to 6/12's price range, Intel would pack the back and call it a day.
You're spot on with the "smooth feel" thing. I remember when we went from high-end singlecore (Athlon XP) to relatively early multicore CPUs (Brisbane Athlon 64 X2), it was astonishing how smooth the things felt. It was a jump in usability that was only trounced by the introduction of SSDs later on. We were on windows XP by that time, so software was ready.
Before, when you launched say a browser you couldn't also open task menu. You were used to it, it had always been that way. With two cores you could use the machine even while it was "thinking". It was revolutionary, we just got used to that new normal so fast.
He finally got access to his studio!
I loved that board. It was my first watercooling rig. I had to machine my own waterblocks and used a Ford Aerostar transmission oil cooler as a radiator with an aquarium pump and was rocking dual 366mhz Celerons OC'd to 600mhz. I was essentially rocking 1.2 ghz of processing power when 800mhz cpu's were about the fastest you could get. It was also significantly cheaper. That is what was so special about the BP6.
some info about NT4 and why it does not have some features of W95/98...
Inside MS work was going on for NT 4 (Cario) to upgrade NT3.51 (Daytona) Win95 hit the market and the big business customers started wanting Win95 interface on NT 3.51..there were lots of arguments internally Cairo was not done so basically MS was at least 1-2 years out for NT 4 Cairo..well those big businesses carry alot of weight they buy things by the truck load...so NT 4 Cairo was put on hold (or at least planed for NT5) ..MS needed something fast..so they basically took NT 3.51 and put Win95 Chicago skin on it (there are a couple of dialog boxes deep inside NT 4 that are actual NT 3.51 boxes)..and released it as NT4 ...hated by overwhelming amount of people at MS...sold like crazy to businesses as an upgrade...work continued on NT5 Cairo but due to lots of issues a good portion of it had to be scrapped (i.e. Hermes etc) so what was left was now NT 5 Alpha (which would become Win 2000) ..this should explain the issues you had and also wondering how far back Win 2000 went...
W2K Alpha would go back to late 1997 and Beta 1 would have been mid 1998...
Internally MS was pretty proud of Win 2K and wanted to forget about NT4
NOT RELAVANT:
i don't know how much is known about Hermes project i got to see it in action (in early Alpha) while i was at MS and play with it...one of the many features was that you had a master Admin screen for your whole company and you could configure it in many ways by department...i.e. when a machine was booted it could re-format the drive go out and get that departments disk image and install it automatically..(assuming large LAN pipe) any viruses or software the employees put on it would be gone..you could also manually trigger this and have the machine boot normally unless triggered ..all controlled by the master Admin screen..the part that was being worked on when it was stopped was in Admin screen you could drag and drop software packages and it would create a disk image for the department..
a part of Hermes became Systems Management Server (SMS)
love to learn about tech history from the source!
Nice!
Intel distributed a version of the dual-Pentium-2 Precision 410/420 (with their own branding instead of Dell's) as an i440BX chipset reference system. One such system (with a POST card installed) known as the "Crash Box" has been in use at Carnegie Mellon University for testing student OS projects for decades. I have fond memories of watching my kernel boot on this machine for the first time.
Biggest Issue with NT4, was lack of USB support. No USB flight stick\gamepad for you. Direct X support only up to 3.0 another big hindrance to gaming. PCI sound cards also an issue, you probably only would of had luck with a SBLIVE! and only the early versions of it too. Windows 2000 is where SMP systems got alot better for gamers running such a system.
I used to dual boot NT4 and windows 95 osr2 and home. nt4 was rock solid but for games, would often swap to 95
I feel pretty confident when I say that hardly anyone ran NT4 on these.
One way or another we all ran 2k before moving to xp.
Thanks for putting this out when you did. Throwing it on with a burrito and a beer was a great way to wind down from processing emotions about Cohost closing. Looking forward to Part 2!
This is going to be a 63 minute long core memory for me.
Multi-core, surely?
@@spitfire184prime pun opportunity missed. I failed.
I dont want the videos to be 10 mins, i want them even longer! I love listening to you nerding out about those exciting times!
Knocked it out of the park with this video. Can't wait for the rest of the story.
I have watched so many bench videos that this - proper - video absolutely blew me away. Well done!
Not gonna lie, I was thinking about turning it off during the Benchmarks chapter, but I'm glad I didn't.
The dual-core CPU having on by a... Thread. Prime opportunity to say "no pun intended".
your videos are always top notch CRD, and I look forward to everything you post and whatever comes next.
Love the title, keep it. Can't wait for part 2, I almost feel like they should be released together!
Bro I just want you to know your content is one of the few things that gets me through night shift, Thank you
SMP is a term used at least within OS development generally for all systems that have multiple cores with the same capability (speed, instruction set, memory protections), so it's inaccurate to say it disappeared with multicore SoCs - it rather took off and is now the norm. The opposite of SMP is things like Intel's big&little cores, the same thing on ARM phone SoCs, and microcontrollers like the ESP32-C6 that has extra low-power cores with different instruction sets.
Interesting. Now I have to figure out how to read up on that. I figured "E"fficiency cores were just for doing tasks that are somehow designated as requiring little processor time, with very low probability of wrong predictions. But that was just a guess, an inference from the name. And I had no idea about that last processor you mentioned.
I never owned one but my recollection of what made the BP6 hot at the time was specifically the fact it worked on Celeron processors, which was not supposed to be possible or maybe even allowed by Intel. Also its release happened at the around the same time as Windows 2000 (beta and RC builds were widely "available" in 1999) which was a great desktop OS compared to Windows 98, being able to run most games without issues too and it supported SMP. I have some recollection of my friend with a BP6 demonstrating how smoothly he could browse the internet while having other stuff running on the background.
I remember having insane debates in my school about whether one prefers "AMI BIOS" or "AWARD BIOS" in exactly THAT era. As if one had a choice...
Starting with the Pentium, Award. 486 and before, AMI.
Although eventually (Core 2 era I think), I stopped caring, and both were fine. :-)
Bravo!
I felt the same as you with celerons, of course, but I knew looking back, there was something going on. Then you explained it and it made perfect sense again.
The two of them explaining the CPU work, made a lot more of it click for me.
And then hit us with another part, going more in-depth. Perfetto! Love to see this direction you’ve realigned with.
every time anyone mentions Win NT... I am reminded via nostalgia that NT is a fork of OS/2... Then I am reminded via more nostalgia that Win2k solved most of, if not all the problems that NT had and was limited by.
My first dual-core CPU was an AMD Athlon 64 X2 3800+ which I put in my second build when I was a kid. The first thing I did when it was all up and running was play Counter Strike: Source while running a full-disk scan with AVG antivirus. I was instantly convinced that it was a game-change. What a time.
Great video. I love your long-form content and I eagerly await part 2.
I finished my engineering degree in 2002, and had daily driver of Windows 2000 with dual boot of Windows ME (which worked better for me than 98 SE, which other preferred). I worked on a dual core machine which was part of my final degree project (it utilised multi threading for data processing). I remember thinking how I barely even needed ONE CPU to perform the task, and just using the dual processor config just so I can write this in the project description and justify buying the data, lol. It kinda made the system more stable of course, in case a background process would hog one CPU, the other one was running my code fine ("real time" data streaming in).
Oooh, a WinME friend! I ran ME on a K6-2@450 and had way less problems than I'd with either 96 or any 98 edition.
As a BP6 polymod owner, I loved every minute of the video, even if people pointed a few caveats.
I do have 333s on mine, hope I can bump the FSB to 100.
btw as far as Award BIOS is concerned, the earliest iteration similar to yours was around 1993-1994.
I had a Precision 420! I had no idea the PIII worked with RAMBUS, I coulda sworn that was just a P4 thing. I also was completely unaware that you could do dual slot PIII. I thought that was just a Xeon exclusive thing at the time.
Such a weird PC.
The memory controllers were on the chipset instead of on the CPU back then, so in theory you could pair any kind of memory with any kind of CPU. Intel themselves proved that when the initial Pentium 4 chipsets only supported Rambus and they claimed it's essential for P4s to work, but then a year later they caved in and released a chipset with regular SDRAM (SDR and DDR) support.
Look up Precision 330.
I don't have one anymore, but I still have my Socket 423 RAMBUS Optiplex from that era.
I have never heard of the BP6 before but this video was fascinating and I couldn't stop watching. Can't wait for the second one.
Loved my BP6... still have it in storage somewhere. I think mine had 366MHz Celerons.
I remember WANTING one of those and not being able to find one... love that you're doing this in two parts.
This brings back so many memories. The struggles of early multiprocessor machines. The lack of multithreading in software. The scheduler not really being the best for it, although, Windows 2000 and XP were a world of difference for SMP support. Then getting into things like processor affinity if you really wanted to fine tune a system. And then there was being in school learning about OS concepts. Threading, preemptive and cooperative multitasking and how preemptive is the only way to go. Windows 3.1 is a stark reminder of why cooperative multitasking doesn't work - one process breaks the rules and it's all over. I can say, adding threading to an application can be very complex, I've done it before but you really hit the nail on the head. It can be very hard to do and it needs to be done where it makes sense. It doesn't always make sense. Games, to your point are heavily multithreaded. Things like music, physics, graphics code, disk access, net code, etc.; are often all broken into threads. However, those threads may not be further subdivided so it can give the appearance of single threading where there's slow downs when a lot happens all at once and overall CPU usage is relatively low. Of course, synchronization across processors back then was a huge pain which is why a lot of old games struggle running on SMP or multicore machines. So it was all effectively single threaded even if there were some parts split into other threads. I think this video really did a great job at explaining a lot of these complexities without going into the mindbending realities of how threading is accomplished and controlled. Thank you for making this!
holy crap, what an approachable but in depth explanation of multithreading vs. multitasking. Could even be it's own little short!
I had incredibly early broadband in 1997 with an INSANE speed of 300 kBit/sec. Back then I pulled ethernet cable to share the connection with my brother through a wall and just twisted the appropriate copper wires back together. Even piggy-backed ISDN over the unused pairs. Some scotch tape, tip top! You young people with your "crimping". pfft!
This brings back soooooo many memories, I worked in IT back then, so I remember working on the HUGE, weird servers like that big boy you showed, as well as the "small" servers like that Dell. I remember running NT 4, 98, Red Hat, BeOS, AND OS/2 all on the same machine just to prove to myself it could be done. I remember struggling to get games to run under NT because of its limited support for DirectX. And yes, I remember using jumpers and DIP switches to set up clock speeds, IRQs, and the like, as well as working with a rat's nest of cables and trying to route everything around those damn ribbon cables that took up so much room, and forcing reluctant Molex connectors to plug in because I am the old.
I love your channel and I love to see all of the old hardware that I remember from my youth. Thanks for resparking my memories.
Yes, the BP6 was the first and only Socket 370 MENDOCHINO SMP System...
Pentium 3 weren't supported...
THough there were earlier Dual Slot1 BOards, such as the ASUS P2B-D or others, but nothing that was made for Celerons.
This board is a couple of years before my time, I started building in 2003 and I went with an Abit NF7-S, it had the same asthetics as this board. Awesome video man, above and beyond.
I too have been having problems with PCI sound cards. Also just later 90's sound cards in gereral. They all work prettey much perfactly in Windows 98, but then I switch to DOS and the only soundcard that does anything my SB AWE64. I use usisound in DOS btw.
a lot of later pci soundcards only work in dos with weird drivers, i've had good luck with emu10k1 based sb live values out of old dells but it's been a while and mostly i just use my awe64's
I am *so* looking forward to part two!
Never stop being yourself and going completely overboard in your research, it's so amazing as I wonder in very similar ways =)
You had three different types of people using those things back in the day and only two did actually benefit from it: Full on Linux nerds (Compile jobs etc), Game-server hosts and those with too much money that just wanted "the best". Kinda like the dorks that went with Pentium Pro and discovered that windows 9x and its apps don't run well due to slow 16-bit support in that processor 😂
You know personally any of those "dorks" or it's just some urban legend? The Pentium Pro was a professional grade processor and unavailable for normal people. No one in my circle had it back then.
@@lordwiadro83 even worse. I were one 😂 They really, REALLY struggle with 16-bit code so w9x runs like molasses. I think you can even find benchmarks and videos these days if you want to see for yourself
Those slides and animations in the SMP pros and cons section are incredibly slick!
59:21 Penium4 era Celerons really ruined the whole line. Imagine having a ~800Mhz P3-based Celeron, spending a good amount of money to "upgrade" to a ~2000MHz P4-based one a couple years later and finding out that programs don't run that much faster. What a fall from grace that was the line that had greats like the Celeron 300A.
I got super excited when I saw that Dell Precision! As a kid, my friend's dad offered me an old server and I gladly accepted. It was a Dell Precison that looked exactly like that one on the outside, but was slightly different on the inside. On mine, the rambus and cpu locations were swapped. The rambus was installed on two daughterboards that went in together and had these black wing handles that flipped out for removal. The CPUs were two black hunks of heatsink metal with a holographic Pentium Xeon sticker on each. It was the coolest thing I owned for several years.
Never did much with it, I don't think the 10,000 rpm SCSI hard drive worked. I got Ubuntu running off a disc, but I didn't get far as a kid with limited internet access. Many years later I stripped it down and gave some of the parts away.
Thanks for reminding me of a great childhood nerd memory.
I've always liked NT4, but W2K was the really good OS. Most people around me abandoned Win98 for W2K. No one used ME, some used 98SE.
The overclocking by far was the reason you got this setup, and why it became legend. Also had a BP6 with a pair of Celeron 400s (and GeForce 256) in 99, and I can't think of another CPU or even computer component over the last 30 years where you could not only get them at a semi affordable price, but increase their throughput by 70%+ via some small tweak in the BIOS (and a pair of FEP32 coolers), and at the very early stage when software was starting to support multi-process, get two of that same crazy value to work together. Those that were able to pick up a BP6 and a pair of Celeron 300As got just an incredible amount of value, I just don't think that has been seen since (would love to hear of other examples). And then Win2k became easily.. accessible and was still able to game, so was another win for this setup.
To give people a modern equivalent for how much value these offered would be similar now to getting a being able to buy the cheapest Ryzen 3 and having a software menu to switch it to a 7800X3D for gaming, and a 7950X for productivity. When things like that come around, they become legend, because it is just so rare.
Oh Abit. I had a BX6 back in the day and it was a beast. Such great products made by an absolutely incompetently managed company that drove itself into the ground almost as fast as it grew.
BX6 + Celeron 300a was such a killer combo
@@meramsey It was indeed! I worked at a computer store at the time and we sold that exact combo to a bunch of enthusiasts. We kept telling the owner he should make a bundle out of it.
I love watching videos about hardware I will never own, see, work with, or want to have anything to do with. LOL!
Seriously, great video, good job!
Man. It's like you made this video about my evil twin.
I ran the Asus dual slot 1 P2B-DS board back then.
Bought it with a couple of Celeron 300 and had to run the extra traces and drill out the back of the package to make them work.
Then moved to dual P2 350 overclocked to 412Mhz.
Then from that to P3 850.
Ran SCSI for my optical drives and a 6 channel Promise IDE raid controller.
3Com NIC.
SoundBlaster AWE64 Gold for sound card.
Matrox Millennium 2 for 2D graphics and Orchid Righteous 2 3DFX for 3D until I upgraded.
I was literally the dude at the LAN parties you were talking about.
And yes. It was all under NT5/Windows 2000.
Didn't upgrade until games insisted on Windows XP.
This was an epic trip down memory lane.
And yes I have most of that hardware here still :)
I had that same rig, minus the modified Celerons! Bought at an auction because my boss wanted the ATX case and power supply, I got to keep the case and the motherboard with dual Pentium II 400's, which was a huge step up from my overclocked Pentium MMX (200 running at 250) once I got a power supply (re-wired from an Apple, if I remember right) and eventually a case. That was my first Linux machine, since I didn't have access to Windows 2000 and Linux Journal came with distros, I could build an SMP kernel with optimized audio processing.
I believe that the OS bouncing a process between two CPUs (or cores in a multi-core CPU) is meant to spread out the thermal load.
OS developers couldn't care less about that. It will switch task to whatever core is available if task was paused and a different core becomes available before the last one becomes available again. When it comes to performance you don't want any thread to change core's ever unless absolutely necessary. Spreading thermal load is purely by accident.
@@araarathisyomama787 modern OS schedulers tend to switch to whichever CPU core has the lowest utilisation rather than whichever is the first-available in a round-robin type way. Plus of course with modern Big-little architectures, the schedulers also try to keep them on the same type of core even when they are switching around.
Oh man, I went to college in the fall of '99, and a guy on the 5th floor of my dorm had a BP6 with two 366 Celerons overclocked to 500mhz, and he was the BADDEST DUDE on the block. I thought I was cool with my K6-2 450, and there was another guy on the 5th floor that had a slot1 P3 550, but we were all just bugs on his windshield! I had always heard that one of the things about the BP6 was that you weren't supposed to be able to run Celerons in SMP, but this board let you (hence the use of "unmodified Celerons" in the Wikipedia first paragraph). And there was something about a "pencil trick" to unlock the potential of Celerons in this era too. It's just ... fuzzy in the mists of time, and I guess I'll have to wait for part 2 to jog my memory.
53:47 The BP6 didn't have an APIC, so interrupts were still "slow". DMA would have helped, but DMA was still poorly supported, even in 1999. Even so, without an APIC it's a real limitation. This means hardware I/O would have been a real kicker for SMP efficiency, regardless of the caching in those two celeries. Celery chips were also starved for cache compared to what we would consider normal for SMP chips now, or even around that time. Still, that board made for an awesome budget linux server. Ahh, good times.
Every intel CPU since Pentium 75 has APIC. You also seem confused about DMA? DMA was supported since very first IBM PC in 1981.
@@rasz Celery chips back then didn't have a local APIC, so in the end it was essentially like having a legacy PIC.
You're correct that DMA existed long before the BP6, but support both in hardware and software was often flakey, with the perfect example being that HighPoint controller. Its DMA support is what gave it the bad reputation. There were plenty of examples of disk controllers back then were if certain DMA modes were enabled, performance and reliability would decrease, like UDMA/66 on that HighPoint controller. In these cases, a lot of controllers dropped back to PIO as a safe fallback, rather than a lesser DMA mode, which resulted in high CPU usage.
@@benespection Local APIC was there since Pentium 75 (IA32_APIC_BASE).
UDMA is not the same thing as DMA, its only sharing the name.
@@rasz UDMA for an ATA controller requires advanced DMA modes on the motherboard for bursting, so it does it with bus mastering, double-buffering, pipelining, double data rate, etc. but I guess you knew that. Everyone back in the day. It's still DMA (direct memory access) just not the way it would be done via an Intel 8237, but I guess you knew that and you're just being argumentative for the sake of pedantry.
The Mendocino core celery chips definitely did not have a local APIC, or at least there was no way to activate it. This was surely part of Intel's cost-cutting measures back then. Later celery chips surely had a local APIC, probably when they were updated to be based on the Pentium-4 architecture, but none of those can be used on a BP6. The early celery chips were super cut down.
The other big cut from the Pentium line in creating the celery chips was the L2 cache, although it was internal and ran at CPU speed, so... meh. There was also no L3 cache, they didn't support ECC RAM, they had a much lower FSB speed, and they had a crippled SIMD instruction set. It's all fairly obvious given the price point they were going for.
Without a local APIC, windows NT or 2k, or Linux, wasn't able to properly control the affinity of interrupts. If I remember correctly, the BP6 did have an I/O APIC but without support in the CPU it's moot since everything would have to go through the legacy PIC anyway.
With the legacy PIC, the interrupts couldn't be smartly routed - the OS cannot control the affinity - so in heavy I/O, one CPU may be hammered with interrupt requests, especially if the storage controller is crappy and keeps dropping down to PIO mode.
I ran a couple of these boards as servers on the cheap and tried to squeeze every last bit of out them, which taught me a lot about the right way and wrong way to do SMP. For certain loads, the dual-CPU isn't much better than a single-CPU, but it was a very good board for running websites on Apache with CGI scripts, so long as the network I/O wasn't too heavy.
This is 25 year old knowledge, and I think it doesn't matter what I say to you, as you will continue to argue :)
@benespection Yes this is 25 year old knowledge so im surprised you keep arguing about it :)
>UDMA for an ATA controller requires advanced DMA modes on the motherboard for bursting
all in the Chipset and fully supported on 440BX/ICH.
>double data rate
no DDR involved in UDMA
> Mendocino core celery chips definitely did not have a local APIC
Linux bootlog:
>>mapped APIC to ffffd000 (010c1000)
>>Initializing CPU#0
>>CPU: Intel Celeron (Mendocino) stepping 05
There is a problem with interrupts on this board, but its this board specific and has nothing to do with Celerons. You can find old thread on Linux mailing list titled "The buggy APIC of the Abit BP6" for more details. Something about errors under higher interrupt load and missing IPI messages.
>they didn't support ECC RAM
because CPUs at the time didnt support any ram :) Ram controller was in the chipset, BP6 does support ECC ram.
>crippled SIMD instruction set
Same MMX as normal P2 of same generation :p
Even running with NOAPIC you only lose performance in server workloads.
What a fantastic idea for a new series. Thank you for all your videos Gravis. I really appreciate the unique lens you bring to these topics. Can’t wait for part II.
You didn't zip tie ribbon cables. You did the origami fold.
Exactly! And the 80 wire IDE cables starting to come with ATA/66+ were nice and stiff for keeping folds. Keep in place with double sided sticky pads or masking tape, and don't think how much of a mess that makes when you crack it open again a few years down the road.
It's been 25 years since I built that BP6 /w 2 Celeron 300As? They all dropped in and ran at 450... OMG Leo! I listen to TWIT every week! Thanks for the nostalgia!
Awwww, 24-core is quaint? My threadripper cried hearing that. Or that may be a cooler leak 😅.
Whoever gets mad at your explanations going longer than 10 minutes, they're wrong. Love listening to the long winded and rabbit hole filled explanations.
I miss the old days when you could see the circuit boards. New computers are just a bunch of painted shrouds and LEDs.
What?