Fastest CPU? PC or Mainframe?
- Published Jun 2, 2024
- PC vs Mainframe, Pi vs Pentium, Apple Watch vs the Apollo Guidance Computer, and many more direct comparisons in MIPS (Millions of Instructions per Second).
For information on my book, "Secrets of the Autistic Millionaire":
amzn.to/3diQILq
Discord Chat w/ Myself and Subscribers: / discord
Book on Apollo Guidance Computer: amzn.to/3qWLCLu
MIPS LEADERBOARD in Chronological Order (Chapter List):
00:00 - Introduction and Questions to be Answered
01:00 - Measurements (MIPS == Million Instructions Per Second, AGC = 0.04 MIPS)
01:30 - The Apollo Guidance Computer
02:11 - The Data Center and "Big Iron"
02:23 - The University of What'd He Just Say?
03:00 - Intern Day
04:24 - Fun and Cool Programmers
04:45 - Satan as a Programmer courtesy of The Simpsons
05:30 - Halon Dumps. Heh. He said dumps.
06:06 - Bulletproof Windows
07:03 - IBM 360 Model 195 - 3 MIPS
07:23 - 1969 AGC (2.048MHz) : 0.04 MIPS
08:42 - 1951 Univac 1 (2.25MHz) : 0.002 MIPS
09:10 - 1961 IBM 7030 (?MHz) : 1.2 MIPS
09:37 - 1977 DEC VAX 11/780 (5MHz) : 1 MIPS
09:42 - 1975 MOS 6502 (1MHz) : 0.45 MIPS
10:03 - 1965 CDC 6600 (10MHz) : 10 MIPS
10:10 - 1988 Motorola 68020 (16MHz) : 10 MIPS
10:15 - 1993 Intel 486 (66MHz) : 25 MIPS
10:19 - 1991 Intel 860 (50MHz) : 50 MIPS
11:00 - 1994 Mips R4400 (150MHz) : 85 MIPS
11:14 - 1994 Motorola 68060 (75MHz) : 110 MIPS
11:28 - 1994 Intel Pentium (100MHz) : 188 MIPS
12:27 - 1994 PowerPC 601 (80MHz) : 157 MIPS
12:35 - 1995 PowerPC 603 (133MHz) : 188 MIPS
12:42 - 1996 PowerPC 603ev (300MHz) : 423 MIPS
12:45 - 1996 Intel Pentium Pro (200MHz) : 541 MIPS
12:55 - 2011 ARM Cortex A5 (800MHz) : 1'256 MIPS
13:32 - 1999 Intel Pentium 3 (600MHz) : 2'054 MIPS
13:42 - 2014 ARM A53 RasPi (1.2GHz) : 5'000 MIPS
14:00 - 2003 Intel Pentium 4 Extreme (3.2GHz) : 10'000 MIPS
14:05 - 2006 AMD Athlon FX-60 (2.6GHz) : 20'000 MIPS
14:10 - 2006 Intel Core 2 Extreme (2.6GHz) : 50'000 MIPS
14:15 - 2013 Intel i7 4770K (4GHz) : 133'000 MIPS
15:01 - 2020 AMD 3990X Threadripper (4.35GHz) : 2'356'230 MIPS
16:52 - 2020 IBM z14 Mainframe, 190 cores, 40 TB RAM (5200MHz)
Gear and Equipment List:
Sony FX3: amzn.to/3EiYBg1
Sigma 35mm Art Lens: amzn.to/31sfJS1
ElectroVoice Mic: amzn.to/3EevlXE
Glide Gear Prompter: amzn.to/3oeO5AQ
Atomos Ninja V Recorder: amzn.to/3xQ4YVr
USB-C to SATA: amzn.to/3xOYZ3j
4TB Samsung SSD: amzn.to/3xPnHRl
Reticam MT-01 Tripod: amzn.to/3xRzCOy
Aputure Light: amzn.to/3EnvKHm
Aputure Light Dome: amzn.to/31qN0g5
Errata:
- The z14 can be configured with a maximum of 170 cores, not 190.
- Halon is inert, but not a noble gas. It's a molecule. I am forshamed.
- Halon will not cause spontaneous male enhancement as claimed.
Dave is such a character sometimes. 😆 the thing at the end about demoing an IBM mainframe was very funny to me especially when he did the call me thing.
I too hope the silent 'Call Me' works for Dave! Wasn't there a dude who also purchased an ibm mainframe from a university and reassembled it in his parents' basement. Maybe that might work for some high performance trials if ibm doesn't call.
He’s a living legend! Love listening to him!
Haswell was what I bought from Silicon Lottery. Skylake, even 200MHz slower and with an IPC loss (from pipeline optimization toward clocks, I'm guessing), had a decent memory controller, so you could run very highly clocked DDR4 alongside it. It ended up being overall faster even for single-threaded tasks. With CrossFire R9 390X cards, the draw-call bottlenecks in Crysis 3 and The Witcher 3 were obvious. I could have hit higher speeds on the core, but a 4.6GHz core paired with a 4.6GHz memory controller was partly why Skylake was faster at all.
Other than that, Haswell is still very good. I got really tired of the 8MB cache, though; even 8th gen's 10MB isn't really enough for my tastes. Sad to hear Zen 3D is going to be unobtainable because of more crypto stupidity.
Also, NetBurst's Extreme Edition was more sensitive to context switching than probably any other CPU people actually have.
Cell's 1:1 ratio system-wide at modern lithographies, with AI writing in-order software parallelization, has entered the chat. Shortly.
Dave, you are talking to me. I'm 75, started coding on an IBM 650, and I still love to code. I worked in disc drive and IC development, and I'm flabbergasted by the development I've seen. Love your presentation, research, and your memories.
My high school had an 1130 when I started... though it had just been decommissioned and was being removed and replaced with Commodore SuperPETs!
Is it true that in the big IBM disk packs that the heads could travel faster than the speed of sound, which is why they were run as a vacuum chamber? Or was it just to reduce drag? Why not use Helium instead of a vacuum? Always curious about that...
Indeed. Hell, even small microcontrollers have some insane capabilities now. For example, a Teensy 4.1, which has an ARM Cortex-M7 core at 600MHz with 1MB RAM and 8MB flash, plus SD card support. All of this on a tiny little 50mm x 20mm board.
@@shadow7037932 The Teensy can run up to 1GHz, and the "better" chip from the same manufacturer has a 1GHz main core and a 400 or 600MHz second core. The Teensy can emulate 70ns EEPROM from instructions alone - ask me how I know.
@@DavesGarage Correct me if I'm wrong, but even a relatively light inert gas like helium would impart some level of friction and heat on the disks, I would think. In addition, although not the best airfoil, I'm thinking any flat surface rotating at that velocity would have to generate some level of lift, and that would put strain over time on the bearing the drive rotates on.
@@currentsitguy Actually, the newest drives (greater than a few terabytes) are helium-filled at a significant fraction of atmospheric pressure; normal air caused vibrations that hampered bit density. Helium, due to its noble properties, causes much less vibration (and less heat) in the head, making it possible to have the head closer to the surface and deposit much smaller bits into it. A vacuum in consumer drives is not possible.
Hi, I own an IBM mainframe from late 1990. Actually, my boss gifted it to me when I started my apprenticeship. He even gave me the proper terminal, physical keys to unlock the beast, and 17 drives (each almost the size of my head). What a great guy.
Hi Dave, a fan of your channel and an IBM mainframe guy here - maybe you already know, but just to make this clear: the biggest difference between a PC and a mainframe in terms of performance is not the processor speed, but actually the I/O throughput and efficiency... so while the processor speed alone in terms of instructions per second may be just a small multiple of a high-core count desktop CPU, it will be able to do the work of a few hundred (at least) such machines in terms of number of transactions processed - that's where one of its main strength lie, besides security, redundancy, etc.
Indeed, it's a bit like comparing a giraffe and a horse in a race. They're different animals! But I've always been curious about the raw CPU perf, esp. since we did the PRIMES benchmark project and have 50+ languages now! If you know where I could get access to time and space on a powerful z14 to run the test, that'd be cool! If so, please email me!
@@DavesGarage Dave, I am a zSeries emulator developer for IBM. I sent you an email with a link to the emulators for hobby developers. I used to be an I/O developer for the z/OS operating system, and I have to say development for mainframes is like nothing else. The assembler manual is over 1000 pages; if you get into it, good luck.
I do have to say, I run large compiles on both Intel and z14 systems, and z is about 30 times faster. It is no contest, but compilations are very I/O heavy compared to PRIMES benchmarks.
That was once the case but the micros have caught up, maybe not the desktop systems but the server systems out there today can match the performance of mainframes in the realm of I/O activity. Both in I/Os per second and in data transfer throughput, the massive advantage the mainframes once had in that area has been slowly but surely whittled away over the years.
It's a shame in some ways and not such a shame in others, there's still an awful lot of code that'll still best be run on a mainframe but the number of mainframes still in use is still dropping as more and more companies finish moving all their systems to the client/server model that prevails today.
“Throughput” is really the point. In particular, that comes at the expense of minimizing latency. That made mainframes great for batch processing, not so good for interactive operation, like the DEC minicomputer timesharing systems could do.
@@lawrencedoliveiro9104 Actually it also depends on the architecture of the system, with a working hardware interrupt system (something the 360 and later line lacked) it is possible to do very time sensitive work while also processing a large number of batch jobs in the background.
Much of my career in computing was dealing with a line of mainframes (Burroughs Medium Systems aka Unisys V Series) that had such a hardware interrupt. One low end model of that line of mainframes could run up to ten high speed Reader/Sorters for checks at the same time. Those reader sorters would read a check at the OCR window and then the mainframe would have to respond within a couple hundred milliseconds to select which sort pocket the check would be dropped into. A high end 360 had trouble keeping up with one such reader sorter because it lacked the hardware interrupt on I/O complete. These days of course the reader sorters are almost extinct as check imaging has replaced them but even before that happened newer systems that had built in processing (micro computers :) of course) made it possible for systems like the 360 to handle such equipment much better.
The company I worked for didn't use their systems for check processing but did develop them using some Mini computer Front End processors (HP 1000 F Series) ultimately developed a large network (pre-internet) of CRT terminals, dot matrix printers, high speeds line printers, label printers and such for their warehousing business and I was a part of their growth from a 36 million dollar (sales) a year company to a 4 billion dollar a year company (their size when I left them). We had to remain highly responsive for all those remote terminals plus drive all those printers and it helped a lot that the mainframes we used could remain highly responsive even at night when the large scale batch processing took place since night was also when most of the printing and the warehouse picking activity took place.
Those were of course dead end mainframes and I have an emulator for them that is faster under emulation on my PC than those mainframes ever managed under their real hardware. So many things have changed since then.
I’m 85 and started writing code in the 1960s. I’ve written code for the NCR Century 100, the GE PAC4000 series, the GE PAC30, and the PDP-11 in the late 1960s and early 1970s. I was at Motorola when Chuck Peddle led the team that designed the Motorola 6800. He recruited a coworker of mine to write a cross assembler for the 6800 on a PDP-11. Most of my programming was done in assembly language. I now have a dozen Raspberry Pi computers, and I’m learning to write Python scripts.
I've heard that Assembly is really difficult to code in. I guess you have a leg up, however, if you've pretty much always worked in Assembly. Kudos to you for sticking with programming though :D. I remember regularly having conversations with a guy who used to write Fortran programs back in the day, he was in his 90s when I knew him, lovely and cool old guy :). Back then I mainly programmed in C++, nowadays I work mostly in C# and am currently learning Powershell - I'm not overly fond of it but the previous developer wrote everything in Powershell, so... I'm planning to move his scripts at least over into Python eventually, as Powershell seems to be a little bit of everything but then nothing like anything, it's the strangest language I've come across thus far.
Noble gas is a term used specifically for the group 18 elements, FYI! Halon in most fire-extinguishing applications was another name for tetrachloromethane. I figured you'd want to know, not meaning to nitpick :)
Oh geez. TCM is not only super, super bad for the ozone layer, it's also incredibly carcinogenic in mammals.
@@davidemelia6296 yeah. Used to use a lot of it when I worked in a research lab, it's still ubiquitous in the world of chemistry as a solvent. Good thing we have fume hoods 😅
We did not have Halon where I worked. We had Cardox which is basically CO2. Not any safer for humans. When the alarm sounded, you had no more than 30 seconds to get out.
@@jackpatteeuw9244 With the difference being that with CO2 you would get shortness of breath, I suppose
@@mevideymNo. You would pass out and die, very quickly.
08:42 - 1951 Univac 1 (2.25MHz) : 0.002 MIPS
09:12 - 1961 IBM 7030 (?MHz) : 1.2 MIPS
10:02 - 1965 CDC 6600 (10MHz) : 10 MIPS
15:27 - 1969 AGC (2.048MHz) : 0.04 MIPS
09:40 - 1975 MOS 6502 (1MHz) : 0.45 MIPS
09:33 - 1977 DEC VAX 11/780 (5MHz) : 1 MIPS
10:09 - 1988 Motorola 68020 (16MHz) : 10 MIPS
10:19 - 1991 Intel 860 (50MHz) : 50 MIPS
10:15 - 1993 Intel 486 (66MHz) : 25 MIPS
11:01 - 1994 Mips R4400 (150MHz) : 85 MIPS
11:13 - 1994 Motorola 68060 (75MHz) : 110 MIPS
11:27 - 1994 Intel Pentium (100MHz) : 188 MIPS
12:28 - 1994 PowerPC 601 (80MHz) : 157 MIPS
12:35 - 1995 PowerPC 603 (133MHz) : 188 MIPS
12:42 - 1996 PowerPC 603ev (300MHz) : 423 MIPS
12:49 - 1996 Intel Pentium Pro (200MHz) : 541 MIPS
13:33 - 1999 Intel Pentium 3 (600MHz) : 2'054 MIPS
13:58 - 2003 Intel Pentium 4 Extreme (3.2GHz) : 10'000 MIPS
14:05 - 2006 AMD Athlon FX-60 (2.6GHz) : 20'000 MIPS
14:09 - 2006 Intel Core 2 Extreme (2.6GHz) : 50'000 MIPS
12:57 - 2011 ARM Cortex A5 (800MHz) : 1'256 MIPS
14:15 - 2013 Intel i7 4770K (4GHz) : 133'000 MIPS
13:42 - 2014 ARM A53 RasPi (1.2GHz) : 5'000 MIPS
13:22 - 2014 Apple Watch S1 (520MHz) : 1'000 MIPS
15:03 - 2020 AMD 3990X Threadripper (4.35GHz) : 2'356'230 MIPS
Now add Times and I'll make it the chapter list :)
@@DavesGarage The timestamps are not chronological, but I think it makes more sense for the list to be ordered by the year of the CPU/system...
I agree to an extent with both Nico and Dave. We all have our quirks. :)
Thanks for the list! I've sorted it chronologically and added it to the description.
Thank you Nico!
Your list saved me SO much time.
There's something genuinely cool about mainframes, all that power in one package makes it feel like "the ultimate computer".
If they didn't cost an arm and a leg to own and run, I'd own a few.
Kind of remind me of diesel engines.
My work had a “minicomputer”; engineers had an account, and our department was billed according to the amount of CPU time we used. Our lead developer was doing multi-variable control system simulations and was our biggest user, but he was nothing compared to the one user who was using orders of magnitude more CPU time. This user was a manager's secretary and was using the electronic office.
What caused the "electronic office" to consume so much CPU?
Don’t know exactly; at the time I thought it was just that it was in use all the time, where our jobs were more like batch jobs. Our workstations were in the darkened and air-conditioned “computer room”, where the secretary's workstation was in their office.
@@gregx5096 The electronic office consumed more CPU mostly in polling between keypresses, so as to give a more typewriter-like feel to the user. That's unlike engineering, where you submit a batch job, which does use 100% of the CPU, but only for a small slice of the overall time, and then releases it. Contrast that with the electronic office, which needs hundreds of context switches a second to process any input and generate output. So a hundred times a second, the single CPU has to context switch, allocate the time spent in the context switch to a user account, clear the memory by writing it to disk storage, bring in the stored state of the electronic office user, write it to main memory, restore all the CPU pointers, and then jump to the next instruction it was to execute before the scheduler triggered the task-switch interrupt.
Thus, 100 times a second: perhaps 1 millisecond of task swapping, a user time of perhaps another millisecond to process the character typed and update the screen, then another millisecond of swapping back to the task scheduler to run the next multiuser job. Nearly a third of user time spent in this one application. Meanwhile, the compute job is a single job, likely flagged as low priority, so it runs during the night, when the multiuser system is not running any real-time jobs needing that level of time resolution. The scheduler can run it for 10 seconds at a time, switch back, see there is nothing else to run, and give it another 10 seconds of run time until complete. The difference is multitasking with high resolution versus big chunks of time, as likely the overnight jobs are all large data jobs, but they can all run at any time, with no need for precise scheduling.
I worked in a mainframe environment in the 90s and our department was billed for CPU use - but not memory. Standard practice was to declare all variables as static (or 'FIXED' in PL/I) so the CPU didn't have to waste cycles allocating memory!
@@SeanBZA born in 1997 here. While I had already understood what you mentioned was a concept, it is honestly blowing my mind thinking about such a massive load from polling for KB inputs compared to engineering work! I have been spoiled by dedicated silicon for polling using USB protocol my whole life... well, nearly my whole life. When I read PS/2 my brain doesn't fully commit to accepting the person who wrote it mistyped Sony's (best) gaming console, but it tries to.
I don't think I am thankful for that enough.
Great topic. Back in the 80s, I programmed Raytheon and VAX 11/750 & 780 systems in assembly. My first PC was a Heathkit, and I programmed it in assembly to communicate serially with the Raytheon. With it, I was able to replace punch cards for inputs with ASCII text files. It was so cool.
Great idea, you should publish an article about it.
I worked on IBM 360s at Delta. We had 6 of them. One was for HR; the rest were online. True or not, we were told that ours was the world's largest non-governmental computer operation. Out of a few hundred programmers, almost all used Delta's proprietary version of PL-1. I was a core programmer. There were only 22 of us. We coded in IBM's version of assembly language. Core dumps were 500 pages of hexadecimal. Addition and subtraction were no problem. Multiplication was slow, but division was hated. I think it was 1982 when I became the manager of a Wang, Victor, and HP dealership. Though the shuttles had computing redundancy, the crews were issued preproduction versions of the HP 41C. If all else failed, they were supposed to be able to make the return solely with that programmable calculator. They did have the added feature of a highly accurate clock.
You missed the Alpha CPUs, introduced in 1992. They outperformed MIPS (that was the whole point for DEC). The most beautiful CPU ever created, with pipelining, branch prediction, and out-of-order processing, which are now commonplace.
I loved that CPU, and the two GS140s we had running with Tru64 and TruCluster. Clustering that was real clustering like DEC intended it. Migrating running processes to different nodes.
Wow I cannot even think about the kind of reliability with something like that. To be honest this is the first time I've heard of something like that and it is crazy to think about how to even migrate running processes to other nodes.
I worked for Digital, back in the days when they changed to Windows on Intel.
Mid-90s, I was installing WinNT on both Intel and Alpha machines. Mostly DEC series, similar hardware specs, just with either Intel or Alpha CPUs. I also loved the Alpha CPUs. They worked really "relaxed"; especially under heavy disk I/O, the read/write disk sound was like comforting music.
@@admrotob They basically took the ideas of VMS clustering and brought them to their Unix. Now, it did require expensive Memory Channel boards with fiber links to move process memory from one node to the next if processes had to be relocated cross-system, so there was a little hiccup and then it churned on where it was; caa_relocate was the command. I'd move our background calculation processes all to the same machine, do an upgrade of our code, move them to the upgraded machine, and upgrade the other node. Next time that process was called, it was the new version. No downtime and no interruption of the long-running fluid sims we did to calculate viral spread in stables, so the stables could be equipped with the right ventilation to evacuate contaminated air and reduce viral infections in animals.
We generated a lot of extra business around the cattle farming industry by centralizing this data in 1997. I guess it was my best ever business idea, but we never got rich off it as engineers :)
And that’s when 24 year old me realized in 1997 that data is the future, not necessarily software.
Pfff, Tru64... yeah, had that, but we mainly ran OpenVMS, a true cluster - you could take an axe and randomly hack things in the computer room and it would continue to run. There was even a VMS cluster up for like 19 years somewhere, from memory (admittedly, you just need one node up at any time to keep that stat).
My first actual programming job was using Fortran (WATFIV) on an IBM 360. I feel so old.
Thanks for the trip down memory lane, Dave ;)
Me: Fortran IV on a Xerox Sigma 9 and VAX-11. Cheers.
You put a lot of work into your videos. Thank you, Dave. All of them are very interesting.
Very entertaining - and educational! Thanks! You brought back lots of memories of my "ute". This also reminds me of the Pyramid 98x, which used several mainframe techniques, from independent I/O controllers for 1/4" tape and Winchester platters to using 10ms cache memory as main memory. Again, thanks for the trip down memory lane!
I still recall upgrading the old disk cabinet on an S360 in the 1990's. The guy that recycled it got close to a grand for the scrap metal alone. There's gold in them thar drives!
Swapping for a newer 360, the mainframe was so embedded into the building's power that they ended up having to shut power off to the five story office building at the street to remove and replace it. It was the equivalent of a liver transplant for a business.
We lowly Netware engineers and admins were never given such luxuries or support, yet no one could connect to the S360 via our Netware SNA gateway without us.
What still impresses me is how much computing access could be had with so little bandwidth. The satellite offices (where the actual products were manufactured) averaged about 200 terminals, yet they shared a 19.2k tunnel.
Text only transmissions don't require much bandwidth. For as slow as old hardware was, the programs that ran on them were far simpler and ran very well.
@@tstager1978 That's just it, they weren't slow. They actually produced the query results on a screen about as rapidly as, and occasionally more so than, one gets today. It just wasn't wrapped in all of the graphics.
@@wisenber exactly!
So, funny story. I worked on decommissioning a small data center running IBM Big Iron for a three-letter agency as they moved to AWS GovCloud. All that hardware was only a few years old and basically got thrown out, with all of the storage (SSDs and HDDs) securely and destructively destroyed. It was really sad to see all this hardware getting thrown out, and I couldn't take anything home to play around with :(
Damn that sucks. Good use of our hard-earned money though /S
It really is a shame there doesn't seem to be any incentive to cleanse all the storage & auction off the hardware to interested buyers.
Is it even an upgrade? You'd think that the three letter agencies would not compromise when it comes to owning your own data. Now they have to trust nothing goes wrong with Big Jeff's cloud.
@@InnuendoXP That's because the government wastes money! That's really all they do!
@@InnuendoXP that would be illegal. You don't have any experience with classified work, do you 😂
Dave, that "Halon abort" button is to _prevent_ dumping Halon, not to cause it. IME most if not all machine room panic buttons are red, and must be pulled to activate. This is so an unintentional bump doesn't set it off. Pulling the panic button does three things: 1. it causes an emergency stop of the computer, 2. it activates the Halon system, and 3. it activates the building fire alarm. It's all done by one switch because better safe and simple than sorry. Mainframes typically have exotic power requirements, like 400 Hz 3-phase AC, not unlike what aircraft use. Utility power is converted to 400 Hz by a rotary converter, a motor attached to a generator. It's big and heavy, and because it spins, if you see one smoking or vibrating, shutting that down is really important. No matter what, removing power is Step 1.
It's possible that just removing power will fix the problem, but if everyone runs from the building, nobody will be left to figure that out. For that reason, the fire suppression system is set by default to go off after a predetermined time to allow evacuation and reassessment. After the initial panic, the computer operator may decide to halt the automated process. Think about it, if the fire retardant is discharged, there's no more left in case of a future fire, so the computer can't be turned back on and the computer room can't be occupied until the fire equipment is refurbished and re-certified by local authorities. If there's no fire, you'd want to avoid that. No, Halon doesn't "suck the air out of the room" but it's not 100% safe to ingest either.
The whole building may or may not be evacuated, depending on a large number of circumstances. In a commercial building there's someone whose job it is to make more complex decisions that automation isn't good at. That "bulletproof glass" may not be to protect the computer from armed intruders as much as it's there so it doesn't spray the computer operator or building engineer with shrapnel when they go to see if the computer really is on fire.
"No, Halon doesn't "suck the air out of the room" but it's not 100% safe to ingest either." Yep, I'd rather be in a room dumping halon than one dumping co2; better chance of survival. If it's CO2, you better be able to find the rescue breather with your eyes closed. :) Of course the best answer is don't dump the system, second best is be somewhere else when someone else dumps it. :) I've worked around both, never been present for a discharge though.
Modern fire suppression systems use inert gas (ProInert, Inergen, other brand names, usually a nitrogen/argon blend) or a modern "successor" to Halon in the form of heptafluoropropane, aka HFC-227ea.
HFC-227ea is non-ozone-depleting, doesn't leave residues on equipment, and doesn't displace or bind to oxygen, so it is "safe" to be in a room with, and like Halon it works by rapidly cooling the space to rob the fire of the energy to sustain itself.
The inert gas systems instead work by reducing the oxygen concentration of the air in the space from approx. 20% to (according to design specifications) below 10%, effectively suffocating the fire. CO2 systems also work on this principle.
CO2 suppression is following Halon off the market, I'd guess because it uses CO2.
I've been in two spaces so far that have had inert gas discharges, along with one CO2, and that total white-out CO2 causes made finding the SCBAs a panic-stricken nearly-lethal nightmare. Inert gas systems, however, are a lot easier to navigate during a discharge, although the light-headedness and headache from the lack of oxygen was not pleasant. And two of the three discharges I personally witnessed were because one of the servicemen servicing the system forgot to disconnect the trigger solenoid from the cylinder rack before commencing the test, the silly twit. In the second instance, said serviceman was me. It was a... rather expensive cock up. I don't know what caused the third (actually, the first) discharge I've witnessed, at that time I was still a network/server field circus tech and was busy racking some servers when the alarms went off warning of imminent gas drop. That was about 15? years ago.
Entertaining experiences I most certainly do not wish to repeat. Especially the CO2 drop, that one still gives me nightmares. I am definitely casting my vote with "be somewhere else when the system discharges".
Fun story: as a young Programmer/Analyst, I had spent all day (starting at 4am!) running the quarterly reports off the printers and delivering to the executive offices. I was tired, we were all chatting in the computer room, and I stupidly leaned against the red thing on the wall. It slid down and the fire alarm went off. I tried to reset it and managed to pull the slide fully off. My boss and colleague started talking about something and ran out of the room. I later learned that they went to hit the Halon cancel to prevent the room from being flooded. Good times. Good times.
i was asking myself if the glass had something to do with fire suppression. really glad i read your comment. thanks for making it.
That's my nightmare: getting stuck in a shelter and having the fire suppression system go off. It's an inert gas; you will die of asphyxiation. I know an ST who was trapped in a shelter because the door handle failed. This was a shared shelter, so there was fencing between the equipment. Fortunately, there was another exit door, but it was on the other side of that fence. He had to cut the wires on the fence one at a time with wire cutters to get out. I'm glad I wasn't working with him; nothing worse than being trapped.
Awesome, Dave. I had many years working on AS/400, iSeries, System i (whatever, lol). I have fond memories of typing away on a 122-key "battleship" buckling-spring keyboard, staring at a 5250 terminal. When an IBM engineer is at the door on a Monday morning saying that our system had contacted him over the weekend to complain about a failed disk, you raised your eyebrows. He came in, pulled the front, located the bay, yanked the disk, slotted in another one, said "all good", and left. Amazing that this was in 1998! Those were the days.
Hi Dave, I stumbled upon your channel about a month ago. I wanted to say thank you for the great content and entertainment. I really appreciate the time you take and I love your garage. Look forward to more.
I have implemented the PrimeSieve in Lattice / SAS C-compiler on my Amiga 4000 with the Blizzard CyberStorm 2 rocking the 68060 at the base 50mhz. Have also implemented the sieve in Pascal, Comal and working on an assembler version - all on the venerable Amiga A4K, because I can.
If interested, I can post numbers. I also have an A1200 with 68030, 68040 and 68060.
As I am a geek of old, I’ve done the same implementations in Borland Turbo Pascal, Turbo C and IBM PC Comal on my Digital Prioris Pentium 90….
So sad, really - AND GREAT FUN! Thank you for all your content. The historic stuff tickles me the most.
Regards
Anders, Denmark.
I saw Comal listed as one of the languages and immediately thought "Dane, this has to be posted by a Dane..."
Well I googled some and to my surprise there are apparently some Comal diehards at schools in the UK. But other than that it seems mostly a Danish affliction.
Now, I do have a bit of a love/hate relationship with the language. Love it for the simplicity and structured elements. Hated it because, when I had to learn it, they refused to admit that it was related to BASIC in any way, shape, or form, which I thought was deeply insincere. But then, at the time, the worst profanity you could use as a programmer was to mention BASIC.
Why do you need these old systems? What do you run on them?
Please take a look at ( github.com/PlummersSoftwareLLC/Primes ). If you can get the primes test in C++ or C to compile on the Amiga, heck, that'd be fun to feature in an episode. It requires fairly modern C++ support though, and I'm not sure what the current compilers are like on the Amiga.
Too bad someone doesn't use Lattice to port gcc!
Always wondered if it was possible to get a 68060 running in a A1200, it was something I dreamt about when I was younger :)
@@lucasrem1870 it’s like a vintage sports car. Sure, you don’t *need* it; almost any modern car is more reliable, gets better mileage, makes less noise, is faster in everyday traffic and spends less time in the shop. But *look* at a dark blue 1962 Ferrari 250 GT Lusso with creamy caramel-colored leather interior, and maybe you’ll appreciate it’s about something else entirely.
One of my all-time favorite mainframe instructions is Move Character Long (MVCL). This instruction uses 2 register pairs. Each register pair contains an address and length of a memory area: one pair for the source string of the move and the other for the target. Notice that the lengths are not required to be the same. This can result in either truncation, when the target is shorter than the source, or filling with a pad character, when the target is longer than the source. One of the registers contains the pad character, which is often either a blank (x'40') or null (x'00').

This instruction runs in a virtual storage environment. It is possible that not only do the source and target span many pages, but the instruction itself might span pages. The code would look like: 1) LOAD R0 with the address of the source, 2) LOAD R1 with the length of the source, 3) LOAD R2 with the address of the target, 4) LOAD R3 with the length of the target. Both lengths are limited to 24 bits, so this instruction cannot be used to move a string longer than 16 megabytes. One of the length registers also contains the pad character; I can no longer remember which register.

It would be interesting to see a benchmark of a z processor versus a PC when moving a 16-megabyte string from a source variable to a target variable. Remember, we are not using a mainframe to create cute web page graphics. We are doing DATA PROCESSING.
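A rough Python model of the truncate/pad semantics described above (the register mechanics and the 24-bit length limit are omitted; the function name and default EBCDIC-blank pad are just illustrative):

```python
def mvcl(source: bytes, target_len: int, pad: int = 0x40) -> bytes:
    """Move `source` into a target area of `target_len` bytes:
    truncate when the target is shorter than the source,
    pad (EBCDIC blank x'40' by default) when it is longer."""
    if target_len <= len(source):
        return source[:target_len]                              # truncation
    return source + bytes([pad]) * (target_len - len(source))   # padding
```

So mvcl(b"HELLO", 3) yields b"HEL", while mvcl(b"HI", 4) yields b"HI" followed by two blank bytes.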
Cool to hear your story on this. Didn't realize you were Canadian. I used to work at the Revenue Canada data center in Ottawa when I went to university. I was just a lowly tape librarian. Most of the tapes were in StorageTek silos (see the movie Eraser for a scene with these), but it was always exciting when the terminal would ask for a reel-to-reel tape. Very interesting how it worked: a vacuum pulled the tape through to the second reel and also tensioned the tape. It would also make cool 'sci-fi' sounds when it did not load properly and errored out. beep-boop-beep-beep-boop... Great channel! Thanks for all the videos!
Dave, I am so glad I came across your channel. I am also autistic (diagnosed as Asperger’s many years ago), and work in the tech field, I truly enjoy your content. Keep up the great work my friend!
I really enjoyed this video. Back in the early '80s I managed a group of system programmers tasked with predicting mainframe IBM computer capacities (3090s as I recall) and DASD (hard disk) requirements. We had IBM PCs and AT&T Dataspeed 40s for tools. Your video brought back some fond memories.
I always heard that the biggest difference between a mainframe and other CPUs wasn't the CPU power but the amount of I/O they could handle. I also saw a YouTube presentation on the Apollo Guidance Computer pointing out that it was a special-purpose computer with 3 processors running the code and comparing the results, so that if 1 processor came up with a different answer it would be ignored in favor of the answer the other 2 gave. So, kind of the 1st fault-tolerant computer, way before Tandem.
Even that wasn't enough. The infamous 1202 the Eagle's AGC kept throwing was an overload error: the amount of data coming in exceeded the 12-word register, causing multiple reboots.
True, and something that makes a big difference between higher end Intel and especially AMD and their lower end offerings too. The high end stuff has a ton more PCIe lanes. Threadripper Pro has 128 PCIe 4.0 lanes, which is nuts; Ryzen CPUs from around the same gen have 24 lanes and Intel often goes with even fewer.
The other big advantage mainframes have is their resiliency. Usually you can replace just about every part in the machine without shutting it off. They often even have redundant processors and memory, doing the same thing at the same time, so if one suddenly goes missing temporarily, it's really not a big deal.
> computer that had 3 processors running the code and would compare the results, so that if 1 processor came up with a different answer it would be ignored and go with the answer the other 2 gave
This is also called triple modular redundancy.
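A minimal sketch of that majority-voting idea (the function name is illustrative):

```python
def vote(a, b, c):
    """Triple modular redundancy: return the value at least two of the
    three redundant results agree on; the odd one out is ignored."""
    if a == b or a == c:
        return a
    if b == c:
        return b
    raise RuntimeError("no majority: double fault, cannot mask")
```

A single faulty unit, say vote(42, 42, 7), is silently outvoted; only a double fault is unrecoverable.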
> I always heard that the biggest difference between a mainframe and other CPUs wasn't the CPU power but the amount of I/O they could handle.
That's what I've read as well, but the claim smells fishy. If mainframes are so great for I/O, how come they're used for banking (low bandwidth requirements) and not for video processing (UA-cam)? It's possible that they're talking about transactional I/O (transactions per second). But even there, modern commodity computers are quite capable; e.g. Linux kernel developer Jens Axboe recently got over 5M IOPS per core with a Ryzen 5950X (this is a consumer-grade platform with limited DDR4 and PCIe bandwidth). The total absence of benchmark results from mainframe manufacturers is telling.
I suspect the only real difference is extreme reliability (in a single coherent machine; if you don't have a hard requirement on consistency you can achieve reliability cheaply using a distributed system on commodity hardware).
@@fat_pigeon Why are they not seen hosting YouTube, then?
1. YouTube is far from a life-or-death service if it glitches, and the nature of all the network bandwidth involved guarantees it will fail sometimes. What YouTube serves out is inherently lossy and failure-resilient in the media encoding, the transport layer, and the media protocol itself, which accounts for hardware limitations and lowers resolution as needed for business purposes. Banks? That's critical money-hose stuff that MUST be correct in all aspects, as the order of operations of transactions has sequential effects on every other transaction that depends on them, not to mention the databases in general. Correctness is (by far!) the most vital thing in that context. For YouTube, "correctness" is so far down the priority list as to be more a concern that they don't miss showing commercials.
2. All the resiliency of mainframe big iron is very large, expensive, power-hungry, and slow compared to the servers YouTube runs on: I'd wager Google uses the cheapest hardware they can in their data centers, and if a box misbehaves badly enough, they rip it out and throw another one into the rack to replace it.
3. YouTube is a major example of where internal consistency, where it's actually required, can be handled on a very relaxed schedule of eventual consistency; likely the biggest and most important thing to keep consistent is the list of available videos for updating the catalog. That a huge number of users wouldn't see all the most recent updates as they replicate across the data centers is a minor inconvenience that end users will, more often than not, never notice. Serving up video streams to so many users is embarrassingly parallel compared to working with bank transactions, and doesn't have many synchronization points. Some video servers fail? No big deal; the data is available on other servers, and the user may not even be aware of failures beyond network slowdowns, even with video servers and balancers going down in huge numbers.
@@strictnonconformist7369 Exactly; that's my point. The primary thing the mainframe brings is extreme reliability; I suspect the mainframe's I/O capability is nothing special.
Just discovered your channel. 3 videos in, I've subscribed. I'm a big tech and PC enthusiast, and I am very, very happy to be able to add you to my list of go-to YouTube channels on the topic.
Awesome work, thank you! :)
Dave, You are really a treasure. I've been around PCs since the MS-DOS days, and worked on mainframes for a few decades. I love the history that you provide about Microsoft software that isn't available anywhere else.
Two things about this video.
1. I wonder how many people caught the "Das Blinkenlights" reference?
2. Breathing Halon at the concentrations dumped into a computer room is not harmful. I went through several Halon dumps during testing after a new computer room build. We were required to stay in there for 10 minutes so they could make sure the Halon concentration did not fall too rapidly. We tried unsuccessfully to light matches to kill time.
You are a great story teller.... Keep up the good work.
Great video - I come from an IBM mainframe background and have worked on everything from a 1620 to the biggest 370. I learned ALC from a Green Card before there was a manual, and the assembler software was on a tape. Years ago, while working on a project for the Royal Bank in England, I had a friend who was heavily into PC architecture and constantly bragged about how fast his PCs were. Finally, I took him over to a mainframe console and showed him the 256 threads running concurrently. He stopped bragging. Anyway, I finished up the project, the first PC-to-mainframe-to-PC using the LU6.2 protocol and post/wait logic, and went home. We are still friends today.
and now, years later, with the 3995WX, PCs can do.... half that many threads
I kept trying to find the price of a mainframe because no site will tell you, but I finally found the price of a z14. Say I had that much money: would it be worth it if I bought one? I frequently run many different types of software on GNU/Linux that needs no graphics to work, only networking, and I am frustrated by the small core counts of my Intel CPUs.
I remember playing with a Pentium Pro 200, man that thing was a BEAST at the time. Next impressive one was a Pentium 4 Northwood that I was running 2.5ghz overclocked, it kept my garage warm in the winter (in Chicago)
I remember someone installing windows 3.1 on a pentium. That thing flew, too.
Paul
Hahaha, you bought that Pentium Pro, lol. Nobody needed them; you found it in a dump store, never used, I guess.
What did you run on them? The Pentiums were faster; the Pro was replaced by the Xeon, which was their enterprise solution that did sell!
I had a dual-proc Pentium Pro 200 on a Tyan board. Hell of a machine in its time.
Fire alarm and fire suppression installer, programmer, and service technician here.
Gaseous fire suppression systems are subject to similar codes as fire alarm systems that designate device locations, heights, and colors -- though back in YOUR day, gaseous suppression was a bit of a wild west.
Still in its infancy, most systems were "hand made", so manual release and abort stations often did not follow any particular standard of color, location, or density.
Halon is not toxic on its own, and it also should not deplete enough oxygen to prevent breathing IF the system is designed correctly. The system engineers use room parameters to figure out how much gas needs to be released to catalytically extinguish the fire. That concentration of gas would make it difficult to breathe, but it would be possible. If too much gas is used, it could lead to suffocation. Furthermore, if the gas HAS reacted catalytically, then byproducts are toxic.
This is what I do for a living, let me know if you need any more details.
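The sizing calculation described above is, in essence, the standard total-flooding formula W = (V/S) x C/(100 - C). A sketch in Python; note the Halon 1301 specific-volume constants here are approximate and purely illustrative:

```python
def halon_mass_lb(room_volume_ft3, temp_f=70.0, design_conc_pct=5.0):
    """Approximate pounds of Halon 1301 needed to reach a design
    concentration in a room, via W = (V / S) * (C / (100 - C)).
    S is the agent's specific vapor volume (ft^3/lb) at temperature;
    the constants below are approximate."""
    s = 2.2062 + 0.005046 * temp_f
    return (room_volume_ft3 / s) * (design_conc_pct / (100.0 - design_conc_pct))
```

For a 10,000 ft^3 room at a 5% design concentration, this works out to roughly 200 lb of agent.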
Would you happen to know if the "bulletproof" glass might be a feature intended to handle the pressure when the halon is released? Or is that not related at all?
@@DavesGarage That would typically be a design decision that I wouldn't be a part of. I work on the control and field implementation side.
What I can say is that rooms do have to be designed to accommodate a gas suppression system; obviously you are going to be rapidly raising the atmospheric pressure in the room as you dump gas into it.
I do know that windows, doors, and any type of floating ceiling are all considered when designing a space that will have a suppression system covering it.
This is so great! I love the story of your experience as a student as well as the comparisons! Thanks, and please keep making great videos!
I would highly recommend looking at CuriousMarc's YT channel. They rebuilt an AGC over many episodes and was truly fascinating.
But I recall having a DX/2 66 and thinking how much faster it was than the ZX Spectrum!!
Thank you. I'm 68 and have been in the same business as you since 1977. Only, I've worked for tiny companies like IBM, MCI, and TEC America, not to mention Retix, DEC, Caldera, etc. I really appreciate your walks down memory lane. Oh and my first computer was a GenRad Future Data 8086 hardware emulator (with cassettes), followed by a Cromemco System III (Z80, yes, 128K of RAM was huge) when I was working for MSI Data. Yes, I came in through the hardware design path.
I'm just letting you know that I like your channel ;)
really interesting video and a nice walk down memory lane for me!!! - i'm almost 70 and a retired IT and network tech - i spent my career from c1970 working in that industry and I fondly remember all of those old mainframes, mini's and micro's - and i'm still a keen computer hobbyist even now - once a geek :)
Funny thing is that I use microcontrollers with vintage valve oscilloscopes to make the scope appear to run *slower*... As an example, for heart-rate monitoring the scope would not be able to show a ten-second trace to reveal irregularities. So a micro stores the readings and traces a 10-second window many times a second, mimicking the appearance of a long-persistence phosphor. So you get the cachet of 1950s vintage valve equipment with Perspex covers showing the valves, whilst using digital signal processing to make the old junk do something useful... (Admittedly this is slightly off-topic.)
Enjoyed your own trip down memory lane. :-). 😊❤
Great show. Would have loved to see the metrics for a Cray or Cyber computer included.
The ARM chips, which are so prevalent now, are the descendants of the RISC-based CPUs which powered the Acorn (including Archimedes) range of computers throughout the 80s and 90s. By 1996, the StrongARM CPU allowed Acorn's final production machine, the RiscPC 700, to hit 233MHz.
These were incredibly energy-efficient CPUs for the performance they delivered; there's one story that, when the team at Acorn in Cambridge was developing the original ARMs, they found that the CPU was still active even when the computer had been powered off: it was running off the residual charge still in the motherboard.
RISC-V is an open source derivative of the RISC processor. I'd love to see that system become standard.
They are RISC cpus. Not "RISC-based". Or I suppose you can call the modern variants "RISC-based" as they have a lot of instruction set extensions, just like all CPUs to accelerate various tasks.
And Cambridge should probably be more careful with the people they let in, because if the function of a capacitor is surprising to someone, he doesn't belong in electrical engineering on a uni level...
It wasn't residual charge in a capacitor. The power supply pins were not wired up to the chip on the original testing board, I believe due to a PCB layout error. But the chip had already been tested and was working perfectly. This was discovered when the design team went to measure the current draw of the world's first ARM chip. After a bit of thought as to how the chip was working at all, they realized that the protection diodes on the pins that were wired to working buses had been leaking enough current to power the chip during the initial function tests.
I still remember weighing the decision to buy my first computer between the Amiga 2000 and the Acorn Archimedes.
Hi Dave, Great series of videos. I started working as a computer operator on system 370's. All the Halon buttons I saw were Red and I have even been in the computer room when one went off. This stuff you do is great and brings back some great memories. Thanks to my brother for putting me on to your videos. Andrew
That was great....love comparing performances just to see the progress we have made. Hope you get your 'Big Iron' Dave.....👍👌
I started my career on IBM mainframes. The z14 excels in parallel processing like no other: it can pause execution while waiting for I/O and pick up other tasks. z14s are multitasking beasts. While I admire the power of Big Iron, they are prohibitively expensive.
On a $/MIP basis, they're not competitive. And it's interesting to consider that nothing really runs on the bare metal on these CPUs so you don't get all the performance.
I would also point out that z/OS and z/VM, despite being able to do some amazing failover tricks, can't do simple things you can do in VMware, like failing a workload over to a remote data center, without massive amounts of hardware and software; and even then, z/VM doesn't have the flexibility of virtual machines in a cloud environment, where you can take advantage of multi-region worldwide failover with zero RTO & RPO.
@@teekay_1 That might be true, but you are kind of comparing apples to oranges. The "cloud" has a lot of fancy stuff too, but you sacrifice a lot for those capabilities (mostly performance), and setting up multi-region worldwide failover is anything but trivial (or cheap). Setting up a parallel sysplex in a mainframe environment is trivial by comparison: it only needs three things (z/OS, an ICA SR or CE LR adapter in each machine, depending on the range, and the fiber optics between those two adapters), and getting the fiber from one site to the other takes the most time. If you want more failover capability, you can add your preferred GDPS flavour to the mix. Also, I'm pretty sure a parallel sysplex is not only capable of moving your workload between sites but also does load balancing on its own, without me needing to set up policies for it. The only time I saw it fail or not work correctly was when the front end was configured incorrectly; otherwise I would have heard a lot of complaints from customers when one system in a sysplex was brought down for maintenance and suddenly their business stopped working. :)
Additionally, there are services where you simply cannot move your workload over large distances, as response times would suffer when they are specified in the range of a couple hundred milliseconds at best; also, you cannot beat the performance density of a mainframe :)
p.s.: While it's true that mainframes are virtualized as well, the difference between VMware and the mainframe is that the hardware has been designed with this in mind for decades, and PR/SM is much more tightly integrated with the HW and the SW running above it than VMware ever will be.
tl;dr: both platforms have their strengths, weaknesses, and use cases, and yes, mainframes can be expensive (but so is the cloud if you want enough capacity and redundancy/failsafes), but they are very good at what they are designed for.
Do mainframes also have an advantage when it comes to reducing locks on OLTP databases caused by frequent updates and concurrent access by many users?
Dave, you're absolutely brilliant and are a father figure to me. Thank you for all that you do in your videos. You're beyond talented and are clearly a genius.
What a wonderful thing to tell someone. I hope he sees this, it's really sweet and I bet it'd make his whole week.
i Agree 100 percent
I was at BMC Software when we ran the first release of mainframe Linux on bare metal outside of IBM. (Others had run Linux on VM/ESA previously.) This was December of 1999.
The first application we ran was DOOM.
Seriously.
While I was busy being the geek and recompiling the compiler, my buddy Mike had taken a more interesting route. He built DOOM for Linux on S/390. We borrowed a nearby Sun workstation for the graphics. Worked perfectly.
So ... the first application run on mainframe Linux on bare metal (outside of IBM) was DOOM.
Nice video, thank you 😊 looking forward to the benchmarking video 😊
Thanks Dave, for another informative video. Please excuse the following rambling comment about the speed of mainframe computers (I am a retired, 69-year-old engineer).

Since the 1970s, I have observed the development of parallel processing computers and how the execution times of practical problems from science and engineering have never scaled with the number of cores used to solve a problem. From the beginning of parallel processing, the speed of computation has never scaled linearly with the number of cores. What has improved dramatically is the size of problem that can be solved. In the 1970s, problems with 10,000 degrees of freedom were considered big, whereas today problems with 10,000,000 degrees of freedom are solved routinely, because the computers have more memory.

When dealing with modern, massively parallel systems running heavily parallelized code, total execution time is often limited by the speed of the interconnect between the racks of equipment that hold the tens of thousands of computing modules in a multi-million-core supercomputer. Since the topology of many interconnect networks cannot scale linearly, the processors in large supercomputers may actually wait for data the vast majority of the time when trying to use a large number of cores. Problems in this class include those with large FFTs, nonlinear stress analysis (e.g., modelling the crash performance of cars), and computational fluid mechanics of compressible gases (modelling aircraft performance). For many of these problems, the LINPACK benchmark used in the Top500 rankings (LINPACK models dense systems of linear equations) does not exercise the interconnect the same way the real program does, because the sparseness and nonlinearity of the problem require moving data through the interconnect many more times. Many of the supercomputers use versions of the InfiniBand or Gigabit Ethernet interconnects.

Since the cost of operating these systems is a function of their electrical power consumption (often up in the range of megawatts), the speed of the interconnect affects the cost of operation in a major way, because it directly affects the computational performance of the cores. Remember Grace Hopper handing out 30 cm bits of wire representing the distance light travels in a nanosecond? The physical spacing between the racks of equipment increases interconnect latency by nanoseconds, but the switching inside the interconnect can affect latency by microseconds or milliseconds, depending on the data-transfer bottlenecks inside a program and the characteristics of the interconnect itself.
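The non-linear scaling described above is commonly summarized by Amdahl's law, where the serial (non-parallelizable) fraction of a program caps the achievable speedup regardless of core count. A small sketch (the 95% figure is just an illustrative number):

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's law: overall speedup on n_cores when only
    parallel_fraction of the work can run in parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)
```

With 95% of the work parallelizable, even a million cores yield a speedup of barely 20x; interconnect stalls effectively shrink the parallel fraction further.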
I remember overclocking my 286 by desoldering the crystal oscillator package from the motherboard, and replacing it with a socket. Then I plugged in different speed oscillators I had mail ordered to try. The speed increase was significant. It was a long time ago, but think I overclocked from 12 MHz to 16 MHz.
Me too
Really appreciate the handwritten closed captions, thank youuu
Really loving your new content! Very interesting ideas.
Speed isn't everything. The IBM Z processor is designed to run at close to 100% CPU utilization non-stop (given their cost, I wouldn't be surprised if that's how IBM customers use them) Imagine an Intel or AMD CPU running at 100% CPU utilization for a year? Not going to happen. Dave also touched on the other benefits of the platforms. Insane reliability (yes, you can swap PSUs, CPUs, RAM, etc on a live machine) and even do firmware upgrades on a live machine. The Z in IBM Z stands for Zero .. as in Zero downtime. Something very important for a bank, airline, etc.
Always love your videos Dave, thank you so much for em! I remember learning about the Eniac in 6th grade while being forced to learn how to use a Mac. That's what I found my love for computing, and my hatred for Macintosh computers. 🤣
It's funny I'm using a Mac now, and use a Mac a lot (about 40/60 Mac/PC for me), but I hated and still hate the original Macintosh!
@@DavesGarage Really?! I've been wanting to mess around with hackintosh just to check out the newer OS and see what it's all about but finding the time has been hard. I've been having a lot of fun learning Linux deploying servers and want to test drive Linux as a daily soon. Are you using Mac for video editing?
Lovely video Dave, great comparison of so many different devices
always fascinating, thanks for sharing dave!
I'm in my 20's and the sights and sounds of those old systems still give that awe inspiring limitless sense of the ability to calculate anything.
I worked with mainframes early in my career, and one thing they were really good at was I/O. This is why companies like banks liked them. You could have a centralized MF and use dialup/X.25/ISDN etc. to connect all your branches and terminals. IBM not only developed the MF but also a whole family of products and technology to deal with LANs and WANs.
@@CTSFanSam Yes. IBM developed SNA to deal with these massive customers. The networks built with it could span continents and connect thousands of users. It's kind of funny to see how, in many ways, we are returning to some of the concepts used by mainframes.
Yes, they needed a lot of DASD to handle both the I/O load and the size of the storage. Some banks, telcos, and other large organizations built large data centers to hold all that storage. One bank I am familiar with had a 3 building data center, and one of those buildings at 4 floors tall (ok 3 floors for the computer room) held thousands of drives and an enormous [for its day] 5TB of storage. Today of course you could meet both the I/O load and way more of the storage on a half dozen SSDs.
Funny & informative as always...thx for the video Dave 😆
I started learning in 1976 on a Burroughs using COBOL. I finally started writing their company programs using a Kaypro running CP/M. Convinced them to start the change over to PC's when some were 8088, and one was an 8086. Their productivity was tremendously increased. That old Burroughs was a slow dog running tape drives. We were sooo happy when we got our first hard drive. It was the size of a 2 drawer file cabinet and ran on a dedicated 220v line, and it was a huge 10MB drive, oh we were so proud. We got the 10MB drive before I started the change over to PC's. I turned 60 in 2023
Why would someone shoot a mainframe?
Fun Fact: At IBM Poughkeepsie in the early ‘80’s they had an old school RJE room we took all our card decks to to submit our jobs. I’m heading to lunch one morning and all hell was breaking loose in there. The HUGE printer paper spool was just spinning out of control and emptying all its contents across the ops floor. Not sure why or how this happened but the thought of shooting that machine did cross my mind. Judging from all the hollering and arm waving going on in there I can tell you I wasn’t the only one.
That, and all the monkey business (allegedly) going on behind the 3380 drive enclosures. Ahh, fun times.
"Why would someone shoot a mainframe?"
You try dropping a stack of several thousand punch cards and let's see your mental state!
On an old CDC 6600, I found out the hard way that if you print a listing of a Basic program with line numbers starting with a 1, the chain printer interprets this as 'page eject' causing paper to spray into the air at an incredible rate. The noise was incredible, as was the panic trying to stop it!
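That page-eject behavior comes from ASA carriage control, where the first column of every print line is a control character rather than data, and "1" means "skip to a new page". A toy interpreter (names are illustrative):

```python
# First-column ASA carriage-control characters and their actions.
ASA_ACTIONS = {
    ' ': 'advance one line',
    '0': 'advance two lines',
    '-': 'advance three lines',
    '+': 'no advance (overprint)',
    '1': 'skip to new page',
}

def asa_action(print_line: str) -> str:
    """Return the carriage action for a single ASA-controlled print line."""
    return ASA_ACTIONS.get(print_line[:1], 'advance one line')
```

A BASIC listing whose line numbers begin with 1 puts a "1" in column one of those lines, so the printer ejects a page per line - hence the paper fountain.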
Your comment about the huge moving parts got me thinking... the bulletproof glass might have been to protect people on the outside from stuff on the inside running amok, not the other way around.
In the '80s, printers had a paper-tape loop that, amongst other things, marked the top of form (the top of the paper). I never had it happen, but I can see the paper tape breaking and the printer advancing without ever finding top of form, right through the box of paper.
Back in the day, a lot of computer rooms had windows and you could look right in and watch the machines running. Then someone realized that all it would take was a single rifle round that could poke a hole in your very expensive computer, which would take your computer out and take your entire company down with it. Hence today's Class A datacenters that are locked down like Fort Knox.
Spent most of my time in that era of mainframes on Honeywell kit: Level-66, DPS7, DPS8, DPS88; prior to that it was the ICL 2903/4. I do miss going through dumps nobody understood and then pointing at the code and going "that's the bug right there"; it was magic. But one of the earliest and coolest things I did was the keylogger I wrote for the ICL before PCs even became a glint in the eye.
Great kit and some really cool history, but not as bankable skill-wise today, unlike the IBM mainframe range.
We had Honeywell FSE who did troubleshooting by tappy tapping circuit boards with a small hammer.
@@SSJIndy Can't say I've experienced that, but I'm not surprised if it had any relays.
Love the tech talk from the man that has been there and done it... back in the day, I seem to remember my Amiga A500's stock 68000 CPU ran at around 0.75 of a MIP @ 7.14 MHz.
One of the best channels on YouTube. I could listen to you for hours. You, Dave, have such valuable content in your videos. Fascinating.
A 3990X is ~2,000,000x faster than an Amiga 500 - just goes to show how poorly made modern software is. Much of that Amiga software on the A500 is more responsive and runs smoother than modern software on the latest PC/Macintosh hardware.
Important, often overlooked, and it goes to show that the "bloat" factor in software scales more in an exponential way than a linear one.
That’s because modern Windows is 90% spyware, 9% bloatware and 1% OS.
AGC:
- 16-bit architecture, word addressable
- Few registers
- Very limited instruction set, hardware multiplication included
- Simple I/O architecture
- Magnetic-core RAM and core rope ROM
S/360-195:
- 32-bit architecture, IIRC with 128-bit floating point
- Many registers
- Complex instruction set
- Multi-channel I/O
- Up to 4 MB of RAM, byte addressable
- Instruction pipelining
Good comparison! Writing code for the 360 would be a lot more fun, too!
@@DavesGarage I am writing some code for an emulated S/370 running on Hercules. Not yet in assembly, but mainly in BREXX (and sometimes a little PL/I, COBOL, or GCC).
As always - greatly enjoyed your video. Thanks a lot :)
Immensely entertaining, as always!!! Thanks, Dave!!
I am the proud owner of a 68060 Amiga 1200 (with a Phase5 Blizzard 1260 accelerator). Not an A4000, but still pretty cool. I have a 4000, but it has the more common 68040 (clocked at 40 MHz in my configuration). The Amiga still has a thriving community, and new accelerators are being released every year. The 060 is a sought-after upgrade, and it has gotten extremely expensive recently; it would cost as much to buy an 060 as a 12700K these days.
same here + with SCSI & Gfix card ;)
@@rafaswierczynski Same here :D
Accelerator, SCSI, and PCI GPU on the A1200; accelerator, SCSI, and Zorro GPU on the A4000 :)
@@nevilovermann797 always wanted A4000.... ehh...
When we got our first 486 at work, the lucky stiff who had it on his desk demonstrated its speed by completing a game of Solitaire and watching the cards bouncing across the screen just ridiculously fast. Then he did a directory listing that scrolled across the screen so fast you couldn't even read it.
We all wondered at the time why in the world would anyone ever need anything faster.
The 486 was the MS-DOS days: no Solitaire, that came in 1990
Or you got that 486 too late, lol
@@lucasrem1870 486 dx2 was released in 92 so 486 was still a thing when windows 3 was around
@@TheBadloseruk Nobody bought Windows 3, and no Solitaire on that either. PCs were not fast enough; all software was DOS-interface only
3.1, the Windows for Workgroups version, came some years later ;) then came the software too: Lotus, WordPerfect
Try to find 1992 ads selling them, hahaha, they were on shelves in 1994.
@@lucasrem1870 We were on Windows 3.0 looking forward to upgrading to Windows 3.1 and eagerly showing off our custom window arrangements to each other.
@@lucasrem1870 Yes Solitaire was included in Windows 3.0. In the version we had anyway. I still argue Solitaire is the only decent software Microsoft ever created.
I genuinely hope someone out there hooks you up with a mainframe. It would be absurdly amazing to see you work your magic with it, and show us what it's all about!
Awesome, Dave !! Thanks a ton. James.
Did you know that the mainframe can run multiple different OSes? Linux has existed on IBM Z since the late '90s (s390x architecture). The most common OS is called z/OS. IBM Z customers stay with the mainframe for many reasons, one of which is IBM's perpetual binary compatibility. You can take code written in the 1970s and still run it on the most recent z/OS and z15 CPU, without a recompile. Try that on any other platform.
My personal CPU benchmark has always been to calculate "10,000!" (ten thousand factorial) out to every significant digit. It has changed a lot over the years on the different systems I've run it on! In these cases, the computations were done in different interpreted computer languages, as the only user on the system (although in the Windows cases other processes were running in the background).
Year | Time        | System
1974 | not capable | Altair 8800, Intel 8080/Z80 clocked at 2 MHz, 56 KB main memory, Microsoft BASIC interpreter
1985 | 22 minutes  | IBM 3090, 64 MB main memory, 256 MB paging cache, REXX language, system cost $25 million
1999 | 6 minutes   | Intel Pentium III clocked at 1 GHz, 32-bit, .513 MB main memory, Java language, system cost under $1,500
2007 | 6 seconds   | Intel Pentium IV clocked at 1.5 GHz, 32-bit, 3.5 GB main memory, Java language, system cost under $2,000
2013 | 80 ms       | Intel i7-3930K clocked at 3.0 GHz, 64-bit, 32 GB main memory, Java language, system cost under $3,000
2016 | 60 ms       | Intel i7-5820K clocked at 3.3 GHz, 32 GB main memory, Java language, system cost under $3,200
Don't know about you, but I really don't need to compute 10,000! any faster than that!
How do you compute 10,000!? That sounds like it wouldn't fit in a long. Does a double cover all significant digits, or did you figure something else out?
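To answer the question above: it can't fit in a long (which tops out around 9.2×10^18), and a double won't cover it either, since a double carries only about 15-17 significant decimal digits and overflows to infinity well before a number this large. 10,000! has 35,660 decimal digits, so "every significant digit" needs arbitrary-precision integers. Here's a minimal sketch in Java (the language used in the benchmark runs above) using `java.math.BigInteger`; the class and method names are just for illustration:

```java
import java.math.BigInteger;

public class Factorial {
    // Computes n! exactly using arbitrary-precision integers.
    public static BigInteger factorial(int n) {
        BigInteger result = BigInteger.ONE;
        for (int i = 2; i <= n; i++) {
            result = result.multiply(BigInteger.valueOf(i));
        }
        return result;
    }

    public static void main(String[] args) {
        BigInteger f = factorial(10_000);
        // 10,000! has 35,660 decimal digits
        System.out.println(f.toString().length());
    }
}
```

Interpreted languages with built-in bignums (REXX, Python, etc.) do the same thing implicitly, which is why this makes a decent cross-era benchmark: it's pure integer arithmetic with no floating-point rounding to muddy the comparison.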
This is a great video. I work for a company that is big into mainframes and never really understood them. I work more on the Open Systems side (Windows and applications), and they are really a sight to see and hear in our datacenter.
Your best documentary yet! My Veeder-Root counter jammed while tracking your words-per-minute narrative. Thanks.
In 1989 my employer had a couple of IBM 3081 series mainframes (I'd worked on much better gear before that and they replaced these within a couple of years) - those things ran dual CPUs at around 30 MIPS combined, had 32 Meg of main storage and disk access was about 6 Meg/second per channel, seek time around 7 ms (mostly IBM 3350s at 3600 RPM)
A mainframe can generally do a lot of things at once at a slow/medium pace. That disk speed is slow even by mid 1990s PC standards, but it might have been doing concurrent IO to dozens of drives at once not to mention network activity among thousands of nodes.
That's my understanding as well. Also, mainframes were designed to be more durable than desktop machines. While CMOS technology allowed low-power CPU sets to be made cheaply, mainframes stuck to using bulky and power-hungry bipolar junction transistor technology through the end of the 20th century. Mainframes had multiple CPU cores back when almost all desktops had only one, and mainframes could be partitioned into logical units as well. It's sort of like virtual machine technology for PC, but happens at the hardware level.
7 ms seek times were actually scorching, and the PC world had to wait for 10k and 15k RPM SCSI drives to get those speeds. I think my first HDD in 1987 had about a 40 ms seek time, and the next drive I bought was down to 28 ms. The fastest drive I ever got had a 4.2 ms seek time.
This was amazing! So, as I recall, on the desktop we didn't have instruction pipelines until the Pentium introduced the U and V pipelines. It was some time after that that we got micro-instruction pipelines with out-of-order execution, by way of a much smarter instruction decoder. Is this correct?
And how many years did it take for instruction pipelines to make it from mainframes to the desktop?
Thanks!
I think there was some pipeline on the 486?
The original ARM2 chip, used in the Acorn Archimedes desktop computer in 1987 had a 3-stage pipeline.
DUDE! I love your channel. I spent 33 years at Intel from 02-1981 (before the IBM PC - fixing Intel development systems in L.A.) to 11-2013 (the last 26 years in research in Santa Clara) and I just love the references back to the day. I'd love to grab a cup of coffee with you someday. Take care and keep up the great job. God bless ya, brother.
You brought back memories. My first computer job (working for a state agency while I was still a college student working on my computer science degree) was to back up the VAXes at night. After the nightly backup was done, we had 8 VAXes spread throughout the state that would send all their changes to our 2 main VAXes at the central office. Those VAXes would compile all the changes made with the other 8 VAXes, then send a master copy of the databases out to all the others. In the morning, I would get the greenbar printout and look for errors or anything else out of the ordinary so I could hand it off to the programmers. While doing the backups at night, about every 10 minutes I would change out a tape and replace it with a new one. It was about a 6-hour job.
So, I have a small System/390, and Hercules running on a Raspberry Pi is faster (z1090 on a Xeon even more so). The experience of running on real iron is quite different and neat as well, and there are some things emulation doesn't deal well with (e.g. SNA or VTAM applications).
how did you get the s390 images last I saw was only s360. I do miss the ipl days of the big iron.
@@rw-xf4cb If you're an IBMer, you can request a modern z/OS 2.4 from the Dallas Systems Center. It runs well on the z1090.
The Raspberry Pi is more impressive when you realise that fully half of the processing power in that CPU is sitting there unused: the graphics cores are only unlocked with proprietary instructions and licensing, so around half the silicon in there doesn't even get a clock applied to it after boot, but just sits idle. Unlock the GPU side and performance would likely triple, as the original design was to drive an HD display in a set-top box, with a snappy processor for the system side and graphics that would do HD video decoding and such with ease.
I'm not sure what you mean, can you expand on this a bit?
he only meant the ARM cores.
M1 is the only modern chip here! OSX the only modern OS!
The rest is 1970s tech.
Thank you for doing this. I remember going into the mainframe room when I was working at a large furniture manufacturer. I was the keypunch operator.
The last mainframe I saw the covers open on was, I think, an IBM 3090. Water cooled. This was over 20 years ago. What I noticed about it was the quality of the welds on the plumbing. I guess when it costs maybe millions (actually I don't know) you get really highly skilled welders. I have never seen anything like it anywhere else. Of course other parts of it were impressive as well.
I've got an Amiga 3000 with an A3660 in it. But I honestly plan to move it over to a PiStorm (running Emu68) with an Edu Arana adapter when the PiStorm32 happens.
Dave, the story I have heard about Windows NT is that Dave Cutler who previously worked on VMS just added one to each character and ended up with WNT - Windows NT.
Is that a myth too good to be true?
I really enjoy your Videos and especially if you mention the Amiga. I still own a A1000 and a A-4000/040 with a 68060 from Apollo and Cybervision 64 graphics board. In my view the best Computer ever and sadly forgotten.
Thank you very much and have a great and healthy new year!
Your content is absolutely fascinating Dave! So well presented too.
Thank you kindly!
Dave, thank you for giving me a more useful metric for CPU comparisons.
In a couple of weeks, I'll be 65. During my IT career, I was lucky enough to start in IT straight out of high school working at Australia's CSIRO (science institute) while I also studied. I programmed a CDC Cyber 76 (in Fortran & COBOL) which used a CDC 3600 as its I/O device. I then moved on to a PDP-11 using BASIC, then a UNIVAC 1100, to an ICL System 4 (which is effectively an IBM 360), back to UNIVAC, and then on to IBM 370s, VAXes, MicroVAXes, IBM 390 mainframes and beyond! At the same time my other obsession was micro PCs. This episode gave me a lot of memories and bemusement seeing the progress of MIPS, and Dave, I remember when MIPS was everything!!
You've seen quite a spectrum of hardware over your time, that's for sure!
Classic Dave, excellent video presentation.
This was one of my favorite videos, yet, Dave!
Love your content, Dave, and your presentation style, it's all top class stuff. Oh, and I do have an Amiga here somewhere, but I don't think it has the M68060 cpu in it. I'll dig it out from attic storage at some point and look.
Really liked this video. Different OS architecture and everything makes it hard to compare systems sometimes. But I've always wondered how things like my cellphone compare to the systems I've owned or some famous top of the line machines from back in the day
Great history lesson. I am constantly amazed at the performance we have sitting on our desks. Really enjoyed watching.
Me also
3:33 You were close: it's Sisyphean. Great video!
Dave i have watched your channel from the start. Congratulations to what I believe is the best episode yet. I can clearly see that you are in your comfort zone now. Soon 250K subscribers. Awesome work.
Wow, thanks! I did put a lot of work into this one and am glad you noticed :-)
@@DavesGarage I hope IBM noticed it too and the UPS guy stands outside your door with a huge wooden crate. Best of luck 🤞
@@RonnyJakobsson That's not how it's normally done, but otherwise, yes, we noticed😉
15:48 -- By the way, a youtuber by the name of CuriousMarc has been chronicling his and a group of fellow programmers' and engineers' journey rebuilding the AGC from salvaged simulator parts and donated hardware from former NASA engineers. They now have a fully working AGC that they take to various shows, and you can land the Apollo lander yourself! It's simply amazing.
Another informative and entertaining video. Thanks Dave
Loved this walk down memory lane!
I LOVE these kinds of stats! Thanks, Sir.
Really interesting to see the progression. I'm watching you on an 11yo i7-3930K. The GPU has been lightly upgraded, but it still does just fine. It was bought to run fairly demanding simulations that took up to ~45 min to run. Not doing those anymore. Other than being a little power hungry (and a big box), it's just fine.
Very good, enjoyed it, thanks!
I just found a new favorite tech channel and just had to subscribe