I worked at Intel designing CPUs from 1980 through 2002. In the 80's we watched with interest the "supercomputer wars" which definitely influenced how we approached new processors. We didn't have enough area for large-scale parallelism but started by dedicating an unheard-of portion of the die to a Floating Point Unit. I designed that unit which was used in the 960 series, the 387 and eventually the 486. I was the design manager of the P6 (Pentium Pro/II) where we employed much more parallelism in addition to many techniques that had previously failed (out-of-order processing, speculative execution, register renaming, etc.). The Cray was always an inspiration, and in the late 90's we arrayed our processors and took the computing crown for a while. Interestingly, I knew the guys who started Ncube and even helped them fix their layout plots to avoid some fatal flaws. Those were heady times. We're now working on quantum systems, an even headier topic!
Speaking of nCube and layout plots, I remember visiting Stephen Colley at nCube headquarters. He was lying on a schematic which covered the entire floor of the atrium, trying to debug something. We talked for about an hour, after which he excused himself saying he really had to finish writing the operating system!
@@talkingpoetry5281 Yes, those were the days.... We did a room size plot for the P6 but it was just for show, not debugging.... Had some great shots of it in the press.
@@herbpowell343 No, I was the project manager for the follow-on. Our team found the FDIV bug even before it was publicly reported. The Pentium guys downplayed it even though we pointed out it was easily reproducible and corrupted results. They said it happened very infrequently (true) in normal numbers, and put it into a 2nd stepping to fix. Then it was posted for all to see and they looked pretty silly. The Pentium was not a good design. Late, twice as much power as expected, and half the performance. Took them 18 months to get to market. On the P6 we hit our schedules, got better than projected power, higher performance and an easy shrink and got to market in 10 months. We were viewed as "the wild Indians in the north" as Barrett said and nobody thought we'd be successful. We loved every minute of it.
My father worked for CDC at the time Cray was working on the 8600. Cray wasn't keeping HQ up to date as to how things were progressing. They were progressing very slowly. Norris sent my dad, who he knew was a laconic, hardheaded type like Cray, to take a field trip to Chippewa Falls to ask Cray how things were going. That was an inspired decision as Cray spilled the beans, the 8600 was unlikely to ever work due to cooling and reliability issues. Dad borrowed some of my facsimile paper, a paper coated with toxic but conductive metal powder. If you apply current to the paper you can do a crude analog simulation of heat flow. The flow lines didn't look promising. My dad brought the bad news back to Norris and he quickly wound down the 8600 project, and that prompted Seymour to leave CDC.
Oh such history and memories in this video! When I was growing up, my Dad was a computer technician for CDC in Arden Hills, MN, working on the Cyber mainframes. I remember him talking about the cool things at the Magnetic Peripherals division, which I recall eventually became Seagate. Unfortunately, his career at CDC ended when they put him at ETA Systems in the mid to late 80s and that spin-off failed.
One minor correction to the video - The CRAY systems weren't C-shaped for cooling purposes, but rather to minimize the length of the backplane wires in order to reduce signal propagation delay. Cooling in the CRAY-1 and a few subsequent systems was provided by freon flowing through the vertical aluminum bars between the columns for the circuit boards, which were layered on both sides of heavy copper plates to conduct the heat from the circuit boards to the cold bars. Very elegant!
It was not Freon ... it was (I can't spell it) Fluorinert... you could dive in and breathe it but could not come back out. And yes, the "C" was for wiring length, or the distance the electrons traveled ...
No, the cold bars *did* use freon (or at least a freon-like substance). The Fluorinert was only used in the immersion-cooled systems like the CRAY-2 and was really expensive, like $500/gallon.
I worked for Control Data in 1970, fresh out of university. I was smart in school, but at Control Data I felt like I was at the bottom of the ladder. There were a lot of genius people working there. Some were super passionate, working 7 days a week, day and night.
I hear you. I just came to respect those pure genius types; many couldn't hold a conversation but could see the workings of the universe in their heads. I found there is a lot of high-end work left for us "smart guys" to do.😊
I met Mr. Cray when he was installing a Y-MP at the Air Force Weapons Laboratory at Kirtland AFB in Albuquerque. I was a young Lieutenant and they actually used one of my finite element models as a benchmark.
I worked at the KAFB Weapons Lab from 1986 -1992 over in Bldg 617 on COIL. Seems like Wang was around our area too, but that was long ago and my hard drive has many missing sectors lol.
@@bobbys4327 very familiar with COIL. I worked on space-based laser vibration suppression in the four-trailer "quad" in the back parking lot from 85 to 89. We were there at the same time.
I once met Seymour Cray. I was a Field Engineer for Data General working in the Minneapolis field office. Cray Research was using a Data General Eclipse computer. I don't recall why. They had a hardware problem with the DG system at their Chippewa Falls facility and I repaired it. It was well after 5:00pm when I finished. I walked the halls of their building to find someone to sign my Field Service Report. That's when I met Seymour Cray, and he signed my FSR that night. I won't forget that encounter or his signature. I was 20 years old but still knew who he was and his legend.
IIRC, minicomputers were used as front ends to the supercomputers, because the supercomputers couldn't really support terminals and interactive work; probably ditto for other peripherals. I had to program the 6600 with punch cards punched on an 026 keypunch.
The early Cray 1 machines used a DG eclipse as a front-end. Cray X-MPs and later ones used an 'expander chassis' that had a removable disk pack and some other hardware to bootstrap them.
Heh. My first IT job was running Data Generals. It was a good way to get IT experience because one was always fixing something. I swear you could get a DG to puke and abend just by staring at them the wrong way. But they were good for their time, I guess.
One of the most amazing things about the Cray supercomputers was that their processing geometry was very unusual. One of my professors, giving me a history lecture about this, devoted a full hour to the competing geometries.
Back when I was at university in the 1980s they had a Cray there. At the time, the 486 was the hot stuff for PCs. Don't GPUs work a lot like vector computers? I believe at least the older ones did.
@@magnemoe1 Actually it's modern GPUs that work a lot like Cray's geometry vectors. Old GPUs were a hardware implementation of fixed function geometry pipelines like OpenGL up to version 2.x and DirectX up to version 10. They were basically "a program made in hardware". Modern GPUs are SIMD devices. (SIMD = Single Instruction, Multiple Data). Or pretty much what Jon described as vectors. But they're also massively parallel with thousands of cores. So if you want to project a 3D scene onto a 2D plane (the screen) you're running the same tiny little program, with enormous amounts of geometry data as arguments, split up over thousands of cores. In the next frame the angle has changed only slightly, so the tiny little program (called a shader program) only has a few cos()/sin() values updated to reflect that, and the whole thing goes again. In addition to being able to work on the vertices of a 3D mesh, modern GPUs can also work directly on the pixels in the 2D projection to adjust brightness, transparency, etc. This is why running computer graphics becomes more and more demanding as resolution increases. But it basically works the same: a very small program iterates over all the pixels, but to do that effectively long rows are loaded into wide registers in many cores. (Disclaimer to fellow nerds: This was purposefully kept incredibly superficial to the point of introducing slight inaccuracies)
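To make that "same tiny program over lots of data" idea concrete, here is a minimal sketch in Python/NumPy (my own illustration, not anything from the comment above): the "shader" is just a brightness/gamma tweak, and the vectorized version applies it to every pixel of a frame as one array expression instead of a per-pixel loop. A real GPU additionally spreads the work over thousands of cores, which this sketch does not attempt to show.

```python
import numpy as np

# Toy "frame": height x width x RGB, values in [0, 1]
frame = np.random.rand(1080, 1920, 3).astype(np.float32)

def shade_scalar(img, gain=1.1, gamma=0.9):
    """One value at a time -- how a plain scalar loop would do it."""
    out = np.empty_like(img)
    h, w, c = img.shape
    for y in range(h):
        for x in range(w):
            for ch in range(c):
                out[y, x, ch] = min(1.0, gain * img[y, x, ch] ** gamma)
    return out

def shade_vector(img, gain=1.1, gamma=0.9):
    """The same tiny program applied to the whole frame as one data-parallel expression."""
    return np.minimum(1.0, gain * img ** gamma)

result = shade_vector(frame)   # vastly faster than shade_scalar(frame), same output
```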
I was in charge of software maintenance while Seymour was still the leader of Cray Research .. All the wires in the Cray were blue/white and the ladies that wired the system had to know exactly from and to for the connections ... it was mind-blowing how fast they were. Hardware people including engineers HATED software programmers ...
YES as a software engineer - we always argued with the hardware dept - how will we fix the problem in our system - a hardware change or a software patch? Always fixed it with software!! Remember, hardware is easy - it is software that is hard!!!
Yes, but remember the 1991 conference in Minneapolis? That's when we, supposedly, brought together the hard/soft-ware groups into one big happy family.
I am old enough to remember when Cray was synonymous with supercomputing. I also remember Silicon Graphics. I remember being in grade school and middle school seeing magazine covers with Cray computers featured on the front.
I am old enough to remember the joke ad in the back of a late-Seventies issue of BYTE magazine: "CRAY-1 on-a-chip! Plug it into a penlight battery and watch it go!" I thought then 'I might live to see the day you could do that'. ..
Thanks for this. It brought back quite a few memories. I wrote my first program in 1966, and spent my working life in IT. Never was involved with supercomputers, but remember long days spent in looking for ways to reduce instruction path lengths in an airline reservations system on a Univac computer. What a joy it was to see 5 instructions knocked out of the path!
My first computer, in the 1980's, had software called 'Framework' by Ashton-Tate (around the time of Kaypro computers). Its monitor had an amber-colored font, no mouse and no graphics. I liked it. Framework had some features that even MSFT Windows never figured out, such as the ability to 'copy' and retain text from multiple sources concurrently - to 'paste' or access later. MSFT can't 'copy' more than one text item at a time.
Back in the early days when memory was not only expensive but the machine could only hold so much, you looked for all sorts of creative ways to make your code smaller and faster. Made code maintenance a nightmare, but it was fast!
I worked there from 1984 to 1992 in sales and sales management. It was the best place I ever worked. Smart, compassionate, high integrity people, great products, and a wonderful work environment. Thanks for this video from a proud ex-Crayon.
Thank you. I worked for REECo in Las Vegas Nevada in the early 1980s. They had the contract to do dosimetry research on Nevada test site data. I interacted with the atomic energy commission's computers. I think the tour I had of their facility was in high school. I wish I could remember more details.
@@farrapo2295 Cray is still my favourite job of all time, and what a product to work on. There will never be another machine that looks as good as a Cray 1 or X-MP did.
The photos of the Eckert-Mauchly tag and the Cray-1 memory are my photos of my computer parts. I'm glad to see them used.
Thank you for this video. It tells a great part of my life. It started with the CDC7000 at ETH in Zurich. We moved on to the Cray-1. We were so proud using the fastest computer in the world. My mentors Niklaus Wirth and C.A. Zehnder pushed me into a wonderful life. Being now an old guy, hacking on an overclocked Intel chip, I happily look back on those outstanding machines.
I programmed a 6400, the 6600's kid brother which sacrificed parallelism for cheapness, for about 3 years, assembler of course, my second computer (first was an IBM 1620). What was most fascinating to me was how clean the instruction set was, how symmetrical and logical. I didn't really appreciate it for many years after working on others with much more dreadful instruction sets. I'm looking at you, x86, the ugliest instruction set I have ever worked with. Thornton (with Cray?) later wrote a book on the tricks which went into speeding up the 6600's instruction set, adding to my impression of how clean the 6x00 family was. The short description of the 6600's multiple processors is slightly misleading from being so short. It had one central processor with 64K (?) of 60-bit memory (60 is 5 columns on a punched card) which had zero I/O capability and no system mode; it was a pure user mode compute machine. There were around 10 PPUs (Peripheral Processing Units) with 4K 12-bit memory each, I think, but that's misleading too. There was really only one real PPU, and it switched context to each of the virtual PPUs in turn, I think every microsecond. Those PPUs did all the I/O to tape drives, card readers and punches, and ran all the system instructions which started and stopped the CPU and switched tasks. Each CPU task stored I/O requests in their location 0, and the PPUs monitored that, executing file I/O and transferring data to and from CPU memory. There was also extended core memory, 10 times as slow but 10 times as much, with special instructions to copy blocks back and forth.
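For readers who have never met that kind of "barrel" arrangement, here is a rough conceptual sketch in Python of the scheme described above: one real execution unit rotating through ten PPU register sets, one step per tick. The class and field names are made up for illustration, and the timing and instruction behaviour are not taken from the 6600 manuals.

```python
from dataclasses import dataclass, field

@dataclass
class PPUContext:
    """Register set for one *virtual* PPU (program counter, accumulator, 4K of 12-bit memory)."""
    name: str
    pc: int = 0
    acc: int = 0
    memory: list = field(default_factory=lambda: [0] * 4096)

    def step(self):
        # Stand-in for executing one PPU instruction.
        self.acc = (self.acc + self.memory[self.pc]) & 0o7777  # wrap to 12 bits
        self.pc = (self.pc + 1) % 4096

# Ten virtual PPUs share one real execution unit.
ppus = [PPUContext(name=f"PPU{i}") for i in range(10)]

def barrel(ticks):
    """Round-robin: each tick, the hardware advances the next context by one step."""
    for t in range(ticks):
        ppus[t % len(ppus)].step()

barrel(100)  # every virtual PPU has now executed 10 instructions
```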
I used to be an operator of the later Cyber series (a 72-26). The problem for the CPU + PPU architecture was that the rolling PPU could hang. It did this rather too often. That meant that regardless of the applications continuing in the central CPU, all the output was lost, the I/O being performed by the PPU. The Operating System had to be restarted. The other thing the Cray-designed CDC computers were limited by was the address bus - many applications spent most cycles swapping overlays rather than manipulating data. I recall the crystallography programs were right on the limit. Those running large dataset Geography applications ... went elsewhere in abject despair. The x86 may be ugly, but even the slow original IBM 8088 PC could address more RAM. When using the 8087, the high-precision floating point provided even more bits.
@@grizwoldphantasia5005 Yes, the PC was a lot later. Nevertheless there were a few things about the IBM PC that seemed to me to change a lot about high-performance computing, as it had been. With a PC it was cheaper to have a little box in your office running difficult applications 24X7 - rather than battling for the limited time available on those old computers. It was not just in the realm of scientific computing. There was a person in the finance area at the uni who used a PC for end of year ledger reconciliation. Using a PC he managed over a weekend what took weeks of batch work on the mainframe. Unsurprisingly, he was the first person in the university administration to get an AT. I asked a fellow who used to work with crystallography on the old IBM 360/50 that preceded the Cyber if he was moving to using PCs. But by then his career had moved into being a manager at the university, and he found no time to pursue his academic background. So I do not know if the crystallographers saw the transition the same way I did. I would expect that the ability to transition would depend on the availability of compilers, and the quality and capability of the compiled code they produce. But I may be quite wrong in that expectation. IBM seemed to take this need for compilers a lot more to heart than Microsoft did.
Wow...I had NO idea that any of these details on the history of computers existed..!! I Highly recommend the "Computer History Museum" in Mountain View, CA... They not only have the very first 'Asteroid' game (that you can play if you're lucky), with a round vacuum tube display screen that has an incredibly amazing 3-d effect, but also a Cray-1 supercomputer as well as a Cray-2, Cray-3, the Utah Teapot, the 1969 Neiman Marcus Kitchen Computer, original Apple I and MUCH more. The museum is based on a timeline and starts with a Chinese 'Abacus' and proceeds to 'today'..!! You will find your earliest memory of your 'First Computer' and remember everything from that point onward..!! Mine was the Radio Shack 'TRS-80 Model I' that my Uncle had, and my best friend had an 'Odyssey' Pong Game that we played for many hours..!! (1973-ish) A Wonderful Museum..!!
A few comments. 1) Read The Supermen, by Charles J. Murray, to get a complete history of the man and the companies. 2) Cray worked closely with Les Davis, an underappreciated engineer who worked a lot of the packaging magic that made Seymour's designs practical. 3) Cray's custom vector CPUs eventually became unaffordable to develop, while the high volume microprocessor industry was making considerable performance advancements with every new CMOS generation. In the end, the entire supercomputer industry, now generally known as the High Performance Computing (HPC) industry, came around to deploying enormous numbers of microprocessors in highly parallel architectures such as Cray's T3 family, often with custom accelerators and then GPU accelerators.
I just want to remind you that modern scalar CPUs do have vector instructions. They are usually referred to as SIMD (Single Instruction Multiple Data) and some of the popular examples are MMX, SSE, AVX, AltiVec, Neon...
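A quick way to see the benefit on a modern machine without writing intrinsics by hand: NumPy's element-wise operations are typically compiled to use exactly those SIMD instruction sets (SSE/AVX on x86, Neon on ARM) where the CPU supports them. A minimal sketch, with the caveat that the actual speedup depends on the CPU and the NumPy build:

```python
import numpy as np
import time

n = 10_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)

def add_scalar(x, y):
    """One element per iteration, the way a plain unvectorized loop works."""
    out = np.empty_like(x)
    for i in range(len(x)):
        out[i] = x[i] + y[i]
    return out

t0 = time.perf_counter()
c_vec = a + b            # NumPy dispatches a vectorized loop (SIMD where available)
t1 = time.perf_counter()
print(f"vectorized add: {t1 - t0:.4f} s")
# add_scalar(a, b) produces the same result but is dramatically slower in pure Python.
```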
I walk by one of these old round Cray-2s regularly at work. The plaque says they paid $19M for it (in the 80's), and it's something like 1/100 as powerful as an iPhone X.
@@blurglide How about Dave Cutler? He’ll get it running and do demos on his channel. He got an IBM mainframe recently. Great project, he got it up and operational.
And that's part of the stupid that contributed to the slow death of the company. (Remember, Jobs bankrupted Apple several times like that. Form does not trump function, 'tho Mac purists bought whatever he crapped out.)
The great thing that the GPU companies did (NVidia and ATI, which later merged into AMD) was similar to what Cray did with scientific computing: they identified a small community with large compute budgets that were willing to write their own software on machines that were massively redesigned every generation. For GPUs, this was game designers. These folks didn't actually have large hardware budgets of their own, but their customers were collectively willing to spend many billions of dollars a year on graphics hardware. The total number of software titles that had to run on each generation was around a hundred, and NVidia and ATI developed close relationships with the folks doing that work. And crucially, they let the CPU handle most of the complexity in a way that was backward compatible, so that in each generation it was a small part of the software (the kernels) that had to be ported to the next GPU.

Eventually, NVidia took an open-source software project, GPGPU, and turned it into CUDA, which wraps the GPU in a software layer that is somewhat forward- and backward-compatible. CUDA made it possible for a lot more people to write code that partially runs on the GPU, because they didn't need to learn as much and they didn't need as much individual support from NVidia.

So I'd disagree with your summary. In years past, the GPU folks defined a space in which they did quite a bit of from-scratch redesign each generation. However, they've also been dragged down by their own success. Now that they have so many customers doing so many things with GPUs, they have a requirement for forward- and backward-compatibility that restricts some of their innovation. Recent generations have been able to run code for prior generations fairly well, as the architecture has stayed similar enough and just the sizes of various memories and numbers of SMs and cores have increased. To get full performance though, programmers still have to retune their code for each generation.

NVidia in particular learned an important lesson along the way. They made a ton of money for a few years when cryptocurrencies moved from computing the blockchains on CPUs to GPUs. NVidia was unsure how long blockchains would be a cash cow, and was unwilling to throw away graphics performance to get better blockchain performance. Their run came to a halt when crypto folks moved to FPGAs, which didn't last long before the crypto folks moved to full custom silicon.

So when the AI folks moved their code to GPUs, NVidia decided to support them with products tuned just for their workloads. They forked their product line and introduced new products which are scaled way up for AI workloads. It is unlikely that mainstream graphics processors in the near term will have larger caches or memories, for instance, than the H100, and so code tuned for the H100 is unlikely to run well at all on regular GPUs for perhaps a decade. H100s also have inter-GPU communication channels which completely outstrip the PCIe connections on mainstream GPUs. As well as pushing up the cache sizes, they pushed hard on packaging. An H100 burns 700 watts, far more than high-end professional GPUs (right now the RTX 4090, which uses a total of 450 watts including the memory chips). The H100 has six stacks of HBM memory on a silicon substrate, a scheme that gives it 3 TB/s of memory bandwidth to 80 GB, compared to 1 TB/s to 24 GB that the 4090 gets from 24 discrete GDDR6 memory chips on its printed circuit board.
NVidia is competing with several companies making from-scratch NPUs, including Google, Amazon, Tesla, and Facebook, as well as a slew of startups. As the GPUs have a significant amount of hardware that is unused in AI (texture caches and MPEG decoders?), these from-scratch designs have some basic advantages. It'll be interesting to see if NVidia is willing to make a product which gives up software compatibility to keep up with all these new entrants. NVidia certainly has the capital to fund multiple chip design teams, but they may be unwilling to partition their best design team.
The Cray II, originally designed with four processors, eventually was expanded to 8, then 16. I worked for CRI as a Circuit Design Engineer for 5 years in Chippewa Falls. The Cray 3 was extremely hard to manufacture and that was its Achilles heel. Eventually circuit density of CMOS displaced bipolar ICs. There were few of us at CRI that liked CMOS. The YMP series used bipolar gate arrays from Motorola. Seymour was eccentric: he did not believe in SECDED, nor did he believe in the damaging effects of ESD, which GaAs is extremely sensitive to. He also did not believe in using both edges of the clock, preferring to use only the rising edge to instigate operations. I always found that to be the most eccentric thing, as it could have potentially doubled the speed. I was told that he did not trust the signal integrity of switching events based on the falling edge. I was originally hired as a Reliability Engineer at CRI and was appalled when I learned about that and the lack of SECDED and ESD protection on his designs. After I left I found out that some younger engineers had plans to fit a highly integrated version of a 16 CPU Cray II into the size of a shoe box. The company was divested before that could happen.
Mostly true, but maybe a bit misleading. Cray 1 S/N 1 didn't have parity, but IIRC, S/N 3 did, as did all subsequent Cray machines, using a (72,64) SECDED code licensed from IBM. The Y-MP and its successors used an (80,64) S4ECD4ED code that could correct any single 4-bit error in the 80-bit code word. The Cray 2 and 3 also had SECDED memory. The original Cray 2 had 4 CPUs, but there were several 'q' machines that had only one. There was also a single Cray 2.5 sold that had 8 CPUs; it was nominally sold by Cray Computer Corp. to NASA. Bipolar logic is low-impedance, and doesn't suffer terribly from ESD. The Cray 3, however, used MESFETs (high-impedance devices) rather than bipolar logic, and was more sensitive.
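For anyone wondering where those widths come from, the check-bit counts fall straight out of the Hamming bound (my arithmetic, not from the comment above): single-error correction over $k$ data bits needs $r$ check bits with $2^r \ge k + r + 1$, so $k = 64$ requires $r = 7$ (since $2^7 = 128 \ge 72$). Adding one overall parity bit for double-error detection gives $7 + 1 = 8$ check bits, hence the $(72,64)$ code word. The Y-MP's $(80,64)$ code spends 16 check bits to correct any error confined to one 4-bit group, presumably because each memory part supplied 4 bits of the word, so a single failed chip stayed correctable.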
I would hate working on a system that uses both edges of the clock. It complicates clock distribution and edge skew prediction. Doesn't even help much for meta-stable event resolution. He probably used one standard D-register design throughout with known setup/hold times.
@@careycummings9999 If we are being honest he could probably do a good Birdperson impersonation without much practice. He almost nailed it this time which is why it got a good snort and laugh out of me. I was like did he jus... he did....
When I was taking CS in the 1980s, Cray was the ultimate in computing power. Plus they really looked cool. They were used in the CGI for The Last Starfighter.
It's not at all surprising Warren Buffett declined to invest in a company that took a really smart guy to run it. Considering he has said he prefers companies that could be run by a ham sandwich.
@@mrdumbfellow927 I forget whether it was his colleague Charlie Munger or Peter Lynch who said it, but the other line was that you prefer a business that could be run (profitably) by an idiot, because sooner or later an idiot will run it.
Steve Chen wasn't let go only because of financial issues; he couldn't ever call a design finished and was constantly tinkering to make the design better. The company got fed up with waiting for him to finish the Y-MP design and had to hand it to someone else to get it across the line. That was what the company told employees at the time. I was lucky enough to work for them between '88 and '95.
I remember walking through Boeing in about 1978, and reading the following note pinned to the outside of a cubicle: "There comes a time in the life of every project when you have to shoot the engineers and go into production."
@@bea9077w now Boeing fires or overrides its engineers and kills the passengers (and maybe the astronauts). That company won't change until senior executives go to jail for killing hundreds because of deliberate cost-saving decisions they make
In 1986, I worked for a company (Dynamotion) that provided equipment to Cray Research. Specifically, we provided circuit board drilling machines that could drill holes as small as 0.0039" diameter. That's about the diameter of a human hair. I was the applications engineer for Dynamotion. The drilled holes were then interconnected in a board stack using gold wire. Each one inch by one inch board had 2200 holes interconnecting 16 chips on each multi-layer board. The drill bit was spinning at 120,000 RPM. The spindle shaft was floating on air bearings. Ball bearings produced too much heat and vibration. Back then, air bearing spindles were leading edge. Back in the early 1990's, Chippewa Falls was my "home away from home". That facility is now TTM Technologies.
It's worth noting that Cray (as part of HPE) is still doing pretty well, with 4 out of the top 10 supercomputers in most recent TOP500. The systems are generally using third party processors in giant clusters but connectivity and cooling are still the secret sauces.
I took my first CS course in 1969, and am still working in the field. This is one of the better Computer History videos I have seen in a long time--congratulations on this video.
The 80s! W/ SUNs, Symbolics, Apollo……I still have SOLARIS running on a box, and my NeXT Cube…….still better OS than Mac, and I worked for Apple for 23 years, and NeXT too.
This was one of the most informative videos I've watched in a long time. The writing and visuals are great. I've become a subscriber, and I look forward to seeing more videos from Asianometry.
In the mid/late 1980s while on a tour of a Bell Labs data center in New Jersey, I was allowed to stand in the center of the 3/4 circle of this Cray: en.wikipedia.org/wiki/Cray_X-MP ....A year later the final 1/4th of the circle was filled with a RAM disk.
Around 1990 Shell Rijswijk employed a Cray for geological modelling, allowing geologists to play around interactively. I was technical leader of a small team who did modelling with transputers. This was orders of magnitude cheaper than the Cray. Later Shell donated the transputer computer to the Dutch Hobby Computer Club. It was demonstrated regularly, for example at the "kennisdag" earlier this year.
Now the best thing out of Chippewa Falls, WI - Leinenkugel beer - with which I am drinking a toast to Mr. Cray and his work. Please celebrate responsibly.
It was a tradition for many years that every Cray delivered would also come with a case of Leinie's in the truck. Tradition carried through until at least the mid-2010s (Cray Inc)
Awesome walk through history! I started my career out of college at the EPA's NESC working for Martin Marietta, where we installed and managed a Cray Y-MP for the EPA. We upgraded to a Cray C90-4 and also installed a Cray MPP (I don't recall the exact model). Those were good and exciting times.
@@TheOnlyDamien My PhD project was partially funded by the Ministry of Defence in the UK. It was related to numerical simulation (LES) of turbulent flows and pattern recognition of turbulent flow structures. At that time, the meteorological offices tried to use the Cray supercomputers to predict atmospheric turbulence and the numerical techniques known as Large Eddy Simulation (LES ) were developed in that era. Sorry very technical stuff! I used the Cray remotely on the ancient BBC Micro computers. The Cray YMP supercomputer was not hard to use, but the programming/debugging with the language Fortran 77 for the simulation source codes was a nightmare. At the time, I also used the massive ICL mainframe computers that filled up a big room in my university for post-data processing and analysis.
@@singhonlo67 That's genuinely so fucking cool thank you so much for sharing, and I appreciate the technical bits it's what we're here for after all. Thanks!
@@TheOnlyDamien funny, I also did my Masters thesis using a YMP (maybe it was XMP), it was extremely painful. You submitted your program during the day and received the results the next morning. .... in my case I mostly got the fun message: .....fatal error abort, sth like that. It took me forever to get it right. But I definitely enjoyed the comfort of a line editor. Who cares for those stupid punch cards :-)
Eniac wasn't the world's first programmable digital computer. The first was the UK's Colossus Mk1, which preceded Eniac by at least two years, being launched in 1943 to help with wartime codebreaking. The ever-modest Brits felt there was no need to shout about it, and its existence and history were kept secret until well into the 1970s.
Colossus was a special-built machine designed for one job. Eniac was general purpose. It didn't have a stored program, but it could compute firing tables or nuclear weapons calculations.
Where does the computer built by John Atanasoff fit in the timeline of computer invention? His computer also preceded ENIAC, but the credit for the first computer went to John Mauchly and J. Presper Eckert for ENIAC.
I was born in 1956 and my dad was a software designer. Dad worked for several years at Collins Radio in Cedar Rapids, Iowa. Arthur Collins was trying to develop a mainframe computer to challenge IBM. Things went south in the early 1970's and the Collins "C System" computer bankrupted Collins Radio. In 1972 Collins faced failure or selling his company to North American Rockwell. Rockwell rescued Collins from bankruptcy and eliminated anything that wasn't profitable, including the new Collins C-System, which I understand was purchased by Control Data Corporation. I always wondered if some of dad's software made it into Control Data or Cray systems? Dad said at the time that the hardware and software that they had developed was years ahead of IBM. We were not allowed to say the name Collins in our house for many years after that......
@@macicoinc9363 it’s ironic that you replied today as my wife and I are staying in our motor home at a campground just outside of Dubuque right now. We went to the HyVee last night and drove around a bit. Dubuque is such a nice place. We love visiting here.
Cray was a whimsical man, too. When asked what tools he used to design supercomputers, he was very specific: a 2B pencil. Anything harder or softer simply didn't leave the most desirable lines. When someone pointed out that Apple used a Cray to design the Macintosh, Cray said he was using a Macintosh to design the next Cray.
You might recall the Mac-referencing t-shirt, "My other computer is a Cray". It had an image of a Cray-1 with a mouse attached to it. I was told you could only get one of those if you visited (or worked at?) the Mac lab. I had a friend on the inside who smuggled one out.
I worked for CDC in the late 70's. But my division wasn't the one with 6600's or other "super" computers. My group worked on the "Cyber 1000" unit which was a message processor. (See the Wikipedia "CDC Cyber" page for info). I worked on the Cyber 1000-2 version which added a bunch of Z80-based micros to service I/O. I wrote Z80 code for the "programmable line controller" card. While I worked there, I met some engineers on the C1000 itself. Here's a little oddity about it: The machine's assembler was written in Fortran. wtf? I never did find out why a person would code a Fortran compiler in raw machine language, and THEN write an assembler in Fortran. They all acted like it was normal and what was my problem anyway?
Interesting. I used to maintain a bunch of Cyber 1000s at a bank's computer centre. The highlight of my career was sitting on a Cray 1. (Field Service Engineer - CDC UK in the 70s and 80s)
@@daveeyes for my undergrad project, I built a z80 based microcomputer with a colleague, wire wrapping the boards. Input was via switches for address and data, plus a paper tape reader. LEDs for output. I thought the z80 was years beyond the 8080. Their timing and IO chips were wonderful, too.
Oof, I respect that Cray ethos: like academic research, releasing state-of-the-art technology and making just enough money to pay your way, but not sacrificing perfection for profit.
Tremendous video...I was a graduate student at the time and was interested in minicomputers, PCs from the likes of DEC (PDP 8 bits), Data General (16 bits...wow!) for interfacing to laboratory instruments...I think DEC published three books at the time about how to program the PDP 8 in assembly language, but you had to master two of them to understand the third...in all permutations of the three.
I was a user of CDC 6600 & 7600 systems from 1975-78. I’d say the one thing it was deficient in was the software side. The native FTN Fortran compiler produced very fast running code, but you couldn’t figure what went wrong if the code crashed. To develop software we used a Fortran compiler from the university of Minnesota called MNF which gave good diagnostics, but wasn’t as fast. When you were happy the code worked you then ported the code to FTN!
I worked at Bell Labs as an MTS 1982-4. I got my Fortran programs working on the IBM and then transferred them to the Cray-1. The Cray made it possible for me to develop matrix spectral factorization methods for solving otherwise impossible-to-solve queuing system problems.
thank you for all the work, and sharing. I had often wondered what happened to the Cray computer which I used before I took an assignment overseas, but which had then disappeared when I came back 15 years later.
It’s amazing where they end up. One of the guys from Microsoft, I think it's Nathan Myhrvold, or if not, Charles Simonyi (he wrote Word)…..bought a Cray 1, and it sits in his living room.
Back in the very early '80s I operated the Cray-1 #1. I was at the UKAEA in the UK and we had ordered a Cray-1s but had a very long wait time for it to be built and delivered, so Cray loaned us the Cray-1 #1 while ours was being built.
You MIGHT have been told it was SN#1, but the first of each went right into the basement of the NSA in Fort Meade……where they measure their computers in acres.
I owned a few Silicon Graphics computers with the Cray name attached to them, Like the SGI O2, Origin 2000 and the Onyx 2. Those were the days of backaches and headaches. Thank you for the video, it was awesome!
Back in the early 80's all of the issues Cray faced with pipelining and parallelism were known. Lots of progress has been made, but there is still a long way to go. This was a trip through my years at college studying computer science back in the early to mid 1980's.
When the Cray 2 came out, there was a joke going around: "Did you hear about the Cray 3?" "It's so fast it can execute an infinite loop in six seconds!"
Which was blindingly fast for the time, at 250 MHz (Cray 2). The Cray 3 was supposed to be 500 MHz, and actually ran at 480 MHz. The Cray 1 ran at 80 MHz; the Cray X-MPs ran at varied clock speeds between 80 MHz and 166 MHz, and Y-MPs at 166 up to 250 MHz. PCs crossed 1000 MHz (1 GHz) in the early 2000s.
9:30 One can add and clarify a few things. In 1958 test firings of nuclear weapons were in full swing in the USA and in the USSR. There were lots of things measured by various instruments in each test. Over the years, the USA performed roughly a thousand nuclear explosions and the USSR was not far behind. Underground test firings continued in Nevada all the way to 1992. Only after that did the computers and the understanding of the nuances of physics become good enough to rely on numerical simulations more or less completely. Of course, computers were always very useful to gain insights into the dynamics of the explosions, even if one could only compute rather crude models. This was done already in the Manhattan Project, where young Feynman was famously in charge of the computing department. This became absolutely crucial during later work on the hydrogen bombs, where the physics was much more complex. That work was done in Princeton, running the programs on IBM computers in New York.
In the late 90s/early 00s, I was a contractor on a large Air Force base, and inside the main building (which used to be a massive aircraft hangar... I worked in a large concrete building INSIDE this hangar), there was one hallway that had the husk of a Cray. It was mainly there as a kind of display piece/bench. It was in the hallway that led to the bowling alley. That was awesome.
Great video. Seymour Cray was a fascinating individual, but you kinda underplayed the full extent of his eccentricities. In his free time, his favorite hobby was...digging. He would dig tunnels for hours on end, and claimed to have had conversations with elves while doing so. I've often thought it may have been his way of dealing with possible PTSD from his combat experience in WWII.
I'd love to hear the story of Silicon Graphics Inc. Such an influential company during its time, and the companies that ended up spun out of its former employees still exist. Looking at you Jensen!
My thoughts as well. Specifically, in the book it's mentioned they used the Cray X-MP to do the gene sequencing. I don't remember if it actually appeared in the film, but you can see the distinctive cylindrical computer in the Jurassic Park videogame Trespasser, inside one of the former laboratory buildings.
In the movie they reproduced the front panel of a Connection Machine CM-5 after Cray Research Inc. declined to pay for the display of a Cray X-MP prop. In the Jurassic Park book, the supercomputer used for genomics is indeed a Cray X-MP.
In the film it's a Connection Machine CM-3 (3?)……. It's in front of the guy who smuggles the embryos out (the actor from Seinfeld), has the huge strip of LEDs down the middle of it, and looks like 8 black boxes (4x4) with the vertical LED strip.
Wonderful video on an intriguing piece of history. Really a microcosm of high tech industry. Thanks for making this! @7:19 "In 1957, enough was enough. Norris left Sperry Rand to found a new company - Control Data Corporation." Replace "Sperry Rand" and "Control Data Corporation" with other hardware and software company names and you have the history of technology in the US. Aren't getting listened to/being allowed to do what you want, start a new company! It's an amazing dynamic! And, of course, Cray Research is just one example. Supercomputer Systems and Cray Computer Corporation are others mentioned in this video, showing how hard it is to keep creating new, successful business from one source. I disagree that CDC went out of business because it couldn't compete in the vector machine world. It went out of business because it lost focus and tried to do too many different things both in terms of technology and in terms of social programs. @26:08 "It was the story of Control Data and the 1604 all over again!" Really, it's the story of installed base--every tech company struggles with it.
You answered some questions I've had for years. I came from USAF analog computing in the late '60s, took CS night classes in the early '80s, and got a job monitoring AT&T-UNIX minicomputers from '85 until I retired in 2007. The only time I ever worked with main frames was in school in the early '80s. Never saw a supercomputer so there were a lot of unanswered questions. For example, some of the old gear we maintained actually still used core memory in the '80s. Storage was on 200MB and 300MB drives the size of a small 'fridge, however every system was for a dedicated app. There were no multi-processor operations. This led to some unusual configs some of which were very similar to the tech Cray used. I find that very fascinating. Thanks for answering so many of my questions.
I interned at a DOE lab back in 2018. They had instructions for a lot of their software to be compiled on Cray computers, but I had no idea how old Cray computers really were.
Great video again!!! Suggestion: the video has many individuals and companies. In the future, if you could provide a mind map or a graphical representation of who left what company and joined whom, it would be easier for the viewers to connect the dots in their heads... but still a great video.
wow!!! thanks for this. definitely took me down a few memory lanes. i remember as a young 3d graphics c++ engineer in the early eighties with all the limitations of current day hardware (cga, ega anyone?) dreaming of having the resources of cray-1 to run my shading algorithms on.
You didn’t mention the Cray Business Systems Division which built a SPARC based multiprocessor system called the CS6400. Sun Microsystems eventually bought the division from SGI and immediately made a huge splash in the large UNIX systems market. This came at a crucial time because PCs had begun to encroach on Sun’s traditional workstation business. The acquisition occurred just as the Internet boom started.
1987 the Sun-4 was the first SPARC system, arrived with a VME bus and packaged the same as Sun's top-of-the-line Motorola-based Sun-3 servers. The SPARCstation 1, rated at 12.5 MIPS, 1.4 MFLOPS, 3 times faster than the Sun-3 it replaced. Smashed all competition in desktops, competitors still waiting on volumes of Motorola's 68040, to upgrade 68030. However, 68040 took so long arriving, forcing workstation oem's to ship old models with promise of free upgrades. Innovative, SPARCstation 1 was the first computer to implement an upgradable SBus interface. 10Base-2 Ethernet Controller, SCSI-SNS Host Adapter, Parallel Port, and 8-Channel Serial Controllers some of the SBus interfaces, all products you could only purchase from one company, Antares Microsystems. It was a good idea then, as initially, it worked without the chance of an anti-trust violation! 1996 Sun paid out for the SPARC business side of Cray Research, with Cray Systems being all that she wants, out of Silicon Graphics. Products, technologies and its customer base from the wildly named, "SuperServer" 6400. Sun's big launch with a killer family of products, their Ultra Enterprise servers, had configurations up to 30 x 64-bit CPU's, 30 x SBus channels. Each an internal Gigaplane I/O of 2.5 Gb/Sec, set standards and established Sun as leader in Unix roles, and data centre servers for Oracle's RDBMS. Sadly, in Sun's launch of the anticipated JavaStation, MicroSPARC Network PC, priced under US $1,000, looking like doing big things, was lost to noise from Intel, Apple and bigger players, with deeper pockets and longer arms, Sun had made a simple mistake, overreached. Lucky for none, same time, DEC wiped any shine left off SPARC. DEC Alpha's performance, saw Intel pick 'em up, rinse 'em n mince 'em, dumped on the supercomputing highway. Their unions birth, an unholy love child named, "IA64" and Itanium, pricey mistakes Intel wants us to forget. Now a period defined as "unstable" existed, database King Oracle, seen perhaps as their finest witness, virtues and miracles of Sun Microsystems. Mac Daddy of the DB land, had both Sun's hardware, optimised for Solaris OS, physical hardware and middleware operating system, enabling Larry Ellison, the licencing king of all dings of the lings, Mr Megabucks making off with Pro-C enabled Oracle financials, proved a cocaine cartel of cash counting, he could sell to infinity, and beyond.. World's leader etc etc.. dwarfing scales.. Sybase (Microsoft SQL / BI chosen license) left few if any competitors, gone open source or to the wall, only its a joke not funny, when broke with no money, the Sun could shine no more, and all that made such a bright SPARC, went dark in a flash, from lack of the cash.
Worked at the Supercomputer Center in San Diego when Cray machines were on the floor. Mr. Cray visited once before he died; I was around him, but never directly met him. Also met Vinton Cerf - inventor of IP, and who many call one of the founders of the Internet - who had an office just down from mine, but he was never there; it was basically a place for him to land if he was in town. So many interesting if not historic things happened at SDSC in the 90s!
Two huge innovations in the Cray-1. First, the semicircular backplane which produced a wire wrapping job for very skinny teenage girls, and minimized propagation times. Second, every ECL logic circuit was duplicated 2x so that the complement was computed at the same time, which meant the entire computer put a constant load on the power supply, which was unregulated! So the computer looked like one huge resistor to the power supply, minimizing power supply noise!
It's very cool to see this. I had a programming job one summer and we had a CDC 1604. I went on into math, but a brilliant and close friend of mine who was a computer scientist always raved about Seymour Cray, his hero...
I was at Purdue in 1980. They still had a 6600 and 2 6200s. I wrote assembly code on the 6600, which was radically different from the assembly code I wrote on IBM 360s. Purdue also had a Cyber 205 which proved to be a bit of a disappointment. Most of the compute was used for large X-ray crystallography calculations, which allowed Purdue to be the first university to image a virus.
I was introduced to the CDC-6600 after several years of PDP-11s and micros. It seemed very strange to me, and it didn't make sense to me until a friend said "Seymour designed the 6600 to run FORTRAN fast," and it was like a lightbulb went off in my head. That's exactly what the A registers enabled (loading them forced a store or load operation with the associated X register). So it was dead simple to walk through an array by incrementing an A register.
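A toy model of that A/X coupling, purely for illustration (written from memory, not checked against the 6600 reference manual — in particular I believe setting A1-A5 triggers a load into the paired X register while setting A6-A7 triggers a store from it):

```python
class CDC6600Toy:
    """Conceptual model of the 6600's address/operand register coupling (not cycle-accurate)."""
    def __init__(self, memory_words=4096):
        self.memory = [0] * memory_words   # 60-bit words in the real machine
        self.A = [0] * 8                   # address registers A0..A7
        self.X = [0] * 8                   # operand registers X0..X7

    def set_A(self, i, address):
        self.A[i] = address
        if 1 <= i <= 5:                    # setting A1..A5 implicitly loads Xi from memory
            self.X[i] = self.memory[address]
        elif i in (6, 7):                  # setting A6..A7 implicitly stores Xi to memory
            self.memory[address] = self.X[i]

# Walking an array: each bump of A1 pulls the next element into X1.
m = CDC6600Toy()
m.memory[100:110] = range(10)
total = 0
for offset in range(10):
    m.set_A(1, 100 + offset)
    total += m.X[1]
print(total)   # 45
```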
Good overview. Minor points... There was substantial conflict between the STAR group and Cray, as they were both competing for development dollars. Additionally, there were some personal issues between Jim Thornton (STAR) and Cray going back to the 6600 days. Complicating the STAR development, government requirements dictated that scalar and byte operations be supported to keep the STAR as a product. The additional instruction control logic and the impacts to register and memory control added physical space between the memory and instruction control areas. I'm ex-CDC and worked on 6600, 7600 and STAR systems hardware and software.
Thanks for allowing us a sneak peek into your amazing space. Can’t wait for the follow-up, and maybe a live action of you adjusting the depth of the pool???
I was working for SGI UK when they bought Cray in the late 90s. The CrayLink technology was pretty good but the blend of what SGI did and what Cray did was miles apart. It was a disaster. That was the beginning of the end for SGI which was a shame because it was probably the most fun job I ever had.
What probably started the rot was their decision to commit to Windows NT, like various other vendors duped by Microsoft, and to commit to Itanium, like various other vendors duped by Intel, although I suppose they made a better go of Itanium than most, hardly to Intel's credit. Hewlett-Packard also ditched credible product ranges, including those that they picked up from Digital via Compaq, in buying into the Wintel dogma. So, I guess it is fitting that all of this stuff is parked up at HPE.
Hello: I used to work for a company which owned an nCube machine which it wasn't using. The nCube is the only computer I have ever seen which looked like everyone's idea of a computer. It was five black columns, with the central column having a pyramid-shaped cooling tower on the top. All the connectors were out of the bottoms of the columns so each side was completely clean and flat. It was amazing.
I arrived at the U of Minnesota when the main computing resource was a CDC 1604. The installation of a 6600 at the Lauderdale facility was underway that year, and I wound up writing software that took advantage of its innovations. I actually have a couple of the "cordwood" logic modules. Who needs more than 64 characters? We used every one of those 60 bits, lots of shifting and masking.
My 6600 and 7600 manuals are out of reach right now, but my memory is that they had pack and unpack instructions meant for floating point which could be repurposed to deal with 6-bit characters. But that was too long ago to remember for sure.
Cray was astounding in his creativity and vision. I really appreciate the work put into making this video. Thank you. On a side note: as mentioned, people upgrading from one Cray to the next would have to rewrite their software, and a computer in itself is a very expensive boat anchor without software to run it. While it's not a flashy topic, I'd love to know a bit about who wrote the compilers and documentation for these very different machines. They would have had to start working on them as soon as Cray finished defining the ISA, so the tools were ready for customers to rewrite their existing software or implement new software. Like the ray tracing software used to make that music video.
Great overview! In the mid-'90's, I was fortunate to have NOAA and NIST as customers for my employer's network cable plant design practice. It's a bit hazy thirty years on, but I was at NOAA (I believe) in Boulder awaiting approval to enter their supercomputer machine room and sat on their original Cray-1 in the waiting area -- reduced from supercomputer to literally a lobby bench. Once I was approved to enter the machine room, the first thing I was greeted with was the infamous Cray-3 Tank (shown in this video), which used a massive amount of liquid freon as a very cold liquid bath for some of the processors or memory interfaces (unsure which). The Tank looked like an aquarium with thousands of very thin wires swaying in the continuous flow of freon from one end of the tank to the other, very much like anemones would look in an aquarium. I was only able to spend that single day in the machine room, but it definitely left an impression on me. I had no idea that Cray himself lived in Boulder, which I'm assuming is why a high-profile customer like NOAA had either a prototype of the Cray-3 or, as stated in this piece, the single Cray-3 that had been sold. Regardless, those were very heady times for computer and software development. From a geeky perspective, it was an honor to spend just a few minutes with a couple of Cray's legendary machines, even just as a casual observer.
The vector explanation at 21 minutes in is inaccurate. In the Cray 1, vector instructions could issue a new result every clock cycle, and the adder pipe is 3 cycles long once the operands have been fetched, so the actual time savings is closer to 80 cycles reduced to 25 or 26 rather than 80 reduced to 4.
From Cray's Wikipedia entry: Another favorite pastime was digging a tunnel under his home; he attributed the secret of his success to "visits by elves" while he worked in the tunnel: "While I'm digging in the tunnel, the elves will often come to me with solutions to my problem."
Interesting, IBM is often credited with starting RISC based computing in the 70's, but the CDC 6600 sounds a lot like a RISC based computer. And here's a fun fact. In the movie "The Last Starfighter" released in 1984 a Cray XMP supercomputer was used to render all of the outer space scenes. I would imagine it was very expensive to get time on one of these back then. If you pay attention to the graphical elements, it seemed that the ships got a lot more detail than the backdrops. They probably didn't have the money or time to fully render everything.
@@grizwoldphantasia5005 I'm not sure that those who coined the RISC term did think that the principles involved were new. If you look at the RISC-I paper by Patterson and Sequin, they observe the trends in contemporary commercial machines like various DEC and IBM products, and then they conclude that simplifying and rationalising the design of new machines is beneficial when implementing those machines with the same technology. And the authors were aware of the CDC-6600 since it merits a mention with regard to the zero register. What many people overlook is that approaches in computing, technology, science and many other disciplines have a habit of recurring. Thus, claims that someone "invented" something have to be treated with skepticism. Often, people just rediscover things that were known in some way before. But I don't think the authors were claiming to invent anything.
I don't know the full history of RISC but it was something the military needed. In switching to fly-by-wire, they needed a logic whose function could be fully verified. They also needed as much processor power as possible. By limiting the instruction set they could produce clean logic with minimal or no exceptions that could be proven flawless. Motorola, IBM and I think another company were involved in producing chips of this family. They were called G1 through G6. Apple switched to the chip because of the low power consumption, but by the G5 chip they could no longer use it in laptops because of the power consumption, so they switched to Intel. The G6 was a real power hog and mostly used by IBM. In the later chips the instruction set grew again and they again resembled the other chips on the market. I own Apple products with the RISC chips and they were reliable, however there wasn't enough demand to justify the redesign to keep them competitive.
@@denawiltsie4412 RISC goes further back, before microprocessors were commonplace. The first "credited" RISC computer was the IBM 801, and in the 80's Stanford had the MIPS project which produced the MIPS series of CPUs. The first ARM chips came out in the 80s, which are also RISC. But it wasn't until the 90s that RISC saw a lot of commercial success with designs from IBM, DEC, MIPS, ARM, and Sun Microsystems.
Thank you for your historical videos! This one has a special place in my heart. As a Saint Paul resident and child of a former Control Data employee, all of this is new to me. I had no idea that Cray supercomputers had a local connection or that they were essentially spun out of CDC.
You should take a day trip to the Chippewa Falls Museum of Industry and Technology. About 2/3 of the exhibit is Cray-related stuff, including a CDC 7600 and all sorts of Cray machines and memorabilia. Also, from about 2010 til 2016 (IIRC) Cray had a large office in downtown SP, and the husk that is now owned by HPE is in the MoA.
In the mid-80s I got a tour of NASA Ames' computer facilities (thanks Eugene Maya!) where they had both a Cray-1 and a Cray-2. Better yet they had a Cray technician that could answer all our questions. Amazing that such different machines came sequentially from the mind of the same man.
As part of a larger Honeywell system, I repaired CDC disk pack drives in the US Army in the mid 80's. The (Army issue) Honeywell system replaced a non-standard IBM 360/370 that used Iron Core memory. Wish I had hung onto one of those boards. I still have a single CDC 14" platter (from a crashed drive pack) that was used to engrave a plaque our unit presented to all departing maintenance techs. While the Honeywell officially did the processing for Army systems (logistics and personnel), our division unit (part of 1st ID) replaced the 360 with an IBM S34, then S36, and finally AS/400 and was used for nearly all interactive front end/user processing with the Honeywell really only acting as DB and comms/batch processing to the US Army central systems. Unsure if the Big Red One still runs on IBM, but I have continued my career with IBM Power Systems and Storage and still support "AS/400" aka IBM i some 37 years since its introduction in 1988.
It's crazy what a boy grew up to achieve. I, as an example, have done so little by comparison. How he did it is not the point; the point is I am pleased such people are around, always have been and, hopefully, always will be.
The maximum length of the wires is because of the speed of light. For example, in a fetch from memory, you have to select the memory, send a request to it, and get the data back, ideally within one op code. The faster you ran the computer, the shorter the leads had to be when dealing with light at 300,000,000 meters per second. For example, at a clock rate of 1 million operations per second, things had to be within 150 meters to go to the memory and get back within one operation. And that is optimistic, since it doesn't allow settling time for the memory operations, or for the fact that signals in a wire actually travel slower than light. Modern computers measure clock rates in billions, so they are thousands of times faster than a million operations per second. That's responsible for both the continuing shortening of the maximum wire length in a Cray computer and the small size and high speed of a cellphone.
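If you want to sanity-check the arithmetic above, here's a quick back-of-the-envelope sketch in Python (my own illustration; it ignores memory settling time and the fact that real signals travel at only a fraction of c, as the comment notes):

```python
# How far can a signal travel out and back within one clock period,
# if it moved at the full speed of light?

C = 300_000_000.0                          # speed of light in m/s

def max_round_trip_distance_m(clock_hz):
    period_s = 1.0 / clock_hz              # one clock period in seconds
    return C * period_s / 2.0              # half the distance: out and back

for clock_hz, label in [(1e6, "1 MHz"), (80e6, "80 MHz (Cray-1)"), (4e9, "4 GHz")]:
    print(f"{label:>16}: {max_round_trip_distance_m(clock_hz):9.4f} m")
```

At 1 MHz you get the 150 meters mentioned above; at the Cray-1's 80 MHz it is already under 2 meters, which is part of why the backplane curled into that C shape.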
American (Ozark Mountains) HVAC man here. "Heat Pump" means different things to different people. Here a heat pump is an air sourced, forced air unit. This is what I have in my 60 year old 1000 square foot ranch style house. My typical total electricity use is about 12,000 kWh per year. A water sourced or geothermal unit is typically forced air with some domestic hot water production and can be run off a horizontal field, vertical field, or open loop wells. There are water to water heat pumps, and air sourced to water units, but they are not terribly common. Most any system can be put in any house if the customer has the money to do it. Some systems will be better suited for some applications. High discharge temps are very hard to achieve with heat pumps. You were correct that when people are used to wood or gas heat with 140 degree (freedom units) air coming out the vents, when they switch to a heat pump with 92 degree air coming out the vents, it feels cold. Of course it feels cold. A 20 degree delta T is really good on a 20 degree day, but the heat pump never shuts off. The man who does the heat load calculations, sizes the system, designs the ductwork, and installs the system is the one who ultimately determines the end user's satisfaction. The cheapest contractor may not be the best choice.
That bench was the power supply ... you could throw a crowbar at it and the bar would evaporate. The system came with two (huge) motor generators to provide 400 Hz power.
The 400 Hz sound from those power supplies under the seats made it difficult to sleep on the seats while you were benchmarking Crays at night. I had to sleep on my right side (to curve my body around the Cray).
I was writing software on serial 1 of the Cray-1 at the Los Alamos Scientific Lab in about 1975-78. I had it all to myself for the first few months. It had a Data General minicomputer as a front-end.
I worked at Intel designing CPUs from 1980 through 2002. In the 80's we watched with interest the "supercomputer wars" which definitely influenced how we approached new processors. We didn't have enough area for large-scale parallelism but started by dedicating an unheard-of portion of the die to a Floating Point Unit. I designed that unit which was used in the 960 series, the 387 and eventually the 486. I was the design manager of the P6 (Pentium Pro/II) where we employed much more parallelism in addition to many techniques that had previously failed (out-of-order processing, speculative execution, register renaming, etc.). The Cray was always an inspiration, and in the late 90's we arrayed our processors and took the computing crown for a while. Interestingly, I knew the guys who started Ncube and even helped them fix their layout plots to avoid some fatal flaws. Those were heady times. We'[re now working on quantum systems, an even headier topic!
Meanwhile, I was whittling drum sticks in my backyard. Really.
Are you saying you're the one responsible for "I am Pentium of Borg; all will be approximated"?
Speaking of nCube and layout plots, I remember visiting Stephen Colley, at nCube headquarters. He was lying on a schematic which covered rhe entire floor of the atrium, trying to debug something. We talked for about an hour after which he excused himself saying he really had to finish writing the operating system!
@@talkingpoetry5281 Yes, those were the days.... We did a room size plot for the P6 but it was just for show, not debugging.... Had some great shots of it in the press.
@@herbpowell343 No, I was the project manager for the follow-on. Our team found the FDIV bug even before it was publicly reported. The Pentium guys downplayed it even though we pointed out it was easily reproducible and corrupted results. They said it happened very infrequently (true) in normal numbers, and put it into a 2nd stepping to fix. Then it was posted for all to see and they looked pretty silly. The Pentium was not a good design. Late, twice as much power as expected, and half the performance. Took them 18 months to get to market. On the P6 we hit our schedules, got better than projected power, higher performance and an easy shrink and got to market in 10 months. We were viewed as "the wild Indians in the north" as Barrett said and nobody thought we'd be successful. We loved every minute of it.
My father worked for CDC at the time Cray was working on the 8600. Cray wasn't keeping HQ up to date as to how things were progressing. They were progressing very slowly. Norris sent my dad, who he knew was a laconic, hardheaded type like Cray, to take a field trip to Chippewa Falls to ask Cray how things were going. That was an inspired decision as Cray spilled the beans, the 8600 was unlikely to ever work due to cooling and reliability issues.
Dad borrowed some of my facsimile paper, a paper coated with toxic but conductive metal powder. If you apply current to the paper you can do a crude analog simulation of heat flow. The flow lines didn't look promising.
My dad brought the bad news back to Norris and he quickly wound down the 8600 project, and that prompted Seymour to leave CDC.
This is what you come to the comments section for.
Oh such history and memories in this video! When I was growing up, my Dad was a computer technician for CDC in Arden Hills, MN, working on the cyber mainframes. I remember him talk about the cool things at the magnetic peripherals division which I recall eventually became Seagate. Unfortunately, his career at CDC ended when they put him at ETA Systems in the mid to late 80s and that spin off failed.
I remember that well; my dad, Daniel C. Harrington, worked for CDC as an elect. engineer. 😉
So your Dad was a snitch?
@@shiraz1736 He was an electrical engineer... with a master's in mathematics
One minor correction to the video - The CRAY systems weren't C-shaped for cooling purposes, but rather to minimize the length of the backplane wires in order to reduce signal propagation delay. Cooling in the CRAY-1 and a few subsequent systems was provided by freon flowing through the vertical aluminum bars between the columns for the circuit boards, which were layered on both sides of heavy copper plates to conduct the heat from the circuit boards to the cold bars. Very elegant!
Yeah. A circular shape is very good for keeping connection distances short. A spherical shape is best, but that's usually impractical.
The 'cold bar and cold plate' cooling was used on the 6600s and 7600s, too. (ex-CDC)
It was not Freon ... it was (I can't spell it) Fluorinert... you could dive in and breathe it but could not come back out. And yes, the "C" was for wiring length, or the distance the electrons traveled ...
No, the cold bars *did* use freon (or at least a freon-like substance). The Fluorinert was only used in the immersion cooled systems like the CRAY-2 and was really expensive, like $500/gallon.
@@joesterling4299 If they could have made it spherical they would have. I believe Seymour was even quoted as saying that.
I worked for Control Data in 1970, fresh out of university. I was smart in school, but at Control Data I felt like I was at the bottom of the ladder. There were a lot of genius people working there. Some were super passionate, working 7 days a week, day and night.
sweeeeet! i'd have given something (i need my right arm lol) to have worked there, HP, Intel, Zilog -Tektronics -
Did they sleep under their desks like Elon.... ? Lol...
@@thommysides4616 Probably, or a cheap motel nearby
I hear you. I just came to respect those pure genius types; many couldn't hold a conversation but could see the workings of the universe in their heads. I found there is a lot of high end work left for us "smart guys" to do.😊
This was what made America great.
I met Mr. Cray when he was installing a Y-MP at the Air Force Weapons Laboratory at Kirtland AFB in Albuquerque. I was a young Lieutenant and they actually used one of my finite element models as a benchmark.
Genius and people skills seem to be a common theme in history.
I worked at the KAFB Weapons Lab from 1986 -1992 over in Bldg 617 on COIL. Seems like Wang was around our area too, but that was long ago and my hard drive has many missing sectors lol.
@@bobbys4327 very familiar with COIL. I worked on space-based laser vibration suppression in the four-trailer "quad" in the back parking lot from 85 to 89. We were there at the same time.
sweeeeeet! that is so awesome. if you can remember, how many 'nodes' did your project contain?
@@bobbys4327 dang! i'd forgotten about Wang. thats pretty cool.
I once met Seymour Cray. I was a Field Engineer for Data General working in the Minneapolis field office. Cray Research was using a Data General Eclipse computer. I don't recall why. They had a hardware problem with the DG system at their Chippewa Falls facility and I repaired it. It was well after 5:00pm when I finished. I walked the halls of their building to find someone to sign my Field Service Report. That's when I met Seymour Cray, and he signed my FSR that night. I won't forget that encounter or his signature. I was 20 years old but still knew who he was and his legend.
IIRC, minicomputers were used as front ends to the supercomputers, because the supercomputers couldn't really support terminals and interactive work, and probably ditto for other peripherals. I had to program the 6600 with punch cards punched on an 026 keypunch.
The early Cray 1 machines used a DG eclipse as a front-end. Cray X-MPs and later ones used an 'expander chassis' that had a removable disk pack and some other hardware to bootstrap them.
Heh. My first IT job was running Data Generals. It was a good way to get IT experience because one was always fixing something. I swear you could get a DG to puke and abend just by staring at them the wrong way. But they were good for their time, I guess.
One of the most amazing things about the Cray supercomputers was that their processing geometry was very unusual. One of my professors, giving me a history lecture about this, devoted a full hour to the competing geometries.
Back when I was at university in the 1980's they had a Cray there. At the time the 486 was the hot stuff for PCs.
Don't GPUs work a lot like vector computers? I believe at least the older ones did.
@@magnemoe1 Actually it's modern GPUs that work a lot like Cray's geometry vectors. Old GPUs were a hardware implementation of fixed function geometry pipelines like OpenGL up to version 2.x and DirectX up to version 10. They were basically "a program made in hardware".
Modern GPUs are SIMD devices. (SIMD = Single Instruction, Multiple Data). Or pretty much what Jon described as vectors. But they're also massively parallel with thousands of cores. So if you want to project a 3D scene onto a 2D plane (the screen) you're running the same tiny little program, with enormous amounts of geometry data as arguments, split up over thousands of cores. In the next frame the angle has changed only slightly, so the tiny little program (called a shader program) only has a few cos()/sin() values updated to reflect that, and the whole thing goes again.
In addition to being able to work on the vertices of a 3D mesh modern GPUs can also work directly on the pixels in the 2D projection to adjust brightness, transparency, etc. This is why running computer graphics becomes more and more demanding as resolution increases. But it basically works the same: a very small program iterates over all the pixels, but to do that effectively long rows are loaded into wide registers in many cores.
(Disclaimer to fellow nerds: This was purposefully kept incredibly superficial to the point of introducing slight inaccuracies)
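If it helps to see the "same tiny program, huge pile of data" idea concretely, here is a toy sketch in Python/NumPy (my own illustration, not real shader code): one small routine rotates a whole array of vertices at once, and the next frame just nudges the angle slightly.

```python
import numpy as np

# 100,000 mesh vertices, each with x, y, z coordinates.
vertices = np.random.rand(100_000, 3)

def rotate_z(points, angle):
    # The same few operations applied to every vertex in the array at once,
    # which is the SIMD / vector style of working.
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return points @ rot.T

frame1 = rotate_z(vertices, 0.010)
frame2 = rotate_z(vertices, 0.011)   # next frame: only the angle changes
print(frame1.shape, frame2.shape)    # (100000, 3) (100000, 3)
```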
Yep vector processing with an 8 instruction stack of 64 bits/instruction. I wrote a program to sort the instructions by priority ..
I was in charge of software maintenance while Seymour was still the leader of Cray Research .. All the wires in the Cray were blue/white and the ladies that wired the system had to know exactly from and to for the connections ... it was mind-blowing how fast they were. Hardware people including engineers HATED software programmers ...
YES as a software engineer - we always argued with the hardware dept - how will we fix the problem in our system - a hardware change or a software patch? Always fixed it with software!! Remember, hardware is easy - it is software that is hard!!!
@@denniss1211 Even in the post-SGI occupation era, software thought hardware existed only to serve them and vice-versa.
Yes, but remember the 1991 conference in Minneapolis? That's when we, supposedly, brought together the hard/soft-ware groups into one big happy family.
@@skwest supposedly
I am old enough to remember when Cray was synonymous with supercomputing. I also remember Silicon Graphics. I remember being in grade school and middle school seeing magazine covers with Cray computers featured on the front.
RIP BYTE magazine ❤
@@SavageBits BYTE reminds me of the movie "Weird Science". Almost all the covers looked like they could be from that movie.
Well now Cray and SGI are HPE and we still sell systems under the Cray name.
I am old enough to remember the joke ad in the back of a late-Seventies issue of BYTE magazine: "CRAY-1 on-a-chip! Plug it into a penlight battery and watch it go!"
I thought then 'I might live to see the day you could do that'. ..
A PC on USB: turn your TV with a USB port into a 2.4 GHz edge-capable SoC.
Thanks for this. It brought back quite a few memories. I wrote my first program in 1966, and spent my working life in IT. Never was involved with supercomputers, but remember long days spent in looking for ways to reduce instruction path lengths in an airline reservations system on a Univac computer. What a joy it was to see 5 instructions knocked out of the path!
My first computer, in the 1980's, had software called 'Framework' by Ashton-Tate (around the time of Kaypro Computers). Its monitor had amber colored font, no mouse and no graphics. I liked it. Framework had some features that even MSFT Windows never figured out, such as the ability to 'copy' and retain text from multiple sources concurrently - to 'paste' or access later. MSFT can't 'copy' more than one text item at a time.
Back in the early days when memory was not only expensive but the machine could only hold so much, you looked for all sorts of creative ways to make your code smaller and faster. Made code maintenance a nightmare, but it was fast!
@@brahmburgers If you type Win + V it will enable the clipboard history which does exactly this.
My cousin worked for United's reservation systems in Chicago; a coworker worked on United's systems in Denver.
I worked there from 1984 to 1992 in sales and sales management. It was the best place I ever worked. Smart, compassionate, high integrity people, great products, and a wonderful work environment. Thanks for this video from a proud ex-Crayon.
Thank you.
I worked for REECo in Las Vegas Nevada in the early 1980s. They had the contract to do dosimetry research on Nevada test site data. I interacted with the atomic energy commission's computers.
I think the tour I had of their facility was in high school. I wish I could remember more details.
Not to be cheeky, but doing sales with so few customers sounds like a really cushy job to me. Never done sales in my life, though.
@@farrapo2295 Cray is still my favourite job of all time, and what a product to work on. There will never be another machine that looks as good as a Cray 1 or X-MP did.
The photos of the Eckert-Mauchly tag and the Cray-1 memory are my photos of my computer parts. I'm glad to see them used.
Thank you for this video. It tells a great part of my life.
It started with the CDC7000 at ETH in Zurich.
We moved on to the Cray-1.
We were so proud using the fastest computer in the world.
My mentors Niklaus Wirth and C.A. Zehnder pushed me into a wonderful life.
Being now an old guy, hacking on an overclocked Intel chip, I happily look back on those outstanding machines.
I programmed a 6400, the 6600's kid brother which sacrificed parallelism for cheapness, for about 3 years, assembler of course, my second computer (first was an IBM 1620). What was most fascinating to me was how clean the instruction set was, how symmetrical and logical. I didn't really appreciate it for many years after working on others with much more dreadful instruction sets. I'm looking at you, x86, the ugliest instruction set I have ever worked with.
Thornton (with Cray?) later wrote a book on the tricks which went into speeding up the 6600's instruction set, adding to my impression of how clean the 6x00 family was.
The short description of the 6600's multiple processors is slightly misleading from being so short. It had one central processor with 64K (?) of 60-bit memory (60 is 5 columns on a punched card) which had zero I/O capability and no system mode; it was a pure user mode compute machine. There were around 10 PPUs (Peripheral Processing Units) with 4K 12-bit memory each, I think, but that's misleading too. There was really only one real PPU, and it switched context to each of the virtual PPUs in turn, I think every microsecond. Those PPUs did all the I/O to tape drives, card readers and punches, and ran all the system instructions which started and stopped the CPU and switched tasks. Each CPU task stored I/O requests in their location 0, and the PPUs monitored that, executing file I/O and transferring data to and from CPU memory.
There was also extended core memory, 10 times as slow but 10 times as much, with special instructions to copy blocks back and forth.
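For anyone who hasn't met a "barrel" design like that before, here is a rough toy model in Python of the scheme described above (my own simplification for illustration, not actual CDC behavior): one real PPU rotates through ten virtual PPU contexts, each of which watches a mailbox standing in for a job's location 0.

```python
from collections import deque

class VirtualPPU:
    def __init__(self, ident):
        self.ident = ident
        self.mailbox = deque()      # stands in for the I/O requests at "location 0"

    def step(self):                 # this context gets one instruction slot
        if self.mailbox:
            request = self.mailbox.popleft()
            print(f"PPU {self.ident} services I/O request: {request}")

ppus = [VirtualPPU(i) for i in range(10)]
ppus[3].mailbox.append("read cards into CPU memory")
ppus[7].mailbox.append("write block to tape drive")

# The single physical PPU rotates through the ten contexts, one slot each.
for slot in range(20):
    ppus[slot % 10].step()
```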
I too was appalled at the ugliness of the x86. Not to forget the flakiness of Windows.
As I hold a $200 1TB microSD card in my hand, it’s almost unimaginable to me how far we’ve come since punch cards in what, 60 years? 😨
I used to be an operator of the later Cyber series (a 72-26). The problem for the CPU + PPU architecture was that the rolling PPU could hang. It did this rather too often. That meant that regardless of the applications continuing in the central CPU, all the output was lost, the I/O being performed by the PPU. The Operating System had to be restarted.
The other thing the Cray-designed CDC computers were limited by was the address bus - many applications spent most cycles swapping overlays rather than manipulating data. I recall the crystallography programs were right on the limit. Those running large dataset Geography applications ... went elsewhere in abject despair.
The x86 may be ugly, but even the slow original IBM 8088 PC could address more RAM. And when using the 8087, the high-precision floating point provided even more bits.
@@John.0z That's all true, but the 8088 was 15 years later.
@@grizwoldphantasia5005 Yes, the PC was a lot later. Nevertheless there were a few things about the IBM PC that seemed to me to change a lot about high-performance computing, as it had been.
With a PC it was cheaper to have a little box in your office running difficult applications 24x7, rather than battling for the limited time available on those old computers. It was not just in the realm of scientific computing.
There was a person in the finance area at the uni who used a PC for end of year ledger reconciliation. Using a PC he managed over a weekend what took weeks of batch work on the mainframe. Unsurprisingly, he was the first person in the university administration to get an AT.
I asked a fellow who used to work with crystallography on the old IBM 360/50 that preceded the Cyber, if he was moving to using PCs. But by then his career had moved into being a manager at the university, and he found no time to pursue his academic background. So I do not know if the crystallographers saw the transition the same way I did.
I would expect that the ability to transition would depend on the availability of compilers, and the quality and capability of the compiled code they produce. But I may be quite wrong in that expectation. IBM seemed to take this need for compilers a lot more to heart than Microsoft did.
Wow...I had NO idea that any of these details on the history of computers existed..!! I Highly recommend the "Computer History Museum" in Mountain View, CA... They not only have the very first 'Asteroid' game (that you can play if you're lucky), with a round vacuum tube display screen that has an incredibly amazing 3-d effect, but also a Cray-1 supercomputer as well as a Cray-2, Cray-3, the Utah Teapot, the 1969 Neiman Marcus Kitchen Computer, original Apple I and MUCH more. The museum is based on a timeline and starts with a Chinese 'Abacus' and proceeds to 'today'..!! You will find your earliest memory of your 'First Computer' and remember everything from that point onward..!! Mine was the Radio Shack 'TRS-80 Model I' that my Uncle had, and my best friend had an 'Odyssey' Pong Game that we played for many hours..!! (1973-ish) A Wonderful Museum..!!
A few comments. 1) Read The Supermen, by Charles J. Murray, to get a complete history of the man and the companies. 2) Cray worked closely with Les Davis, an underappreciated engineer who worked a lot of the packaging magic that made Seymour's designs practical. 3) Cray's custom vector CPUs eventually became unaffordable to develop, while the high volume microprocessor industry was making considerable performance advancements with every new CMOS generation. In the end, the entire supercomputer industry, now generally known as the High Performance Computing (HPC) industry, came around to deploying enormous numbers of microprocessors in highly parallel architectures such as Cray's T3 family, often with custom accelerators and then GPU accelerators.
I just want to remind you that modern scalar CPUs do have vector instructions. They are usually referred to as SIMD (Single Instruction Multiple Data) and some of the popular examples are MMX, SSE, AVX, AltiVec, Neon...
@@ИванСнежков-з9й Yep, and GPUs take the SIMD idea even further.
And NPUs take it even further still.
Meanwhile, I was whittling drum sticks in my backyard. Really.
Yes, Les Davis... what a guy.
Cray Computer was the last iteration.
keep going bro. 2 years later or so, i'm still finding every video of yours FASCINATING. 😄
beautiful. i remember the Cray from my early childhood, the first "supercomputer" back then. :) thank you! 😍
I walk by one of these old round Cray-2s regularly at work. The plaque says they paid $19M for it (in the 80's), and it's something like 1/100 as powerful as an iPhone X.
Are they going to decommission it? I’m looking…
@@waynesworldofsci-tech The thing hasn't been used in decades, but it'll never be for sale unless perhaps a museum wants it.
@@blurglide How about Dave Cutler? He'll get it running and do demos on his channel. He got an IBM mainframe recently. Great project, he got it up and operational.
@@blurglide I don't know Dave personally, but my guess is he'd love to have a Cray. And he'd put it to good use as an educational tool.
> and it's something like 1/3 as powerful as an iPhone X
honestly that's impressive still.
As a man with a 3d printed model of the Cray 2 on his desk, I got excited when I saw this video.
As a gal with a chip and piece of pcb from a Cray 1 on her desk, I need to know where to get the model to 3d print...
That sounds like some exciting desk art you have😂
A computer that was art as well as functional.
@@marcwolf60 And it seated twelve comfortably besides.
The Cray-1 was not just the best super computer of the day, but it was also Art.
Who is Art?
And that's part of the stupid that contributed to the slow death of the company. (Remember, Jobs bankrupted Apple several times like that. Form does not trump function, 'tho Mac purists bought whatever he crapped out.)
It's also the only computer intended to be sat on. Very warm and comfy.
hear hear
Cray was cray-cray in his time. At 11:00 it cost 100K (10M today) for a 0.2 MHz beast. Mindboggling.
The great thing that the GPU companies did (NVidia and ATI, the latter of which later merged into AMD) was similar to what Cray did with scientific computing: they identified a small community with large compute budgets that were willing to write their own software on machines that were massively redesigned every generation.
For GPUs, this was game designers. These folks didn't actually have large hardware budgets of their own, but their customers were collectively willing to spend many billions of dollars a year on graphics hardware. The total number of software titles that had to run on each generation was around a hundred, and NVidia and ATI developed close relationships with the folks doing that work. And crucially, they let the CPU handle most of the complexity in a way that was backward compatible, so that in each generation it was a small part of the software (the kernels) that had to be ported to the next GPU.
Eventually, NVidia took an open-source software project, GPGPU, and turned it into CUDA, which wraps the GPU in a software layer that is somewhat forward- and backward-compatible. CUDA made it possible for a lot more people to write code that partially runs on the GPU, because they didn't need to learn as much and they didn't need as much individual support from NVidia.
So I'd disagree with your summary. In years past, the GPU folks defined a space in which they did quite a bit of from-scratch redesign each generation. However, they've also been dragged down by their own success. Now that they have so many customers doing so many things with GPUs, they have a requirement for forward- and backward-compatibility that restricts some of their innovation. Recent generations have been able to run code for prior generations fairly well, as the architecture has stayed similar enough and just the sizes of various memories and numbers of SMs and cores has increased. To get full performance though, programmers still have to retune their code for each generation.
NVidia in particular learned an important lesson along the way. They made a ton of money for a few years when cryptocurrencies moved from computing the blockchains on CPUs to GPUs. NVidia was unsure how long blockchains would be a cash cow, and was unwilling to throw away graphics performance to get better blockchain performance. Their run came to a halt when crypto folks moved to FPGAs, which didn't last long before the crypto folks moved to full custom silicon.
So when the AI folks moved their code to GPUs, NVidia decided to support them with products tuned just for their workloads.
They forked their product line and introduced new products which are scaled way up for AI workloads. It is unlikely that mainstream graphics processors in the near term will have larger caches or memories, for instance, than the H100, and so code tuned for the H100 is unlikely to run well at all on regular GPUs for perhaps a decade. H100s also have inter-GPU communication channels which completely outstrip the PCIe connections on mainstream GPUs.
As well as pushing up the cache sizes, they pushed hard on packaging. An H100 burns 700 watts, far more than high-end professional GPUs (right now the RTX 4090, which uses a total of 450 watts including the memory chips). The H100 has six stacks of HBM memory on a silicon substrate, a scheme that gives it 3 TB/s of memory bandwidth to 80 GB, compared to 1 TB/s to 24 GB that the 4090 gets from 24 discrete GDDR6 memory chips on its printed circuit board.
NVidia is competing with several companies making from-scratch NPUs, including Google, Amazon, Tesla, and Facebook, as well as a slew of startups. As the GPUs have a significant amount of hardware that is unused in AI (texture caches and MPEG decoders?), these from-scratch designs have some basic advantages. It'll be interesting to see if NVidia is willing to make a product which gives up software compatibility to keep up with all these new entrants. NVidia certainly has the capital to fund multiple chip design teams, but they may be unwilling to partition their best design team.
Thank you! This was very enlightening.
The Cray II, originally designed with four processors, eventually was expanded to 8, then 16 processors. I worked for CRI as a Circuit Design Engineer for 5 years in Chippewa Falls. The Cray 3 was extremely hard to manufacture and that was its Achilles heel. Eventually circuit density of CMOS displaced bipolar ICs. There were few of us at CRI that liked CMOS. The YMP series used bipolar gate arrays from Motorola. Seymour was eccentric: he did not believe in SECDED, nor did he believe in the damaging effects of ESD, which GaAs is extremely sensitive to. He also did not believe in using both edges of the clock, preferring to use only the rising edge to instigate operations. I always found that to be the most eccentric thing as it could have potentially doubled the speed. I was told that he did not trust the signal integrity of switching events based on the falling edge. I was originally hired as a Reliability Engineer at CRI and was appalled when I learned about that and the lack of SECDED and ESD protection on his designs. After I left I found out that some younger engineers had plans to fit a highly integrated version of a 16 CPU Cray II into the size of a shoe box. The company was divested before that could happen.
Yep .. Seymour said "parity was for farmers" not fast computing.
Mostly true, but maybe a bit misleading. Cray 1 S/N 1 didn't have parity, but IIRC, S/N 3 did, as did all subsequent Cray machines, using a (72,64) SECDED code licensed from IBM. The Y-MP and its successors used an (80,64) S4ECD4ED code that could correct any single 4-bit error in the 80-bit code word. The Cray 2 and 3 also had SECDED memory.
The original Cray 2 had 4 CPUs, but there were several 'q' machines that had only one. There was also a single Cray 2.5 sold that had 8 CPUs; it was nominally sold by Cray Computer Corp. to NASA.
Bipolar logic is low-impedance, and doesn't suffer terribly from ESD. The Cray 3, however, used MESFETs (high-impedance devices) rather than bipolar logic, and was more sensitive.
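For readers who haven't met SECDED before: the (72,64) and (80,64) codes mentioned above are scaled-up relatives of the classic Hamming construction. Here is a toy Hamming(7,4) single-error corrector in Python (illustrative only; a real SECDED code adds an overall parity bit so that double-bit errors can at least be detected).

```python
def hamming74_encode(d):
    # d = [d1, d2, d3, d4]; each parity bit covers a different subset of data bits.
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    # Codeword positions 1..7 are: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    # Recompute the three parity checks; together they spell out the
    # 1-based position of a single flipped bit (0 means no error).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1        # flip the bad bit back
    return [c[2], c[4], c[5], c[6]] # recover d1..d4

word = hamming74_encode([1, 0, 1, 1])
word[5] ^= 1                        # simulate a single-bit memory error
assert hamming74_correct(word) == [1, 0, 1, 1]
print("single-bit error corrected")
```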
I would hate working on a system that uses both edges of the clock. It complicates clock distribution and edge skew prediction. Doesn't even help much for meta-stable event resolution. He probably used one standard D-register design throughout with known setup/hold times.
spun off to Cray Computer
Thanks! For a trip down memory lane during my days at Sperry Univac///Lockheed Martin…
"Which in bird culture is considered a dick move" You got me with that one.
Throwing in a Rick & Morty & Birdperson quote. I see what you did there.
@@careycummings9999 If we are being honest he could probably do a good Birdperson impersonation without much practice. He almost nailed it this time which is why it got a good snort and laugh out of me. I was like did he jus... he did....
this was the comment I was looking for.
😃
When I was taking CS in the 1980s, Cray was the ultimate in computing power. Plus they really looked cool. They were used in CGI for The Last Starfighter.
That was what I commented, too. I don't know why Cray computer became well known in that movie!
And was used for the CGI for the Disney “Body Wars” ride.
I believe the first usable CGI on that CRAY was for an air intake on the new Fiero.
It's not at all surprising Warren Buffett declined to invest in a company that took a really smart guy to run it. Considering he has said he prefers companies that could be run by a ham sandwich.
And still profitable.
Well you never know when someone will eventually hire a ham sandwich in a suit and tie to run it into the ground......
@@mrdumbfellow927 Forget whether it was his colleague Charlie Munger or Peter Lynch who said it, but the other line was you prefer a business that could be run (profitably) by an idiot because sooner or later an idiot will run it.
@@mrdumbfellow927 This is what they call in the industry "controlled flight into terrain." Or, as NASA would put it, just piling it straight on in.
Not much energy is needed to herd sheep.
Steve Chen wasn't only let go because of financial issues; he couldn't ever call a design finished and was constantly tinkering to make the design better. The company got fed up with waiting for him to finish the Y-MP design and had to hand it to someone else to get it across the line. That was what the company told employees at the time. I was lucky enough to work for them between '88 and '95.
Shades of Mr. Babbage!
I remember walking through Boeing in about 1978, and reading the following note pinned to the outside of a cubicle: "There comes a time in the life of every project when you have to shoot the engineers and go into production."
@@bea9077w now Boeing fires or overrides its engineers and kills the passengers (and maybe the astronauts). That company won't change until senior executives go to jail for killing hundreds because of deliberate cost-saving decisions they make
Wonderful Video : I lived through this era & looked forward to working on supercomputers in the 1990s.
Some amazing things going on back then !
I was and still am proud to have worked for Cray Research, Inc. VM Station!
Same here.
@@skwest What part of Cray were you in?
@@wb8ert I hired on in '88 and was an on-site analyst at several of our customer sites.
In 1986, I worked for a company (Dynamotion) that provided equipment to Cray Research. Specifically, we provided circuit board drilling machines that could drill holes as small as 0.0039" diameter. That's about the diameter of a human hair. I was the applications engineer for Dynamotion. The drilled holes were then interconnected in a board stack using gold wire. Each one inch by one inch board had 2200 holes interconnecting 16 chips on each multi-layer board. The drill bit was spinning at 120,000 RPM. The spindle shaft was floating on air bearings. Ball bearings produced too much heat and vibration. Back then, air bearing spindles were leading edge. Back in the early 1990's, Chippewa Falls was my "home away from home". That facility is now TTM Technologies.
and now a billion $ machine does chip layout
It's worth noting that Cray (as part of HPE) is still doing pretty well, with 4 out of the top 10 supercomputers in most recent TOP500. The systems are generally using third party processors in giant clusters but connectivity and cooling are still the secret sauces.
I took my first CS course in 1969, and am still working in the field. This is one of the better Computer History videos I have seen in a long time--congratulations on this video.
What a wonderful journey and having worked in Cray/Silicon Graphics, brings me fond memories of all these sites. Awesome work.
The 80s! W/ SUNs, Symbolics, Apollo……I still have SOLARIS running on a box, and my NeXT Cube…….still better OS than Mac, and I worked for Apple for 23 years, and NeXT too.
This was one of the most informative videos I've watched in a long time. The writing and visuals are great. I've become a subscriber, and I look forward to seeing more videos from Asianometry.
In the mid/late 1980s while on a tour of a Bell Labs data center in New Jersey, I was allowed to stand in the center of the 3/4 circle of this Cray: en.wikipedia.org/wiki/Cray_X-MP ....A year later the final 1/4th of the circle was filled with a RAM disk.
Around 1990 Shell Rijswijk employed a Cray for geological modelling, allowing geologists to play around with it interactively. I was technical leader of a small team who did modelling with transputers. This was orders of magnitude cheaper than the Cray. Later Shell donated the transputer computer to the Dutch Hobby Computer Club. It was demonstrated regularly, for example at the "kennisdag" earlier this year.
Now the best thing out of Chippewa Falls, WI - Leinenkugel beer - with which I am drinking a toast to Mr. Cray and his work. Please celebrate responsibly.
It was a tradition for many years that every Cray delivered would also come with a case of Leinie's in the truck. Tradition carried through until at least the mid-2010s (Cray Inc)
That was my first thought.
Awesome walk through history! I started my career out of college at the EPA's NESC working for Martin Marietta where we installed and managed a Cray Y-MP for the EPA. We upgraded to a Cray C90-4 and also installed a Cray MPP (I don't recall the exact model) it was good and exciting times.
Thanks for another great video. Seymour Cray was my inspiration in the '80s.
I used the Cray YMP at Rutherford Appleton Laboratory for my PhD project 1989-1990.
What was your PhD project on, if you don't mind saying, of course? I find that so fascinating. Was it hard to use/learn?
@@TheOnlyDamien My PhD project was partially funded by the Ministry of Defence in the UK. It was related to numerical simulation (LES) of turbulent flows and pattern recognition of turbulent flow structures. At that time, the meteorological offices tried to use the Cray supercomputers to predict atmospheric turbulence and the numerical techniques known as Large Eddy Simulation (LES ) were developed in that era. Sorry very technical stuff! I used the Cray remotely on the ancient BBC Micro computers. The Cray YMP supercomputer was not hard to use, but the programming/debugging with the language Fortran 77 for the simulation source codes was a nightmare. At the time, I also used the massive ICL mainframe computers that filled up a big room in my university for post-data processing and analysis.
@@singhonlo67 That's genuinely so fucking cool thank you so much for sharing, and I appreciate the technical bits it's what we're here for after all. Thanks!
@@TheOnlyDamien funny, I also did my Masters thesis using a YMP (maybe it was XMP), it was extremely painful. You submitted your program during the day and received the results the next morning. .... in my case I mostly got the fun message: .....fatal error abort, sth like that. It took me forever to get it right.
But I definitely enjoyed the comfort of a line editor. Who cares for those stupid punch cards :-)
@@singhonlo67 i STILL listen to old BBC Shipping Forecasts for nostalgic purposes. A few parodies are rather funny.
Eniac wasn't the world's first programmable digital computer. The first was the UK's Colossus Mk1, which preceded Eniac by at least two years, being launched in 1943 to help with wartime codebreaking. The ever-modest Brits felt there was no need to shout about it, and its existence and history were kept secret until well into the 1970s.
Colossus was a special-built machine designed for one job. Eniac was general purpose. It didn't have a stored program but it could compute firing tables or nuclear weapons calculations.
Where does the computer built by John Atanasoff fit in the timeline of computer invention? His computer also preceded ENIAC, but the credit for the first computer had gone to John Mauchly and J. Presper Eckert for ENIAC.
Actually, it was Konrad Zuse's Z3, which was operational in 1941.
I was born in 1956 and my dad was a software designer. Dad worked for several years at Collins Radio in Cedar Rapids, Iowa. Arthur Collins was trying to develop a mainframe computer to challenge IBM. Things went south in the early 1970's and the Collins "C System" computer bankrupted Collins Radio. In 1972 Collins faced failure or selling his company to North American Rockwell. Rockwell rescued Collins from bankruptcy and eliminated anything that wasn't profitable, including the new Collins C-System, which I understand was purchased by Control Data Corporation. I always wondered if some of dad's software made it into Control Data or Cray systems? Dad said at the time that the hardware and software that they had developed was years ahead of IBM. We were not allowed to say the name Collins in our house for many years after that......
That’s awesome, love to hear it when little Iowa pops up in history. Much love from Dubuque ❤.
@@macicoinc9363 it’s ironic that you replied today as my wife and I are staying in our motor home at a campground just outside of Dubuque right now. We went to the HyVee last night and drove around a bit. Dubuque is such a nice place. We love visiting here.
Cray was a whimsical man, too. When asked what tools he used to design supercomputers, he was very specific: a 2B pencil. Anything harder or softer simply didn't leave the most desirable lines. When someone pointed out that Apple used a Cray to design the Macintosh, Cray said he was using a Macintosh to design the next Cray.
True. When I was at Apple during the 80s, we bought 2 Crays. Detailed above.
You might recall the Mac-referencing t-shirt, "My other computer is a Cray". It had an image of a Cray-1 with a mouse attached to it. I was told you could only get one of those if you visited (or worked at?) the Mac lab.
I had a friend on the inside who smuggled one out.
Cray's naming scheme doomed the company. While people would buy a Cray-X or a Cray-Y computer, no one would buy a Cray-Z computer...
Just like how Preparations A thru G were a complete failure, until PREPARATION H!!
Well, Apple sold an iPhone 4S, so I don't know about that 😄
Well Acorn had a tough choice with explaining ARM abbreviation with "RISC Machine" to the board representatives...
Sounds Cray Cray…
I worked for CDC in the late 70's. But my division wasn't the one with 6600's or other "super" computers. My group worked on the "Cyber 1000" unit which was a message processor. (See the Wikipedia "CDC Cyber" page for info). I worked on the Cyber 1000-2 version which added a bunch of Z80-based micros to service I/O. I wrote Z80 code for the "programmable line controller" card.
While I worked there, I met some engineers on the C1000 itself. Here's a little oddity about it: The machine's assembler was written in Fortran. wtf? I never did find out why a person would code a Fortran compiler in raw machine language, and THEN write an assembler in Fortran. They all acted like it was normal and what was my problem anyway?
The Z80 was a tough little chip. And it had a beautiful instruction set and Assembly language!
Interesting. I used to maintain a bunch of Cyber 1000s at a bank's computer centre. The highlight of my career was sitting on a Cray 1. (Field Service Engineer - CDC UK in the 70s and 80s)
@@daveeyes For my undergrad project, I built a Z80-based microcomputer with a colleague, wire wrapping the boards. Input was via switches for address and data, plus a paper tape reader. LEDs for output.
I thought the z80 was years beyond the 8080. Their timing and IO chips were wonderful, too.
I saw a Cray for the first time in my life at the EDF Research Center in Clamart.
I was fascinated by these machines. Thanks for your video.
Oof, I respect that Cray ethos: like academic research, releasing state-of-the-art technology and making just enough money to pay your way, but not sacrificing perfection for profit.
Except history shows economics matter.
Then you don't know how to survive
Tremendous video...I was a graduate student at the time and was interested in minicomputers, PCs from the likes of DEC (PDP 8 bits), Data General (16 bits...wow!) for interfacing to laboratory instruments...I think DEC published three books at the time about how to program the PDP 8 in assembly language, but you had to master two of them to understand the third...in all permutations of the three.
I was a user of CDC 6600 & 7600 systems from 1975-78. I’d say the one thing it was deficient in was the software side. The native FTN Fortran compiler produced very fast running code, but you couldn’t figure what went wrong if the code crashed. To develop software we used a Fortran compiler from the university of Minnesota called MNF which gave good diagnostics, but wasn’t as fast. When you were happy the code worked you then ported the code to FTN!
I worked at Bell Labs as an MTS 1982-4. I got my Fortran programs working on the IBM and then transferred them to the Cray-1. The Cray made it possible for me to develop matrix spectral factorization methods for solving otherwise impossible-to-solve queuing system problems.
I did Fortran on a 6400 and a Cyber 70/74 and I eventually learned how to read the core dump.
Concurrently, I was whittling drum sticks in my backyard.
thank you for all the work, and sharing. I had often wondered what happened to the Cray computer which I used before I took an assignment overseas, but which had then disappeared when I came back 15 years later.
It's amazing where they end up. One of the guys from Microsoft, I think it's Nathan Myhrvold, or if not, Charles Simonyi (he wrote Word), bought a Cray 1, and it sits in his living room.
I was the sysadmin for a couple Cray Y-MPs at the NSA ('89 to '91). One of them is in the NSA museum and the other at the Smithsonian.
Back in the very early '80s I operated the Cray-1 #1. I was at the UKAEA in the UK and we had ordered a Cray-1s but had a very long wait time for it to be built and delivered, so Cray loaned us the Cray-1 #1 while ours was being built.
You MIGHT have been told it was SN#1, but the first of each went right into the basement of the NSA in Fort Meade……where they measure their computers in acres.
I owned a few Silicon Graphics computers with the Cray name attached to them, Like the SGI O2, Origin 2000 and the Onyx 2. Those were the days of backaches and headaches. Thank you for the video, it was awesome!
SGI Files Patent Infringement Suit Against NVIDIA, April 17, 1998.
SGI graphics team moves to Nvidia, August 10, 1999.
Didn’t take long...
I had the opportunity to use a couple of those SGI machines back in the day. Good hardware, and IRIX was a decent Unix implementation.
Ditto, having worked in the entertainment industry most of my life, though moonlighting, as I'm a computer scientist (Ph.D.) by education.
Back in the early 80's all of the issues Cray faced with pipelining and parallelism were known. Lots of progress has been made, there is still a long way to go. This was a trip through my years at college studying computer science back in the early to mid 1980's.
When the Cray 2 came out, there was a joke going around: "Did you hear about the Cray 3?"
"It's so fast it can execute an infinite loop in six seconds!"
And it was so fast you need to say HALT twice 😂
Which was blindingly fast for the time, at 250 MHz (Cray 2). The Cray 3 was supposed to be 500 MHz, and actually ran at 480 MHz. The Cray 1 ran at 80 MHz; the Cray X-MPs ran at varied clock speeds between 80 MHz and 166 MHz, and Y-MPs at 166 up to 250 MHz. PCs crossed 1000 MHz (1 GHz) in the early 2000s.
9:30 One can add and clarify a few things.
In 1958 test firings of nuclear weapons were in full swing in the USA and in the USSR. There were lots of things measured by various instruments in each test.
Over the years, the USA performed roughly a thousand nuclear explosions and the USSR was not far behind. Underground test firings continued in Nevada all the way to 1992. Only after that did the computers and the understanding of the nuances of the physics become good enough to rely on numerical simulations more or less completely.
Of course, computers were always very useful to gain insights into the dynamics of the explosions, even if one could only compute rather crude models. This was done already in the Manhattan project, where young Feynman was famously in charge of the computing department. This became absolutely crucial during later work on the hydrogen bombs, where the physics was much more complex. That work was done in Princeton, running the programs on IBM computers in New York.
In the late 90s/early 00s, I was a contractor on a large Air Force base, and inside the main building (which used to be a massive aircraft hangar... I worked in a large concrete building INSIDE this hangar), there was one hallway that had the husk of a Cray. It was mainly there as a kind of display piece/bench. It was in the hallway that led to the bowling alley. That was awesome.
Hah, Warren Buffett, what a fool!
lowry?
Offutt.
Great video. Seymour Cray was a fascinating individual, but you kinda underplayed the full extent of his eccentricities. In his free time, his favorite hobby was...digging. He would dig tunnels for hours on end, and claimed to have had conversations with elves while doing so. I've often thought it may have been his way of dealing with possible PTSD from his combat experience in WWII.
Great video! I thoroughly recommend reading The Supermen, a book about Cray and his band of merry men. It's a fascinating story.
I'd love to hear the story of Silicon Graphics Inc. Such an influential company during its time, and the companies that ended up spun out of its former employees still exist. Looking at you Jensen!
This always reminds me of reading Jurassic Park where they used a Cray to sequence dino DNA
My thoughts as well. Specifically, in the book it's mentioned they used the Cray X-MP to do the gene sequencing. I don't remember if it actually appeared in the film, but you can see the distinctive cylindrical computer in the Jurassic Park videogame Trespasser inside one of the former laboratory buildings.
SGI's influence at the time?
In the movie they reproduced the front panel of a Connection Machine CM-5 after Cray Research Inc. declined to pay for the display of a Cray X-MP prop. In the Jurassic Park book, the supercomputer used for genomics is indeed a Cray X-MP.
In the film it's a Connection Machine CM-3 (3?)... it's in front of the guy who smuggles the embryos out (the actor from Seinfeld); it has the huge strip of LEDs down the middle of it and looks like 8 black boxes (4x4), with the vertical LED strip.
Wonderful video on an intriguing piece of history. Really a microcosm of high tech industry. Thanks for making this!
@7:19 "In 1957, enough was enough. Norris left Sperry Rand to found a new company - Control Data Corporation." Replace "Sperry Rand" and "Control Data Corporation" with other hardware and software company names and you have the history of technology in the US. Aren't getting listened to/being allowed to do what you want, start a new company! It's an amazing dynamic! And, of course, Cray Research is just one example. Supercomputer Systems and Cray Computer Corporation are others mentioned in this video, showing how hard it is to keep creating new, successful business from one source.
I disagree that CDC went out of business because it couldn't compete in the vector machine world. It went out of business because it lost focus and tried to do too many different things both in terms of technology and in terms of social programs.
@26:08 "It was the story of Control Data and the 1604 all over again!" Really, it's the story of installed base--every tech company struggles with it.
Fun Fact: the CDSI Arden Hills building now sits on the Campus of Boston Scientific in Arden Hills, Minnesota.
Arhops!
You answered some questions I've had for years. I came from USAF analog computing in the late '60s, took CS night classes in the early '80s, and got a job monitoring AT&T-UNIX minicomputers from '85 until I retired in 2007.
The only time I ever worked with main frames was in school in the early '80s. Never saw a supercomputer so there were a lot of unanswered questions.
For example, some of the old gear we maintained actually still used core memory in the '80s. Storage was on 200MB and 300MB drives the size of a small 'fridge, however every system was for a dedicated app. There were no multi-processor operations.
This led to some unusual configs some of which were very similar to the tech Cray used. I find that very fascinating. Thanks for answering so many of my questions.
I interned at a DOE lab back in 2018. They had instructions for a lot of their software to be compiled on Cray computers, but I had no idea how old Cray computers really were.
Great video again!!!
Suggestion: The video has many individuals and companies. In the future, if you could provide a mind map or a graphical representation of who left what company and joined whom, it would be easier for the viewers to connect the dots in their heads... but still a great video.
Love the Computer History Museum - spent a whole day there and basically got kicked out at close. Got the 1401 visitors printout as well
wow!!! thanks for this. definitely took me down a few memory lanes. i remember as a young 3d graphics c++ engineer in the early eighties with all the limitations of current day hardware (cga, ega anyone?) dreaming of having the resources of cray-1 to run my shading algorithms on.
You didn’t mention the Cray Business Systems Division which built a SPARC based multiprocessor system called the CS6400. Sun Microsystems eventually bought the division from SGI and immediately made a huge splash in the large UNIX systems market. This came at a crucial time because PCs had begun to encroach on Sun’s traditional workstation business. The acquisition occurred just as the Internet boom started.
In 1987 the Sun-4 was the first SPARC system; it arrived with a VME bus and was packaged the same as Sun's top-of-the-line Motorola-based Sun-3 servers. The SPARCstation 1, rated at 12.5 MIPS and 1.4 MFLOPS, was 3 times faster than the Sun-3 it replaced and smashed all competition in desktops, while competitors were still waiting on volumes of Motorola's 68040 to upgrade from the 68030. The 68040 took so long arriving that it forced workstation OEMs to ship old models with the promise of free upgrades. The SPARCstation 1 was also innovative as the first computer to implement an upgradable SBus interface. A 10Base-2 Ethernet controller, SCSI-SNS host adapter, parallel port, and 8-channel serial controllers were some of the SBus interfaces, all products you could only purchase from one company, Antares Microsystems. It was a good idea then, as initially it worked without the chance of an anti-trust violation!
In 1996 Sun paid Silicon Graphics for the SPARC business side of Cray Research, taking the products, technologies and customer base of the wildly named "SuperServer" 6400. Sun's big launch that followed was a killer family of products, the Ultra Enterprise servers, with configurations up to 30 x 64-bit CPUs and 30 x SBus channels, each with internal Gigaplane I/O of 2.5 Gb/sec. They set standards and established Sun as a leader in Unix roles and data centre servers for Oracle's RDBMS. Sadly, Sun's launch of the anticipated JavaStation, a MicroSPARC network PC priced under US $1,000 that looked like doing big things, was lost to the noise from Intel, Apple and bigger players with deeper pockets and longer arms. Sun had made a simple mistake: it overreached.
Lucky for none, same time, DEC wiped any shine left off SPARC. DEC Alpha's performance, saw Intel pick 'em up, rinse 'em n mince 'em, dumped on the supercomputing highway. Their unions birth, an unholy love child named, "IA64" and Itanium, pricey mistakes Intel wants us to forget. Now a period defined as "unstable" existed, database King Oracle, seen perhaps as their finest witness, virtues and miracles of Sun Microsystems. Mac Daddy of the DB land, had both Sun's hardware, optimised for Solaris OS, physical hardware and middleware operating system, enabling Larry Ellison, the licencing king of all dings of the lings, Mr Megabucks making off with Pro-C enabled Oracle financials, proved a cocaine cartel of cash counting, he could sell to infinity, and beyond.. World's leader etc etc.. dwarfing scales.. Sybase (Microsoft SQL / BI chosen license) left few if any competitors, gone open source or to the wall, only its a joke not funny, when broke with no money, the Sun could shine no more, and all that made such a bright SPARC, went dark in a flash, from lack of the cash.
Yep, they built the Sun E10K and the 15K/25K systems, based in San Diego. I ran the project that beta tested their systems.
Worked at the Supercomputer Center in San Diego when Cray machines were on the floor. Mr. Cray visited once before he died; I was around him but never directly met him. I also met Vinton Cerf, co-inventor of IP, whom many call one of the founders of the Internet. He had an office just down from mine, but he was never there; it was basically a place for him to land if he was in town. So many interesting, if not historic, things happened at SDSC in the 90s!
Two huge innovations in the Cray-1. First, the semicircular backplane, which minimized propagation times and produced a wire-wrapping job for very skinny teenage girls. Second, every ECL logic circuit was duplicated so that the complement was computed at the same time, which meant the entire computer put a constant load on the power supply, which was unregulated! So the computer looked like one huge resistor to the power supply, minimizing power supply noise!
It isn't wire-wrap.
It's very cool to see this. I had a programming job one summer and we had a CDC 1604. I went on into math, but a brilliant and close friend of mine who was a computer scientist always raved about Seymour Cray, his hero...
An article on the NSA website about Cray and his work for the agency suggests that 100 of the 6600 models were sold.
My university had a CDC 6600 back in the 70s. The first OS for it was called Chippewa (the Chippewa Operating System). The assembly language was a real beast.
I was at Purdue in 1980. They still had a 6600 and two 6200s. I wrote assembly code on the 6600, which was radically different from the assembly code I wrote on IBM 360s. Purdue also had a Cyber 205, which proved to be a bit of a disappointment. Most of the compute was used for large X-ray crystallography calculations, which allowed Purdue to be the first university to image a virus.
I was introduced to the CDC 6600 after several years of PDP-11s and micros. It seemed very strange to me, and it didn't make sense until a friend said, "Seymour designed the 6600 to run FORTRAN fast," and it was like a lightbulb went off in my head. That's exactly what the A registers enabled: loading an address into one of them forced a load or store through the associated X register, so it was dead simple to walk through an array just by incrementing an A register.
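To make that concrete, here is a minimal C sketch of the convention described above. It is only a toy model under stated assumptions: the helper names, register counts and memory size are made up for illustration, and real 6600 code was written in assembly, not C. Setting an address register implicitly moves data between memory and the matching X register, so walking an array really is just incrementing an A register.

#include <stdio.h>
#include <stdint.h>

/* Toy model (not real 6600 behavior in detail): setting address registers
 * A1..A5 implicitly loads the matching X register from memory, while
 * setting A6..A7 implicitly stores the matching X register to memory. */

#define MEM_WORDS 64

static uint64_t memory[MEM_WORDS];   /* stand-in for 60-bit words */
static uint64_t A[8], X[8];

static void set_A_load(int i, uint64_t addr) {   /* A1..A5: implicit load */
    A[i] = addr;
    X[i] = memory[addr];
}

static void set_A_store(int i, uint64_t addr) {  /* A6..A7: implicit store */
    A[i] = addr;
    memory[addr] = X[i];
}

int main(void) {
    for (int k = 0; k < 8; k++) memory[k] = k * 10;

    /* Walking an array: each bump of A1 brings the next element into X1. */
    uint64_t sum = 0;
    set_A_load(1, 0);
    for (int k = 0; k < 8; k++) {
        sum += X[1];
        set_A_load(1, A[1] + 1);     /* A1 = A1 + 1 -> next element in X1 */
    }

    /* Storing goes the other way: put the value in X6, then set A6. */
    X[6] = sum;
    set_A_store(6, 8);

    printf("sum = %llu, memory[8] = %llu\n",
           (unsigned long long)sum, (unsigned long long)memory[8]);
    return 0;
}

The point is that the address arithmetic and the memory traffic are the same operation, which is exactly the access pattern a FORTRAN array loop generates.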
Brilliant story. Brings back so many memories from when I started as a systems programmer on an IBM 158 in the early 80's.
Good overview. Minor points... There was substantial conflict between the STAR group and Cray, as they were both competing for development dollars. Additionally, there were some personal issues between Jim Thornton (STAR) and Cray going back to the 6600 days. Complicating the STAR development, government requirements dictated that scalar and byte operations be supported to keep the STAR viable as a product. The additional instruction control logic and its impact on register and memory control added physical distance between the memory and instruction control areas. I'm ex-CDC and worked on 6600, 7600 and STAR systems hardware and software.
Thanks for allowing us a sneak peek into your amazing space. Can’t wait for the follow-up and maybe a live action of you adjusting the depth of the pool???
I was working for SGI UK when they bought Cray in the late 90s. The CrayLink technology was pretty good, but what SGI did and what Cray did were miles apart. It was a disaster. That was the beginning of the end for SGI, which was a shame because it was probably the most fun job I ever had.
What probably started the rot was their decision to commit to Windows NT, like various other vendors duped by Microsoft, and to commit to Itanium, like various other vendors duped by Intel, although I suppose they made a better go of Itanium than most, hardly to Intel's credit.
Hewlett-Packard also ditched credible product ranges, including those that they picked up from Digital via Compaq, in buying into the Wintel dogma. So, I guess it is fitting that all of this stuff is parked up at HPE.
Agreed! But I believe that happened in the mid-90s.
Hello: I used to work for a company which owned an nCube machine which it wasn't using. The nCube is the only computer I have ever seen which looked like everyone's idea of a computer. It was five black columns, with the central column having a pyramid-shaped cooling tower on top. All the connectors came out of the bottoms of the columns, so each side was completely clean and flat. It was amazing.
I arrived at the U of Minnesota when the main computing resource was a CDC 1604. The installation of a 6600 at the Lauderdale facility was underway that year, and I wound up writing software that took advantage of its innovations. I actually have a couple of the "cordwood" logic modules. Who needs more than 64 characters? We used every one of those 60 bits, with lots of shifting and masking.
My 6600 and 7600 manuals are out of reach right now, but my memory is that they had pack and unpack instructions meant for floating point which could be repurposed to deal with 6-bit characters. But that was too long ago to remember for sure.
I had forgotten about the massive word size on the CDCs. 32 bits seemed puny in comparison. Now everybody is on some multiple of 8.
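For anyone curious what that shifting and masking looked like in practice, here is a small C sketch that packs ten 6-bit character codes into a 60-bit word held in the low bits of a uint64_t and pulls them back out. The codes and layout are illustrative assumptions only, not real CDC display code or the pack/unpack instructions mentioned above.

#include <stdio.h>
#include <stdint.h>

#define CHARS_PER_WORD 10     /* ten 6-bit characters per 60-bit word */
#define CHAR_BITS      6
#define CHAR_MASK      0x3Fu  /* low six bits */

/* Pack ten 6-bit codes, leftmost character in the most significant bits. */
static uint64_t pack_word(const unsigned codes[CHARS_PER_WORD]) {
    uint64_t word = 0;
    for (int i = 0; i < CHARS_PER_WORD; i++)
        word = (word << CHAR_BITS) | (codes[i] & CHAR_MASK);
    return word;
}

/* Extract character i (0 = leftmost) by shifting it down and masking. */
static unsigned unpack_char(uint64_t word, int i) {
    int shift = (CHARS_PER_WORD - 1 - i) * CHAR_BITS;
    return (unsigned)((word >> shift) & CHAR_MASK);
}

int main(void) {
    unsigned codes[CHARS_PER_WORD] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    uint64_t w = pack_word(codes);

    printf("packed word (octal): %020llo\n", (unsigned long long)w);
    for (int i = 0; i < CHARS_PER_WORD; i++)
        printf("char %d = %u\n", i, unpack_char(w, i));
    return 0;
}

Each 6-bit character lines up with exactly two octal digits, which is one reason octal dumps were so readable on these machines.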
I was the lead engineer on the Install teams for all four of the HP/HPE HPCs at MSI. Itasca, Mesabi, Mangi and Agate.
Cray was astounding in his creativity and vision. I really appreciate the work put into making this video. Thank you. On a side note: as mentioned, people upgrading from one Cray to the next would have to rewrite their software, and a computer in itself is a very expensive boat anchor without software to run it. While it's not a flashy topic, I'd love to know a bit about who wrote the compilers and documentation for these very different machines. They would have had to start working as soon as Cray finished defining the ISA, so that the tools were ready for customers to port their existing software or implement new software, like the ray-tracing software used to make that music video.
This is an excellent episode, brings back some fond memories ❤
Great overview! In the mid-'90's, I was fortunate to have NOAA and NIST as customers for my employer's network cable plant design practice. It's a bit hazy thirty years on, but I was at NOAA (I believe) in Boulder awaiting approval to enter their supercomputer machine room and sat on their original Cray-1 in the waiting area -- reduced from supercomputer to literally a lobby bench. Once I was approved to enter the machine room, the first thing I was greeted with was the infamous Cray-3 Tank (shown in this video), which used a massive amount of liquid freon as a very cold liquid bath for some of the processors or memory interfaces (unsure which). The Tank looked like an aquarium with thousands of very thin wires swaying in the continuous flow of freon from one end of the tank to the other, very much like anemones would look in an aquarium. I was only able to spend that single day in the machine room, but it definitely left an impression on me.
I had no idea that Cray himself lived in Boulder, which I'm assuming is why a high-profile customer like NOAA had either a prototype of the Cray-3 or, as stated in this piece, the single Cray-3 that had been sold. Regardless, those were very heady times for computer and software development. From a geeky perspective, it was an honor to spend just a few minutes with a couple of Cray's legendary machines, even just as a casual observer.
The vector explanation at 21 minutes in is inaccurate. In the Cray-1, vector instructions could issue a new result every clock cycle, and the adder pipe is 3 cycles long once the operands have been fetched, so the actual time saving is closer to 80 cycles reduced to 25 or 26, rather than 80 reduced to 4.
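To make that arithmetic concrete, here is a tiny C sketch of the timing argument. The per-element scalar cost and the vector startup overhead are assumed values chosen only to show the shape of the calculation, not exact Cray-1 figures.

#include <stdio.h>

int main(void) {
    int n = 20;               /* number of elements (assumed) */
    int scalar_per_elem = 4;  /* cycles per add done one at a time (assumed) */
    int startup = 3;          /* vector issue/setup overhead (assumed) */
    int pipe_depth = 3;       /* adder pipeline length, per the comment above */

    /* Scalar: no overlap, every add pays its full cost. */
    int scalar_cycles = n * scalar_per_elem;

    /* Vector: after startup plus the pipe fill, one result per clock. */
    int vector_cycles = startup + pipe_depth + (n - 1);

    printf("scalar ~%d cycles, vector ~%d cycles, speedup ~%.1fx\n",
           scalar_cycles, vector_cycles,
           (double)scalar_cycles / vector_cycles);
    return 0;
}

With these assumptions the numbers land at roughly 80 versus 25 cycles, a worthwhile speedup but nothing like 80 down to 4.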
From Cray's Wikipedia entry:
Another favorite pastime was digging a tunnel under his home; he attributed the secret of his success to "visits by elves" while he worked in the tunnel: "While I'm digging in the tunnel, the elves will often come to me with solutions to my problem."
Interesting: IBM is often credited with starting RISC-based computing in the 70's, but the CDC 6600 sounds a lot like a RISC-based computer. And here's a fun fact: in the movie "The Last Starfighter", released in 1984, a Cray X-MP supercomputer was used to render all of the outer space scenes. I would imagine it was very expensive to get time on one of these back then. If you pay attention to the graphical elements, it seems the ships got a lot more detail than the backdrops. They probably didn't have the money or time to fully render everything.
When RISC was announced, I was astounded they thought they were new. The CDC 6600 was absolutely a RISC computer before that term was invented.
@@grizwoldphantasia5005 I'm not sure that those who coined the RISC term did think that the principles involved were new. If you look at the RISC-I paper by Patterson and Sequin, they observe the trends in contemporary commercial machines like various DEC and IBM products, and then they conclude that simplifying and rationalising the design of new machines is beneficial when implementing those machines with the same technology. And the authors were aware of the CDC-6600 since it merits a mention with regard to the zero register.
What many people overlook is that approaches in computing, technology, science and many other disciplines have a habit of recurring. Thus, claims that someone "invented" something have to be treated with skepticism. Often, people just rediscover things that were known in some way before. But I don't think the authors were claiming to invent anything.
@@paul_boddie Could be. Most of my annoyance was with the breathless reports of this amazing new paradigm.
I don't know the full history of RISC, but it was something the military needed. In switching to fly-by-wire, they needed logic whose function could be fully verified, and they also needed as much processor power as possible. By limiting the instruction set they could produce clean logic, with minimal or no exceptions, that could be proven flawless. Motorola, IBM and I think another company were involved in producing chips of this family; they were called G1 through G6. Apple switched to the chip because of its low power consumption, but by the G5 they could no longer use it in laptops because of the power consumption, so they switched to Intel. The G6 was a real power hog and mostly used by IBM. In the later chips the instruction set grew again, and they again resembled the other chips on the market. I own Apple products with the RISC chips and they were reliable; however, there wasn't enough demand to justify the redesign needed to keep them competitive.
@@denawiltsie4412 RISC goes back further, before microprocessors were commonplace. The first "credited" RISC computer was the IBM 801, and in the '80s Stanford had the MIPS project, which produced the MIPS series of CPUs, while Berkeley's RISC project fed into SPARC. The first ARM chips, also RISC, came out in the '80s. But it wasn't until the '90s that RISC saw a lot of commercial success, with designs from IBM, DEC, MIPS, ARM, and Sun Microsystems.
Thank you for your historical videos!
This one has a special place in my heart. As a Saint Paul resident and child of a former Control Data employee, all of this is new to me. I had no idea that Cray supercomputers had a local connection or that they were essentially spun out of CDC.
You should take a day trip to the Chippewa Falls Museum of Industry and Technology. About 2/3 of the exhibit is Cray-related stuff including a CDC 7600 and all sorts of Cray machines and memorabilia. Also, from about 2010 til 2016 (IIRC) Cray had a large office in downtown SP and the husk that is now owned by HPE is now in MoA.
@@ajlitt001 There are many former Cray facilities around the Twin Cities.
@@dtj7995 Yep. I interviewed when they were in Mendota Heights, visited the SP office often, and left shortly after they moved to MoA.
In the mid-80s I got a tour of NASA Ames' computer facilities (thanks Eugene Maya!) where they had both a Cray-1 and a Cray-2. Better yet they had a Cray technician that could answer all our questions. Amazing that such different machines came sequentially from the mind of the same man.
Fascinating history, with the right amount of technical details. Thanks, mate!
Mad respect for Cray. He was an artist of computing.
An excellent story, with some lovely extra ‘snippets’ … the mention of Amdahl for instance. Very nicely done. Thank you
To this day, the Cray-1 is the most badass-looking computer ever made
Ehh Thinking Machines CM-5 or SGI Origin 2000 128proc?
As part of a larger Honeywell system, I repaired CDC disk pack drives in the US Army in the mid 80's. The (Army issue) Honeywell system replaced a non-standard IBM 360/370 that used Iron Core memory. Wish I had hung onto one of those boards. I still have a single CDC 14" platter (from a crashed drive pack) that was used to engrave a plaque our unit presented to all departing maintenance techs.
While the Honeywell officially did the processing for Army systems (logistics and personnel), our division unit (part of 1st ID) replaced the 360 with an IBM S/34, then S/36, and finally an AS/400, which was used for nearly all interactive front-end/user processing, with the Honeywell really only acting as the DB and comms/batch processor to the US Army central systems. Unsure if the Big Red One still runs on IBM, but I have continued my career with IBM Power Systems and Storage and still support the "AS/400", aka IBM i, some 37 years since its introduction in 1988.
Excellent! Would love to see your take on Gene Amdahl.
It's crazy that a boy grew up to achieve all he did.
I, as an example, have done so little by comparison. How he did it is not the point; the point is that I am pleased such people are around, always have been and, hopefully, always will be.
The maximum length of the wires is because of the speed of light. For example, in a fetch from memory, you have to select the memory, send a request to it, and get the data back, ideally within one operation. The faster you ran the computer, the shorter the leads had to be, when dealing with light at 300,000,000 meters per second. For example, at a clock rate of 1 million operations per second, things had to be within 150 meters to go to the memory and get back within one operation. And that is optimistic, since it doesn't allow settling time for the memory operations, or for the fact that electrons in a wire actually move slower than light. Modern computers measure clock rates in billions, so they are thousands of times faster than a million operations per second. That's responsible for both the continuing shortening of the maximum wire length in a Cray computer and the small size and high speed of a cellphone.
Close, but you do get a cigar. Electrons run slower than the speed of light... and on the outside of the conductor's surface.
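The round-trip budget in the comment above is easy to check. Here is a short C sketch that computes the maximum one-way distance per clock as c / (2 * f), ignoring settling time and the fact (noted in the reply) that signals in real wires travel well below c, so actual limits are tighter.

#include <stdio.h>

int main(void) {
    const double c = 3.0e8;   /* speed of light, m/s */
    const double clocks_hz[] = {1.0e6, 80.0e6, 1.0e9, 4.0e9};  /* 1 MHz ... 4 GHz */
    const int n = sizeof(clocks_hz) / sizeof(clocks_hz[0]);

    for (int i = 0; i < n; i++) {
        /* The signal must reach memory and return within one cycle. */
        double max_one_way_m = c / (2.0 * clocks_hz[i]);
        printf("%12.0f Hz -> at most %8.4f m one way\n",
               clocks_hz[i], max_one_way_m);
    }
    return 0;
}

At 1 MHz this gives the 150 meters mentioned above; at the Cray-1's 80 MHz clock it is under 2 meters, and at gigahertz rates it shrinks to centimeters, which is why both the Cray backplane wires and modern chips keep getting shorter and smaller.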
American (Ozark Mountains) HVAC man here. "Heat pump" means different things to different people. Here a heat pump is an air-source, forced-air unit. This is what I have in my 60-year-old, 1000-square-foot ranch-style house. My typical total electricity use is about 12,000 kWh per year.
A water sourced or geothermal unit is typically forced air with some domestic hot water production and can be run off a horizontal field, vertical field, or open loop wells.
There are water to water heat pumps, and air sourced to water units, but they are not terribly common.
Most any system can be put in any house if the customer has the money to do it. Some systems will be better suited for some applications.
High discharge temps are very hard to achieve with heat pumps. You were correct: people used to wood or gas heat, with 140 degree (freedom units) air coming out of the vents, find that when they switch to a heat pump with 92 degree air coming out of the vents, it feels cold. Of course it feels cold. A 20 degree delta T is really good on a 20 degree day, but the heat pump never shuts off.
The man who does the heat load calculations, sizes the system, designs the ductwork, and installs the system is the one who ultimately determines the end users satisfaction. The cheapest contractor may not be the best choice.
I remember when the Cray-1 showed up at LLNL. Everybody had to go sit on the Cray.
That bench was the power supply... you could throw a crowbar at it and the bar would evaporate. The system came with two (huge) motor-generators to provide 400 Hz power.
The 400 Hz sound from those power supplies under the seats made it difficult to sleep on the seats while you were benchmarking Crays at night. I had to sleep on my right side (to curve my body around the Cray).
I was writing software on serial 1 of the Cray-1 at the Los Alamos Scientific Lab in about 1975-78. I had it all to myself for the first few months. It had a Data General minicomputer as a front-end.
Yep. As I recall, it was delivered sans OS, and you guys had to write everything from scratch. True?