The real money shot is at 27:25 where they talk about companies "putting advertising on computer disks and networks".
Hey, that's Windows 10! How did they know that was going to happen? They must have a time machine, cheaters!!!!!
Anyone ever see all these different companies on this show and instantly realize “yeah, they don’t exist anymore” and kind of feel bad about how they got washed away? And then sometimes you see companies still around today and you’re like “wow, amazing they survived, and also, they were so much more professional about the quality of their products and their personnel back then.”
I love this show. It reminds me of being a field tech back at the beginning of computers. People respected techs, and we made good money. Now we are just viewed as nothing.
What would a computer's first thought be, I wonder? Hmmm.
19:55 Fast forward 39 years later and we love systems that are opaque and we have no idea why they give the results they do 😆
Only if you use Windows or Mac. At least Linux and BSD give some verbosity.
That tone used to play every night, right after the national anthem, when television would go off the air.
Forty years later, and we're still using advanced 4th generation computers... the 5th generation is yet to come.
That's very interesting. Is it a case of this show talking about something ahead of its time that hasn't come to fruition yet, or were these systems built back then and the technology kept from the mainstream?
@@FixitAde This is a good summary : en.wikipedia.org/wiki/Fifth_Generation_Computer_Systems
As you can see, we still use object-oriented programming languages, though I'll admit even my Ryzen 5 PC is definitely an asynchronously parallel supercomputer by 1980s standards. As per other vids, even by 1991 there were primitive "AI" NNs around, but their flowering is only a feature of the last 5 years.
@5:30 The intuitions about general-purpose computing were those of Ada Lovelace, working with Charles Babbage, in the 1840s.
My first job in the computer field was a tape librarian. To make it worse it was third shift. But it was a good start to a career as a software engineer.
04:35 Is he asleep? He doesn't move at all when they are introducing him :)
+MegaBojan1993 Probably in hibernation mode.
RottenRroses Hahaha your comment made me laugh hard :)
But I suppose you're right :)
mega- hahahahahahahahahahahahahahaaaa !
@@RottenRroses- hahahahahahahahahahahahahahaaaa !
@@RottenRroses I started laughing very hard in the middle of the CS college department with my headphones on. All my colleagues seemed intrigued, I presume. Hibernation mode, he says, oh boy!
Gary seemed distant and far away at the beginning in this episode. Like he had a lot on his mind.
Gary definitely did, his thoughts were far & distant in the future, much more than anyone else
it's really sad that the guy is not around anymore...
Maybe he was starting to count the billions he won't have after not taking a meeting with IBM like Bill Gates told him to.
@@slymarbo1183 he's alive
@@jr2904 no he is not. he died in a fight in a bar a long time ago
For anyone having problems sleeping: just put this episode on, lie down, and watch it, and within 5 minutes you will fall into a nice deep sleep LOL
Yes I tried that last night.
I almost did sleep right now
I struggled to stay awake through Ed's explanations lol
Not working. I want more :/
I call that #bedtimelistening #sleepaidlecturer hehe
gary did not look happy about running out of time and being cut short... lol check 21:33
Feigenbaum was a genius at the time. He described what we now know as explainable AI, a major challenge for all neural-network-based AI. He would both marvel at and be horrified by the current state of technology. His ability to foresee four decades ahead is uncanny.
He is still alive. He is 87.
He is basically talking about Machine Learning, about the program learning from people or some dataset and inferring what to do (rather than just having preset instructions of what to do). That man has a hard time explaining stuff in simple words but he really was ahead of his time...
This has nothing to do with machine learning, which is what's used today with neural networks. Neural-network approaches were largely abandoned in the 70s and early-to-mid 80s because of the widely propagated myth that such networks could never do anything worthwhile (even in the distant future). Instead, academia and industry put their eggs in the basket of symbolic AI, which is essentially glorified boolean logic statements tied to a knowledge set that must be hand-coded to meet specific outcomes. While not totally useless (expert systems are still widely used for customer service and technical troubleshooting tools), this is absolutely not the same technology used in GPT and other LLMs of today.
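For anyone wondering what "glorified boolean logic statements tied to a knowledge set" looks like in practice, here's a minimal forward-chaining rule-engine sketch in Python. The rules and facts are invented for illustration and aren't taken from any real expert system:

```python
# Minimal forward-chaining "expert system" sketch (illustrative only).
# Every rule and every fact is hand-coded; the engine just keeps firing
# rules until no new conclusions can be derived.

rules = [
    # (set of required facts, conclusion to add)
    ({"engine_cranks", "no_spark"}, "ignition_fault"),
    ({"ignition_fault", "old_plugs"}, "replace_spark_plugs"),
    ({"engine_cranks", "no_fuel_smell"}, "fuel_delivery_fault"),
]

def infer(initial_facts):
    """Apply the hand-coded rules until the set of known facts stops growing."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"engine_cranks", "no_spark", "old_plugs"}))
# Derives 'ignition_fault' and then 'replace_spark_plugs'.
# Anything outside the hand-coded knowledge set never triggers a rule,
# which is exactly why these systems cannot generalize the way LLMs do.
```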
The stated goals of Japan's 5th Generation Project were never really achieved as the required storage and computing technology didn't exist and the problem of creating effective heuristics for generalized application was greatly underestimated.
wrong. It was because of US gov't impeding them financially.
AI has been around the corner for 40 plus years.
Here I am, watching the video with automatic subtitles generated by voice recognition.
And no technology from 1984 was used for that :-) Machine learning has totally beaten the "knowledge-based symbolic approach". But I still can't figure out how they replaced the most complex expert systems. Did they just feed the rules to neural networks?
This has to be the oddest _Computer Chronicles_ episode I've ever seen. Kept waiting for them to use that DEC VT100 terminal for a demo or something. Why was it just sitting there?
The other guy didn't even talk 😂
Maybe it was there for another episode. They would shoot a couple of episodes on Saturdays, so they may have had it set up from an earlier taping or for the next one and didn't want to move it, or they just wanted something cool on the set for this episode.
@@paulgambill good point
@@oubrioko But lord knows when this even was... It says '83, but it plays like one of their first episodes ever, based on the absolutely deadpan expressions, especially from the usually animated Stewart Cheifet. Also, the lighting is way dimmer than usual for this period of the show. Interesting episode for sure. I always thought Mainframes, Minis, and Micros was the first episode to be shot and aired, but I may be wrong.
The oddities don't stop there. Ulysses 21 pointed out that Gary in the intro seems to be somewhere else mentally, rocking the chair back and forth, twiddling his thumbs, and seemingly staring off at nothing in particular. There are also some weird quirks in the editing that are inconsistent with every other episode I've seen: the text in the outro is green instead of blue, which is the only time I can find that in the entire series, and in the intro the zoom on the Difference Engine goes much further in than in any other episode except Computer Security from 1984. Some of the music cues are missing, such as the Micro Focus ad at the beginning, which seems to come and go without a clear pattern throughout 1983, as well as the Byte by Byte theme during the initial introduction, and while the latter is present in the outro, it's cued significantly later than in any other episode from this period.
If I had to take a guess I would say this is either the actual first episode, or the first episode to go through post production. It's possible the deep zoom on the Difference Engine was an early version of the intro that was revised immediately afterwards, and for Computer Security from 1984 somebody accidentally grabbed the wrong tape during the editing process. Definitely an oddity for sure.
I remember when the US computer industry was all panicked about the Japanese 5th Gen Project, and that Prolog was going to be the language of the future.
And CPU power and RAM. IBM Watson wouldn't do too well on an *8MHz 80286* with *16MB* RAM.
There was a lot of hype about how powerful computers had become, and that was true from a relative point of view, but from an absolute point of view we know they were all crap systems, too slow to do any of this. Even after 20 years of leaping innovation from the time of this show, processors of that era were relegated to being simple microcontrollers.
prolog is that old?
@@oldtwinsna8347 "they were all crap systems"
They were more than powerful enough to do a hell of a lot more than you can imagine.
For example, I programmed for a *2 MIPS* mainframe with *6MB RAM* that easily supported *75 interactive users* with sub-second response times while also running batch jobs.
@@RonJohn63 16MB of RAM in 1984 was a pipe dream. Even 1MB was considered an ultra-high-end premium system, if you could find one for the home. 16MB did not become standard for a home computer until around 1993-1994.
I so pity prof. Feigenbaum's students...
The professor is 48 in this episode!
So in 1984 the professor was only 48 years old (if the birth year on Wikipedia is to be believed). He comes across as being in his late 60s.
48? man i thought he was older ?
The 80s and 90s were an interesting time for computers, the rate at which computer power increased was at its highest.
Quick, cut to Kildall, he's shaking his head.
I love Computer Chronicles, but this is the most obscure episode I've watched. They're talking about something that never came to pass, being watched by modern viewers who don't have any context of what was supposed to happen. I guess it's about AI, but it's been done much better.
I’m not an expert (haha), but most of this jibber jabber about expert systems is the backbone of the AI used today.
No, think Machine Learning: ChatGPT, MidJourney, or Stable Diffusion. And there is hardware made specially to run AI tasks, like NVIDIA T4 cards, and computers built specifically for it. Technology has simply taken so, so long to catch up. Honestly, hearing a man in 1983 talk about inference was a bit mind-blowing...
@@mikeluna2026 The inference engine in an expert system is completely different than an inference engine in an LLM. Nothing in common other than the name.
Boy oh boy...advertising on disks? Yeah, that would have gotten the war on spam and advertising going about 20 years ahead of time had they done that.
He knew the gist of what would become ChatGPT. As of me writing this, AI is snowballing; we call them Large Language Models.
This is the ONE episode that actually would've benefitted from Stewart's constant interruptions.
Steve F lmao! So true, Stewart was the worst constantly interrupting guests.
I listen to this episode when I have trouble sleeping.
RIP expert systems
It's amazing that AI in the 80s bet the farm on expert system technology. ES trees are utterly worthless in an open system. That is to say, they're only useful when comprehensively pre-coded with all possible permutations of a closed system, like the decision tree in a rudimentary chess program that has every play state and every optimal move hard-coded. Expert systems are great at playing a game of twenty questions, when the user is limited to a known dictionary of answers, but they can't handle new information. It seems strange that in the 80s, when storage was prohibitively expensive and CPUs were getting exponentially faster every year, they bet on a storage-reliant system rather than a processing-reliant one.
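To make the "twenty questions" point concrete, here's a toy hard-coded decision tree in Python. The troubleshooting questions are made up for illustration; the point is that it works only while every answer is in its pre-coded dictionary and simply gives up on anything new:

```python
# Toy decision tree in the expert-system style: every branch and every
# acceptable answer has to be enumerated in advance.

TREE = {
    "question": "Does the device power on? (yes/no)",
    "yes": {
        "question": "Is there video output? (yes/no)",
        "yes": "Fault is likely software; reinstall the OS.",
        "no": "Check the video cable and graphics card.",
    },
    "no": "Check the power supply and wall outlet.",
}

def diagnose(node, scripted_answers):
    """Walk the tree using a scripted list of answers."""
    while isinstance(node, dict):
        print(node["question"])
        answer = scripted_answers.pop(0)
        if answer not in node:  # new information: the tree has no branch for it
            return "Sorry, I have no rule for that answer."
        node = node[answer]
    return node

print(diagnose(TREE, ["yes", "no"]))   # -> "Check the video cable and graphics card."
print(diagnose(TREE, ["sort of"]))     # -> falls over the moment the input is novel
```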
Crazy how they thought they could make an AI work with their little 286s. AIs today usually need terabytes of data on a cloud server to even barely function, and even then it's just more instructions, not an actual thought process like a human brain. No emotions, no motive, no understanding of what we want or need, just digital or electrical signals being pushed based on what's coded to run on the hardware. There is no real "AI".
Yea, I believe automated help systems use what would be classically called an expert system. Problem is they only have limited ability to formulate a response.
Various technology companies are developing artificial intelligence to manufacture more advanced, finely detailed products; it is also used to dig deeper into more detailed information.
1:07 ermmm.. where's the intro music??
where's that iconic "byte by byte" theme???
I love 🇺🇸
Poor Gary looked bored senseless and zoned out watching Ed ramble on in professor mode, with a monotone voice and facial expression to match.
It took a few decades but current AI is finally getting to his kind of computing.
17:04 Episode I: The Phantom Sticking Problem
Edward Feigenbaum is top notch! I like his style ... a lot!
Wow, that was the most interesting episode ever. I’m glad the hosts gave the professor time to talk and explain. An awesome window back in time to the early days of “weak AI” and expert systems. Best of all, I actually learned something new.
Wow, that was only the most boring thing ever. I'm glad the hosts eventually hit their stride and got comfortable in their roles. It's amazing how much of a difference just smiling can make.
Akumacornflakes I know, this Stanford professor could have put an entire zoo to sleep.
+David Maiolo I yawned 3 times while that professor was giving his boring speech :)
these are my favourite years until about 1987 when 386 machines take over. back when computers were used by professionals.
Oh my God!!!! Did he just say Bushnell and pizza restaurants with robots??? That's the birth of Chuck E. Cheese’s! Some of my best birthdays were spent there!!!!😉
Chuck E. Cheese's Pizza Time Theater had existed for 7 years by this point. Showbiz Pizza, the competing animatronic pizza restaurant and arcade company that bought CECPTT after bankruptcy, was already 4 years old when this aired.
"In 1981, Pizza Time Theatre went public; however, the evolving video game industry and the video game crash of 1983 resulted in significant losses for Pizza Time Theatre, which lost $15 million in 1983, and by 1984, Bushnell's debts were insurmountable, resulting in the filing of Chapter 11 bankruptcy for Pizza Time Theatre Inc. Showbiz then bought the foundering company, recreating itself as Showbiz Pizza Time Inc."
i remember my age 80 that time when this show begin
What do you think it smelled like back then
Probably the same? lol
What system is the "Drilling Advisor" running on? Some Unix variant? Screen seems high res for the era..
It appears it would've been some kind of TTL-based computer, i.e. a minicomputer with large circuit boards that basically functioned as a CPU. That's because in the year the episode was shot, there were no microprocessors powerful enough to do what is visually shown here. We're looking at something that would've cost about $30-50k in the money of that period.
So were these earlier AI apps, like the chemical analysis one, the direct forerunners of our ChatGPT and Google Bard?
No, not at all. These are expert systems, which are basically elaborate decision trees with only specific known inputs and outputs. They are still in use, though, for things like customer service chatbots that have only fixed functions; they do not use neural networks, which is the technology behind GPT and other LLMs of today.
This was a fabulous series on PBS. Fond memories of it.
There are subtle reasons why GOFAI (5th generation systems) failed. A good treatise on the subject is What Computers Still Can't Do by Hubert Dreyfus. Basically, the argument boils down to the fact that GOFAI assumes that expert-like reasoning comes from symbolic manipulation, i.e. following some form of explicit, albeit very complex, rules. In reality, expert-like reasoning is more intuitive and resembles pattern matching. Dreyfus calls this "Heideggerian." Dreyfus was proven right, of course, since what neural networks do is not rule-based symbolic reasoning but rather sub-symbolic statistical pattern matching. What Feigenbaum gets right here is that "Knowledge is Power." In fact, even though symbolic inference (alone) cannot produce human-level intelligence, even sub-symbolic, statistical reasoning is pretty useless until large quantities of data are fed to it, in fact much larger than what humans require. So there is still some component missing from our inference toolbox, but it is safe to assume that data is paramount.
I would like to make one clarification: by sub-symbolic I do not mean "magic"; in fact, statistical pattern matching is 0's and 1's. What I mean is that there are no high-level features (symbols) that the brain/NN manipulates in a mathematical way; rather, the brain/NN perceives the sensorium as a whole and then generates a fast response directly from the sensory data, without constructing high-level features explicitly. A caveat, though: one can still argue about how much the intermediate layers of a NN are high-level features :)
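As a rough illustration of the symbolic vs. sub-symbolic split described above, here's a tiny nearest-neighbour sketch in Python: no IF/THEN rules at all, just stored examples and a distance measure, and it only becomes useful once enough data has been fed in. The feature vectors and labels are invented for illustration:

```python
import math

# Sub-symbolic "reasoning" in miniature: no explicit rules, only labelled
# examples and a similarity measure. More data generally means better answers.

examples = [
    ((0.9, 0.1), "cat"),  # made-up 2-D feature vectors
    ((0.8, 0.2), "cat"),
    ((0.1, 0.9), "dog"),
    ((0.2, 0.8), "dog"),
]

def classify(x):
    """Return the label of the closest stored example (1-nearest-neighbour)."""
    nearest = min(examples, key=lambda ex: math.dist(x, ex[0]))
    return nearest[1]

print(classify((0.85, 0.15)))  # -> 'cat', without a single hand-written rule
```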
that tone killed my inner child
That was Carl Sagan, right?
Fascinating, and a little sad, to hear this man talk about AI, inference, etc, that are only now becoming more mainstream with Language and image generation (like ChatGPT, MidJourney or StableDiffusion). I wonder what he would think if he saw the current technology.
He's alive.
It's a completely different concept: the generative AI of today versus what is being discussed in this episode with expert systems. Neural networks, which are what's used today, were well known, but the computing environment was nowhere near the level it needed to be at that time. Rules-based systems, rather than machine-learning floating-point algorithms, were far less compute-intensive but still relied on vast knowledge bases. Some of the words are the same, such as "inference engine," but the underlying technology is so vastly different that shared vocabulary is the only thing they have in common.
Looking at Gary's face at 20:22, I think even he was ready to slap the professor.
That professor is the best cure for insomnia :)
26:40 Chuck E. Cheese's!
It was good the Japanese came to compete with the U.S.A. Nothing lights a fire under someone's ass like someone saying and proving I can do better.
They (the Japanese) did *NOT* come near to proving that they were better at Fifth Gen hardware and software. It was -- in fact -- a bust.
Are you sure this is from 1984? Seems to be one of the earliest episodes.
Considering the program started in '82, I think it's one of the earliest episodes, dude! lol
Felt bad for the gentleman from SRI. He only said a few words.
Those were the days of 512/640 KB of RAM, although some 286 systems had 1 MB. 2/4 MB became common with 386 systems when they launched in 1987. Until around 1993, Windows 3.1/3.11 needed as little as 2 MB to function properly. 4 MB was still a lot of memory until the mid 90s, when 8/16 MB became commonplace because of the growth of Windows NT. 286 systems could address more RAM, but only in protected mode, which was not the reality of the operating systems of that time. We didn't have the technology necessary to implement AI. We were way too far from that, although we already had the knowledge.
What was it with eye-wear in the 80's? Seems like everyone had giant glass panes adorning their faces.
It was one style. People could buy other styles, many like the styles we have today. You just had to steer clear of the fads that optical stores pushed on you for more $$$.
Also, the strength of your prescription dictates the thickness of your lenses, which can force you to compromise on your frames.
That was it, they were glass. Almost all lenses today are polycarbonate or high-index, made on the fly by highly computerized machines.
They were on track for today’s ongoing discussion and search for Artificial General Intelligence. Their minds would be blown if we could go back and show them ChatGPT and a quantum computer.
Most are still alive today my guy
@@scaryjam8 Doesn’t negate my point about “going back” and showing them modern AI and quantum computers. My dude.
Just show them an off-the-shelf PC with two GPUs running a Llama model in chat mode. All on bare metal, in a desktop box under $2k that doesn't look much different from what housed a turbo XT at the time.
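For anyone curious what that demo could actually look like, here's a rough sketch using the llama-cpp-python bindings, assuming a quantized GGUF model file is already on disk; the model path and parameter values are placeholders, not a recommendation:

```python
# Rough sketch of local "chat" on consumer hardware via llama-cpp-python.
# The model path and settings below are placeholders; any GGUF-format
# Llama-family model on local disk would do.

from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-7b.Q4_K_M.gguf",  # hypothetical local file
    n_gpu_layers=-1,   # offload all layers to the GPU(s) if VRAM allows
    n_ctx=2048,        # context window size
)

output = llm(
    "Q: Explain what an expert system is, in one sentence. A:",
    max_tokens=64,
    stop=["Q:"],
)
print(output["choices"][0]["text"].strip())
```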
Imagine the amount of flabbergastation if these people would be told about ChatGPT. 19:04
Hello from 2020, u were wrong ! :-)
Watching this 40 years later, it is funny how much they were bullshitting us.
I love how professor Edward Feigenbaum sounds. It's easy to follow him.
Was this originally recorded with betacam?
+Nik Neuy It is more likely to be produced on U-Matic. What you see here is a digitized VHS.
The Japanese Fifth Generation project was a great AI project.
I was born in 1984.
So you're 40 this year, and commented when you were 36. Congrats on the big 4-0
That expert system looks very slow. A decision tree on paper would probably beat it.
Are we currently having the seventh generation of computers? Hence the core i7 :)
No. Are you a troll or is this question serious?
I guess 6th gen would be quantum computers.
lol they all looked and sounded so bored 😴
So we already had chatgpt in 1984! 😀
Why is a country that gets a technology first in a special position? Wouldn't others just study the product and copy its technology for their own use and production?
Reverse engineering is what you're thinking of. That takes time, and the country that gets the technology first can continue to improve on and use it. Think of a horse race: if a jockey gets a yard's lead before the race begins, he's more than likely to win, right?
@@clarknapper3933 Well, Japan did just that with the transistor and ICs in the decades prior, and they ended up dominating the electronics market, so being first isn't a guarantee of staying ahead.
I don't like the way they frame the article as "there's a threat from the Japanese." It's very telling of where their interests lie. From most consumers' point of view, it shouldn't matter where it comes from. America sometimes has a sickening us-vs-them capitalistic mentality.
Man, Pizza Time Theater? I remember when it was Showbiz Pizza.
Stuart's hair system 🤔
nice, corporate AI was more intelligent in 1984 than consumer grade crap is today
Nice acknowledgement of Ada Lovelace in this episode! Considering the toxic chauvinism clouding today's technology sector, it's interesting to think that the world's first programmer was a woman.
Bull. The first programmer was Babbage. Or do you think that he worked for years creating a programmable computer and didn't bother to write a program for it?
Yooman