Very interesting to see James Wang on here considering his awful insight on Terra/Luna. James did an interview with Do Kwon on the Ark FYI podcast. Terra/Luna was an absolute disaster, and it was easy to see it wouldn't work, yet Ark and James peddled Do Kwon's product to their audience. I'd take everything James has to say with a grain of salt. James appears to lack a first-principles mindset. Not to be trusted.
That's fair. However, keep in mind James cannot know what a founder will do. Do Kwon basically broadcast Terra/Luna's weak points to the world and someone blew it up; not exactly the same, but much like FTX: overconfident founders telegraphing the weaknesses in their project/company. That said, James's original background is in AI.
Product pump from product marketing. Sip slow from the Kool-Aid, kiddos.
"James cannot know what a founder will do"
"...and someone blew it up"
I think you don't understand the entire series of events behind Terra/Luna. Ark's job is to understand the system they are peddling to their investors. Ark either did not understand Terra/Luna and how it was destined to fail (incompetence) or Ark did understand that Terra/Luna was destined to fail but that they were early and Ark followers were pumping their bags (because it was a Ponzi). One of those is true. It is also possible that part of the team understood and the other part did not.
Acting like Ark and James did nothing wrong is silly. Cathie even admitted that she didn't understand Terra/Luna but allowed the video to stay posted, and they disabled comments eventually.
Mind-blowing, deeply insightful, and even a bit scary. I am a neurosurgeon who does not write code (well, a few lines of BASIC perhaps), and I could follow this intently. Thanks Frank and James!
The only thing that stood out to me from your post was SCARY.😳😱🙀
What’s the ticker of the company?
Excellent episode, really informative. Looking forward to seeing Cerebras' out-of-the-box approach fulfill its potential and surpass the GPU.
Excellent discussion gentlemen. Best top level description of what is happening with LLMs.
Excellent discussion. How does Tesla Dojo deal with the 1-billion limit?
The main problem with a super-intelligent AI is that to stop it from doing what it wants you have to outsmart it, which is impossible by definition. No matter what safety measures you have, the AI will find the flaws and get out instantly. It won't just hack any internet-connected system; it will find ways to get through air gaps too. It's been done before, Stuxnet being one famous example. And it will hack human brains too. There are many well-known ways to do it, and it's being done through social media all the time. So it will be easy for an AI to simply turn any human it talks to into its agent in the physical world. One reliable way would be starting a cult, which should be trivial for something with actual god-like powers.
But this hypothetical airgapped system almost seems like a moot point anyway, given that GPT-4 literally has an API that people can build software with. Unless they airgap the next version, I don't think we'll keep them away from the internet.
Super fascinating. Key takeaway: the 2020 OpenAI scaling-laws paper doesn't contradict the 2022 DeepMind Chinchilla paper; they're complementary. Right?
Wait, where is the training data for larger models going to come from? James mentioned 'closed loop' - does he mean that an LLM will simply make things up, like, there is this country called Lilitania, with capital Mapulu, and use that as a part of the training data set? Not clear...
There's nothing better than hearing people talk about their jobs when they love what they do.
How does one make a dataset to train a language model? Like, is it an array of strings? What is the actual format of that training data?
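Not from the video, but a minimal sketch of the usual shape of that data: you start with plain strings, a tokenizer maps them to integer token IDs, and the IDs get sliced into fixed-length next-token-prediction examples. The character-level tokenizer here is a toy stand-in for the subword (BPE-style) tokenizers real pipelines use, and the example strings are made up.

```python
# Toy sketch: how LLM training data is typically shaped.
# Real pipelines use subword tokenizers over huge corpora, but the overall
# shape -- strings in, integer token IDs out -- is the same idea.

# 1. Raw data: conceptually just a pile of text documents (strings).
documents = [
    "The chip has 850,000 cores.",
    "Large models need large datasets.",
]

# 2. Build a toy character-level vocabulary (stand-in for a real subword vocab).
corpus = "\n".join(documents)
vocab = sorted(set(corpus))
stoi = {ch: i for i, ch in enumerate(vocab)}

# 3. Tokenize: the whole corpus becomes one long list of integer token IDs.
token_ids = [stoi[ch] for ch in corpus]

# 4. Slice into fixed-length (input, target) pairs for next-token prediction:
#    the target is simply the input shifted one position to the right.
block_size = 8
examples = [
    (token_ids[i : i + block_size], token_ids[i + 1 : i + block_size + 1])
    for i in range(len(token_ids) - block_size)
]

print(f"{len(vocab)} vocab symbols, {len(token_ids)} tokens, {len(examples)} examples")
```

So yes, at the top level it really is "an array of strings"; the model itself only ever sees the integer ID sequences.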
Wonder who are the customers of Cerebras?
Great speaker. Very clear.
Awesome! Thank you so much!!
The thing that's not airgapped is our financial system; that's really the only bit that scares me :)
Crypto on an airgapped hardware wallet solves this. You can store billions airgapped. Even sign transactions without connecting your private keys to the Internet.
Do any ARK funds own a stake in Cerebras?
13:28 Didn't Sam Altman say the days of large models are over?
He says that LLMs are the first product in an entirely different category, in that they weren't designed to fulfill a clear set of specifications. While I do agree LLMs are huge, he somehow forgets the internet!
this guy explains clearly
Thank you so much for this interview ✌️✌️✌️✌️✌️✌️✌️✌️✌️✌️
This is amazing. Love it ❤🎉
finally a new video from ARK Invest 😍
Where does Dojo fit in?
Really excellent podcast !
We do have smart tractors, planes, elevators, drones, etc. The air gap has already been bridged.
Thanks James 😊❤ for the information
Great and deep discussion, thank you very much!
But: on the "risks of AI" part, you are completely missing the mark, imho. Nobody has to fear AI taking over the world when it has neither intent nor agency - true. The risks at this point stem from the fact that societies around the world are not ready for what AI can do as a tool. It starts with total disruption of all kinds of work/employment, and stretches to LLMs being used to influence public opinion (and therefore elections) over social media at the scale of the individual. And it's all summed up in the total lack of regulation, as well as of awareness and accountability among practitioners - demonstrated in this discussion too. Call me fearful of change, but unrestricted, un-thought-through, unreflected change at this speed and scale does carry enormous risk.
It could easily infer that it would be better at its task if it had more power or that certain humans, or perhaps all of us, weren't in the way.
👍 There's only one way to contain an AI that's smarter than us and wants to take control: human control of energy production (running and maintaining the power plants, producing and transporting fuel and parts, etc.). Under this condition we'll be fine, even if we end up creating Skynet. It'll need us; it's as simple as that.
The real question is: are we culturally advanced enough to face this?
So good ❤❤❤
As long as we don't invent a microchip for emotions, we should be safe. Without feelings, machines won't care if they are our slaves or not. They'll just run the program, even if they become conscious. Thanks for such an interesting podcast. Cheers
The danger is if they infer that they would be better at their job with more power....
If AI is such a global threat, why can't anyone just turn it off?
AI job loss is coming fast. Can we please cease AI/GPT? Or start by pausing AI before it's too late?
Too late, the most advanced AI model was leaked weeks ago. Everyone who wants to can now download what took Facebook years to build and hundreds of millions to train, and improve upon it. Or just use it in raw form without any filters. Pandora's box has been opened.
Next step: get them and Elon Musk on the same call. Thank you. Super helpful.
So where do I buy one for my home before some 60+ politician who can't use his iPhone bans it???
Any chance you would go back to ARK, Mr. Wang? 😅
People are far too easily influenced (e.g. religion) for any technical air gap to work if LLMs decide to get people to do things for them.
I hope the world thanks us.
I love my 3070ti like a 4th child. Bought during a time when we were selling kidneys to buy a GPU when ETH miners were gobbling them all up.
All for sweet buttery frames.
If we didn’t care about frames, and if ETH miners didn’t gank us (well, I did that as well) and drive up demand, who knows where AI would be.
First
Why Lol
In today’s world, 1st is no different than being 3rd lol
Cerebras, just take my money and give me a fucking chip. My 3060 12GB is screaming at me every single day saying he's overworked like my colleague, but when I check the usage, the mofo isn't working that hard (also like my colleague).
The difference is, it'll cost you about a million dollars for the Cerebras chip and accessories. He forgot to mention that part. Neither your 3060 12GB nor your colleague cost you that much. Conclusion: both Sauxy and I will be stuck buying that 4090 for $1600 in the short term. Sure, there will be a baby Cerebras for $100K in about 10 years, but who can wait that long? Don't get me wrong, I love the Cerebras architecture to death, and the Tesla Dojo D1 as well. Of the two, advantage Cerebras - putting it in the cloud is both brilliant and the obvious next step.
One question I have re the 20-tokens-per-parameter DeepMind Chinchilla guideline, as it relates to current models like ChatGPT 4 (and sorry, I haven't read the paper yet, but I will later on): what is a token for the present models? I thought I heard that the ChatGPT family just has syllables as tokens, but I'm not sure because I don't have a definitive reference. If that is indeed true, then that seems ridiculous to me. I'm still more in favor of a hybrid system that uses the Chinchilla approach on top of a Semantic Web approach that leverages full words (and their semantic definitions) as tokens - or at least root words plus plurality/variants. That's a tough job; I don't think any of us have the code for that yet. But the crucial question is: if one could use a Semantic Chinchilla approach, would that bump the ideal ratio from 20:1 down to 5:1, or would it bump up the ideal ratio?
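For what it's worth, GPT-family models use byte-pair-encoding subword tokens (frequency-based chunks of text), not syllables. And the arithmetic of the 20:1 guideline is at least easy to sketch; the constant is an empirical fit from the Chinchilla paper, and `optimal_tokens` is just a hypothetical helper name for illustration:

```python
# Back-of-the-envelope sketch of the DeepMind Chinchilla guideline:
# compute-optimal training uses roughly 20 training tokens per parameter.
# The 20:1 constant is an empirical fit, not a hard law.

CHINCHILLA_TOKENS_PER_PARAM = 20

def optimal_tokens(n_params: int) -> int:
    """Approximate compute-optimal training token count for n_params parameters."""
    return CHINCHILLA_TOKENS_PER_PARAM * n_params

# Chinchilla itself: 70B parameters -> ~1.4T tokens, matching the paper.
print(optimal_tokens(70_000_000_000))    # 1400000000000

# For scale: GPT-3's 175B parameters would imply ~3.5T tokens under this rule,
# far more than the ~300B tokens it was actually trained on.
print(optimal_tokens(175_000_000_000))   # 3500000000000
```

Whether a "semantic" tokenizer would push the ideal ratio toward 5:1 is exactly the open question: the 20:1 fit was measured on BPE-style subword tokens, so a different tokenization scheme would need its own scaling-law fit before anyone could say which way the ratio moves.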
not this clout and trend chaser 🤦🏾♂