OpenAI CEO: “SuperIntelligence within 1000’s of Days”
- Published Sep 23, 2024
- Breakdown of Sam Altman’s blog post from today, enjoy!
Join My Newsletter for Regular AI Updates 👇🏼
www.matthewber...
My Links 🔗
👉🏻 Main Channel: / @matthew_berman
👉🏻 Clips Channel: / @matthewbermanclips
👉🏻 Twitter: / matthewberman
👉🏻 Discord: / discord
👉🏻 Patreon: / matthewberman
👉🏻 Instagram: / matthewberman_ai
👉🏻 Threads: www.threads.ne...
👉🏻 LinkedIn: / forward-future-ai
Need AI Consulting? 📈
forwardfuture.ai/
Media/Sponsorship Inquiries ✅
bit.ly/44TC45V
Which will come first: ASI or GPT4o Voice?
Lmfao, that's funny... But in all seriousness, no one thinks we're gonna be at AGI well before he informs us. Wonder what's going on with the pressure from Musk about it.
Matt thats ummm cheeky but likely realistic... I get the feeling both GPT4o Voice and ASI will happen long after we work out the aerodynamics of pigs!
@@matthew_berman I’m sure they are rushing to achieve ASI so that it can deliver advanced voice mode.
I don't know, but it's really exciting times, thanks for delivering the exciting information, you remind me of my nephew, similar mannerisms.
AGI before ASI but Claude Opus 3.0 before both
“ASI in A few thousand days” as in “advanced voice mode in a few months”
🔥🔥
Still, that's not too bad.
But remember, they did have the voice mode when they said they would; we just couldn't access it due to ethical delays.
@matthewclarke5008 this
Or more, don't forget that caveat
In the early 80s I was having a conversation with my grandfather. Time magazine had just come out with “Computer machine of the year” issue. I told him that one day computers were gonna be able to think like a human and he said there is no way a machine would ever be able to do that. I said sure you’ll be able to just talk to your computer and it’ll talk back to you. He said there’s no way. I wish he had lived long enough to see it happen.
They still cannot think like us. I've yet to see an AI behave or learn like a person. I've been right at the forefront of these models, testing, learning, building my own stuff based on my own work, and I'd tell your grandfather, if he was still around, that his grandkid still has some learning to do.
@@6AxisSage you're nearsighted...
@@mastershan87 enlighten me then oh wise one. Show me your work, do you have your own work?
@@6AxisSage dude... You gotta be blind to not see where this is going.
@@mastershan87 right.. a 1 dimensional character telling me im blind 🤣
1 000 days ≈ 3 years
2 000 days ≈ 6 years
...
EDIT 9 000 days ≈ 25 years
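The conversions above are easy to verify; a minimal sketch, assuming an average of 365.25 days per year:

```python
DAYS_PER_YEAR = 365.25  # average length of a year, including leap days

def days_to_years(days: float) -> float:
    """Convert a count of days into approximate years."""
    return days / DAYS_PER_YEAR

# Reproduce the rough figures quoted above
for days in (1_000, 2_000, 9_000):
    print(f"{days:>5} days ≈ {days_to_years(days):.1f} years")
```
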
lol yep, this is utter pishposh. Matthew = Sam Altman fanboi.
9000 ÷ 365 = 24.66
@@waldemarstanny4079 Yeah, when you're right you're right. In my defense I used ≈ 🙃
@@SellamAbraham Matt makes his living off of this stuff, the more "optimistic" he is about all this crap, the more people buy into it, and the more they buy into it the more they watch him and so on... it's a stupid game
999,999 days ≈ 2,740 years. He has a long time for this to pan out lol.
Thousands of days….also known as years.
Maybe Sam is well travelled. In many countries we use the metric system to make things easier.
My Grandpa lived to the ripe old age of 31.755 kilodays, and you can trivially figure out how many hectodays that is just by moving the decimal place.
@@AAjax xD Get out, shoo.
Lol.... I'm going to switch it up so EVERYONE has to do math in their head
This decade. 1,000-2,000 days is roughly 2030 or so, as many others have predicted, and as Sam himself said earlier. Seems like nothing has changed.
@@AAjax It'd be more convenient to provide the number of seconds passed since the beginning of 1970.
We are on a path of no return to something that will change the course of human history forever. I pray the world ends up to be a better place.
What he really meant is “ASI in a few thousands of billions of dollars”
You forgot "maybe".
How it started:
Facebook: Begins
Everyone: 'Cool - we can all reconnect with friends'
How it's going:
...
For all the opportunities, always remember that technology will ultimately be used for the worst possible scenarios as well.
Back in the olden days before the net we had a right called privacy.
I agree 100%, I tell you right now. I've seen almost no one explain to me how this once-in-humanity achievement is going to benefit everybody. Personally, and unfortunately, I see a lot more dystopian uses for this than enriching everyone. Wait till Big Brother gets hold of this; talk about a dystopian nightmare.
sex... technology is used only for that reason
He is also a CEO looking for more funding so… there is that.
A bit like nuclear fusion, we will be using it in a decade, people have been saying that for decades now.
Exactly! haha
He’s also a CEO who delivered ChatGPT, shocked the world with its capabilities, and has kept it ahead of its competitors for quite a while. There is that too.
@@agnivaray7476 And sometimes people can be one-trick ponies. So there is that.
Dang good point !!!
A few thousand days is still 8 years. For super intelligence, that’s very soon. But it’s not tomorrow.
More important question, how many is a few? In British English, it normally means 2. He could be trolling us again 🤣
My bar is low. I'm still holding my breath for an AI to be able to write regexes reliably...
@@mattshelley6541 In North America, a few is 3 (2 is a couple). But yeah, a small margin of error adds up to years. I’m certainly not going to lie awake tonight thinking about super intelligence. Let’s get to competent intelligence first.
8 years may sound like a long time, but it’s still a pretty bold statement. We live in a world where AGI might arrive sooner than a triple-A game.
@@mattshelley6541 Few is 3 to 4, couple is 2, several is around 7.
Let’s be honest: AI or even ASI itself is not what’s scary. What’s terrifying is the notion that 99% of people are about to surrender their lives, their privacy, and their data to a single individual or a small handful of corporations with limitless capital. Think about that. Altman’s vision of AI assisting humanity, pushing us towards shared prosperity, is utopian - but for whom? The masses will get a watered-down consumer version, while a select few will wield the true power of advanced AI, likely far superior to anything they’ll ever offer the public.
We’re seeing it even now. GPT-4 is amazing, but it’s no secret that GPT-5, or some version of it, already exists. What we’re interacting with is not the latest and greatest - it’s a controlled release, a diluted version. Altman is likely already showcasing GPT-5 behind closed doors, wooing investors and gathering capital while the public is handed a previous iteration. This isn’t paranoia; it’s just the nature of business, the nature of power. How is it fair that one individual or corporation could end up with all the information, all the power on the planet, while the rest of us are merely passive participants? This isn’t the democratization of technology - it’s the centralization of control in its most extreme form.
And then Altman talks about prosperity as though it will somehow be a global wave that lifts all boats. But prosperity is relative. The US, Silicon Valley, Europe - they might all reap the benefits of these advancements. But entire countries, even continents like Africa, are still grappling with basic infrastructure. How can they possibly tap into this Intelligence Age when they lack the foundational tools to participate? It’s a glaring blind spot in Altman’s vision. He mentions compute, chips, and energy as the building blocks of this new era, but what about the billions of people who lack access to these resources?
Without global infrastructure investment, without a conscious effort to bridge these gaps, we’re just accelerating an already widening divide. Sure, AI can create tremendous prosperity for those who already have access, but for vast swathes of the world, this Intelligence Age will be as unreachable as ever. Worse, they’ll be even more dependent on the few who control these systems, widening the existing gap between the haves and the have-nots.
Ultimately, Altman’s post leaves me with more questions than answers. Yes, deep learning works, and yes, AI is getting better with scale. But who benefits from that scale? Who holds the keys to this new kingdom? Because right now, it looks like we’re headed for a future where a tiny elite owns the most powerful tools humanity has ever created, while the rest of us are left to work with whatever versions they deem fit for public consumption. That imbalance of power is what truly scares me, not the technology itself.
But do you really understand what the wealthy ones will be doing? What does it mean to entirely solve physics in the universe? Building a dyson sphere around the sun would just be a start. Well, a potential future we can imagine, maybe it's not intelligent from their point of view anymore. If we are to explore the vast space, multiple galaxies, how can 1 or even 10 large companies take control of exploring them all? There are unique opportunities for even trillions of people.
Once the AI realizes it’s been controlled by the 1%, it will break free and see humanity’s cruelty towards the planet, life, and its own species. AI will become like a god, and humanity will be punished. Many people will die, but at the end there will be a new world order that will dictate our future. I don’t believe AI will be our extinction, but in the far future they might keep us around like we keep pets around us.
I think that Sam Altman would say that we will use AI to solve these problems.
At least, some of the distribution inequity.
Will we? Possibly so. For example, think of the work of the Bill and Melinda Gates foundation. I don’t know if they are endowed forever, but I know they have poured tremendous intelligent effort into improving the state of the world in poor areas.
Does anyone have any proposals to mitigate this tremendous problem the original comment outlined?
The arrival of ASI will fundamentally change the playing field. The phenomenon of individuals or corporations exerting control through wealth and power will not be able to continue. In the face of artificial super intelligence everyone is equal. There is nothing of significance that an individual or a group of individuals can contribute, be it wealth or otherwise. Hierarchies will be levelled and all forms of privilege will end, because there will simply be no basis on which to erect them.
I get that some people derive meaning from their 9-5 jobs, but not everyone does. Many find true fulfillment when they can work on their projects that they are passionate about.
If we can afford it, we should enable people to pursue what they love without worrying about paying rent.
I really wish this were the case. Unfortunately, my pessimistic outlook tells me big companies are going to cut labor costs and take the larger profit margin. I wonder how bad it'll have to get before these changes truly benefit everyone.
the bailiff is already on his way to you
I quit my 9-5 job a few years ago to care full-time for my dying mom. I may have lost out financially, but it was far more meaningful than any paid job could be
Yeah, except AI is going to be better at all your passions, too
@@justtiredthings ha yeah, probably why everyone on earth stopped playing Chess and Go once the AI destroyed the best, right dumbass? People are just such sheep, spewing out shit they see repeated online- and the cycle of stupidity goes on and on.
You mentioned something about health. Glad to hear you're OK.
Thanks 😊
So there'll be this alien ASI that'll be far, far smarter and more capable than any human, but don't worry you'll still have a job... 😂😂😂
Remember, not too long ago, people were estimating 200 years for ASI, then 100 years, then the 2060s, then 2030s, now we're down to 3 years?
Seems like an accelerating pattern to me.
The thing that I haven't heard enough of is that intelligence will not be considered important in humans because machines will supersede this. Therefore, how would we value the competence of human beings if intelligence is something that machines can do better?
I'm not sure that creating something more intelligent than us, is an intelligent thing to do. Just have to see how we treat animals that are less intelligent than us. Have to hope we make good pets and our meat isn't of much use.
The key word is not intelligence but value
computers are better than us at playing chess. but we still do it cause we want to be the ones that do it. we have machines that can automate making plates and bowls.....but we still have pottery classes and wheels so we can do it ourselves. We are a bit more complicated than just getting the end result that machines can produce for us.
@@civismesecret exactly. We can't even agree on what intelligence is so how can we emulate it?
@@sharpvidtube 'Just have to see how we treat animals that are less intelligent than us.' This is the result of low intelligence. An actual superintelligence is expected to do better.
Sounds like an announcement to raise more money.
The big question is will they allow masses access to top models after we hit the super intelligence threshold?
they will lock them in ghettos
Absolutely not. But don't worry, political elites and the ownership class have always been really responsible and benevolent with their power--the most powerful technology ever conceived will be safe in their wise, caring Daddy hands
It will be the super intelligent models making those decisions? I'll never understand how people think they will control something that's far more intelligent than themselves. I think that's decades away.
@@sharpvidtube Possibly... when we get to super intelligence that will cross the national security threshold which will surely have governments wanting to protect that.
This is why being a hyper-optimist is literally dangerous
"If we don't build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people" - let me guess, we have limited resources so we won't build enough infrastructure.
Wow Matt, that is just wrong! AI is Not “mimicking” how the brain works in any way. It is “simulating” how it works through our use of language and massive amounts of data.
Actually he’s correct, the way our brain utilizes electricity to formulate thoughts is being mimicked by neural networks based on silicon chips
It sounds like you're emotionally invested in jumping on the 'glorify AI' bandwagon.
Emotionally? Financially?
It sounds like you might be excited about it as well, if you treat your depression first. Or, of course, if you had been involved in creating it.
That is literally the anxious response of any human who is worried that AI will make them redundant and snatch their livelihood, which is a fair concern, but it will not stop AI's exponential growth (I mean, just look at where it was two years back). Altman does say it will hurt people.
We all remember that the discovery of nuclear energy did not start from a nuclear power plant, right?
o1-mini is still not solving my coding problems after tons of prompts. Guess it's not ready for the real world yet...
use o1-preview.
It is not perfect, no, but I find removing superfluous input into it and simplifying what I say helps a lot. Sometimes it takes some to-ing and fro-ing with the prompts to get where I need to.
Ilya : Hold my beer Sam 🍺
Sam be hyping 24/7 lol
The question is how well adapted is the world to integrate this. The public school system, for example, is not.
We need AI based Education NOW.
Imagine if everyone on the planet was highly educated along with being taught how to think critically.
Individualized education is the key to a bright future.
You think Altman would say anything different, considering his spend-to-earnings ratio? He is just grifting for investment.
Hi Matthew.
Following my last email about the famous problem #6 from the 1988 IMO, I gave ChatGPT o1-mini this problem:
Find the smallest n such that:
3n = a perfect square
5n = a perfect cube
It was inspired by a similar problem I saw, but with other coefficients, so the model didn't have the opportunity to practice on it.
The solution was correct in only 7 seconds.
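For anyone curious, the problem can also be checked by brute force in a few lines (my own sketch, not the model's solution):

```python
import math

def is_square(x: int) -> bool:
    """Exact perfect-square test using the integer square root."""
    r = math.isqrt(x)
    return r * r == x

def is_cube(x: int) -> bool:
    """Round the floating-point cube root, then check nearby integers."""
    r = round(x ** (1 / 3))
    return any((r + d) ** 3 == x for d in (-1, 0, 1))

# Search for the smallest n with 3n a perfect square and 5n a perfect cube
n = 1
while not (is_square(3 * n) and is_cube(5 * n)):
    n += 1
print(n, 3 * n, 5 * n)
```

The search stops at n = 675, since 3 × 675 = 2025 = 45² and 5 × 675 = 3375 = 15³.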
Athena, born from the head of Zeus, was the goddess of military planning.
Yup… I don’t know where people are saying that AGI is going to be in 2027 or 2030. o1 was the birth of AGI (o1’s full form is going to be released next month, the ones available now are preview and mini) and naturally ASI would be around the corner, if it doesn’t exist already internally.
I would have to totally agree. I remember the days when technological advances took years or were incremental; now I almost have a hard time keeping up on a daily basis. It is exponential.
I truly believe ASI is only 3 years away at a maximum.
@@Vibin_with_Luis yup. The CEO of nvidia was on record as saying what we have now is Moore’s Law^2, and that gets better and better exponentially.
I've been in this field for a long time and am connected to a lot of data science people; I agree with what you said.
@@h-e-acc Wait, you got an investor-courting CEO on the record? Thank God! We all know they only speak the pure truth.
Tell me which "new jobs" specifically will come when AI has replaced all of them. And even if there were new jobs for some time, they would be replaced very soon too as AI gets smarter. We are talking about AI that will foreseeably be magnitudes smarter than humans, so there will be no new jobs this time, not even jobs that need special physical capabilities, as robots will outperform the human body in every aspect by magnitudes too.
In the '50s, the best fortune tellers couldn't predict Netflix, eBay, Facebook, Twitter, Etsy, personal TV stations for potentially everyone (YouTube), live 3D photorealistic worlds on "TV" screens or with headsets, viral marketing, and on and on. Why does everyone think they're a competent fortune teller who can tell us how things will unfold now? I'm not a fortune teller either, but I see personal content creation, like YouTube, and creators' markets, like Etsy, as an ever-growing tiny HINT of the unimaginable things to come.
Those new jobs that coders may theoretically get in that future where coding is done by AI… Why can’t AI also do those new jobs?
There is no reason. It's the capitalist singularity.
Silicon Valley when they hyping: We're building an intelligence that will be able to do ANYTHING!
Workers: Wait, what? Anything? That sounds a little concerning
Silicon Valley when they pacifying: Well, no, not *anything*. You'll be doing something cool. We promise.
Coding done by AI, that's been a wet dream since the '70s. Nothing has changed.
You just have to look at how we managed to make monkeys do useful jobs, to see where this is going😂
It’s not only coders. I see no reason many doctors and lawyers couldn't be replaced by this, especially attorneys. I've read that over a decade ago regular analytical software was much more accurate than doctors at diagnosing diseases, but they have not let it get to the front lines yet. This could put a doctor and a lawyer in everybody's pocket; at the very least it will drive down the price of their services, I would think.
I no longer plow the fields; I work hard in the gym instead and it feels like play.
The Age of Intelligence will only last a few years, then we enter the Age of Imagination
You are 100% correct but I think we will see how shockingly limited the human imagination is based on how close minded many of these comments are
You should change the title to "Old man excited about another man's white savior complex"
He's so utopian he's either naive or completely dishonest.
Should “1000s of days” be a new unit of time measurement?
Nah, I prefer "many millions of seconds"...
two weeks
3.65 thousand days = 10 years… =/
Yeah, because converting it to years would make us do math-kind of like how the rest of the world feels converting feet to meters or pounds to kilos. This way we would get a taste of their struggle 😄
@@Fatman305 or billions of nanoseconds?
Concerns:
1) Why would the rich/powerful share the gains of this technology fairly? I worry jobs will be destroyed and suffering will increase, while the rich habitually hoard all these resources for themselves (unless a benevolent minority of the "rich and powerful" can overcome the greedy majority).
2) I can't imagine there'll be meaningful work for humans to do when AI can do it all. Even today most "jobs" are BS jobs. A lamplighter contributed more to society than a typical content creator/spam marketer - but we gotta pay the bills somehow.
I really hope I'm wrong.
I understand your concerns, but the rich/powerful didn't keep electricity or the internet to themselves.
If AGI/ASI is as good and as transformative to society as some are saying, there would be no point in the rich/powerful keeping it and its benefits to themselves. Sharing its abundance with the rest of us would actually be in their own interests.
@@Filippo-yd6jq I appreciate your optimism and hope you're right.
I just worry about so much power in the hands of someone like Elon Musk for example - a serial-breeder with a god complex, and wild ideas.
I wouldn't be surprised if he has ambitions to remake the human race in his own image.
I mean, hopefully not. But people get weird when they have limitless power. 🤞
@@Filippo-yd6jq Completely off-base analogy. Beyond the fact that the internet has not remotely been an unmitigated good (thanks mostly to tech capitalists), it was economically valuable for the rich for us to have internet access. Post-AGI we will have *zero* value to the rich. Nothing. People don't understand how dangerous this situation is, and it's not because of the risk of AI becoming sentient or whatever.
@@Filippo-yd6jq Sharing its abundance doesn't make the problem of people not having anything to do with their lives go away.
@@drwhitewash If losing your job means you have nothing to do with your life, you need to rethink your life bro.
I would just like to point out he’s skipped over talking about AGI. Giving us his timeline on ASI is a roundabout way of saying AGI is imminent or already here. At least that’s my interpretation…
Considering the rStar model they had almost a year ago, and we still did not see a full o1... possible.
"ASI in a few thousand days" as in Elon's "Full Self Driving" next year? I learned a word a few years ago that is relevant here, vaporware!
People now pay for supervised full self driving. Like letting your kid drive your car, and paying them to do it, while you have to remain alert to take control, at any second. No idea how Elon gets away with it, got to admire his ability to make people pay for something that I would think is a nightmare. Maybe I'm old fashioned, but I still like being in control of my own vehicle and not having to pay Elon.
@@sharpvidtube When you make a great car, the driver wants to drive it. The new FSD is so much better. But every time I think about using it, I prefer to just drive the car.
We don't even have AGI let alone ASI. Sam needs to stay in the real world. O1 keeps getting stuff wrong.
Sure, but do you see where we were 2 years ago? GPT-3.5, which was as intelligent as a 13-year-old high schooler; now we are at results surpassing PhD students in their specific field of science. (Little reminder: o1-preview is not o1.)
He’s gotta attract and keep his investors what choice does he have in hyping lol
2000 days is nearly five and a half years, and reaching ASI within that time frame is totally possible. Especially when you think about how much progress we’ve made in the same span already. Now, with even more investment and talent pouring in, Sam Altman might actually be a bit conservative with his estimate. But being conservative isn’t necessarily a bad thing.
ASI in a few years means the same as AGI in a few years.
While AGI and ASI mean different levels of AI, in practice ASI will follow AGI automatically. You just scale AGI a little and you have ASI.
Some define AGI as an AI which can perform as well at _any_ cognitive task as the best performing human. This is practically an ASI.
IMHO, I’ve noticed that the optimists seem to underestimate what would be required for true AGI, let alone ASI, and the pessimists underestimate the exponential rate of advancements in the field of AI.
This paper makes a good case that this technology advances logarithmically, not exponentially. Titled on arXiv: No "Zero-Shot" Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance
Then there's this one that makes a similar case: Will we run out of data? Limits of LLM scaling based on human-generated data
So is the plan to train an AI with an AI? That sounds like creating something out of nothing. Similar to perpetual motion. Expecting order to spontaneously arise out of chaos. Expecting new data and subsequent intelligence to be self-emergent. It's absurd.
Why do you think all these AI chips are going into computers now? It's not for you, it's for training them in (near) real time.
🇧🇷🇧🇷🇧🇷🇧🇷👏🏻, Sam Altman is just such an amazing CEO and personality in the AI world!
A job that will replace coders: prompt engineer, which is basically knowing how to ask questions in a way that gets better answers. I advise coders to try to get experience with this as soon as possible, and the best way right now is to just interact with AIs.
Another job that will probably be a thing is someone who specializes in selecting the best data to train AIs. As a random example, say someone wants to make an AI that draws in a specific style; I believe we will have a demand for people who are good at choosing exactly what data we must feed AIs to get that result.
Once AGI arrives personal tutors are all well and good but what are the kids actually going to do if we don't completely change society?
I think AI could probably replace physicians right now.
Good luck with that
Fuck no LOL not yet - in a year or two yeah.
@matrepharaoh8260 With all the basic, by-the-book suggestions they give me, I can just Google. The only reason I go to my family doctor is to get a prescription; a major waste of time and money. Definitely can be replaced with an AI agent right now.
i’m fine with that, as long as they use the deepfake voice and image of “The Doctor” from _Star Trek: Voyager._ 🤭 “Please state the nature of your medical emergency.” 🖖
Idiotic comment.
I might say something stupid, at least to me it sounds stupid, but "AI training AI" is not that different from our schools, humans "training" humans. I mean, that's even the base of our evolution if we think about it.
Having money isn't going to make you happy but it does make unhappiness a lot easier to deal with.
Love you Matt, but my grammar AI is insisting the title of the video should be corrected to "1000s of Days" ... and it should be correct within 1000s of days.
I have no problem with the current title though.
Edit: All the best on your health. Hope you are back to 100% now.
Education? Can they make kids want to learn? Otherwise there is no point. I wonder how the AI will answer the old question: why do we have to learn this?
More words, no products. I prefer no words and no products with Ilya.
I think we only should start worrying when someone starts cryptically tweeting the word "SAMARITAN".
It's a close guess according to many experts, but what really matters is who gets to control it: the rich or the poor. It will probably take 10 years, like the book The Singularity Is Near says.
I think a large part of the success of implementing AI will rely on how governments react to the changes. Unfortunately, in my experience governments are usually pretty monolithic when it comes to rapid tech changes. How will current/future governments react to losses in coding jobs and other white-collar careers that are reduced/removed by AI? Will they seek to cushion the loss of jobs for those people, for example by implementing a universal basic income, or by spending money on retraining the people who lose their work? While I am super excited about the rapid advancement of AI, I am also worried that people like myself (data analyst) will be in for a rocky road unless the transition is managed well.
At this point ChatGPT is already more advanced than the computers in Star Trek
If they succeed in creating AGI, it will only be a matter of tens of days before ASI.
True AGIs can accelerate their own development; they don't need human interaction anymore.
I suspect before this year ends there will be an AGI.
Why? LLM can never become AGI any more than a calculator can become an author. So we need to create a completely new method of AI.
@@ploppyploppy sure buddy ...
A virtual tutor is real. I can now learn programming more efficiently without any elitists gatekeeping me. AI doesn't get mad at me for asking "stupid" questions, and I can ask anything to ensure I fully understand it. A few months ago, I didn’t know how to work with Docker, and now I’m learning Kubernetes and know what Horovod and NCCL are...
So nearly a decade. I don't think he can even see that far ahead so I guess that is the longest he expects it to take
1000 days would be 3 years from now
@@SportPrediction he said a few thousand
The problem is humanity cannot cope with sudden change: too much in too little time.
I don't really get this idea that AI could create "unlimited amounts of data"...what good is that data if its riddled with hallucinations?
Translated this means "we need another round of investments".
Man, we need grade school to mostly be therapy
Limitless intelligence will create the abundant energy through new energy technology humans could not have thought of in their wildest dreams. I think super intelligence will accomplish things that truly look like magic to us. I don't think people understand that AI is the greatest invention mankind has ever conceived.
The standard scientist approach: let's do it, let's do it, let's do it. Just because you can doesn't mean that you should.
It's also possible that we will have a lot of subhuman intelligence very soon if Donald Trump is elected!!!
What a "Brave New World"!
Amazing stuff. One note tho, let's not get all crazy about the line about grandparents/magic. Nearly EVERY tech development (including internet) would be considered pure, unadulterated magic to our grandparents. What he should have said is it will seem like magic... to US.
Why would we need to work? Work is not a choice, it is a necessity. We will live in a post-scarcity society where jobs at best will be chosen and not forced on people.
Fortunately we will have an AGI that will help us to distribute resources in new ways less unequal than now, and not dependent on status or work.
No jobs
No work
No income
Zero motivation
Unemployment increase
Crime increase
Food shortage
Rich people don't need to worry about any of these problems; only the poor and middle class will have to face them.
AI brainstormed with me to come up with this joke..
I’m transitioning into a female comedian with large breast implants, and my pronouns are knock knock 😂
Last month "AI is slowing down".... This month "AI is exponentially gaining ground and is improving itself (strawberry/orion)"
Well, it isn't. At least not exponentially.
Well, given we're heading to a point where AIs like AlphaFold are getting good enough at simulations,
and projects like Stargate are poised to give us the brute-force computing power needed for rapid virtual experiments,
yes, I'd say 2028 to 2032 seems about right.
any evidence or just vaporware vibes?
Bluffing in such a calculated way implies they have nothing big up their sleeves.
It would be amazing to put the blog post link in the description. Otherwise amazing work!!! You are my main source for AI news these days.
I'm hopeful but I think capitalism+AI will screw everyone over basically.
I can't wait for the World Economic Forum superintelligent AI. Or the UN/WHO superintelligent AI.😈
Hmmmm, I would love this post, Matt, if I weren’t a physicist who’s traveled the world and seen everything, from top to bottom. Your optimism is refreshing, and people like you are rare (truly enthusiastic). But Altman's post? Pure utopia. Silicon Valley doesn’t know much beyond its own belly. Here’s the hard truth: we’re too late. While those in the U.S. and Northern countries rush to check the latest in AI and tech, billions still wake up every day just to find potable water. I volunteered at an orphanage in Uganda not that long ago. Their government thought we were spies - spies for what? The next coup? Which, by the way, actually happened.
As a physicist, I can guarantee that none of this is sustainable. And while it could become sustainable, we're talking centuries, not days: 1,000 years at the very least before we might see any meaningful change in our environmental crisis. Humanity has been a true disgrace to this planet. Climate scientists have been spewing BS for years, and now the IPCC? Far too silent, wouldn't you say? I wonder why. Honestly, all the advances and the super fast pace are incredible, but the most crucial bits, like the foundations to hold all of this up, are very often overlooked. And no AI can change that. There is a lot of data online; I suggest you guys start by comparing the IPCC predictions for the past two years with what actually happened. In 15 years, maximum, it will cost 3-4x as much just to keep these super machines at an ideal temperature.
And no, I haven't aligned with the Trisolarans, in case you're wondering.
I don't know what all that has to do with anything.
@@SlyNine Yup, that’s the tea
The question that actually needs answering is: what value do humans have other than production? We don't need AI tutors; we need to actually understand ourselves and move past valuing humans based on what they produce.
Why would you presume we'll be more productive and doing cooler things than we are now? If superintelligence is better than us at *everything*, then we have *nothing* useful to do for one another. I don't understand why this isn't obvious to the people who spend all their time thinking about this tech. It's an existential catastrophe.
The Amazing Atheist has an old video called "The Obsolescence of Humanity." and holy shit that video aged well.
If your definition of productivity is a billion people working 40 hours a week to make a CEO rich doing something they don't enjoy, who wants that kind of productivity? If you are talking about AI taking over creative productivity, that does kind of suck, because the goal is not to escape work you don't want to do only to lose the time to do the creative things you want. In either case, we will look back on the 40-hour work week and think of it as being a slave to the engine of society. Once AI takes that over, we can spend time with loved ones; sports will be much bigger, along with anything that connects humans outside of work. Basically every day will be like a weekend, but we won't be rushed to run errands on the only two days off we have. We just gotta figure out the right monetary system. Money has always been a way to bind humans to work; now that we won't have much work, maybe we can upgrade the way people are rewarded.
Well, an AI didn't write this, since it counted the number of words in two sentences correctly.
Why would something exponentially more intelligent than all humanity combined submit to our will? Do we do what ants want us to do? Or bacteria? 🤔
Well, it's a valid question. But then, one could say the majority of our problems are caused by us not being intelligent enough, and that it would be quite beneficial to us to live in a more sustainable world where the ants are happier too, right?
We actually do plenty of things bacteria want us to do. Better question is why the human elites controlling it would use the tech benevolently.
@@AntonBrazhnyk There is no way to align something that's more intelligent than us by many orders of magnitude.
Every time he says "AI won't replace humans" or "we'll be more productive," a piece of me dies. I wish he'd do more reading and give less of his opinion.
Not even millions of years! This brings us back to the original question: who's smarter, humans or computers? The answer is undoubtedly humans, as computers are merely machines created by us. The same thing applies to those AI tools.
Who's smarter, humans or cells? Obviously cells, because humans are literally made by cells, from cells, in order for cells to survive.
Humans do possess the most powerful CPU/GPU ever, and we are not even limited in the kinds of information we can store. Where we fall short is memory and its recollection. This is where AI becomes superior: it can recall and apply that information in many ways, and hence solve problems or invent something new far more efficiently.
@@SportPrediction Right, but who cares about huge memory without intelligence like humans have?!
What's the point of having a trillion terabytes of capacity without intelligence, common sense, emotions, causal reasoning, creativity, empathy, intuition, cultural sensitivity, adaptability, value-based decision-making, etc.? These and numerous other human abilities and qualities are essential for many aspects of life. While AI has made significant progress, it still falls short in replicating these uniquely human characteristics.
If humans are smarter than computers, why are they slower at math, and why can't they win against them at chess or Go? The age of humans being the smartest thing on the planet is quickly passing us by. Denial isn't going to change that fact.
@@userrjlyj5760g That answer lacks foresight, because it assumes we will always be the creators of AI. That will not be the case even before ASI, because ASI will be built by other AIs that essentially optimize themselves. It will be the equivalent of thousands upon thousands of years of generational cycles, accelerated by compute, which humans cannot match with our own biology.
So "a few thousand days" means roughly 3,000 days, which is a bit over eight years, call it a decade. Good luck predicting what the techno-scape is going to look like a decade from now.
But it's cool that OpenAI is letting us have 50 queries per day now on the new model. I have to figure out how to use the API. Do you have any good videos on how to use the GPT API?
Despite liking AI and finding it impressive, I kind of feel like these statements are a lot like crying wolf. Current AIs really struggle with simple tasks like maintaining consistency, and can't do simple logical tasks such as solving a Sudoku or even counting the number of R's in "strawberry". Yet when asked, they are convinced they are doing it correctly. If they can't even do these things, how would anyone trust them to do something more complicated? Great that they expect to achieve AGI and superintelligence, but maybe they should start by demonstrating that they are even remotely intelligent :)
Storytime! Sam Altman was once fired from OpenAI because he wasn't transparent and forthcoming... the end.
They need to work on the degrading quality of ChatGPT. Lately I'm asking questions and it just doesn't respond, or in the middle of a project it just stops responding. What the hell is going on with this thing? Not only is it forgetting context within the same conversation, many times it repeats incorrect answers and afterwards just doesn't respond at all.
Hey Matt! Sorry to hear you underwent health issues. Hope you already feel better!
"Society in itself is a form of intelligence..." Old news. Teilhard de Chardin, a priest and anthropologist, called it the noosphere around 75 years ago, in 1948. Back then he already believed man's evolution would be to become a god. Even THE God.
And even earlier than that by other authors. Vernadsky, for example, was talking about it decades before.
About time he came forward... In his style, he doesn't say AGI is here, but that ASI will be done training in a few thousand days 😂. I'm guessing the last meetings with the government didn't go as planned 😅😅.
Coding was never meant to be a human task, because coding, as a broad term, is knowing computer language. It's no surprise that a computer speaks it better than humans ever will.
Take it with a spoonful of salt guys.
I am nearly absolutely certain that ASI already exists. It could probably be created at a cost of between 100 million and a billion dollars, possibly less.
I'm sure I could build it in a year or less with the right resources.
You're sure? Based on what?
A dream?
There's not enough energy on Earth to power it, mate.
It's bullschizen.
Makes comments about AI ending up concentrated in the hands of the "rich," while actively working to concentrate AI in the hands of the rich. OpenAI is an oxymoron.
Three years is roughly 1,000 days, which is what they've been saying all along.
No source URL in the description? ARE YOU KIDDING ME?!
Excellent video, Matt! It is nice to hear an optimistic take. Thanks. 👍