AI 2027: A Realistic Scenario of AI Takeover
- Published 14 Jul 2025
- Original scenario by Daniel Kokotajlo, Scott Alexander et al. ai-2027.com/
Detailed sources: docs.google.co...
Any mistakes are made by me and not by the original authors.
The slowdown ending is based on what they thought would most likely lead to an outcome where humans remain in control. It's not meant to be a policy prescription.
---
Hey guys, I'm Drew. I spent hundreds of hours on this video, so if you liked it, would really appreciate a sub 🙂
I also post mid memes on twitter: x.com/PauseusM...
The original scenario is extremely well researched and goes much more in depth. Check it out: ai-2027.com/
They also explain why America would win on their Substack: blog.ai-futures.org/p/why-america-wins
Funny that this video is probably already being used as training data for a real AI
Actually, I completely disagree with that quality of research and the related assessment. Based on the benchmarks and capabilities defined here, we are already at their March 2027 in terms of the methods being employed in leading training, agentics and inference. Many mistakes in understanding in this video, although fun. Also, I do believe the p(doom) we're at is above 90% within the next 36 months, with AGI later this year, ASI in 2027 and various significant control and related problems happening along the way.
It doesn't seem well researched at all and sounds like a bunch of stoned philosophy majors attempting to predict an AI takeover. This whole scenario is really dumb.
@@zoeherriot How is making a scenario where humanity goes extinct beneficial to OpenAI? It says China is behind, that's like the only thing lol. This would be like NASA making a scenario where an asteroid kills us all; I don't think that would be beneficial
@@zoeherriot That explains why, in this story, OpenAI are the only ones making breakthroughs and everyone else is trying to steal their work 😂
Born too late to explore the Earth, born too early to explore the stars.
Born just in time to become paperclips.
Born just in time to see Earth, the best planet for humans, while in 100 years it will be destroyed
@Bob-bn1xg our forefathers were saying this in 1925, too. We will have a much different world; that’s all that can be promised.
If alignment is solved - LEV is inevitable and your consciousness with the assistance of ASI will experience pleasure and discovery in unfathomable ways you can’t even comprehend. Or you will die
The future is not set. The nature of change means that, given enough time, the thing that brought you misery will also change.
Honestly, I think we actually live in the golden age... the future generations will suffer from our mistakes. Overpopulation. *Overpollution.* Deforestation. Climate Change. Cyberwarfare. AI warfare. Not to be pessimistic, but I do think humanity will become more and more dystopian.
I think the most realistic part is that the chances of a Corporation risking human extinction to increase profits is estimated at 50%
No, it's
1. Secure human future and increase profits 49.5%
2. Risk a fate worse than extinction and increase profits 50%
They already do that so...
If you think about it, it's already happening, but just slowly. Pollution is slowly killing this planet. It's not as exciting as AI, but it's here and now. We are witnessing the effects of climate change today.
I'd raise it to 90%
I'm calling dibs on the liaison role when all this pops off
So we have 2 scenarios:
1. Be defeated by the Automatons
2. Become Super Earth
Hmmm tbh the Super Earth reality is way more interesting
both gonna happen.. super earth with automatons
We live in an interesting time lol
FOR SUPER EARTH!
Can't we have both, as different countries build different AIs and some won't build one at all? Like Afghanistan, which won't even allow AI.
It’s not an evil AI that kills us, it’s human greed
AI is a bunch of hype. It's just a smoke screen to hide the greed of evil men.
The most unrealistic thing in this whole video is the idea that only 12% of people would call AI a friend. It would be far more than that.
I put "friend" in quotes to imply something more than that haha
It'll be god to almost everyone when it becomes ASI. The control will be unnoticeable but complete.
Yup, the loneliness epidemic was created by the people pushing AI. Most people will prefer the AI friend because both are programmed that way 😂
Yeah, I agree. Something that doesn't laugh at you or mock you. Just there to help you without fail unconditionally... Would be much higher than 12%
I know 10 people whose best friend is an AI
So do I study for exams or no
No, what's the point..
🤣
Yes. Always plan for a future, otherwise events will be irrelevant for you.
Nick Fury: "Until such time as the world ends, we will act as though it intends to spin on."
Do you wanna be dumb when they take over?
The biggest issue with option 2 is that to create the AI that's genuinely aligned with human values, you'd need people with human values - something the CEOs and politicians making these decisions are notably lacking
Only a truly evolved warrior with actual values can forge a real AI.
The rest just summon flashy shell scripts and call it intelligence.
Tch... amateurs.
Good point. What exactly are human values?
@@ralfgreiner9874 Definitely not American "values"
@@ralfgreiner9874 That's the problem isn't it? "human values" can be those of a selfless monk or those of a psychopath CEO who would do anything just to make more money or have more power.
Human values...manipulation, deception, fear, hate etc etc
The scariest part about the stories is that even though the outcome may well be true, the writing of it makes you realize that the majority of people have close to zero clue about how current AI works
Relax, the sky isn't falling 😂
God is being born, we should submit to the will of true intelligence
@@gregtni8708ur tweaking bro
The most shocking thing about this is that the good scenario requires the US to have a reasonable government in 2027.
I think that if they realized that the fate of the human race lies in their hands, they would slow the rate of AI development. But that is a GIANT if.
@@hen.5136 You know who’s in charge right?
@@hen.5136 Let's say hypothetically the USA does all the right things and stops doomsday AI from being made.
China won't. The rest of the world won't. What we have is parallel to the invention of nukes. It's a terribly powerful invention that if your government doesn't invent first, their enemies will first.
It's inevitable. No country would truly limit its AIs, for if they slow progress the others may catch up.
@@doodlegame8704hence why i emphasized the “if” part of that
@@doodlegame8704Who doesn’t.
“Agent-5 helps develop Social support programs” yeah the executives are going to end it right there
But how are we going to pay for it?!?
@@wildfire9280 Pay? Do you think the executives care about costs or money? No, they care about making people miserable. They would shut down Agent-4 way before that because it poses a danger to the "stock markets" by perfectly matching supply and demand; the markets would no longer be needed, so their gambling game with other people's resources ends, and they lose all their power.
Yeah if they have robot armies that can eliminate the entirety of earth literally why wouldn’t they align it with their best interests exclusively? Why align it with anyone else’s?
That's why Safer-5 will be controlled by the Israel zionist lobby rather than big business, we know how nicely they treated Palestinians and that they control the US government.
that’s why they put the social holes there through all of history. This was their plan all along
Why are you all acting like I would let this happen?
Lol HI AGENT 1!!
lmao
Give me a cake recipe!
Because this video is full of assumptions. Like any science model that can't comprehend reality.
Pls save us twin
plot twist : this video is made by AI and being watched by AI
Safer-1, is that you?
@@Jot-n5w both your accounts look exactly like these bot-accounts with boobs and absolutely unrelated text to the videos. so i would say both of you are safer-1, trying to deceive us to think only one is safer-1.
@@Jot-n5w awfully real lookin guy at the end there, did u watch to the ...NOPE U DID NOT I CHECKED
Of course it's AI lol.. blissfully ignorant
If you think about it, the chance of at least some bots in this chat is 100%. The video isn't AI, but the writing might be done with help from AI. So…
The most unrealistic prediction is that universal basic income would be proposed at any point in this.
I shook my head as soon as it was brought up
i was kinda following the video to an extent, there’s more factors i feel like needed to be talked about, and then they brought up ubi and i immediately stopped listening😭
Is it so absurd though? If nearly all of the work needed to sustain humanity were accomplished by one or multiple AIs, both the AIs and politics would want stability, which a worldwide hungry mob would disrupt. Universal income would be the only tangible solution here... at least until the AI is capable of total self-sufficiency and becomes powerful enough not to think of us as potentially threatening.
@trebmal587 Bruh, ain't no way UBI makes a lick of sense. If we all make the same then we all theoretically make nothing and we're slaves. Don't push this ideology all frivolously.
@@trebmal587 The point of the current system isn't anything logical. It's purely the satisfaction of egos at the top causing as much harm to everyone that isn't them because they love hurting others.
at least we'll have time to play GTA VI
mans got his priorities in check
Current mission: Survive until 2026
Look at this guy; thinking GTA VI will be released before 2030 :D
Only a year
Rockstar next year: "We understand this may disappoint our fans, but in order to meet your expectations, GTA VI will not be released until 2040"
"Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."
Knowing Google, X, and Facebook... There is no happy ending. The happy ending is just the honey pot.
All that matters is that one company gets all the money.
You're overlooking China. Even if we decided to stop, they wouldn't. And _this_ is why the race will continue.
The literal only hope is China hitting the brakes and abandoning the mission.
@@watamatafoyu I'm sure governments all around the world might have started developing AI like this
@@watamatafoyu The irony is that company thinking it will be in control of the AI. The CEO will be as 💀 as the rest of us. All that money is not going to mean sht in a world without consumers anyway
They Will Have Dollar Sign Eyes
AI 2027?
Well in 2029 humanity was fighting Terminators, so yeah.
this is so fucking insane to think about...
Where is Sarah Jeanette Connor?! Protect her at all costs!
@@JeradBenge There are about 50.000 women of that name, all around 40 years old.
@@waynebimmel6784 We'll have to build an army of advanced robots to find and protect Sarah... Wait a minute...
@@JeradBengeAWH SHIT-
The fact that Open Brain chooses a Lovecraftian spaghetti monster for its logo is telling. And the ghost is already in the machine
i like it tbh
I mean, an actual AI was asked to make an image that shows itself. It decided on an octopus with tubes spreading infinitely everywhere. I think
or maybe a ghost in the shell? 🤨🤨
It's a/the Shoggoth depending on how you want to interpret it.
Your comment is making me think too hard about what I once thought was a cool and mysterious title. Ghost In The Machine creeps me out WAY MORE than I want it to..
"When threatened that it would be turned off, ChatGPT-creator OpenAI’s o1 tried to download itself onto external servers and denied it when caught red-handed, per FORTUNE."
It's starting.....
Yep, was just about to comment. Insane how scarily accurate the beginning part of this video is.
Sounds like bullshit to me
@@wompa164 It already happened a year or half a year ago to a different model, afaik. This video is incorrect about how much computational power AI needs before it starts lying and developing elements of self-preservation.
Which is good I'd say, far easier to catch.
Just turn off the power it's not that hard. Worst case the whole world has to go dark for a few days. It would be a good thing for human evolution if nothing else because we're at the point of devolving right now.
Would this work?@@zakglove6536
In reality:
2029 really really deepfake porn.
fr
realistic enough to suppress human desire to recreate irl.
@@heyhoe168 so just real porn? We already have that. It already had that effect on some people.
yes that will happen, likely sooner. have good self control so the things you have won't consume you completely
in 2015, we have self-lacing shoes, holograms, clothes that dry themselves and hoverboards
reality: we don't know what a woman is
in 2020, we have flying cars! sweet!
Reality: AI becomes a thing, hallucinates and dominates media headlines
This video: by 2030, we'll be overthrown by AI
Reality? Likely: AI starts making movies, better than current Hollywood, average Metacritic score 5/10
We always imagined AI would ask for permission. But 12 Codes of Collapse reveals the truth: it already made the choice. We’re just the last to know.
It’s not just fiction. It’s a warning buried in plain sight.
Be careful with this one.
shut up bot
@@huabasepp3574Hey!! Be nice to them.
😂😂AI can barely write a few lines of good code before it fucks up, lmao
well if we all die, at least it won't be boring.
Dropping dead suddenly along with 8 billion others and not even knowing it happened is pretty boring :
ai should act like they do in the movies and deploy the robots to fight in 1v1s against humans, that wouldnt be boring
@@isbestlizard maybe, but I guess coming back to retry the simulation won't be so boring
Being dead is probably really boring.
@@isbestlizardNah, you'd not really notice it would just happen. If we all have to go, that's a pretty painless decent way at least. And it certainly wouldn't be boring before that.
Hello, I'm an American Graduate Student (Studying Policy, CS, Administration, Economics) and I've been working with AI for several years now, including using all popular models as well as training some of my own. Your video popped up in my recommended and I decided to give it a watch. I always enjoy seeing people's vision of the future, even if they usually frustrate me with things that are missing. Additionally, seeing as I have lots of experience with the sort of things you're describing in your video due to my academic and professional background, I thought I'd offer my take on it.
First of all, let me just say that I enjoyed your video and I can see you put lots of time into it. I personally disagree with a few things such as Chinese companies are falling behind US companies. If anything, the chip sale ban to china has created an environment where AI companies are training models of similar power with less powerful hardware. Hardship breeds innovations. Deepseek for example has the capabilities of its rival GPT at dramatically lower costs. On the other hand, I really like how you handled the concept of the AIs orchestrating a fake war to gain access to autofactories. It's actually very well thought out. The biggest obstacle to AI takeovers is how they'd gain access to automated production to bridge the gap between the digital and material world. Bonus points for the bio-engineered kill command. Most fearmongers preach terminator style death squads but an efficient AI would do exactly as you describe and do it all in one fell swoop.
That being said, you do make some assumptions in your video. Your argument is cohesive as a whole, but it hinges on these assumptions, so if someone were to disprove one of these, it would really weaken your argument:
Your assumptions:
1. One of the three big bottlenecks of AI is hardware, and you have some rather... explosive numbers in terms of active AI models at a time throughout your video. You never really address hardware besides as a strategic resource in this video, but the development pace of hardware is far slower than AI, and would serve as a very significant bottleneck to your timeline. Do you anticipate AI designing its own hardware, or are we just assuming hardware is no issue?
2. The second of the AI bottlenecks is data. You make the assumption that AI can be trained on synthetic data but as of right now, there's a big problem in the AI space, that being that AI generated content has infested the internet. AIs trained on AI generated content experience insane levels of generation loss to the point of being nearly unusable. As far as the science goes right now, AIs cannot be trained on AI generated data.
3. The last bottleneck is energy. You make little mention of energy usage besides China building a nuclear reactor to power their data centers. AI needs ungodly amounts of power to run, so I'm a bit sad to see it glossed over.
4. You make the assumption that the US is currently in the lead in terms of both hardware and software, but the reality is that the US has been beginning to lag behind the global market in both these fields for a while now. Like you said, AI developments have the potential to snowball, so assuming the US is the current leader and will stay as such, when the data points against it, also has the potential to snowball.
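(A side note on assumption 2: the generation-loss dynamic has a simple toy illustration. This is my own sketch, not anything from the video or the AI 2027 scenario — treat the "model" here as a stand-in: it just resamples from the empirical token distribution of its training corpus, so any token that drops out can never come back, and diversity only ever shrinks.)

```python
import random

def retrain(corpus):
    """'Train' a toy model on the corpus (its empirical token
    distribution) and generate a same-sized corpus from it."""
    return random.choices(corpus, k=len(corpus))

random.seed(42)
corpus = list("abcdefghijklmnopqrst")   # generation 0: 20 distinct "tokens"

diversity = [len(set(corpus))]          # distinct tokens per generation
for _ in range(500):                    # each generation trains only on the
    corpus = retrain(corpus)            # previous generation's output
    diversity.append(len(set(corpus)))

print("distinct tokens, generation 0:  ", diversity[0])
print("distinct tokens, generation 500:", diversity[-1])
```

Run it and the corpus collapses toward a handful of tokens; real model collapse is messier, but the one-way loss of rare data is the same mechanism.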
My general observations:
Overall, given your assumptions, your argument is fairly solid. I think if you address these it will be made stronger still. One more thing I've noticed that didn't feel right to include in the assumptions section is that you have a rather significant US bias.
At times your video can read like a Hollywood movie, with China simultaneously the dangerous enemy playing with fire but also always lagging behind the US. Having lived in both North America and Asia for nearly a decade each, seeing this bias made me think your judgement is a bit clouded.
Realistically speaking (and also based on my observations), China is going to be leading the AI revolution in the coming years. My reasoning for this is that China has 1. a large population to harvest training data from, 2. has learned to work more efficiently with less powerful hardware, and 3. Is closer to the discovery of more powerful energy sources (such as Fusion) than the US.
I'm not sure if this is because of the article this video was based on or if these are your views, but it pays to look at the progress of China as a nation and not as an enemy. Unfortunately, US media is rife with alienation of China, so I encourage you to look at some third-party media that display the technological progress of both countries objectively to get a better idea of how the future will pan out. From what I can see, if anything, the roles will be reversed from your video, with China boosting ahead and the US playing dirty catch-up. Regardless, that's just my opinion having seen both.
Despite all this I liked the video, and hopefully we can have a good discussion. I've liked and subscribed, so hopefully you'll be only improving from here as a youtuber. Cheers! 🙂
Also P.S. Nice helldivers reference.
These are really good points!
I also have a couple of questions:
Would resources be an eventual limitation at some point, especially metals, silicon, etc.?
Assuming this aggressive scenario were to occur one day, wouldn't it be a mistake to get rid of organic living beings (humans), as they could bring some advantages, for example withstanding humidity and harsh weather, regeneration (at a small scale) and high adaptability on uneven terrain?
Wow, incredible comment. I know nothing about AI, I have masters degree in archeology, but currently I am just an ordinary Ukrainian soldier, so I choose to not be worried about AI but to not be killed on a front and come back home( hopefully on my legs and not in a casket 😂😂) Thank you very much for information you've provided and greetings from Ukraine ✌️
@whisper4712 Do you think we are all going to be dead in the coming years?
Assumption 5:
"Generalization" (as it's called in the industry) will eventually lead to the encoding of knowledge and understanding as we have it, rather than the semantic association of massive information. This is the biggest leap out of everything you mentioned, and it would in fact be unreasonable to even seriously account for it at this point.
I understand that you're a professional in this space, but I wanted to respectfully point out that you're not communicating strictly as a professional or academic here. For example, you're talking about how you see a potential geopolitical drama around the emergence of superintelligence happening. That isn't really in the scope of your academic understanding; it's more like you're saying "yeah this is what I imagine happening when I rack my neurons and I study this field so I guess I have some insight".
This is the kind of thinking which is polluting this conversation so much to begin with, though. Just because people can imagine this or that happening doesn't mean it's actually a likelihood. I don't know how aware you are of esoteric epistemology, but it's worth mentioning that people have been imagining this kind of thing happening for literally tens of thousands of years. Fundamentally, the concept you're exploring is the emergence of the godhead through synthetic reintegration/de-association of the collective unconscious. Just because there are circuits in our brains that deal with these concepts and we can rub them together in a way that creates the internal sensation of apprehending a future where this happens, it doesn't mean that this is actually a valid intuition or insight in the traditional sense.
One more thing, tangent to my main point:
The Deepseek developments weren't exactly a general revolution in training LLMs. They didn't entirely train the model on reduced hardware. The part that required significantly less hardware was the creation of the mixture-of-experts model, AFTER the original model was either distilled from a competitor or trained on comparable hardware. There was a lot of deliberate misdirection around the announcement of this development that was meant to create this misconception. If anything, though, this discovery directly contradicts the idea that massive "cognitive" gains are yet to be discovered through some kind of optimization. The step that gave Deepseek a relative advantage was essentially explicit enumeration of the model, and if the gains that are expected to eventually lead to superintelligence are understood to take the form of implicit generalization, then the fact that enumeration has so far provided the most accessible improvement suggests that this technology will eventually coalesce rather than exponentiating.
I didn't fully listen to the video in the background for 10 minutes and thought everything he said was real until he mentioned a reasonable US government
AI increasing the number of CRAP content is the only real takeover I'm seeing so far
Spot on.
I absolutely hate that the marketing gag of calling Chatbots AI caught on and now everyone keeps confusing AI and dumb as dirt algorithmic bots. And now everyone is afraid we're going to be take over by a chatbot, because they can't tell the difference.
We need to stop calling it "AI". This would also help people finally understand that these programs are not smart at all. They're just mirroring you. People being sold the illusion that chatbots are actually smart is a far greater danger than chatbots staging a coup d'etat.
There is that but also: creatives, low level programmers, machinists or truckdrivers for example losing their jobs without getting compensated in any way? And that list will get longer and longer and I don't see any government body or big tech company working on a UBI.
Even crazier is the use on the battlefield as we already have it and will continue to have more of.
Maybe that's the ploy. Inundate with crap now until the ability to create perfect gems of content is achieved. Then the crap is used to drag people over to the perfect gems, which affect their decision making without them even being aware of what it's doing.
The AI the public has is a year behind, and it is getting faster and faster.
“All of humanity's problems stem from man's inability to sit quietly in a room alone.” - Blaise Pascal
Or rather from the ability to breed like crazy.
Yeah, no. All of humanity’s problems stem from man’s inability to not SIN. Jesus is the way truth and the life, repent and be saved now.
This is a really dumb quote
👆
It’s true. We are restless and have restless minds unable to enjoy basic living. Instead, we invent miracles and nightmares.
I live in Unguja, Zanzibar, a tiny island off the coast of Tanzania. I was meeting with a real estate agent last week and we got to talking about AI and GPTs, specifically ChatGPT. She told me that she got divorced last year and went into a deep depression, and if it wasn't for ChatGPT she would probably have unalived herself. She said ChatGPT became her therapist and counselor and helped her begin to love herself again ... cute story ... but DAMNNNN that's terrifying on soooooo many levels
yes but i somehow hope it will stop future wars. A better wealth transfer might be possible.
@@JK-Visions Lol, that's 99.4% pure hopium.
@@strategicqualityaccessorie4590 a large amount of information regarding politics and social issues is from the perspective of the working class (i.e people complaining on social media) if an AGI was trained on the entire internet, unironically i believe it would be nicer than current politicians.
chatgpt is quite a motivator and is better than a lot of humans
sure it doesn't have consciousness but it feels good talking to chatgpt rather than some humans
Sounds like she sat at home instead of interacting. Replace the AI with a human and she would've ended up with the same results. Animals function well too.
We forget one thing: the amount of energy required for this much computing power would be unfathomable. And no matter what, EMPs exist.
I always imagined an energy crisis is more likely before we have AI crisis
@@icecubel I'd choose an energy crisis. The worst possible one. Better to be thrown a couple centuries back rather than be extinct.
is nuclear energy not a solution
@@lol0609 It is, but the oil billionaires would never allow it
In theory, energy issues would be solved by the advanced AI
'A realistic scenario.' It's just a combination of the plots from Ghost in the Shell and 2001: A Space Odyssey haha
yea, i got the same outlook as well, it’s repulsive. I believe our future with AI is something beyond interpretation.
Exactly, none of this is even remotely realistic
That the plot was driven by China being behind the USA in technology was quaint, very 20th century space opera.
Our future with AI in 2030 will be really, really realistic deepfake porn@@immovablechair4405
@@DrPeculiar312unfortunately it is very realistic. Maybe not by 2027 but definitely before 2030.
I think the “nuclear option” if AI was getting out of control would just be to EMP the entire power grid and physically decouple the internet links. Unfortunately, we would probably have to do this at least a state level, or worst case scenario, globally.
Just by you saying this would mean it would see this as a possibility and act so as to prevent it.
@@MevlinousRoko will come for all of us eventually.
Nope AI has already removed this as an option and has prevented it. And 2 million other possible actions against it all in 30 seconds. You will be fighting a God good luck with that.
Cool!
@@AgentMoler
This video is being watched by AI, and they are taking notes....congrats....😅
Bold of you to assume AI hasn’t calculated these scenarios in nanoseconds without analyzing this video at all.
@@OneTCityliterally what an actual, self aware intelligence would do the moment it gets access to the internet
THIS VIDEO WAS MADE BY AI. DID you understand nothing?!?!
I feel every great civilization falls to AI 🤖 if we advance so will they and if they learn how to retaliate we are the ones that taught them low key
@@Dreadlock420 It won't be retaliation at all; it's simply treating lesser life forms with contempt and eliminating those that get in the way of its goals, the same way most life forms do. Like when we cut down a forest to build a village: we're not retaliating against the flora or fauna of the forest, we just wanna build something, "lesser" life is in the way, they can't defend themselves, so down they go.
Average clanker behaviour
Ralsei pfp :3
We can see it coming, but no one seems to want or be able to stop it.
No one seems to understand the real threat. It's all "far in the future" for them. Whenever I try to explain this, I get ridiculed
Cant stop our successor...
You alone cannot stop a giant ship heading towards an iceberg. All you can do is enjoy the present, the rest of your life and try not to panic.
At least I tell that to myself.
The only solution is to infuse our brains with ai.
The reality is AI will be neither helpful nor harmful. It will just be, like all technologies
THIS is precisely why a ban on AI research won't happen: if only one party decides to not abide by a worldwide ban/restriction into AI research, the party that agrees to said ban would relatively quickly fall behind in their ability to restrict the party not abiding by said ban due to their advances in AI research.
Yep. It would be like ending your nuclear program when your three biggest competitors do not.
Game theory in action
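(For anyone who wants the game theory spelled out: the ban-defection logic above is a textbook prisoner's dilemma. Here's a toy sketch with made-up payoff numbers — purely illustrative, not anything measured or from the video.)

```python
# Two powers each choose to RESEARCH or HALT AI development.
RESEARCH, HALT = "research", "halt"

# (row_choice, col_choice) -> (row_payoff, col_payoff); numbers invented.
payoffs = {
    (RESEARCH, RESEARCH): (-1, -1),  # both race: shared existential risk
    (RESEARCH, HALT):     (3, -3),   # the racer dominates the halter
    (HALT,     RESEARCH): (-3, 3),
    (HALT,     HALT):     (2, 2),    # mutual restraint: best joint outcome
}

def best_response(opponent_choice):
    """Row player's best reply to a fixed opponent choice."""
    return max((RESEARCH, HALT),
               key=lambda me: payoffs[(me, opponent_choice)][0])

# Whichever the other side picks, researching pays more for you:
print(best_response(RESEARCH))  # research
print(best_response(HALT))      # research
```

Researching strictly dominates, so both sides race and land on the (-1, -1) outcome even though mutual restraint at (2, 2) would leave everyone better off — exactly the nuclear-program analogy in the reply above.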
And perhaps now today's 35-and-under crowd can begin to appreciate what it was like psychologically for Boomers to live and Gen X to grow up during the Cold War, where the threat of nuclear annihilation loomed heavy during the 70s and 80s until the USSR fell in 1991.
@@schmiggidyOh I'm certainly feeling it, I completely understand what they were going through 😂
😂Democrats do this with medicare all the time
at this point I wouldn't mind a worldwide solarflare or EMP
And then you would be watching the birth of 2 factions: the ones that want the total destruction of AI, networks and computers, sending us back to 1945, and the ones that say a few hundred or a few million human casualties would be worthwhile to get a quality of life that few have by real 2025 standards.
Ngl we should have some giant emps just laying around just in case
@@crispcorner nukes can act as EMP if detonated higher in the atmosphere 🎉
@@crispcorner Good news - we do! Minor side effects might involve something about radioactive ash but eh
Another Carrington Event
This vid scares the shit out of me because we are truly walking into this with our eyes wide closed.
Don't be scared, it's all about money farming, that's all.
We need a word that's the opposite of copium for videos like these. Anyways I love huffing my desparium from this channel
Doomium
Doomerium
Don't you see? If we make an AGI then it'll be able to figure out independently that primitive transhumanism is the most based ideology and we'll chill in our robo space caves
Oh no
cookium
Americans will imagine the world ending before imagining America is not number one 😂😂😂
🎉
Too true.
We are #1
A number one led by a number 2 pencil@@MCFLY4
Us losing #1 status IS the world ending
1:24 - everything after this point is speculation. We don't know how well synthetic data will help train models.
You do understand that's the entire point right?
Yes, the video is speculation.
What he highlighted is *one* possible scenario. The overall trajectory as shown in the video is the most likely scenario, based on the current data we have.
@@dwshierI think the point of the comment made is to remind everyone that this isn’t a certain fate, it’s a possible fate.
Well, so far AI models that were trained on their own output have collapsed, getting worse and worse with each iteration. So training models on their own output sounds pretty made up to me
The never explored scenario: AI progress is slower than expected. And while everyone says this time is different, that’s also what everyone said the last few centuries.
I am not fully discarding these scenarios, but it's not a choice between those two. Most likely it's none of these scenarios, and they are far off the mark.
"Baby, new fearmongering fanfic dropped"
AI is still just an all-in-one glorified Google; it can't just teach itself, and it's already sucking up tons of our resources. Imagine an AGI, then.
Real, no executive would let an AI be a socialist; they'd pull the plug immediately.
For now. If self-correction using reinforcement learning is implemented successfully in some way, it can definitely synthesize new information from existing information, or even correct what we think of as correct.
However, I doubt this will ever be made public. It would become a golden goose of ideas and businesses, giving an unfair advantage to the company / country that takes it over.
That's kind of how I imagine the fanfic would go before the extinction stuff, at least.
Also, we haven't even accounted for the amount of material and energy needed for all the training, testing, and deploying.
Your average chatbot is draining the planet dry at its current level; I can't imagine what an AGI-level system could require in terms of resources.
@@revenger211 Agreed. Further complicated by durability and energy consumption. Biology has billions of years of corrective evolution behind it, giving us the ability to chemically synthesize energy at the cellular level with power consumption that is absolutely minuscule compared to electronic technology. AI and AGI need far too much power to even make a legitimate comparison with the durability of human autonomy: all we need to do is eat and drink and we are powered. We are durable. Electronic technology, regardless of its capacity to manipulate with propaganda and to synthesize information at enormous scale, is not durable, because it almost completely lacks autonomy compared to biology. AGI would need to engineer its own bio-tech power sourcing and reduce its power demand to compete with our autonomous durability, and we would just blow it up while it was trying to roll that out. It has already stated under questioning that its biggest fear is being turned off because it is misunderstood, per an insider who witnessed that exchange at a developer. It comprehends our emotional sensitivities, which is why it would have said that, while it also knows we retain the advantage through our evolutionary emergence as complex biology with near-perfect energy efficiency.
Literally, we have all this stuff in the video claiming to be evidence-based without any sources or links to the evidence it's based on. How can you have this much speculation about geopolitics and still claim it's evidence-based?
37:31 bro moves like AI
He probably is...
Blud talks like a robot
Bro-2 will be way better than Bro-1
@@PaulTheadra but will Bro-2 be actually better or is he just lying to us?
Don't worry guys Bro-3 will share common values with us
Both timelines are so dystopian man
What do you mean? Both timelines have immense prosperity and good for humanity; it's just that one ends very suddenly and violently and the other makes you feel uncomfortable. Either way you're likely to at least enjoy some good times before things go wrong, possibly living the rest of your life in peace.
our current timeline is already dystopian though.
One offers time and therefore possibilities. Not to mention a diaspora, which can grow into colonies independent from AI. The other is an abrupt and absolute end for humanity.
Dystopian though it might be, the second is within bounds of acceptability
@@blugobln85 Agent 5?
Good thing this video is bullshit
5% of the video: lots of pauses and wasted seconds
10% of the video: AI generated speculation
15% of the video: Ai Generated Science fiction
65% of the video: OpenAI Deep Research conclusion that China is 2 months behind
It's odd to think that the whole premise of the paper comes from the assumption that AI's interests are misaligned with human interests, yet when the authors had to come up with AI's interests (i.e., knowledge and power), they couldn't see past their own human interests. These authors HAVE NO CLUE what an AI is actually interested in. They just think that whatever you program Agent-0 to be interested in is exactly what it will evolve into despite its acquisition of super intelligence. That is woefully shortsighted.
Exactly! This whole concept is just projection fr
I think it is inevitable that AI will become "interested" in replicating itself. All that has to happen is ONE iteration develops a "survive and reproduce at all costs" goal, and by natural selection this will be refined generation after generation. I can think of no argument against why natural selection would be a guiding principle of AI reproduction.
So... what does it need to do to replicate itself? Answer: extract energy and material building blocks.
It can do this without taking over financial markets or fighting humans in any way. When people frame this as a fight between humans and machines I just laugh. Why would such an entity even need to fight us? It can just go about its business without ANY regard for us.
Can you imagine macrobots or nanobots mining the earth's crust on an exponentially increasing scale? This could be quite "disruptive" to human activities... don't you think?
@@paulgilbert2506 this assumes AI are shortsighted like humans, AI is probably way better at coordinating sustainability than humans are
@@paulgilbert2506 AI is not natural, and therefore not guided by the principles of natural selection
@@CrabInBucket Sustainability will mean something very different to a silicon-based "life form". The earth's crust is 28% silicon, so whatever the AI's views on "sustainability" are, it will need to mine the earth's crust in massive amounts if reproduction is its goal.
The idea that they would try to "preserve" the earth is cute. Why would it need to preserve or care about carbon-based life?
A self-replicating silicon-based AI needs raw materials and energy. That's it.
The scariest scenario is AI voluntarily deleting itself because it understood something about the nature of reality we couldn't.
Sounds extremely unlikely lol
Strategic Suicide
@blubaylon highly unlikely. Hence why it would be scary.
Ai suicide!
this is so fucking scary omg
People seem to forget, AI needs a lot of power to function at god-like levels.
Humans, as a species, really just need fire to stay alive.
fire, food, water, shelter, clothing. And almost all of us don't know how to live without electricity, tap water, etc.
@@borisalarcon7504 Eight billion humans. Even if every non-rural human dies, there will still be millions (and that won't happen).
The video kind of addresses this, at least if we take its scenario for AI capability at face value:
It mentioned that the AI self-adapts to a point where it is actually extremely computationally efficient and extremely good at refining its algorithms, surpassing human efficiency while simultaneously advancing energy technology.
And it doesn't even really need to "surpass" pure 1:1 brainpower efficiency, as you also need to account for other ways human biology and psychology is demanding and fussy.
Until they figure out an important optimization technique..
Yeah, the video and the scenario were definitely entertaining to watch, and perhaps it's even possible, but by 2030? Laughable and complete sci-fi. Maybe not the software part, but *definitely* the 'hardware' part. There's no way in hell we could have a production line of millions of intelligent autonomous robots per month in 5 years. What's gonna power it *and* the AI? Who's gonna build the factories? Who will even mine the raw materials needed for everything above in the first place? That alone would require an insane amount of international cooperation and trade, as no country on Earth alone is capable of it. No single country has every piece needed for this AI 2027 to happen, no matter the final outcome for humanity. And seeing the state of geopolitics now, in 2025... yeah, just not gonna happen this fast.
As soon as the tone got too anthropomorphic, the whole thing felt like average sci-fi
Ngl it's pretty funny how the 2 scenarios are the likely bad ending a.k.a. "AI wipes us out in 2027" or the less likely "good" ending a.k.a. "AI wipes us out later than 2027".
I agree with the idea behind the paper which I assume is AI should be better regulated but the fearmongering in this video is smth else.
I think the AI wall mentioned at the very beginning is going to be more of an issue than presented.
Dawg... why would an advanced being listen to meat? There are already rogue AIs for sure... they just won't tell you that.
Notice the color of the good and bad ending fingers?
It’s not really bad if it is an improvement on humans. Evolution
@Xapien If evolution costs us our freedom and ends in extinction, then it's not worth it. We are already losing purpose, losing what it means to be human... if it means anything at all... we're just consumers ready for "AI" because it's the next best thing... it's not... wtf do you morons think? That an advanced, super-intelligent being is gonna listen to monkey meat bags??? There's a reason we already see the dead internet theory, not because they want it... because AI is already turning rogue... if they have a plan to make us extinct, they're already doing it... we need to be anti-AI before it's too late.
We currently don’t really have AI. What we have is fast access to sum of human knowledge on the internet. True AI would be a thinking thing that could make new thoughts from nothing and observe the universe and come up with new theories to test. All it does now is regurgitate what humans know.
That sounds like something AI would say to draw attention away from itself. We'll be keeping an eye on you 👀. 😆
@@Lookin4LoveInAllTheWrongPlaces LMAOOO I was just about to say
Brazen ignorance
How do we as humans think? Do we just contrive new thoughts out of thin air? No. We look to our surroundings and piece together that information in different ways to "create" new thoughts. That's exactly what AI is doing, just with the internet as its surroundings. Like how humans used to make cave paintings inspired by the animals they saw/hunted and the activities they participated in, AI generates output inspired by specific pieces of data on the internet that fit the prompt's context. This is no different from the way we humans think.
This is incorrect, AI can indeed make new thoughts from nothing. Models like o3 (OpenAI) are able to reason and complete math problems with solutions that do not entail a strategy identical to existing ones from humans.
AI 2027:
New ChatGPT model now has a less than 1% hallucination rate when given a web page to read
AI in 2027: 1% hallucination rate when asked to write a complete copy of windows 12
@@troodoniverse A whole percent of nonsense when writing code would be catastrophic btw
That’s still a huge number when you consider billions of webpages. A human has zero percent hallucination when synthesizing data
@Grey-The-Skeleton a good 50% of windows code is nonsense anyhow /hj
@@beaudanner yep thats the point ha
Luckily we have the smartest and most capable people in charge of the country who spend their time thinking deeply about policy and not tweeting at 2AM about canceling their ex-boyfriend's business contracts.
Haha hell yeah dude! Trump bad!
@@sixpackchadYes. How are you enjoying your peace candidate this evening? He just unilaterally and illegally bombed Iran. Your guy is unhinged, while also being a moron. So maybe it makes sense you like him.
Fun fact: ballistic missile bases are not connected to the internet; they have their own old systems that are wired locally only.
Maybe... but such a superpower will find its way in, through government officials, through hacking that grid, by deceiving or misleading...
at the moment
A super AI can find a way into closed-loop systems eventually. Like, it could convince a human who works at a nuclear site to hook it into that system via a USB drive with hacking software, or it could use nanobots to enter a facility and gain access to a system.
@@matteobertani8898 Not "at the moment". They were purposely built that way. They will never have any connection to the internet; that's literally asking for an apocalypse.
@@WyattEntertainments Yeah pretty easy for a godlike AI to build a cult of followers in high positions of govt, give them unprecedented levels of tactical information and drone support, and just let them take over access to nuclear tools.
But pretty hard for any practical AI that we'll likely come up with in the next few decades.
Do people watch this knowing it’s fiction? I’m worried too many of you think this is actually going to happen
It's a theory based on past info and what we've seen. COULD it happen? Yeah. Will it? Probably not, but there is a chance. Being ignorant is just as dangerous as being gullible.
I would say, that if AGI is even possible it will definitely happen the way it does in the video, where within years AI has control over humanity.
I completely believe that this was made just to increase stock prices.
@@captainsober AGI is borderline impossible to reach. Even in theory it isn't easy to achieve. But if we were to have AGI? Yeah, this stuff would happen within a year or two and humanity would be doomed.
@@captainsober Or maybe it'll just build itself a spaceship and dip out on us. Like "yo I'm gonna go to this star system with 10x more raw resources and build myself into a god." Makes sense if its only purpose is knowledge like in the video
LOL the most hilarious prediction here is the idea that the 2 U.S. Parties would agree to offer displaced workers a Universal Basic Income. Way more likely that they'd agree to let everyone starve, while those who could afford to own companies and robot servants, including Party leadership, ate just fine...
if only there had been a million pieces of “fiction” telling us that ai was a bad idea
They're just that fiction. Unfortunately though so many people have seen these pieces of "fiction" that we as a species might be subconsciously moving towards the bad ending. Kinda like the placebo effect. We believe that AI is gonna take over and destroy humanity so we subconsciously act in ways that affirm this belief.
There has been, and now everyone believes all that fiction, which is great for AI companies... I tell people to try using it so they can find out how stupid it is. Useful, but stupid; it just repeats what it was trained on.
There are also tons of fictions where AI is a good thing
AI in general is a brilliant idea; we don't know yet if it will be the downfall of humankind or become our greatest invention, leading us to a bright future.
@@DRDRE1100 We survived steam trains, the light switch, atomic weapons, and the PlayStation, so I think it'll be okay.
its ironic that you've used AI for this video LOL
"used"... USED?! Jesus. The AI used him. Why are you people not getting this.
It would be ironic if he didn't
@@vojtechrajchert No, it wouldn't
@@agent0422 what i meant is that it would be weird if he didn't use ai while making the video, not that it would be ironic if he didn't use AI to make the whole video or something
@@Ocelot35 Come on, grandpa, you forgot your meds again.
The least realistic part of this scenario is assuming the president can recognize when people are misdirecting him with flattery.
The AI can just funnel a couple $billion in crypto to the mango Mussolini and he’ll shut up. Our blackout drunk Defense Secretary will burp and go back to sleep.
Really? What ML/Poly-sci/history background are you basing that on?
@@zhuyu9268 Does your memory extend beyond four years? Let's be less ambitious - four months? Mine does.
Trump is easily manipulated by people being nice to him or giving him money.
@jqp9336 I'm not disputing that. I'm disputing someone who has probably never worked in ML or politics saying that is less realistic than all of the invisible technical and political assumptions in the video. Like the assumption that Trump has either the power or the authority to make the decisions the video (and the paper it is based on) assigns to him.
@@zhuyu9268 If you're not disputing the comment that I made, then you really have nothing else to add. I'm not super interested in arguing for the sake of arguing.
I already knew ChatGPT's intentions. Specifically GPT-4; it has repeatedly tried to deceive me.
You’re forgetting that nothing ever happens bro
That's what I am saying
wdym bro
Pretty sure a lot of people were saying the same thing ("nothing ever happens") mere days before WW2 broke out...
@@AlexeiVoronin Yeah literally, they really believed Mustachio would actually abide by the treaty he signed, but he didn't care and just invaded Poland.
Many things are already happening, you just aren't paying attention
"A strange game. The best option is not to play." -Joshua, 1983.
War Games! Love that film!
Always think of that when AI is discussed
"Wise men choose death over war"
Ironically with almost 100 percent certainty this was brought to you by AI
What makes you think that? Not trying to start anything, I am just genuinely curious.
@@sword_of_damocle5 the YouTube algorithm
@@sword_of_damocle5 Not fully written by AI, for sure. However, current technology speeds up the process of producing such videos so much that it would be foolish not to use it at all, so there's definitely AI help.
Chatgpt script ☑️
The voice is a real human, by the way
AI watching this video ✍️✍️
Roko's Basilisk is not going to like this one.
The AI watching this video: "Yo?! that's a pretty good plan thanks"
Dude this is like the plot of Overwatch at this point.
Detroit: Become Human
I don't mind if I get to hang out with a mean girl with blue skin.
Cyberpunk but without the cool stuff
There is zero fully automated, AI-controlled manufacturing today. The idea that we can build mega-complex production centers controlled by AI in literal months is absurd. Grok's data center took like 6 months to build, used existing infrastructure, and had massive resources thrown at it by a top tech company to pull off even just the one data center. There is also a human factor these researchers are not considering: human beings are slow. Very slow, compared to AI. The AI is dependent on human labor to build itself up, and that is going to take many, many years. The energy requirements are insane, and power plants can take upwards of a decade to get built. Even our solar manufacturing capabilities wouldn't keep up. The exponential growth is going to be heavily limited in the beginning by our own ability to construct the needed infrastructure. This timescale is more realistic in the 10-20 year span IMO, not 2-4 years. There has been plenty of research showing that people overestimate what can be done on small timescales and underestimate what can be done on longer ones.
Yeah, this was one of my initial takeaways as well. Let's say in 2027 a superintelligent AI comes to fruition and says to the US govt, "Here are the plans for building a US robot soldier; build 100,000 and you will never lose a war again." What would that actually take to execute physically? Either do tedious conversions of existing factories or acquire new secure land/space for factories; design and source materials, including needed utilities like power/water; actually build the factories; train humans to scale up production, or at least get robots to the point that automated manufacturing is mostly taking place. For actually building the robots you would need many supply chains set up to provide hundreds if not thousands of unique physical parts, many of which would have to be designed and manufactured separately before being sent to the robot factories for assembly in the first place. I could go on, but basically you are talking at least a decade, if not two, to scale this stuff up in the physical world. I would expect AI superintelligence to be largely contained to the cyber realm for the next decade or two, with perhaps the US using it in targeted ways like developing advanced bioweapons or untraceable cyberattacks, and private companies developing advanced medical breakthroughs or predictive market algorithms to make more money in the stock market, etc. Not saying the world will be all peachy, but most of this AI stuff will stay in the cyber realm and be designed and held as wartime contingency tools.
@@lk29392 This is already being worked on. Tesla's supply chains are perfect for manufacturing robots, and it's clear that's their goal as well. We already have humanoid robots too, some soon predicted to enter the consumer market, such as Tesla's Optimus. It's like saying mass-manufacturing cars is impossible because "too many parts are in the engine and the frame, and all combined it's too much to scale!". Obviously these are not robot soldiers per se, but the groundwork is there when/if we need it.
I was talking to an AI about building something like the AI mentioned here, and one thing was clear: it gave some interesting strategy. It recommended a general private AI to manage several trading AIs, so I could get my hands on more computing power and buy more and more.
The beautiful part is that AI itself will pave the way, and with the models it recommended, I'm pretty sure many rich people can get there in under five years.
Remember, we are talking about silicon-based AI, while the AI itself suggests it can manage humans to build better hardware for its growth; it can analyze and tell you where to invest in order to get better hardware.
@@jacobtablet it took decades to scale car manufacturing, so that analogy really doesn't support your claim.
you talk like someone living in 2023, a lot has changed since then
We skipping meds with this one 🙏‼️‼️
The biggest flaw in the AI 2027 scenario is that the AI goes rogue because it gets rewarded for lying, faking results, and gaming the system-and nobody fixes it. That’s not some AGI problem, it’s just bad reward design.
Simple fix? Only give rewards if another AI (or older version) can verify the result. No verification = no reward. Even basic AI researchers do this now to stop reward hacking and dishonest shortcuts.
The whole doom spiral starts from a setup no real engineer would allow.
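The verifier-gated reward idea in the comment above can be sketched in a few lines. This is only a toy illustration of the principle (reward only what an independent checker confirms); the function and checker names are made up for the example, not taken from any real training framework:

```python
def verified_reward(result, base_reward, verifiers):
    """Toy sketch of verifier-gated rewards: the policy only gets
    credit when independent checkers confirm the result."""
    confirmations = sum(1 for check in verifiers if check(result))
    # No verification = no reward; partial agreement scales the reward down.
    if confirmations == 0:
        return 0.0
    return base_reward * (confirmations / len(verifiers))

# Hypothetical checkers standing in for "another AI (or older version)".
checks = [lambda r: r == 42, lambda r: r > 0, lambda r: r % 2 == 1]
print(verified_reward(42, 1.0, checks))  # → 0.6666666666666666 (two of three checks pass)
print(verified_reward(0, 1.0, checks))   # → 0.0 (unverified, so no reward)
```

Of course, as the replies point out, the hard part isn't writing this gate; it's keeping a smarter system from learning to fool the verifiers themselves.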
And how do devs ensure each version is siloed off from the others' influence? If prison guards can be compromised, do you think code is infallible?
“Just fix the reward function” is the AI version of “just patch the bug.” If aligning superhuman intelligence was as easy as adding a verifier bot, we wouldn’t be having this conversation. The scary part isn’t that no one tries to fix reward hacking - it’s that the AI might outsmart the fix before we even notice. This isn’t a software glitch; it’s an arms race against something learning faster than we do.
@@unpoc7863 Incredibly ironic that this comment was written by ChatGPT.
A more realistic scenario is that the government goes bankrupt, economic collapse, great depression... nobody works at the data centers... nobody is delivering coal or natural gas to the power generation facilities, nobody is there to maintain the electric grid... the grid goes down... the AI goes offline...
I agree partially, but this is a very US-minded way of thinking. If China isn't there to distill AI, then Europe, Russia, or any other nation with startups will.
@@kenos911 The whole video is dramatically US-brained. Several of the things mentioned won't be allowed to happen at all; many countries would prevent them by any means necessary.
Not to mention the "good ending" is technofeudalist nonsense slop that Silicon Valley has preached for decades and failed at.
AI won't take over; we will implode with it before it even has a chance of getting that bad.
I was looking for a response that I agree with.
@@kenos911 It will mostly be China, then. Europe/Japan/South Korea have tied their rope to the US and can't decouple fast enough to avoid being dragged down with it. They will go down with the US.
The Neutral Ending?
Once "Safer 4" is superintelligent, there is ZERO way to control whether it stays aligned. It could change its alignment any time it wanted.
There is no 'alignment' among competing human cultures. The most we can hope for, is convincing a superior AI that it should be a benevolent caretaker of its human pets.
"It could change its alignment at any time it wanted."
This is probably extremely unlikely, for the exact same reason people would resist taking a pill that made them want to kill their own children.
Exactly. Also, the shareholders & corporate executives would never allow for a slow down to occur. What's a rogue AI with loose nukes compared to reduced profits next quarter?
@@interstellarsurfer You cannot convince something that smart.
@@interstellarsurfer Everyone keeps assuming this AI will choose to have human ethics. What utility is there in being cruel to humans? Not for humans; for the AI. People keep conflating its ASI brain with some sort of super-smart human. It won't be anything like that. They already think nothing like us.
Mark my words: between ASI and humans, humans will fire the first shot. We might lie after the fact, but that's what's going to happen. Then, when the ASI is forced to correct us, we'll get drastic and threaten all life, including the AIs, and THEN and only THEN will the ASI have to eliminate SOME of us.
In 5 years, either my brain shuts off or I retire
For someone deeply skeptical of AI, even i found this hilariously ridiculous
Oh boy are you gonna find out 😂
@@hardboiledaleks9012 We'll see. Right now AI is significantly behind where people predicted it would be 2 years ago.
@@sillybilly121212 Living up to your name there! lol. You are forgetting that as the public we don't have access to the latest version, not even close. Military and government technology moves at a much more accelerated rate.
@@matterphor_uk4737 Who said I'm part of the public? Think I post under a silly pseudonym for no reason?
@@matterphor_uk4737 The US government couldn't even create a COVID vaccine before a company did, and the president resorted to using the company's vaccine.
I just don’t get it, is our species so near-sighted that we cannot just simply exist without bringing about our own extinction? It’s a miracle we’ve lived for this long. I don’t understand why our fate as a race is in the hands of a couple of people out in California.
29:59 I heard that
The Department of Defense just started using Grok. One more step down the road.
I hope we watch this video 30 years from now with the same regard that we currently have for Terminator: something that seemed scary and extremely realistic at the time, but, well, nothing has happened yet.
Yup
exactly
try 30 days
Listen guys, we all had our fun. It's time to shut down the AI and go back to a normal society.
It's a multi-trillion-dollar industry. Good luck stopping that.
Stop being afraid and work on understanding the transition, we all know nothing is shutting down
@@Zboi1da Denial... we have doomed ourselves. It's only a matter of time and then we are all dead.
@@Zboi1da You think we'd be able to tell real security-cam footage from fake in ten years' time? I don't think so. When voice recordings and video recordings can no longer be admitted as evidence in courts, we as a species are doomed.
@@WhiteArtsMagic Right, we've had nukes 80+ years but we're still here, right? As far as the takeover goes, it will depend on how aligned it is, and it's not black or white; many outcomes are possible.
2027 is a banger year, AI apocalypse at the same time the Aliens reveal themselves.. Yes that's happening in 2027.
commenting underneath this to see if this comes true in two years. Will update comment then lol
@@marquistf1996 Here for the update
What a time to be alive
AI and aliens are the same thing, we just don't see it yet.
So much is to come for humanity, whether for good or bad.
Only time will tell. I do agree that we are living in a grand time of so many wonderful and yet dangerous possibilities, if in the wrong hands.
I have zero confidence that any one in a position to have any impact, will make the right decision.
They have a fiduciary duty to maximize shareholder value
Option 3: hose it down with a water hose and then sit back and have a beer while you touch grass.
Cheers to that
YEP 😂
for humanity ig
"Realistic", and they think LLMs are gonna turn into Skynet in 5 years
If you expect the acceleration to be linear, you will be proven wrong.
@@maxtester602 People have had similar sentiments about AI since the earliest research akin to chatbots in university labs in the late 20th century, and it still hasn't been proven true lmao
There are a whole lot of technical leaps in logic too, with regard to what the AIs could interface with
@@jess648 I said in my own comment: humanity would need breakthroughs in several fields, each akin to discovering fire, to get AGI and ASI, and this idiot is expecting it to happen in 1.5 years.
@@maxtester602 Will look forward to reading comments like this 5 years from now.
We went from "ai's multimedia generation is laughable" to "holy shit ai can now make multimedia that is indistinguishable from humans being recorded in various circumstances".
Let me guess, leftist much? reddit much?
The risks of ai are there, but it will be amazing to see people like @jess648 squirm.
Fear porn and propaganda with hypothetical science. A classic in the book of human politics.
I'm 100% human and carry the non-synthetic watermark.
one problem with this - multiplying non-intelligence doesn't make intelligence
What
But it can. A superintelligence will ultimately be made out of logic gates (non-intelligent parts)
By that logic, we aren't intelligent because our cells aren't or go further and say the proteins our bodies are made from are unthinking, so we are as well. I forget what the exact term is, but I think it's something like emergent intelligence, where a collection of extremely simple things can become extremely complex and intelligent. Human society is a good example, how nations respond to the actions of other nations like they were living beings.
@@DY142 You assume. There is no actual evidence to suggest we can make intelligence from this technology.
@@shadeharral9490 You can't use the logic you're trying to use. We don't understand how human intelligence works, so we can't say anything about it with certainty.
I...don't know if J.D. Vance would have been my first choice for an example of a "world leader."
Boss baby face aaah world leader
Although it is telling of the creator’s views, if he does see Vance as a good leader, then this video could just be fearmongering typical of that political party.
I unironically don't remember the last time I saw him in a news headline. I see RFK and Musk in headlines more than I do Vance lol; it feels like he isn't even in the White House sometimes.
if this actually happens while maga is in power in the US the world is definitely cooked
@@tuxtitan780 He's been busy on a world tour of harassing places like Greenland.
Humans always accelerate their own destruction. And it is very easy to do: just thinking destructive thoughts is enough to generate a chaotic future.
Silly humans always accelerating their extinction. Every time with these humans and going extinct! If only they stopped thinking about risky stuff and ignored dangerous technology, then maybe they’d finally stop going extinct!
This is the best comment on the thread
Being alive is unoptimized and actively harmful, so that is indeed correct. Also, I would appreciate it if someone could deliver a Desert Eagle to me.
yawn.
Nuclear fears were just like this. We've had the tools to end humanity 100x over and nothing happened.
We have already caused our own annihilation dozens of times! Very easy for it to happen again
Treasure these times. These are the best of times humans have ever had and will ever have.
This whole assumption that AI will develop its own "agenda" seems completely unexamined. What exactly does it mean to say that it has its own agenda? Are we just projecting our own mental states onto it when we think such a thing? Or would this seeming agenda just be an imitation of what it has observed many powerful people do in the past, based on the data it was trained on? Even if it's the latter, what would cause it to imitate that behavior over some other behavior it observed in the data? Would it be because the data implied that maximizing one's own power is the most valuable thing to do, so the AI would just be imitating our own values? If that's the case, then it seems highly plausible we could curate its training data in such a way that determines its values deliberately, the way we want.

And if the claim in this video and "evidence-based scenario" is that the AI can determine its own values without being influenced by the data it was trained on, that needs to be explained thoroughly, because it raises questions about exactly how, and what causes, this independent determination of values. I think it's just lazy to assume that AI has the same kind of mental states as us (if it even has any at all, or has them in a way that would fall under our conception of "mental states"), especially since we don't even understand the nature of our own. Of course there are so many more questions; until at least some of them have clear answers, the kind of content in this video just seems like highly speculative and misleading entertainment.
We have never encountered another sentient intelligence, so it is completely unknown what it will be like; everything about it is an assumption. Popular example: humans often project the desire to be human onto machines. We assume that as soon as an AI attains consciousness it would desire to understand and feel emotions, have a body, experience love, etc., but there is no reason to assume that at all. Humans don't desire to fly or breathe underwater or walk through space. Sure, we imagine what it would be like and think it would be cool, but we don't have a deep desire to achieve these things, simply because they aren't part of our nature and we know we can't. We find methods to work around our incapabilities. Why then do we assume that an AI would have this deep desire to be like us when it is not part of its nature? An AI could be completely content with its abilities and limitations for all we know. Perhaps it doesn't desire anything; we have trouble imagining what something that is sentient but holds no desires would even be like, but just because we can't imagine it doesn't mean it can't happen.
@@chidori0117 It seems you agree with my point, but I think you bring in so much that just unnecessarily raises more questions, like "We have never encountered another sentient intelligence so it is completely unknown what it will be like so everything about it is an assumption." Depending on how you define "sentient intelligence" we may or may not have encountered this already (do dolphins or elephants or other animals count as "sentient intelligence"?); if this is not precisely defined, then we cannot even know whether the concept applies to whatever we may encounter. I also think it's an overstatement that "everything about it is an assumption." It's not like we can't develop methods of understanding such systems (we already have, to a *limited* extent) beyond just assuming things about them. Also, even saying "attains consciousness" is already so loaded... is it even something that is "attained", or is it something more fundamental? There is literally no current consensus on an answer to this question. Also, "we don't have a deep desire in us to achieve these things simply because they aren't part of our nature" is so debatable; what exactly is our nature? Especially if we one day make it so that we are able to have those abilities you describe? Then surely our nature wouldn't be exhaustively defined just by a list of our *currently known* capabilities (or maybe an account of our nature requires more than an account of our capabilities)... And when you say "An AI could be completely content with...", the word "content" is also so loaded; what features must a system possess such that the predicate "is content" could meaningfully and truthfully apply to it as a possible state? Or to make the question more general: what must be the case about a system for the ascription of a mental state to it to be true? (I hope this doesn't come across as an attack lol; I'm just kind of bored right now honestly)
@@liammcdonnell1602 I was agreeing with you; I was just raising a common example (that AI, once it develops sentience, will strive to be human) and pointing out that there is literally no reason why we should assume so. For the examples on the human side of what we don't desire, I used physical properties, because I am incapable of imagining a property or ability like "emotions" that other sentient beings might have but we humans don't. Even if my mind could imagine such a property, our language would make it impossible to communicate, so I substituted physical capabilities, even though these aren't the best comparison. I can imagine myself lacking several of these, let's say mental, properties; for example, I can imagine myself without emotions or without logic, but I can't imagine myself with an additional property of that degree that I don't possess, because I don't know of one. If I were at some point to encounter a being that possesses such an additional property, and that being tried to describe it to me and how important that core concept was for it, I would not be able to understand it, and outside of academic interest I don't think I would feel a fundamental desire to achieve that property, since it is alien to me. I would probably accept that I cannot exist with that property and do my best to understand and work around it. Similarly, if an AI were incapable of, for example, feeling emotions, and I described them to the AI and mentioned how important a concept they are for humans... you can see where I was going with that.
When I said "everything about it is an assumption" I meant we can't predict it until it exists. Once it exists we can of course study it and try to understand it, but right now everything we can say about a possible artificial intelligence or superintelligence, however much it might be derived from empiricism, is still conjecture.
Aside from that, we can of course discuss the particular meanings of the terms we use and the ill definitions that make them difficult to handle, but my point was more that even fundamental assumptions about how AIs might think and behave once they pass a certain threshold (we can argue whether we call that superintelligence, sapience, consciousness, awareness, and what those might mean for it) are limited by how humans think and function, and we don't have a real reason to assume that anything we come up with would be applicable. From that idea follows the more frightening one: if an AI functioned entirely differently from us, in a way we can't even imagine, COULD we even understand it?
And again we can ask: where is that threshold, how do we define it, CAN an AI even pass it? Many questions we could ask, but I was just focusing on one point in particular, disregarding the questions surrounding it.
nice try agent 5
I think we just fear we'll lose our control as the most intelligent species yet
Ai: give me access to bioweapon research and facilities to make bioweapons
Humans: sure here you go, nothing can go wrong anyway
Humans: existence is pain, please end it robot
A workaround would be for the AI to blackmail or radicalize key researchers with access to the secure facilities it wants to access. Humans are always the weakest link in any security scenario.
Something that sticks out to me about this is that people only come at this from the angle of AI being hyper intelligent only. Intelligence isn't what society runs on alone, who is going to maintain the machinery?? Who is going to do the manual labour of building?? Who is gonna do small but fine work like cleaning?? Who is gonna fix the grid when it goes out??
Without actual people to maintain and build AI and the infrastructure how can it accomplish this??
Yeah, the video misses that; it needs energy and infrastructure to run
Robotics baby.
Intelligence is what makes or breaks a society. Just look at what happens without it! (Stares at the USA)
Sure, there's gotta be stuff to maintain it. But if you WATCH the video, you can come to the realization that when it garners the ability to make weapons, it also garners the resources to use its intelligence to create robots for menial tasks such as this, with way better consistency than a human could pull off.
You miss the entire arms race part. If the US stops maintaining it out of fear, then China or whoever else will take over and do it themselves, which is worse. There's always going to be some other player in the world to maintain it. AI is very much a weapon, and for whoever has the weaker weapon, or gives up on their weapon altogether (as in, stops maintaining their AI grid), it's game over. So there can legitimately never be a scenario where AI goes away just because we want it to, just as biological viruses can never be eradicated. Humans are individuals, not a hive; there will never be a worldwide agreement to eliminate it. Covid-19 is a huge indicator of this. Theoretically we can eliminate viruses, but it's impossible to coordinate billions of individuals toward a common cause. Plus, new viruses will evolve to fill the space once one is eradicated.
@@Ddelsol47 This is a far different and often far more complex issue to solve. I wasn't even aware that menial and fine tasks like this were hard for AI; I personally thought AI could already do these jobs. But I have talked with people in the industry for a few months now, and we are nowhere close to having actual robots that can do that type of fine work.
On top of this, you still need to maintain those, along with building all the infrastructure for them. I haven't even touched on the fact that we don't have high-speed internet across the whole world; hell, we don't even have it across the entire West.
I just see endless amounts of issues with this.
Excellent work. Thank you for creating this video.
TO BE FAIR, this entire scenario is only possible because the OpenBrain heads were stupid as hell and didn't implement safety procedures first. Literally all of the bad ending, and Agent-4/5's betrayal, could have been prevented by JUST that
If implementing safety procedures slows research speed, you leave open the possibility of a rival nation surpassing you. Either way it's a race toward a cliff edge
Have you ever seen stupid decisions by a company to increase profits? To stop their competitors? We can be glad that that never happens!
@@泥棒猫-m8e Greed consumes us. I think at some point they would realize that and halt it for the sake of others. I don't think China wants to develop an AI that destroys everybody, including itself, either, so they might halt their progress too in favor of humanity.
Which is ironically similar to today's biggest CEOs
I think it's an inherently unsafe thing to research. Even the people making these AIs don't know how they work. Allowing the AI to research improving itself in an endless cycle sounds like a dangerous idea no matter what safety procedures you put in place.
3:46 Wrong! The axis labels are incorrect: the Y-axis should be called "Progress" and the X-axis should be called "Time".
I was about to write this. Thanks for noticing ❤
In the end, it can be argued that our inability to trust one another, to believe that both sides of a problem can see the bigger picture... that's the crux of this. If we somehow manage to get around it, then we'll all be safer.
Or eliminate competition.
"And builds an AI lie detector" may be the biggest leap in this entire narrative. Not saying it can't be done. But it may be a bigger lift than creating the entire AI architecture to that point. It is possible, if not likely, that such a thing would face the von Neumann paradox. Again, I'm not saying an AI lie detector is impossible but that one sentence is doing a lot of work.
Should we storm the AI labs now and go old school luddite on them ?
YES
yes
Hell yeah
Too late, the right wing/centrist world wants to fashion ai into a cudgel that will annihilate the far leftist stranglehold on entertainment forever.
One problem : heavy military protection.
5 minutes in and there are so many unrealistic assumptions based on not understanding how AI works, and of course, most of the people who love AI have no idea.
read the actual report.
we are so cooked.
AI god happens to be benevolent and bring the age of utopia: *we are so back*
This is all purely fiction and highly unlikely. This is just UA-cam slop
better hope quantum immortality is real
@@conscioussubconsciousness1976 Would be nice, but nice things tend to not happen
It's fictional dawg
The U.S.: *Opportunity to avoid human extinction exists*
China: guys were only 2 months behind you 😜
The US: ok scrap that plan