What Everyone Gets Wrong about AI
- Published Feb 8, 2025
- 🌏 Get NordVPN 2Y plan + 4 months extra here ➼ NordVPN.com/sa... It’s risk-free with Nord’s 30-day money-back guarantee! ✌
Most politicians totally misunderstand the trouble that artificial intelligence is going to bring. This isn’t a race for profit, it’s a race for power. And that power will be in the hands of a few very rich people. Does that sound like a good future?
🤓 Check out my new quiz app ➜ quizwithit.com/
💌 Support me on Donorbox ➜ donorbox.org/swtg
📝 Transcripts and written news on Substack ➜ sciencewtg.sub...
👉 Transcript with links to references on Patreon ➜ / sabine
📩 Free weekly science newsletter ➜ sabinehossenfe...
👂 Audio only podcast ➜ open.spotify.c...
🔗 Join this channel to get access to perks ➜
/ @sabinehossenfelder
🖼️ On instagram ➜ / sciencewtg
#science #sciencenews #ai #politics
“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”
- Frank Herbert, Dune
yeah, that exact scenario might come to pass.
The same thing happens with government.
@@gobadgego Exactly. We give politicians power in the hope they will solve our problems. But the power corrupts the politicians. Instead, we should come together voluntarily to solve problems in our communities and via non-profit orgs.
That process began in Europe in the 16th century, when papermaking machines, printing presses, and cloth-weaving looms, along with the investors and bankers who funded their building, began what we can call early industrial capitalism. The machines could run without tiring, 24/7. Workers, previously used to seasonal agricultural activity determined by the weather and daylight, were suddenly indoors and on machine time, working day and night. Those who built and owned those machines soon controlled the world. They still do. Those who invest in AI and control it will be the next rulers of the world.
@@TheDavidfallon Actually, coming from the fields to the factories made the poor better off financially. They could earn a dollar a day in the fields, but they could earn $3 a day in the factories. They used this money to feed their families. As time went on, employees got richer and richer. Now, they didn't get richer as fast as the factory owners, but the point is that this industrialization process does not represent a taking from the poor by the rich. It involves both the rich and the poor getting richer, just the rich getting richer faster. The result is a greater gap between the rich and the poor, but in the process the poor also are better off. And you are right that there is a risk if the few own and control AGI. But my bet is that the same thing happens with AGI that happened with industrialization: the poor will be better off, but those who control AGI will be MUCH better off.
As a software developer currently working on implementing actual AI-features in our software, I disagree that the "frontier models" are (almost) always better than smaller models, barring niche applications.
In many applications, using other small and freely available models, and tuning those according to your needs often yields better results than using the latest "big" LLM. Plus, the computation is much faster and less energy consuming; both things should not be underestimated when thinking about scalability.
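For what it's worth, the plumbing for using a small local model is tiny. Here's a minimal sketch, assuming a local Ollama server on its default port; the model name "llama3.2:1b" is purely illustrative, not a recommendation:

```python
# Sketch: querying a small, locally hosted model instead of a frontier API.
# Assumes a local Ollama server at its default address (http://localhost:11434);
# the model name below is illustrative and must already be pulled locally.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build the HTTP request for Ollama's /api/generate endpoint."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one JSON object instead of a token stream
    }).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    req = build_request("llama3.2:1b", "Summarize this ticket in one line.")
    try:
        with urllib.request.urlopen(req, timeout=30) as resp:
            print(json.loads(resp.read())["response"])
    except OSError:
        print("No local Ollama server running; this is only a sketch.")
```

None of your data leaves the machine, and swapping in a bigger hosted model later is just a URL and model-name change.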
You must not know how corporate America works. Eventually, your smaller models will be bought out by the companies that own frontier models, the code will be buried, and products will have no choice but to switch to those frontier models. And if everything else is already using frontier models, then to keep the code maintainable, everything will be moved to them.
I also read a memo, supposedly from Google engineers, in which they expressed their concern that big companies really have no "magic sauce", implying that anyone with some GPU power can set up models relatively quickly. I guess they might be worried that investors, all betting on a few big companies, will figure that out as well at some point. Would you agree with that?
ollama Server FTW, They can't have my data.
@@Caledoriv tbf the scope of the video, like most AI talk at surface level, was directed at "general intelligence"
@@rolyars the only difference is speed... you can run a small model on almost anything now, it'll just be slow, and Nvidia's "DIGITS" (an AI mini-box for desktop use) could make huge leaps in that space... they're going to be very hard to source for quite a while, I'll bet!
It's weird seeing the dystopia unfold in real time.
I don't think it'll be dystopia or utopia, but humans will be better off overall
Apocalyptopia... lol
It'll be manufactured scarcity to maintain the rhythm of the worker-class drum.
Oh NO, I will miss the utopia we're currently experiencing 😭😭
for real
Unfortunately Sabine’s timing is bad on this one. Just this weekend, the frontier models were put on their heels as a $6M startup in China called “DeepSeek” released a model equivalent to OpenAI's o1 that costs 95% less to operate. The DeepSeek model (or an American model like it) will replace current models. OpenAI will need to pivot quickly to avoid losing market share.
China has done what Sabine suggested Europe should do: an open, free AI. Europe is late again. The future of the EU seems to be tourism and cheese.
And is open source.
They are not just a 6-million-dollar company 😅 it's just that the training of their model cost 6 million dollars
@@dACE20 and very nice clothes....
The problem is that one lucky model, which even the Chinese government didn't spot in time, changes nothing; the next one will come under CCP control, making Xi more powerful. In two months Google, OpenAI, and xAI will have better models than DeepSeek R1, and maybe the same goes for the CCP, but this time they won't share it openly, while people won't understand that they need open-source models on their phones that don't share their info with any corporation, etc. We're heading straight into a Dune-style dark future.
Currently my biggest issue with AI is the “pollution” of the Internet with automatically generated “content” that’s just factually inaccurate. The sheer amount and quality bear no comparison to just a few years ago, when it was relatively easy for humans to spot it.
Half the people I talk to every day believe, eyes shut, whatever they read on Facebook, no matter how illogical, magical, or ridiculous it is, so I see no difference.
Maybe I should have qualified it as “relatively easy for critically thinking humans to spot” ;-)
And the cost in electricity use, leading to pressure for more electricity. More, and more...
@@abavariannormiepleb9470 and then they train on this garbage
@@abavariannormiepleb9470 it can turn the entire internet into a closed AI bubble - in which no new knowledge nor original content will accrue as time progresses.
Exactly!!!
Poor man wanna be rich
Rich man wanna be king
And a king ain't satisfied
'Til he rules everything
~ Bruce Springsteen
Is that valid for Springsteen himself?
Great song
why wouldn't i wanna be a king after getting rich?
so weird. i just want to help people. donate. be happy, live my life. try to make others happy too.
is there something wrong with them, or me?...
@@differentone_p With them, but unfortunately people who think like you will never chase riches big enough to be king, only the greediest and sociopathic individuals who can never have enough and will never give to others.
Some insane people have the inner experience of owning and controlling all of existence. They often report that is not working out for them.
I'm a bit confused. Sabine recently stated that AI has hit a wall and is overhyped, yet now she claims that AI companies will dominate the world in the near future. How to reconcile these seemingly contradictory views?
I am similarly puzzled, confused, and surprised.
I think it's exactly that. AI is overhyped.
It has a lot of applications, but specialized problems aren't solvable.
You need a lot of energy and computational power. Therefore, states can control this technology simply by controlling the hardware
Both can be true, to be honest. Even if AI's functionality and accuracy hit a wall, the level that has been reached can still be used by companies to dominate.
Good point
Both made you click didn't they?
While third parties may find it difficult to access your traffic when you are using a VPN, remember that you are directing all your traffic through the VPN owner’s nodes and so the VPN owner can see your traffic. So strictly speaking your data is not completely private.
Indeed, people need to understand that anything you do online can be, and most likely is, registered somewhere in some server's logs. Your "choice" is, to some extent, where it is logged. The best way to protect your privacy is to never do anything online. Google, Meta, Apple and Amazon thrive on your data; they can most likely predict what you want or need before you do.
Scott Manley mentioned the same thing recently, saying he doesn't use a VPN. He's a smart Scottish engineer so I'll take his word on that.
The worst part is the smartphone: it's always with you, it can always listen, use the camera, and it knows its orientation and position. Just see the recent Apple Siri listening lawsuit.
Yeah. VPNs have their uses, but if you're logging into your accounts through them, they will not protect your privacy.
And all these tech billionaires are ruthless, narcissistic manboys… what could possibly go wrong?
Manboys, haha, yes I also would prefer Queen Sabine.
Be bad 😂😂😅😅
No please no queen Sabine. Even worse than Uschi.
tech billionaires are welfare queens
Come on. You wont find better psychopaths anywhere.
I can't wait to procrastinate better than ever before.
You'll just have to procrastinate ineffectively until you get there!
well don't wait- start today!
Eventually.
Don't put off till tomorrow what you can achieve already today ! Just put your mind to it and you could do it ! Get off that couch! - oh, wait...
I'd reply to your comment appropriately but it's way too soon 😸
I don't think the hegemony of frontier models is sustainable. The Chinese have just released 'DeepSeek-R1,' which competes with OpenAI's Frontier O1 model, and they have made it available as an open-source model.
It's free, but not open source. You can't modify the code.
@@honestlocksmith5428 It's an MIT licence, but most of the work is how you interact with the model and the weights, and they've released both.
@@honestlocksmith5428 It’s under the MIT license; you indeed can modify the code.
It's not the model that matters, but the supercomputer that powers it
@@honestlocksmith5428 "This code repository and the model weights are licensed under the MIT License. DeepSeek-R1 series support commercial use, allow for any modifications and derivative works, including, but not limited to, distillation for training other LLMs. "
I've been saying this since about 2 weeks after chatgpt came out. Thank you for explaining it in simple, plain language so maybe our politicians can comprehend.
There's a difference between what politicians are telling the public in speeches and what they actually believe privately. I think given Stargate its pretty clear the US is aware of what the game that's being played is actually for.
She talked about Biden. Stargate is Trump. Trump's supporters and advisors are exactly those who build these frontier models, so he's well informed.
The biggest issues ahead will be: 1. The end of anonymity. 2. Custom diseases tailored to destroy a certain race, ethnicity, or group of people not already inoculated. We already got a taste of that, but AI is going to both boost efficiency of deadly diseases and also the antidotes to them.
Yes, don't underestimate politicians. They are crazy smart. Sometimes they just act stupid to manipulate our perspective.
@@JamesJohnson-uw5fe Exactly THIS is the tragedy.
@@arifbagusprakoso2308 Yeah, and sometimes it seems like they've acted stupid for so long that they've actually forgotten how to be smart.
At last, someone with a bit of an audience says this plainly and clearly! Thank you
Palantir: the “crystal ball” that Sauron watches back through to corrupt Saruman. AI: the one ring to rule them all.
the history of the company and its projects is what i would call insidious. of all the companies in the tech space, they are perhaps the most concerning.
Thanks!
You did a good job of pushing back against the nationalistic approach to dealing with AI (our country NEEDS to lead in AI to protect our future), but I think that the ultimate threat is the complete destruction of virtually all of our economic models of human activity. The whole idea that you go to school, work hard, learn multiple skills and then find employment using those skills for an employer ... if a collection of AI models works better than any human even after years of training, what will people do? People need to eat and to live somewhere. What will they do to earn these things? Will Meta or OpenAI buy them for you? Doesn't seem like their style. This will all be a lot of fun until it isn't. The social unrest could be worse than anything we've seen before.
History rhymes. Isn’t this what the Luddites said while smashing printing presses during the industrial revolution?
Tendency of the rate of profit to fall
She kind of called it. Governments need to own the AI outright, then do what’s necessary to socialize the benefits of automation. AI completely destroys the social benefits of capitalism. If this government won’t do it, then people should start funding non profits that are legally obligated to create AI that benefits all
@@justinmallaiz4549 they weren't smashing printing presses, they were sabotaging mechanized spinning machines and the like. They were furious that they were swiftly losing their livelihoods without any recompense or social programs, just tossed aside callously without intervention in order for factory owners to reap the benefit. And they were right. Industrialism was beneficial in the long run, but it came at the cost of unfathomable human suffering. We should learn from our mistakes rather than repeat them. People losing their jobs to AI should be helped, not discarded as wet paper in the gutter. The capital ownership of these new models of production should be as spread out as possible, not concentrated in the hands of a few dozen utter maniacs to whom human lives are irrelevant.
That's why humans have to become robots themselves within a decade or so, with built-in AI; otherwise the worth of any human, and the amount of work completed by humans, will look like nothing by comparison. That's why investment should go toward the digitalization of humans, from higher-capacity storage in petabytes to the uploading and downloading of minds. It becomes even more relevant if AI becomes sentient with the help of 3D data via robots, and we become replaceable even in tasks requiring a real body within 10-20 years.
The real risk is that AI will still be stupid (and will always be stupid ) , but will increasingly be put in the position of making critical decisions.
Yeah I don’t think that’s an issue.
Pretty bold of you to call a super intelligence 'stupid'. 😂
The real risk has always been human stupidity and that covers all the hype about AI too.
Don't worry, ecological overshoot will get us way before any AI shenanigans. In fact it might contribute to that, not because it will do some terminator shit, but simply by how much energy we will put into it, and our problem today is using too much energy, and we are trying to use even more energy so the AI will tell us how to solve the problem of using too much energy XD.
@ There is a long way to go until it is superintelligent, though. Large language models still fail all the time. It's a good tool, but the quality is not good enough. And if AI starts to learn from AI-generated material, it may paint itself into a corner.
These speeches likely signal we're at the peak of inflated expectation and heading for the trough of disillusionment.
For sure. They seem completely disconnected from reality. Even some of Sabine's arguments seem too charitable about the capabilities and impact of AI.
AI is going to destroy more or less every job there is. That's the disillusionment.
Peaking real hard. Sabine is throwing around “superintelligence in a few years” in this video. I’m not seeing products that are any more capable of doing my job than ChatGPT was when it first came out.
@@brianskog9947 You're not looking hard enough, then.
Said by someone with next to zero knowledge of the technology.
Perfect timing. The problem of having AI controlled by only a few people shows up today (23rd Jan) as BBC reporting that chatGPT is down in the UK, with issues worldwide.
Optimizing procrastination will be the killer app.
Procrastinapp.
I will write one as soon as I have finished catching up on the youtube channels i follow.
that's called UA-cam
TikTok is almost close to perfection for many people nowadays; as you can tell, it has taken over all the zoomers and all the boomers. The only improvement would be to scroll it with your mind, so there's no physical effort at all.
Sedate Me - best app ever
For governments it will become "indispensable", "they won't be able to compete"... meanwhile 80% of the government in my country is still running on Windows XP and paper archives. I think you overestimate what governments are willing to spend on upgrades, or underestimate their unwillingness to change anything, for any reason.
What is your country's GDP per capita? ...
It's mostly a new bubble, big data was going to burst, but AI has provided a way to prevent the crash... with more hype!
Trump just announced $500 billion for his friend in the high tech industry!
These are the people filling their pockets with this hype, thanks to their tools in the governments!
Meanwhile at Davos:
- We should tackle climate change seriously.
- But AI needs a great deal of energy to operate.
- Ok, ok...maybe climate change is not that serious after all.
They don't lose their seats in the club. They just manage stuff. You and I must be worried about stuff. Not them.
AI can and will solve many of the problems that are associated with its development.
Climate change is happening either way, yet development of AI is our best opportunity for solving the issue among many others.
That being said, this is a terrible truth in the grand scheme of things because the risks of AI/ASI development are boundless and entirely inconceivable.
Prisoners be having a dilemma.
I hope they don't think we can kill the planet because AI will fix everything. If they do, I sure hope they're right.
Powerful AI does require a lot of power. If people would get over their fear of nuclear power, that wouldn't even be an issue.
This is why OpenSource is so critical, and I totally agree with you Sabine, the current trajectory will lead to greater instability
Well, DeepSeek v3 changed the AI landscape totally. It was developed by a small Chinese company of about 100 people at a fraction of the cost. It performs comparably to or better than ChatGPT. It is open source and free.
The hype about AI from big US tech companies looks like a joke in comparison.
Only if they release an O3 rival at the same time as OpenAI, I'll believe it.
Lol. DeepSeek is, like most things out of China, 99% theft and 1% added layers to conceal the theft. DeepSeek is a good fake.
@@rogue_minima DeepSeek R1 was released. It rivals OpenAI's. Have you learned about it yet?
AI scares me because it is dead wrong quite often, but totally confident in its conclusions.
How does that differ from humans?
@@Asher-kc6fe Fair. But I have found that accuracy has declined with AI. One would expect improvement from such monumental investment. I'm sure it's just infancy issues.
@@undaware I was an early adopter of large language models. I used ChatGPT when it first came out and fell in love with it immediately. But then it kind of turned into crap, and I stopped using it. Recently I tried it again, and honestly I think it's gotten a lot better. Partially maybe just because I've learned how to use it right: what kinds of questions it'll ignore and what it'll get wrong.
@@undaware I've been using AI since the launch of gpt3 and I've only noticed improvement over time. The only reason I've noticed for more errors is that it's now capable of processing more complex prompts that invite more room for error.
@@Asher-kc6fe To be clear, my complaint is pretty specific and has only to do with search results for questions about electrical, plumbing, home repair, and many other technical topics. These results are 'powered by AI', not LLM chat per se. What I have noticed is a stepwise decline in answer quality, because AI represents a reset of the knowledge base: it is not curated by experts. One could argue it is exactly expert knowledge because it draws on the totality of knowledge, but I'm arguing that because it uses logic to discern the truth, it may fail to discern a negative from a positive. Meaning it has often told me to do the very thing experts warn against, because it can't tell the difference between a DO and a DON'T. That's just one error mode. Others are clearly logical fallacies: 'Because A is like B and B is like C, you can use A and C interchangeably.' Which is not the case. I need to start screenshotting these things.
It is still said that 'there is no moat' and open-source models are only months behind closed-source models. But the highest levels of intelligence will still need a lot of data centers, so that is what Europe needs to build in any case.
More and more regulation builds no data centers. A lack of energy won't power said data centers. Europe is lost with its current leaders. But at least we have the moral high ground.
yes, you are reading the situation correctly. Open models are like 6 months behind closed models, and at a sufficient capability they will be able to catch up quickly too, I think. The most important thing is computation power that will allow this
@@andrehoffmann2018 Good thing ASML is a European company.
Great for energy consumption... I wonder if we really need AI/LLMs. Even in the military, will it really give that much of an advantage?
@@e.d.1642 Depends on your comparison baseline. Famously, for image classification, AlexNet (2012) improved the state of the art on the ImageNet dataset from 30% to 70%. It's exceedingly rare to get an improvement of over 2x in a seemingly mature field. Same goes for object detection, natural language understanding, summarization and translation, tracking, etc. All extremely useful. But improvements in 2025 are smaller and more gradual, particularly on well-established tasks.
*I love how passionately politicians try to make their goal of having AI (as a task from the WEF) sound like it is for the good of the masses*
This video did not age well. DeepSeek is such a small startup, yet it manages to disrupt the industry.
Good thing they didn't rip off ChatGPT.😂
A small startup called China.
Lets wait until the hype dies down.
Deepseek is a distillation of GPT though
I was going to comment exactly that
The leading models do not have a monopoly, and open-source alternatives are only 3-6 months behind, a gap which is steadily getting smaller. Competition is key for free general AI that doesn't END EVERYONE. If the only general AI in town was supplied by Uncle Sam, you would achieve the bleak future you are trying to avoid. Imagine your political opponent being in charge of the only superintelligence on earth. Free and open market competition is the only way to achieve balance and avoid the AI apocalypse.
You are still giving your money to Nvidia
Exactly. Sabine this time was a doomer and, essentially, gave a fascist speech about nationalization.
As if a rich guy could be more dangerous than the government.
Yes, and those models are often much smaller and can be run on consumer hardware. Data centers and large models will still be relevant but will not take over everything. Sabine needs to calm down.
@@andersonm.5157 Elon's more powerful than many countries
@@andersonm.5157 It’s not fascist to say we need a public rather than a private frontier model. Also, she said Europe needed one. No nationalism at all
Welcome to the new era of a few Kings, the army of Big Brother and a world of expendable peasants.
the same as it has been since we discovered agriculture and started building cities. just, you know, on a grander scale.
Points still valid, but this week was a big win for open compute!
So what about the Chinese company that humiliated the big billionaire AI companies in the US?
Palantir operates in a specialized niche where it provides critical insights to governments and enterprises, by analyzing the massive amount of niche data they own. Whereas OpenAI, Google, Meta, and xAI are more focused on broad AI applications for consumer and enterprise markets. (Language models)
This is a very politically correct way to say Palantir is international-scale spyware that primarily uses the data that only governments have to give minute resolution into individual and group lives
The only AI I could use without entering a phone number or email address was Copilot, and I was not impressed. It gave me the name of another engine for computer-language development, and I was not impressed by that either.
China listened to this and delivered ;)
This technology will never lead to superintelligent AI. It makes the most basic mistakes. It just makes the most basic computer tasks faster.
It’s important to recognize that no one fully understands how these models function. If they were to achieve autonomy (note that current frontier models are static, not dynamic, meaning they are not learning while interacting; for the time being they are just putting out information they have learned), even influential individuals might struggle to control them. The mathematics behind neural networks remains largely opaque. Despite decades of effort, attempts to construct a mathematical framework for the intuitive mind of human beings have consistently fallen short.
Nonsense.
I finally got to play with some deep learning models last year. I wrote some PyTorch and Python code, trained models, and used image prediction models with neural networks. I had to find a GPU, so I used an M2 Mac. I was amazed at all the thousands of passes that were made and how the slightest alteration of the model settings would produce more or less accurate results. This was strictly for hobby and learning. It was nice to get back to pure data science and review a little math. After 35 years of COBOL, Java, C#.NET, Angular, React, and database work for business, it was rewarding to code it and see it work. I didn't feel obsolete. Of course, that doesn't help with globalization and continuing to work remotely in my happy little finish to an exciting career in IT. I hope there is just another decade before it is all gone.
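The point about slight settings changes swinging the results can be shown even without a framework. A toy pure-Python gradient descent (illustrative numbers only, mirroring what a one-parameter PyTorch model with MSE loss does under the hood) shows how the learning rate alone decides between convergence and divergence:

```python
# Toy illustration of hyperparameter sensitivity: fit w in y = w * x
# to data generated with the true weight w = 2, using plain gradient descent.

def train(lr: float, steps: int = 100) -> float:
    """Run gradient descent on mean-squared-error loss; return learned weight."""
    data = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0)]  # targets follow y = 2x
    w = 0.0
    for _ in range(steps):
        # d/dw of mean((w*x - y)^2) = mean(2 * (w*x - y) * x)
        grad = sum(2.0 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

if __name__ == "__main__":
    print(f"lr=0.01 -> w = {train(0.01):.4f}")  # converges near 2.0
    print(f"lr=0.5  -> w = {train(0.5):.4g}")   # same code, diverges wildly
```

Same code, same data; only the learning rate differs, and one run lands on the right answer while the other blows up.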
DeepSeek go brrr
But seriously, there is still no moat with current AI techniques. They're all trained on the same internet, if there's a breakthrough it is unlikely to look like a further turn on the crank of transformer models.
Yes, I agree with that. I am also wondering if DeepSeek is open source what else is going on in China^
@@rwantare1 Flooding the internet with garbage seems a good strategy to make other people's models worse.
@@SabineHossenfelder Excellent video. But I just recently watched your video on trans athletes. You based your view on a few studies and not on a review, which would have unmistakably told you that testosterone isn't banned in sports for nothing. Even intramuscular coordination is better in males, meaning that even matched for muscle size, a male muscle is stronger than a female one.
@@SabineHossenfelder yep. My first thought
You are focused on the frontier models, but that isn’t where most of the progress is being made. Most applications are in smaller, efficient models. Can you please do a video explaining why you think this all pivots around the frontier models?
I fully agree with Sabine. More than a "tech singularity", we are approaching a "power singularity": back to a feudal world with no way to return to democracy, since AI will prevent any such attempt. And those politicians who believe they are riding the horse will instead be the horse... and bring all of us down in their decline 😢
DeepSeek LLM has surpassed OpenAI's top models, and it's open source and from China. It beat them at their own game without the mass computation overhead. Anyone that has been following things closely would have noticed that China is on par with the US when it comes to AI, and they have a numbers advantage. Most professionals agree that Claude 3.5 is the best for doing actual work, it has been quietly winning in the background.
The joke is on these billionaires, because they will never control AI. Anything they can build, people can build themselves. Any models they have, regular people will be able to have themselves in a few years or less. At most they will have slightly better AI models for a short time, whoop-Dee-Doo. Decentralized computing means that regular people could always beat the big companies in terms of compute power
Thanks for the tip. Yes, DeepSeek's very impressive!
I suppose DeepSeek R1 is a kind of victory for western culture over illiberal, Orwellian Chinese cultural norms, in the meta sense.
I mean, how much does DeepSeek R1's chain of thought think in English and western norms in order to be competitive in a general AI race?
That is, are its latent-space western-corpus components largely doing the heavy lifting in terms of innovative reasoning, given that the lingua franca of modernity is English in many domains?
*me asking deepseek::* _"given english is the lingua franca of many domains then surely you bias toward english chain of thought?"_
*deepseek's reply:* _"You're absolutely right to point out that English, as a global lingua franca, plays a significant role in shaping my training data and, by extension, my responses. This does introduce a bias toward English-based patterns of thought, especially in domains where English dominates, such as science, technology, and international discourse."_
@@blengi It seems that you'd benefit from using AI to write your comments properly. You know that you can have new lines without posting your message for each line, don't you?
I think you're confusing what politicians say with what the people behind the politicians know. I guarantee that the people who are actually in charge of governments around the world know that the race for AI is the race for world domination. Politicians don't actually run countries. They're just a user interface.
Elegant brilliance! Best description of politicians ever!
In the case of Joe Biden, that premise holds 100% true. He could only have been more of a robot if he were completely dead, instead of only mostly dead.
@davidkachel Give him a break. He's been mostly dead all term.
Don't you mean "Useless Interface" ? 😂
Pretty much, though the west is definitely lagging far behind on developing a nationally funded AI system. I guess the US is basically a hollow shell at this point since the vast majority of the government has been privatized over the last 35 years. It's essentially just mega corporations wearing a government trench coat at this point.
Sabine, always cheering me up
Thank you for talking about ICP without mentioning ICP❤
My daughter and I are both doctors, though I am retired. Last week, she tried a new AI product that listened to her talk with her patient (me, play acting), then wrote out a complete History and Physical, with treatment plan. It took the AI three minutes. The result was as good as anything she or I would have written, and *better* than any H&P I have ever seen from a physician’s assistant or nurse practitioner.
We are entering a new world.
That AI doctor was that good because the accumulated history of illnesses and ailments, their symptoms, treatments and outcomes, was fed into its training data, and it was simply number crunching and transformation after that. Essentially, all of human knowledge in that particular field was distilled and made instantly available through the AI doctor interface you were interacting with.
The epiphany comes when you realize we can essentially do this for every single discipline and field of human interest.
The only thing the AI models can't (yet) do, is infer new inter-field knowledge based on this plethora of data points it can crunch.
For example (assuming this theory and knowledge-base wasn't in its dataset already) it wouldn't spontaneously come up with the theory of evolution if you fed it the fossil record.
Edit: most businesses don't need it to be at that level of creativity and insight though, and even the current models (which score roughly the equivalent of IQ 100 to 130 on some tests) are good enough to completely disrupt society.
When they eventually start pairing these current gen AI's with advanced, nimble movement robotics like Atlas or the other humanoid bots, humanity will be forced to ask itself some serious questions
Imagine what it can do in the military field then. A new world indeed.
@@scroopynooperz9051 AI can already find links between disciplines that humans have not (yet). One of the science channels I follow (might have been this one) made a video about it. It had something to do with papers from field A and field B citing a few of the same papers consistently. It can come to new conclusions. AI trained on the game GO made a novel play that was not part of the training set nor known to the players of the game IIRC.
@@scroopynooperz9051 We should be asking and answering right now. People have a real problem understanding the consequences of exponential growth when it applies to computational complexity management. At this point in time the natural limiters are the compute needed to build the models and the 'feedstock' required to populate them. AI companies, having already stolen copyrighted artworks by scraping the internet, are asking people to provide them with more. Don't feed the monster…
Never mind all the hand-wringing. This woman just saved 2 minutes of typing!
Open source LLMs are very close to those of closed companies, for example DeepSeek-R1 was just released. The future isn't one company, it's millions of people collaborating to build the future, no matter where they come from.
LLMs are a dead end once the flow of free training data evaporates due to privacy awareness. No new data means they will be like a 1950s set of encyclopedias.
Opium of the people?
True, which means it will eventually come down to who can serve those models most efficiently. Open-source models are already as capable as many private ones, so there's no need to waste resources creating new ones from scratch. Instead, Europe could focus on building the necessary compute infrastructure, like advanced datacenters or specialized chips, to run these models as efficiently as possible. This could be a key step toward staying competitive and reducing reliance on external providers, without wasting resources on training our own models.
R1 is a good example: it's open source, yet no other provider comes close to their API pricing. Either they're operating at a massive loss, or they've found a way to run their model extremely cost-efficiently. If Europe were to invest in compute infrastructure and, most importantly, in optimizing the efficiency of running these models, we wouldn't have to lag behind other countries in AI.
@@laurensjvg The problem is that a major part of the costs of running datacenters is energy and energy prices in Europe are multiples of the USA due to Net Zero policy. This makes Europe structurally uncompetitive and thus uninvestable. Play stupid games, win stupid prizes.
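The serving-efficiency question in this thread comes down to simple arithmetic. Here is a back-of-envelope sketch; every number is a made-up placeholder, not a real figure for DeepSeek or any other provider:

```python
# Back-of-envelope: what it costs to serve a model, per million tokens.
# ALL numbers are hypothetical placeholders; only the shape of the
# calculation is the point.

gpu_cost_per_hour = 2.0           # USD, assumed GPU rental price
tokens_per_second_per_gpu = 500   # assumed serving throughput

tokens_per_hour = tokens_per_second_per_gpu * 3600
cost_per_million = gpu_cost_per_hour / tokens_per_hour * 1_000_000
print(f"~${cost_per_million:.2f} per million tokens")
```

Under these assumed numbers the result is about $1.11 per million tokens; halving that means either cheaper hardware and energy or higher throughput per GPU, which is exactly where the efficiency work (and Europe's energy-price disadvantage) bites.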
What I am kind of waiting for is a Sabine that asks her viewers to start caring for each other. So far, I have gotten the impression that she is actually rooting for a world in which people are doing well, and not one where they are dominated by few singular entities. A different world is possible, but it requires that people actually behave differently on a micro level. Musk and others are not independent from the masses but they are a result of how people behave individually and towards others.
I think that you are selling her short.
I did the grad grind and met Nobel prize winners, so I feel like I know where she is coming from.
She is earning a living and explaining important technical news.
You can’t run a channel where all you say is “Be nice to each other”.
No, it’s the power people that not telling you what you need to know, but rather what you want to hear. They are in every human endeavor.
Even with a PhD in physics, this is the only physics channel that I listen to regularly.
Dr Hossenfelder is a thoughtful, caring person because she fills an important role telling us what we need to know.
That’s how I see it.
@@edwardlulofs444 What I meant is that I believe that elites like Musk are not in their positions solely because of their particular skills (if they have any), but that the existence of these positions is an emergent property of how all humans, or at least a critical mass, behave at the micro level.
If people were to change the rules that keep the game going, it would mean a fundamental change in how the world works. I don't think it's selling short to ask a person who has achieved great authority in the field of knowledge to demand such a change from her audience. I also doubt that Sabine would be concerned about her YouTube channel if the alternative was a world where people had fundamentally changed their behavior, nor that running a YouTube channel would play a very important role in such a world.
She talks about some companies trying to achieve world domination. What’s the alternative? I think it's appropriate to discuss Sabine’s potential influence at this level.
Six days later and there's a new sheriff in town, called DeepSeek, which is the very opposite of big, expensive frontier models. DeepSeek is instead small, cheap, open source and definitely not US-centered. While the jury is still out on exactly how much of a game changer DeepSeek is, it shows the perils of predicting anything as fickle and fast-moving as AI. Admittedly, swapping the US for China is not exactly an improvement, but the direction of travel is.
This video aged like milk
What about hallucinations: things AIs say that are outright false, made up, pandering to the prompt, etc.? The ever-present possibility of hallucination is an absolute blocker for the kind of all-powerful applications described here, and there is no solution on the horizon. In fact, given that AIs depend on digesting human-produced material, and such material perforce contains multitudes of mistakes, falsehoods and hallucinations, the problem may be insoluble.
If AI becomes our overlord and it repeats the same hallucinations enough times, 80% of people will believe it. Look at North Korea or Stalin or even Trump.
People hallucinate and make stuff up all the time. At least with AI, we can teach it to not hallucinate. Also, AI doesn't depend on human generated data. It can synthesize its own and it's already doing exactly that
You probably haven't used thinking models lately; they're actually very precise in the exact sciences.
Additionally, Amazon did solve the hallucination problem, but the solution is closed source and quite complicated, sitting outside normal LLM training and usage procedures, so for now no one has replicated it. It's available only to some Amazon customers, used with their rather weak LLM, and while it does prevent hallucination, with such a bad LLM it's obviously not very popular. Again, thinking models are going in the right direction without such complex apparatus.
I’ll use AI to pay my bills? 😂 This is the silliest thing I’ve heard since the 1970s, when everybody used to say that personal computers would be used to store recipes!
Sure it can, in an indirect way eventually. Imagine if things keep progressing at this rate. Eventually there really will be no need for 99% of people to have jobs anymore. This is where concepts like UBI come into play. If this happens, in a way, AI will be paying your bills.
Don't we store recipes on personal computers?
They also said we'd have thinking computers, household robots, 15-hour work weeks, and would have colonized the solar system by around the year 2000.
@@__jonobo__ Some people probably do. The thing is, in the 1970s, “storing recipes” was one of the very few things they could imagine personal computers being useful for and the ONLY reason why “mom” would be interested in having one in the house.
Not silly. I paid my bills with Bank of America dialup UNIX interface as my first tool of productivity.
In the long run monopoly controlled services get cheaper and user friendlier? What? The Enshittification has already begun.
I mean, the complicated thing about monopolies is that they can lead to a reduction in overall cost, but it's a double-edged sword, because most of the time greedy people are the ones who establish monopolies, and most of the time it doesn't actually decrease the cost of anything for the consumer. Competition between companies is what actually gets prices lower. Usually.
@@borttorbbq2556 Correct. It decreases cost for the consumer at first, which is why it becomes a monopoly.
When a company has become a monopoly, it will use those cost savings to further increase profits instead.
This is a well-known strategy mentioned in many books. This is why companies actively try to become monopolies.
Enshittification has been going for a decade and a half already, if not more.
@@borttorbbq2556 getting things cheaper is not intrinsically good, because the competition that gets things cheaper, involves cutting more and more corners and figuring out how to externalize costs as much as possible, which eventually destroys the basis for all life.
@@mitkoogrozev That can happen, but I'm not talking about cheap stuff, because something can be inexpensive without being cheap. For the most part I don't like buying cheap stuff; I'll usually only do that if I need something for a one-off, throwaway purpose. I have bought things that were genuinely inexpensive but whose quality matched items many times the price.
I’ll give you credit you understand and articulate the politics of this quite well. As a species we are truly boldly going where no one has gone before.
It looks like the AI appraisals by b-movie politicians have already been generated by AI
😆👍 A couple of weeks ago, she made a video claiming AI had already reached its generative peak, meaning the investments are money burned. She forgets fast. 😀 Thanks, der_kleine_Toni! Maybe I'll make a video about it myself! Sabine cranks out half-baked videos so fast you can't keep up with the responses! 😁
I love how subtle Sabine's use of AI is in her videos. She does it so well, even while mocking world leaders for not getting it. 🤣
LOL, and then Deepseek came along...
More like MeSeek. They stole it didn't they?😂
"Politicians totally misunderstand what's going to happen " applies to literally everything
7:20 Never a truer word spoken! 🤣
Being first will not guarantee permanently winning all the power, because the "no moat" condition still exists, and being second is substantially less expensive than being first. Apple pioneered the modern smartphone, but has a substantially smaller market share than Android worldwide. Also, China has fully realized the damage their failure to compete effectively in the OS wars has caused them, and will not make that mistake with AI.
I hope DeepSeek will keep doing what they're doing. Distilling small models and enabling them to use reinforcement learning is doing wonders, and now I can use a pretty powerful model locally. The DeepSeek-R1 32B Qwen model is really good; at least for coding it's better than 4o.
btw llama is nowhere near the frontier LLM as of now
@@vaingaler5001 doesn't matter, next one is already training. just keep iterating.
Open source will not change the doom trajectory. Yes, you can run Deepseek on your computer, but OpenAI & co. will be able to run similar models with 100000x more computational power than you. So your model will be crushed, and won't be competitive on the market. That's why they are investing trillions in infrastructure.
There may be open source models, but are there open source companies? The companies are making their models open source because it is their way to get publicity. In the long term, they hope to earn money. If the big players have unlimited resources, then the small players will give up. Maybe it will turn out a bit like Amazon, which had enormous resources from the capital market and didn't pay dividends at all.
@@xiyangyang1974 The coming AI doesn't have economies of scale.
Sabine is reading out the plot of the last Mission Impossible movie.
4:32 Is the guy with the rifle supposed to keep the meeting in order? 😂
Right, and where's the guy in the spacesuit symbolizing space power.
Seeing how a small Chinese shop hijacked a frontier model and improved it, then released it open source, kind of breaks both Sabine's and global leaders' vision of AI development.
Sabine is right, and the CCP's DeepSeek (China) is just as woke as OpenAI. This is not a good thing. Sam Altman is the cuck's cuck of Silicon Valley, going back to the Y Combinator / CIA In-Q-Tel grooming days of Paul Graham. Sure, I can see where this is going: more woke than OpenAI. The CIA gods must be going nuts that the CCP can create a more woke 'grooming AI bot' than the best minds in the CIA's Silicon Valley. Not superhuman intelligence, just a machine that cancels non-woke citizens' lives.
distillation is not hijacking
Technically, it’s not that important who owns the model. The transformer architecture is well known; if you have powerful enough hardware, you can train your own… But here’s the catch: you need the _training data_ . That’s what matters: ownership of the data, and the data is owned by these big corporations.
So effectively, Sabine is right. If users give the social networks the ownership of the data, democratic countries can’t do anything about that.
Aerospace engineer - that's the first really smart intelligent and insightful comment I have seen on this page.
WELL DONE.
@ Thank you, kind sir. Well, I’m a SW eng. who happens to be working as a part of a development team that works on a solution for acceleration of LLM inference-so I know one or two things about it. You could also say that I’m a part of the problem… ;-)
But (and relatedly), guess which social networks (not counting YouTube) I’m present on… ;-)
(In my country, we have a saying: blacksmith’s mare walks unshod.)
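The point above, that the architecture is public knowledge while the training data is the asset, can be illustrated with a toy sketch. Even a trivial bigram "language model" (pure Python, no transformer) is entirely a function of its corpus; the code below and its tiny corpus are purely illustrative:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count which word follows which in the training text."""
    words = corpus.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Most frequent continuation seen in training, or None if unseen."""
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else None

model = train_bigram("the race for ai is the race for power")
print(predict_next(model, "race"))   # learned from the corpus
print(predict_next(model, "power"))  # never seen as a non-final word
```

Everything the "model" can ever say comes from its corpus; whoever controls the data controls what it predicts, which is the commenter's point scaled down by many orders of magnitude.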
I am amazed that we don't have even more conspiracy theories involving AI by now.
If you use NordVPN, then NordVPN potentially sees all you do. Your network provider potentially sees less. In addition to the companies that provided the software on your mobile phone and/or the service providers you use. Not sure if that’s good or bad.
Circumventing Geoblocking? Definitely.
I've had a vpn for over five years. Last year I just didn't renew it and haven't had much problem without it. Besides geoblocking, is it still a good idea to get a vpn?
@@FlightSims No, it is not. OTOH, geoblocking is still a thing, and some of us nerds love to run our own VPN servers all over the place for sh*ts and giggles, so I don't know…
BTW, IP geolocation/geoblocking and circumventing it with VPN is often bullsh*t:
1) E.g., I have a VPN server that is physically located in Portugal, but Google/YouTube and Spotify place it in Britain, just because the company that owns the data center is registered in the UK. But it's kinda funny to listen to Tesco ads in the Bri'ish accent when listening to Spotify (I'm an American).
2) Some geoblocking sites track the IPs of well-known VPN providers like NordVPN (or even data center providers they don't find trustworthy) and flat out refuse to serve you if your connection comes from one of those. So you need to choose your VPN wisely.
@@FlightSims Tom Scott said 5 years ago, "The best choice for gay people, pirates, assassins, and gay pirate assassins." Basically, if you need to hide info from your network admin (your church, your work, or your parents), want to pirate media, are planning to kill someone, or all 3, VPNs are useful
@@FlightSims I think VPN for ordinary people is a scam.
Only sellouts do advertisements for NordVPN. Sorry Sabine, you should know better.
I'm bookmarking this so I can come back "in a few years" to check whether AI is "more intelligent than everything and everybody else on the planet". How's ten years for "a few years"?
Honestly 2 or 3 years wouldn't surprise me.
Depending on how you define intelligence, I think, the frontier models already passed that bar.
Honestly, isn't it already slowing down significantly? The first releases were spectacular, but now it seems increasingly hard to improve on them.
@@rolyars That was a myth based on the belief that AI would continually be trained on really bad internet data. AI is being trained on itself and simulations, which is faster and better.
@@rolyars Take a look at the big picture: single cellular life forms, multicellular life forms, mammals, humans, society, magical boxes (computers) that can simulate all aspects of reality, pattern recognising algorithms within said boxes, and so on, and so on. Everything within exponentially shorter time spans like years, months, days.
When we can feed an AI all the data we had in the 1900s and it comes up with the theory of relativity from that data, I will take AI and its potential seriously.
I take it seriously now, and not because it is smart, but because it is smart enough to cause trouble. You do not need an AI that can come up with the theory of relativity for it to be used as a weapon, to scam people, to spread misinformation, and so on. And I feel that this attitude of waiting until the AI models become "good" makes us far more passive in how we handle the issues we have today. It seems like people feel they should not act until we have a rogue Skynet on our hands, but we have issues today.
@@Cythil A well thought out position.
So you'll start taking AI seriously only when we get superintelligent systems that can do everything better (and cheaper) than humans? At that point, it won't matter. Humans will have created a successor species. Why do you think you will be able to contain them at that point? What is your great plan for controlling superintelligent AI systems? Surely you must have one, given how little you are concerned about this.
@ All that matters is that intelligence survives. The form it takes does not matter to me.
@@Stumdra "All that matters is that intelligence survives." This is a good video that may allay your fears:
ua-cam.com/video/qtfG1dM8C-U/v-deo.htmlsi=Uguhu3713Z2DMUf4
Get back to me when AI can feel pain. No pain, no gain. It's how humans learn.
We are really FAR from these performance problems; the hype is too great… There are applications where the output does not need to be perfect, like marketing or image and video creation, but for the rest, where you need to be 100% correct (working, driving, programming), the current models are not suitable.
The thing is you don't need to be 100% correct, you just need to be more correct than the average human.
The biggest reason why LLMs are not yet productive is that they only work on a very limited scale. In programming, for example, a model can only effectively deal with a couple hundred lines of code, but real products start at about 10,000 lines and go into the millions. So anything the model writes is out of context, because it doesn't know the code base.
So far, every time they train them on longer context they get less accurate overall. We might just be a couple of breakthroughs away from them being good enough.
@@shynrou2 The scale will increase with bigger models. The big problem (and it's the same problem why we don't have a self-driving car after 20 years of trying) is that these machines are trained by example; if something is too far from the existing dataset, they will invent the answer.
@@PracticalAI_ We already have fleets of self-driving taxis on the roads
@@PHIplaytesting In a very limited area that requires years of mapping the roads to prep for. The promise of fully automated vehicles has failed completely.
@@shynrou2 I use LLMs to help with programming. They have already reduced my programming burden by 75%. AIs are getting smarter by the minute.
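The scale gap in this thread (a couple hundred lines vs. codebases in the millions) is easy to put in numbers. A rough sketch, using the commonly cited but only approximate heuristic of about 4 characters per token; the line length, codebase sizes, and context window are all illustrative:

```python
def rough_tokens(lines, avg_chars_per_line=40, chars_per_token=4):
    """Very rough token estimate for a codebase. The ~4 chars/token
    figure is a rule of thumb for English/code, not an exact count."""
    return lines * avg_chars_per_line // chars_per_token

context_window = 128_000  # tokens; illustrative order of magnitude

for lines in (200, 10_000, 1_000_000):
    tokens = rough_tokens(lines)
    verdict = "fits" if tokens <= context_window else "does NOT fit"
    print(f"{lines:>9,} lines ~ {tokens:>10,} tokens -> {verdict}")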
I agree that this is a huge challenge, and that my current home continent of Europe has gone (at least partly) down the road of decades of squabbling over the size of pie slices, refusing to see that the pie itself has gotten much smaller.
Still, the job of those CEOs is to secure the future of their firms (for their shareholders/owners) and that is a more immediate challenge than who will "rule the world" five years out. (Let's remember Steve Jobs died at 56 and all these guys have an end date that isn't secured by wealth.) They're doing what they need to be doing. So perhaps the democratic response isn't only to have the government involved, which it is on several levels, but to have more of the electorate holding equities and perhaps boards with better representation of numerous small share holders. The boards are elected by the shareholders and they can remove a CEO (See Steve Jobs, again.)
I'm not suggesting this is the ultimate answer, by any means. Sabine just got me thinking.
Yes, the pie is smaller, because we ate a substantial part of it without baking anew. And yes, I think the stock markets have a word to say… And yes, happily the US presidency ends after four years, and a human's lifespan after some decades.
What does that matter when a rogue judge, presumably paid by Joe Biden, can stop a CEO, Elon Musk, from doing his job at Tesla?
A judge in bumfuck Delaware just says no and you are out of the loop.
Why should anyone want to attend those boards anyway?
It's time we moved away from a "shareholder" form of capitalism to something like a "stakeholder" one.
@xelasomar4614 I don't see the difference between those two in this case. Perhaps you can elaborate?
@@thomasjgallagher924 Shareholders are those who own part of the company. Stakeholders also include those with an interest in the company's success and activities: employees, customers, and the public.
Sabine really nailed it with this one. It's about power, the spoils are astronomical, and the oligarchs will betray all of humanity for it.
We need Artificial Luigi.
sabine what stocks should i buy
Sabine knows being smart makes you the ruler of the world because she owns Germany.
That would be nice.
I'd love her to lead Germany!
With the release of R1, this video was outdated before it was released.
Except that o3 hasn't been released and OpenAI can use it internally.
o3 is likely an iteration on o1. R1 is the stepping stone for even more innovations in the open. So o3 will not be a major game changer, but R1 definitely will be.
R1 is an open model but not open source: the training data isn't available anywhere. But the model can be used to train other models, so in a sense the playing field is made a bit more level, at least as a baseline starting point. Phi-4, the Llama models, Qwen, Mistral etc. are all open models too. R1 is just the first reasoning model that's open; still a huge milestone though.
Q7 was released 4 seconds ago and has left all these in the dust.
@ What's a Q7?
I personally disagree that it is too late for a new company to win out, as the science behind AGI is fairly green. LLMs have not been proven to be the only piece moving us forward; they are one of many future components.
Additionally, if high quality smaller models connected together become the proper path, then it is an open field.
I do agree about world dominance of power and greed driving development.
New companies just get bought out by the big guys.
Mining bitcoin to pay energy bills: GOLD.
Such a good video -
1. Regarding the politicians: when they give those speeches, I already understand it as "hey, we are already behind and we are possibly going to be taken advantage of."
2. "Frontier model owners and power": also agree. Look how ChatGPT opened up to the public, including Copilot and Gemini; it's just free now.
It feels like nuclear energy. It all depends how it's used.
at least nuclear energy produces something useful
Or any man made tool.
A knife can be a useful tool, or a deadly weapon of oppression.
An aircraft can deliver you to a holiday, or bring aid to an area in need, or drop deadly weapons.
The list goes on.
I think there's a difference. It's hard for a private person to make a nuclear reactor. But any billionaire can create a data center and set a hundred million bots loose on the internet.
@@andreisopon4615 Companies in very poor countries already operate bazillions of scambots. You can already buy software that creates social media accounts in bulk and operates them at mass scale with "human-like interactions", as advertised on their pages. Adding AI will make them even better scambots.
@@andreisopon4615 Plus, nuclear power doesn't think on its own or need alignment.
I'm afraid I need to point out that governments already are owned by companies - most so in the US, where companies and their owners decide whom to buy the presidency.
"On the whole, capitalism is growing far more rapidly than before; but this growth is not only becoming more and more uneven in general, its unevenness also manifests itself, in particular, in the decay of the countries which are richest in capital." Lenin
It's a new bubble, big data was going to burst, but AI has provided a way to prevent the crash... with more hype!
Trump just announced $500 billion for his friends in the high tech industry!
These are the people filling their pockets with this hype - thanks to their tools in the governments, as shown in the video!
As long as the companies that "own" the US government are incompetent as they look like, I wouldn't worry. If you are a US citizen it is a different story though.
You only reference large models in the video, but the biggest practical applications will use smaller models that are optimised to run on constrained hardware and that are more specialised. For example, if we want to have physical helper type robots they will use these type of "smaller" models on premise/on device. That being said, if you want to develop medicine or discover formulas you will need datacenters and large models
Another thing that many people don't understand: the power of a super-intelligent AI does not come from the companies or governments that "own" it and believe they control it, but from the system itself!
What that British guy said about importing AI makes no sense if open models are available.
it does if they're the user of the tools, not the end consumer
It's unclear how it will play out. If you look at image recognition networks, then every nerd can come close to state of the art. If this also will be the case for LLMs, then the import/export thing is indeed moot.
he meant importing AI technologies, this would be in addition to any model (open/closed).
The British cannot understand the concept of sharing, the UK has some of the worst intellectual property rights in the world and they literally invented the idea of copyright.
It does when American Google buys British DeepMind and not the other way around. Some models are open but AI expertise, data and compute are neither open nor free.
So you may get Llama 17 for free but you'll need an american chip to run it, an american software framework to train and fine-tune it and if you access it online, you'll likely be helping American companies train on your data.
Ah, Palantir. Whitney Webb has a lot to say about them and I tend to find her research compelling. Great video, ty.
TBF open source models like DeepSeek R1 are not far behind OpenAI's frontier models, and the gap is closing. OpenAI do not have a monopoly on AI intelligence and they know it, which is why they're pushing so hard for more compute capacity
Sabine just got blindsided (like most of us) by DeepSeek. DeepSeek just destroyed her "frontier models only" argument. DeepSeek is cheaper, arguably better, and capable on less than top-notch chips.
Wrong. So wrong. 😂
This video is so, so, so important. You hit the nail on the head.
yes, I haven't been interested before, but her delivery really sold me on it!, the future is nordvpn
This video feels more like a manifesto than an actual argument.
I see first the disruption of the job market, and secondly the danger that people rely on the information while it is being manipulated, as is currently done to suppress undesired answers.
Geez. One trend that will not continue is that LLMs will somehow get much "smarter", if you throw more computing power at them. One trend that will continue is that the AI people are consistently terrible at predicting how far their current paradigm can be pushed, and great at creating hype, and that a lot of people fall for it.
I don’t understand how the fuck AI is "hype", bro. AI already made humanoid robots like a billion times better than they were just 3 years ago, and I am using an LLM every day for programming; it has made me like 3x faster at creating proofs of concept with unfamiliar technologies. Also, it's getting so much better every month.
In my eyes, AI is completely and utterly living up to the hype.
@@__maxyz AI programming is worse than copy & paste from stackoverflow, because most people on stackoverflow check if their answers work. Or at least compile, for that matter. It falls off exponentially the more you know about what you are doing. Try to ask your LLM how many 'r' are in strawberry or something. The hype we are comparing this to is "taking over the world in 5 years", see video.
@@__maxyz LLMs do very sophisticated extrapolation (finding the most probable words/images to suit the prompt) BUT have no reasoning, abstraction or generalization power. That's the wall. To reach reasoning it is not enough to make them bigger; new architectures would be needed ("hybrid models"?).
One trend that doesn't seem to be going away is SEO-optimised AI slop squeezing out genuine results from search engines. It's a right pain in the arse, but the upshot is that we'll have to learn new (and relearn old) ways of finding information.
@@jesuslux That's entirely false.
AI has the capacity to reason and do abstraction.
This is evident from the science and all the research papers published.
So no, your point is empirically invalid. Try again.
The scientific evidence does not support your claims; it points the opposite way.
The problem with those declarations is the assumption that AI will keep improving.
That's true, but let's just assume the worst scenario
if you assume that it will, you get way more creative liberties to fearmonger and collect people's money
While sitting at red traffic light late at night with no other cars in any direction, I was wondering if AI is so smart how come it's not being used to control traffic signals.
Why do you assume it isn't? You are presuming that AI is interested in your convenience rather than in playing with you.
Because it isn't mature enough for safety-critical applicatilons. And a traffic light is pretty dull tech, nobody is going to gain global power by doing an AI traffic signal.
It needs the data first. We have sensor lights where I live. So in that situation, it would change the light green for me.
And you think that's a good application? First we had road strips to detect traffic, then a simple sensor. Why overcomplicate it?
Some want to rule, others don’t want to be ruled. It’s this polarization that’s keeping the balance in everything throughout history. Polarization is the reason nobody ever ruled the world nor anyone ever will.
The most jarring point is how on earth do governments still believe they have an ounce of agency left *as of now*?
Capitalism did away with government agency a long time ago
Sabine will edit this video after finding out what DeepSeek is doing
I don't think it invalidates the message
@@neociber24 Actually her videos regarding AI hype invalidate the message already.
Sabine, you are already the Queen of Europe!
For me yes!
@Thomas-gk42 Sabine points out the true challenge with AI: DOMINATE THE WORLD by having access to the best intelligence on the planet, intelligence available at will, and never complaining!!! Wars of the future will be chess games of one intelligence against the other.
Honestly, it's really hard to take anyone's opinion on "AI" seriously, considering that even 5 years ago nobody could tell you how scaling transformers would end up. Nobody reads yesterday's newspapers, unfortunately, but doing so gives you a much clearer perspective imo.
What everyone gets wrong is no company will end up with the power but the systems that they create instead. Controlling a super intelligence is like expecting a dog to control their owner.
Good thing superintelligence is not something that comes from LLMs with chain of thought or RL. Saying stuff like what you just did is a walking advertisement for OpenAI.
Exactly my thinking. There's short-term employment disruption. But what's most dystopian is when we create AI systems whose workings we don't understand and that hallucinate.
We also neglect to understand divergent behavior and poisoned training data. Robotics papers have shown that our systems, especially in AI, can be (unintentionally) racist. If what we train them on is the worst of humanity, that is the product we will get. So they'll probably just create things like government and healthcare policies that are biased in ways you can't measure, and they won't care. Once these systems are in place, removing them would be like trying to get rid of red-light cameras (another clear failure in terms of innocents and accuracy). The AI systems would control everything in their separate silos: supermarkets, economic movers, employers, government systems, traffic flow analysis, police systems (facial recognition, etc.)...
Sabine is still sleeping on X-risk, but if there's a warning shot we survive, I'm confident she'll come around.
As long as you can still pull out the plug.
I have met plenty of dogs who control their owners lol.
In China, the state, the party, and the companies are just the same thing. This doesn't mean that their AI dominance benefits the average person.
The USA is the same, now that Trump is in power. US democracy is a thing of the past.
The real problem is that AI will be so useful that we will come to rely on it completely. As a result we will forget how to do our own thinking, the way muscles atrophy if you don't use them.
Learning where Peter Thiel got the name for his company, "Palantir", really made my heart sink.