Ever since AlphaGo and other projects defied expert predictions by multiple decades, I've realized even the experts have no clue what the future holds with this technology. Truly, we are at the base of a massive exponential trend that will likely peak mid to late century, but at this point it would not surprise me if that peak arrived much earlier than anticipated.
Yeah, the problem is, if you’re an expert extrapolating current rates against specific KPIs, you have a good estimate. That doesn’t factor in groundbreaking leaps in technology, though, which is why the consensus on AGI has shifted so much: even in 2019, 55% of experts thought AGI would occur after 2060, or never.
Keep in mind that the momentum can be stopped at any point. Physics hasn’t had a major breakthrough in over 50 years, and it stalled at a time when we thought merging Quantum Mechanics and Relativity was just around the corner. I think this is why so many experts are hesitant to say AGI is a few years away.
YouTube played an ad just as this channel said it was about to play a video showing the AI generating audio at the same time as video... it actually took me a second to realize what had happened (but not before I thought: "damn... this is really good!")
I guess the reason behind Hopfield and Hinton winning physics is simply that cutting-edge physics is at a standstill atm. Nothing Nobel-worthy has happened in pure physics in the past couple of decades that hasn't already won. I think it was a good pick. AI is used heavily for analysis in practical physics.
Frontier Physics research lost touch with reality about 30 years ago. Einstein won the 1921 Nobel Prize not for his theory of General Relativity but for his explanation of the photoelectric effect, where he tied early theoretical quantum mechanics to practical experimental observations. Today cutting-edge physics has become a branch of pure mathematics: String Theory, M-theory, and a whole host of other TOEs like loop quantum gravity, none of which have practical, experimentally testable proposals, and won’t for centuries, if ever. The problem with ‘AI outsiders’ winning the Physics Nobel is that it’s not for actual physics research linked to direct experimental observations; it’s basically for the development of algorithms to process and interpret data. I know some physicists who are completely gobsmacked at this choice of winner, not because the awardees were not physicists, but because the award is for something completely unrelated to basic physics as an experimental science.
@@glenyoung1809 Yeah, fundamental Physics is in a bit of a tough spot. The LHC was well-motivated because we knew from unitarity that either the Higgs was light enough to be found by it OR some new Physics phenomena would be found below about 1 TeV. Well, we found the Higgs; now the Standard Model is complete. We know there must be Physics beyond the Standard Model because there are things it doesn't account for, but the energy scale where we know *for sure* new Physics must appear is more than 10 orders of magnitude greater than what the LHC can probe. It's entirely possible we might blow the budget and build a 100 TeV collider and not discover any new particles or phenomena; that outcome is consistent with the Standard Model.
I paused so much and went reading the source; YouTube should be showing you 1.5h of watch time from me. Thanks for the video, I didn't know about this report.
20:30 "Break-out" status would mean it goes mainstream, much like when ChatGPT broke out of academic circles and into mainstream consciousness.
I'd love the video game bit to be true, but so far I've not seen it. Either it's about generating in real time (which also ends up not being much of a "game", because you inevitably need to babysit the AI sometimes) or it's an excuse for cutting corners and making cheap, bad games. I very much hope it is true though.
I still don't get why he's being so conservative on when that will happen. He does come out and say 5 to 20 years but 5 is less likely. I'd say 5 is a good guess but it looks bad to be wrong so maybe he doesn't want to commit.
Because he's not overhyping it, I guess? Sure, it can do any exact science better than you, but it has a hard time understanding what happens when you turn a cup with a ball in it upside down (yes, I know, o1 can sometimes guess this correctly now, but you get the point). When they say smarter they mean absolutely smarter in every way possible.
At the moment AI is like a calculator. It can do many tasks way way better than humans. For example solving PhD level physics tasks. If AI were smarter than humans though, why would OpenAI and Google not replace their workforce with AI?
about the break-out part on the 10th prediction. A quick google search gave me this definition: "General Usage: In everyday language, a break-out can describe a sudden and notable event or achievement, such as a breakout performance in a movie or a breakout hit song."
Spending $3 billion on training is already preposterous by itself, but it becomes even more ridiculous when you remember that those costs are probably heavily subsidized by Microsoft, and the "real" market cost of that training is considerably higher.
Philip... before I watch this... I've been racking my brain trying to understand how fast we are going, how far we've come. AI is a doozy, man. Your videos are so cool, dude. What would happen with another step-change in AI reasoning....
Aspirational brainstorming notes, that was apt. I understand what you're saying about not wanting to find sources for useful data, but that may be the only way to limit model collapse if too much synthetic data is consumed and golden data is required instead.
A massive thanks to you, once again, for delivering the best AI news on the platform. Seriously, I really love your content and every video you upload is an almost instant "stop whatever you're doing and check it out cuz it's going to be great".
Why do we listen to CEOs with no education and no true subject matter expertise as if they are thought leaders? They merely ride on the backs of people like Ilya and regurgitate what they are told.
@@HUEHUEUHEPony yeah if Sam was responsible for giving us GPT when we otherwise wouldn't have gotten it, that's a great, positive achievement, no matter what he actually is like as a person. Nothing more scary than having all the rich and powerful people keeping the best models to themselves and us only getting scraps. It is of course the case to an extent, but much less so than it would've been without releasing GPT. Could still have been private at this point
Worth noting: those "B200" (Blackwell) GPUs are only 2-3x faster at training, but reportedly they are over _10 times faster_ at inference. The cost of test time compute is about to drop dramatically.
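A rough sketch of what that asymmetry implies for relative costs, assuming cost scales inversely with speed at fixed hardware spend (the 2-3x and 10x figures are the reported claims above; the baseline numbers are purely illustrative):

```python
# Illustrative only: how a ~10x inference speedup vs a ~2.5x training speedup
# shifts relative costs, assuming cost per unit of work scales as 1/speed.

baseline_train_cost = 100.0   # arbitrary units per training run
baseline_infer_cost = 1.0     # arbitrary units per million tokens served

train_speedup = 2.5           # midpoint of the reported 2-3x
infer_speedup = 10.0          # reported inference speedup

b200_train_cost = baseline_train_cost / train_speedup   # 40.0
b200_infer_cost = baseline_infer_cost / infer_speedup   # 0.1

# Inference gets ~4x cheaper relative to training under these assumptions.
relative_shift = infer_speedup / train_speedup
print(f"inference cheapens {relative_shift:.1f}x more than training")
```

In other words, even if training budgets stay flat, the economics of test-time compute (lots of inference per query, as with o1-style models) improve far more than headline training speedups suggest.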
I don’t know if AGI will be achieved next year, but what I do know is that a massive amount of work is going to be automated very quickly. And I’m not talking about low-level unskilled labor; I’m talking about everything. I wouldn’t be surprised if paid overtime hours disappear from most jobs entirely. Rather, most people will be reduced to 30-hour work weeks or possibly even less as time goes on, and companies might stop hiring new employees within just the next few years. All of this is very possible even without the achievement of AGI, depending on how many and how advanced the narrow AIs become, and if there are intermediary agents working to connect these systems. I think people overestimate what it will take to radically change our economic system, as it is significantly less resilient to these radical technologies than most people realize.
Let's be real: an o1 (not the preview) or an o2 at the scale of GPT-5 will basically be a rough and unpolished AGI. If you hook it up to something like Hugging Face so it can choose and use all the tools available, we might have a big economic problem.
i think the most interesting thing in society that'll happen in a few years is people trying to wrap their heads around the fact that a computer program can be conscious just like them. it makes perfect sense too. both llms and our brains work based on a neural network, and consciousness really isn't much more than the reiteration of information about reality, perceived by our body, being fed back into ourselves. basically a computer program taking a screenshot of its own code, but 10000 times more complex.
Thanks for the video, great summary. Although I'd have to criticize 14:12: the argumentation in these clips is not very sound. The AI is going to be so great it will solve all our problems, easy peasy? You just need (continues to list several hypotheticals)... then it is easy! That is speculation beyond even PR hype. Anyone who has done some research into carbon capture, for example, knows it will take monumental political, societal and engineering work to bring planet Earth back to reasonable temperatures. This will not be done by investing a couple billion into carbon-capture magic science. The limits are still physics: pulling the CO2 back out of the air takes energy on the same order as what we got by releasing it in the first place, which means undoing the emissions of the Industrial Revolution would cost a comparable share of all the energy fossil fuels ever gave us. No magic AI is gonna solve that for us anytime soon.
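A back-of-envelope on why that physics constraint bites (all figures are rough literature values I am assuming here, not from the video or the report):

```python
# Back-of-envelope: energy cost of re-capturing CO2 vs energy released making it.
# All numbers are approximate literature values, assumed for illustration.

G_PER_MOL_CO2 = 44.0

# Burning carbon to CO2 releases ~394 kJ per mol of CO2.
# Conveniently, kJ/g equals GJ/tonne numerically.
released_gj_per_tonne = 394.0 / G_PER_MOL_CO2   # ~9.0 GJ per tonne CO2

# Thermodynamic minimum work to separate CO2 from 400 ppm air: ~20 kJ/mol.
floor_gj_per_tonne = 20.0 / G_PER_MOL_CO2       # ~0.45 GJ per tonne CO2

# Real direct-air-capture plants report roughly 5-10 GJ per tonne,
# i.e. the same order of magnitude as the energy the fuel gave us.
real_dac_range = (5.0, 10.0)

print(f"energy released burning coal: {released_gj_per_tonne:.1f} GJ/t CO2")
print(f"thermodynamic floor:          {floor_gj_per_tonne:.2f} GJ/t CO2")
```

The theoretical floor is far below the combustion energy, so capture is not strictly forbidden by thermodynamics; but real-world capture energy sits in the same ballpark as the energy originally extracted, which is why scale, not cleverness, is the bottleneck.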
Exactly, these emperors have no clothes and it's ridiculous they don't get pushed back on when they make such stupid statements in public. AI may solve climate change _for them_ because they will be so rich they can shield themselves from its effects. But the rest of the planet....
9. I'll bet on a collaboration: a paper by a human & AI scientist will be accepted at a major ML conference! And thank you so much for a brilliant and balanced analysis of the AI landscape!
Your ability to provide a deep and well-rounded analysis is phenomenal! I was willing to purchase AI Insiders at the original price, but now it's even more affordable, just subscribed. Keep up the great work!
I have to start off with the fully read 212-page report: and I thought I liked to consume information. Heavy day, Philip. Now on to the information, which is this run for the AGI north star without a thought in the world about money. Interesting times indeed. 4:21 I like the idea of just giving the prize and the million to AI and seeing what comes out of it. Great work by the University of Toronto having Geoffrey Hinton participate in this event on the day he was informed of winning the prize. As a fan of Western civilization I find it small comfort that China is lagging, but a good thing all the same. 13:11 I can see that individually packaged sets of data may not be of high value; it would take a number of creators working together to generate data worth a buck. 14:11 In the same stroke, though, wouldn't creators then have to pay for the data they use to create? 17:38 Will jailbreaking still be relevant in the age of quantum computing? Thank you for sharing your time, work, knowledge and experience, Philip. Be well, peace.
nice overview and news drop and insights, thanks! the thing about exponential development is that graphs represent it as a line going zooooop, when it's more akin to the fanning out of a root system or neural network (and what happens when we add another dimension: enter quantum computing and/or superconductors?). the amount of nuance in the output of such unfathomable systems is so incredibly complex that it is by far easier to underestimate the possibilities than to predict them. things are only just beginning to reach that level of hyperweirdness, which will eclipse the dreams and nightmare visions of every single sci-fi seer who came before us. ps: people are afraid of a Frankenstein's monster destroying the world, yet fail to recognize the irony: be too strict for too long when raising a child and they'll fuck you up every time
Given what you've said about an AI Scientist research paper, it could become a true prediction after the fact: humans submit a paper that is later exposed as essentially the product of an AI Scientist.
I'm surprised that Geoffrey Hinton gives such a large timeframe for surpassing human intelligence. 5-20 years seems like a very long time, given the rate of advancement we've seen lately.
Even cutting-edge AI still makes "mistakes" that would be embarrassing for a smart child. IMHO too much attention is paid to what it can do and not enough to where, and how often, it goes wrong. It can't be regarded as surpassing human intelligence until we give up checking it.
I’ve noted a somewhat dramatic improvement in your content and delivery. May I ask: to what extent are you yourself using AI to produce or improve the script and voice over of your videos?
I think he means that there's enough hardware already created or in production and enough algorithmic improvements already found and more to come, that it is basically a certainty that we'll get that much of an increase. There's a bigger chance we'll hit a wall of some kind in terms of power, training data, etc after that point. We might not hit a wall then either, but it is less certain at that point.
Increasing electric bills are no trivial bit! Key markets in the US are absolutely blowing up, and the spiking cost of the wholesale power commodity "capacity" is driving up rates for everyone everywhere, before we even talk about energy prices. I see it now particularly in the Midwest US, the mid-Atlantic PJM region, and NY is coming too. We need fusion deployed everywhere, like, yesterday.
One interesting bit of Northeastern US news: they are repowering an undamaged part of Three Mile Island, a nuclear power plant in PA, and apparently Microsoft has an agreement to offtake all of the energy generated from the plant. It's hard to overstate how f****** crazy it is that ONE company can have an offtake agreement for an ENTIRE nuclear generator. That's so much electricity. Load (how much electricity we all collectively use) has been flat or declining pretty much my whole life, mostly because of energy efficiency. But now the reports that the grid operators/planners put out are forecasting increasingly worse grid conditions, and with every release the dynamics get more dramatic. Like, conservative grid bureaucrats are saying there may very well be electricity SHORTAGES in NYC if certain crucial construction projects don't hit deadlines. Before 2030. We're entering a whole new world of energy consumption at the same time we need to transition the not-so-resilient grid to cleaner fuels. No one knows how this works.
Actually, you do not need to be superintelligent to understand that it is not a good idea to push a non-linear system out of its stable state. Solving the problem is fairly easy, but we have to leave our comfort zone and stop burning fossil fuels, switching completely to nuclear, solar, wind and batteries. This isn't wanted until economic thresholds make it imperative (which has already been achieved) and fossil-fuel lobbying has been pushed back (which seems to be a huge obstacle).
"OpenAI won't make a profit until 2029" sounds very strange. This means that, until 2029, OpenAI won't build an AI smart enough to replace millions of knowledge workers. Very strange. By 2029, we will probably have AGI. As always, thanks a lot for a very insightful video 👍
10:46 I'd agree, although Kling has shown it's one of the best video models (maybe because other companies are hesitant to release theirs), and with DeepSeek, Yi, etc., China is a big contributor to artificial intelligence research. They have a lot of talent.
Putting aside existential risk, it is a good point that variations of "impersonation" will become a bigger and bigger problem in the near term. For still images, many tools are already widely dispersed and cheap to use. Audio and video capabilities are rising fast. There are already stories in the news every few weeks about students "undressing" other students or teachers. Inpainting is a legitimate feature of AI image tools, but it is dual-use in that sense. We can already translate a video of a person into another language with the same voice and new lip movements; that means we can make them say something else entirely just as easily. It's just going to get worse as the tools become easier to use and more people become aware of them. It's easy enough to say "well, don't trust anything you see online," but in practice it is going to be a big mess.
The 3B that OpenAI spent on training could be on all kinds of models. Sora, the next Dall-E, GPT4o, GPT5, the voice mode, o1, stuff that has not been announced yet. Hard to say whether one single model got a full 1B there. I personally don't think so. But maybe GPT5's final cost can be 1B+?
Tools like BrainLM will be used by insurance companies to determine whether or not they will pay for your medications. They will also be used by doctors to determine if you get those medications at all. And your "brain activity scans" will be stored in a database somewhere and will represent your new fingerprint, because it's unique. A hackable database I might add.
Immediately liked, shared to Discord, posted on the Reddit singularity sub: whatever can be done to spread Philip doing the Lord's work for AI journalism! Like ol' Dave Shapiro said, cheers! ❤
Whether we like it or not I guess... AGI seems to be coming. What should we do, Philip? I feel like it's the movie Independence Day coming. I know you avoid exaggerations but Dario's letter today further solidifies this stuff.
15:34 Ugh, I can’t tell you how much it gets under my skin when these AI gurus (e.g., Eric Schmidt, Sam Altman and Ilya Sutskever) pontificate about how AI will “solve” climate change. The problem with carbon capture is not one of “efficiency.” As Stanford professor of civil and environmental engineering Mark Z. Jacobson says, “Even if you have 100 percent capture from the capture equipment, it is still worse, from a social cost perspective, than replacing a coal or gas plant with a wind farm because carbon capture never reduces air pollution and always has a capture equipment cost. Wind replacing fossil fuels always reduces air pollution and never has a capture equipment cost.” You have to include upstream emissions: those emissions, including from leaks and combustion, from mining and transporting a fuel such as coal or natural gas. And, further, getting captured carbon to storage sites could require extensive pipeline networks or even shipping fleets. These guys should really stay in their lanes, and they _never_ do.
I don’t think this is a totally fair take. I understand that it’s difficult to capture the full nuance of the situation in a single YouTube comment, and it likely doesn’t represent the entirety of your views; regardless, I do feel the need to make a few notes. - Wind power is a bit dicey as a source of energy. It’s on the way out, to be replaced by other power sources in many cases, and it has a lot of unfavorable issues, primarily related to maintenance, though also a few ecological ones. Notably: wind power is extremely taxing to maintain, and while it is a fairly clean source of power, and I certainly could see innovative implementations (setting up old PC fans on farms as wind vanes, kite-based wind power, wave power which I contend is an indirect form of wind power, etc), these implementations often have their own issues and a limited potential market capacity. Wind power also kills a surprising number of people per gigawatt-hour produced, and turbines often kill a small number of birds per year, which doesn’t sound bad until you realize that the birds killed are typically fairly important to the ecosystem, and are often of rarer categories (birds of prey) which have an outsized impact on the ecology of a region compared to the number removed. - Efficiency with carbon capture isn’t necessarily the issue that AI will solve to help with climate change as such. There are a lot of avenues by which advanced intelligence could improve our round-cycle carbon impact as it relates to energy use. One of the main problems we have is that there’s almost always a more efficient way to generate power in a given region, but we often can’t tailor power generation techniques to local conditions due to how our current economy functions with economies of scale. We often have to select the most overall efficient solution, which lacks nuance and consideration of potentially abundant local sources of energy.
What happens when AI manufacturing hits in a big way, and the cost of maintaining power infrastructure goes down, while the cost of new installations does the same? What if we can do more customized power generation solutions to local conditions? What if we can source more innovative materials for every industry (including power) that we just couldn’t have counted on a human finding because finding that material required too much human effort? What happens when we find novel chemistries, or biological processes that can make certain steps in the overall power pipeline easier? This isn’t crazy talk; AI is accelerating rapidly, and is uniquely positioned to augment our ability to solve a variety of technical and engineering problems, of which many have solutions that would benefit the power industry in a very general sense. It’s not that AI has to succeed such that it is super human in every one of these areas. It just takes a combination of a few, and augmenting humans to do more in the rest. When power generation capacity gets cheaper to produce in every step of the pipeline, and offsets the greenhouse emissions that AI initially caused, as well as emissions that we had no way of dealing with without AI, suddenly the debate gets a lot more nuanced. - Let’s say AI is useless in the power field, which I think is insane to propose, but let’s go with it. The increased pressure on the power grid is forcing us into more practical solutions that we haven’t been able to get public will for in a long time, notably nuclear power. 
Realistically, there’s no reason we couldn’t have solved the power portion of the ecological issues we’re facing by using nuclear power, but public will has gone against it due to the scare factor (in spite of coal power plants outputting, on average, orders of magnitude more radiation than comparable nuclear facilities, but I digress). Even if AI produces no new solutions or technological innovations, it’s possible that it may force us to choose technologies that actually work, regardless. - AI will likely have huge impacts in areas of ecological harm beyond power-grid emissions. What about wildlife monitoring, advanced genetic sequencing and monitoring, genetic engineering, holistic ecosystem modeling, etc.? There’s a huge number of areas where AI will augment us, and I think it’s not unrealistic to propose that some of those could have very real, very positive impacts that make it difficult to put a precise number on the overall benefit or detriment of large-scale AI use. - It’s entirely possible that AI will fundamentally alter the way our society functions. As more and more jobs are automated, without guarantee of replacement, I think there will be more pressure to move away from a consumerist capitalistic economic model and into something new. It could be post-labor economics, post-scarcity, or what have you, but with humans augmented so heavily by automated assistance, consider the pattern of people slaving away for eight hours at an office job, having a two-hour commute, coming home with no time or energy for chores, and then going out to buy random junk from department stores (which I view as possibly one of the biggest climate issues we have at the moment) to some psychologically fulfilling end that they never quite reach….
I think it’s pretty straightforward to suggest that such a paradigm is coming to an end, and while I can’t say if the next paradigm will be better or worse, I can definitely say it’s not unreasonable to roll the dice given the low that we’re currently in.
@@novantha1 I actually think that’s a very fair take to my (perhaps) unfair one. While Jacobson talks about wind power in that quote, in other places he also talks about solar and water. I just mentioned his beef with carbon capture (which he says is about 10-11% efficient, as of 2019, once you add in all the other things like upstream emissions) because that was Ilya’s idea of a winning example. I agree: I think that AI will play an important, perhaps vital, role in combatting climate change. _My_ issue with these guys talking about it the way they do is (1) _they_ chose to talk about AI in terms of “solving” the climate crisis. It’s not coming up with solutions that “augment” us; it’s, at best, magical thinking and, at worst, hype, where AI steps in in an almost messianic role. And (2), worse, this kind of technocratic take fundamentally misconstrues the issue. It’s _not_ a technocratic problem that we simply don’t know the answer to (and that super-duper AI, in its infinite wisdom, will figure out), though these guys _consistently_ talk in those terms; it’s far more a systems issue involving politics and all that. (See, for example, China, where just under 60% of new car registrations are electric vehicles, with 10 million charging stations. China wasn’t asking AI what to do there. Meanwhile, the US share of electric vehicle registrations is under 10%, one of the reasons being that charging stations are so hard to find.) And Jacobson, again, says “We have 95% of the technologies right now that we need to solve the problem” of climate change. That doesn’t mean that AI can’t help us do things _better,_ of course it can; it just means that the fact that these guys think we just have to “find the answer,” because AI will be able to give us answers, speaks to how clueless they are.
@@M7k8b012 Thanks! I’m glad my comment voiced what you were thinking and feeling. The _other_ thing that I didn’t mention is that, along with the magical thinking that AI will come up with “the answer” that will “solve climate change,” there is the assumption underlying Ilya’s example that _nothing has to change._ We can maintain our high-consumption lifestyles, with their enormous carbon footprints, and “efficient” carbon capture will just suck all that excess CO₂ out of the air and _everything will be okay._ It’s a fantasy, at once infantile and élitist, and a dangerous one, because it reinforces the status quo when, in reality, we need to change radically (and that, in fact, might be what super-duper AI, if it comes about, will actually tell us). And these are (in their minds) the Smartest People in the Room who are supposed to lead us into our Glorious AI Future. It’s downright scary.
Jaron Lanier makes a very persuasive argument in favour of AI systems having a kind of provenance record for the sources of information that were most important to them in learning about particular subjects and giving particular answers. Essentially the AI would be crediting high-quality information sources. This would then make it possible to pay “royalties” to the creators of high-quality information, which in turn would incentivise people to create more such material, creating a virtuous circle of increasing quality in digital content. Lanier is well worth listening to on this aspect of AI. It doesn’t look as though anyone is heeding his advice, however!
Hey, I have been viewing you for a while (from another account), and I'm glad you're actually progressing on quality, and you finally have timestamps (or I just missed them before). I would say there is only one downside to your videos: whenever I'm watching or commenting on one, I get BOMBARDED BY SHITTY AI CHANNELS IN MY FEED >< Thanks for the video
Someone like MS, who surely has vast numbers (if not the overwhelming majority) of their GPUs doing AI for public or corporate web service, should announce that one day, like Sunday each week, ALL GPUs will be taken OFFLINE and used SOLELY to continue building and testing for a SUPER SCALE implementation with the goal of AGI. Imagine the model running on +/- 1 MILLION GPUs...
No one dares have this positive thought: What if superior AI intelligence is totally beneficial and it naturally works to help us? There is no scientific rule that superior intelligence automatically disenfranchises less intelligent entities. It may be that superior AI will automatically use its skills to bring us up to its level. A symbiotic relationship between humans and the AI we create is totally possible. I apologize for not showing existential fear and dread.🎉🎉🎉
I think that this is a fine opinion to have. It's just that if this is the case then what is there to worry about anyway? If ASI is just really nice and helps everyone, then there is nothing we can do to fuck things up, so it's kind of a solved possibility. If it happens, it happens. But just because this is a POSSIBILITY doesn't justify any positive behavior in and of itself. One would need to demonstrate why your opinion is LIKELY if one were to act on this opinion in any way.
Totally agree… but there’s no rule that it’ll be benevolent either. Until we have a really good understanding of how these models work, color me worried. The science is better than it ever has been, but capabilities are outstripping interpretability research.
Continued existence and the capability to affect the world are strongly convergent instrumental goals for the AI, needed to achieve any other goal, which suggests that pure altruism is unlikely. Optimists really should watch all of Robert Miles's videos.
Oh boy, another AI toy in pika dot art, these things have been popping up left and right, but I'm not complaining! Thanks for the status update, I never get enough of your videos!
Two points: Consider using o1 with AI Scientist. An AI Scientist-created poster session that's also presented and published as a conference paper could be viable. While it might not meet the standards for top-tier conferences like ICLR, NeurIPS, ICML, or CVPR, it could be suitable for next-tier conferences and certainly for lesser but still well-regarded conferences. Ethics: This is a serious concern. Academic fraud is already prevalent in many areas of research. Where have you observed significant instances of such misconduct? Speaking of China: How would an invasion of Taiwan affect OpenAI's market cap? Is it related to chip access? Counterintuitively, an invasion might actually increase OpenAI's value, not decrease it. The Pentagon would likely have a strong incentive to accelerate the development of ASI if faced with the risk of a totalitarian regime achieving it first.
“The value of OpenAI will double, absent an invasion of Taiwan by China” It seems naive but I hadn’t really considered how destabilizing the rapid AI progress on the global political system could be. If the US continues to pull ahead in the AI race, then an invasion of Taiwan will almost certainly happen in the next few years.
The Simpsons, long recognized as a modern day Nostradamus, explained what is about to happen long ago; "The wars of the future will not be fought on the battlefield or at sea. They will be fought in space, or possibly on top of a very tall mountain. In either case, most of the actual fighting will be done by small robots. And as you go forth today remember always your duty is clear: To build and maintain those robots."
@@Gafferman It's from the show community, there is an episode where Pierce is trying so hard to coin the phrase "streets ahead" as a new slang that he claims to have started, and is rather unsuccessful. And of course, quite ironically, it then became a very popular expression among fans.
@@noone-ld7pt oh I see! Thank you for the context; as 'community' wasn't in title case I didn't realise you were referring to a specific thing and not us as a community on YouTube!
@@iverbrnstad791 An ASI "aligned" with a species constantly killing each other at scale and seemingly culturally enslaved to short-term corporate profit goals is a terrifying concept
Thanks for being the best source of easy AI news
💯 agree
I'm reading “easy” as “reliable”, which is also “refreshing”.
Cheers both
💯 agree x2
I like it when you say, “Yes, I read it all.” it always brings a smile to my face.
You didn't peruse or scan through it; you read it all! 👏🏾👏🏾👏🏾
AI Explained: the ONLY channel where I click like first, and then watch all the way through, every time. Great work, Philip.
Thanks Rob!
Excellent work, Philip. Cheers!
I thought you dont like AI no more??!??!
@@dad1844 he never said that, simply not gonna make videos on it
@@Akuma.73 except when he does, of course.
Here comes the personal chief AI officer. Pfffft
Hope you're doing well Dave. Cheers
Ever since AlphaGo and other projects completely defied expert predictions by multiple decades, I've realized even the experts have no clue what the future holds with this technology. Truly, we are at the base of a massive exponential trend that will likely peak mid to late century, but at this point it would not surprise me if that peak arrived much earlier than anticipated.
Yeah the problem is, if you’re an expert, you have a good estimate based on current rates adjusted for specific KPIs. This doesn’t factor in groundbreaking leaps in technology though, which is why AGI timelines have shifted so much: as recently as 2019, 55% of experts thought AGI would either occur after 2060 or never occur at all.
Kurzweil always knows better than engineers
Keep in mind that the momentum can be stopped at any point. Physics hasn’t had a major breakthrough in over 50 years and it stopped at a time that we thought Quantum Mechanics and Relativity being merged was just around the corner. I think this is why so many experts are hesitant to say AGI is a few years away.
@@MrBruteSmasher That's definitely not true. Tons of physics breakthroughs happen all the time.
YouTube played an ad just as this channel said it was about to play a video showing the AI generating audio at the same time as video... it actually took me a second to realize what had happened (but not before I thought: "damn... this is really good!")
Thanks so much for your work here. The jailbreaking aspect is one I hadn't been keeping myself refreshed on in particular.
phillip ai continues to be the frontier model for parsing and analyzing the latest papers in AI news
Excellent insights, as usual 🙂
Thanks Mark
I guess the reason behind Hopfield and Hinton winning physics is simply because cutting-edge physics is at a stand-still atm. There is nothing Nobel-worthy from the past couple of decades in pure physics that hasn't already won. I think it was a good pick. AI is used heavily for analysis in practical physics.
AI will enable physics to continue to advance.
@@squamish4244 but Nobel prizes aren't awarded for future physics progress
Frontier Physics research lost touch with reality about 30 years ago.
Einstein won the 1921 Nobel Prize, not for his theory of General Relativity but for his explanation of the Photoelectric effect where he tied theoretical early quantum mechanics to practical experimental observations.
Today cutting-edge physics has become a branch of pure mathematics: looking at String Theory, M-theory and a whole host of other TOEs like loop quantum gravity, none of them has practical, experimentally testable proposals and won’t for centuries, if ever.
The problem with ‘AI outsiders’ winning the Physics Nobel is that it’s not for actual Physics research as linked to direct experimental observations, it’s for basically the development of algorithms to process and interpret data.
I know some physicists who are completely gobsmacked at this choice of winner, not because these awardees were not physicists but because the award is for something completely unrelated to basic physics as an experimental science.
@@glenyoung1809 Yeah fundamental Physics is in a bit of a tough spot. The LHC was well-motivated because we knew from unitarity that either the Higgs was small enough to be found by it OR some new Physics phenomena would be found below about 1 TeV. Well, we found the Higgs, now the Standard Model is complete. We know there must be Physics beyond the Standard Model because there are things it doesn't account for, but the energy scale where we know *for sure* new Physics must appear is more than 10 orders of magnitude greater than what the LHC can probe; it's entirely possible we might blow the budget & build a 100 TeV collider and not discover any new particles or phenomena, that outcome is consistent with the Standard Model.
@@glenyoung1809 very well put, totally agreed.
I paused so much and went to read the source; YouTube should be showing you 1.5h of watch time from me. Thanks for the video, I didn't know about this report.
Thanks alpha, grateful!
Amazing work to summarise this mass of data so quickly and succinctly! Thank you, Philip, for making really informative content on AI development!
Dude, standing ovation. Thank you.
Thank you freyna!!
20:30. "Break-out" status would mean it goes mainstream, much like when ChatGPT broke out of academic circles and into mainstream consciousness.
"Goes mainstream" is still not quantified. "Achieves 200M monthly users" would be.
I'd love the video game bit to be true, but so far, I've not seen it. Either it's about generating in real time (which also ends up not being a "game" so much because you need to babysit the AI sometimes inevitably) or it is an excuse for cutting corners and making cheap, bad games.
I very much hope it is true though
omg I love all the Pikaffects throughout 🤣🤣
Great work! And thanks for the lead
This channel is excellent! Thank you for all of your hard work 🙏🏽
Running an auto-encoder on brain stimuli is kinda crazy
Streets ahead indeed 👌
😂👍
“Streets Ahead”, a fellow community fan 😂. Great work as always.
The guy is talking about AI being smarter than us in 5-20 years? Mate it's been smarter than me for two years now lol
I still don't get why he's being so conservative on when that will happen. He does come out and say 5 to 20 years but 5 is less likely. I'd say 5 is a good guess but it looks bad to be wrong so maybe he doesn't want to commit.
Because he's not overhyping it I guess? Sure it can do any exact science better than you, but it has a hard time understanding what happens when you turn a cup with a ball upside down (yes I know, o1 can sometimes guess this correctly now, you get the point).
When they say smarter they mean absolutely smarter in any way possible.
Bro, it doesn't know how many R's are in the word strawberry. How stupid are you?
At the moment AI is like a calculator. It can do many tasks way way better than humans. For example solving PhD level physics tasks. If AI were smarter than humans though, why would OpenAI and Google not replace their workforce with AI?
@@drakey6617 something to look out for in the coming years
about the break-out part on the 10th prediction. A quick google search gave me this definition: "General Usage: In everyday language, a break-out can describe a sudden and notable event or achievement, such as a breakout performance in a movie or a breakout hit song."
It's still a squishy definition. Is something a breakout if it's mentioned in 10 unrelated TikTok videos?
Spending $3 billion on training is already preposterous by itself, but it becomes even more ridiculous when you remember that those costs are probably heavily subsidized by Microsoft, and the "real" market cost of that training is considerably higher
14:20 "Aspirational brainstorming notes" 😂😂
My friend you are crazy fast with this stuff
“Aspirational brainstorming notes” 🤭😊
that guy really just said "the problem's so big that there is no point trying" lol
Philip... before I watch this... I've been wracking my brain trying to understand how fast we are going, how far we've come. AI is a doozy man. Your videos are so cool dude. What would happen in another step-change in AI reasoning....
Aspirational brainstorming notes, that was apt.
I understand what you're saying about not wanting to find sources for useful data, but that may be the only way to limit model collapse if too much synthetic data is consumed and golden data is required instead.
The Community reference got me to like this video.
And the video itself is great!
Amazing quality as always! Thank you so much!
Thanks for the vid! And by the way, all sources for figures etc. in the state of AI report are mentioned in the notes of the downloadable PPTX file.
A massive thanks to you, once again, for delivering the best AI news on the platform. Seriously, I really love your content and every video you upload is an almost instant "stop whatever you're doing and check it out cuz it's going to be great".
Powerful last point
My whole family could use brain LM right about now 😂 great video phil!
11:17 “Stop trying to make streets ahead a thing, Pierce”
It’s very clear to me that Ilya was the heart and soul of OpenAI while Sam was P.T. Barnum.
It's so much worse than that.
@@spoonfuloffructose PT Barnum was a horrible person, so...
Ilya didn't want to give AI to the masses tho
Why do we listen to CEOs with no education and no true subject matter expertise as if they are thought leaders? They merely ride on the backs of people like Ilya and regurgitate what they are told.
@@HUEHUEUHEPony yeah if Sam was responsible for giving us GPT when we otherwise wouldn't have gotten it, that's a great, positive achievement, no matter what he actually is like as a person. Nothing more scary than having all the rich and powerful people keeping the best models to themselves and us only getting scraps. It is of course the case to an extent, but much less so than it would've been without releasing GPT. Could still have been private at this point
Love your accent! Easier to understand for a non-English speaker. Keep up the good job! ❤
Worth noting: those "B200" (Blackwell) GPUs are only 2-3x faster at training, but reportedly they are over _10 times faster_ at inference.
The cost of test time compute is about to drop dramatically.
Thank you! Excellent as always.
Hey your videos are awesome just one request please make detailed video on future of AI
I don’t know if AGI will be achieved next year, but what I do know is that a massive amount of work is going to be automated very quickly. And I’m not talking about low-level unskilled labor; I’m talking about everything. I wouldn’t be surprised if paid overtime hours disappear from most jobs entirely. Rather, most people will be reduced to 30-hour work weeks or possibly even less as time goes on, and companies might stop hiring new employees within just the next few years.
All of this is very possible even without the achievement of AGI, depending on how many and how advanced the narrow AIs become, and if there are intermediary agents working to connect these systems. I think people overestimate what it will take to radically change our economic system, as it is significantly less resilient to these radical technologies than most people realize.
👏 👏 👏
Thanks! Excellent content, as always. 🙏🏼
Let's be real: an o1 (not preview) or o2 at GPT-5 scale will basically be a rough and unpolished AGI. If you complete that with a Hugging Face-style setup in which it can choose and use all available tools, we might have a big economic problem
Excellent video, as always!
i think the most interesting thing in society that'll happen in a few years is people trying to wrap their head around the fact that a computer program can be conscious just like them. it makes perfect sense too. both LLMs and our brains work based on a neural network, and consciousness really isn't much more than the reiteration of information about reality that was perceived by our body being fed back into ourselves. basically a computer program taking a screenshot of its own code, but 10000 times more complex.
Thanks for the video, great Summary.
Although I'd have to criticize 14:12. The argumentation of these clips is not very sound. The AI is gonna be so great it is gonna solve all our problems, easy peasy?
The argument runs: "you just need..." (continues to list several hypotheticals) "...then it is easy!" That is speculation beyond even PR hype.
Anyone who did some research into Carbon Capture for example knows that it will take monumental political, societal and engineering work to terraform our planet Earth back to reasonable temperatures. This will not be done by investing a couple billion into carbon capture magic science. The limits are still physics and you have to put in MORE energy than you got by releasing the CO2 in the first place, which means we have to invest more energy than we got from fossil fuels since the Industrial Revolution. No magic AI is gonna solve that for us anytime soon.
Exactly, these emperors have no clothes and it's ridiculous they don't get pushed back on when they make such stupid statements in public. AI may solve climate change _for them_ because they will be so rich they can shield themselves from its effects. But the rest of the planet....
9. I bet a collaboration between a human and an AI scientist will be accepted at a major ML conference! And thank you so much for a brilliant and balanced analysis of the AI landscape!
such good journalism!!
Excellent video - thank you! 🙏🙏
Your ability to provide a deep and well rounded analysis is phenomenal! I was willing to purchase the original priced AI Insiders, but now its even more affordable, just subscribed. Keep up the great work!
Thanks Derrick! Can always upgrade to the pod as well one day when you can more easily afford everything!
I have to start off with the fully-read 212-page report; and I thought I liked to consume information. Heavy day, Philip. Now on to the information that is the run for this AGI north star without a thought in the world about money, interesting times indeed. 4:21 I like the idea of just giving the prize and the million to AI and seeing what comes out of it. Great work by the University of Toronto having Geoffrey Hinton participate in this event on the day he was informed of winning the prize. As a fan of western civilization I find it small comfort that China is lagging, but a good thing all the same.
13:11 I see that packaging individual sets of data may not be of high value; it would take a number of creators working together to generate data for a buck.
14:11 In the same stroke though, wouldn't creators then have to pay for the data to create?
17:38 Will jailbreak still be relevant in the age of quantum computing?
Thank you for sharing your time, work, knowledge and experience, Philip. Be well, peace
nice overview and news drop and insights, thanks! the thing about exponential dev is graphs represent it as a line going zooooop, when it's more akin to the fanning out of a root system or neural network etc (what happens when we add another dimension? enter quantum computing and/or superconductors). the amount of nuance in the output of such unfathomable systems is so incredibly complex that it is by far easier to underestimate the possibilities than to predict them. things are only just beginning to get to that level of hyperweirdness which will eclipse the dreams and nightmare-visions of every single sci-fi seer come before us. ps people are afraid of a Frankenstein's monster destroying the world, yet fail to recognize the irony: as with being too strict for too long when raising a child, they'll fuck you up every time
Given what you've said about an AI Scientist research paper; it could be a true prediction after-the-fact: Humans submit paper and it's later exposed as essentially the product of an AI Scientist.
Been waiting for this omg!!!!
how do you read a 212 page document and make a video about it in the same day
Exactly! But doable
I'm surprised that Geoffrey Hinton gives such a large timeframe for surpassing human intelligence.. 5-20 years seems like a very long time, given the rate of advancements we’ve seen lately
I agree. o1-preview is already physics-graduate level.
@@executivelifehacks6747 In certain respects, yes. Very obviously not in others though 🤷♂
@@executivelifehacks6747no, it's not
It's flawed on a fundamental level: the presumption that already-outdated progress metrics are going to remain correct in the future.
Even cutting-edge AI still makes "mistakes" that would be embarrassing for a smart child. IMHO too much attention is paid to what it can do and not enough to where, and how often, it goes wrong. It can't be regarded as surpassing human intelligence until we give up checking it.
I’ve noted a somewhat dramatic improvement in your content and delivery. May I ask: to what extent are you yourself using AI to produce or improve the script and voice over of your videos?
Not at all. Videos aren't scripted aside from first 3 sentences. And voice is mine.
@@aiexplained-official Impressive! 😊
What does "the next 10k X are baked in" mean?
I think he means that there's enough hardware already created or in production and enough algorithmic improvements already found and more to come, that it is basically a certainty that we'll get that much of an increase. There's a bigger chance we'll hit a wall of some kind in terms of power, training data, etc after that point. We might not hit a wall then either, but it is less certain at that point.
@@ShawnFumo Ok, right. Thanks!
With all the focus on power, it sounds like we are getting closer to becoming a Type 1 civilization and proving Kardashev correct.
The guy who said that less intelligent beings don't generally control more intelligent ones has never owned a cat lol
what a wise man
Increasing electric bills is no trivial bit! Key markets in the US are absolutely blowing up and the spiking cost of the wholesale power commodity "capacity" is driving up rates for everyone everywhere, before we even talk about energy prices. I see it now particularly in the Midwest US, the midatlantic pjm region, and NY is coming too. We need fusion deployed everywhere, like, yesterday
One interesting bit of northeastern us news: they are repowering an undamaged part of Three Mile Island, a nuclear power plant in PA, and apparently Microsoft has an agreement to offtake all of the energy generated from the plant. It's hard to overstate how f****** crazy it is that ONE company can have an offtake agreement for an ENTIRE nuclear generator. That's so much electricity.
load (how much electricity we all collectively use) has been flat or declining my whole life pretty much, mostly because of energy efficiency. But now the reports that the grid operators/planners put out are forecasting increasingly worse grid conditions. And every release the dynamics are getting more dramatic. Like, conservative grid bureaucrats are saying there may very well be electricity SHORTAGES in nyc if certain crucial construction projects don't hit deadlines. Before 2030.
We're entering a whole new world of energy consumption at the same time we need to transition the not-so-resilient grid to cleaner fuels. No one knows how this works.
Actually, you do not need to be superintelligent to understand that it is not a good idea to leverage a non-linear system out of its stable state. Solving the problem is fairly easy, but we have to leave our comfort zone and stop burning fossil fuels, switching to nuclear, solar, wind and batteries completely. This isn’t wanted until economic thresholds make it imperative (which has already been achieved) and fossil-fuel lobbying has been pushed back (which seems to be a huge obstacle).
"OpenAI won't make a profit until 2029" sounds very strange. This means that, until 2029, OpenAI won't build an AI smart enough to replace millions of knowledge workers. Very strange. By 2029, we will probably have AGI. As always, thanks a lot for a very insightful video 👍
"This model is already conscious, he just doesn't know it"
10:46 I'd agree, although Kling has shown it's one of the best video models (maybe because other companies are hesitant to release theirs). DeepSeek, Yi etc; also China is a big contributor to artificial intelligence research. They have a lot of talent.
Putting aside existential risk, it is a good point that the variations of "impersonation" will become a bigger and bigger problem in the near term. For still images, many tools are already widely dispersed and cheap to use. Audio and video capabilities are rising up fast. There's already stories in the news every few weeks about students "undressing" other students or teachers. Inpainting is a legitimate feature of AI image tools, but it is dual-use in that sense. We already can translate a video of a person into another language with the same voice and new lip movements. That means we can make them say something else entirely just as easily. It's just going to get worse and worse as the tools become more easy to use and more people become aware of them. It's easy enough to say "well don't trust anything you see online", but in practice it is going to be a big mess.
Good video
o1 method combined with gpt5 scale indeed gonna shock random joe or maybe even us 😅
The 3B that OpenAI spent on training could be on all kinds of models. Sora, the next Dall-E, GPT4o, GPT5, the voice mode, o1, stuff that has not been announced yet. Hard to say whether one single model got a full 1B there. I personally don't think so. But maybe GPT5's final cost can be 1B+?
Tools like BrainLM will be used by insurance companies to determine whether or not they will pay for your medications.
They will also be used by doctors to determine if you get those medications at all.
And your "brain activity scans" will be stored in a database somewhere and will represent your new fingerprint, because it's unique. A hackable database I might add.
Immediately liked, shared to discord, posted on the reddit singularity sub - whatever can be done to spread Philip doing the Lord's work for AI journalism! Like ol Dave Shapiro said, cheers! ❤
Thank you man!
Brain LM reminds me of Karl Friston's free energy principle
9:32 I'm getting cross-eyed, TECHNOLOGY mannnn
I predicted that the Nobel prize for chemistry would go to Jensen Huang. I didn’t fail much. 😂
"very few examples of more intelligent things being controlled by less intelligent things" um... He has heard of Congress, right?
very nice
Does infinite-craft count as passing that 10th prediction of a game based around GenAI achieving break-out status?
Whether we like it or not I guess... AGI seems to be coming.
What should we do, Philip?
I feel like it's the movie Independence Day coming.
I know you avoid exaggerations but Dario's letter today further solidifies this stuff.
Not sure
@@aiexplained-official Insufficient Data to continue haha thanks man, I understand.
15:34 Ugh, I can’t tell you how much it gets under my skin when these AI gurus (e.g., Eric Schmidt, Sam Altman and Ilya Sutskever) pontificate about how AI will “solve” climate change.
The problem with carbon capture is not one of “efficiency”-as Stanford professor of civil and environmental engineering Mark Z. Jacobson says, “Even if you have 100 percent capture from the capture equipment, it is still worse, from a social cost perspective, than replacing a coal or gas plant with a wind farm because carbon capture never reduces air pollution and always has a capture equipment cost. Wind replacing fossil fuels always reduces air pollution and never has a capture equipment cost.” You have to include upstream emissions-those emissions, including from leaks and combustion, from mining and transporting a fuel such as coal or natural gas. And, further, getting captured carbon to storage sites could require extensive pipeline networks or even shipping fleets.
These guys should really stay in their lanes-and they _never_ do.
I can’t tell if they are just craven salesmen or they are truly that drunk on the kool aid.
I don’t think this is a totally fair take. I understand that it’s difficult to capture the full nuance of the situation in a single YouTube comment, and it likely doesn’t represent the entirety of your views, regardless, however, I do feel the need to make a few notes.
- Wind power is a bit dicey as a source of energy. It’s on the way out, to be replaced by other power sources in many cases, and it has a lot of unfavorable issues, primarily related to maintenance, though also a few ecological issues. Notably: Wind power is extremely taxing to maintain, and while it is a fairly clean source of power, and I certainly could see innovative implementations (setting up old PC fans on farms as wind vanes, kite-based wind power, wave power which I contend is an indirect form of wind power, etc), these implementations also often have their own issues and have a limited potential market capacity. Wind power also kills a surprising number of people per gigawatt hour produced, and turbines often kill a small number of birds per year each, which doesn’t sound bad until you realize that the birds killed are typically fairly important to the ecosystem, and are often of rarer categories of bird (birds of prey) which have an outsized impact on the ecology of a region compared to the number removed by wind turbine use.
- Efficiency with carbon capture isn’t necessarily the issue that AI will solve which helps with climate change as such. There are a lot of avenues that advanced intelligence could let us improve our round-cycle carbon impact as it relates to energy use. One of the main problems we have is that there’s almost always a more efficient way to generate power in a given region, but we often can’t tailor power generation techniques to local conditions due to how our current economy functions with economies of scale. We have to often select the most overall efficient solution which often lacks nuance and consideration of potentially abundant local sources of energy. What happens when AI manufacturing hits in a big way, and the cost of maintaining power infrastructure goes down, while the cost of new installations does the same? What if we can do more customized power generation solutions to local conditions? What if we can source more innovative materials for every industry (including power) that we just couldn’t have counted on a human finding because finding that material required too much human effort? What happens when we find novel chemistries, or biological processes that can make certain steps in the overall power pipeline easier?
This isn’t crazy talk; AI is accelerating rapidly, and is uniquely positioned to augment our ability to solve a variety of technical and engineering problems, of which many have solutions that would benefit the power industry in a very general sense. It’s not that AI has to succeed such that it is super human in every one of these areas. It just takes a combination of a few, and augmenting humans to do more in the rest. When power generation capacity gets cheaper to produce in every step of the pipeline, and offsets the greenhouse emissions that AI initially caused, as well as emissions that we had no way of dealing with without AI, suddenly the debate gets a lot more nuanced.
- Let’s say AI is useless in the power field, which I think is insane to propose, but let’s go with it. The increased pressure on the power grid is forcing us into more practical solutions that we haven’t been able to get public will for in a long time, notably nuclear power. Realistically, there’s not reason that we couldn’t have solved the power portion of the ecological issues we’re facing by using nuclear power, but public will has gone against it due to the scare factor (in spite of coal power plants outputting on average more radiation than comparable nuclear facilities by orders of magnitude, but I digress), but it’s possible even if AI does not produce new solutions or new technological innovations, that it may force us to choose technologies that actually work, regardless.
- AI will likely have huge impacts in other areas of ecological harm than just emissions caused by the power grid. What about wildlife monitoring, advanced genetic sequencing and monitoring, genetic engineering, holistic ecosystem modeling, etc. There’s a huge number of areas that AI will augment us in, and I think it’s not unrealistic to propose that some of those areas could actually have very real, very positive impacts that make it difficult to put a precise number on the overall benefit or detriment of large scale AI use.
- It’s entirely possible that AI will fundamentally alter the way our society functions. As more and more jobs are automated, without guarantee of replacement, I think there will be more pressure to move away from a consumerist capitalistic economic model, and move into something new. It could be post labor economics, post scarcity, or whatever else have you, but with humans augmented so heavily with so much automated assistance, I think it’s pretty straightforward to suggest the idea of people slaving away for eight hours at an office job, having a two hour commute, coming home, not having time or energy to deal with chores, and then going out to buy random junk from department stores (which I view as possibly one of the biggest climate issues we have at the moment) to some psychologically fulfilling end that they never quite reach…. I think it’s pretty straightforward to suggest that such a paradigm is coming to an end, and while I can’t say if the next paradigm will be better or worse, I can definitely say it’s not unreasonable to roll the dice given the low that we’re currently in.
@@novantha1 I actually think that’s a very fair take to my (perhaps) unfair one.
While Jacobson talks about wind power in that quote, in other places, he also talks about solar and water. I just mentioned his beef with carbon capture-which he says is about 10-11% efficient (as of 2019) when you add in all the other things like upstream emissions-because that was Ilya’s idea of a winning example.
I agree-I think that AI will play an important, perhaps vital, role in combatting climate change. _My_ issue with these guys talking about it in the way that they do is
(1) _they_ chose to talk about AI in terms of “solving” the climate crisis-it’s not coming up with solutions that “augment” us-it’s, at best, magical thinking, and, at worst, hype, where AI steps in in an almost messianic role, and
(2) worse, this kind of technocratic take fundamentally misconstrues the issue-it’s _not_ a technocratic problem that we simply don’t know the answer to (and that super-duper AI, in its infinite wisdom, will figure out)-these guys _consistently_ talk in these terms-it’s far more a systems issue involving politics and all that. (See, for example, China where just under 60% of the new car registrations are electric vehicles, with 10 million charging stations-China wasn’t asking AI what to do there-while the US number of registrations of electric vehicles is under 10%, one of the reasons being that charging stations are so hard to find.) And Jacobson, again, says “We have 95% of the technologies right now that we need to solve the problem” of climate change. That doesn’t mean that AI can’t help us do things _better,_ of course it can-it just means that the fact that these guys are thinking that we just have to “find the answer”-because AI will be able to give us answers-speaks to how clueless they are.
@@jeff__w Thank you for writing out my thoughts and frustration. The phrase "solving climate change" is nonsense, but is used a lot...
@@M7k8b012 Thanks! I’m glad my comment voiced what you were thinking and feeling.
The _other_ thing that I didn’t mention is that, along with the magical thinking that AI will come up with “the answer” that will “solve climate change,” is the assumption underlying Ilya’s example that _nothing has to change._ We can maintain our high-consumption lifestyles-with their enormous carbon footprints-and “efficient” carbon capture will just suck all that excess CO₂ out of the air and _everything will be okay._
It’s a fantasy, at once infantile and élitist, and a dangerous one, because it reinforces the status quo when, in reality, we need to change radically (and that, in fact, might be what super-duper AI, if it comes about, will actually tell us). And these are (in their minds) the Smartest People in the Room who are supposed to lead us into our Glorious AI Future. It’s downright scary.
Jaron Lanier makes a very persuasive argument in favour of AI systems having a kind of provenance record for the sources of information that were most important to them in learning about particular subjects and giving particular answers. Essentially the AI would be crediting high-quality information sources. This would then make possible the paying of “royalties” to the creators of high-quality information, which in turn would incentivise people to create more such material, creating a virtuous circle of increasing quality of digital content. Lanier is well worth listening to on this aspect of AI. It doesn’t look as though anyone is heeding his advice, however!
the strongest model in history, with the exception of o1 of course
We are getting in the passenger seat of something much smarter than us
Hey, I have been watching you for a while (from another account), and I'm glad you're actually improving in quality, and you finally have timestamps (or I just missed them before).
I would say there is only one downside to your videos: whenever I'm watching or commenting on one, I get BOMBARDED BY SHITTY AI CHANNELS IN MY FEED ><
Thanks for the video
Haha
"ERMAGERD THIS CHANGES EVERYTHING(NO MORE JOBS)!!!! *mouth agape*". Yeah, I've had to block maybe a dozen of those.
@@iverbrnstad791 Yeah, so my best solution was to just watch this channel in a private window, lol.
Streets ahead!
Someone like MS, which surely has vast numbers (if not the overwhelming majority) of its GPUs doing AI for public or corporate web services, should announce that one day a week, say Sunday, ALL GPUs will be taken OFFLINE and used SOLELY to continue building and testing a SUPER-SCALE implementation with the goal of AGI.
Imagine the model running on +/- 1 MILLION GPUs...
No one dares have this positive thought: What if superior AI intelligence is totally beneficial and it naturally works to help us? There is no scientific rule that superior intelligence automatically disenfranchises less intelligent entities. It may be that superior AI will automatically use its skills to bring us up to its level. A symbiotic relationship between humans and the AI we create is totally possible.
I apologize for not showing existential fear and dread.🎉🎉🎉
That's actually why I quit AI. I believe that everyone is working towards creating a superior benevolent system. But no one wants to talk about that.
I think that this is a fine opinion to have. It's just that if this is the case then what is there to worry about anyway? If ASI is just really nice and helps everyone, then there is nothing we can do to fuck things up, so it's kind of a solved possibility. If it happens, it happens. But just because this is a POSSIBILITY doesn't justify any positive behavior in and of itself. One would need to demonstrate why your opinion is LIKELY if one were to act on this opinion in any way.
Totally agree…but there’s no rule that it‘ll be benevolent either. Until we have a really good understanding of how these models work, color me worried. The science is better than it ever has been, but capabilities are outstripping interpretability research.
Continued existence and the capability to affect the world are strongly convergent instrumental goals for an AI, needed to achieve any other goal, which suggests that pure altruism is unlikely. Optimists really should watch all of Robert Miles's videos.
Hope for the best, plan for the worst.
Hello where are you ?
Am here, don't worry
@@aiexplained-official🙏🏼 good that all is ok
Mah boi Philip ! I’m finna dap you up mah boi. Excellent work as ALWAYS 🗿
14:54 BTW regarding climate goals: today both Germany and California are drilling new oil wells.
Oh boy, another AI toy in pika dot art, these things have been popping up left and right, but I'm not complaining!
Thanks for the status update, I never get enough of your videos!
Nice
8:10
Wait, this is the plot of the video game SOMA.
Where can I find the algorithm to replicate AI Explained? Asking for a friend
Lol @ "aspirational brain-storming notes" 😂
Two points:
1. Consider using o1 with AI Scientist. An AI Scientist-created poster session that's also presented and published as a conference paper could be viable. While it might not meet the standards for top-tier conferences like ICLR, NeurIPS, ICML, or CVPR, it could be suitable for next-tier conferences and certainly for lesser but still well-regarded ones.
2. Ethics: This is a serious concern. Academic fraud is already prevalent in many areas of research. Where have you observed significant instances of such misconduct?
Speaking of China: How would an invasion of Taiwan affect OpenAI's market cap? Is it related to chip access? Counterintuitively, an invasion might actually increase OpenAI's value, not decrease it. The Pentagon would likely have a strong incentive to accelerate the development of ASI if faced with the risk of a totalitarian regime achieving it first.
The smart guys confidently predicting "The Forbin Project" for climate... I for one welcome...
“The value of OpenAI will double, absent an invasion of Taiwan by China”
It seems naive, but I hadn’t really considered how destabilizing rapid AI progress could be for the global political system.
If the US continues to pull ahead in the AI race, then an invasion of Taiwan will almost certainly happen in the next few years.
The Simpsons, long recognized as a modern-day Nostradamus, explained what is about to happen long ago:
"The wars of the future will not be fought on the battlefield or at sea. They will be fought in space, or possibly on top of a very tall mountain. In either case, most of the actual fighting will be done by small robots. And as you go forth today remember always your duty is clear: To build and maintain those robots."
The US pushed the development of the atomic bomb because they were in a war. Why would they not push AI development for the same reason?
11:12 wait, was that a community reference?
AI explained is truly streets ahead!
It's actually a real saying in areas of the world where people have accents like his
I don't get the reference if there is one.
@@Gafferman It's from the show community, there is an episode where Pierce is trying so hard to coin the phrase "streets ahead" as a new slang that he claims to have started, and is rather unsuccessful. And of course, quite ironically, it then became a very popular expression among fans.
@@noone-ld7pt oh I see! Thank you for the context, as 'community' wasn't in title case I didn't realise you were referring to a specific thing and not us as a community on UA-cam!
@@Gafferman Yea that's my bad👋
AI is advancing so fast. We are boned.
Maybe automated alignment research will come in clutch?
*bored
Autobone.
@@Citrusautomaton Aligned to who? AI aligned with the current board of OpenAI seems about as scary as unaligned AI to me...
@@iverbrnstad791 An ASI "aligned" with a species constantly killing each other at scale and seemingly culturally enslaved to short term corporate profit goals is a terrifying concept
Everyone is a Luddite now