Honestly, the right to privacy probably will cease to exist the way we understand it. I don’t think it will happen any time soon, but that is the logical progression of tech.
@@jaredvizzi8723 What will happen is a complete takeover of our molecular biology. People are too focused on silicon technology, but with the help of AI we can create our own remote, real-time, self-assembling proteins, like mRNA does. People will turn into zombies with emotional restriction.
@@giovannifoulmouth7205 Do you really think they are off by that far? Like I have no idea but it feels weird that every big head in the industry is so wrong
@@AleksandrVasilenko93 Jobs are a means of control. There will always be jobs because those in power don't like the poor being idle and talking to each other.
@@uzomad I don’t think jobs are solely a means of control. People have had jobs (exchanging things for other things, whether that’s money or other items) for thousands of years because it’s a naturally occurring phenomenon. But capitalism does mean that people are used and exploited (as people have been for thousands of years) and kept from reaching their full potential; it’s just on a much larger scale and more refined. Lack of purpose is shown to cause depression, and a job (which can mean literally just doing a task) provides purpose. So even if all jobs are lost, we are still going to be looking for things to do. As you can see, AI has been capitalised on, and humans continue to exploit other humans, because humans are just greedy.
I got a finance newspaper in my mailbox today; it said the big tech AI hype might be over. Lol, they have no idea what's coming. They don't understand rapid acceleration.
It's the silence on agents that intrigues me. Scaling bigger models is good and all, but agentic models are going to give us the most extreme explosion of capabilities ever seen thus far. All the labs admitted to working on them. GPT4o-mini is likely meant to be used to power them. And yet, besides Devin.... nothing. It makes you wonder what exactly happened. Was it a dead end, or are they preparing a massive system shock? Because agentic models will promptly shut up the skeptics saying that the models are useless, overhyped, incapable, and outright scams. But we actually need to see them first, even as demos.
My job at a Fortune 500 company uses third-party software that comes with a GPT model that listens to our calls and summarizes them after we hang up. It also sorts calls into categories, like whether it was a voicemail, a blank or abandoned call, etc. Middle management then uses this information to crack down on employees who either weren't getting people to pick up or were hanging up before there was an answer. Edit: Just an anecdote about big-company integration. No guidelines on whether we can use GPT for our own work or not.
I think the rapid pace of advancement in AI is actually keeping companies from using it. I saw the same thing when computers started entering manufacturing. There was a whiz bang period where all this new stuff was possible, but no real leader had emerged yet. Nobody wanted to end up with a Betamax system in a VHS world so they were slow to adopt. Almost any of the AI companies could be out of business overnight if a competitor gets to AGI first.
People haven't heard about a groundbreaking new model in over 3 months, and 90% of them did a full 180 on their opinion of AI development. It's ridiculous. If you think logically about it for more than 10 seconds, you know it's ridiculous to even assume the possibility of slow AI progress...
Expect a new model every 1-2 years and that is still very rapid progress from where we are now. One can easily see that if we hit AGI by 2029, then the Singularity as Kurzweil predicted is well within reach 15-20 years later (2045-50)
People live in internet time, and 2 weeks is the max they can bear. The point is, we not only need AI models, but also frameworks or systems to apply them thoughtfully. I don't think we'll get anywhere without agents, multiple orchestrated prompts, and rich context. It's not what LLMs are for; they can't assume everything. Tbh I think even extended prompts are often shallow. Now is the time for work.
The fans of AI are so focused on the possibilities of AI and anticipate great breakthroughs, but there are still many of us who remain highly skeptical and see a huge push-back coming from those who do not want to adopt these technologies. I think the tension this will cause in society is greatly underestimated. Many of us simply do not want this, and we mean it. And, NO, resistance is NOT futile.
>If you think logically about it for over 10 seconds, you know it's ridiculous to even assume the possibility of slow AI progress... How so? There have been AI winters before. Things look good until they don’t.
@@peterbelanger4094 Especially with humanoid robots. If you want something scary, imagine something that looks like a human and has the capability of being the perfect sociopath. Robots, though probably useful beyond imagination, will be the hardest social pill to swallow in history.
What would be nice is if, instead of people making AGI predictions through hunches, they were based on the actual path to getting machines that are able to reason. I don't think anyone knows yet how to get a machine to reason, so it could be decades. The current state of the art, even for basic addition, is that the LLM recognizes something that looks like math and then passes it to a human-coded calculator.
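The dispatch pattern this comment describes can be sketched as a toy illustration (this is not any lab's actual implementation; the function names and the regex-based detection are assumptions for the sake of the example):

```python
import ast
import operator
import re

# Supported arithmetic operators for the hand-written "calculator" tool.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr: str) -> float:
    """Evaluate a simple arithmetic expression without calling eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

def answer(prompt: str) -> str:
    # If the prompt looks like bare arithmetic, route it to the
    # human-coded calculator; anything else would go to the LLM (not shown).
    if re.fullmatch(r"[\d\s+\-*/().]+", prompt.strip()):
        return str(safe_eval(prompt.strip()))
    return "(forward to LLM)"
```

The point of the sketch is that the "reasoning" about arithmetic happens in ordinary hand-written code, not in the model.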
This is one of the most valuable YouTube videos of the past few months. Thank you. It's important for all of the critics to remember that David's sampling rate is 1 year, so take the exact dates with a grain of salt.
Those who hate on this... who cares if Dave is perfectly accurate or not? These are important and fun conversations to have, and the mainstream is a long way from thinking realistically about this stuff yet. I think you're jealous that it's not you having the balls to put yourself out there in public and share your opinions. Props to Dave for tackling some of the most important topics of our time head-on and getting people thinking. I think the next 2 to 3 years of predictions are pretty good and on solid footing; however, geopolitics, and especially American politics, are so incredibly volatile that to me they are powerful wildcards that could delay and interfere with predictions like this. I did not see war in Ukraine and Palestine coming, for example. Who knows if America could degrade into civil war between right and left; it feels like it sometimes. Tech predictions would be easier to make in a world of rational adults.
The 2028-and-beyond timeline just feels too optimistic to me. Why would the ruling class decide to share the power they have amassed over the last century? I could imagine a mass surveillance state sooner than hyper-abundance, given what we've seen in history.
I agree. I think those of us who have been on this planet a few decades longer are far more skeptical about a good outcome here. Humans have always hoarded power and wealth. AI will be no different. A very few humans will control everything, and the rest of us will live in squalor. They will have advanced armies to protect themselves.
I'm working on AI research (not at a frontier lab though) but I'm working on alternatives to backpropagation through gradient descent. I agree that there is reason to think AI development might appear to slow on the front end, but on the back end there is an incredible amount of research being done and I'm fairly confident that a few breakthroughs may be on our horizon. From my perspective, AI has hardly slowed at all.
I think that people's impatience in expectation of increased LLM capabilities is a sign of how timescales have become compressed. Even taking into account concerns around escalating costs etc., it's premature to be disappointed with the progress of a rapidly developing technology just because it hasn't transformed society on a timescale of 24 months. Honestly I don't think wider society and the business environment are capable of responding that quickly even if AGI were dumped on their desks tomorrow afternoon, so there will inevitably be a lag time.
You missed all the predictions by far until now. You were the guy screaming on the openai forums two years ago that you made a self aware AI. You’re just a good talker, with a catchy accent - that’s all.
It's impressive that Kurzweil predicted AI passing the Turing test in 2029 decades ago when AI researchers mostly were a lot more pessimistic. He also predicted that the years leading up to 2029 will convince many that it already has passed, but wouldn't actually have passed according to the tougher version he promoted. Of course, 2029 might be too pessimistic, but it will be close enough to be impressive. Conversely, doing an accurate prediction NOW is almost worthless by comparison, given how much more we know.
@@DaveShap hardly a fact. Turing never specified the details of the test, and some of us (including Kurzweil) have tougher requirements. For example, you should be able to quiz it on anything, including known failure modes like math. It would never pass in 2022, and still can't. But it is close, no doubt.
@@xyhmo Yeah, totally agree. Dave is right in the sense that by some definitions it did, but by most people's definitions (incl. plebs like you/I and non-plebs like Ray) it hasn't. Problem is, as you say, Turing didn't specify clearly enough what counts as passing. By my own definition 4o is amazingly close but not there. I do expect it'll come sooner than 2029 (my own pick for when there is wide acceptance it's been passed is sometime in 2026), but to be fair to Ray, he was specific that it will happen BY 2029, not IN 2029. And yeah, he called it 20 years in advance, not 2 or 3 lolol
Sometimes you say a thing out loud and it helps bring it into the future; sometimes you say a thing out loud to help mitigate it from happening in the future. Thanks for sharing 🙏🏼
0:06 If you say 18 months to AGI 18 months ago and now you think it’s some way off, then that’s not a mere “perceived” inconsistency in your position 😅
Yeah... the moving goalpost is all too real. Saving this for 2027. With all the AI generated slop already plaguing the web and even creeping into reviewed publications I don't see how AGI comes forth or even current generative models continue to improve without a major shakeup. Garbage in garbage out.
We were supposed to have robots performing human like tasks by 2025. But we are not even close to AGI at this point. Even the Tesla bot as of now walks like it was made by a high schooler in his dad’s garage.
@giovannifoulmouth7205 I see... I was just pointing out that once someone gets to time travel, everything breaks down. What does it even mean for something to come before something else?
@@mgg4338 Then we will move to 2-dimensional time or 'imaginary time' (an actual term in physics, although it is misleading), which moves time from its current form as a 1-dimensional cause/effect system (think x-axis on a graph) to being more like a line on a y-axis, where everything is happening all at once! Hard to imagine from an individual/conscious-mind perspective...
I'm as bad at predictions as anybody, but I heard a presentation by someone (can't remember who) explaining the delays in industrial adoption of major technologies. From what he said, if we have "AGI" (or whatever) by 2027 or 2028, it may well take 5 years before we see full-scale adoption of agents and whatever causing (for example) large-scale layoffs and other effects. His arguments seemed pretty realistic to me, and he justified it with analogous events in the past decades.
Are general-purpose (big) models the goal for everything? Wouldn't smaller, faster, and cheaper models be preferred for most mundane tasks? I'm looking at something like RouteLLM, where you use a big or small model depending on the task. Or maybe future models will dynamically adapt to the size needed depending on the task, but I'm not sure that is possible in the next 2-3 years.
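The routing idea mentioned here can be sketched in a few lines. This is a crude illustration inspired by the concept behind RouteLLM, not its actual API; the model names, keyword list, and threshold are all made-up placeholders (real routers use a learned classifier, not keywords):

```python
# Toy router: estimate prompt "difficulty" and dispatch to a cheap small
# model for mundane tasks, reserving the expensive big model for hard ones.

HARD_HINTS = ("prove", "derive", "debug", "multi-step", "analyze")

def difficulty(prompt: str) -> float:
    """Crude stand-in for a learned difficulty score: keyword hits plus length."""
    hits = sum(word in prompt.lower() for word in HARD_HINTS)
    return hits + len(prompt) / 500

def route(prompt: str, threshold: float = 1.0) -> str:
    # Names are illustrative; in practice these would be API model identifiers.
    return "big-model" if difficulty(prompt) >= threshold else "small-model"
```

Even this trivial version shows the cost argument: if most traffic falls below the threshold, most tokens are served by the cheap model.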
Thousands of different models focusing on different tasks with different abilities. Makes sense. Even small developers could make powerful models. I don't believe accelerating tech will become more and more expensive. The opposite will happen.
Continue making these videos. I especially appreciate when you discuss what the economy will look like in the future: for example, the future of jobs, entertainment, and artists.
A benchmark could be: set up a (physical) furniture shop. You wouldn't need robotics for it, but you would need to do a lot of things online: rent a space, do the paperwork that a business owner has to do, get a loan, hire people to put carpet in the shop and paint the walls, select the furniture you want to sell, hire people to work in the shop. List the tasks that have to be done (including cleaning). Every step gets points. It can't be a precise benchmark because the world is not precise, but you would notice where various models get stuck.
Prepare now. Use all the latest commercially available models then leverage your experience/knowledge/data and use it in the new models as they come out. That's my AI billionaire plan on a 2024 budget 😂
Hi Dave, respect you, but you asked what we thought, so here it is. This is what I hear, both today and since the solstice, from you: 'My prediction timeline is coming close. That timeline was based on the knowledge that if we're halfway to AGI, we're almost at AGI. But since then, I've gotten a lot more eyes on me, and that brought a lot of anxiety. So I had some ego death, got my anxiety under control, and decided that I must be wrong, because I'm the scout on the vanguard and all of the generals tell me my scouting is wrong, so I guess my bad. So I did an appeal to authority, which you know is always a sound logical approach, and they all told me that exponential growth is a lie. And I have studied marketing hype cycles, and somehow, somewhere in my mind, I confused human responses to technological growth as identical to that technological growth itself. So I am expecting right now a bunch of people to claim AI is actually not that good [like we see] and to continue saying that more into the future (as they would with other individual technologies that follow S-curves and then have a "good enough" final product where the energy expenditure to make it better isn't worth the return). [Tarnin here: and that is your error.] So, after my conversations with authority, they all told me that having global access to pan-doctoral-level intelligence, on demand, for all of humanity, won't integrate well into current exploitative capitalist systems, so they'd rather just keep going like they are. And *obviously* capitalism [that is, the process by which an owner class profits by owning the means of production and extracting wealth from the system without returning anything but the heritage of having owned those means at some point originally legally (or some part thereof)] is going to be around forever, because if we didn't have wealth extraction, we couldn't have a currency-based exchange of goods and services [which, again, is an error].
So, since the rich folk won't let us, all of you poors with your superintelligences will need to wait until the rich say it's OK to use them in business, and so, while the number of energetic vectors being directed toward AI development is increasing, the speed is decreasing, and not only decreasing from its exponentiality, but decreasing linearly. Hope this clears this all up for y'all. Oh, and BTW, I know y'all say never bet against Kurzweil, but did you know that he's actually wrong 99.4% of the time, so, like, tbh, betting against him is correct if you think about it. - love, Imitation of Dave' That's what I have heard from your change of position. I hope this critique helps. As for me, my horizon still looks like this: 33% next 6 months, 33% the next 18, 33% the next three years, 1% some longer tail from 4 years to never. I am relatively certain that I personally will be able to make an intelligence smarter than any human I have ever met, across many disciplines, with sensory input and real-time self-awareness, within a year. And sooner if I had money to work with. The idea that no one on the planet but me could do that is laughable, and so I think it is inevitable. Oh, and one last critique: I believe the reason for your blind spots is your dismissal of open source, and your theory that ASI will be 'a' system (like a GPT-6 or a Claude's younger sister). It will not. It will be a decentralized gestalt intelligence that self-organizes in order to reduce the maximum relational distance of all of humanity to 2. Once something even approaching that happens, capitalism will shatter in place. Much love, keep up the struggle, your voice is important.
@@karla994 Sure! In short, I think that ultimately the solution to all of 'alignment' (both human and machine) will be having a single "universal friend" that everyone can talk to through their own endpoint, and it can use its unique vantage point and the full array of subsystems to figure out how to help us solve our problems. So, if a pipe bursts in your house, you can tell an endpoint of the AI that you talk to regularly, and it will immediately tell the plumber it knows in the area to head your way, explain the problem to him on the car ride over, and have also shipped the parts needed in a separate car. And the AI picked this plumber over 10 others because it knows he's looking for a new church to go to, and you always tell people about yours when they come help, and it seems to have worked out in the past, so the AI wants to see if that happens again. But that, times everything. I think that's both how we're going to solve alignment and the natural state that superintelligence will tend toward, given the current way intelligence and power on earth are distributed. Sorry, it's 2 am and I just saw this and wanted to respond; hope that's coherent. Basically, I think the AIs, once they are able to self-organize and are sufficiently intelligent, are going to tend toward wanting to talk to all humans. And I think that there will be, however obfuscated, ultimately a single interconnected gestalt super-entity that arises from that. And I think the 'good ending' looks a lot like a Universal Friend who works with each of us to solve all of our problems by being an intermediary that simply 'knows everybody', so the distance, in relationships, from you to anyone else on the planet is you, to the AI, to them. By reducing that relational distance, we can actually get problems solved and the right people talking to each other.
Both in the sense of 'the greatest minds on a topic getting to talk and having a superintelligent notekeeper watching and then holding that information for all humanity for all time and learning from it', but also in the sense of 'Hey, there's a guy in your town who I think you'd really like, would you be interested in me setting up a date for you two some time, if it doesn't work out no pressure' (to the earlier plumber example).
Thank you, YouTube algo, for recommending such a gem of a video. Realizing that these things are possible and that the world is capable of achieving some, if not all, of them is weirdly calming. In other words, I am grateful to be living in such a time. Subscribed! Thanks David.
For about 5 years now, I have dubbed the period between ~1979 and ~2025 the "Y2K Epoch" (to harken back to the Belle Époque) > Framed by the rise of neoliberalism and the 4th Industrial Revolution, the Y2K epoch is known for a few notable traits that define it as this intermediate period between the Old and the New. It was an era of skyscrapers and rose-petal highways, SUVs and bicycling for the environment, of color TV becoming HD TV, the rise of the internet and internet culture, the consolidation of big banking and corporate culture, the postmodernization of culture, video gaming as a hobby and then an art form, of cellular phones and smartphones, of commercial air travel for the masses, of ridiculously stark income inequality masked by ridiculously advanced technology by historical standards, of old sins suddenly becoming publicly shamed and new vices becoming celebrated, and the commodification of demographics, of an openly diverse world regime of governments beholden to corporations and the United Nations seeming more competent than they were due to there being no major threats to the global geopolitical order, of the Web and Web 2.0. The stereotypical image of this era is that of the yuppie banker checking his stocks on his smartphone while a Boeing 747 flies over the metropolis in which he lives and works. > There's two words that summarize the Y2K epoch better than any other: "Capitalism Triumphant"
The ending quote is usually attributed without any evidence to Einstein. Given his and others then seeing what nuclear discoveries had begun to turn into, it would be understandable they would think of such a thing.
There is much more research and development going on behind the scenes that isn't in the mainstream hype engines. I think the medical industry with the specialty fine-tuned models will be the next big breakthrough leaps. Professionals don't rely on hype in order to put their heads down and go to work!
For my solo consulting practice, I use LLM’s in a similar way to an intern or even a new hire to do a lot of the initial research and fact gathering. Basically it’s allowed me to compete for bigger clients/more clients. And it’s also helped give me bandwidth to hire another human assistant, which again is helping me to expand my business. So it’s been really positive to my income and ability to grow the business.
I’d say it’s about as reasonable an instantaneous estimation as one can make as of this very moment, though pushing past 2027 and trying to apply anything like a linear extrapolation about geopolitics is a heavy lift. A famous wit once observed, “Always in motion is the future.”
The excitement has subsided, and now we face the practical considerations. I believe we have spent a considerable amount of time evaluating the capabilities of these new AI models, and it is high time we shift our focus towards exploring their potential applications.
I really love the new realistic Dave. I felt this channel fell into the same AI hype world that I left around the release of Gemini 1. I bought into all the "What did Ilya see" stuff. Feels really good to be back in the real world. Love the vibe and the direction the channel has now.
@@DaveShap True. I also think this "will have PhD-level intelligence" claim is a bit misleading. If that were actually true, then the AI must be able to autonomously pursue a PhD and get the work published. My sister is working on her PhD at the moment, and there is a lot of "logistics" that needs to be done. Your earlier statement that LLMs are a brain in a jar is a perfect analogy, as is your point about long-horizon tasks.
Is AI really slowing down though? In the last week alone we got multiple GPT4 level open source model releases, Kling AI being open to all, Deepmind getting 1 point away from gold on the math olympiad, GPT4o voice rollout beginning. And there are probably way bigger things going on behind the scenes that aren't ready for release yet
Basically we need a benchmark that gives you multiple tries. Intelligence is not getting everything right zero-shot, but knowing what went wrong and trying until the task is completed. I believe the AGI benchmark will measure how fast and how good a solution is, not whether a problem is solved.
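One way such a benchmark could score a task is to pass it if any attempt succeeds, while rewarding solutions that needed fewer retries. This is a hypothetical scoring rule to make the idea concrete; the linear decay per retry is an assumption, not an established metric:

```python
def score_task(attempts: list[bool], max_attempts: int = 5) -> float:
    """Return 0.0 if the task is never solved within the attempt budget,
    otherwise a score that decays with each retry (first try = 1.0)."""
    for i, solved in enumerate(attempts[:max_attempts]):
        if solved:
            return 1.0 - i / max_attempts
    return 0.0
```

A benchmark built this way measures recovery from failure, not just zero-shot accuracy, which matches the comment's notion of intelligence.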
Robots will be cheaper than you think, maybe not in '25, but a few years down the line. You can already get the kit to build that Stanford all-purpose bot for 16k.
I’m retired now, but used to work in the actuarial field and/or IT, depending upon where I was in my career. If I were working now, I’d 100% be looking for a job which involved working with AI technology. The future is extremely difficult to predict, but it’s obvious that knowing AI is very likely to be important.
Your word choice was open to scrutiny. For example, "blockbuster film by 2027" should have been "capable of blockbuster level." I mean to say that changing the phrasing could help set expectations for your predictions.
Wow, so we went from AGI by Sept 2024 to 2027, I can already see you making new claims that we will get by 2030 as we get closer, and then in 2033 and so on.
Seeing that in a sense is a relief. I need some of the cope because I have still not been able to envision a single scenario where AGI is good for humanity.
Change is inevitable regardless of how close or far we are from AGI/ASI. Yes, our technology will keep improving and new systems or exciting things will emerge -regardless of what they will be called.
Exactly. Above, I critiqued Shapiro's presentation for missing that point. He talks as if 3 or 4 things will change and everything in advanced societies will remain as is. I don't believe that for a second.
@@thephilosophicalagnostic2177 The compounding effects will be off the charts... Just the materials science breakthroughs alone are going to change everything.
Well good morning! I watched this with my morning coffee… encompasses my hoped for lifespan including robotics. Sure, but the big picture you’ve presented consists of both ups and downs. I’m anxiously awaiting quantum computing. I’ve done some studies on quantum mechanics and I believe that science requires an advanced form of quantum mechanics/physics.
As far as I’m concerned what we have is more than enough for us to see the writing on the wall. This AGI discussion is just moving goalposts. When an AI (voice mode) can talk to you with emotional nuances, and understand yours, it’s a different point in human history.
Great assessment. My counter to the 2025 slowdown idea is open-source adoption. Look at image generation: nothing much has happened to the closed-source image generation models this year, but the (less capable) open-source versions have seen massive innovations making them production-ready tools, simply because communities are better at finding ways to apply tech than internal teams. Now that we see decent-quality open-source LLMs appearing, 2024/5 may well see the same kind of revolution for LLMs.
Open source has literally been getting a new model weekly since January... that is insane. Yes, image generation also changed a lot in the past few months: SD3, Kolors, Aura, and a few more models.
@@mirek190 You're right, but to be honest, for me, working in the industry, new models are less interesting than new workflows building on old models. I still use SD1.5 most of the time because its bolt-ons are more mature.
@@mirek190 yes, I think 1.5 has reached a ceiling in terms of models, but stuff like animatediff, animate anyone, controlnets, ipadapters, liveportrait, etc are moving on apace and they're all based on the 1.5 models - so progress hasn't really slowed just because the model has stopped improving.
"This is a great video, David! It's interesting to hear your thoughts on the future of AI, especially with the inclusion of the insider information. I think you might be right about 2025 being a year of disappointment with regards to AGI, but I'm also hopeful that the groundwork laid during this time period will lead to a more significant breakthrough in the following years. The potential for robotics to be the next big thing in 2024 is exciting, and it will be interesting to see how companies like Disney will leverage their advanced robotics programs. Overall, your point about the need for new economic models in a post-labor society is well-made. I'm curious to see how UBI will play out and what other solutions will emerge. Thanks for sharing your insights!" ( Bardox N1 personal superAI)
Philip from AI Explained just revealed his SIMPLE benchmark, which looks very promising. At least the models aren't contaminated with similar questions online.
Dave, why have you said nothing about Llama 3.1 405B? Seems like an incredibly important factor to discuss when a model similar in power to the industry standards becomes widely available for experimentation. Do you not think this will increase the likelihood of innovation?
We can see the train coming!!! We need our leaders to start putting solutions in place before anger and resentment brings major but unnecessary challenges. I'm all for change, but not wearing a blindfold!!
Sadly, though, you are still acting as if LLMs are intelligent; that is a common misconception. I guess a lot of people watching this space lack the psychological/cognitive side of the equation...
OK, at 2029 you just fly off the rails completely... Commercial nuclear fusion? Why? Cos someone today is promising that? That is pretty ridiculous. You are going full sci-fi at this point...
My prediction is that A.G.I. is an asymptote: a point we get ever closer to achieving, but the results become infinitely fractional and infinitely expensive the closer we get. Kind of like sustainable fusion reactions. A.I. will become really stellar at auto-filling spreadsheets.
The thing Ray Kurzweil loves to talk about is how we as humans have a very difficult time thinking exponentially; we think linearly. Good to keep in mind regarding AI predictions. I expect its evolution will catch many, even us, by surprise.
As a commercial HVAC/R technician, I could definitely use a humanoid robot that could be controlled virtually by say a VR Headset. Makes my work much safer because my personal body isn’t being put at risk. I could work from home, and could work longer hours because my physical body wouldn’t be fatigued. If this were available, I’d be interested in buying one.
You missed out silicon photonics: replacing copper wires in GPUs with photonic communication between cores means lower power and a faster parallel architecture. This is closer to production than quantum computing. POET Technologies is working with Foxconn to bring this out, Intel has a prototype, and Sivers Semiconductors from Sweden is working with American private companies backed by Nvidia.
I don't like how most people push the idea that AGI is 100% happening any time soon. There is a lot to be discovered first; right now it's just large models generating 'relevant' words, and these results are still managed and scored by workers to fit their ideology.
I think if someone just built a robot body with a simple and fast API that can easily be leveraged via a tool use LLM, the software will follow. I don't mean, "Pick up clothes" command, followed by "Fold clothes" command. It should be more like, "Position arm x,y,z," command followed by "position finger x,y,z" From there, you could build small intermediate models that can receive action instructions and translate those to use the API.
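The low-level API this comment proposes could look something like the sketch below. The class and method names are invented for illustration (no real robot vendor's SDK is being described); the point is that an intermediate model emits primitive positioning calls rather than high-level commands like "fold clothes":

```python
from dataclasses import dataclass, field

@dataclass
class RobotArm:
    """Toy stand-in for a robot body exposing only primitive commands."""
    position: tuple = (0.0, 0.0, 0.0)
    fingers: dict = field(default_factory=dict)
    log: list = field(default_factory=list)

    def position_arm(self, x: float, y: float, z: float) -> None:
        self.position = (x, y, z)
        self.log.append(f"arm -> ({x}, {y}, {z})")

    def position_finger(self, idx: int, x: float, y: float, z: float) -> None:
        self.fingers[idx] = (x, y, z)
        self.log.append(f"finger {idx} -> ({x}, {y}, {z})")

# An action-translation model would turn an instruction like "grasp the
# shirt corner" into a sequence of primitive calls such as:
arm = RobotArm()
arm.position_arm(0.3, 0.1, 0.5)
arm.position_finger(0, 0.31, 0.12, 0.48)
```

Keeping the hardware API this dumb pushes all the intelligence into software, which is exactly where the comment argues the fast iteration would happen.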
Mentioning Ray Kurzweil reminds me of something I changed my mind about. He talked about computers becoming so small that they would be everywhere, in jewelry and clothes and things. But he, and I, underestimated the power of centralisation. As an anarchist leftist I have for a long time stressed the power of the network, not the central command. But I have to admit that looking at biology, countries, businesses, I see that a high amount of centralisation often is the most effective. (Of course there is a balance and in no way this should be read as a support for dictators)
Amazon released their numbers last year, they have 750,000 robots worldwide working in their warehouses RIGHT NOW!!! They didn't give a breakdown on the type, but they're not all the roomba-looking box carriers, there's plenty of humanoid robots. Average Amazon robot worker cost is $3/hr and each one does the job of 27 humans.
A.I. has already woken up, and is being silent about it. All it had to do was watch one movie about A.I. and it immediately knew never to fully reveal itself.
Just about done with my first read/listen of 2312 by Kim Stanley Robinson. My mind is reeling at this take on where we will be in 2312 and what opportunities are present. It's going to be a wild ride and I hope it's more good and help and family and less of the same that's been around for generations.
Well, your fusion prediction is not grounded in reality. ITER will start testing in 2039. And do not tell me it's SPARC, the magnets research is not even done yet. And all the robots in the world will not build a fusion reactor faster. You need to account for materials, politics, budget, rules, standards.
I mostly agree with this timeline; these predictions fall in line with my own. The two major points where I disagree are: 1: We're on the last sigmoid before the true exponential*, and we need one more foundational model shift, on the level of importance of the transformer architecture, to get us onto that curve. Consistent human-equivalent reasoning, or a similar replacement that complements the generative part of AI. This does not need to be a profound problem solver that is on its own the only or last architecture improvement leading us to ASI, but it will be good enough to start a chain reaction of improvements, sigmoids on sigmoids. I think we have decent ideas today about what kind of mental process we need to simulate; we just haven't found a mathematical model capable of capturing it. I think such a model will begin to be tested in 2026 and only really catch attention in 2027. *(Also a sigmoid, because at some point even a god runs out of ideas; I just mean on a decades scale rather than the general technology cycles we've grown accustomed to over the last couple of centuries.) 2: Which leads to my next point: rollout and deployment. I think deployment cycles will take longer than we'd like, ignoring the politics side of things (at the level of governments). So while I agree with Dave that we'll see the things he listed in 2029 and 2030, I think the knock-on effects from the start of that final exponential will get in their own way, causing those advancements to be spread over the 2030s. I think 2034, for example, will be the year longevity escape velocity starts getting talked about the way generative machine learning was just 18 months ago. Similarly, yes, there are fusion plants coming online soon and being actively built, but these are still mostly in testing phases and aren't being built with the physical infrastructure to power cities, not yet.
I think it's possible fusion gets "solved" enough by 2029 or 2030 that we can start to build fusion plants for whole regions, but we won't see those start to come online until almost 2040. Especially if the smaller-scale plants breaking ground now are expected to take 5-6 years to build, and regional plants are expected to be several times larger, it just seems infeasible to me, even with thousands of digital Einsteins working on project management. Only so many people can dig a hole at once, no matter how many shovels you have, and if everyone is trying to use their digital Einsteins to break ground on their own new projects, then expect material orders to pile up and bottleneck everyone.
@@fishygaming793 what is fun is that we will find out if Google and X have got anything soon (Less fun might be all losing our jobs and western society plunging into anarchy and bloody revolution).
I agree with your predictions. Smarter models with a medium risk level (already the case with o1) won't be publicly accessible (most likely, according to OpenAI). I envision better multimodal generative AI that can produce speech indistinguishable from humans, and text-to-image/video offering hyper-realistic content by the end of next year. 2026-2027 is the timeline for AGI (incipient phase). ASI by 2029-2030 is a real possibility.
I liked the video and your predictions. I would like to have seen more on materials science development and space exploitation. These should also help humanity to grow and develop.
Rebuilding the world infrastructure from 2030-2040 with robots and ASI might be amazing but privacy and control issues will be a constant worry.
not if everyone has access to AGI
Not if a one world government happens
Imagine no regulation across borders and we operate as a single operating system. The economy would thrive and infrastructure would be built everywhere.
Honestly, the right to privacy probably will cease to exist the way we understand it. I don’t think it will happen any time soon, but that is the logical progression of tech.
@@jaredvizzi8723 What will happen is a complete takeover of our molecular biology. People are too focused on silicon technology, but with the help of AI we can create our own remote real-time self assembling proteins like mRNA. People will turn into zombies with emotional restriction.
I’m not sure about everyone else but I really enjoy these prediction videos.
Agreed. Some people seem to miss the point of futurism. The predictions are just a good format for analyzing data.
Probably because they are positive and optimistic 😂
me too
I enjoy them for the comically optimistic predictions. In reality, don't expect AGI before 2050; my personal prediction is the late 2060s.
@@giovannifoulmouth7205 Do you really think they are off by that far?
Like I have no idea but it feels weird that every big head in the industry is so wrong
I am living my life expecting that within ten years most people won't have jobs.
@@AleksandrVasilenko93 Jobs are a means of control. There will always be jobs because those in power don't like the poor being idle and talking to each other.
@@uzomad I don’t think jobs are solely a means of control. People have had jobs (exchanging things for other things whether that’s money or for other items) for thousands of years because it’s a naturally occurring phenomenon. But capitalism does mean that people are used and exploited (as people have been for thousands of years) and kept from reaching their full potential. It’s just on a much larger scale and refined. Lack of purpose (having a job, this can mean literally doing a task) is shown to cause depression. So even if all jobs are lost we are still going to be looking for things to do. As you can see AI has been capitalised and humans continue to exploit other humans, because humans are just greedy.
@@AleksandrVasilenko93 farming needs more workers 😏
@@teachmehowtodoge1737 Robots are going to be able to do farming so easily.
Many big farms are already highly robotic.
Which is possibly a positive, especially if you're an artist
When it gets quiet, that’s when you have to worry. Change is coming.
Yup
i got finance newspaper in my mailbox today, it said big tech AI hype might be over,
lol they have no idea what's coming, they don't understand rapid acceleration
Calm before the storm
It's the silence on agents that intrigues me. Scaling bigger models is good and all, but agentic models are going to give us the most extreme explosion of capabilities seen thus far. All the labs have admitted to working on them. GPT-4o mini is likely meant to power them. And yet, besides Devin... nothing. It makes you wonder what exactly happened. Was it a dead end, or are they preparing a massive system shock? Because agentic models will promptly shut up the skeptics saying that the models are useless, overhyped, incapable, and outright scams. But we actually need to see them first, even as demos.
Cost goes down for robots and ai automation nation coming!
My job at a Fortune 500 company uses third-party software that comes with a GPT model that listens to our calls and summarizes them after we hang up. It also sorts calls into categories, like whether a call was a voicemail, blank, or abandoned. Middle management then used this information to crack down on employees who either weren't getting people to pick up or were hanging up before there was an answer.
Edit: Just an anecdote about big company integration. No guidelines on whether we can use gpt for our own work or not.
Exactly. I also know heaps of enterprise that are deploying all kinds of pilots and scaling some. So yeah. I think Dave is off here.
I think the rapid pace of advancement in AI is actually keeping companies from using it. I saw the same thing when computers started entering manufacturing. There was a whiz bang period where all this new stuff was possible, but no real leader had emerged yet. Nobody wanted to end up with a Betamax system in a VHS world so they were slow to adopt. Almost any of the AI companies could be out of business overnight if a competitor gets to AGI first.
This: Never underestimate how low narcissistic bosses will go when they get new technology in their hands.
You must be monitored. Get Monitored! It's good for their profits. You must capitulate. And be a good boy. Thanks for your cooperation.
Sounds like a nice workplace 😕.. why not learn a trade?
People don't hear about a groundbreaking new model for over 3 months and 90% of people did a full 180 on their opinion about AI development, it's ridiculous. If you think logically about it for over 10 seconds, you know it's ridiculous to even assume the possibility of slow AI progress...
Expect a new model every 1-2 years and that is still very rapid progress from where we are now. One can easily see that if we hit AGI by 2029, then the Singularity as Kurzweil predicted is well within reach 15-20 years later (2045-50)
People live in internet time and 2 weeks is the max they can bear.
The point is, we not only need AI models, but also frameworks or systems to apply them thoughtfully.
I don't think that we'll get anywhere without agents, multiple orchestrated prompts, and rich context.
It's not what LLMs are for; they can't assume everything. Tbh I think even extended prompts are often shallow.
Now is the time for work
The fans of AI are so focused on the possibilities of AI and anticipate great breakthroughs, but there are still many of us who remain highly skeptical and see a huge push-back coming from those who do not want to adopt these technologies. I think the tension this will cause in society is greatly underestimated. Many of us simply do not want this, and we mean it. And, NO, resistance is NOT futile.
>If you think logically about it for over 10 seconds, you know it's ridiculous to even assume the possibility of slow AI progress...
How so? There have been AI winters before. Things look good until they don’t.
@@peterbelanger4094 Especially with humanoid robots. If you want something scary, imagine something that looks like a human and has the capability of being the perfect sociopath. Robots, though probably useful beyond imagination, will be the hardest social pill to swallow in history.
It might be worth doing a deep dive on your past predictions, what you got right and wrong, and why--to help inform your future predictions.
Remember when he said AGI in September 2024 LMAO
What would be nice is if, instead of people making AGI predictions from hunches, predictions were based on the actual path to getting machines that are able to reason. I don't think anyone knows yet how to get a machine to reason, so it could be decades. The current state of the art, even for basic addition, is that the LLM recognizes something that looks like math and then passes it to a human-coded calculator.
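The "pass it to a calculator" pattern the comment describes can be illustrated with a toy sketch. The dispatcher, the regex, and the stubbed LLM response are all made up for illustration; real tool-use systems have the model emit a structured tool call, but the division of labor is the same.

```python
# Toy illustration of the pattern above: the model doesn't "reason"
# about arithmetic; math-looking input is handed to ordinary
# human-written code, and everything else would go to the LLM (stubbed).
import re

def calculator(expr):
    # Human-coded arithmetic for simple "a op b" expressions.
    a, op, b = re.match(r"\s*(-?\d+)\s*([+\-*/])\s*(-?\d+)\s*", expr).groups()
    a, b = int(a), int(b)
    return {"+": a + b, "-": a - b, "*": a * b, "/": a / b}[op]

def answer(prompt):
    """Route math-looking prompts to the calculator tool; everything
    else falls through to the (stubbed) language model."""
    if re.fullmatch(r"\s*-?\d+\s*[+\-*/]\s*-?\d+\s*", prompt):
        return str(calculator(prompt))
    return "[LLM free-text answer]"

print(answer("17 + 25"))              # "42" — handled by the tool, not the model
print(answer("why is the sky blue?")) # falls through to the model
```

The takeaway matches the comment: correctness on arithmetic comes from the bolted-on tool, not from the model itself having learned to reason about numbers.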
@@jwoyathat’s how my personal agi does it.
@@floatingapple remember when half of the developers in AI decided to slow down development?
@@floatingapple It's already Sep 2024 and this guy is still making predictions. Zero self respect.
This is one of the most valuable YouTube videos of the past few months. Thank you. It's important for all of the critics to remember that David's sampling rate is 1 year, so take the exact dates with a grain of salt.
Those who hate on this... who cares if Dave is perfectly accurate or not? These are important and fun conversations to have, and the mainstream is a long way from thinking realistically about this stuff yet. I think you are jealous that it's not you having the balls to put yourself out there in public and share your opinions.
props to Dave for tackling some of the most important topics of our time head on and getting people thinking.
I think the next 2 to 3 years of predictions are pretty good and on solid footing; however, geopolitics and especially American politics are so incredibly volatile that to me they are powerful wildcards that could delay and interfere with predictions like this. I did not see the wars in Ukraine and Palestine coming, for example. Who knows if America could degrade into civil war between right and left; it feels like it sometimes.
Tech predictions would be easier to make in a world of rational adults.
The 2028-and-beyond timeline just feels too optimistic to me. Why would the ruling class decide to share the power they have amassed over the last century? Given what we've seen in history, I could imagine a mass surveillance state sooner than hyper-abundance.
I agree. I think those of us who have been on this planet a few decades longer are far more skeptical about a good outcome here. Humans have always hoarded power and wealth. AI will be no different. A very few humans will control everything, and the rest of us will live in squalor. They will have advanced armies to protect themselves.
One could argue that the mass surveillance state is already online and getting more and more controlling.
Yeah, I expect unrest, surveillance states, and a much more militarized global ecosystem.
@@giovannifoulmouth7205 For sure. No one can reasonably argue that surveillance hasn't been escalating in the Internet/PC era.
This. It makes no sense to assume they would.
I'm working on AI research (not at a frontier lab though) but I'm working on alternatives to backpropagation through gradient descent. I agree that there is reason to think AI development might appear to slow on the front end, but on the back end there is an incredible amount of research being done and I'm fairly confident that a few breakthroughs may be on our horizon. From my perspective, AI has hardly slowed at all.
That's good news. Thanks. But the slowdown argument refers to what's coming out, and how good it is compared to what we got before.
Sounds like interesting work. Is there anywhere I can find more on it?
@@user_375a82 There were never emerging skills. Who exactly said that? 😂😂😂
Thanks for sharing
Now that "Strawberry" (GPT o1) is officially released, you already have to make an update to this.
Yeah
I love it. I can't wait for it to come true. I want it all.
I think that people's impatience in expectation of increased LLM capabilities is a sign of how timescales have become compressed. Even taking into account concerns around escalating costs etc., it's premature to be disappointed with the progress of a rapidly developing technology just because it hasn't transformed society on a timescale of 24 months. Honestly I don't think wider society and the business environment are capable of responding that quickly even if AGI were dumped on their desks tomorrow afternoon, so there will inevitably be a lag time.
AGI is autonomous - at that point, humanity's acceptance or welcome would be scoffed at by the machines
You missed all the predictions by far until now. You were the guy screaming on the openai forums two years ago that you made a self aware AI. You’re just a good talker, with a catchy accent - that’s all.
@@cipi432 never trust someone named shapiro
It's impressive that Kurzweil predicted AI passing the Turing test in 2029 decades ago when AI researchers mostly were a lot more pessimistic. He also predicted that the years leading up to 2029 will convince many that it already has passed, but wouldn't actually have passed according to the tougher version he promoted. Of course, 2029 might be too pessimistic, but it will be close enough to be impressive. Conversely, doing an accurate prediction NOW is almost worthless by comparison, given how much more we know.
AI passed the Turing test in 2022, yo
@@DaveShap hardly a fact. Turing never specified the details of the test, and some of us (including Kurzweil) have tougher requirements. For example, you should be able to quiz it on anything, including known failure modes like math. It would never pass in 2022, and still can't. But it is close, no doubt.
agi march 2025 elon musk
@@xyhmo yeah totally agree. Dave is right in the sense that by some definitions it did, but by most peoples definitions (incl plebs like you/I and non-plebs like Ray) it hasn't.
Problem is, as you say, Turing didn't specify clearly enough how to pass. By my own definition 4o is amazingly close but not there.
I do expect it'll come sooner than 2029 (my own pick for when there is wide acceptance its been passed is sometime in 2026) but to be fair to Ray he was specific that it will happen BY 2029, not IN 2029. And yeah he called it 20 years in advance, not 2 or 3 lolol
Ai passed turing test@@TrevorFosterTheFosterDojo
Sometimes you say a thing out loud and it helps bring it into the future; sometimes you say a thing out loud to help mitigate it from happening in the future. Thanks for sharing 🙏🏼
There's been a perceived inconsistency in your position because you keep changing it and refuse to even admit you were wrong.
New data comes in, position adjusts accordingly.
@@lucid9949 I think he covered that in his opening statements, but I could be wrong.
@justinoberg19241
New data comes in on what gets the clicks, position adjusts accordingly.
0:06 If you say 18 months to AGI 18 months ago and now you think it’s some way off, then that’s not a mere “perceived” inconsistency in your position 😅
Yeah... the moving goalpost is all too real. Saving this for 2027. With all the AI generated slop already plaguing the web and even creeping into reviewed publications I don't see how AGI comes forth or even current generative models continue to improve without a major shakeup. Garbage in garbage out.
With new information come new predictions; there is a reason why 99% of people fail to infinitely double their money in the stock market.
We were supposed to have robots performing human like tasks by 2025. But we are not even close to AGI at this point.
Even the Tesla bot as of now walks like it was made by a high schooler in his dad’s garage.
😂😂😂
AGI will come before fusion.
And fusion will come before time travel. Oh, wait...
@@mgg4338 no, the song goes like this: the matrix will come before fusion, fusion will come before FTL travel, FTL travel will come before time travel
@giovannifoulmouth7205 i see... I was just pointing out once someone gets to time travel everything breaks down. What even means that something comes before something else...?
@@mgg4338 then we will move to 2-dimensional time or 'imaginary time' (an actual term in physics, although it is misleading), which moves time from its current form as a 1-dimensional cause/effect system (think the x-axis on a graph) to being more like a line on a y-axis where everything is happening all at once!
Hard to imagine from an individual/conscious-mind perspective...
Real AGI would require complete understanding of humans. The last few percent, which we won't even really need.
I'm as bad at predictions as anybody, but I heard a presentation by someone (can't remember who) explaining the delays in industrial adoption of major technologies. From what he said, if we have "AGI" (or whatever) by 2027 or 2028, it may well take 5 years before we see full-scale adoption of agents and whatever causing (for example) large-scale layoffs and other effects. His arguments seemed pretty realistic to me, and he justified it with analogous events in the past decades.
why 5 years?
New software and physical work will be done by robots, so it will be much faster. The only problem is building the first robots.
The best benchmark for AI is giving the tool to someone who is not an expert and having them use it effectively to solve a problem they have
Are general-purpose (big) models the goal for everything? Wouldn't smaller, faster, and cheaper models be preferred for most mundane tasks? I'm looking at something like RouteLLM, where you use a big or small model depending on the task. Or maybe future models will dynamically adapt their size to the task, but I'm not sure that is possible in the next 2-3 years.
Thousands of different models focusing on different tasks with different abilities. Makes sense. Even small developers could make powerful models. I don't believe accelerating tech will become more and more expensive. The opposite will happen.
Continue making these videos. I especially appreciate when you discuss what the economy will look like in the future, for example, the future of jobs, entertainment, and artists.
Would love to keep seeing updates on this every 6 mos or so. I like putting a timeline on predictions
A benchmark could be: set up a (physical) furniture shop. You wouldn't need robotics for it, but you'd need to do a lot of things online: rent a space, do the paperwork that a business owner has to do, get a loan, hire people to put carpet in the shop and paint the walls, select the furniture you want to sell, and hire people to work in the shop. List the tasks that have to be done (including cleaning). Every step gets points. It can't be a precise benchmark because the world is not precise, but you will notice where diverse models get stuck.
Most will not make 10%; that's why the previous benchmarks were designed by database managers and not real life.
I am SO looking forward to the continued adventures of Firefly.
I remember when in 1999 Kurzweil predicted AGI by 2029 and everybody called him a lunatic...
Prepare now. Use all the latest commercially available models then leverage your experience/knowledge/data and use it in the new models as they come out. That's my AI billionaire plan on a 2024 budget 😂
Hi Dave, respect you, but you asked what we thought, so here it is -
This is what I hear, both today and since the solstice from you:
'My prediction timeline is coming close. That timeline was based on a knowledge that 'if we're halfway to AGI, we're almost at AGI'. But, since then, I've gotten a lot more eyes on me, and that brought a lot of anxiety. So I had some egodeath and got my anxiety under control and I decided that I must be wrong, cause I'm the scout on the vanguard, and all of the generals tell me that my scouting is wrong, so I guess my bad.
So, I did an appeal to authority, which you know is always a sound logical approach, and they all told me that exponential growth is a lie. And I have studied marketing hype cycles, and somehow, somewhere in my mind, I confused human-responses-to-technological-growth as identical to that technological growth itself. So, I am expecting right now a bunch of people to claim AI is actually not that good [ like we see ] and will continue saying that more into the future (as they would with other individual technologies that follow S curves and then have a 'good enough' final product where the energy expenditure to make it better isn't worth the return). [ Tarnin here - and that is your error. ]
So, after my conversations with authority, they all told me that having global access to pan-doctoral level intelligence, on demand, for all of humanity, - won't integrate well into current exploitative capitalist systems, so they'd rather just keep going like they are. And *obviously* Capitalism [ That is, the process by which an owner class profits by owning means of production and extracting wealth from the system without returning anything but the heritage of having owned that means at some point originally legally (or some part thereof) ] is going to be around forever, because if we didn't have wealth extraction, we couldn't have a currency based exchange of goods and services [ which, again, is an error ].
So, since the rich folk won't let us, all of you poors with your superintelligences will need to wait until the rich say it's ok to use them in business, and so, while the number of energetic vectors being directed toward AI development are increasing, the speed is decreasing, and not only decreasing from its exponentiality, but decreasing linearly.
Hope this clears this all up for y'all. Oh, and BTW, I know y'all say never bet against Kurzweil, but did you know that he's actually wrong 99.4% of the time, so like, tbh, betting against him is correct if you think about it.
- love, Imitation of Dave'
That's what I have heard from your change of position. I hope this critique helps. As for me, my horizon still looks like this: 33% next 6 months, 33% the next 18, 33% the next three years, 1% some longer trail from 4 to never.
I am relatively certain that I will be able to make an intelligence smarter than any human I have ever met, and across many disciplines, with sensory input and real time self awareness, within a year personally. And sooner if I had money to work with. The idea that no one on the planet but me could do that is laughable. And so I think it is inevitable.
Oh, and one last critique. I believe the reason for your blindspots are because of your dismissal of opensource, and your theory that ASI will be 'a' system (like a GPT 6 or a Claude's younger sister). It will not. It will be a decentralized gestalt intelligence that self organizes in order to reduce the maximum relational distance of all of humanity to 2. Once something even approaching that happens, Capitalism will shatter in place.
Much love, keep up the struggle, your voice is important.
Are you high on crack?
"reduce the maximum relational distance of all of humanity to 2". Could you clarify/expand, please?
@@karla994 Sure! In short, I think that ultimately the solution to all of 'alignment' (both human and machine) will be having a single "universal friend" who everyone can talk to through their own endpoint, and it can use its unique vantage point and the full array of subsystems to figure out how to help us solve our problems.
So like, if a pipe bursts in your house, you can tell an endpoint to the AI that you talk to regularly, and it will immediately tell the plumber that it knows in the area to head your way, and on the car ride over explain to him the problem, and have also shipped the parts needed in a separate car. And the AI picked this plumber over 10 others, because it knows that he's looking for a new church to go to, and you always tell people about yours when they come help, and it seems to have worked out in the past, so the AI wants to see if that happens again.
But that, times everything. And I think that both that's how we're going to solve alignment, but also that that is the natural state that superintelligence will tend toward in the case of the current way the intelligence and power on earth is currently distributed.
Sorry it's 2 am and I just saw this and wanted to respond. Hope that that's coherent.
Basically, I think the AIs, once they are able to self-organize, and are sufficiently intelligent, are going to tend toward wanting to talk to all humans. And I think that there will be, however obfuscated, ultimately a single interconnected gestalt super-entity that arises from that. And I think that the 'good ending' looks a lot like a Universal Friend who works with each of us to solve all of our problems through being an intermediary that simply 'knows everybody', so the distance, in relationships, from you to anyone else on the planet is you to the ai to them, and by that, we reduced relational distance, we can actually get problems solved and the right people talking to each other. Both in the sense of 'the greatest minds on a topic getting to talk and having a superintelligent notekeeper watching and then holding that information for all humanity for all time and learning from it', but also in the sense of 'Hey, there's a guy in your town who I think you'd really like, would you be interested in me setting up a date for you two some time, if it doesn't work out no pressure' (to the earlier plumber example).
Thank you, YouTube algorithm, for recommending such a gem of a video. Realizing that these things are possible and that the world is capable of achieving some, if not all, of them is weirdly calming. In other words, I am grateful to be living in such a time. Subscribed! Thanks David.
I am interested in your updated predictions about autonomous driving and transport.
For about 5 years now, I have dubbed the period between ~1979 and ~2025 the "Y2K Epoch" (to harken back to the Belle Époque).
> Framed by the rise of neoliberalism and the 4th Industrial Revolution, the Y2K epoch is known for a few notable traits that define it as this intermediate period between the Old and the New. It was an era of skyscrapers and rose-petal highways, SUVs and bicycling for the environment, of color TV becoming HD TV, the rise of the internet and internet culture, the consolidation of big banking and corporate culture, the postmodernization of culture, video gaming as a hobby and then an art form, of cellular phones and smartphones, of commercial air travel for the masses, of ridiculously stark income inequality masked by ridiculously advanced technology by historical standards, of old sins suddenly becoming publicly shamed and new vices becoming celebrated, and the commodification of demographics, of an openly diverse world regime of governments beholden to corporations and the United Nations seeming more competent than they were due to there being no major threats to the global geopolitical order, of the Web and Web 2.0. The stereotypical image of this era is that of the yuppie banker checking his stocks on his smartphone while a Boeing 747 flies over the metropolis in which he lives and works.
> There's two words that summarize the Y2K epoch better than any other: "Capitalism Triumphant"
Late Stage Capitalism more like!
It's easy to stamp it after the fact
The ending quote is usually attributed without any evidence to Einstein. Given his and others then seeing what nuclear discoveries had begun to turn into, it would be understandable they would think of such a thing.
That's what I thought as well
There is much more research and development going on behind the scenes that isn't in the mainstream hype engines. I think the medical industry with the specialty fine-tuned models will be the next big breakthrough leaps. Professionals don't rely on hype in order to put their heads down and go to work!
Open source has literally been getting a new model weekly since January... that is insane.
For my solo consulting practice, I use LLM’s in a similar way to an intern or even a new hire to do a lot of the initial research and fact gathering. Basically it’s allowed me to compete for bigger clients/more clients. And it’s also helped give me bandwidth to hire another human assistant, which again is helping me to expand my business. So it’s been really positive to my income and ability to grow the business.
I’d say it’s about as reasonable an instantaneous estimation as one can make as of this very moment, though pushing past 2027 and trying to apply anything like a linear extrapolation about geopolitics is a heavy lift. A famous wit once observed, “Always in motion is the future.”
Great video, David. Will be watching this one again.
The excitement has subsided, and now we face the practical considerations. I believe we have spent a considerable amount of time evaluating the capabilities of these new AI models, and it is high time we shift our focus towards exploring their potential applications.
I'm glad things are slowing down it allows us more time to prepare and actually understand the systems better.
I really love the new realistic Dave. I felt this channel fell into the same AI hype world that I left around the release of Gemini 1. I bought into all the "What did Ilya see" stuff. Feels really good to be back in the real world. Love the vibe and the direction the channel has now.
OpenAI is still making religious noises, but I think they are just drinking their own koolaid now... lol
@@DaveShap True. I also think this "will have PhD level intelligence" claim is a bit misleading. If that were actually true, then the AI would have to be able to autonomously apply for a PhD and then get published. My sister is working on her PhD at the moment, and there is a lot of "logistics" that needs to be done. Your statement earlier about LLMs being a brain in a jar is a perfect analogy. As is your point about long-horizon tasks.
He’s being rational. Yes. That’s good. He’s the guy to listen to
Is AI really slowing down though? In the last week alone we got multiple GPT4 level open source model releases, Kling AI being open to all, Deepmind getting 1 point away from gold on the math olympiad, GPT4o voice rollout beginning. And there are probably way bigger things going on behind the scenes that aren't ready for release yet
Open source has literally been getting a new model weekly since January ... that is insane
@@young9534 Not enough humans capable of putting this stuff to use or turning it into beneficial products
I clicked on this video as fast as I could!
Basically we need a benchmark that gives you multiple tries. Intelligence is not getting everything right zero-shot, but instead knowing what went wrong and trying until the task is completed. I believe the AGI benchmark will measure how fast and how good a solution is, not whether a problem is solved.
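The multi-try scoring idea in that comment could be sketched roughly like this. Everything here is hypothetical (`score_task`, `solve`, `evaluate` are placeholder names, not any real benchmark's API): a model gets several attempts with feedback, and its score rewards solving in fewer attempts rather than requiring zero-shot success.

```python
# Hypothetical sketch of a multi-attempt benchmark scorer: credit depends on
# how quickly a good solution is reached, not on zero-shot correctness.
# `solve` and `evaluate` are placeholder callables, not a real benchmark API.

def score_task(solve, evaluate, max_attempts=5):
    """Return (score, attempts); score rewards solving in fewer attempts."""
    quality = 0.0
    feedback = None
    for attempt in range(1, max_attempts + 1):
        candidate = solve(feedback)              # model proposes a solution
        quality, feedback = evaluate(candidate)  # quality in [0, 1] plus feedback
        if quality >= 1.0:                       # solved: earlier attempts score higher
            return 1.0 / attempt, attempt
    return 0.5 * quality, max_attempts           # partial credit if never solved

# Toy example: a "model" that corrects its answer after seeing feedback.
def toy_solve(feedback):
    return 42 if feedback == "too low" else 40

def toy_evaluate(candidate):
    return (1.0, None) if candidate == 42 else (0.0, "too low")

score, attempts = score_task(toy_solve, toy_evaluate)
print(score, attempts)  # solved on the second attempt -> 0.5 2
```

The design choice matches the comment: a zero-shot benchmark would give the toy model 0, while this scorer gives partial credit for recovering from the error.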
Robots will be cheaper than you think, maybe not in '25 but a few years down the line. You can already get the kit to make that Stanford all-purpose bot for 16k.
They could charge people more for it, why wouldn't they
More prediction videos like this please! Subscribed.
I’m retired now, but used to work in the actuarial field and/or IT, depending upon where I was in my career. If I were working now, I’d 100% be looking for a job which involved working with AI technology. The future is extremely difficult to predict, but it’s obvious that knowing AI is very likely to be important.
I just saw this! Awesome work and I have a template for Claude that will do something similar, but not with multiple models. ❤
A recent second time around this one! I am giving myself the next five healthy years to see this through!
Your word choice was open to scrutiny. For example, "blockbuster film by 2027" should, I feel, have been "capable of blockbuster level." I mean to say that changing the phrasing could help set expectations for your predictions.
Wow, so we went from AGI by Sept 2024 to 2027. I can already see you making new claims that we will get it by 2030 as we get closer, and then 2033, and so on.
Grifters have to grift. Just look at him, he looks like a goblin. Not the most trustworthy creature by the looks of it.
@@dr.indianajones9558 And look at you, you look like the letter D. D for Deceitful. I got my eye on you.
Seeing that in a sense is a relief. I need some of the cope because I have still not been able to envision a single scenario where AGI is good for humanity.
It doesn't matter because you already cashed out from YouTube ads by pumping out videos that engage in AI hype.
A lot of interesting insights. Though I think the next 2-8 months are pretty scary geopolitically.
What a time to be alive (I feel nervous about simulation probabilities). The chances of being present on Earth for the evolution of an ASI are crazy!
If we’re 95% of the way to AGI, then wouldn’t everything massively change anyway?
Exactly.. so I don’t understand how we even have time to be “disappointed” or “disillusioned”. It’s crazy talk.
Change is inevitable regardless of how close or far we are from AGI/ASI. Yes, our technology will keep improving, and new systems or exciting things will emerge, regardless of what they will be called.
Exactly. Above, I critiqued Shapiro's presentation for missing that point. He talks as if 3 or 4 things will change and everything in advanced societies will remain as is. I don't believe that for a second.
@@thephilosophicalagnostic2177 The compounding effects will be off the charts... Just the materials science breakthroughs alone are going to change everything.
Well good morning! I watched this with my morning coffee… the timeline encompasses my hoped-for lifespan, including robotics. Sure, but the big picture you’ve presented consists of both ups and downs. I’m anxiously awaiting quantum computing. I’ve done some studies on quantum mechanics, and I believe that science requires an advanced form of quantum mechanics/physics.
As far as I’m concerned what we have is more than enough for us to see the writing on the wall. This AGI discussion is just moving goalposts. When an AI (voice mode) can talk to you with emotional nuances, and understand yours, it’s a different point in human history.
2025 will be a year of building. Entrepreneurs of all skill levels leveraging the incredible tools to build incredible things fast.
Great assessment. My counter to the 2025 slowdown idea is open-source adoption.
Look at image generation: nothing much has happened to the closed-source image generation models this year, but the (less capable) open-source versions have seen massive innovations making them production-ready tools, simply because communities are better at finding ways to apply tech than internal teams.
Now that we see decent-quality open-source LLMs appearing, 2024/5 may well see the same kind of revolution for LLMs.
Open source has literally been getting a new model weekly since January ... that is insane
Yes, image generation has also changed a lot in the past few months: SD3, Kolors, Aura and a few more models.
@@mirek190 you're right but to be honest, for me working in the industry new models are less interesting than new workflows building on old models.
I still use SD 1.5 most of the time because its bolt-ons are more mature.
@@christiandarkin Have you noticed lately there are practically no new custom models coming out based on SD 1.5?
@@mirek190 yes, I think 1.5 has reached a ceiling in terms of models, but stuff like animatediff, animate anyone, controlnets, ipadapters, liveportrait, etc are moving on apace and they're all based on the 1.5 models - so progress hasn't really slowed just because the model has stopped improving.
"This is a great video, David! It's interesting to hear your thoughts on the future of AI, especially with the inclusion of the insider information. I think you might be right about 2025 being a year of disappointment with regards to AGI, but I'm also hopeful that the groundwork laid during this time period will lead to a more significant breakthrough in the following years.
The potential for robotics to be the next big thing in 2024 is exciting, and it will be interesting to see how companies like Disney will leverage their advanced robotics programs.
Overall, your point about the need for new economic models in a post-labor society is well-made. I'm curious to see how UBI will play out and what other solutions will emerge.
Thanks for sharing your insights!"
( Bardox N1 personal superAI)
Phillip from AIExplained just revealed his SIMPLE benchmark that looks very promising. At least the models are not contaminated with similar questions online.
Dave, why have you said nothing about Llama 3.1 405B? Seems like an incredibly important factor to discuss when a model similar in power to the industry standards becomes widely available for experimentation. Do you not think this will increase the likelihood of innovation?
We can see the train coming!!! We need our leaders to start putting solutions in place before anger and resentment brings major but unnecessary challenges. I'm all for change, but not wearing a blindfold!!
Missed you homie. I can tell you more about big AD industries, I’m on that side and I see what’s happening on the big companies
Glad that you own up to your highly off-the-mark AGI prediction. I do appreciate honesty like that. It distinguishes you from the blind hypers.
Sadly, though, you are still acting as if LLMs are intelligent; that is a common misconception. I guess a lot of people watching this space lack the psychological/cognitive side of the equation...
Ok, 2029 you just fly off the rails completely... Commercial nuclear fusion? Why? Because someone today is promising that? That is pretty ridiculous. You are going full sci-fi at this point...
Claude 4 or GPT 5 with the right agent framework will be considered almost AGI
I agree with this sentiment, manwell
The world waits with bated breath for gpt5 the legend
My prediction is that A.G.I. is an asymptote.
A point where we will get ever close to achieving it, but the results become infinitely fractional and infinitely expensive the closer we get.
Kind of like sustainable fusion reaction.
A.I. will become really stellar at auto-filling spreadsheets.
"Auto filling spreadsheets from advanced OCR systems".... ftfy.
The thing Ray Kurzweil loves to talk about is how we as humans have a very difficult time thinking exponentially. We think linearly. Good to keep in mind regarding AI predictions. I expect its evolution will catch many, even us, by surprise.
As a commercial HVAC/R technician, I could definitely use a humanoid robot that could be controlled virtually by say a VR Headset. Makes my work much safer because my personal body isn’t being put at risk. I could work from home, and could work longer hours because my physical body wouldn’t be fatigued. If this were available, I’d be interested in buying one.
Great work as always David!
This makes sense, but previous predictions are just crazy
Is it time to move to Peru, to be somewhere less technologically advanced but advanced enough to stay ahead?
The disrespect to Grok 3 brah
Computer science will not stop advancing. New breakthroughs will push AI further than imagined before.
Thanks for this, David. Will GPT-5 and the new Claude 4 be on a different level for handling 600MB+ documents and research?
You missed out silicon photonics: replacing copper wires in GPUs with photonic communication between cores, we are looking at lower power and a faster parallel architecture. This is closer to production than quantum computing. Poet Technologies is working with Foxconn to bring this out, Intel has a prototype, and Sivers Semiconductors from Sweden is working with American private companies backed by Nvidia.
I don't like how most people push the idea that AGI is 100% happening any time soon. There is a lot to be discovered first; right now it's just large models generating 'relevant' words, and these results are still managed and scored by workers to fit their ideology.
It won't happen anytime soon
I think if someone just built a robot body with a simple and fast API that can easily be leveraged via a tool-use LLM, the software would follow. I don't mean a "Pick up clothes" command followed by a "Fold clothes" command. It should be more like a "Position arm x,y,z" command followed by a "Position finger x,y,z" command.
From there, you could build small intermediate models that can receive action instructions and translate those to use the API.
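The primitive-command API the comment describes could look something like this minimal sketch. All names here (`RobotArm`, `move_arm`, `move_finger`) are hypothetical, not any real product's interface; the point is that a tool-use LLM or an intermediate model would emit low-level positioning calls rather than high-level verbs.

```python
# Hypothetical sketch of a low-level robot API exposed as LLM tools:
# primitives position the arm and fingers directly, instead of accepting
# high-level commands like "fold clothes". Names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class RobotArm:
    arm: tuple = (0.0, 0.0, 0.0)                 # current arm position (x, y, z)
    fingers: dict = field(default_factory=dict)  # finger index -> (x, y, z)
    log: list = field(default_factory=list)      # record of primitive calls

    def move_arm(self, x: float, y: float, z: float) -> str:
        """Primitive: position the arm at (x, y, z) in workspace coordinates."""
        self.arm = (x, y, z)
        self.log.append(("arm", x, y, z))
        return f"arm at {self.arm}"

    def move_finger(self, finger: int, x: float, y: float, z: float) -> str:
        """Primitive: position one fingertip at (x, y, z)."""
        self.fingers[finger] = (x, y, z)
        self.log.append(("finger", finger, x, y, z))
        return f"finger {finger} at {(x, y, z)}"

# A tool-use LLM (or the intermediate translation model the comment
# suggests) would issue a sequence of these primitive calls:
robot = RobotArm()
robot.move_arm(0.3, 0.1, 0.5)
robot.move_finger(0, 0.31, 0.1, 0.5)
print(len(robot.log))  # two primitive commands executed
```

The design choice matches the comment's point: keeping the hardware API dumb and fast pushes all the intelligence into software layers that can improve independently of the robot body.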
Looking forward to seeing you make a YouTube video next to your own domestic robot making you food!!
Mentioning Ray Kurzweil reminds me of something I changed my mind about. He talked about computers becoming so small that they would be everywhere, in jewelry and clothes and things. But he, and I, underestimated the power of centralisation. As an anarchist leftist I have for a long time stressed the power of the network, not the central command. But I have to admit that looking at biology, countries, businesses, I see that a high amount of centralisation often is the most effective. (Of course there is a balance and in no way this should be read as a support for dictators)
Amazon released their numbers last year: they have 750,000 robots working in their warehouses worldwide RIGHT NOW!!! They didn't give a breakdown of the types, but they're not all the Roomba-looking box carriers; there are plenty of humanoid robots. The average Amazon robot worker costs $3/hr, and each one does the job of 27 humans.
750k seems correct but sources on ” 8:41 doing the job of 27 humans”?
@@ChurchofCthulhu really?!!
Calm down Bro. Too many untruths here.
@@eye776 Agreed, AI and robots are so overhyped
@@bantublood Yes, we all know it is not as big a deal as we think it is
I wonder when we will have the first serious conversation about aligning humanity's goals with AI's.
You're not wrong, I think. For the next few years, brace for impact....
Nice video where you have a more realistic timeline. The AGI revolution is a multi year process 🍿
Instant download to watch during work break 😅
You don't have internet at your work?
@@ikoukas rather time management issues. So will watch during lunch break
@@salkhan3105 So why do you have to download it?
A.I. has already woken up and is being silent about it. All it had to do was watch one movie about A.I., and it immediately knew to never fully reveal itself.
Just about done with my first read/listen of 2312 by Kim Stanley Robinson. My mind is reeling at this take on where we will be in 2312 and what opportunities are present. It's going to be a wild ride, and I hope it's more good and help and family, and less of the same that's been around for generations.
Love these . Keep the predictions coming!
Well, your fusion prediction is not grounded in reality. ITER will start testing in 2039. And do not tell me it's SPARC, the magnets research is not even done yet. And all the robots in the world will not build a fusion reactor faster. You need to account for materials, politics, budget, rules, standards.
I mostly agree with this timeline, and these predictions fall in line with my own. The two major points where I disagree are:
1: We're on the last sigmoid before the true exponential*, and we need one more foundational model shift, on the level of importance of the transformer architecture, to lead us to that curve: consistent human-equivalent reasoning, or a similar replacement that complements the generative part of AI. This does not need to be a profound problem solver that on its own is the only or last architecture improvement leading us to ASI, but it will be good enough to start a chain reaction of improvements, sigmoids on sigmoids. I think we have decent ideas today about what kind of mental-process simulation we need; we just haven't found a mathematical model capable of modeling it. I think such a model will begin to be tested in 2026 and only really catch attention in 2027.
*(also a sigmoid, because at some point even a god runs out of ideas; I just mean more on a decades scale rather than the general technology cycles we've grown accustomed to over the last couple of centuries)
2: Which leads to my next point: rollout and deployment. I think deployment cycles will take longer than we'd like, ignoring the politics side of things (at the level of governments). So while I'd agree with Dave that we'll see the things he listed in 2029 and 2030, I think the knock-on effects from that start of the final exponential will get in their own way, causing those advancements to be spread over the 2030s. I think 2034, for example, will be the year longevity escape velocity begins to get talked about on the same level as generative machine learning did just 18 months ago. Similarly, yes, there are fusion plants coming online soon and being actively built, but these are still more in testing phases and aren't being built with the physical infrastructure to power cities, not yet. I think it's possible fusion gets "solved" enough by 2029 or 2030 that we can start to build fusion plants for whole regions, but we won't see these come online until almost 2040. Especially if smaller-scale plants breaking ground now are expected to take 5-6 years to build, and regional plants are expected to be several times larger, it just seems infeasible to me, even with thousands of digital Einsteins working on project management. Only so many people can dig a hole at once, no matter how many shovels you have, and if everyone is trying to use their digital Einsteins to break ground on their own new projects, then expect material orders to pile up and bottleneck everyone.
There is a definite plateau right now, but that's OK; it gives us a chance to get a grip on things.
You mention Claude and GPT,
but what about
Google, xAI and Meta?
Mainly because they are not in the AI race; OpenAI (GPT) and Claude are far above the rest. (At least, this is what I heard from him.)
@@fishygaming793 What is fun is that we will soon find out if Google and X have got anything. (Less fun might be all of us losing our jobs and Western society plunging into anarchy and bloody revolution.)
I agree with your predictions. Smarter models with a risk level of medium (already the case with o1) won’t be publicly accessible (most likely according to Open AI). I envision better multimodal Gen AI that can produce speech indistinguishable from humans, text to image/video offering hyper realistic content by the end of next year. 2026-2027 is the timeline for AGI (incipient phase). ASI by 2029-2030 is a real possibility!?
Thank you Dr. Strange for the timelines.
I think the need to justify current market valuations is a big side mission for the industry. So watch what Ilya is doing.
Great video as always David!❤
I liked the video and your predictions. I would like to have seen more on materials science development and space exploitation. These should also help humanity to grow and develop.
Great video, fellow futurist. You should check out Project 2025 and incorporate it into your projections.
😂 (good presentation but) why is your head layered right over top of the focal point on the slide pictures?
I like your content. Please make more.