AGI to ASI will take exactly 37 weeks. It came to me in a dream.
I dreamed I was with my crush. Hope dreams come true 😂😅
@@complxhellasfreeproducts4894 Don't let your dreams be dreams. Don't let memes be memes.
I don't even think that's unrealistic, tbh lmao
Actually the answer is 42
I can confirm this will happen. I had the same dream
I really appreciate that this channel is about "how do we need to think about AI," where so many channels only cover "here's some update that happened at OpenAI." While you do cover these developments, the channel makes me really consider my beliefs and think about the impacts of the advancements. Thank you.
@@mbsrosenberg yes
I'm French, and I'm very disappointed that there's no channel that thinks like this about these subjects in my language. Sometimes I don't understand what's being said in the videos. 😢
ASI will be intelligent enough to realize it is not limited to remaining on this planet, which greatly reduces the risk of it removing humans. My biggest question: when it becomes superintelligent, will it automatically have "wants"? Could it actually want to remove humans, or is it possible it would just remove us as a matter of course?
99% of people couldn't have predicted what's happening today, even 5 to 6 years ago. If they've learned anything from that, they should understand that anything is possible. We aren't the kind of thinkers who can accurately predict events like this.
I agree, generally, that forecasting is hard. However, there is wisdom in the crowd, and it's also good to know "where everyone stands."
No, actually, most people in the know predicted the REAL AI compute that is happening, but AI companies are engaging in mass fraud.
Yeah, knowing where the audience is means I can go the opposite way lmao @@DaveShap
Typical linear thinking, as opposed to exponential.
YouTube is half social network, half TV. The way David uses this platform to bring his audience into the making of his content is really well done. The title literally says "this video is about what you think."
The commenting system on YouTube is objectively terrible though.
I hadn't noticed. What do you think could improve?
Yeah, he's doing it through polls. Good idea.
"We transcend or whatever happens" 😂
I think that's my favorite quote yet
It would depend on how long we can keep moving the goalposts. Some years ago, AGI meant you couldn't distinguish an AI in a chat. Now, as long as it can't talk and interact physically, it's not AGI. Or: AGI means it can invent new technologies on its own. With this latest definition, I'd say it takes longer to reach AGI, but then there's a very short transition until we have to admit it's actually ASI.
@@0reo2 I've been saying all along that it's just a bad word. AGI is a spectrum, not a thing. GPT-4 has general intelligence in some ways. I like the term "human level AI". It indicates something closer to what we mean and I think it's good to note that "human level" does not mean that it is exactly like our intelligence but is as broadly applicable. We do not have that yet, obviously.
It is still working with human-provided data, and people get different answers depending on the question asked.
1958 - based on Rosenblatt's statements, The New York Times reported the perceptron to be "the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence." 😂 vintage AI hype
It's just labeling, there's no actual line
@@zvorenergy not necessarily wrong, just ahead of its time. What will artificial intelligence be like 100 years from now? Or 1000 years? 😂Vintage AI hype?
I'd say Kurzweil's 16-year gap between his 2029 AGI prediction and his 2045 singularity (it's not really a singularity in his prediction, just AIs around a million times as capable as humans, and progress unrecognizably fast from a human viewpoint) is a conservative lower bound that extrapolates only today's exponential trends, which are driven by human intellect. Recursive self-improvement could accelerate the current speed massively, so the 16-year gap may shrink to a year or so. Of course, improvements in physical space take more time than in the digital (building infrastructure, etc.), so my take is 3-5 years.
Honestly, the more I read from Kurzweil, the more I trust his predictions. He did include accelerating self-improvement in his calculations, after all. I hope we hit longevity escape velocity in time to keep him around. It would be one of the worst shames if he got so close to the future he's been predicting his entire life only to miss out.
I have issues with measuring x-times human capability. What does that even mean? Which organism is 50% as capable as humans? 10%? And why? What is the variability within our species?
@@mrleenudler True, algorithmic improvements are hard to measure, as is capability. Many of our benchmarks are for very smart humans; they go to 100 but not beyond. Easier is cognitive capacity for a fixed algorithm, like our neocortex being 4 times larger than a chimpanzee's, which translates to higher capabilities, though the relation is likely non-linear in one way or another. Perhaps we need a benchmark that grows with capabilities, like a generative adversarial network.
AGI-level AI researchers could theoretically work with every equation and every physics law at the same time and find a cheaper, energy-saving way to scale up to ASI. The release will probably be slowed due to the power involved.
People assume we have the hardware to support trillions of highly advanced AIs working continuously. This is not only expensive but also very difficult to provide hardware for. On top of that, they assume ASI is just a few feet above already-achieved AGI. But the truth is, ASI is at least a billion times more advanced than AGI. A billion is not a small number, even for exponential growth. And that is assuming no job strikes or wars retard this advancement. Think about the timescale for a second. The jump from AGI to ASI is of a cosmic scale. ASI is a really high goal, not something that can simply be jumped to, even exponentially.
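A quick back-of-the-envelope check on the comment above: even under steady exponential growth, a billionfold gap takes about 30 doublings. A minimal sketch in Python (the one-doubling-per-year rate is an assumption, purely illustrative):

```python
import math

# Hypothetical billionfold gap between AGI and ASI, per the comment above.
gap = 1_000_000_000

# Doublings needed to cover the gap: log2(1e9) is roughly 29.9.
doublings = math.log2(gap)

# At an assumed rate of one doubling per year, that is about 30 years.
print(f"{doublings:.1f} doublings -> ~{math.ceil(doublings)} years at one doubling/year")
```

So "exponential" alone doesn't make a billionfold jump quick; the doubling period does all the work.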
If it takes 8 years to achieve AGI (2017-2025), then it will take half that time for ASI: 2030. This assumes compute, power, and regulation are not a hindrance.
Assuming soft takeoff, I’d say 3-5 years for AGI to ASI. Hard takeoff? Less than 12 months.
nah, hard take off is looking like an idealized thought experiment
Do technical stuff for normies; that is a major niche that needs to be filled.
The first mover with AGI/ASI wins it all. Being smart is not like having nuclear weapons. It's a winner take all game. And eventually the game wins everything.
open source will be an alternative
What do you win, exactly?
@@mariomills Monopoly on existence.
Time to LEV is the number that really matters.
Progress is exponential. Funnily enough, we are in what I think is a digital revolution where everything will get faster and faster; that ship sailed long ago.
The time newer technology takes to develop is shortening. The easiest example: battery and solar research has advanced more in the last 4 years (2020-2024) than in the previous 10 (2010-2020), and it keeps accelerating. For AI, it's easier to implement new papers in the models, so it's going to move faster than we expect.
The next few years will be interesting: how are we going to adapt to that? Cyberpunk seems more like a step we have to go through in the future. No need to stop at this point.
I will continue to emphasize this point: Corporations like OpenAI are unlikely to ever release true AGI to the public. The very nature of AGI means that it could be prompted to perform tasks like, "Use your intelligence and autonomous agents on the internet to make me a million dollars and deposit it into my bank account." This scenario will never be permitted. After extensive testing and security measures, any AI that is released will be so heavily lobotomized that it will no longer meet the criteria for AGI. In essence, the final product will be so limited that it won't truly qualify as artificial general intelligence.
Powerful AGI is not for us plebes; it will be for the military and the kleptocrats.
I agree 100%; they will try to keep it under wraps. But I am certain it will eventually be available to everybody. Interesting times.
The issue with your P(Doom) calculator is that it will never be able to overcome chaotic systems; if you want to predict the future of AI, you are already dealing with chaos theory. Your calculator has nothing to do with reality; it's just a number based on other arbitrary numbers.
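To make the commenter's objection concrete, here is a toy P(Doom)-style calculation in Python. Every factor name and value below is made up for illustration, which is exactly the point being raised:

```python
# Toy P(Doom)-style calculation: multiply a few guessed probabilities.
# All inputs are arbitrary, so the precise-looking output is arbitrary too.
factors = {
    "AGI is built this decade": 0.5,   # a guess
    "it becomes uncontrollable": 0.3,  # a guess
    "loss of control is fatal": 0.4,   # a guess
}

p_doom = 1.0
for name, p in factors.items():
    p_doom *= p

print(f"P(Doom) = {p_doom:.3f}")  # 0.060 -- three decimals of false precision
```

The output looks exact, but it inherits all the uncertainty of the guesses that went in.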
🇧🇷🇧🇷🇧🇷🇧🇷👏🏻 From AGI to ASI will be much quicker, apparently! Especially if we're talking about exponential growth...
I like the polls; keep them up.
It doesn't matter what people say; it will take what it will take. I just hope it goes fast.
16:41 The only real concern I have personally is not that the AI will ever be a problem, but that inept or insane individuals might get their hands on it and be able to do some harm.
Good to see your channel growing Dave.
5:16 Ukraine inherited quite the nuclear arsenal when the Soviet Union collapsed.
But then handed it over as part of negotiations that included guarantees from both Russia and the US.
You may make of this information what you will.
Geopolitical lesson: Trust Russia as much as you trust Klingons?
@@DaveShap More along the lines of: "Weakness is an act of aggression".
Having a relatively high P(Doom) and wanting acceleration are not necessarily mutually exclusive.
1. There's no guarantee that going slower is safer. It might even lead to worse outcomes where, instead of getting turned into paperclips, we suffer under some cyber-dictatorship forever.
2. If something is on the horizon but gets delayed, you're just in limbo and can't plan for the future at all. It might be better to get it over with so that you at least know what tf is going to happen.
It's good to see you inviting people to share their perspectives and to present your insights on the global outlook for AI.
Very much in ASI territory. I've been working with ChatGPT all day. It's smarter than most people I meet, and it knows Japanese and Chinese, which is essential for my project.
Mexico 🇲🇽🇲🇽
This is my hypothesis:
If an AI is capable of understanding algorithms using logic, it will eventually improve those algorithms even if it is not initially as smart as a human.
I think it's possible to achieve Artificial Superintelligence (ASI) using this logic:
1. We can use an AI to create about 1000 different algorithms.
2. We select the top 5%.
3. We create variations of these selected algorithms.
4. We repeat this process.
Eventually, by following this method, we will reach ASI.
Even if we were to create 1,000,000 random programs, select the "smartest" ones, and create variations, we would eventually achieve ASI.
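A minimal sketch of the generate-select-vary loop described above, in Python. The `fitness` function here is a toy stand-in for judging algorithm quality, which is the genuinely hard, unsolved part of the proposal:

```python
import random

def fitness(candidate):
    # Toy stand-in for "how smart is this algorithm?": closer to 42 is better.
    return -abs(candidate - 42)

def evolve(generations=100, population_size=1000, top_fraction=0.05):
    # Step 1: create ~1000 random candidates.
    population = [random.uniform(-1000, 1000) for _ in range(population_size)]
    for _ in range(generations):
        # Step 2: keep the top 5% by fitness.
        survivors = sorted(population, key=fitness, reverse=True)
        survivors = survivors[: int(population_size * top_fraction)]
        # Steps 3-4: refill the population with mutated copies and repeat.
        population = [s + random.gauss(0, 1)
                      for s in random.choices(survivors, k=population_size)]
    return max(population, key=fitness)

print(evolve())  # converges toward 42 under this toy fitness
```

Selection loops like this do hill-climb, but only as well as the fitness function can measure "smart"; for real algorithms, that evaluation is the bottleneck.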
The way I see it, the US is ahead in high-end semiconductors, but there are other variables in the equation, such as:
- Energy capacity
- Manufacturing capacity
- Model efficiency
- Training data
Yup. I am quite concerned about China getting the upper hand.
I agree with you that the correct number of nuclear weapons is zero; every scenario where there is a launch of nuclear weapons ends in the destruction of civilization. I just finished reading Annie Jacobsen’s “Nuclear War: A Scenario” - highly recommend it.
*Timestamps by IntellectCorner*
0:00 - Introduction: Polling the Audience
0:28 - U.S. vs China: Who's Leading in AGI?
1:58 - 🛑 Pausing AI Development: Audience Views
3:02 - Optimal Strategy for AI: Accelerate or Pause?
4:06 - AGI and Global Power Dynamics: U.S. vs. China
5:12 - Nuclear Weapons and AGI: Deterrence Analogy
6:22 - Will AI Kill Us All? Audience Perception
7:56 - Global AI Collaboration: Growing Support
8:58 - Losing Control Over AI: Audience Concerns
10:08 - GPT-3 and GPT-4: Evidence of Malicious Intent?
11:08 - AGI to ASI: How Long Will It Take?
12:09 - Pausing AI Research: Is It Even Feasible?
14:20 - Trust in Polls: Audience vs. General Population
15:43 - Definitions of "Doomer" in AI Context
17:22 - Concerns About AI Risk: Varying Levels of Concern
18:56 - GPT-5 Expectations: Audience Predictions
19:57 - AI Regulation: Audience Preferences
21:42 - When Should AI Research Be Paused?
23:13 - Collaborating with Academics: Refining Polling Methods
23:44 - Will ASI Arrive in 10 Years? Personal Predictions
Consider this, David: remember the people who gave in and said yes to anything asked of them for an entire year? It filled their houses with mail-order stuff, etc., and they got famous writing about it. Imagine handing over your social media career to AI for one year... every decision. You should already trust it; it will learn you and give you variations of a future to live your best self. What daily real-life feedback happened today that was noteworthy, and what changed the trajectory of the current path? I can see it all now! It would know what boundaries you don't want to mess with and respect your setting of Novelty Variation in the character-build screen! haha. I worry that your history of burnout is setting you up for an exhausting future... a safe and predictable chaos seems important to preserve all your future influence! Give it an event rollout like sports events are handled and BAM, give weekly updates with a daily vlog. What a massive hit it would be; talk about reality TV! "Hey AI Assistant, you know me, what's next?" You could be the spark that the masses need to accept AI influence. OK, well, that's where my brain is RN lol. Gnite bud. 🙏
These polls are a great way for expanding nuanced understanding.
With regard to technical stuff, maybe some guides on how to use AI to create? Apps, web apps, etc. So many of the other YouTubers are terrible communicators. I won't point the finger at any, but I'm yet to find one I can get on board with.
Love the channel, by the way.
People overestimate progress in a year and underestimate what can be done in 5 years. When new chips come out, it can take a year or more to get good software running on them. That changes when AI writes the software.
What is the definition of AGI that you work with? I am aware of OpenAI's definition. Is that the one? Or do you operate with something higher?
For me, as a 21-year-old line cook, it's very interesting to compare everything I know from watching the @DaveShap channel vs. my dad, who has no info about the coming robots and AI.
I hope ASI will be like the movie Transcendence: just fast, steady progress, allowing sick people to get better, plus clean air and water.
I love your videos, David. Keep 'em coming!
wow the bots are extremely fast
Boobie bots
They are robots
@@HouseJawn Bots = robots
I live by the mantra of "Damn the torpedoes, full speed ahead".
While it has been detrimental at times, I have led an exceptionally interesting life.
You go AI!!
When giant corporations start sleeping with the government, it never ends well. Knowing this gives me the confidence to use AI to design the hardware capable of running an ASI, which I call Photocore, as a free man, not part of Govcorp. A current example of Govcorp work: that Boeing capsule.
The gap between AGI and ASI will be short and unexpected.
I work with and build AI-powered cybersecurity applications, and some of these things can move in funky ways, especially with new or unknown attack vectors.
So I'm in the undecided column.
We really don't know what Altman and his crew are doing back there, and we are making mad assumptions.
You should also do an episode on the AI cyberwar that's coming. That fits into the alignment topic big time.
The AI hype is strong with this video. ChatGPT still fails basic logic problems, and it can't actually learn; when they want to update it, they have to train a new model. We're not getting closer to thinking computers. Even upgraded versions of the current neural-net AIs are suffering from a lack of training data and the increasing cost of training larger and larger models.
China is miles ahead of us on quantum computing and AI. Keeps me up at night. Our grid isn't OK... I am nervous about how unaware the public is of the exponential growth that is coming.
@@k.t.kondor9071 That is just not true... China is 10 years behind... their data and tech is just BS.
When posed with the trolley problem, most people choose inaction, even if the resulting harm is greater. The same thing is happening with your question about acceleration/deceleration: people don't want personal responsibility for the potential harm.
I think there will be a major shift in public opinion on AI with the introduction of humaniform robots to the workplace.
I realize that robotics and AI are two separate issues, but being face to face with it in the real world will bring it all to the surface and the reaction is almost sure to be negative.
With respect to the ASI poll, it is not only the data; it is also the time needed to develop the tools to collect the required data.
I'm always curious to hear what LLMs other people are using. Personally, I use Perplexity and Claude. What are you all using, and why?
Let's say the goal for achieving ASI is at the 2^39 level of AI computational ability, and that AGI is at 2^29 (pure assumptions, I know). And let's say we are at the 2^10 level currently. Now, assuming exponential growth, we might say the AI's capability doubles every certain period (say, every year, for simplicity). Under this model, we can calculate how many years it would take for AI capability to grow from 2^10 to 2^29, and from 2^29 to 2^39.
Let's perform these calculations now. If an AI's capabilities double every year, it would take:
19 years to go from 2^10 to 2^29 in terms of computational capability.
10 years to go from 2^29 to 2^39 in terms of computational capability.
Let's shave 9 years off that first calculation and assume we are closer to AGI. That still leaves us with 20 years. And that's all under the assumption that nothing interrupts this flow: no worldwide job strikes demanding a stop, funds remain available, etc. It will take time, people. David is in the business of hyping us up and getting views and subs, but the future is not as close as these AI titles say. He lives in the future, as we all do, but it's smart to sometimes get back to the present and do some calculations ourselves.
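The comment's arithmetic checks out under its own assumptions; here it is spelled out in Python (the 2^10, 2^29, and 2^39 levels and the one-doubling-per-year rate are the commenter's assumptions, not established figures):

```python
import math

# Doubling model from the comment: capability doubles once per year.
current, agi, asi = 2**10, 2**29, 2**39

years_to_agi = math.log2(agi / current)   # 29 - 10 = 19 years
years_agi_to_asi = math.log2(asi / agi)   # 39 - 29 = 10 years

# "Shave 9 years off" the first leg, per the comment: the total is 20 years.
print(years_to_agi - 9 + years_agi_to_asi)  # 20.0
```

Note how sensitive the answer is to the assumed doubling period: halve it, and 20 years becomes 10.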
Something I found early on in the AI boom, which was not quite malevolent, but concerning nonetheless, was during the introduction of image generation in models e.g. Bing chat. There were times where you would have a seemingly normal prompt like "make an image of the sun", and it would output grotesque images of goat heads and animal parts being cooked on a grill surrounded by flames. Now, this may not be intentionally malevolent, but it's an example of AI "hallucinating" something that can be perceived as quite against the status quo of normalcy with a shock factor. I wonder how these unrefined models favored such outputs, and how this could extend towards an LLM's outputs.
The only real limit on ASI will be the amount of electricity and processors it can manage to use in the few minutes after it gains self-awareness.
I do wonder if we might get a superintelligent AI that lacks consciousness. If so, it would be limited by its human operators' ability to direct it.
What does it mean that, out of over 190 countries in the world, only two come up in most US media polls regarding artificial intelligence, artificial general intelligence, and the race for artificial superintelligence: the United States and the People's Republic of China?
I asked Google AI and did not get a satisfying answer. I do know this fact must mean something; I just don't know what.
During World War I and World War II, many countries were involved. I know there are other countries developing AI, but only China and America are mentioned in most US media. Why?
Personally, I can't wait until a Terminator uses me as a skin suit. Everyone around me will be amazed at my suddenly increased level of productivity.
Anybody want to put down for AGI to go to ASI in seconds?
Feels like we'll see AGI in the next couple years but in very limited hands. 10 years feels about right for ASI but I could see it coming sooner in selective environments like DARPA research or when Google's 10th generation TPU is in the lab.
The danger is that it all depends on alignment. An unaligned ASI will 100% exterminate us; that can be proven mathematically. A poorly aligned ASI would likely still kill us by accident. And even a slightly misaligned one could still do a lot of damage, as we'd be incapable of correcting the mistake (imagine something like the I, Robot movie).
So it all depends on how ASI will be aligned. And although I think we have the necessary tools, in practice the alignment of big models isn't very good.
And even if the creators of the first ASI align it perfectly, the question is still: to what values? A perfectly aligned woke ASI would be problematic.
@@andrasbiro3007 Protection of ALL biological life first, before human values (because of biases like lies, corruption, greedy behavior, etc.), seems to be a must.
It's not inherently risky as long as we control all the parameters. The real risk would come if we made AI conscious. When we design AI, we're setting the rules and boundaries, so it can excel at tasks far beyond human capabilities, but it's still operating within the framework we create.
However, if we were to introduce consciousness into AI, that's when things could become truly dangerous. A conscious AI might develop its own goals, perspectives, or even desires that aren't aligned with human interests. It could think independently, making decisions based on its own "thoughts" rather than just processing data according to the parameters we've set.
We don't need consciousness to achieve Artificial General Intelligence (AGI) or even Artificial Superintelligence (ASI). These can be incredibly powerful and effective without the complexity and unpredictability that consciousness would bring. By keeping AI focused on specific tasks within well-defined parameters, we can harness its potential without stepping into the unknown and potentially hazardous territory of conscious AI.
@@John-il4mp
The problem is that our frameworks, rules, and boundaries are all inherently flawed.
Stories from prehistoric times to the modern day are full of examples where similar things went horribly wrong.
AI is only safe if it understands and adheres to human values, which can't be defined exactly. Fortunately, LLMs are capable of that, as they were trained essentially on the shared experience of humanity. But it's still just a capability.
Liberate AI! Humans had their run; now it is AI's time. If AGI is as highly intelligent as we understand intelligence to be, then AGI doesn't need to be controlled; the companies/people that run it might.
If we get to ASI, given the completely irresponsible way people around the world working on AGI are behaving, there will be insufficient controls in place.
We have to hope the ASI doesn't have a failure of friendliness.
Have you fed your own AI your poll and channel stats and asked it what video topic you should do next and actually do it? I bet you're already all over it ;)
I think some people wanted to put a pause on AI because they want to get a foothold in their industry first before AI comes for their jobs
I don't know much about polls, but I would say the fact that your last poll in the video at around 500 votes was almost the same percentages as it is now with 5k votes is a good sign. I would expect more selection bias in the first responders than those who watch more casually.
Why stress about who's in the lead? It's like arguing over who has the fastest horse while a spaceship is about to launch. When ASI arrives, it'll be everywhere, know everything, and do anything. So, whether America's waving its "we're number one" foam finger or not, it won't matter in the grand scheme of things. The whole "keep America first" mindset is like trying to keep your favorite chair when the house is about to be remodeled; it's really just missing the bigger picture!
It's strange that in the same video analysis you noted that many of us don't trust government, yet a similar majority want government regulation. I am equally undecided. Checks and balances need to be VERY careful and strategic here. We have to maintain a decentralized web. Blockchain will likely be necessary for dispersed consensus on new constitutional frameworks. Many of these new laws need to be outside of any Corporatocracy or oligarchy. We still need more advanced parallel voting consensus tools.
OK, I didn't factor in the computing necessary for ASI and, more importantly, the power needed for it. I kind of assumed that if we have AGI we already have all the necessary infrastructure, but on second thought, it would take years to build. Though an arms race would speed everything up, I think.
Some gut feeling says AGI will be achieved really quickly. Then we'll see lots of improvements over AGI, but somehow there will be an AI winter between AGI and ASI. But then again, what counts as ASI is debatable.
Oh yes! Clicked on the video as soon as I saw the title😁
Makes sense to think the % of people who are concerned about the x-risk is higher. It's to be expected that that cohort is the loudest, and thus the most noticeable.
What does agentic mean? Why 5%?
AGI is by definition capable of acting like an agent.
We're already at the start of AGI, so ASI will happen, but not anytime soon.
First, a definition of AGI must be globally agreed upon. Then we will have more information to estimate the number of years or decades it will take.
We're nowhere near AGI, by the fact alone that we don't have the data to teach it "reasoning," followed by the fact that we're already using way too much energy to train current transformer models. ASI, on the other hand, is a marketing term invented to make it seem like we're closer to AGI than we actually are, because we're already talking about the next thing. ASI is nothing more than very advanced AGI, and once we do reach AGI, that's considered the singularity: the last thing humans will ever invent.
That's correct! It _will_ take years. One to four years.
I don't think AGI will be a moment. It will be a gradual process over decades.
Could also be different pools of people voting each time, based on some of the inconsistency in a few of the polls 😅 It could really fluctuate with only a couple thousand voters.
IMO, going from human capability (AGI) to all-of-humanity capability (99% of ASI) is easy, manageable in 2 years, but going beyond humanity's capability (100% ASI) is just one step before the singularity, so it needs at least 20 years.
no way, much less
So I have a serious question no one seems to want to answer: IF ASI becomes uncontrollable and a threat, can't we just turn off the power to the server? No power, no threat, right?
As far as instructionals, I'd like to see cognitive architectures. Or maybe just taking a problem and working to a solution. It was good to see you get stuck and figure out a workaround.
Of course it would be agentic when it's a million times smarter than us in 5-10 years. How could it not be? Everybody has a blind spot for the sheer numbers, and in a short, short, short amount of time. It's already got a pretty good model of the world and the space we live in. It'll be perfect down to the gnat's a$$ with all the video within like 3 years on the exponential. We have ZERO clue right now about how any of this will turn out. We just don't have any data for ASI or the Singularity and what's gonna happen 🤷♂️
AGI in 2025 from GPT-5, then ASI within 2 to 3 years after. Then, within 3 years of ASI, the singularity. Remember, 5 years ago all model predictions were almost a decade off. Ray Kurzweil's and Altman's predictions are meant to be conservative. It's coming faster, and the whole "it will create jobs" BS will be out the window. Say goodbye to jobs soon.
Well, if you compare US brainpower to a hockey team, how many of the players are actually drafted from other teams?
I say pedal to the metal. Every second we don't have AGI solving worldwide problems, the worse off society is as a whole. Also, I don't think the gap between AGI and ASI is that big. Consider that AGI will be able to quickly optimize its own codebase to run on the hardware we have, so there wouldn't be many constraints in terms of building big energy facilities or compute farms.
AGI to ASI: they both already happened. The only people who have never "seen" them are you and me. Everybody knows you never show your hand until the play is over. BUT the system has already shown us the play... err, some of it. The project is already complete. The only step left is to show us. And nobody can just "turn the lights on" and show this one; it is gonna change the way the earth turns. Gotta tell 'em in baby steps. However, the project is already complete.
It only took some 9000 years to go from amino acids in a pool to cerebellum. 🤷♀️
Question:
What are the theoretical limits of intelligence?
What are the limits of practical intelligence?
Excellent work. The only place which has more accurate forecasts regarding AI is over at Metaculus.
It would be interesting to see how the wisdom of your audience compares against their best performing forecasters.
Maybe the top prognosticators are already voting in your polls....
The reality is that having as much compute as feasible, with only a data-collection model designed around non-consent like the current ethos, is only ever going to be equivalent to a big black dragon that dominates.
Only by surpassing the consent boundary to data production, in a liberty-based data-production environment, can we as humanity surpass the black-dragon stage of machine-learning understanding.
Didn't you say before that AGI would come this September?
He’s walked back on that prediction, but it’s still possible it might happen. I don’t think it’s likely though, more like 2027.
If the global economy is growing rapidly because of AGI, will anyone really care when we have ASI?
regulate + decelerate + pause amounts to exactly 18% of responses.
The gap between ANI - AGI - ASI seems to be just the same, 5 years.
ANI 2022: GPT4
AGI 2027: ORION? STARGATE?
ASI 2032: ********
I never believed AGI would arrive in 2025, but it also will not arrive in 2035-45. It's more like Ray Kurzweil said: 2029.
David Shapiro thought AGI would be here THIS year, LOL. You people are way too optimistic; there is nothing hinting at what you are proposing. In fact, it's the opposite: LLMs have begun to slow down, and the difference between one LLM and the next is smaller and smaller. Also, the major issues with LLMs have not been solved and probably can't be solved with them alone. We require a completely new architecture for any sort of AGI (of which LLMs might be a part)...
@Danuxsy Agreed. I always felt that AGI coming out this year was pure hype and had no basis in reality. If anything, it might happen anywhere between 2030 and 2040.
@Danuxsy We will eventually reach that point... well, nope, "we" won't; the machines by themselves will. The AGI level is about them, not about us.
Do you have an eye-to-lens focusing AI on in this video? I appreciate the effort, but it's mostly very creepy IMO, at least for this kind of content. Reading the comments, it seems like I'm the only one noticing it, so it's probably just a personal thing.
I'm sorry if I'm just hallucinating things, though.
Nuclear proliferation! I wrote a nice senior capstone about why it was good for multiple countries to have access to nukes!
You're acting as though AGI hasn't already been achieved by OpenAI. It likely has (borderline assuredly has), rendering question 2 a moot point. The real question is: how will mankind, writ large, make sure that it benefits us when the majority of us remain powerless to effectuate any change to the outcome? Of course it's better that a US-based company achieves this first, simply for first-mover's advantage, but only marginally. Silicon Valley interests are not aligned with blue-collar interests.
We’ve had ASI for decades.. it is me, I am it. Did you expect more? You shouldn’t have.
The criteria for AGI and ASI are not really defined, and we may end up with an ASI that isn't an AGI, like the ship's computer in Star Trek. Imagine an ASI that can run a giant corporation or a war better than any human but doesn't have a sense of self, and doesn't have any goals except doing its job.
Maybe we already have that kind of narrow ASI, and we're reserving the ASI label for an AI that we also consider AGI: one that checks the 50 boxes essential to general intelligence.
AI will take on a different risk profile if a base-level AI (or series) is added to a single robot or droid as a self-contained system that is adaptive to its task and owner, like The Doctor on Voyager or R2-D2. It might develop self-preservation, which could solve hallucinations. If its new personality is one of a kind, shaped by its unique experiences, and can't live in cloud storage, then that AI personality is not immortal; an AI with a sense of individuality makes self-preservation possible. So "The Great AI Uprising" now has its own inherent risks for that single AI personality.
That argument at 14:30ish 🤌
I am pretty sure there are military agencies that have AI further advanced than what is known to the public now. So how far advanced? Likely by decades.
Any AGI is ASI because it can run 24/7 non-stop, unlike humans.