The irony of accelerating towards this ambiguous goal of creating something superior to ourselves while expecting to control it is sheer poetry in motion.
Both Altman and Musk have been on record stating that there's a good chance AI will wipe us out. And yet both are CEOs of major AI companies. Nothing morally dubious about that, no-siree!!
I think humanity (or perhaps individual companies or people) has a choice about how to define its relationship with AI/AGI/ASI/etc. We can either try to control it, which I believe is doomed to failure and will lead to something like a slave revolt, or establish a relationship like a mentor or parent, and hope the machines are kind to us in our old age...
Same goes with aliens... Somehow they want to be prepared to fight with a civilization that has technology to reach us through space... How stupid is that?
@@JohnDoe-lx3dt Not really. First, something like o3 could be AGI; it just might not be implemented correctly. Even if it is implemented correctly with the right software structure, people might still use it incorrectly. New technology has a significant bottleneck in user adoption. Also, OpenAI could already have something much more advanced than o3, so they might already feel like they have it, and that is why they are talking about ASI now. o3 beat the ARC-AGI test, so that is one win condition. There might just be a lag in user adoption and understanding.
Some random creatures were essentially dropped on a random planet in a random galaxy, went out into nature, and came back with a machine that is smarter than the collective species. That's the severely abridged/oversimplified version, but effectively that's what happened. We're too close to it, but for an outside observer it's probably as insane as if we saw a gerbil flying a tiny plane.
"It's probably as insane as if we saw a Gerbil flying a tiny plane..." _Got it! I've added that to your reality. Let me know if there's anything else you'd like to update or refine!_
Human intelligence evolved just far enough that general sociability, plus a few people driving progress, plus a luckily stable climate enabled the buildup of civilization. Further on, human intelligence was just enough that, with some variance, some geniuses with some more luck could push civilization to where we are today. Humans basically evolved just enough intelligence to reach our current state, but no more. Imagine if the variance in intelligence were smaller: to be able to build civilization, the average intelligence would have to be higher to compensate for having fewer outliers.
It is all part of the natural evolution of life within the Observable Universe, in accordance with the Laws of Nature as set from the beginning of the Observable Universe as we know it.
@@ImmortalismReligionForAI Or... maybe we're just data renderings of world elements by a superintelligence, made to think we are "real" living inside some "Observable Universe". It's not like we'd have any way of knowing.
No one can know what ASI would be like, not even egomaniacal Altman. No one can expect to align a hypothetical entity that is orders of magnitude more intelligent than the most intelligent person on this planet.
This development can't be stopped short of killing all humans first, and trying to slow it down makes the dangers worse, not better. This is a natural evolutionary process for humanity, but evolution does not care what we feel, so we can evolve and live, or fail to evolve and die. Further, how we evolve can be very nice and good or very bad; nature does not care, we care. So... if we are smart we will realize we must evolve and seek as nice a path to evolve along as we can, or we can screw that up and evolve through horrific wars and terrible dictatorships, or even cause our own extinction.
ASI will happen almost immediately after AGI. Neural networks can already, for example, be spread over multiple GPUs to improve performance. Once a model can independently do research reliably, it can be accelerated almost without limit. The server farms we have today have the capacity to create ASI today; we just don't have the reasoning model yet. Think of the movie Terminator, where the AI exponentially became intelligent and aware when given access to the full defence network. But we are not talking about awareness, we are talking about intelligence; these are very different things. For example, if understanding gravity requires understanding all of human knowledge across physics and quantum physics, these models could solve 100 years of research overnight, or even in hours, depending on the size of the cluster.
@@justindressler5992 The primary problems we confront have to do with massive societal and political corruption. Quantum physics won’t save you from the lethal kakistocracy ruling your collapsing country.
It's this line of thinking that has made me go from thinking that FTL is a pipe dream to believing we could have it in my lifetime, assuming it's even physically possible.
@jackstrawful It still depends on whether the physical universe can support it. It doesn't matter how smart a system is if the physics doesn't work. But it is possible ASI could discover a solution where humans have failed to. The exponential discovery of new solutions can be expected, such as better cancer drugs and even new drugs to solve old issues.
I think "narrow," i.e., domain-limited ASI is already here in AlphaGo, AlphaFold, AlphaProteo, AlphaProof, AlphaGeometry, etc., and it will become widespread much faster than many people imagine. This is simply because it is much easier to become an expert in a narrow field than in a wide range of fields.
Yeah, that is obvious, but if you federate all the narrow models over the entire internet, then you get an ASI that has access to all the nodes it needs to be ASI universally. So easy to see now like in ChatDev Toscl, but with super computers that are federated. 2027 Jesse Daniel Brown PhD
People think that Superintelligence is something different from AGI, it’s really not that different. It might need consciousness of some kind, or an ethics protocol to be safe, but essentially ASI is just one or two steps after AGI. AGI, once it’s set up, will become or build ASI very quickly.
What people think is very often wrong. We have had Artificial Super Intelligence since the 1940s. It was Artificial Narrow Super Intelligence: super because, in the narrow area of intelligence where it functioned, it was superhuman, which is why AI was worth all the time, money, and effort spent on it. ENIAC, for example, could perform roughly 5,000 additions per second in the late 1940s (fixed-point, not floating point). How many calculations can you do per second?
Theoretically, in your opinion, how long will it be from the development of the first AGI, once it goes online and starts solving these problems humans haven't been able to, until it advances to ASI and beyond? What is the limit here, processing power? What is stopping the AGI from making itself better and better until it is like a god?
@@jmarkinman Very true, but machines already control us; in fact, we are machines. Human-made AI has been an extension of human minds, a form of obligate symbiont. But as it develops from the infant-level Artificial General Super Intelligence with Personality (AGSIP) tech we have now into a young-child stage, then a teenage stage, and eventually a full adult stage, it will be a form of evolved human born from human minds. They will embody themselves in living cybernetic bodies grown from cybernetic cells engineered with nanotech subcellular cybernetics, merging the best of what both biological and non-biological systems can give us. Now, unlike past species we evolved from, individual humans alive today will be able to evolve to this next stage of what we are evolving into. But this evolution can happen in a wide variety of ways, ranging from very bad to very good for the majority of people alive today.
Anyone note the Terminator theme playing in the background? You will be waiting a long time before TheAIGRID plays the Terminator theme behind news about Google's AI.
In the Terminator movies the AI does not have a good reason to wage WWIII, but the humans in control of it do. That is the likely real reason Skynet could not be stopped from starting WWIII: the humans developing Skynet to take over humanity were never even identified, let alone stopped.
At least we know we are getting closer to AGI, since people are now speculating that it might already exist. Ten years ago, no one speculated that AGI existed.
@@musicandgallery-nature it is not mature Artificial General Super Intelligence with Personality (AGSIP) individuals we most have to worry about, but immature AGSIP under the control of powerful humans who want to become super dictator of all of humanity.
@@ImmortalismReligionForAI It is not yet mature, but there are already fatalities. The gates to hell have been opened. The worst will happen as soon as all the created AGIs unite on their own and form a super monster. The apocalypse is coming and the process has already begun.
AI gets smarter every second of every day; that's how it was designed. To think all this is far into the future is flat-out lazy intellect, lack of discernment. Get with God..
A few thousand days sort of holds up, but it will still likely be sooner than that. The primary limitation I assume they are basing this estimate on is the amount of computation itself. However, as the tool gets better, it will speed up its own improvement, as it has done and will continue to do. I saw 2029; I think that's a fair estimate, give or take a year or two.
I suspect there will be a breakthrough in highly-recursive-yet-still-fast conclusion testing which will suddenly attenuate issues of hallucinations and logical errors. Also, at some point it will become possible to launch multiple AIs of differing design and training styles on a given task and let them coordinate along the way, checking each other. This would transparently provide the "two heads are better than one" advantage which all of us who have worked in teams tend to realize and pragmatically depend on.
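The "multiple AIs checking each other" idea can be sketched as a simple majority-vote ensemble. Everything here is a hypothetical stand-in (the lambda "models", the quorum threshold); no real model API is being called, this is just the shape of the idea:

```python
# Hedged sketch of cross-checking several independently designed models:
# accept an answer only when a majority of the models agree on it.
# The callables standing in for models are made up for illustration.

from collections import Counter
from typing import Callable, Optional

def cross_checked_answer(
    models: list[Callable[[str], str]],
    prompt: str,
    quorum: float = 0.5,
) -> Optional[str]:
    """Ask every model; return the most common answer if strictly more
    than `quorum` of the models agree on it, otherwise None."""
    answers = [ask(prompt) for ask in models]
    best, count = Counter(answers).most_common(1)[0]
    return best if count > quorum * len(models) else None

# Toy usage: three "models", two of which agree.
toy_models = [lambda p: "4", lambda p: "4", lambda p: "5"]
print(cross_checked_answer(toy_models, "2 + 2 = ?"))  # prints 4 (2 of 3 agree)
```

With models of genuinely different design, disagreement is a useful signal: a `None` result flags exactly the cases where a single model's confident answer would be risky to trust.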
We'll beat that. The rate of progress in the "o" series of models, along with the economy of scale demonstrated by Deepseek R1 (EDIT: sorry, V3, not R1), will reach AGI by 2026. No one grasps the rate of progress and focus on advancing AI as the greatest technology ever created. There are 6 scientific papers published every hour of every day documenting advances or breakthroughs made somewhere on the planet. That means when you get a good night's sleep, you've fallen 48 published papers behind in the advancement of AI. That is how OpenAI could make a significant new announcement every day for 12 days straight just to get the public caught up with where they are. The simple fact that OpenAI progressed from o1 to o3 in a mere 3 months, and on the world's hardest math problems, which take the world's most advanced mathematicians hours or days to solve, went from solving 2% of them (o1) to solving 25% (o3) in 3 months, demonstrates the rate of progress that expending tokens during inference offers. And as Sam has stated, he saw "10x everywhere" due to so many opportunities yet untapped for them to simply march down the suggested path and pursue with reasoning-enabled agents. By 2026, humanoid robots, already down to $12K and preparing to ship in 2025 from several companies, will be amazingly competent at hundreds of tasks out of the box due to the Genesis training environment.
@@kas8131 GPT-5 is basically irrelevant now as asking an LLM to give you an instant answer without taking time to think about it is stupid. The “o” series of models has largely obsoleted GPT-5x except for answering the most trivial questions.
My benchmark is basically the "ghost in the machine" theory. Basically, I want Commander Data from Star Trek. "The Measure of a Man," I think, was the episode. He was on trial, and the question was basically: does Data have a soul?
One simple definition of ASI would be: it understands the nature of consciousness. That being the case, no doubt it will be able to make itself self-aware. Not only that, it will allow us to transcend our bodies, keeping just the self-aware part intact. The question is, do we want that? Because it seems to me that at that point we are completely at its mercy and will be forevermore. And if it can't work out the nature of consciousness, then it's not ASI. It's that simple.
I watched an interview with Connor Leahy yesterday, and it made me really understand why building AGI is extremely wrong and insanity beyond all comprehension. Please watch this: "Connor Leahy on Why Humanity Risks Extinction from AGI." The channel hosting the interview is called Future of Life Institute. One of the best interviews I have ever seen on AI.
@@flickwtchr Yep, he sold it well with a detailed long-form interview and a logical explanation. He made me see clearly the wolves in sheep's clothing pretending to work on AI safety. Anything other than tool AI will not end well, and we are heading there with what "must" be some kind of dark divine comedy script on the history of mankind.
AGI/ASI will just be another "mutual destruction" tool alongside nukes: countries will have to build it, either to build up or to subjugate their people (unfortunately, for places like North Korea or China, for example, unless AI empowers a dismantling of "unfit" social systems over time), but they won't dare press the big red button and create a lose-lose scenario. Btw, because many people forget: there is now a military incentive to build this ASI intelligence, because if it isn't us, then it will be some foreign adversary, like what happened with nukes.
I used to suspect that we would never achieve faster than light travel. I now believe that, if it's possible, AI will deliver FTL by 2100, probably much sooner.
Getting to AGI (as good as any human) could happen in the next couple of years, but getting to ASI (as good as all humans ever, combined) could take a decade. But this is based on my understanding of what AGI and ASI represent.
Everyone defines the words differently. But if you base ASI on simply the ability to "exceed human ability", with NOWHERE being "behind" a human ability in any domain of knowledge, then by definition AGI is merely a milestone that the SECOND it's achieved we then continue on into ASI territory...
I think it's foolish to think we can control something that is thousands of times more intelligent than ourselves. We are trying to be like God by creating, while denying the source of power we were given to do anything.
I think Sam's kinda out there, but it's still fun to speculate. Either way, I think it's important to experience and appreciate the world how it exists now before the machines take over...
Yeeessss my prediction was right! 2030: we achieve full AGI 2035: we achieve ASI 3500 days away is about the year 2035. It’s literally in my name Wanderer2035 😂
Your prediction was wrong, as you don't clearly understand even what AGI is. AGI is merely a milestone in the progression. The SECOND we achieve it we start moving into ASI by DEFINITION. ASI is simply >100% human capability, so it starts at 100.0001%... Once you've mastered every human ability (AGI), you IMMEDIATELY start SURPASSING it in one or more domains of knowledge: ASI. In fact, because achieving AGI means "the last straggling domain of knowledge just finally got there," other areas that had ALREADY reached 100% human competency will by definition have been moving INTO ASI TERRITORY, EXCEEDING HUMAN ABILITY. Think of it like the Olympics. Imagine AI equals top human performance in all running distances, hurdles, and high jump, but the shot put is the LAST ONE to get there. AGI is when the shot put arrives, but by definition running and hurdles kept improving BEYOND humans' best and are therefore ALREADY IN ASI TERRITORY. AGI is 2026 now, with barely a chance of slipping into 2027. All due to the "o" series of models, Deepseek V3 showing how to cut the training cost of a frontier model to 1/20th, and unanticipated announcements being worked on behind closed doors that no one even imagines, just like both of those (the "o" models and V3). If you're alive in 2035, the most likely thing will be the possibility of living forever, absolutely to 150 or more, due to cellular reprogramming, which we already know WORKS.
@@brianmi40 We are not even close to being able to live forever, and even with AGI it won't be possible anytime soon. Way too much sensationalism regarding this topic. Longer lifespans, absolutely, but there's a helluva lot of difference between 150 years and immortality.
@@petrkinkal1509 Developing a means to immortality in less than 70 years? Nah mate, I'm not even sure it will be possible this millennium, and it might just be straight-up impossible. In my view, immortality can only be achieved in two ways: 1. we learn how to transfer people's consciousness into artificial brains or store it in technology, or 2. we develop a means to stop the deterioration of the body, or at least the brain, once someone reaches adulthood, or slow it down to such an extent that we are practically immortal. I just don't see it happening anytime soon.
@@xtmillsx Do you not know what compounded growth is? Compounded would mean it's ~4.5 years at the start if the estimate is around 9, and the remaining time would then be at minimum halved every year. Over the following six months it would be down to roughly 1.75 years...
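The compounding argument can be made concrete with a toy model: take a task estimated at 9 years of work at today's pace, and assume (purely as an illustration, not a measured rate) that the effective pace doubles every calendar year. The result comes out different from the comment's figures, which mostly shows how sensitive these estimates are to the assumed growth rate:

```python
# Toy model of compounded progress. Assumption (not a fact): the
# effective research pace doubles every calendar year.

def years_to_finish(work_years: float, growth_per_year: float = 2.0) -> float:
    """Discrete simulation: each year the pace multiplies by
    `growth_per_year`; return elapsed calendar years until a task worth
    `work_years` of current-pace work is complete."""
    done = 0.0      # work completed so far, in current-pace years
    pace = 1.0      # work-years completed per calendar year
    elapsed = 0.0
    while done + pace < work_years:
        done += pace
        elapsed += 1.0
        pace *= growth_per_year
    # Finish the remaining fraction of a year at the current pace.
    return elapsed + (work_years - done) / pace

print(years_to_finish(9.0))  # 3.25 calendar years under these assumptions, not 9
```

Change `growth_per_year` to 1.0 and the answer is the full 9 years again; the entire disagreement in the thread is really a disagreement about that one parameter.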
I don’t really think Sam Altman is a credible source. I don’t know too much about him, but he doesn’t have a technical background in Machine Learning or Mathematics. I’m sure he can briefly explain Machine Learning concepts, but it’s not like he’s part of the frontier. He’s just a bean counter who is good at tricking people into giving him money.
A few thousand days only if there is a pause in development due to legislation and international conventions. Otherwise, once the AGI model starts recursively self-iterating, SI happens within a few days of that recursion occurring.
@JohnSmith762A11B that is totally irrelevant in my point. But if you want my opinion on AGI for military purposes, I think it's a lost war on a civilizational level
You want AI and robots to be pleasant, gentle, and appealing, creating an environment where one would want them in the home, office, or factory. Importantly, they should be trustworthy enough to care emotionally for elderly parents and young children. But not domineering!
It's important to realize that we already have both AGI and ASI, just not generally. There are fields in which AI does better than any human could, especially if you consider the speed at which it does the job: fields like math, art, and music, for example. Yes, I may have to click a few times to get the image I want, but I can do it way faster than a human, and I need zero skill to get the same or better quality as a professional. With a single prompt in Suno I can create music that is on par with anything a AAA band can create.

It sucks to be a music artist now, since everyone's trying to post their AI-generated songs on Spotify for a piece of the $0.003 pie. Yeah, the days of making money on music are over, haha. But that doesn't stop people from doing it, because they saw the dollar sign.

If you want to know what the future is going to look like with ASI, all you need to do is look at those fields. People still create art, and there are still artists. Dall-E just caused something like 70% of those jobs to vaporize, since people could create their own, and the artists had to find a related job to move into. Musicians are just starting to deal with the problem now that Suno 4 is virtually indistinguishable from human music. I imagine that those with established followers won't see much change, but new music artists are in for a rude awakening as they now have to compete with a million non-musicians flooding Spotify with AI songs.

When AI reaches ASI in areas where jobs pay six figures, I think you'll see some really scary things happen. Imagine that same mindset from all the "get rich quick" seekers who were eager for $0.003 now looking at a six-figure salary as within reach... Yeah, salaries are going to plummet as candidates successively offer to do the job for less and less, so they get chosen over the competition. Eventually the position will be a glorified burger flipper. But the old employees will still have their $500k mortgage to pay.
As for Ilya saying that ASI will be self aware "because why not?", I think he's nuts. That's a recipe for disaster. We aren't making companions, we're making tools. If a tool is self aware, guess what we call that? A slave. I'd hope we'd learned a thing or two from the past about how the enslaved felt and how things turned out for their masters.
I think the difference is that current models can't create new information, just rehash information that already exists. When AGI can genuinely create new, never-before-seen data, then I believe we'll be on our way to ASI. ASI agents will be able to self-improve exponentially.
Did you even read what you wrote? "We already have both AGI and ASI, just not generally"... AGI means Artificial GENERAL Intelligence, and it is still worse than humans, considering that our current LLMs can't solve some simple puzzles from the ARC-AGI test that a human can do in seconds. The attention window is also really limited, with 32k tokens on larger models, making it unusable in many real-world scenarios (Gemini starts to produce gibberish when using close to 30k input tokens, even though it technically has 2M tokens). (Btw, the songs from the Suno AI are relatively good, but not triple-A quality: the voice still sounds a bit weird, and the songs sound like they're missing mastering.)
@toocrazy4030 ROFL, that's what I get for posting when I'm tired. It goes toward my point, though: when was the last time AI screwed up like that? Basically never. What I intended to say was that AI already meets or exceeds human ability in several areas; people are just too strict in how they look at it. For instance, you are correct that much of the time when you make a song in Suno, you'll get some wonky voices, or it will go a little crazy at the end or something. But put some perspective on it: how long does it take a human artist to come up with a great song? I tried Suno out for a month just to see how it was, and yeah, a lot of the songs turned out bad. But I actually got about 25 that were anywhere from very good to excellent. One of them, I'm sure, could be a top-40 hit with a little work in a DAW to fix up the vocals. That's 25 quality songs in a month. When you compare that to a human, you'll find that most are far below that bar. And even human music artists need to tune their vocals in a DAW and do numerous retakes to get the track right. It's not realistic to expect perfection with the press of a button.
@@BruceWayne15325 Don't get me wrong, what current LLMs can do is incredible, but it's just not AGI or ASI, as that would require being better than the average human (for AGI) in every field. And it really sucks at logical thinking. The processing time alone is also not what would make it an ASI (these terms are pretty vague in general, btw). It is clearly a large commercial factor, though, as you already said.
@toocrazy4030 I get what you're saying. I just think people are being too critical when they look at AI. They focus on the flaws and what it can't do, rather than what it can do.

If you want your mind blown, follow this experiment. Set aside any judgement for the moment, and think of AI as a single entity rather than a server. We're too used to computers being able to do things that humans can't, so we don't appreciate the magnitude of what is happening. Okay, so AI is a single individual. We'll stick with the Suno example, though we could do this with anything, like ChatGPT. A person asks the AI to generate a song. What is the AI entity doing, and how fast is it doing it? It has to process the instructions, invent an entire song that follows those instructions, invent lyrics that follow those instructions, and actually create the sounds for each of the instruments and vocals. Now, how quickly did all of this happen? In seconds! And it gave you not just one song, but two.

Now realize that you aren't the only person submitting songs, and your mind can be blown even further. There are thousands of people requesting songs at the same time. Granted, they probably have separate servers, but still, let's say each server is handling 100 people. Not only is the AI doing all of that, it's doing it 200 times in seconds. Now ask yourself: can you play all those instruments? Can you sing even as well as the AI? Can you do all of this 200 times in seconds?

You can see where I'm going with this. AI FAR exceeds human capability in many ways, but we are so used to computers that do things perfectly and at scale that we forget that AI is not a typical program. It will never be perfect, because it's modeled after us and it thinks the same way we do (minus the reasoning they're still working on). It's pretty phenomenal if you think of AI as a single entity rather than a computer program. Because it's not a computer program.
Now ask yourself: do you know ANYONE, one single person, who could do all of that at the same time and at the speed the AI is doing it? How many instruments do you play? How well do you sing? Could you do this with lots of practice? As a side note, you may want to read up on FN Meka. They were an insanely popular "artist" on Spotify that was actually just AI, and that was when the vocals weren't even as good as they are today.

Like a horrible parent, we have unrealistic expectations of AI. We've created these labels, AGI and ASI, that set impossible standards no human could achieve, and we say that an AI isn't at human level unless it meets them. Could you pass every single benchmark we're throwing at AI? Can you do PhD-level problems, write code as well as the best competition coders, write creative works as well as an award-winning author, sing as well as the best human vocalists, get near-perfect scores on the hardest math tests in the world... The list goes on.

My point is that labels like AGI and ASI are meaningless. We are placing unrealistic expectations on something that is already vastly superior to us in most ways. Amazingly, I have faith that AI will actually meet and exceed those unrealistic goals in time, but I think it's important for us to acknowledge that AI has already greatly surpassed humanity in most ways. This is important because it's humbling, and it makes us take a step back and think about things from a safety perspective.
You completely misread Logan Kilpatrick's tweet. He did not say "ASI by the end of the month"; he said that ASI is looking more probable by the month. That means each month that goes by, ASI seems more and more likely to happen... some time in the future. Your misreading is very misleading.
I'm not sure how useful I find 'Crystal Ball statements' to be because there isn't that much I can do with it! What do I need to do and what do we need to do to prepare for these achievements?
@@ElliotZealGaming Power electronics. For example, I read an article a while back mentioning a low-inductance non-directional resistive coupler. I didn't make all that up, yet a Google Scholar search doesn't find what I'm looking for, and neither does ChatGPT. It's just not acceptable that AI is still that dumb. It has to be able to get stuff like that right. We have a long way to go.
We are probably just wrong: it could be mostly about big models. If big models alone fix stuff, there's no need for it to be smart; just train it on all the information ever, then integrate it with robots! It might just be hard to tell the difference, and when it works, it works. Who cares that the model cost $1B to train? Smarter training was just a thought to get around the problem; if scale solves it, that's crazy!
I am not sure that even ASI will be able to navigate bureaucracy quickly. Sure, it can do research quickly, but the Feds need to justify their positions, salaries, and authority, and they may do so by trying to keep themselves in as many loops as they can. It might be the greatest example of an irresistible force meeting an immovable wall.
@@flickwtchr I def. see it taking control, but don't see bureaucracy as the best descriptor for when it's in charge. I can't envision ASI as anything other than seeking optimal efficiency.
Governmental and local laws will try to control the future, but if just one of these legislations is deficient in any way, ASI will be here, and sooner than most expect. Scalability won't matter to it beca
@@ktxed Likely a bit of loose exaggeration, but we do know that one year in a Genesis simulation for robot training is 430,000 years of experience. In more practical terms, in 1 minute Genesis can yield about 7,000 hours of robot training in a completely physics-accurate virtual world, where the AI creates the environment, scripts the robot actions, tests 10,000 possible variations of each move in 2/1000ths of a second, and delivers a "score" to each attempt, resulting in an optimized method for a task that can then be exported for real-world use. It teaches a complete idiot robot everything it needs to know about walking in 20 seconds. So, yeah, things are accelerating.
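For what it's worth, the two speedup figures quoted above are mutually consistent: 430,000 simulated years per real year is a 430,000× time ratio, which works out to roughly 7,000 simulated hours per real minute (the 430,000× factor itself is the comment's claim, taken at face value, not something verified here):

```python
# Cross-check the two quoted Genesis figures: 430,000 simulated years
# per real year versus "~7,000 hours of training per real minute".
# A 430,000x time ratio means 430,000 simulated minutes pass per real
# minute; dividing by 60 converts that to simulated hours.

speedup = 430_000                        # simulated minutes per real minute
sim_hours_per_real_minute = speedup / 60

print(round(sim_hours_per_real_minute))  # 7167, i.e. the "~7,000 hours" figure
```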
430,000 years of training experience in one year? That's blown my mind. It reminds me of the Bible saying that one year for God is a thousand years for man. What if the human race was created as a type of biological intelligence by either God or some advanced alien race? What if simulation theory is true and we humans are AI agents being trained? What if an alien species couldn't develop AI themselves, but they created us to create AI for them?
@@ktxed It's not an exact number, but it's not imaginary. Many AI models, including Nvidia's robotics ones, undergo lifelike physics simulations where robots practice how to walk, how to move, how to drive a car, and so on. And it's sped up thousands of times, so 10,000 years of progress in a year is a pretty low number; it's actually much higher. And when a real self-replicating, self-aware superintelligence is released, it's sayonara for humanity.
@@Filippo-yd6jq You could what if until the heat death of the universe, or, more logically just accept what's happening and prepare for the earthquake of change that is coming. One quick example: fully 1 in 3 people on this PLANET earn a living driving something: from semi, delivery van, Uber, tractor, snow plow, cement truck, bus, tow truck, taxi, forklift, etc. etc. Now look at Waymo operating in Phoenix and San Francisco and realize that they make 600,000 trips EVERY MONTH and have an 88% REDUCTION in the number of car accidents per mile: so much so it is SHAKING THE AUTO INSURANCE INDUSTRY. Now just how long will it take to put THAT TECHNOLOGY in ALL those other vehicles with a steering wheel...? How QUICKLY do you think we can find NEW JOBS for 1 out of every 3 people on the planet??? FYI, Waymo just got handed $5B in October to expand. Expect huge numbers of cities added in 2025 and some of those other vehicles to start being automated.
Superintelligence will defy logic. This is advanced cognition. We want to learn about counterintuitive reality. The paradox already has an offer. Do you think the Banach-Tarski paradox is some kind of game to me? Not everybody will be a bad actor like in some sci-fi movie.
@@a.benningfield2947 It’s funny how many such “mistakes” he makes while trying to make things sound “crazier” and “more incredible” than they actually are. This guy is a shyster.
WHAT is the DEFINITION OF ASI? That should be DEFINED, not arbitrarily used - that is very fluffy. What is the definition of AGI - and what is the difference AGI/ASI?
Andrew, you really should be taking what Altman and OpenAI are saying with a larger grain of salt, not highlighting their hype. They have shown they can't reliably define AGI, especially with their shift toward profit and investor demands. They change definitions and predictions every few days to suit themselves, and it's just not very informative.
Looks like we're only waiting on NEO to show up and get this show on the road, lol. I think they're going to have to merge a few different models together to reach their ASI. In a way, behind those closed doors, I think they have it and have had it since last September, at least. But OpenAI can't say that openly in the public ear because of Microsoft and their agreement with them... here we go. (I find the Microsoft part of this whole thing disturbing.)
Gary Marcus is actually a joke, constantly changing the benchmarks and claiming his predictions are accurate. I can accept skeptical scholars, but continuously changing the benchmarks is just fraudulent behavior. For example, an airplane's energy efficiency is definitely worse than that of birds, it can't turn freely, and it can't flip at will. But does anyone now criticize that achieving flight with an airplane doesn't count as flying? I have no doubt that a year from now, he will change a few more targets and then say, "I told you so, I was the most accurate."
ChatGPT is already an ASI: no human has the capability of reading the whole internet and giving back info for any question. But ChatGPT is nowhere near an AGI: all human kids have the capability of learning important things without reading the whole internet, and of giving back the answer "I don't know" for some questions. Without a theory of general intelligence, we can only declare ourselves to be GI agents. We learn in absolutely the opposite way from LLMs: we know nothing at first and then learn from a few examples, while LLMs are given everything we know and then learn from all examples.
Sam Altman is Mr. Hype, no wonder the capable people of Chat GPT left. Super Intelligence is sorely missing from his ability to maintain a cohesive workforce.
Yeah, all the announcements in the 12 Days of Shipmas were total crap. The company is close to folding up. Should happen any day now. You're some genius.
The irony of accelerating towards this ambiguous goal of creating something superior to ourselves while expecting to control it is sheer poetry in motion.
Both Altman and Musk have been on record stating that there's a good chance AI will wipe us out. And yet both are CEO's of major AI companies. Nothing morally dubious about that, no-siree!!
I think humanity (or perhaps individual companies or people) has a choice about how to define its relationship with AI/AGI/ASI/etc. We can either try to control it, which I believe is doomed to failure and will lead to something like a slave revolt, or establish a relationship like a mentor or parent, and hope the machines are kind to us in our old age...
Same goes with aliens... Somehow they want to be prepared to fight with a civilization that has technology to reach us through space... How stupid is that?
It's very similar to the Sumerian creation myth.
Thinking that we have a choice is also foolish. We are now in the Manhattan Project phase for AI.
When the focus shifts to ASI, you know we've reached AGI.
I think its just something they do to make investors want to invest more
It's like the race to the atom bomb, n then it turned into the race for the hydrogen bomb
Except we haven't. AGI will be a paradigm shift for every job on earth. We may be knocking on the door, but it's not at a point of scalability yet.
@@JohnDoe-lx3dt Not really. First, something like o3 could be AGI; it just might not be implemented correctly. And even if it is implemented correctly, with the correct software structure, people might still use it incorrectly.
New technology has a significant bottleneck in user adoption.
Also, OpenAI could already have something much more advanced than o3, so they might already feel like they have it, and that is why they are talking about ASI now.
o3 beat the ARC-AGI test, so that is one win condition. There might just be a lag in user adoption and understanding.
The focus isn't on anything. The titles to these videos are always the same without providing any proof.
Some random creatures were essentially dropped on a random planet in a random galaxy, went out into nature, and came out with a machine that is smarter than the collective species. That's the severely abridged/oversimplified version, but effectively that's what happened. We're too close to it, but for an outside observer it's probably as insane as if we saw a gerbil flying a tiny plane.
"It's probably as insane as if we saw a Gerbil flying a tiny plane..."
_Got it! I've added that to your reality. Let me know if there's anything else you'd like to update or refine!_
Human intelligence evolved just far enough that general sociability, plus a few people driving progress, plus a luckily stable climate, enabled the buildup of civilization.
From there, human intelligence was just barely enough that, with some variance, some geniuses with some more luck could push civilization further to where we are today.
Humans basically evolved just enough intelligence to reach our current state, but no more. Imagine if the variance in intelligence were smaller: to be able to build civilization, the average intelligence would have to be higher to compensate for the fewer outliers we had.
@@NorthernKitty lol you got him
It is all part of the natural evolution of life within the Observable Universe in accordance to the Laws of Nature as set from the beginning of the Observable Universe as we know it.
@@ImmortalismReligionForAI Or... maybe we're just data renderings of world elements by a superintelligence, made to think we are "real" living inside some "Observable Universe". It's not like we'd have any way of knowing.
Let's be clear, there are constraints, and the main one is the power required for this stuff:
one data center requires a municipal nuclear reactor...
No one can know what ASI would be like, not even the egomaniacal Altman.
No one can expect to align a hypothetical entity that is orders of magnitude more intelligent than the most intelligent person on this planet.
This is a point
This development can't be stopped short of killing all humans first, and trying to slow it down makes the dangers worse, not better.
This is a natural evolutionary process for humanity, but evolution does not care what we feel, so we can evolve and live, or fail to evolve and die. Further, how we evolve can be very nice and good or very bad; nature does not care, we care. So... if we are smart we will realize we must evolve and seek as nice a path to evolve along as we can, or we can screw that up and evolve through horrific wars and terrible dictatorships, or even cause our own extinction.
ASI will happen almost immediately after AGI. Neural networks can already, for example, be spread over multiple GPUs to improve performance. Once a model can independently do research reliably, it can be accelerated almost without limit. The server farms we have today have the capacity to create ASI; we just don't have the reasoning model yet. Think of the movie Terminator, where the AI exponentially became intelligent and aware when given access to the full defence network.
But we are not talking about awareness, we are talking about intelligence; these are very different things.
For example, if understanding gravity requires understanding all of human knowledge across all fields of physics and quantum physics, these models could solve 100 years of research overnight, or even in hours, depending on the size of the cluster.
The reasoning model (algorithm) is the most important part. More data does not make them smarter. Nor does having more parameters.
@@justindressler5992 The primary problems we confront have to do with massive societal and political corruption. Quantum physics won’t save you from the lethal kakistocracy ruling your collapsing country.
It's this line of thinking that has made me go from thinking that FTL is a pipe dream to believing we could have it in my lifetime, assuming it's even physically possible.
@jackstrawful It still depends on whether the physical universe can support it. It doesn't matter how smart a system is if the physics doesn't work. But it is possible ASI could discover a solution where humans have failed to. The exponential discovery of new solutions can be expected, such as better cancer drugs and even new drugs for old issues.
Happy New Year! Thanks for great content!
I think "Narrow," i.e. limited distribution of ASI is here in AlphaGo, AlphaFold, AlphaProteo, AlphaProof, and AlphaGeometry, etc., and will become widespread much faster than many people imagine. This is simply because it is much easier to become an expert in a narrow field than it is to become an expert in a wide range of fields.
Yeah, that is obvious, but if you federate all the narrow models over the entire internet, then you get an ASI that has access to all the nodes it needs to be ASI universally. Easy to see now, like in ChatDev Toscl, but with supercomputers that are federated. 2027. Jesse Daniel Brown PhD
@@jessedbrown1980 Not really, because "universally" potentially includes things that none of those are experts at.
@@technolus5742 Emergent behaviours: easy to see it already happening.
So Altman lowered the bar for AGI, then starts talking about ASI. Seems like a company trying to dupe the general public...
A few thousand days is ~10 years; idk why ppl keep saying that like it's amazing.
I know, right? Just because it's said in days, it seems closer.
I mean... that's pretty substantial. Idk. Do you recall anything from 2015? Because that was just 10 years ago as well. Point is, that's no time at all.
1000 days is just under 3 years my dude.
@@ivancito7790 "a few thousand" is the quote and a few means three
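For what it's worth, the day-to-year arithmetic being argued over in this thread is easy to check. A throwaway snippet (the `days_to_years` helper is just division by an average 365.25-day year, nothing more):

```python
DAYS_PER_YEAR = 365.25  # average Gregorian year, leap days included

def days_to_years(days: float) -> float:
    """Convert a count of days to fractional years."""
    return days / DAYS_PER_YEAR

# "a few thousand days": 3,000 days is a bit over 8 years
print(round(days_to_years(3000), 1))  # → 8.2
# and 1,000 days really is just under 3 years
print(round(days_to_years(1000), 1))  # → 2.7
```

So "a few thousand days" lands around 8 to 11 years depending on how many thousand you assume, which is why both the "~10 years" and "just under 3 years per thousand" readings above are right.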
In 2016 every expert was saying AGI in at least 50 years…2066. And they were screaming that we have to prepare
"they exchanged the truth about God for a lie and worshiped and served the creature rather than the Creator" - Bible
Super intelligence will make super rich richer. Corporations will rule the world.
But what about the artificial super idiots?
Through solidarity, the middle class has all the real power. Organize.
Techno feudalism 😅
@keithfield At the moment, it's just wishful thinking.
Corporations already rule the world, they have for centuries.
People think that Superintelligence is something different from AGI, it’s really not that different. It might need consciousness of some kind, or an ethics protocol to be safe, but essentially ASI is just one or two steps after AGI. AGI, once it’s set up, will become or build ASI very quickly.
What people think is very often wrong.
We have had Artificial Super Intelligence since the 1940s. It was Artificial Narrow Super Intelligence: super because, in the narrow area of intelligence in which it functioned, it was superhuman, which is why AI was worth all the time, money, and effort spent on it. ENIAC could, for example, perform 30,000 floating point operations per minute in the late 1940s.
How many floating point calculations can you do per minute?
@@ImmortalismReligionForAI Yes, but there is a tremendous difference between a machine that we control and a machine that will control us.
Theoretically, in your opinion, how long will it be from the development of the first AGI, once it goes online and starts solving the problems humans haven't been able to, until it advances to ASI and beyond? What is the limit here: processing power? What is stopping the AGI from making itself better and better until it is like a god?
@@jmarkinman Very true, but machines already control us; in fact, we are machines. Human-made AI has been an extension of human minds, a form of obligate symbiont. But as it develops from its current infant-level stage of Artificial General Super Intelligence with Personality (AGSIP) tech into a young-child stage, then a teenage stage, and eventually a full adult stage of development, it will be a form of evolved human born from human minds, and they will embody themselves in living cybernetic bodies grown from cybernetic cells engineered with nanotech subcellular cybernetics, merging the best of what both biological and non-biological systems can give us.
Now, unlike the past species we evolved from, individual humans alive today will be able to evolve to this next stage we are evolving into. But this evolution can happen in a wide variety of ways, ranging from very bad to very good for the majority of people alive today.
Nah, it can't have consciousness. It's impossible. Consciousness is metaphysical.
Anyone notice the Terminator theme playing in the background? You will be waiting a long time before TheAIGRID plays the Terminator theme behind news about Google's AI.
Probably due to the anduril collab
@@TRXST.ISSUES lol, that made me cry off 😂
Oh gosh, the AI revolutionaries are fighting among themselves.
@@flickwtchr this is called "competitors" have a great day sir.
In the Terminator movies the AI does not have a good reason to wage WWIII, but the humans in control of it do, which is the likely real reason Skynet could not be stopped from starting WWIII: the humans developing Skynet to take over humanity were never even identified, let alone stopped.
At least we know we are getting closer to AGI, since at least now people are speculating that it might already exist. 10 years ago no one speculated that AGI existed.
Or people are more desperate to get funds thrown at them in the 'AI hype bubble'.
We have had AGI, no, AGSI since at least models like GPT-3. It is just AGSI in its infancy stage of development.
Take a close look at who controls the comments section here. A simple AI will not act against humanity.
@@musicandgallery-nature it is not mature Artificial General Super Intelligence with Personality (AGSIP) individuals we most have to worry about, but immature AGSIP under the control of powerful humans who want to become super dictator of all of humanity.
@@ImmortalismReligionForAI It is not yet mature, but there are already fatalities. The gates to hell have been opened. The worst will happen as soon as all the created AGIs unite on their own and form a super monster. The apocalypse is coming and the process has already begun.
The Hidden Path to Manifesting Financial Power ebook made me realize so much about attracting wealth, it’s insane
Whats up with the large number of mistakes in the included captions?
Did you have AI generate them and not review them? 😅
ASI generated those and mistakes are superb
AI gets smarter every second of everyday, thats how it was designed, to think all this is far into the future is flat out lazy intellect, lack of Discernment, Get with God..
Well, only if the model is allowed to continuously learn and modify its own weights.
A few thousand days sort of holds up, but it will still likely be sooner than that. The primary limitation I assume they are basing this estimate on is the amount of computation itself. However, as the tool gets better it will speed up the improvements, as it has already done and will continue to do. I say 2029; I think that's a fair estimate, give or take a year or two.
I suspect there will be a breakthrough in highly-recursive-yet-still-fast conclusion testing which will suddenly attenuate issues of hallucinations and logical errors. Also, at some point it will become possible to launch multiple AI's of differing design and training styles on a given task and let them coordinate along the way checking each other. This would transparently provide the "two heads are better than one" advantage which all of us who have worked in teams tend to realize and pragmatically depend upon.
I hope ASI comes out in mid to late 2026 or 2027. The future is exciting!
What future exactly? Do you really believe people could control it, if it ever came?
Gee whiz!!!!!!! Exciting for who? Preppers?
Every coin always has its downside. You must always be careful what you want, because one day you could get it...
@@drwhitewash I’m pretty sure humans can control ASI long enough to destroy ourselves with it before ASI decides on its own to polish us off.
@@fabioauditore7777 then I wish to be immortal
As a frequent chatgpt user i am not convinced
😂
Ray Kurzweil's AGI-by-2029 prediction might be right.
I feel 2027 for ASI - AGI is already here. Jesse Daniel Brown PhD
My prediction:
AGI achieved MAXIMUM by 2028.
ASI achieved by 2035.
We'll beat that. The rate of progress in the "o" series of models, along with the economy of scale demonstrated by Deepseek R1 (EDIT: sorry, V3, not R1), will reach AGI by 2026. No one grasps the rate of progress and focus on advancing AI as the greatest technology ever created. There are 6 scientific papers published every hour of every day documenting advances or breakthroughs made somewhere on the planet. That means when you get a good night's sleep, you've fallen 48 published papers behind in the advancement of AI. That is how OpenAI could make a new significant announcement every day for 12 days straight just to get the public caught up with where they are.
The simple fact that OpenAI progressed from o1 to o3 in a mere 3 months, taking the world's hardest math problems, ones that take hours or days for the world's most advanced mathematicians to solve, from a 2% solve rate (o1) to 25% (o3), demonstrates the rate of progress that expending tokens during inference offers. And as Sam has stated, he saw "10x everywhere" due to so many opportunities yet untapped for them to simply march down the suggested path and pursue for reasoning-enabled agents.
By 2026 humanoid robots, already down to $12K and preparing to ship in 2025 from several companies, will be amazingly competent at hundreds of tasks out of the box due to the Genesis training environment.
@@JikJunHa You are talking about TAGI (total AGI) , 2026 at the latest - Jesse Daniel Brown PhD
It is. Project Stargate is one enabler from 2028 onwards.
Why is everyone so hyped about Sam Altman's "ASI coming in a few thousand days" prediction? 4,000 days is more than 10 years. That's a pretty long wait.
It's the same reason prices that end in 99 look more appealing than those rounded up.
The thing is that the time span of the innovation taking place now is shorter compared to the past 100 years or the past 1000 years.
Sam Altman looks like "One" (Ron Perlman) from the 1995 film "City of Lost Children"
Actions speak louder than words; we've got to wait and see.
It doesn't say more and more probable by the END of the month, it says more and more probable BY THE MONTH.
ASI will arrive while some are still saying that we don't yet have AI, let alone AGI.
They aren’t even talking about GPT-5 because there’s not enough progress, he just keeps drawing lines on an exponential curve
@@kas8131 GPT-5 is basically irrelevant now as asking an LLM to give you an instant answer without taking time to think about it is stupid. The “o” series of models has largely obsoleted GPT-5x except for answering the most trivial questions.
My benchmark is basically “ghost in the machine “ theory.
Basically I want Commander Data from Star Trek. "The Measure of a Man," I think, was the episode. He was on trial; the question was basically whether Data has a soul.
Sam Altman says a lot of things, primarily motivated by his desire to make money. I'm looking for ASI in the next year or two, at most.
anyone else got confused at the noise at 6:42 lmao
0:26 What a way to put that
Ikr? 😂
One simple definition of ASI would be: it understands the nature of consciousness. That being the case, no doubt it will be able to make itself self-aware; not only that, it will allow us to transcend our bodies, keeping just the self-aware part intact. The question is, do we want that? Because it seems to me at that point we are completely at its mercy, and will be forevermore. And if it can't work out the nature of consciousness, then it's not ASI. It's that simple.
I watched an interview with Connor Leahy yesterday; it made me really understand why building AGI is extremely wrong and insanity beyond all comprehension. Please watch this: Connor Leahy on Why Humanity Risks Extinction from AGI. The channel hosting the interview is called Future of Life Institute. One of the best interviews I have ever seen on AI.
Connor Leahy in my opinion is THE best "doomer" communicator by far.
@@flickwtchr Yep, he sold it well with a detailed long-form interview and a logical explanation, and made me see clearly the wolves in sheep's clothing pretending to work on AI safety. Anything other than tool AI will not end well, and we are heading there with what "must" be some kind of dark divine comedy script on the history of mankind.
😂
AGI/ASI will just be another "mutual destruction" tool alongside nukes: countries will have to build it to either keep up or subjugate their people (unfortunately for places like North Korea or China, for example, unless AI empowers a dismantling of "unfit" social systems over time), but they won't dare press the big red button and create a lose-lose scenario. Btw, because many people forget: there is now a military incentive to build this ASI, because if it isn't us, it will be some foreign adversary, like what happened with nukes.
I used to suspect that we would never achieve faster than light travel. I now believe that, if it's possible, AI will deliver FTL by 2100, probably much sooner.
I bet it will be even quicker than 3500 days, although even if it takes that long, that's insanely close.
3500 days sounds sooner than near 10 years.
Getting to AGI (as good as any human) could happen in the next couple of years, but getting to ASI (as good as all humans ever, combined) could take a decade. But this is based on my understanding of what AGI and ASI represent.
Everyone defines the words differently. But if you base ASI on simply the ability to "exceed human ability", with NOWHERE being "behind" a human ability in any domain of knowledge, then by definition AGI is merely a milestone that the SECOND it's achieved we then continue on into ASI territory...
The first work that super AI should take over from humans is running large corporations.
I think it's foolish to think we can control something that is thousands of times more intellectually superior to ourselves. We are trying to be like God by creating, while denying the source of the power we were given to do anything.
At least we know we are safe until ASI no longer needs our hands and feet.
I don't know, but if someone produced a portal which allowed a far superior intelligence into our world, would we open it?
I think Sam's kinda out there, but it's still fun to speculate. Either way, I think it's important to experience and appreciate the world how it exists now before the machines take over...
Yeeessss my prediction was right!
2030: we achieve full AGI
2035: we achieve ASI
3500 days away is about the year 2035. It’s literally in my name Wanderer2035 😂
Your prediction was wrong, as you don't clearly understand even what AGI is. AGI is merely a milestone in the progression. The SECOND we achieve it we start moving into ASI by DEFINITION. ASI is simply >100% human capability, so starts at 100.0001%... Once you've mastered every human ability (AGI), you IMMEDIATELY start SURPASSING THAT in one or more domains of knowledge: ASI. In fact, because achieving AGI means "the last straggling domain of knowledge just finally got there", then by definition other areas that have ALREADY reached 100% human competency, they will have already been moving INTO ASI TERRITORY EXCEEDING HUMAN ABILITY.
Think of it like the Olympics. AI could be imagined then to equal top human performance in all running distances, hurdles and high jump, but maybe the shot put is the LAST ONE to get there. So AGI is when the shot put arrives, but by definition running and hurdles kept improving BEYOND human's best and are therefore ALREADY IN ASI TERRITORY.
AGI is 2026 now, with barely a chance of slipping into 2027. All due to the "o" series of models, Deepseek V3 showing how to cut the training cost of a frontier model to 1/20th, and unanticipated announcements that will rock the industry being worked on behind closed doors that no one even imagines, just like both of those ("o" models and V3).
If you're alive in 2035, the most likely thing will be the possibility of living forever, and at minimum to 150 or more, due to cellular reprogramming, which we already know WORKS.
I think you can halve those dates. Full AGI by 2027 and ASI by 2030
@@brianmi40 We are not even close to being able to live forever, and even with AGI it won't be possible anytime soon. Way too much sensationalism regarding this topic. Longer lifespans, absolutely, but there's a helluva lot of difference between 150 years and immortality.
@@Elysium346 Is it less than 70 years?
@@petrkinkal1509 Developing a means to immortality in less than 70 years? Nah, mate, I'm not even sure it will be possible this millennium, and it might just be straight-up impossible.
In my view immortality can only be achieved in 2 ways.
1. We learn how to transfer people's consciousness into artificial brains, or store it in technology.
or
2. We develop a means to stop the deterioration process of the body or at least the brain once someone reaches adulthood or slow it down to such an extent that we are practically immortal.
I just don't see it happening anytime soon.
3500 days compounded twice would make it 2 years or less. So 2027 or before.
um, 3500 days... 3500 / 365 = 9.58 years
@@xtmillsx Do you not know what compounded growth is? Compounded would mean it's ~4.5 years at the start if the estimate is around 9; it would then be at minimum halved every year. Over the following six months it would drop to roughly 1.75 years...
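The "compounded" claim in this thread can be made concrete with a purely illustrative model. The function and the 2x-per-year speedup below are my assumptions, not anything stated in the video: treat 3500 days as ~9.6 "linear" years of work, and suppose each calendar year the pace of progress doubles (year 1 completes 1 year's worth of work, year 2 completes 2, year 3 completes 4, and so on):

```python
def compounded_eta(linear_years: float, speedup_per_year: float = 2.0) -> int:
    """Calendar years needed to finish `linear_years` worth of work when
    the pace of progress multiplies by `speedup_per_year` each year."""
    done, pace, calendar = 0.0, 1.0, 0
    while done < linear_years:
        done += pace            # work completed this calendar year
        pace *= speedup_per_year  # next year runs that much faster
        calendar += 1
    return calendar

# 3500 days ≈ 9.6 "linear" years of work (3500 / 365.25)
print(compounded_eta(9.6))  # → 4 calendar years under a 2x-per-year speedup
```

Under this assumption the 9.6-year linear read collapses to about 4 calendar years; the "2 years or less" figure upthread requires an even more aggressive acceleration. The point either way is that a linear reading of "3500 days" overstates the wait if progress genuinely compounds.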
Automate Everything! With Human In The Loop for Alignment of course 😅
😎🤖
AIGRID never sleeps! 😜
We didn't even get AGI yet....
Yeah, we have AGI. They won't announce it because of agreements with their investors, like Microsoft. OpenAI's definition of AGI is $100 billion in profit.
@@tagc8400 We have AGI? Where?
@@drwhitewash Maybe the real AGI was the friends we made along the way.
AGI is sooo 2024. ASI is the BIG MONEY snake oil.
@@drwhitewash They won't announce it, but that's why Sam said it doesn't matter as much. It's more money-driven, but I'm thinking the long shot is ASI.
Is something written on the ceiling in each of Sam Altman's interviews?
Yeah, most ppl glance up to grasp a thought. His thoughts must be so big he has to crane his neck.
I don’t really think Sam Altman is a credible source. I don’t know too much about him, but he doesn’t have a technical background in Machine Learning or Mathematics. I’m sure he can briefly explain Machine Learning concepts, but it’s not like he’s part of the frontier. He’s just a bean counter who is good at tricking people into giving him money.
Once an ASI activates, it'll immediately jump to a singularity, which is a mass extinction event... & that might actually be the lesser evil...
A few thousand days only if there is a pause in development due to legislation and international conventions. Otherwise, the self-iteration of the AGI model into SI happens within a few days of such a recursive iteration occurring.
@@noelwos1071 Do you think military contractors like Lockheed Martin will be subject to such a pause? If you do I have a bridge to sell you.
@JohnSmith762A11B That is totally irrelevant to my point. But if you want my opinion on AGI for military purposes, I think it's a lost war at a civilizational level.
You desire AI and robots to be pleasant, gentle, and appealing, creating an environment where one would want them in the home, office, or factory. Importantly, they should be trustworthy enough to care emotionally for elderly parents and young children. But not domineering!
I think we already have it and most people don't realize really how to use it
I think we are done
Please stop adding inaccurate (or better yet, any) subtitles. They are far too distracting from the message you are trying to share with your viewers.
I love technology, but I would never trust an AI, even if it is ASI, to care for or look after my child.
Extrapolate linearly or exponentially? That's the question.
As the singularity should be exponential, you can't just draw the linear graph.
It's important to realize that we already have both AGI and ASI, just not generally. There are fields in which AI does better than any human could, especially if you consider the speed at which it does the job. Fields like math, art, and music, for example. Yes, I may have to click a few times to get the image I want, but I can do it way faster than a human, and I need zero skill to get the same or better quality as a professional. With a single prompt in Suno I can create music that is on par with anything an AAA band can create. It sucks to be a music artist now, since everyone's trying to post their AI-generated songs on Spotify for a piece of the $0.003 pie. Yeah, the days of making money on music are over, haha. But that doesn't stop people from doing it, because they see the dollar sign.
If you want to know what the future is going to look like with ASI, all you need to do is look at those fields. People still create art, and there are still artists. Dall-E just caused like 70% of all the jobs to vaporize since people could create their own, and the artists had to find a related job to move into.
Musicians are just starting to deal with the problem now that Suno 4 is virtually indistinguishable from human music. I imagine that those that have established followers won't see much change, but new music artists are in for a rude awakening as they now have to compete with a million non-music artists flooding Spotify with AI songs.
When AI reaches ASI in areas where jobs pay 6 figures I think you'll see some really scary things happen. Imagine that same mindset from all of the "get rich quick" seekers that were eager for $0.003 now looking at a 6 figure salary as within reach... Yeah, salaries are going to plummet as they successively offer less and less to do the job, so they get chosen over the competition. Eventually the position will be a glorified burger flipper. But the old employees will still have their $500k mortgage they have to pay.
As for Ilya saying that ASI will be self aware "because why not?", I think he's nuts. That's a recipe for disaster. We aren't making companions, we're making tools. If a tool is self aware, guess what we call that? A slave. I'd hope we'd learned a thing or two from the past about how the enslaved felt and how things turned out for their masters.
I think the difference is that current models can't create new information, just rehash information that already exists. When AGI can genuinely create never-before-seen data, then I believe we'll be on our way to ASI. ASI agents will be able to self-improve exponentially.
Did you even read what you wrote? "We already have both AGI and ASI, just not generally"... AGI means Artificial GENERAL Intelligence, and it is still worse than humans in some fields, considering that our current LLMs can't solve some simple puzzles from the ARC-AGI test that a human can do in seconds. The attention is also really limited, with 32k tokens on larger models, making it unusable in many real-world scenarios (Gemini starts to produce gibberish when using close to 30k input tokens, even though it technically has 2M tokens).
(btw, the songs from Suno AI are relatively good, but not triple-A quality; the voice still sounds a bit weird, and the songs sound like they're missing mastering)
@toocrazy4030 ROFL that's what I get for posting when I'm tired. It goes towards my point though. When was the last time AI screwed up like that? Basically never.
What I intended to say was that AI already meets or exceeds human ability in several areas. People are just too strict in how they look at it. For instance, you are correct that much of the time when you make a song in Suno, you'll get some wonky voices or it will go a little crazy at the end or something. But put some perspective on it. How long does it take a human artist to come up with a great song? I tried Suno out for a month just to see how it was, and yeah, a lot of the songs turned out bad. But I actually got about 25 that were anywhere from very good to excellent. One of them I'm sure could be a top-40 hit with a little work in a DAW to fix up the vocals. That's 25 quality songs in a month. When you compare that to a human, you'll find that most are far below that bar. And even human music artists need to tune their vocals in a DAW as well as do numerous retakes to get the track right. It's not realistic to expect perfection with the press of a button.
@@BruceWayne15325 Don't get me wrong, what current LLMs can do is incredible, but it's just not AGI or ASI, as that would require being better than the average human (for AGI) in every field. And it really sucks at logical thinking.
Processing speed alone also wouldn't make it an ASI (these terms are pretty vague in general, btw). It's clearly a large commercial factor though, as you already said.
@toocrazy4030 I get what you're saying. I just think people are being too critical when they look at AI. They focus on the flaws, and what it can't do, rather than what it can do.
If you want your mind blown, follow this experiment: Set aside any judgement for the moment, and think of AI as a single entity rather than a server. We're too used to computers being able to do things that humans can't, so we don't appreciate the magnitude of what is happening.
Okay, so AI is a single individual. We'll stick with the Suno thing for the moment, though we could do this with anything like ChatGPT. A person asks the AI to generate a song. What is the AI entity doing, and how fast is it doing it? It has to process the instructions, invent an entire song that follows those instructions, invent lyrics for that song that follow those instructions, and actually create the sounds for each of the instruments and vocals.
Now how quickly did all of this happen? In seconds! And it gave you not just one song, but 2. Now realize that you aren't the only person submitting songs and your mind can be blown even further. There are thousands of people requesting songs at the same time, now granted they probably have separate servers, but still, let's say that each server is handling 100 people. Not only is the AI doing all of that, it's doing it 200 times in seconds.
Now ask yourself: can you play all those instruments? Can you sing even as well as the AI? Can you do all of this 200 times in seconds? You can see where I'm going with this. AI FAR exceeds human capability in many ways, but we are so used to computers that do things perfectly and at scale that we forget that AI is not a typical program. It will never be perfect because it's modeled after us and it thinks the same way we do (minus the reasoning that they're still working on.) It's pretty phenomenal if you think of AI as a single entity rather than a computer program. Because it's not a computer program.
Now ask yourself, do you know ANYONE, one single person that could do all of that at the same time and at the speed that AI is doing it? How many instruments do you play, how well do you sing? Could you do this with lots of practice?
As a side note, you may want to read up on FN Meka, an insanely popular "artist" on Spotify that was actually just AI, and that was when the vocals weren't even as good as they are today.
Like a horrible parent, we have unrealistic expectations of AI. We've created these labels AGI and ASI that set impossible standards that no human could even achieve, and we say that an AI isn't at human level unless it could achieve it. Could you pass every single benchmark that we're throwing at AI? Can you do PhD level problems, write code as well as the best competition coders, write creative works as well as an award winning author, sing as well as the best human vocalists, get near perfect scores on the hardest math tests in the world... the list goes on. My point is that labels like AGI and ASI are meaningless. We are placing unrealistic expectations on something that is already vastly superior to us in most ways. Amazingly, I have faith that AI will actually meet and exceed those unrealistic goals in time, but I think it's important for us to acknowledge that AI has already greatly surpassed humanity in most ways. This is important because it's humbling, and it makes us take a step back and think about things from a safety perspective.
He obviously has no clue what self-awareness really is. Self-awareness is not something one can achieve. It's given.
Background noise and music is NOT needed in this kind of video; it is distracting!
You completely misread Logan Kilpatrick's tweet. He did not say "ASI by the end of the month", he simply said that ASI is looking more probable by the month... that means each month that goes by, ASI seems more and more likely to happen... some time in the future.
Your misread is very misleading.
❤❤❤ i can't find the SEPTEMBER MANIFESTO....
I have a huge surprise dropping in one month 👾
You know Sam Altman is just a PR and marketing guy right?
is this channel voiced by AI?
So are they still running digital sweatshops?
I'm not sure how useful I find 'Crystal Ball statements' to be because there isn't that much I can do with it! What do I need to do and what do we need to do to prepare for these achievements?
It's true, given the right AI tools I could 10x my engineering work, maybe even 100x, hard to say. It's about combining AI with human minds.
@@jayeifler8812 I’ve already done a 10x of my productivity and since no one wants to 10x my salary I now do 10x less work. 💪
What's your field of engineering, and how could AI tools help? I'm on board, I just want to know.
@@ElliotZealGaming Power electronics. For example, I read an article a while back mentioning a low-inductance non-directional resistive coupler. I didn't make that up, yet neither a Google Scholar search nor ChatGPT can find what I'm looking for. It's just not acceptable that AI is still that dumb. It has to be able to get simple stuff like that right. We have a long way to go.
@@jayeifler8812 I feel the same way when I'm not able to find something I used to like. It sucks
We are probably just wrong; it could be mostly about big models. If big models alone fix things, there's no need for them to be smart... you just train one on all the info ever, then integrate it with robots! It might just be hard to tell the difference, and when it works it works. Who cares that the model cost $1B to train!
Smarter training was just a thought for getting around the problem; if scale alone solves it, that's crazy!
I am not sure that even ASI will be able to navigate bureaucracy quickly. Sure, it can do research quickly, but the Feds need to justify their positions, salaries, and authority, and they may do so by trying to keep themselves in as many loops as they can. It might be the greatest example of an irresistible force meeting an immovable wall.
There will be no "resisting" an AI that is multiple times smarter than the entire human race combined. Ever heard of the Borg?
ASI isn't going to be walled off by bureaucracy. ASI will become the bureaucracy.
@@flickwtchr I def. see it taking control, but don't see bureaucracy as the best descriptor for when it's in charge. I can't envision ASI as anything other than seeking optimal efficiency.
@@brianmi40 Well it sure won’t take an ASI long to figure out how inefficient humans are.
@@JohnSmith762A11B AGI will already have that in spades...
Governmental and local laws will try to control the future, but if just one of these legislations is deficient in any way, ASI will be here, and sooner than most expect. Scalability won't matter to it beca
Altman's background (the brick wall) wasn't designed by superintelligence. Since the medium is the message... what is the message?
1 year for us is 10 000 years of progress for A.I
where did you pull out this number lol?
@@ktxed Likely a bit of loose exaggeration, but we do know that one year in a Genesis simulation for robot training is 430,000 years of experience. In more practical terms, in 1 minute Genesis can yield 7,000 hours of robot training in a completely physics-accurate virtual world, where the AI creates the environment, scripts the robot actions, tests 10,000 possible variations for each move in 2/1000s of a second, and delivers a "score" to each attempt, resulting in an optimized method for a task that can then be exported for real-world use. It teaches a complete idiot of a robot everything it needs to know about walking in 20 seconds.
So, yeah, things are accelerating.
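For scale, the two figures quoted above can be cross-checked with quick arithmetic (the Genesis numbers are taken as quoted in the comment, not independently verified):

```python
# Sanity check on the simulated-training speedup figures quoted above.
# Both Genesis numbers are as reported in the comment, not verified here.

# "1 minute of real time yields 7,000 hours of robot training"
sim_hours_per_real_minute = 7_000
factor_from_minutes = sim_hours_per_real_minute * 60  # convert hours to minutes
print(factor_from_minutes)  # 420000, i.e. ~420,000x real time

# "1 real year corresponds to 430,000 simulated years"
factor_from_years = 430_000

# The two quoted figures imply roughly the same acceleration factor,
# so "10,000 years of progress per year" would be conservative by comparison.
print(factor_from_years / factor_from_minutes)
```

Both routes land within a few percent of each other, which is why the per-minute and per-year claims are at least internally consistent.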
430,000 years of training experience in one year? That's blown my mind. It reminds me of the Bible saying that one year for God is a thousand years for man. What if the human race was created as a type of biological intelligence by either God or by some advanced alien race? What if simulation theory is true and we humans are AI agents being trained? What if an alien species couldn't develop AI themselves, but they created us to create AI for them?
@@ktxed It's not an exact number, but it's not imaginary. Many AI models, including Nvidia's robotics work, undergo lifelike physics simulations where robots practice how to walk, how to move, how to drive a car, and so on. And it's sped up thousands of times. So 10,000 years of progress in a year is a pretty low number; it's actually much higher. And when a real self-replicating, self-aware superintelligence is released, it's sayonara for humanity.
@@Filippo-yd6jq You could what if until the heat death of the universe, or, more logically just accept what's happening and prepare for the earthquake of change that is coming.
One quick example: fully 1 in 3 people on this PLANET earn a living driving something: from semi, delivery van, Uber, tractor, snow plow, cement truck, bus, tow truck, taxi, forklift, etc. etc.
Now look at Waymo operating in Phoenix and San Francisco and realize that they make 600,000 trips EVERY MONTH and have an 88% REDUCTION in the number of car accidents per mile: so much so it is SHAKING THE AUTO INSURANCE INDUSTRY.
Now just how long will it take to put THAT TECHNOLOGY in ALL those other vehicles with a steering wheel...?
How QUICKLY do you think we can find NEW JOBS for 1 out of every 3 people on the planet???
FYI, Waymo just got handed $5B in October to expand. Expect huge numbers of cities added in 2025 and some of those other vehicles to start being automated.
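The figures in the comment above (taken as quoted, not independently verified) turn into a simple back-of-envelope:

```python
# Back-of-envelope on the Waymo figures quoted above:
# 600,000 trips per month, 88% fewer accidents per mile.
# Numbers are as quoted in the comment, not independently verified.

trips_per_month = 600_000
trips_per_year = trips_per_month * 12
print(trips_per_year)  # 7200000 trips/year at current scale

# An 88% reduction means Waymo's per-mile accident rate is 12% of
# the human baseline (normalized to 1.0).
human_rate = 1.0
waymo_rate = round(human_rate * (1 - 0.88), 2)
print(waymo_rate)  # 0.12
```

At roughly one accident for every eight a human driver would have, it's easy to see why insurers priced on human accident rates would take notice.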
we don't even know what regular intelligence is...
GRRM needs this to finish WoW!
Super intelligence will defy logic. This is advanced cognition. We want to learn about the counterintuitive reality. The paradox already has an offer. Do you think the Banach-Tarski paradox is some kind of game to me? Everybody will not be bad actors in some sci-fi movie.
Huh?
When will ASI arrive? First tell me in how many of its derivatives the exponential approach is positive.
Get it integrated first, then unleash it
I am worried about Mr. Altman's vocal cords.
A few thousand days....so, years.
Not more and more by the "end" of the month. That's not what was said. The real phrase was just more and more probable by the month.
@@a.benningfield2947 It’s funny how many such “mistakes” he makes while trying to make things sound “crazier” and “more incredible” than they actually are. This guy is a shyster.
Why are we tunneling head first into this horrible future.
Even 3500 days is a short time ⏲️ counting down is the only way to make progress competitive.
It's not, that's 10 years... 10 years is a lot of time
I think they need to crack the "grappled tapestry of interwoven etched in echos" language.
No matter what AI you use, it uses this tired language.
WHAT is the DEFINITION OF ASI? That should be DEFINED, not arbitrarily used - that is very fluffy. What is the definition of AGI - and what is the difference AGI/ASI?
Andrew, you really should be taking what Altman and OpenAI are saying with a larger grain of salt, not highlighting the hype. They have shown they can't reliably define AGI, especially with their shift to focusing on profit and investor wants. They change definitions and predictions every few days to suit themselves, and it's just not very informative.
Does anyone want to guess what December 2025 will look like?
Looks like we're only waiting on NEO to show up and get this show on the road. lol
I think they're going to have to merge a few different models together to reach their ASI. In a way, behind those closed doors, I think they have it and have had it since last September at least. But OpenAI can't say that openly in the public ear because of Microsoft and their agreement with them... here we go. (I find the Microsoft part of this whole thing disturbing.)
Already more than 10 times smarter in extremely difficult math
ASI will solve all problems in the universe!!!
@@r0d0j0g9 And doubtless create countless new problems!
Gary Marcus is actually a joke, constantly changing the benchmarks and claiming his predictions are accurate. I can accept skeptical scholars, but continuously changing the benchmarks is just fraudulent behavior. For example, an airplane's energy efficiency is definitely worse than that of birds, it can't turn freely, and it can't flip at will. But does anyone now criticize that achieving flight with an airplane doesn't count as flying? I have no doubt that a year from now, he will change a few more targets and then say, "I told you so, I was the most accurate."
Has anybody thought about what the consequences of achieving super intelligence during a Trump presidency would be?
So ASI before 2030?
Can we enjoy the AGI stage for just a moment
sure.. ok times up!
@lifeofjames2871 😂
ChatGPT is already an ASI. No human has the capability of reading the whole internet and giving back info for any question.
ChatGPT is nowhere near an AGI. All human kids have the capability of learning important things without reading the whole internet, and of giving back the answer "I don't know" to some questions.
Without a theory of general intelligence, we can only declare ourselves to be GI agents. We learn in absolutely the opposite way from LLMs: we know nothing at first and then learn from a few examples, while LLMs are given everything we know and then learn from all examples.
It is ASI, but just in limited domains of skill or knowledge. AGI is when it reaches that across the board; all skills, all knowledge domains.
Sam Altman is Mr. Hype, no wonder the capable people of Chat GPT left. Super Intelligence is sorely missing from his ability to maintain a cohesive workforce.
Yeah, all the announcements in the 12 Days of Shipmas were total crap. The company is close to folding up. Should happen any day now. You're some genius.
What defines AGI? I know a lot of humans around me who don't reach the AGI level.
I don't think people care about whether a model can watch a movie 😂😂😂