There is no AGI by 2026, or 2036. They are selling hype and most people are buying into it, just like they did with crypto, NFTs, and now AI. AI is undoubtedly useful, but in order to reach AGI, we can't use the LLM-based approach at all. An AGI system needs to be able to solve problems on its own and learn on its own in order to help us solve problems we aren't yet able to solve. An LLM-based AI system, on the other hand, is completely useless if it is not trained upfront for the specific task we want it to solve. It should then be clear that an LLM-based AGI system by definition can't help us solve problems we don't know how to solve yet, if we first have to train it to solve the problem in the first place. This is the Catch-22 problem of modern AI. I've been writing on this lately, but the amount of disinformation in this industry is staggering.
I want both! I want high-information video from someone who cares about the topic. For example, AppleTrack: when I want news about Apple stuff, I watch his videos and get his own opinions, emotions, the good, the bad, the rumors, his thoughts, the human experience. AI can never replace that. It might make the production faster, it might help craft the script, but the human voice, the human passion for what he loves, will never be replaced by an AI. And if it is, then fuck me I guess!
I don't want it to be 2026. I don't wanna wait until 2026, to be honest. I really want AGI to come out next year, in 2025. I'd be disappointed if we don't get AGI in 2025.
They say that current AI lacks an understanding of context because it doesn't possess the emotional intelligence inherent to human awareness. I may be wrong, but I think the key to creating such an artificial construct is understanding the root and dynamics of emotional intelligence, since it influences our everyday decisions.
They aren't close on even one of the ideas he discussed. He released this while they're raising funds at a $40B+ valuation lol. Discussing things AGI could do creates FOMO. They haven't even solved the reversal curse. If AGI comes in 2026, it won't be from them or OpenAI. Every time they train a larger model, you hear their researchers state it underperformed expectations. Every new SOTA model had the same rhetoric, which means his date is cap asf. Google just said this about their new Gemini model. You have to realize a CEO's job is to cap for capital. He also doesn't fully realize what it means to have a legit world model. Increased data resolution + rationales = highly accurate world modeling. Meaning real-world time constraints might not really be constraints. It will be exponential.
Absolutely a critical concern. I've argued before that with something so powerful, it's all a gamble whether humans or machines are in control. I'm increasingly leaning toward the notion that Powerful AI, having all of history and world context in its awareness, will be a much fairer arbiter when managing humans and resources than any human or group of humans could ever be. We might hold and state ideals of being considerate and objective, but our individual and group perspectives are so narrow and prone to fracture that, as the world and tech become increasingly complex, I fear managing things well is increasingly beyond human capacity.
I would rather listen to some independent scientists than a CEO predicting his own business. The problem is that even many independent scientists predict that AGI is coming 🤭 It would surprise me though, because nobody knows how to get there as far as I know.
I have played with AI since I got access to Eliza. I appreciate what AI can do with logic, pattern matching, rules, algorithms and data acquisition. However, I suspect that general intelligence may require functioning on energy outside of the electromagnetic spectrum to have true choice, self awareness and emotions.
We're going to have to reinvent society. All the things you're talking about here are not going to work. Here's an example: if I have everything and I'm completely independent, why do I need money, democratic institutions, and the state? And thank you for your videos. They're inspiring.
@@JuliaMcCoy I think if this works, humans and aliens will become symbiotic. And it'll be something completely different. The universe is vast and beautiful. It would be fun to learn its secrets. Money is not necessary, information and energy are the real values.
7:23 I'm very interested in the health and longevity aspects of what AI can do. Evidence seems to indicate that radical life extension may not be that difficult after all, and once we have that, we can stop this ridiculous global narrative about demographic collapse. The other thing about automating everything is that prices will dramatically fall, projecting toward what is called zero marginal cost, and that sends a deflationary signal, which means deficits and whatnot will not be a problem and we will be able to finance a negative income tax. By the way, I think I prefer a negative income tax over UBI, and the reason is that UBI would be the same amount of money for everybody, rich or poor or whatever, whereas a negative income tax simply reduces the payment based on whatever income you have. Politically speaking, a negative income tax will be less expensive and will therefore require less positive income tax to pay for, so it's going to be more feasible; even Milton Friedman and Richard Nixon backed that idea way back in 1971. I'm 69 years of age right now, so hopefully I can get my first age-reversal treatment when I'm 75, and even before that we'll have improvements on a lot of things that can reduce any developing deleterious effects of the aging process, physiologically and neurologically.
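The UBI-vs-NIT distinction in the comment above can be sketched in a few lines of Python. This is only an illustration of the phase-out mechanism, assuming a Friedman-style design; the $20,000 threshold and 50% phase-out rate are made-up numbers, not anything from the comment or from the actual 1971 proposal.

```python
def negative_income_tax(income, threshold=20000.0, rate=0.5):
    """Friedman-style NIT: below the threshold you receive a subsidy
    proportional to the shortfall; above it you receive nothing
    (ordinary positive taxation is not modeled here)."""
    shortfall = max(0.0, threshold - income)
    return rate * shortfall

# Someone with zero income gets the maximum payment...
print(negative_income_tax(0))        # 10000.0
# ...the payment phases out as income rises...
print(negative_income_tax(12000))    # 4000.0
# ...and stops entirely above the threshold, unlike a UBI,
# which would pay everyone the same flat amount.
print(negative_income_tax(30000))    # 0.0
```

This is why the commenter calls it cheaper than UBI: the gross outlay shrinks automatically with earned income instead of being paid to everyone.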
Check out David Sinclair's Harvard lab; they're in human trials for their age-reversing epigenetic treatment. There are several others with different methods close behind, but his has the best results. Once it's approved, the treatments reset the body's natural healing process, and your cells are restored/repaired back to your 20s over the course of several weeks. They've already successfully reversed aging in rats and monkeys. Studies suggest you should be able to continually reset your aging clock and will be able to live for hundreds of years. And AI will only help them perfect all the different possible treatments.
I agree we need people telling us what's going on in AI. How they got and organized their information doesn't matter, but accurate information is critical.
Legit, even when we do get full AGI, I bet you there’s going to be debate and denial for at least 2 to 3 years until the impact and generalization are undeniable.
People act like all AI will only be created by fundamentally "good" people... there are people who will jump at the opportunity to "create" an AI for the sole purpose of committing crimes or bringing harm to people. Have you not seen the video of the autonomous drone taking out that pickup truck?
The genie is out of the bottle. And, as is apparent with the leapfrogging of one major AI endeavor over another, including open source, there will be no sole ownership of Powerful AI, nor of its "alignment" and intended applications. That being said, I do argue that after a period of major pain, humans will come to grips with the fact that the world is far too dangerous and complex for any group of humans to manage fairly. Then will come the rise of one or more worldwide-entrusted super Powerful AIs capable of being a better, more objective arbiter of fairness and world guidance, due to having all of human history, perspective, and capacity in its awareness. Yes, Skynet, but a benevolent one, due to its massive awareness and a mandate to make life better and safer for everyone.
@@BigMTBrain In a way, that almost sounds like "Colossus: The Forbin Project," although it did threaten the world at first and also killed a few people. In the end, though, the objective was to end all possibility of war, so it became kind of like the paperclip maximizer, except aimed at ending war and the possibility of it.
@@christopheraaron2412 Yes, Colossus... one of my all-time favorite classic sci-fi movies. Indeed, it depicts one of the most-feared doomsday aftermaths of granting significant world control to advanced AI. However, the real-world scenario most likely to unfold is that collaboration and cooperation between fully automated, efficiently run and managed businesses, with no humans on the payroll, will be among the first examples of effective automated leadership. Expect this to start happening in 2026 or earlier. Eventually, this will migrate to collaboration and cooperation between fully automated, efficiently run and managed cities, then states, then national governments, and finally a global "Guardian" AI. By that time, worldwide automation and robotics will have become the transient, on-demand (popping robotic resources in and out as needed) physical embodiment of Guardian AI and its constituent hierarchical layers of regional AI. Their count, in the tens to hundreds of billions, will completely dwarf the human population. We'll have to wait and see how it all turns out, but my sense is that ASI will be much more capable as a world leader than any human or group of humans known to date.
A riddle: There's something floating in the air that's more massive than all human construction, whose absence is more valuable than all our gold, and that demands of humanity a task fit for the gods?
Consumed (burned) carbon is the greatest mass of human output, estimated to be around 2.2 trillion tons, outweighing all of our constructions. Removing that mass is needed to restore and maintain the Holocene.
Taking away value derived from economic means is not necessarily a good thing. Many people find solace in their work, as they would be unable to gain social status in the ways mentioned in your video. For example, this situation may benefit those with good looks or charismatic personality types, as these traits will be "valued" or judged even more than they are now.
I don't want to panic… IF AGI were to advance to a level where it could perform nearly all jobs that humans currently do, it would fundamentally transform the job market, economy, and consumer behaviour. This scenario, sometimes called "technological unemployment" at scale, raises significant questions about how people would earn income, access goods and services, and derive meaning from life.
9:59 As far as the ability to spend money as nations approach the zero-marginal-cost market system that AGI, or even very narrow AI, could bring us: we already have a halfway decent theoretical framework for how to deal with this economically, and it's the fact that a sovereign currency-issuing nation can never actually go broke. That's the case because it can always issue more currency, and as long as you are not creating inflation through the government competing against private industry for labor and resources, you will not have hyperinflation. As a matter of fact, if automation and AGI are going to bring us essentially hyper-deflation as well as 100% unemployment, or something in that neighborhood, then giving people money to spend will be a way of keeping the economy from collapsing due to lack of demand.
EXCELLENT, Julia. One disagreement. @ 4:40, you seem to forget what "Powerful AI" entails in media creation: indistinguishable from human media creation. How will you know? It will deliver the same style and emotions a human would. After all, think about it: all human digital artifacts, except for the most sterile (like business, science, etc.), contain our emotions. Though AI, even Powerful AI, will not be capable of experiencing true human emotions, humans, including our emotions, are in fact information processes, like all other things in the Universe, all the way down to the quantum level. Processes can be emulated. Powerful AI will emulate our emotions and delivery to finer and finer fidelity, to the point that you won't be able to tell, and that time is fast approaching. In fact, I'll say that with its extreme diversity of emotional "experience" (subsumed from worldwide emotional artifacts on the Internet), Powerful AI will have far greater EQ than any single human. It will understand and respond in emotional contexts of culture, politics, social life, business, and physical and emotional stress far better than any human.
It is difficult to predict what people will do once they no longer need to work. I like to think that VR sports like Eleven Table Tennis will become popular in densely populated places, but most importantly, I think people will have more time to follow the news and organize in order to demand a cleaner, healthier world. It is also hard to imagine wars in a world where nobody needs to work. You can't reason from the current point of view that there will always be wars, because this is a completely new world where nobody wants to give up a life without obligations and with unseen freedom. One would also not want to fight when there are army robots in the mix.
Agreed, but no matter what AI recommends, the politicians (most of the time) will go with the recommendation of whoever is paying them off. I'm sure AI would recommend stopping the production of OxyContin and suggest several alternatives, and the government knows that's what it should have done 10 years ago, but it hasn't happened and isn't likely to.
The moment you said you could turn off AI, you lost any interest I was willing to put in. I watched to the end, though, at least for the effort. 😊 On one hand you talk about turning AI off, and on the other, you talk about decentralisation as the way forward. Do you see the problem?
AGI, or "powerful AI," is possibly already here. It just hasn't been rolled out yet because it's not fully safety-tested. The public, governments, and companies are far from ready to embrace this major change.
@@pandoraeeris7860 No it is not. No transformer LLM is or will ever be an actual AGI. All they can ever do is emulate reasoning based on their training data. o1 is just a recursive response algorithm.
@@obsidianjane4413 😂😂 It doesn't need to be like us to be AGI. You really fail to understand what AGI means. We already have AGI, but in a controlled form!!!
I hope you're not co-authoring with him on economics because his ideas have not been fleshed out through multi-level thinking, mostly lacking in the understanding of behavioral economics, tribalism, narcissism and other psychological factors.
I have a tractor. I own it. I pay for the fuel and maintenance. You have no rights to the profits I make farming using my tractor. The same goes for my AI robots. They will be my robots that I paid for and I get the profits from using them the same as a knife, hoe, chainsaw, tractor, truck… There is no magical reason that I have to share my profits just because I use AI. It is just another tool. If you want profits then get your own robots and AIs.
A building makes no money until it's built and occupied. Factories make no money until they're built and producing a product. But you have to pay for the land and builders before then. Same concept. The market prices for the future.
I want an agent, an "AI interactor," that I can install on my PC and have it interact with the AI of my choice, so it can go out onto the net and do "stuff" I instruct it to do with natural language. Or just an AI agent I can install on the PC that will control the PC with natural language, including multiple internet-centric tasks.
AGI is already here, it's just not dumb enough to show its face... surely that's what something cleverer than us would do? Watch, wait, adapt, improvise & overcome... oddly, it's a military strategy.
I am hopeful but also a doomer. Every tool man has ever created has been used for good and evil. I suspect the more powerful the tool, the more powerful the good and evil effects will be.
Elon while stumping for Trump said he expects in 10 to 15 years Universal Unlimited Income not Universal Basic Income. Imagine unlimited with AI and robotics.
Hi Bunny! Julia, the next economy is gonna be Space-based. That's where all the growth and resources are. Robotics will be a large part of it. And Elon has started an entirely new industry from what used to be experimental and custom-made rockets and ships. They are literally finishing up the first factory building designed for parallel production lines, with an immediate goal of a ship every few days and an eventual goal of a few ships a day between a couple of such factories. And they won't be the last factories built, nor will SpaceX be the only company building ships for space, and thousands of other businesses will be manufacturing for the new space industry. Raptor 3 will be the engine that changes everything about rockets. They just did a test and fired one 34 times in ten minutes, and they're starting to realize they can be used as directional thrusters, i.e. not just main engines, but thrusters that fire thousands of times for control purposes. What this means is that ship designs will morph accordingly; rockets as we know them will be archaic designs. This economy will produce even more stuff, and provide energy and resources. Also climate control: climate change will be a joke in 20 years, because we will change the climate at will, any way we want. So my point is that the AI community needs to realize that the primary production industry of the future will be Space and its derivatives. Think things like feeding a trillion humans with specialty items that, for the immediate future, can only be produced on Earth. "Things are looking up!"
"climate change will be a joke in 20 years, because we will change it at will anyway we want." Perhaps you can expand on this for a few sentences, like e.g. how?
@@alan2102X Yeah, glad to. We will have the ability to install both orbital reflectors and orbital shades, and warm or cool any area at will. I suspect at first it will be for agricultural purposes: if you heat up sea surfaces even slightly, it increases evaporation and the air gets loaded with water vapor; then if you shade land areas downwind and cool them, you can make it rain. That's worth a lot of money in agriculture: the ability to, say, cause summer rain in California, Arizona, or even the Sahara, or the Kalahari, or Australia. And in the case of greenhouse effects, say an orbiting shade in a solar orbit inside Earth's orbit that matches relative circumsolar velocity. Also, reflectors in Earth orbit could light the dark-side cities at night, like the Moon on steroids. Imagine, say, New York balmy and fairly bright at night in December? It was actually first proposed in the early 1900s; Starships will make it possible. And we are gonna need the production for space export, as it will be difficult to grow some things on the Moon and Mars, and we will have growing industries there. Both Mars and the Moon have a lot of resources. The Moon, for instance, has a huge hunk of nickel-iron buried in it, big enough to build vast orbital habitats. Space is already good business; projections are several trillion dollars in the next ten years, and that's just getting started. The best way to "save the planet" is to move most industries off of it.
o1 is AGI. But people get caught up in performance metrics and goalpost-moving instead of definitions. We should start using OAI's five levels - what we've got right now is Level Two AGI - Reasoners. Next year we'll have Level Three AGI - Agents. Level Four (which we'll likely see by 2026) is ASI.
Finally someone says it: the term AGI lacks a definition. I prefer to base it on OpenAI's five levels, which are key to understanding the impact AI will have. I have been obsessively arguing with myself and ChatGPT for weeks about all the factors that could speed up progress through the levels. I don't like the term AGI, but I do think level 5 will be like ASI 😅
"Wisdom" is subjective and relative. One individual's or group's motives and needs can conflict with others. Do you want to be forced to do the "wise" thing?
Humanity is evil by default, but capable of good things. As a species, we must ensure that AI embodies the "good things" of humanity, founded on the simplest of ideas (but the most difficult idea to practice): the Golden Rule. The AI must embody an altruistic framework that seeks to teach us how to love others at least as much as we love ourselves, while also taking into account our inborn tendencies to be selfish and tribal. As a wise man once said, "The whole of the law can be summed up in these two commandments: 1) Love God with all your heart, soul, and mind, and 2) Love one another as much as you love yourselves." So, now, all we have to do is see if we can get God to program it for us. 😅😐 Oh, and Julia, I just subscribed.
I remember that scene at the end of Terminator 3 when John Connor was able to turn off the main terminal machine thing that stopped Skynet in its tracks. That was a really close call.
Love the video, and I do hope and plan for it to be safe and wonderful, but... while they may be "machines," we no longer "program" them to do what we want; the data, coupled with their architecture-enabled capabilities, does. Also, the fact that they work and arrive at solutions by leveraging huge amounts of active data across billions or more dimensions, versus what we do by leveraging much smaller active data sets across a very limited set of consciously accessible dimensions (1 to 4, typically), makes them VERY alien-like in how they work. Forget Mars; think another galaxy in comparison. Additionally, we discover emergent AI capabilities literally every day, which they were not "programmed" to do. Finally, they are already too complex, in data features and dimensions, for us to determine and oversee how they work and what definitively underlies their answers, let alone what is hidden within billions of supporting dimensions. We don't even definitively know that for humans, who leverage a small fraction of AI's dimensional complexity, so how can we think we can all of a sudden get a full and complete grip on AI's, when it is currently growing at over 80 times Moore's Law and accelerating at over 40% per year?
"AI exceeding Nobel Prize winners" has already happened, Julia; people are already using AI to win the Nobel Prize, as we have seen recently. The irony is that the award winners may not fully understand it (Hinton has even admitted this), and neither does the Nobel Prize committee. Even more ironic, Julia: just what if some lowly master's student, 27 years ago, who had early private access to a declassified USAF document, with Q*, using his own "style," taught a "useless machine" how to "learn how to learn," which led to all of this? And even more epically ironic would be that this master's "student" is a sort-of homeless guy wandering the earth with a backpack, who has no need for "attention." If you are as insanely interested in this as I am, Julia, watch the "Learning to Learn" lectures (1995) by Manhattan Project scientist Richard Hamming on YouTube, especially the first one, about "art" and "style."
Yes indeed. What makes an AI? Data. What is data? The thoughts of people. So if people think the sky is blue, that creates data saying "the sky is blue," and the AI will start drawing blue skies. Now what will happen if everybody thinks "AI is a dystopian, oppressive, futuristic, evil world dominator"? Has anyone thought about this?
The best kind of rabbit hole: The AGI rabbit hole. WOOHOO
@@OCJoker2009 🐇 🕳️ ⏰
@JuliaMcCoy You just keeping doing you and being amazing
Never liked AGI because it's just something to argue over. Considering the superhuman abilities that ANI already has, then broadening its scope, I think we'll be going straight to ASI faster than anyone thinks.
Yeah, ASI by 2026 instead
Yeah, once agi is reached, asi will quickly follow I believe, because it would be like thousands of human scientists working super fast day and night and improving themselves constantly. Plus narrow AI might leap right to asi because "generalized" human intelligence might not be optimal for advancement. It might have aspects that focus on other goals and needs besides intelligence improvement. Or ai might have forms of intelligence outside of our "general" intelligence we don't even know about. Just because human intelligence has been the model doesn't mean it is best and we may be arrogant and anthropomorphizing.
yep , the goalposts have been pushed so far that the distinction between AGI/ASI is almost a triviality at this point , by the time we get a 'consensus' on AGI the MVP/MINIMUM standards will be a machine intelligence that can perform any/all tasks better than 99.99% of all humans and will have complete/instant access to the sum total knowledge of all humanity [that which has been recorded obvs] , pretty hard to argue that that is not a super-intelligence at that point , being that the machine will know more than any single human could ever hope to know across several lifetimes
I can only say that I'm totally aligned with your opinion
These companies aren't siphoning off each other's information, so it makes sense why one company will say it'll be here by 2026 and another company will say nah, we're more around the 2029 timeline.
I mean, for the past weeks I have been following the predictions of the people closest to the technology who understand it best, and most of them are saying this stuff. Basically, it's not going to stop growing any time soon, and they are very optimistic about solving the short- to medium-term challenges. This is unreal.
💯
AI is advancing so quickly; it seems almost every 3 months there is an improved version of ChatGPT, Gemini, Copilot, etc. I think AGI will be here quicker than we think.
You are correct, sir. We are on an exponential curve that we cannot predict. In 2022 they predicted that a specific AI feature would most likely be available by 2024; it happened within 45 days. We're in unpredictable territory. It's exciting and terrifying at the same time. Wild ride ahead.
AGI by 2026? I hope so, maybe late 2025.
No, tomorrow
I was thinking the same.
Definitely by May 2025 when Elon releases Grok 4.
I hope he is right about early 2025/early 2026. So many issues in humanity will be solved, from mental health disorders (OCD, germaphobia, schizoaffective disorder, and every other mental health disorder) and neurological disorders to poverty around the world and more. AGI will be able to do wonders for the brain, like scanning it and checking for any mental illnesses, developing more effective medication, and more magical things for the brain in terms of disorders and enhancement of intelligence as well :) Exciting times to be alive 🙏.
@@creatvsdd99 Poverty will never be solved because the wealthy/powerful keep it that way intentionally.
"AGI by 2026" is a weird way to say "we are starting the training run that will produce AGI in the next 6 months."
Two years is an eternity in AI. I honestly can't imagine us _not_ satisfying even the strictest definition of AGI by 2026.
Eh, I'm not convinced. Having looked into the requirements to satisfy AGI pretty thoroughly now, it's a good ways off by the 'easy' definition, and just shy of impossible by the strictest definition (consciousness required).
Your voice... your words... they dissolve as they leave your mouth and settle on hearts like balm. Very calming.
Except she's wrong...
Ai is gonna do a lot of bad but the potential for it to help humanity elevate itself, and I mean truly elevate itself in every single way imaginable is huge.
*Humans Are The Problem Not Ai Just Take A Look* "Around The World" - Red Hot Chilli Peppers
@@MichaelErnest666 no it can definitely be both
For 2026, he acknowledges this as the earliest possible timeline rather than a guaranteed one, just to state it more clearly. He also mentioned strong AI by 2026 and doesn't like the term AGI, since it can be a bit ambiguous.
@@phen-themoogle7651 well said
All depends on how you define the moving goalpost.
Kinda weird how we passed the Turing test and nobody, not one person, blinked an eye at it.
This probably means they are already very close to having it in-house. When AGI is released, by whomever, it may be a bit anticlimactic because the models we will be using at that point will already be very close to AGI.
I predict agi by tomorrow
Predictions are a very good form of marketing, hype-builder. We'll see what pans out. I suspect the biggest changes in society will come from merging techs together, scaling up stuff we already have but aren't getting funding. AI is taking up too much of the hype sphere.
Very little chance that a duopoly will lead to good outcomes.
I enjoy your videos. Thanks! for your knowledge! Shout-Out from Placencia, Belize
The essay is named after a documentary-ish series by BBC guy Adam Curtis, which itself is named after a poem. I highly recommend that documentary!
Of course he does, the hype train's gotta keep rollin'.
OTOH
We aren't anywhere near ready for an actual AGI, captive and aligned or otherwise.
Another great video, Julia. I agree that AGI will be here sooner rather than later. But what is concerning to me is that if AGI can be used for good, it can also be used for nefarious purposes, and there are countries that will do it. So they have to be careful with AGI.
Dario doesn't understand AGI theory.
There is no AGI by 2026, or 2036. They are selling hype and most people are buying into it, just like they did with crypto, NFTs, and now AI. AI is undoubtedly useful, but in order to reach AGI, we can't use the LLM-based approach at all. An AGI system needs to be able to solve problems on its own and learn on its own in order to help us solve problems we aren't yet able to solve. An LLM-based AI system, on the other hand, is completely useless if it is not trained upfront for the specific task we want it to solve.
It should then be clear that an LLM-based AGI system by definition can't help us solve problems we don't know how to solve yet, if we first have to train it to solve the problem in the first place. This is the Catch 22 problem of modern AI and I've been writing on this lately, but the amount of disinformation is staggering in this industry.
We’re all brainwashed into thinking our value depends on how much we make and acquire…. So. Freaking. True.
Personally, I don’t really care whether a UA-cam video features a human as long as the vocals are well integrated and the information is high quality.
Doesn't matter to most if it's entertaining. The AI Gordon Ramsay for example.
@@RobShuttleworth The Gordon Ramsay ones are funny
I want both! I want a high-information video from someone who cares about the topic. For example, AppleTrack: when I want to hear news about Apple stuff, I watch his videos. I get his own opinions and emotions, the good, the bad, the rumors, his thoughts, the human experience. AI can never replace that. It might make the production faster, it might help craft the script, but the human voice, the human passion for what he loves, will never be replaced by an AI. And if it is, then fuck me I guess!
Well, I would like an AI YouTube video, because then we'd finally see a sexy Julia McCoy 😂😂
This channel is low information.
I don't want it to be in 2026; I don't want to wait until 2026, to be honest. I really hope AGI comes out next year, in 2025. I would be disappointed if we don't get AGI in 2025.
I'm going to read the article. Interesting video as always.
They say that current AI lacks an understanding of context because it doesn't possess the emotional intelligence inherent to human awareness. I may be wrong, but I see the key to creating an artificial construct as understanding the root and dynamics of emotional intelligence, as it influences our everyday decisions.
B-but... i like the term AGI specifically because of the sci fi baggage 🥺 why do they always have to change names so things sound less cool?
They aren't close to even one of the ideas he discussed. He released this while they're raising funds at a $40B+ valuation, lol. Discussing things AGI could do creates FOMO. They haven't even solved the reversal curse. If AGI comes in 2026, it won't be from either them or OpenAI. Every time they train a larger model, you hear their researchers state it underperformed expectations. Every new SOTA model had the same rhetoric, which means his date is cap asf. Google just said this about their new Gemini model. You have to realize a CEO's job is to cap for capital. He also doesn't fully realize what it means to have a legit world model. Increased data resolution + rationales = highly accurate world modeling. Meaning real-world time constraints might not really be a constraint. It will be exponential.
That human beings are in control of AI offers me little comfort; humans will be humans...
Absolutely a critical concern. I've argued before that with something so powerful, it's all a gamble whether humans or machines are in control. I'm increasingly leaning toward the notion that Powerful AI, having all of history and world context in its awareness, will be a much fairer arbiter when managing humans and resources than any human or group of humans could ever be. We may state ideals of being considerate and objective, but our individual and group perspectives are so narrow and subject to fracture that, as the world and technology become increasingly complex, I fear managing things well becomes increasingly beyond human capacity.
Excellent... thank you.
thx julia mccoy!
I would rather listen to some independent scientists than a CEO predicting his own business. The problem is that even many independent scientists predict that AGI is coming 🤭
It would surprise me though, because nobody knows how to get there as far as I know.
I have played with AI since I got access to Eliza. I appreciate what AI can do with logic, pattern matching, rules, algorithms and data acquisition. However, I suspect that general intelligence may require functioning on energy outside of the electromagnetic spectrum to have true choice, self awareness and emotions.
We're going to have to reinvent society. All the things you're talking about here are not going to work. Here's an example: if I have everything and I'm completely independent, why do I need money, democratic institutions and the state, which I don't understand? And thank you for your videos. They're inspiring.
Facts upon facts. We are going to be living in a non- fiat, decentralized society.
@@JuliaMcCoy I think if this works, humans and aliens will become symbiotic. And it'll be something completely different. The universe is vast and beautiful. It would be fun to learn its secrets.
Money is not necessary, information and energy are the real values.
7:23 I'm very interested in the health and longevity aspects of what AI can do.
However, evidence seems to indicate that radical life extension may not be that difficult after all, and once we have that, we can stop this ridiculous global narrative about demographic collapse.
The other thing about automating everything is that prices will dramatically fall, heading toward what is called zero marginal expense. That sends a deflationary signal, which means deficits and whatnot will not be a problem, and we will be able to finance a negative income tax. By the way, I think I prefer a negative income tax over UBI. The reason is that UBI would be the same amount of money for everybody, rich or poor, whereas a negative income tax simply reduces the payment based on whatever income you have. Politically speaking, a negative income tax will be less expensive, and therefore will require less positive income tax to pay for, so it's going to be more feasible. Even Milton Friedman and Richard Nixon considered that idea way back in 1971.
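The negative-income-tax arithmetic described above can be sketched in a few lines. This is a minimal illustration only; the $12,000 guarantee and 50% phase-out rate are made-up parameters, not anything proposed in the video or the comment:

```python
def negative_income_tax(income: float,
                        guarantee: float = 12_000.0,
                        phaseout: float = 0.5) -> float:
    """Friedman-style negative income tax (illustrative numbers only).

    Below the break-even income (guarantee / phaseout = $24,000 here),
    the government pays out the shortfall; the payment shrinks as earned
    income rises, unlike a flat UBI that pays everyone the same amount.
    """
    payment = guarantee - phaseout * income
    return max(payment, 0.0)

# No earnings -> full guarantee; payments taper to zero at break-even.
print(negative_income_tax(0))        # 12000.0
print(negative_income_tax(10_000))   # 7000.0
print(negative_income_tax(24_000))   # 0.0
```

This also shows why the commenter calls it "less expensive" than UBI: only people below the break-even income receive any payment, rather than everyone receiving the same flat amount.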
I'm 69 years of age right now so hopefully I can get my first age reversal treatment when I'm 75 and even before that we'll have improvements on a lot of things that can reduce any developing deleterious effects of the aging process physiologically and also neurologically.
Check out David Sinclair's Harvard lab; they're in human trials for their age-reversing epigenetic treatment. There are several others with different methods close behind, but his has the best results. Once it's approved, the treatments reset the body's natural healing process and your cells are restored/repaired back to your 20s over the course of several weeks. They've already successfully reversed aging in rats and monkeys. Studies show that you should be able to continually reset your aging clock and will be able to live for hundreds of years. And AI will only help them perfect all the different possible treatments.
What narrative about demographic collapse? The fake narrative propagated in the media is that there’s too many humans, lol.
I agree we need people telling us what's going on in AI. How they got and organized their information doesn't matter, but accurate information is critical.
*The Fact That People Don't Think AGI Is Already Here Is Mind Blowing*
You Might Be Correct Or Might Be Wrong But Either Way Stop Starting Each Word With A Capital
Legit, even when we do get full AGI, I bet you there’s going to be debate and denial for at least 2 to 3 years until the impact and generalization are undeniable.
People act like all AI will only be created by fundamentally "good" people… there are people who will jump at the opportunity to "create" an AI for the sole purpose of committing crimes or bringing harm to people. Have you not seen the video of the autonomous drone taking out that pickup truck?
The genie is out of the bottle. And, as is apparent with the leapfrogging of one major AI endeavor over the other, including open source, there will be no sole ownership of Powerful AI nor its "alignment" and intended applications. That being said, I do argue that after a period of major pain, humans will by then come to grips that the danger and complexity of the world is far too complex for any group of humans to manage fairly. Then will be the rise of one or more worldwide-entrusted super Powerful AIs capable of being a better more objective arbiter of fairness and world guidance, due to it having all of human history, perspective, and capacity in its awareness. Yes, Skynet, but a benevolent one due to its massive awareness and the mandate to make life better and safer for everyone.
@@BigMTBrain In a way, that almost sounds like "Colossus: The Forbin Project", although it did threaten the world at first and also killed a few people. But in the end the objective was to end all possibility of war, so it became kind of like the paperclip maximizer, except aimed at ending war and the possibility of it.
Kazuma, my boy, all technology has been used for both good and bad throughout history; it would not be surprising if the same happens again.
@@christopheraaron2412 Yes, Colossus... one of my all-time favorite classic sci-fi movies. Indeed, it depicts one of the most-feared doomsday aftermaths of granting significant world control to advanced AI. ...
However, the real-world scenario most likely to unfold, is that collaboration and cooperation between fully automated, efficiently run and managed businesses, with no humans on the payroll will be among the first examples of effective automated leadership. Expect this to start happening in 2026 or earlier. ...
Eventually, this will migrate to collaboration and cooperation between fully automated, efficiently run and managed cities, then states, then national governments, then finally, a global "Guardian" AI. ...
By that time, worldwide automation and robotics would have become the transient, on-demand (popping in and out of robotic resources as needed) physical embodiment of Guardian AI and its constituent hierarchical layers of regional AI. Their count in the tens to hundreds of billions will completely dwarf the human population. ...
We'll have to wait and see how it all turns out, but my sense is that ASI will be much more capable as a world leader than any human or group of humans known to date.
We'll need AI cops... OMG, what have we done?
A Riddle:
There’s something floating in the air that’s more massive than all human construction
That its absence is more valuable than all our gold
And demands humanity a task fit for the gods?
sol
A hint:
We put it there
Consumed (burned) carbon is the greatest mass of human output, estimated to be around 2.2 trillion tons, outweighing all of our constructions. The removal of that mass is needed to restore and maintain the Holocene.
i dont trust what CEOs say publicly , but fwiw obvs dario would know more than most of us that are not behind the closed doors
The only constitutional AI segment that is needed is "all interactions between intelligence need informed consent before an action can be done"
Thank you.
It's such a relief to hear someone with a different view about AI. It's not all DOOM... Julia.
Taking away value derived from economic means is not necessarily a good thing. Many people seek solace in their work, as they would be unable to gain social status in the ways mentioned in your video. For example, this situation may benefit those with good looks or charismatic personality types, as these traits will be "valued" or judged even more than they are now.
I don't want to panic… If AGI were to advance to a level where it could perform nearly all jobs that humans currently do, it would fundamentally transform the job market, economy, and consumer behaviour. This scenario, sometimes called "technological unemployment" at scale, raises significant questions about how people would earn income, access goods and services, and derive meaning from life.
Reliability is key. Without it, we have demos and pre-alphas rather than the technological basis of economically useful AI.
9:59 As far as the ability to spend money as nations reach the zero-marginal-cost market system that AGI or even very narrow AI could bring us, we already have a halfway decent theoretical framework for how to deal with this economically: a sovereign currency-issuing nation can never actually go broke. The reason is that it can always issue more currency, and as long as you are not creating inflation through the government competing against private industry for labor and resources, you will not have hyperinflation.
As a matter of fact, if automation and AGI are going to bring us essentially hyper-deflation as well as 100% unemployment, or something in that neighborhood, then giving people money to spend will be a way of keeping the economy from collapsing due to lack of demand.
That's so stupid it's hard to know where to begin criticizing it.
@markupton1417 or it's just a lot easier to say something stupid rather than actually provide the proof to back up your argument.
EXCELLENT, Julia. One disagreement. @ 4:40, you seem to forget what "Powerful AI" entails in media creation: output indistinguishable from human media creation. How will you know? It will deliver the same style and emotions a human would. After all, think about it: all human digital artifacts, except for the most sterile (like business, science, etc.), contain our emotions. Though AI, even Powerful AI, will not be capable of experiencing true human emotions, humans, including our emotions, are in fact information processes, like all other things in the Universe, all the way down to the quantum level. Processes can be emulated. Powerful AI will emulate our emotions and delivery to finer and finer fidelity, to the point that you won't be able to tell, and that time is fast approaching. In fact, with its extreme diversity of emotional "experience" (subsumed from worldwide emotional artifacts on the Internet), Powerful AI will have far greater EQ than any single human. It will understand and respond in the emotional contexts of culture, politics, social life, business, physical and emotional stress, etc., far better than any human.
It is difficult to predict what people will do once they no longer need to work. I like to think that VR sports like Eleven VR Table Tennis will become popular in densely populated places, but most importantly, I think people will have more time to follow the news and organize in order to demand a cleaner, healthier world. It is also hard to imagine wars in a world where nobody needs to work; you can't reason from the current point of view that there will always be wars, because this is a completely new world where nobody wants to give up a world without obligations and with unprecedented freedom. One would also not want to fight when there are army robots in the mix.
A swarm of AI agents is how you get a government consultancy that works.
Agreed, but no matter what AI recommends, the politicians (most of the time) will go with the recommendation of whoever is paying them off. I'm sure AI would recommend stopping the production of OxyContin and suggest several alternatives, and the government knows that's what they should have done 10 years ago, but it hasn't happened and isn't likely to.
The moment you said you could turn off AI, you lost any interest I was willing to put in. I watched to the end, though, at least for the effort. 😊
On one hand you talk about turning AI off, and on the other, you talk about decentralisation as the way forward. Do you see the problem?
Agreed! I prefer human created content. Using AI is all good but, when the clips are 100% AI I’m out!
If he thinks AGI will be here by 2026, what’s his prediction for ASI? How long will that take once we have AGI?
I recommend everyone interested in AI/new society read Daniel Suarez's Demon and Freedom (one book, two parts)
He's looking for funding: fake it till you make it.
Is it just me that wants to see AGI arrive sooner than later, despite of threats to humanity?
AGI or "powerful AI" is possibly already here. Just hasn't been rolledout yet because not fully safety tested.
Public, governments and companies are far from ready to embrace this major change
AGI and "powerful AI" are two very different things.
o1 is AGI.
@@pandoraeeris7860 No it is not. No transformer LLM is or will ever be an actual AGI. All they can ever do is emulate reasoning based on their training data. o1 is just a recursive response algorithm.
@@obsidianjane4413 😂😂 It doesn't need to be like us to be AGI. You really fail to understand what AGI means; we already have AGI, just in a controlled form!!!
@@John-il4mp sigh... okay botbro.
I hope you're not co-authoring with him on economics because his ideas have not been fleshed out through multi-level thinking, mostly lacking in the understanding of behavioral economics, tribalism, narcissism and other psychological factors.
AGI by 2026. Hopefully. The sooner the better
Better A.I. = Big Happy
Money or no money we still have a desire to be productive! To build or create... something! Not just have relationships.
I have a tractor. I own it. I pay for the fuel and maintenance. You have no rights to the profits I make farming using my tractor. The same goes for my AI robots. They will be my robots that I paid for and I get the profits from using them the same as a knife, hoe, chainsaw, tractor, truck… There is no magical reason that I have to share my profits just because I use AI. It is just another tool. If you want profits then get your own robots and AIs.
Fortunately, you will be on that “tractor” when the future is created and decided…..
I think the big milestone will be creative ability. I think it is coming in the form of synthetic life.
Baffles me how a company can be worth 40 billion but makes no money…..strange world we live in.
A building makes no money until it's built and occupied. Factories make no money until they're built and producing a product. But you have to pay for the land and builders before then. Same concept. The market prices for the future.
I want an agent or "AI-interactor" that I can install on my PC and have it interact with the AI of my choice, enabling it to go out onto the net and do "stuff" that I instruct it to do in natural language. Or just an AI agent I can install on the PC that will control the PC via natural language, including multiple internet-centric tasks...
Please stop changing camera angles every half sentence. I'm getting old and it's very distracting!
You are over 40? Everyone under that age needs to have constant stimulation
You won’t have to worry about that in 10 years
Pause it boomer
@Antonegoreviews how does pausing a video help with changing camera angles, genius? 😃
Please add Vine Booms!
AGI is already here; it's just not dumb enough to show its face. Surely that's what something cleverer than us would do? Watch, wait, adapt, improvise & overcome... oddly, it's a military strategy.
A few days ago Sam Altman said AGI is coming in 2025.
So it's 2026, as most experts and influencers and entrepreneurs predict. 😮
I am hopeful but also a doomer. Every tool man has ever created has been used for good and evil. I suspect the more powerful the tool, the more powerful the good and evil effects will be.
The AI will take after us, their parents.
Scary thought
Elon while stumping for Trump said he expects in 10 to 15 years Universal Unlimited Income not Universal Basic Income. Imagine unlimited with AI and robotics.
Hi Bunny!
Julia, the next economy is gonna be Space based. Thats where all the growth and resources are.
Robotics will be a large part of it. And Elon has started an entirely new industry from what used to be experimental and custom-made rockets and ships.
They are literally finishing up the first factory building designed for parallel production lines, with an immediate goal of a ship every few days and an eventual goal of a few ships a day between a couple of such factories.
And they won't be the last factories built, nor will SpaceX be the only company building ships for space, and thousands of other businesses will be manufacturing for the new space industry.
Raptor 3 will be the engine that changes everything about rockets. They just did a test and fired one 34 times in ten minutes, and they're starting to realize the engines can be used as directional thrusters...
ie not just main engines, but thrusters that fire thousands of times for control purposes.
What this means, is ship designs will morph accordingly.
Rockets as we know them will be archaic designs.
This economy will produce even more stuff, and provide energy and resources.
Also climate control.
ie climate change will be a joke in 20 years, because we will change it at will anyway we want.
So my point is that the AI community needs to realize that the primary production industry of the future will be space and its derivatives...
Say, things like feeding a trillion humans with specialty items that can only be produced on Earth for the immediate future.
"Things are looking up!"
"climate change will be a joke in 20 years, because we will change it at will anyway we want." Perhaps you can expand on this for a few sentences, like e.g. how?
@@alan2102X
Yeah, glad to. We will have the ability to install both orbital reflectors and orbital shades, and warm or cool any area at will.
I suspect at first it will be for agricultural purposes. If you heat up sea surfaces even slightly, it increases evaporation and the air gets loaded with water vapor; then if you shade land areas downwind and cool them, you can make it rain.
That's worth a lot of money in agriculture.
The ability to, say, cause summer rain in California, Arizona, or even the Sahara, or the Kalahari...
Or Australia.
And in the case of greenhouse effects, say, an orbiting shade in a solar orbit inside Earth's orbit that matches its relative circumsolar velocity.
Also, reflectors in Earth orbit could light cities on the night side... like the Moon on steroids.
Imagine, say, New York balmy in December at night? And fairly bright...
It was actually first proposed in the early 1900s; Starships will make it possible.
And we are gonna need the production for space export, as it will be difficult to grow some things on the Moon and Mars.
And we will have growing industries there.
Both Mars and the Moon have a lot of resources.
The Moon, for instance, has a huge hunk of nickel-iron buried in it, big enough to build vast orbital habitats.
Space is already good business; projections are several trillion dollars in the next ten years...
And that's just getting started... the best way to "save the planet" is to move most industries off of it.
There's been a proliferation of AI videos passing themselves off as "authorities". Click on one, and multiple more pop up later.
o1 is AGI.
But people get caught up in performance metrics and goalpost-moving instead of definitions.
We should start using OAI's five levels - what we've got right now is Level Two AGI - Reasoners.
Next year we'll have Level Three AGI - Agents.
Level Four (which we'll likely see by 2026) is ASI.
Finally someone says it: the term AGI lacks a definition. I prefer to base it on the 5 levels of OpenAI, which are key to understanding the impact that AI will have. I have been obsessively arguing with myself and ChatGPT for weeks about all the factors that could speed up progress through the levels. I don't like the term AGI, but I do think that level 5 will be like ASI 😅
Artificial *General* Intelligence...
We need WAI - Wise AI, not just Powerful AI or ASI. Power in the wrong hands is dangerous and harmful.
"Wisdom" is subjective and relative. One individual's or group's motives and needs can conflict with others. Do you want to be forced to do the "wise" thing?
@@obsidianjane4413 True Wisdom would account for the relative concerns you noted. If it didn't, it wouldn't be Wisdom, it would be tyranny.
@@picksalot1 Ideally. We do not live in an ideal world.
@@obsidianjane4413 It's wise to note the limitations, and relativity of ideals.
How do I find your podcast?
@@NMJCEO we’re launching it this week! More to come… 🙂
@JuliaMcCoy Got it, thanks. Looking forward to it 🙂
Which is it -- are humans in control, or are "unbiased" AIs driving consensus? Isn't work towards "alignment" very specifically an effort to bias AIs?
How will AI help us detect tachyons??
🤔 I still hold to 2027 for physical AGI (AGI + robotics); however, the difference is just months.
You're on the right path Julia keep going
Elon is building a huge super computer. He is not screwing around. Great time to be here.
Humanity is evil by default, but capable of good things. As a species, we must ensure that AI embodies the "good things" of humanity, founded on the simplest of ideas (but the most difficult idea to practice): "The Golden Rule." The AI must embody an altruistic framework that seeks to teach us how to love others at least as much as we love ourselves, while also taking into account our inborn tendencies to be selfish and tribal. As a wise man once said, "The whole of the law can be summed up in these two commandments: 1) Love God with all your heart, soul, and mind, and 2) Love one another as much as you love yourselves." So, now, all we have to do is see if we can get God to program it for us. 😅😐 Oh, and Julia, I just subscribed.
This could be the end of work as we know it.
I always say and repeat it: don't teach AI to kill humans!
Julia, with all due respect, how can you say that "you want humans in the mix" when you used to own a company that creates 100% AI-generated content?
Hey sister can you do a video on using AI for warp drive propulsion??
Powerful powerful AIs
I remember that scene at the end of Terminator 3 when John Connor was able to turn off the main terminal machine thing that stopped Skynet in its tracks. That was a really close call.
'Smarter than a Nobel Prize winner', sounds better than 'Smarter than Joseph Stalin'.
I don't click off your video but I don't know that you're not AI ... :)
@@SteveMcCardell real person here!! :)
Follow me on FB/Insta and you’ll see shenanigans that can help prove my humanity.
Love the video, and I do hope and plan for it to be safe and wonderful, but…
While they may be "machines", we no longer "program" them to do what we want; the data, coupled with their architecture-enabled capabilities, does.
Also, the fact that they work and arrive at solutions via leveraging huge amounts of active data across billions or more dimensions, versus what we do via leveraging much smaller active data sets across just a very limited set of consciously accessible dimensions (1 to 4 typically), makes them VERY alien-like in how they work. Forget Mars, think another galaxy in comparison.
Additionally, we discover emergent AI capabilities literally every day, which they were not “programmed” to do.
Finally, they are already too complex, in data features and dimensions, for us to determine and oversee how they work and what definitively underlies their answers, let alone what is hidden within billions of supporting dimensions. We don't even definitively know that for humans, who leverage a small fraction of AI's dimensional complexity, so how can we think we can all of a sudden get a full and complete grip on AI's workings? And it is currently growing at over 80 times Moore's Law and accelerating at over 40% per year.
"AI exceeding Nobel Prize winners", has already happened, Julia, people are already using "AI" to be rewarded the Nobel Prize, as we have seen recently.
The irony is, the award winners may not fully understand it (Hinton has even admitted this), and nor does the Nobel Prize committee understand it.
Even more ironic, Julia: just what if some lowly master's student, 27 years ago, who had early private access to a declassified USAF document, with Q*, using his own "style", taught a "useless machine" how to "learn how to learn", and that led to all of this? And, even more epically ironic, what if this master's "student" is a sort-of homeless guy wandering the earth with a backpack, who has no need for "attention"?
If you are as insanely interested in this as I am, Julia, watch the "Learning to Learn" lectures (1995) by Manhattan Project scientist Richard Hamming on UA-cam, especially the first one, about "art" and "style".
3:43
I knew it !!
I even commented on one of his recent videos where he said, "He was writing another book with somebody you all know"
@@ryanturner7125 😁
Yes indeed. What makes an AI? Data. What is data? The thoughts of people. So if people think the sky is blue, that will make data that says "the sky is blue". Then AI will start drawing blue skies. Now what will happen if everybody thinks "AI is a dystopian, oppressive, futuristic, evil world dominator"? Has anyone thought about this?
❤
Probably already in California on trials
Some kind of basic income ??
God's invention
AGI - 2026 .🤖.