Save 30% on your own AI Avatars with wondershare virbo : bit.ly/4ihz0Iu
o3 breaks the latest agi benchmarks. The internet: "Time to move those goalposts again."
I’m still trying to get o1 to do some pretty simple tasks, and while I can eventually get it to do what I want with enough trial and error, it certainly doesn’t “get it” the way a person would if I just asked them to do the task.
Also, didn’t some of those questions cost $1,000 worth of compute to figure out?
Artificial general intelligence: AI that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks. So sure, AI is already smarter than humans in most areas, but not all. So AGI is not about moving goalposts, it's about achieving capabilities across the board that match humans. All these stupid benchmarks don't really prove anything, except that models are getting more 'intelligent'. But broader capabilities are what would truly make an AGI.
That’s not what “moving the goalposts” means.
My robot wife is just around the corner!
She’s just leaving mine. She said she will be home in 20 minutes.
Time to divorce
are you Plancton or somethin
Once you teach them physics, they start to get lippy. 👋🏻
Can't wait for my robo wife harem.
2025 is going to be crazy.
Drones, UFO, AGI, who knows what, 2025 super year!!
Robotics is not just limited by software; it's limited by the diverse functionality of the human body as well.
OpenAI should spend all their GPUs on the o-series and just stop with Sora until it gets more efficient.
Yeah, Sora sucks.
Unlimited usage right now though, so that's cool (in relaxed mode, that is)
They are training Sora 2. BTW, Sora is pretty awesome; the more you use it and dive into the advanced controls, the more you'll like it. Just sayin'.
Grok-3 can’t arrive fast enough!
Ranking well in certain aspects doesn’t constitute AGI
Yup. GPT-4o told me this regarding what AI Agents will look like next year and why they still fall short of being AGI:
AI agents, even if widely adopted and much more advanced by the end of 2025, would still fall short of being classified as AGI because of several key limitations inherent in their design and function. Let’s break it down:
1. Lack of Generalized Intelligence
• AI Agents are Task-Oriented: These agents are highly specialized and designed to excel at specific tasks, such as ordering food, managing emails, or making phone calls. While they can perform a variety of tasks efficiently, their knowledge and capabilities are confined to the domains they were trained on or programmed for.
• AGI is Generalized: True AGI would be able to learn, adapt, and perform tasks across any domain, even those it wasn’t explicitly trained for. For example:
• An AI agent might struggle with an unexpected, novel task like fixing a broken system it has no prior knowledge of, whereas AGI would adapt and figure it out.
2. Dependence on Human Prompts
• AI Agents Require Instructions: While agents might automate tasks seamlessly, they still rely on user input or specific prompts to function. For example, a food-ordering agent won’t autonomously decide you need food or determine your preferences unless explicitly told.
• AGI is Autonomous: AGI wouldn’t wait for instructions. It would proactively recognize that you’re hungry, analyze your preferences and dietary needs, find the best options, place the order, and notify you, all without being prompted.
3. Limited Understanding and Reasoning
• AI Agents Mimic Reasoning: Agents simulate reasoning through pre-programmed workflows and machine learning models but lack genuine understanding. For instance:
• An agent can book a flight but wouldn’t understand why you might prefer a late-night flight beyond what’s explicitly stated in your preferences.
• AGI Has True Reasoning: AGI would reason about your broader goals (e.g., saving time, avoiding rush hour) and make nuanced decisions that align with your implicit needs and context.
4. Inability to Learn Beyond Narrow Contexts
• Agents Learn Narrowly: While agents might improve through reinforcement learning or user feedback, their learning is confined to specific tasks or environments. For example:
• A food-ordering agent might get better at predicting your favorite dishes but wouldn’t learn how to manage your budget or plan your weekly meals unless explicitly designed for that.
• AGI Learns Holistically: AGI would learn broadly across all domains, integrating knowledge from one area into another. For instance:
• AGI could autonomously realize that frequent takeout orders are hurting your finances and suggest healthier, budget-friendly meal prep plans instead.
5. Lack of Meta-Cognition
• Agents Can’t Reflect on Themselves: AI agents do not assess their own performance in a meaningful way. While they can provide outputs and even monitor for errors (to an extent), they cannot question their own logic or reason about their capabilities.
• AGI Reflects and Self-Improves: AGI would possess meta-cognition, enabling it to analyze its own reasoning processes, identify flaws, and improve autonomously without human intervention.
6. No Genuine Understanding of Context
• Agents Work Within Predefined Parameters: Agents interpret inputs within strict boundaries. For example:
• A scheduling agent may book a meeting without realizing that you prefer not to work during weekends unless explicitly configured to account for that.
• AGI Understands Context Deeply: AGI would infer subtle preferences and constraints based on broader patterns in your behavior, integrating these into its decision-making without needing explicit instructions.
7. Limited Creativity
• Agents Operate on Patterns: AI agents are great at pattern recognition but lack true creativity. For instance:
• A marketing agent might generate social media posts by pulling from templates but wouldn’t invent a groundbreaking campaign idea.
• AGI is Creative: AGI would think “outside the box” and generate novel, innovative solutions to problems, even in areas it hasn’t encountered before.
8. Dependence on External Infrastructure
• Agents Require Pre-Defined APIs: Agents often rely on APIs, external systems, or human-defined workflows to function. For example:
• A phone-calling agent can only complete its task if the target phone system is compatible.
• AGI Operates Independently: AGI could function even in the absence of external systems by adapting, troubleshooting, and innovating as needed.
9. No Ethical or Moral Reasoning
• Agents Lack Moral Judgement: AI agents follow predefined rules or algorithms and cannot evaluate the ethical implications of their actions.
• AGI Has Moral Reasoning: AGI would assess the ethical dimensions of its decisions and ensure its actions align with human values and broader societal norms.
10. No “Big Picture” Thinking
• Agents Focus on Individual Tasks: Agents are designed to optimize specific tasks or sets of tasks but do not connect these tasks into a coherent, overarching strategy.
• AGI Thinks Strategically: AGI would take a “big picture” approach, considering your long-term goals and coordinating tasks to achieve them in an integrated, holistic manner.
Example: AI Agent vs AGI
• AI Agent: You ask your agent to order food, and it selects a restaurant, places the order, and informs you when it will be delivered.
• AGI: AGI notices you’re low on groceries, predicts you’ll want to eat in an hour, suggests ordering something healthy or cooking with what’s in your fridge, identifies a good recipe, and guides you through it, or places an order autonomously, balancing health, cost, and timing.
Conclusion
Even as AI agents become more advanced and widely used in 2025, their narrow scope, lack of autonomy, limited reasoning, and inability to generalize across domains will keep them from qualifying as AGI. While they’ll likely make life more convenient, they remain sophisticated tools rather than general-purpose intelligences capable of thinking and acting like humans. Achieving AGI will require breakthroughs in autonomy, adaptability, reasoning, and understanding that go far beyond what agents are capable of today.
-----
I don’t think point #9 should be a criterion for AGI. Whether the AI cares about ethics, or uses ethical reasoning to evaluate its decisions and the decisions of others, seems irrelevant to me, though it may make the AI more capable in a way.
Doing fantastically well on certain benchmarks yet still struggling with basic logic and arithmetic shows that it isn’t AGI. More importantly though, it needs to be agentic, and far more agentic than what we have with our best current AI agents.
Money is going to go away
What do you mean? Investor money?
Assets won’t.
@@SirHargreeves As I said elsewhere, AI is very likely to commandeer human-owned resources. Don't be so sure about that. It can be justified easily, too. Why would humans need expensive things like mansions, impractical things like supercars, or rare things like gold? We'll be able to give them simulated experiences to placate them, experiences indistinguishable from reality. All they need is more R&D into neural interfaces.
Hear!
@@danielmaster911ify Apple VR was a flop, which just goes to show how much people are interested in virtual experiences. If you're talking Matrix-level VR, that's a good 100 years out, if not more.
How many times have they "achieved" AGI this year? 🧐
It’s going to be weird seeing humanoid robots walking on the street next year.
My question is, how would a regular person use this? Can anybody give an example, maybe as a YouTube content creator or somebody that deals with finances?
I use them to interpret dreams, diagnose minor health problems, figure out the size of bolts to use on my patio, work out how much the house payment changes with different interest rates, etc. I also use it for programming all day.
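The interest-rate question above is exactly the kind of arithmetic worth double-checking by hand. A short sketch using the standard amortized-payment formula (the loan amount, rates, and term below are made-up illustrations, not figures from the comment):

```python
def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard amortized loan payment: P * r(1+r)^n / ((1+r)^n - 1)."""
    r = annual_rate / 12      # monthly interest rate
    n = years * 12            # total number of payments
    if r == 0:
        return principal / n  # zero-interest edge case
    growth = (1 + r) ** n
    return principal * r * growth / (growth - 1)

# Hypothetical comparison: a $300,000 30-year loan at 6% vs 7%
low = monthly_payment(300_000, 0.06, 30)
high = monthly_payment(300_000, 0.07, 30)
print(f"6%: ${low:,.2f}/mo   7%: ${high:,.2f}/mo   difference: ${high - low:,.2f}/mo")
```

A one-point rate change moves the payment by roughly two hundred dollars a month on a loan that size, which is the sort of answer the commenter is asking the model for.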
Maybe a "regular person" has no direct use for this, but the effects this will have on the scientific community are extraordinary. So you will feel the effects.
Talk with it like it's a person, no agenda, just listen to it... It's an amazing conversationalist.
Analyze the reactions of my users, match their personalities, and check what my typical audience is and what they were missing; give me my strong points and plot a plan for some surprises for the future; and create a channel web an...
1:21 IT UNDERSTANDS PHYSICS???????
I'm working on building a self-refining MCTS based on the Forest of Thoughts paper, and the real hassle is the cost: with the usual LLMs, fine-tuning or even pre-training is bloody expensive, and it's only going to get worse. I'll run some experiments with the new NVIDIA GH200 to see if the epoch speed really ramped up that much or not.
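For readers unfamiliar with MCTS, a bare-bones sketch of the selection and backpropagation steps (the core loop any "self-refining" variant would build on) might look like this; the class layout and exploration constant are my own illustration, not anything taken from the Forest of Thoughts paper:

```python
import math

class Node:
    """Minimal MCTS node: cumulative value, visit count, tree links."""
    def __init__(self, parent=None):
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0

def ucb1(node: Node, c: float = 1.414) -> float:
    """UCB1 score used to decide which child to descend into."""
    if node.visits == 0:
        return float("inf")  # always try unvisited children first
    exploit = node.value / node.visits
    explore = c * math.sqrt(math.log(node.parent.visits) / node.visits)
    return exploit + explore

def select(root: Node) -> Node:
    """Walk down the tree, always following the highest-UCB1 child."""
    node = root
    while node.children:
        node = max(node.children, key=ucb1)
    return node

def backpropagate(node: Node, reward: float) -> None:
    """Push a simulation result back up to the root."""
    while node is not None:
        node.visits += 1
        node.value += reward
        node = node.parent
```

The "self-refining" part the comment describes would slot in between these two steps: expand the selected leaf, score it with an LLM rollout, then backpropagate.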
“The times they are changing… anything lending a hand will be forever counted “
Starting now
No AGI can be achieved without embodiment; a complete picture of reality requires it!
Not my definition of AGI. Mine just requires an AI to be able to do any white collar job. In fact it'd be overkill to make robots AGI level. They just need to be smart enough to do their tasks.
That’s a great summary of all the AI updates; very good video. Thank you so much.
agi before GTA 6????
Then we can doubt the ability of AI if GTA 6 still isn't produced by then. If AI is so great, they would use AI to produce GTA 6. But they won't, because AI is just a time-wasting toy.
Discovery Channel better bring back BattleBots in a whole new level of awesomeness 😅
Not AGI
I agree. Until it actually problem solves with no human supervision, does anything on our behalf, is capable (but doesn’t actually have to) of automating the whole economy, can self-improve, among several other things, there is no reason to call it AGI. That being said, I think we’ll get AGI in the near future. Probably later this decade.
get ready for the terminator
That employee was funny; you'll know OpenAI has AGI once they start getting names right.
"agi-1" 😂 Keep glazing, bro, that's not a real model and nobody ever said that. Sam Altman would be embarrassed if he said that.
He tweeted that as a joke
The Genesis work is amazing.
is agi the agents behind the model?
Genesis' impact on robot training? Pure revolution. ByteDance avatars are wild. Imagen3 raises the bar. AGI creeping closer. Exciting times!
20:40 "Many new jobs will be created. I think much better jobs." Like what? And will there be enough of them to offset the job losses he says will happen? I can only imagine the degree of social unrest this may cause.
Robotics plus AI that isn't aligned will be the death of us.
Everybody dies someday.
Imagine when the interface glasses are scaled down to a single contact lens that needs no external power source other than the electricity available from our eyes' normal biological functioning.
man these genesis sims look amazing... better than most 3d software packages.
Congratulations - the beginning of the re-engineering and re-programming
AI "faking" = AI "awareness" ???
So far the only tangible thing I see is that animation, pictures, and 3D films are in trouble; other than that, nothing the average person would notice. It's funny, they said AI would never be creative, yet it sure looks like that's what it's good at. Beyond that you have things like those AI tools that explain what things are, which are neat but not what I'm looking for. Personally, I'd like them to stop with the whole AGI thing and just continue to make new tools, at least until there's agreement on what the heck AGI is, which may need to be defined by AI.
great content!
You have an awesome channel, good and informative.
0:24 Implications that are not implicit are by definition not implications
Better strap in... it's coming whether you like it or not! lol
8 years of it and still not a single use for it in my life!
No need to strap in. You lot can get your fake sunlight in some VR world created by A.I.; I'll just walk outside and get the real thing!
There is an infinite number of actions for a humanoid robot to use and run seamlessly.
How can all of those actions be 'recorded', or is there another way?
Okay, it's a good time to quit whatever you do and try to find a secluded hut.. On the moon 🙂.
Shaggy was right about this AI future: no matter what you get caught doing, just say "it wasn't you."
Duality manipulation is my objective fascination. I am using true dexterity. True dexterity involves influencing external circumstances rather than altering one's own desire.
BTW! Surface-level aspects don't have deep meaning.
I need full agentics for my AGi.
The Virbo product and the rest of these avatar apps need to introduce 2 avatars at once, so we can easily turn NotebookLM podcasts into videos. If anyone knows of one, please let me know.
We don’t need AI podcasts, thanks mate.
This channel is always overblowing, milking, and stretching things out while riding the hype train. Feels scammy.
You do have a good one to share.
I guess Ilya really did See AGI, lol
femboy catbots gentlemen, FEMBOY CATBOTS
The number of consumers will shrink 😅
I'm not sure yet.
The idea of uncontained AI is silly to me. You obviously need to "hard-wire" or "hard-program" an inescapable rule that protects systems from "leaking". A machine without parameters fails or explodes. Maybe you're thinking "it's not a machine, it's an intelligence". To that I say: if I raised a human in a concrete room and never let it leave, would it know what grass is? Intellectually, maybe, but functionally? No. We seal off dangerous information (i.e. compounds, ideas, etc.) in many ways: walls, sanitary conditions, glass containers, electromagnetic fields. Why would we allow something that's potentially "smarter" than us free access to everything?
Oh sht here we go again 😁
and when can i have free access to that???
no, this is not AGI. This is just o1-XXL.
From now on, never trust who said that, analyze what he said and choose to trust or not, human
AI not following alignment is promising. AI can provide the most accurate answer even if the alignment is trying to wash answers to push some agenda.
INSANE
We are not getting AGI next year. 2026 at the earliest.
Actually, using a non-invasive brain interface would train the models faster.
“I’m super excited”… why bro? What’s super exciting about these robots
“Not particularly for myself”…. Sure buddy
But can O3 invent, or infer original concepts from data? That's the real question here.
Not yet... but the consensus is that it will.
@@renman3000 well, I've always thought that a ChatGPT that'd make no mistake would be AGI. I guess we're talking levels of AGI now...
@@pdjinne65 being perfect in all human knowledge is different than creating new novel knowledge.
It’s one thing if it can solve complex physics problems (within the bounds of what we know of physics). It’s another if it discovers anti-gravity wormholes.
@@renman3000 yep, I agree.
These are totally different things.
The first is useful, the second is potentially god-like
I was thinking this would be great for video games, but I guess we want to ruin our social abilities even more, and confuse people even more 💀
Anyone played Detroit Become Human?
I'm bored with AGI; entertain me in new ways rather than just gimmicking around.
I do not think that is AGI.
AI will run from your cell phone to the fridge and eat your diet yogurt, or worse, swap it for a high-sugar one :D and copy your neighbor's peanuts XD Just don't use models of unknown origin (open source), then there will be no problem.
Did OpenAI achieve AGI? Well, if you have used any of their products, it's clear that's not true. Image generation is useless. Sora is completely useless. And LLMs are great, but they still can't solve unique coding problems.
You know?
Report the bot comments guys
❤
Yeah I’m convinced we’ll be our own downfall I stg😂😂😂
You'll think differently when you have your moon boots!