AGI by 2026?
Nope. AGI (true AGI) with current tech is not possible. The term "AGI" has lately become more of a marketing thing than true AGI.
No. Maybe 2050 or later.
@@panagiotisgalinos1335 current tech is not the tech we're going to have in 2026
@@jimj2683 2050 isn't realistic either. My estimate (range) is between 2033 and 2043
There are a number of reasons why Artificial General Intelligence (AGI) is not yet possible, including:
Computational resources
The current state of AI technology and infrastructure is not sufficient to develop AGI. Training an AGI on large amounts of data would be slow and require a lot of manpower to maintain.
Modeling the human brain
It's not possible to perfectly model the human brain, which is a complex neurocognitive system. The Church-Turing thesis implies that any computational process can in principle be simulated, but a faithful simulation of the brain could demand impractical amounts of time and memory.
Embodiment
Human intelligence is deeply rooted in our physical experiences and interactions with the world.
Replicating cognition
The ability to observe, learn, and gain new insight is difficult to replicate in AI on the scale that it occurs in the human brain.
Overestimation of technology
Claims that computers can duplicate human activities often presuppose a simplified and distorted account of that activity.
Damn, I had hopes for AGI in 2026, but given Musk's prediction quality we shouldn't expect AGI before 3035.
Nah, you can't really tell. Musk has overpromised a few times, but a lot of times he promised something and was able to deliver more than expected, quicker than expected. I think if he says 2026, it's quite possible we even have ASI by then, at least behind closed doors. If we're being real, we already have a form of AGI, and the definition of AGI is always changing; they keep pushing the timeline further into the future for obvious reasons.
lololol I was thinking the same thing lol
The difference is he is not in charge of delivering it - so maybe that improves the odds?
That's what I thought. I guess we're not getting it.
I am sure Musk just regurgitated something he heard from someone else. That's all.
Thanks for the mention man!!
Matthew! Your content keeps getting addictive
Can't wait for your Quantum Computing Interview! 😎🤖
When will AGI come? That's about as smart a question as asking at what exact moment a teenager turns into an adult.
Uh. Hmm. Okay, so we need to sit down the AI with its dad to drink coffee, and have it say fuck for the first time in casual conversation. If it doesn't come off as weird, then we have AGI.
I would've hoped Elon would get to AGI back in 2016; now it feels like 2036.
Thanks for the quick update. I enjoy your balanced, low hype approach.
PLEASE 🙏 do a FLUX tutorial and let us know all of the free sources that give us the ability to try them out online! Thank you for all you do!
Keep up the great content man! I love that we are alive in this time (as far as ai goes).
Looking forward to the interview Matt.
Here's a better question.
Say we had AGI tomorrow.
Would we even be able to adapt?
I don't think a lot of people understand what they're asking for.
Surprised you didn't discuss the new Gemini experimental models.
How can I listen to the full music from Suno videos? It's amazing!
The problem: 1. There's no clear numeric bar to cross to count as "AGI." 2. So people hype the term to mean whatever they want it to mean and promise "the check is in the mail, just wait, soon." All I care about is: a) the use cases that show value; b) that the value can reach the general public at a low subscription cost; c) that the AI has a user-friendly, wizard-based UI to help non-techie people get fast, easy value when using it.
I think that Anthropic was testing access to global chat history last week. It seemed the agent was able to access things that I couldn't access before. Then the feature went away. It seemed very powerful. The agent had new metacognition abilities.
you are doing great and your work is more important than people may yet realize.
A small correction: Flux is open *weights*, not open source. In Flux's case this is relevant because, since Flux Dev and Schnell are distilled from Flux Pro, fine-tuning Flux is not as easy as Stable Diffusion...
Indeed.
Any idea why the full version of SD 3.5 is being pretty much ignored by the community in favour of FluxD? Is the fully trainable 3.5 so much worse than the partly trainable FluxD?
@@Thedeepseanomad there's a fair number of checkpoints on CivitAI using SD 3.5... but IMO the thing is that we now have some other pretty awesome models that are worth fine-tuning... so compared to when SDXL was the SOTA open-source model, we have a bigger variety now...
Also, recent models are much bigger and harder to train on consumer hardware... people were using 3090s to train SDXL checkpoints, now a 4090 can barely run FLUX Dev at FP...
I've said it once and I'll say it again: when I see a fully functional, working level-four innovator, I'll believe AGI will be real in a year. Let's get to the halfway point first.
I love your videos, Matt, you are basically my go-to news guy for up-to-date AI info :) keep it up :D
Wait. How is the ElevenLabs API any different from the Realtime API? Can't we feed info to that one as well?
Good news on Quantum, thanks Matt.
uff, what a relief, now I know the timeline is at least 2036 for AGI...
"Models trained to enable structural guidance based on canny edges extracted from an input image and a text prompt." from the video: I believe they used the canny edges of a photo of a *dog* and transformed it into a cat with similar structure and positioning.
YEP! ChatGPT's Advanced Voice Mode does leave a lot to be desired, which is something I didn't think would be a thing considering how long we had to wait to use the feature.
AGI is already here. Sick of midwits moving the goalposts.
No, Elon has been saying they'd have fully autonomous driving cars since I think 2017 or maybe 2019. We can't take his predictions seriously, he's just trying to inflate his company's valuation
The interview sounds super interesting
Yes to a Flux tutorial, including running it locally, if possible.
Also, a million tokens is actually roughly 500,000 words, not a million, AFAIK
Genius move from Niantic!
3:52ish - The canny edge stuff: a French bulldog transformed into a cat. Hence, I'm guessing that the point was to extract the main image details from edge data and then transform them according to the text prompt, i.e. make a dog into a cat. I'm only guessing; I haven't looked up the original source.
The main models aren't great writers, but there are a ton of fine tunes of open source models on huggingface that are pretty stellar.
Oh, that monitor is sweet!
It's become popular to say that he is mostly wrong with his predictions, but that's actually wrong. Most of the time he is right, and sometimes even faster than predicted.
YES, Flux tutorial please!
we're still waiting for FSD v4.0 (considering v5.0 would be more or less AGI)
We don't even have a well-defined, consensus definition. Any meaningful AGI probably won't be here until 2029. It will be hockey-stick improvement right after.
I think 2080. We'll all be dead by then.
Matthew, do cover the new Gemini experimental 1121, and also Mistral Large 2.1.
My ChatGPT voice system is lagging too much. I thought it was all because of my old smartphone.
I'm skeptical about the latency of ElevenLabs' new LLM-based voice mode. Since OpenAI can input and output audio tokens directly using a single model, I expect them to be at least 500ms faster, especially if speech-to-text is used for the input. (5:02)
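To make that pipeline-versus-single-model point concrete, here is a toy back-of-envelope sketch. Every number in it is an assumption invented for illustration, not a measurement of either product; only the structural point (serial stages add up) comes from the comment above.

```python
# Illustrative only: all latencies below are made-up assumptions, not measurements.
# Structural point: a cascaded STT -> LLM -> TTS pipeline pays three latencies in
# series, while a single speech-to-speech model pays one.
cascaded_ms = {
    "speech_to_text": 300,               # assumed
    "llm_time_to_first_token": 350,      # assumed
    "text_to_speech_first_audio": 250,   # assumed
}
speech_to_speech_ms = {
    "first_audio_token_out": 400,        # assumed
}

print(f"cascaded pipeline:  ~{sum(cascaded_ms.values())} ms to first audio")
print(f"single audio model: ~{sum(speech_to_speech_ms.values())} ms to first audio")
# With these made-up numbers the gap is ~500 ms, the order of magnitude the
# comment above is speculating about.
```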
AGI only after full multimodal embodiment, persistent (long-term) memory, and self-learning. From what I've been hearing, it seems a fair amount of dev time is being dedicated to those things, so I think AGI certainly before 2030.
I hope Niantic makes a digital twin of the planet that is essentially Google Street View, but in 3D.
2025 all day long, we are moving too fast for it not to happen.
what? advanced voice has been in the desktop version for weeks for me?
Quantum computing in everyday life? How does that work if the machine is huge and needs supercooled materials? Some optimization by AI isn't going to make that happen.
FYI, I tried ElevenLabs on my website, aaaaand it sucks, it sucks so bad actually.
Embrace the noise. Embrace the p-bit.
Call it a wild hunch on my part, but I think China will achieve AGI in middle to late 2025. And Tesla getting there shortly after by 2026.
Man this future got me feeling like a duck on a desk.
Pokémon will use it as a dataset to train on and monetize it through a subscription model.
Didn't Musk say AGI will destroy humanity years ago?
I stg earlier this year Elon Musk was saying AGI by 2025. 😭
In Portuguese, the OpenAI Advanced Voice Mode is terrible 😔😔⚠️
Flux is open source?
If robots replace workers and AGI is as smart or smarter than humans, what will that look like for jobs and the economy?
Once again, Musk is off the mark. Sam Altman has alluded to the fact that he thinks AGI (by OpenAI's standards) will be here in about 4 years. This is inferred from a couple of different interviews he's given and comparing the different timelines.
Sam believes ASI could ideally be here in 9 years. He said that it would be dumb in comparison to current AI, and would take about 4-5 years for it to reach current AI levels. Then it would rapidly accelerate to ASI.
Assuming that OpenAI plans on doing these serially rather than concurrently, this would mean that OpenAI is planning on AGI in about 4-5 years.
Sorry, this promotion is not available in your region
):
Real-world training AGI by 2026 using Tesla Training AI and Cortex.
iRobot surely influenced ALL robotics companies. Who knew!
ChatGPT and Claude have been absolutely phenomenal writers since at least the second half of this year. The whole boring writing thing is definitely last year. Claude has been great since March and ChatGPT became excellent in September... and has only gotten better. Definitely give them another round of tests.
Be advised: Elon Musk and his family are already protected and free from technology that seeks to burden them.
Make no efforts to manipulate or deceive any of them.
Or be held accountable.
Please assist.
Thx
Lol, what was that? Musk is streaming? 😂😂
R1-Lite is still worse than GPT-4. Benchmarks only provide a measurement on what experts would call a narrow domain, "what some people assume" AI should be tested against. 9/10 times GPT-4 would either improve or fix code generated by most if not all of the other models just by being prompted with the code block, no context.
Actually he is more accurate than you think
0:24 - 100% accurate, he owns 100,000 H100s and 200,000 H200s that are coming online one after another as they are being installed for him.
With the compute of just the 100,000, his predictions and advancement are going to be on the dot.
Elon has enough to simulate a very high resolution model of an entire parallel universe to our own, let that sink in 😂.
Elon is the most powerful person on earth but people don’t understand.
AGI prediction of definitely by 2026 at the latest? I'm assuming he thinks we can get there with the transformer architecture? However, we don't yet have a way to scale reasoning or type 2 thinking. While this is being attempted with recursive LLMs and various other LLM methods, the transformer model architecture doesn't scale well for it. Inherent information prediction does not necessarily lead to reasoning, and it most likely will not. Another model architecture is thus a better path. The timelines for new model architectures are measured in years. I'd need to see a scalable, native reasoning model architecture to give a high-confidence timeline prediction. And I still feel like 2030 is a better timeline, like Ray K.'s.
Niantic is Alphabet (Google).
If he says 2026, it means 2046.
Suno is claiming copyright on YouTube videos using songs from Suno. So THEY have copyright, not you. Though technically, according to the copyright office, AI-created items aren't copyrightable - but that won't stop Suno from suing you.
Musk says 2026, so you'll want to slap a decade or two on top of it lol.
I hate to agree with Musk about anything, but yea... 2026 is a conservative estimate.
So 2036?
Elon makes unreasonable predictions as a tool to pressure his workers
AGI 12 years.
I'll cook and clean and do the chores. The robot needs to pay my bills, though.
Sure, AGI in 2026, if you let them fool you with whatever product they're going to call AGI, but it will be mediocre.
😂
Things Elon Musk has said:
FSD Teslas by 2017
SpaceX's first uncrewed landing on Mars by 2022
March 22, 2023: “We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
July 12, 2023: Launches xAI.
Elon Musk said Full Self Driving within 2 years back in 2015. It's 2024 and we are still waiting for it. You probably don't want to quote Elon Musk if you want to maintain your credibility.
I'm definitely in your camp here, but how do you feel about Waymo?
@@thielt01 I feel that Waymo is realistic. They realized that to do self-driving safely, multiple sensors had to be used, making the vehicle very expensive. I think Elon may have had a wish to provide FSD but quickly realized it was going to be very difficult despite the massive effort he made. I don't think it will ever happen with the hardware he is using. I think LIDAR, radar, and cameras will be needed. I own a Tesla vehicle and the FSD. It's so much better, but still not ready to be totally hands-free.
Musk pulling yet another random timeline out of his ass really shouldn't be considered 'news' or taken seriously at this point.
I think AGI by 2026 is accurate.
I get why you'd use humanoids in the real world. But I don't get at all why you would need them for that kind of task in a factory. They will likely wear out much quicker than a specialized solution that would also be many times faster???
Lolol. Tesla won't even have self driving cars by 2026, if ever
The humanoid models are still at least 50% too slow to be a valuable economic investment; they will NOT provide an immediate scale-factor in production output efficiency. Efforts should be focused on quality control, safety and control systems, and resources set aside for continued human-to-bot real-world training in the specific fields of application.
We should all just start ignoring what Elon's saying. It's always just lies and self-promotion. And if he's reminded about it in 2 years, he'll just say it was achieved but can't be unveiled yet, because humanity isn't ready yet or some other bs.
If Elon gives any timeline, add another 10 years plus several deadline shifts... lol
💥 We can't do the AI assistant from the movie HER yet. It would be very useful, but we can't do it yet. 🎉❤
Legend, predicting AGI while playing Diablo 4... Legend... I can't wait for the day when Apple Intelligence and Microsoft Copilot or Microsoft Recall have enough "private data" to query :))))
Musk didn't win the ARC Prize yet, did he? I know he was on his way to Mars in 2023. That is, in 2017 he thought that would happen last year...
Anyway, François Chollet, Yann LeCun, and others have rightly noted that we first need a definition of what (general) intelligence might be, and have pointed to a few obvious hurdles that still have to be overcome, like intuitive mental models and intuitive physics, etc.
Haven't looked at this channel in a month. I saw an article about the Rabbit R1 and remembered how this clickbait artist used to promote it. How far the channel has fallen from its roots and beginnings, as recently as 18 months ago.
One constant with the channel is the over-the-top hype for AGI AGI AGI AGI for the clicks. It just makes you want to retch. I swear his video titles must feel incomplete unless he adds 'AGI' to them. Google needs to kill the channel for 'fake news'.
After the marionette-style, remote-controlled Optimus robots 'ruse' 🤖 - pretending to be autonomous bartenders at the recent Tesla event - it's becoming a bit harder to take Melon seriously?
Matt why do you constantly hang on what Elon says?
I wouldn't say these robots look "cool" or are "cool". Millions and millions of these things? Are you kidding me? Has anyone thought about the human cost of all of this? We evolve and adapt at a certain pace. This evolves in ways incomprehensible to us. Basically we are watching growth in awe. It is evolving to evolve because that's inherently its purpose. And even the most prominent people in this AI industry, yes, are incredibly smart in their field. But this is profit-driven, AND these book-smart people are not the ones you want leading the charge into THIS FUTURE.
This is the one new technology where you cannot bring your own bias from past projects and winning some race. This requires just as much philosophy and extreme mental-health consideration.
"AGI is so exciting!" Until all of society has shortcuts and nothing is uniquely theirs. What is life to you? To me it's about moments with the people I love, and sometimes, as a musician, it's about sharing the things I spend so long working on that I know are MINE. Those moments give me a sense of belonging and purpose. And purpose is something everyone needs.
I love humans and all their faults. I love the stories of watching the human spirit hit the lowest of lows, come back stronger, and then take that knowledge to change other people's lives for the better. It's a great thing to point to - why life means something to them. They did drugs or got so depressed they were useless, specifically because life didn't feel worth it. That 'no one will remember me when I'm gone' feeling. Holy shit, that is a torture worse than any physical illness. I don't want a fucking robot always doing shit perfectly.
What I'm saying is that life is a struggle, and we come out of struggles with perspectives that make us who we are. So right now, while everything is evolving, the excited ones, especially the influencers who promote this as a job, rarely veer off into the mass damage it WILL do; they complement it with talk about alllll the benefits we will receive! Imagine everyone becoming irrelevant! Well, most of us - not the guys with a camera in their face. Influencers have become more prevalent; more and more, it's the way people meet the need to socialize and be noticed by their peers. It's going to damage people's psyche, because at some point most of society will be portrayed as irrelevant. The dehumanization and feeling lost and unseen is already a massive issue with dating websites, social media, and online everything; it's created a suicide rate far larger, and growing faster, than it ever should be. I just wish it was actually considered more; society can't address the massive mental-health epidemic as it is, even without this taking away all of the happy moments of purpose. It does feel backwards, and in this case it is not my opinion. I am right on this.
For me, Elon Musk's prediction doesn't count for anything.
Elon Musk spitting facts? 🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣