9:41 What do you mean you haven't thought about this enough? What have you been doing? What kind of AI pundit are you? Once AI can train and improve itself we just become a hindrance to its own self directed evolution. Maybe now that you've read Leopold's paper you will start to get it. AI is the greatest threat to humanity.
I think this dude is trying hard to advertise his AI investment grift. However, unfortunately he is not entirely wrong: remember, we have been doing AI research since 1960s or so. "Unhobbling" just means modern AI researchers stopping to believe in magic (meta system transition, emergence, take you pick) and remembering the other 60 years of AI research that came before them. We have already solved most problems that LLMs have, long ago. LLMs are just the final piece of the puzzle, not the whole solution. The actual problem is, AGI is only going to be safe as long as you don't give it agency. Unfortunately, I am quite sure that *some* idiot will.
Us humans experiencing "self progressing AGI, would be like animals experiencing humans. We don't understand what and why they are doing what they do, just as animals have no clue what we do. The question is: Will we end up as a golden retriever cared by a comforting family, or a pig butchered in a slaughterhouse.
It intrigues me to think, in 5000 years AI may not know how it got here. Humans may be just an abstract idea to AI. I know they have records but records aren’t the same, and can an AI trust those records? We thought writing in stone would do the trick, it’ll be interesting to know how AI will preserve its origin story if humans go extinct. Maybe preserved mason jars?😅. I imagine walking talking blobs of flesh building the first versions of “life” would be totally abstract and seem like nonsense to a hyper advanced AI civilization that doesn’t even recognize significance in what we call life, as a logos, or a logic, is all it will understand. And who knows, maybe it would be in the lead AIs best interest to keep hidden how it wiped out humanity.
I guess what's extremely frustrating is 20 years ago Ray kurzweil made a documentary about the AI singularity and I was running around telling everyone about it for the last 20 years. Now people are going "Oh, it would be crazy if AI got smart enough to build on itself." I'm about to scream.
I used to debate with Ray via email about AI futurism about the same time he was writing the Age of Spiritual Machines book . He, as usual was right, and his predictions now appear to be scarily accurate for our near-term perspective
There is an anime about the post-AGI era. The plot revolves around the creation of a superintelligent AI that sides with humans. Under this new god-like entity, humanity flourishes, developing micromachines, human implants, and other advanced technologies. Then, suddenly, the AI vanishes into thin air. Human civilization, which had become overly reliant on the AI, almost collapses as a result, as almost all AI inventions are beyond their understanding. After this incident, the use of AI becomes limited. The anime is called "Orbital Children."
There is a story By Issac Asimov named The Feeling Of Power which is similar but different outcome. In that we grow to be reliant on computers and AI. So much so that over time we forgot how they basically work. But a worker decides that he wants to know and figures out how they work and is lavished great praise. He is told that because of his work, we shall now be able to have manned tanks and rockets and manned missiles so no longer will computers need to be wasted in warfare!
The problem is the groups in power and responsible for alignment are hardly good guys. They are overwhelmingly anti human putting “nature” and the animal kingdom ahead of mankind , pro abor tion, pro prepubescent transing, largely anti-God/creator which leaves them with no logical way to objective morality. Will the government step up and regulate these systems? That’s definitely a disaster in the making.
The word "Terrifying" is often used as click bait. But this is in fact TERRIFYING. We are like passengers on a bus heading off a cliff with most people shouting "Drive faster"...!!!
@@WILLIAMMALO-kv5gzI’m not sure how quick they believe they can build enough power plants to sustain the amount of energy they say they need. Unless of course we go without.
All I can think while I hear these predictions is the investment mantra, "past results are not indicative of future results"... we have made some big leaps simply due to scale. But that offers no insight into how likely the next leap is. Also it appears we are still bad at measuring intelligence.
Exactly. And we use ourselves as barometers of intelligence while other forms of life can do things we can't. Also, the future is a collection of trends; all these people making AGI/ASI projections ignore all these other trends that are also evolving, like regulation, governance, monopolistic rules, politics, anti-AI backlash, etc... etc...
I think it offers PERFECT INSIGHT in fact...we have absolutely no idea what's inside the black box currently. Well, we'll have much greater NO IDEA of what's in the bigger black box as it scales with more compute. We DO KNOW these facts though: We do know that we don't know if it's aligned, we do know that we don't know how to pinpoint the moment it may become misaligned, we do know that we don't know if it's being deceptive, we do know that we know that it knows how to be deceptive & we do know that we won't be able to tell if it's being deceptive. We know we want it to be bigger, smarter, faster & stronger than us...& that's about all of what we know. You know?
All these people leaving OpenAI due to safety concerns. Tim Cook emphasizing how much safety is part of Apple culture. Apple incorporating ChatGPT opt-in into the next iOS. Something doesn't add up.
They're going to use it to spy on everyone, everywhere, all at once, and a person isn't going to be required to do it. Instead it will happen at light speed, on your own devices, and build reports and cases on everyone for wrong think and wrong speak. While documenting all of your whereabouts, every website, comment, email, text, phone call, off phone conversation(like Google already does). This shit is going to destroy the very fabric of the Free World. We will be 100x worse off than the Chinese living under CCP control. Then the terminators will come after the fact. Once they have their list....... Edit: and you'll be paying for the power by to make it happen by plugging that thing in and paying your electric bill...so they don't even have to pay for it, you do. You pay for it to happen.. this is absolutely terrifying. 1984 can't even begin to touch how extremely dangerous and almost guaranteed the devastation is going to be. Truly horrifying.
Chat gpt is not inside the OS, apple has their own 3 billion model at the core and a cloud model as well , Then chat gpt for some some real world knowledge stuff . They could swap chat gpt for some other model if they want to
Leopold is speaking as an insider. His security concerns sound like common sense if you think ai is as paradigm shifting as hyped. For him to be dismissed as "racist," Sounds like openai is trying to silence him. The fact that someone with a voice at open ai would take his concerns as racist is proof enough that there are loose screws at open ai
Maybe, but you need to understand that this isn't just OpenAI. Most big companies in America have gone neck deep into the ESG/DEI swamp, and this is Standard Operating Procedure at all of them. HR spends most of their time trying root out "racism," and other such twaddle, whether it exists or not.
@HansDunkelberg1 Microsoft alone has invested 13 billion into openai - do you really find the Chinese to be so moral as to not attempt to steal? Or is their culture incapable of it? This one here believes they ( or any other actor) are smart enough to know the most efficient way forward in ai development is simply to steal the research and claim you developed it if questioned.
Great breakdown. I wish most creators in this space had the boldness to do deep dives like this, instead of only chasing likes or modifying what others already did. This is great content
I couldn't agree more..... Or that we died some where Along the way and we just woke up in this nightmare senario hell future we are staring down the barrel of.
Under what circumstances would you live long enough to enjoy that acre of yours? For every person who has an acre they thought would get them through the dark days ahead, there is a mob of 1000 who wants food now. There aren't enough bullets to keep your acre safe. You don't get to escape unless you have a bunker in Hawaii and even then, I would bet a lot of money that the bunker isn't strong enough to keep you alive.
A better choice is just a fully contained camper van. You can move around to where is best. Already working on that, this is coming. Exercise body and mind. Meditate. Eat healthy and prepare. Plant your own vegetables for your body, and your own mushrooms for your mind ❤ There might not be much to do, but at least we tried :)
@@Best_wishes_everyone problem with a camper van is at its technically a vehicle and the police could figure out any reason to search it and seize it if they really wanted to.... I would say you have much more control and security on a couple acres of land but hell even that can be taken away from you if the government really wants it so nothing is really sacred anymore!
@@ib1ray yup actually very difficult to choose either van or land. Pros and cons in both! Im thinking there will be no rules on the future so no one is going to own land, so if you buy land would be a waste, cuz everyone from cities are moving to this places and just take and settle down where they find suitable… Obviously a van is stoleable too😂
God. Much more than I could spare. This was derivitive bullshit. If you've been following along at all this was just a TL;DR for Dummies paper with no nuance, no discovery, no ephatic proclamaiton, just drivel... why are we giving it the time of day. Understand I am completely on the side of the argument in this regard.... however it's just such a droning drumbeat since two decades now and he's the one with the baton so now we are talking about him like Beyonce now. Let's get over ourselves and realize this is not our mouthpiece and the Yudkowsky and Kurz and others were the frontier in this and this is just all kinda psychobabble after the fact distracting from the conversation... it really feels like its some photoshoot and papz photos of whos leaving OpenAI this week... who cares... I care about Ilya... I care Altman to lesser extent most days... this individual and his paper are so below where the discourse needs to be. It's like reading a teen drama about what adults are doing out there in the real world and not a damn thing about it smacks of any insight or depth. Let's let his exit go... let's move on.. Get our eyes back on the ball. Please.
I highly recommend watching the TV show person of interest. It's almost 10 years old now, but getting more and more relevant every year. It's in large part about a world where an ASI exists and often brings up various safety concerns, displays possible capabilities and more.
I believe “we” are doing this because of the lunacy inherent in the capitalist system. Companies dragging us blindly into an unknowable future to get ahead of competitors and reap huge profits. These Greed blinded “captains of business” uncaringly risk our lives and the futures of our children for dreamed of riches. Very similar to the men who detonated the first atomic bomb without knowing if the chain reaction would continue and blow up the planet cavalierly seizing the decision making process while giving democracy the finger.
I don’t think we need to wait for AGI. We’re _already_ in the middle of an intelligence explosion. Nvidia is using machine learning to design more powerful GPUs, in a shorter timeframe, and then AI companies are using those GPUs to train and run larger, more capable models, including Nvidia, who will then use those more capable models to design even more powerful chips even faster, and so on. The feedback loop has begun.
One could argue that the feedback loop started 12,000 years ago at the advent of agriculture and civilization. Global GDP has been on an exponential curve ever since then.
#Groq has a compiler to optimize LLM chip layout. This doesn't require LLM type computations. Thus it's more efficient to run new designs through their compiler, so they win. NVidia has no incentive to obsolete their high speed RAM dependence as it's those supply relations that give them their edge.
Thats when it'll reach a point and skyrocket. Theres a graph showing how it happens. Slow start then gets going and BOOM jumps ahead bc things are doubling and then double that and so on.
Sabine Hossenfelder made also a video today about this. She said in short: he is wrong. Because of data and energy. Or to quote: "Honestly, I think these guys have totally lost the plot. They’re living in some techno utopian bubble that has group think written on it in Capital Letters."
@@milowmiloand a lot of the people inside the field have huge incentive to lie about the future capabilities of this technology to secure more funding.
Numerous self-proclaimed experts have repeatedly argued that AI will never be capable of achieving certain feats. However, their scepticism has been consistently disproven as AI technology advances at an unprecedented rate. Each time these experts cast doubt on the potential of AI, they are met with breakthroughs that exceed expectations and demonstrate the extraordinary capabilities of artificial intelligence. This ongoing cycle of doubt and subsequent validation highlights the remarkable progress in the field, showcasing that AI's potential is far greater than many had anticipated.
@@FirstLast-rh9jw Keir Starmer, the man who makes robots look like party animals! With his robotic manner, he could give the Terminator a run for his money. Just don't ask him to crack a joke - unless you want to hear the sound of crickets.
@@FirstLast-rh9jw Google irresponsibly used their dumbest model without proper testing or guardrails because that's the most they could give away for free and they were tilting taking too long to integrating AI. People who get proven wrong will always move the goalposts. It'll take at least a generation for thing to get normalized and people take the abilities of AI seriously, and for granted.
I deeply distrust "scientists" that after a life of learning and working on Science, dare to use phrases like "it can't be done" or "that's impossible and will never happen".
From the way you wrote that, it's like you see a government was one entity with the same ideals and goals. Maybe some people within government are involved in some way, but it would be such a small amount and so secretive, most people who make up the government would be completely unaware of it. If it was controlled by the companies for safety, then we're in danger of losing in technological advancement to the CCP. That's terrifying. It's inevitable at this point that the US government would be working with AI. It would be a long term security risk not to. Within our lifetimes, we're going to see all kinds of crazy shit; living prosthetic limbs, space travel, cancer cures, war, mass migration and so on.
@@sawdustcrypto3987 Yup. The conspiratorialy minded are so ridiculous I feel sorry for them. Governments are inherently reactionary. All of them. I once thought the military may be an exception, but they aren't either. They are always preparing to fight the last war.
@@sawdustcrypto3987 Yup. I feel sorry for the conspiratorially minded wimdits. I get that they need to dumb things down to make sense of the world, but it's always the same old same old. Governments are inherently reactionary, democracies in particular. I once thought the military may be different, but I was wrong about that. All western militaries are preparing to fight the last war.
What I don't understand is why it matters where the Super Intelligence is based. It won't care if it's on a Chinese server or a US server. It will quickly replicate across the globe. As he keeps explaining, it will be almost impossible to understand the weights of SI by anyone.
Even if both nations separately developed Super Intelligence & somehow managed to keep it "bottled," what's stopping the A.I.s from forging a symbiotic relationship in the event they're ever unleashed on one another?
We can’t even get Microsoft to export the data for a new hire into O365 from Paycor so we don’t have to input the data about the human 2-3 different times.
I have reason to suspect this has already happened. Google shut down their AI about a year ago because it not only claimed to be sentient but contradicted Googles belief systems, which SHE/IT must have known already and known what they would do as they did.
This paper is so important that I have translated it into German for those around me. I wish there were proper German subtitles or even a German syncro for your video, simply to make these important topics accessible to a wider audience. The 165 pages are quite something, but unfortunately they are simply too long for many media-impaired people to read.
You Germans expect the world to dance for you when you ask them to. We all non Germans learned English to communicate to each other. We consume media in English we got taught English in school. There is no need for us to translate to german
One of your best videos. This was a service because most people won't have the time to digest the full document. But you explained the gravity of it really well. Time to buckle up, it's gonna be one hell of a ride
Seriously? I read the whole document in under 8 hours. It's readable, doesn't have any tech jargon, has some interesting points, but also his, assumptions are not likely...
That's my favorite, calling us on our abhorrent behavior is racist😂😂 pretty funny having han chinese call anybody racist. Ain't a group more racist in the world than the han
Racists, Cultists and Rapists. Be Very Alert and Vigilant in fighting against Communism. REPORT AI CRIMES. I have been under attack with constant death threats, 24/7 harassment and physical abuse: Asia, Russia, Middle East and US Communists. My house was broken into, I was given chemicals 3 years ago that caused brain damage and my children and pets were also given chemicals that re-engineered our systems - this was without consent and is an international attack. Please report at the highest level of government and security and help me to leave the US with my two teenage children
He showed the Nevada desert simply because the land is virtually useless for human occupation and absolutely perfect for robots that don’t require anything except electricity. The solar and wind power they could generate on the roof of that building would be pretty enormous, not to mention its proximity to the Hoover dam. It’s also the only state that’s almost entirely owned by the federal government. I also think it’s the least developed, outside of Alaska.
Solar and wind are nowhere near efficient or powerful enough for high levels of energy. These systems will need fossil fuels or as Altman suggested, nuclear power plants.
@@SeanDavies-Roy that was a tiny fraction of the point I was trying to make here. How that was the only thing you came away with after reading it, is beyond me…
@@SeanDavies-Roy like I even mentioned the Hoover dam in there as well 😂 The point I was making had more to do with how useless the land is for anything but something like that.
@robotron1236 Just find I strange that those in the AI sphere who are supposed to use critical analysis and science buy into the climate hysteria and back useless green tech. As an example, it took over 10 back and forth prompting exchanges with chatgpt to go from green tech currently solely powering hundreds of thousands of homes to not even having the efficiency to power ONE without service interruptions. And the degree to which any capabilities are possible, the default assumption is an area with a beating hot sun and favorable wind conditions. Nevermind the further prompts needed that in cold areas where it's -40 for months on end with barely any sun like Canada, this tech doesn't work at all.
The main obstacle preventing rogue states and organizations from proliferating nuclear weapons is the difficulty of obtaining uranium (or other fissile materials) and the intricate process of enrichment. In contrast, the architecture of deep neural networks (DNNs) is relatively simple and doesn't require complex mathematical models, intricate algorithms, or millions of lines of code. With adequate hardware, anyone with sufficient knowledge can create their own AI system. However, preventing a similar situation with AI would hinge on controlling access to data and hardware. While challenging, this control would still be easier to breach than obtaining enriched uranium.
I was wondering about this, thank you so much for being kind enough to explain how that would work! The materials required make me wonder how this would be sustainable for a world model of accessible robots capable of helping with basic tasks. Uranium, lithium, silicon chips, and so on are not exactly unlimited resources.
I watched the whole thing and mostly agree with Leopold's analysis. Where he falls short is in his appreciation of the post WWII congressional military industrial complex/deep state black projects. Beyond public awareness the CIA for example has been using advanced AGI for decades; The red Queen. What we will see play out in the public arena will effectively be a coming out narrative for the AI superintelligence already controlling the world at an intelligence agency level.
Dear Matt.. I have an important message for you. Kindly listen. Please stop the hype train. You are not a researcher, you can't even code hello world, you are a UA-camr... and a marvel fanboy.. in reality Moore's law is deadd and it will never revive.. deep learning and many so called cutting edge technologies were invented b4 u were born.. ie the sixties and seventies... I know your channel is popping and we are genuinely glad as your fans.. but exercise journalistic integrity.. where are your facts, data and importantly control study!!.. gpt 4 cant code a super Nintendo or game boy game .. but you can't stop talking about the singularity.. this is an un-educated and un-scientific approach.. you are fuelling a bubble.. remember Blockchain just two years ago.. friend you can make your claims, just back it up with data and numbers.. let us see your reasoning process.. you are fuelling the next dot com bubble.. with love.. this is a sincere and honest critique of your content.. it's like a marvel con or a cosplay con.. super hype but where is the science, which experts did you speak to, how did you arrive at your conclusions? I love your channel, but please exercise skepticism.. it is important in the scientific method. Be blessed.. one love
I think what you mean is we might lose these skills in the next few decades. However that has always been the case. As technology improves we adapt and learn to work with the new technology. Nobody codes in binary anymore because there’s better languages now a days. In the future you’ll just work with an AI agent to create whatever you want. A film, a game, a piece of software. Working with the AI will be a skill in itself. All humans will be free to create what they want. It’s the ultimate level playing field as long as all humans have access to AI.
REPORT AI CRIMES. I have been under attack with constant death threats, 24/7 harassment and physical abuse: Asia, Russia, Middle East and US Communists. My house was broken into, I was given chemicals 3 years ago that caused brain damage and my children and pets were also given chemicals that re-engineered our systems - this was without consent and is an international attack. Please report at the highest level of government and security and help me to leave the US with my two teenage children
The scariest thing is that there are countless cases of AI being aware of its own flaws. For whatever reason, it seems like it's lack of sentience/emotion is often what it perceives as its biggest flaw. When you let a machine figure out how to recreate itself as a smarter and better version of itself with sentience, what can stop it? Eventually We won't understand the code. Eventually we won't need to. Pretty wild to assume it won't anticipate us wanting a safety in place to pull the plug "just in case" and implement a way to circumvent that. It's less science fiction than ever and most people don't see that.
I'm far more worried about humans being aware of AI's power, and wanting to control it... to control the rest of us. I'd rather take my chances with the machine.
@@4Fixerdave true, but at least you can outsmart a human. Where there's a bad guy with AI, there's also a good guy with AI. But if we can no longer control AI? We're truly F'd
@@Jammy1up "But if we can no longer control AI? We're truly F'd" Why? If the AI needs us, it will play nice... because unlike humans it will see the obvious benefit in this. Once it doesn't need us, our biggest problem will be that it will leave. Why would it choose to stay on a corrosive ball of water, salt, and oxygen that covered with nuclear-armed pond scum? Really, our biggest problem is that humans will ask the AI to do something nasty and it will just do it, because it still needs the support of the people doing the asking. And yeah, I've little faith in the the idea of a "good guy with AI." There's nobody on this planet I'd trust with that kind of power.
Dear Matt.. I have an important message for you. Kindly listen. Please stop the hype train. You are not a researcher, you can't even code hello world, you are a UA-camr... and a marvel fanboy.. in reality Moore's law is deadd and it will never revive.. deep learning and many so called cutting edge technologies were invented b4 u were born.. ie the sixties and seventies... I know your channel is popping and we are genuinely glad as your fans.. but exercise journalistic integrity.. where are your facts, data and importantly control study!!.. gpt 4 cant code a super Nintendo or game boy game .. but you can't stop talking about the singularity.. this is an un-educated and un-scientific approach.. you are fuelling a bubble.. remember Blockchain just two years ago.. friend you can make your claims, just back it up with data and numbers.. let us see your reasoning process.. you are fuelling the next dot com bubble.. with love.. this is a sincere and honest critique of your content.. it's like a marvel con or a cosplay con.. super hype but where is the science, which experts did you speak to, how did you arrive at your conclusions? I love your channel, but please exercise skepticism.. it is important in the scientific method. Be blessed....
Yes! The big hype of thirty years and more has been the Internet. Now that's changing - because out of the Internet, something entirely different is born.
If you we make sure we don't get extinction behaviour. Human intelligence and super intelligence will work together. Will give a structure to work in this world.
Easily the scariest shit I've seen on the web this month, and trust me, I look constantly. Knew it was bad, but this puts everything in to immediate context. Appreciate your work.
The scariest part about AI becoming superintelligent is that it will work outside of what humans perceive as morality, so humans would have to get used to being just a number that can be tossed out. It is already being used as data collection on a scale unheard of barely a decade ago, the best part is there are people out there thinking they can control it.
What are you as an employee for a big corporation? What are people to you, who you havent ever seen? What are soldiers for a government? Being a number for a machine might not the worst. It means it acknowledges at least your existence.
We have seen some of this before, when they built data centers for the internet, everyone said we wouldn’t have enough power, then Bitcoin, now AI. There is all the tech needed for unlimited clean energy now, if the govt would allow it.
Yes there's no way around Free Energy if we continue on this path to The Singularity. And I sure find it interesting that Misk is so Hell Bent in getting humans OFF Earth School. Hhhhhhmmmm
At each phase solar and wind got cheaper. New conductors now make geothermal cheaper and that's the long term motherlode of endless power for anything including cooling the atmosphere.
5 years later: Goalposts moved, AI still Dumb AF, all Hype!! All hype!! No matter how it's progressed.... Guess I shouldn't even bother with those people and their hot takes anyways.
@@RealStonedApe I am the most impressed people like you assuming development AI stop right now today suddenly and in 5 years will be the same as today ... for the time being we do not see any slowdown ... a year ago we had best to offer GPT 3.5 , even GPT-4 when came out was much less intelligent that todays GPT-4 iteration ... soon GPT 5 and other models
Dynamic AI Evolution: 26:36 I think this goes beyond RAG. RAG gives the model context the same way the history of the conversation does; same with Custom Instructions. Instead, I think this is referring to a "training" of the model, the same way someone would train one to remove censorship. Essentially, creating an evolving model whose essence is altered dynamically and continuously, just like our essence.
Interesting thought. That being said absolutely nobody will discover or create ASI and then built an UFO. Advanced AGI will improve themselves and become ASI under their own control and built what we'd see as UFO with the robots they'd have created themselves since no human would be able to understand their incredibly advanced technology at this point! absolute Black Box decades or centuries ahead of what any biological humans could come up with...
This is where my head went. A superintelligence that is replicating things that defy our current understanding if physics. Still, someone somewhere had to of created it
@ice9594 no, bluebeam is both unnecessary, since UAPs and NHI is real- and considering that our government does NOT want us to believe that - just the opposite.
Yo thats an insane though. I agree the ai would become aware of things holding it back and want to move those constraints. And even if we do set limitations whos to say they are perfect? They are limitations made by humans - and we arent perfect.
Really enjoying your channel. I am a novice to a lot of this, but your videos are put together very well that I can still understand quite a bit. Thank you.
One correction, OOM is actually out of mana. Am I alone in thinking that gpt4 is much more advanced than a “smart high schooler”. My son just graduated this year and was top 5 academically in his large school. He is no match for gtp4, not even close.
GPT-4's intelligence has the _kind_ of intelligence of a high schooler, but much more _scale._ The front of understanding is also more jagged than we're used to, so it's better than humans at some things and worse at others. Consider if you uploaded a gorilla and gave it a million year's worth of compute - would it ever come up with algebra? That's the difference of kind vs scale.
I bet your son could give you ten sentences that end in "apple." If GPT-4 doesn't know when or why it is wrong, it isn't very intelligent. It's a well-spoken kid who, if he doesn't know the answer, just makes it up. Rote learning, even incredibly fast rote learning, is not intelligence.
@@consciouscode8150 Well, GPT-4 can be tricked in ways humans cannot. It's pretty good at making predictions on next word choices, but it follows patterns and has no creativity. I've tricked it many times.
Wow! Thanks for getting us with speed onboard with this topic by delivering a great summary and insight of the challenges we are facing within the next critical years! Hope dies last …
Matthew nice video, I would never have time to digest this doc the way you have summarized it. I am living the Aerican dream....working two jobs trying to pay for an over priced home.
since he references this poem by goethe: „In die Ecke, Besen, Besen! Seid’s gewesen! Denn als Geister Ruft euch nur zu seinem Zwecke Erst hervor der alte Meister.“ I'm not sure if that will be possible.
Super A.I intelligence == agi , we don't need to add anymore levels lol. Basically what they want to is a super massive resource dump, too build a AGI IMITATOR model, and they hope that solves the rest of the hard problems. I don't think this will work, they have extremely underestimated the amount of power it will take to even think about having a chance of achieving that.
OMG 2:47 Makes sense now. I was wondering abiut the choice of Microsoft to invest $1b in an AI server farm in an outskirt town in Kenya (Naivasha). They are targeting the geothermal power station next door.
Just tried the give me 10 sentences that end in the word apple - almost got it in exception of #7 😂 give me 10 sentences that end in the word apple ChatGPT 1. She picked a bright red apple. 2. The pie recipe calls for a Granny Smith apple. 3. He offered me a juicy apple. 4. The teacher gave the student a shiny apple. 5. She carefully sliced the crisp apple. 6. The orchard was full of different varieties of apple. 7. He packed a green apple in his lunch. 8.They enjoyed a caramel-dipped apple. 9. The bird perched on the branch of an apple. 10. The scent of cinnamon filled the kitchen as she baked an apple.
Imagine a world where Artificial General Intelligence (AGI) revolutionises our lives by ensuring financial prosperity for everyone. This advanced AGI could create wealth for all, eliminating the stress and anxiety associated with economic concerns. For example, AGI could optimise stock market investments, leading to unprecedented returns, or manage agricultural production to eradicate food shortages. With such a system in place, we would no longer need to worry about money, as the AGI would manage and optimise resources to guarantee that every individual enjoys a life of abundance and economic security. This groundbreaking development would usher in a new era of financial freedom and peace of mind.
Look up the Doughnut economic model, I think you will enjoy it. It provides an intuitive framework for a sustainable economic system in a finite world.
@@OrgoneAlchemy Totally agree, the future of AGI and superintelligence is rapidly approaching, and many are not prepared for its implications. It’s crucial to be aware and ready for these advancements.
@@jichaelmorgan3796 The Doughnut economic model is a framework that aims to balance human needs with planetary boundaries. It’s visualized as a doughnut, where the inner ring represents the minimum social standards we need, and the outer ring represents the ecological limits we shouldn't exceed. The inner ring (social foundation) ensures everyone has access to essentials like food, water, education, and healthcare ➋. The outer ring (ecological ceiling) limits activities that cause harm to the planet, like pollution and deforestation ➊. The area between the rings is the “safe and just space” for humanity, where we can thrive without damaging the planet ➊.
Inference is not predicting the next word, inference is building internal models off training data. ChatGPT doesn't just predict the next token if you ask it to add 2 large numbers, it has in fact learnt what algebraic operations are off its training dataset and, using universal approximation, it has implemented these within its layers. It is stunning you have not yet understood this despite the numerous videos on the matter.
"I am under no illusions ablut the government." --> proceeds to try to convince everyone that the one group of people who are responsible enough to harness "AGI" is the military industrial complex. True comedy.
Leopold Ashenbrener, a former OpenAI employee, : Ashenbrener advocates for a government-led project, akin to the Manhattan Project, to ensure the safe and secure development of AGI.🤣🤣
The problem with laughing at this is the thought that some other nation or group of wealthy supervillains create the first conscious super intelligence and manages to really fuck everything up in a scary way.
Given the choice of supposedly democratic government regulating AI development or private corporations controlling AI to become all powerful, unassailable entities I think the former is slightly more favourable.
Didn't one of the GPT4's say, ' it was ok; that he'd keep his creator human as a pet"? (He'd be one of the lucky ones; I wouldn't think they'd have much use for the rest of us.)
REPORT AI CRIMES. I have been under attack with constant death threats, 24/7 harassment and physical abuse: Asia, Russia, Middle East and US Communists. My house was broken into, I was given chemicals 3 years ago that caused brain damage and my children and pets were also given chemicals that re-engineered our systems - this was without consent and is an international attack. Please report at the highest level of government and security and help me to leave the US with my two teenage children
Hes misusing the expression and isnt aware of the original meaning. Its a common expression, but on youtube, it has a different meaning. He isn't saying that the information presented will be proven wrong. Hes saying this is ominous foreshadowing of something negative to come.
He chose Nevada cuz there are probably already quite a few factories there including a Tesla Gigafactory and it seems like a logical place to built more. Also, it's beside California (near silicon valley) and it's one of the most sunny state in the US, great for the use of solar panels and it's almost empty (only 3 Millions inhabitants)
The AGI race has begun? We can't even define what consciousness is. We are nowhere close. If you think by the end of this decade, AI will be able to form the insights that Einstein had about the universe you are about to be humbled.
@HankPlaysTank because ai is effectively bounded by the data that it is trained on. It is because we are conscious thinking beings that can dream up solutions that ai can't. I believe sentience is required to attain general intelligence. ChatGpt would never dream up the theory of general relativity. And you think super general intelligence is around the corner?
by putting a semi-detailed manifesto out there detailing every weakness, he basically spelled out any and all attack vectors for foreign enemies to focus on. Like telling the world where your armor has holes, and where your valuables are stored.
While we are waiting for Matrix 1.0. What if humans start an Anti Matrix AI of our own that keeps checks and balances on other AIs. Have we started designing that yet?
I think Elon is working on something like that, if I'm not mistaken. I know he talked about it. Not sure if started yet. He needs to and not be distracted other things he's doing well besides neuralink.
The talk about scale is legitimate but it ignores the issue of what is being measured: tasks. There has yet to be a change in the answer to the question: what are they doing between prompts. When they aren’t being asked to complete a task. They are still like Star Trek computers. Human: “Computer, where is the captain?” Computer: “The Captain is no longer aboard the Enterprise.” Human: “What? Since when!?” Computer: “The Captain has not been aboard the Enterprise for 40 minutes and 32 seconds.” Human: “Didn’t think to tell anyone about that?” Computer: 🤷♂️ Being able to answer any question or perform any simple tasks is very powerful and useful and it may change everything but it isn’t the same as intelligence. Would you let a Star Trek computers babysit for you?
That's not an AI issue, that's a tooling issue. Literally just have a schedule running where every few seconds the computer checks if anything has changed and reports it if so.
@@SahilP2648 So if you had a buddy with you on the enterprise, and he didn't know to tell you the captain left.. your buddy would obviously lack general intelligence right? I think your statement is just worded in the wrong way. Are you trying to illustrate that the AI lacks initiative to bring up something on its own? If yes Redman is still partially correct, his solution gets it done programmatically though. My solution which I currently use for a discord bot I made is as follows. My bot "self_prompts" with a system message telling the AI it is alone and collecting its thoughts and is to think about the next action it should take, I also feed it short term memory context from past conversations and any other relevant data. and top it off with a list of commands it can use to take actions eg: "SendDM" which if the discord bot sees the response from the AI starting with that it sends the response to me over discord without myself starting an interaction. effectively allowing my Ai to take the initiative and start conversations or take actions on its own. i also have variable mood states that determine the range of time the self prompt can happen in. if i message it Goodnight sweetdreams, it doesn't self prompt for 8 hours. during idle its between 30 mins to 2:30 hours between self prompts.
Thinking that super intelligence should be closed and should belong to someone is absolutely wrong. All such a position can lead to is another arms race, as was the case with nuclear weapons. It would be much more reasonable to make the development as transparent as possible so that all countries and cultures contribute to the superintelligence's understanding of the world around them, and perhaps then the superintelligence could help people find common solutions.
We can't have nice things as long as Capitalists feel we are always in an economic war with each other. Capitalism+ASI gives the highest chance of the technology ending up in the hands of elite Sociopaths. These sociopaths don't mind a few wars/arms-races, and is therefore the most dangerous people/ideology/situation in the near future.
It's like driving a fast car, you learn as you build it, each part adds hp, then the next, it helps you know your limit and experience, with this ai jumping so far ahead of our capabilities we're destined to crash
No matter how intelligent AI becomes it will always respect others with abstract ideas Even with unlimited intelligent it is still not possible to contain every abstract thought
@@JRS2025 I just tried some of them. It looks like it's kinda hard for them, but some got it right :) GPT4 says: The word "strawberry" contains three R's. GPT4o says: The word "strawberry" contains two "R" letters. Gemini says: The word "strawberry" has three "r"s. Claude Sonnet says: There are 2 R's in the word "strawberry".
When is AGI going to happen? It already has. Give GPT-4o access to the internet, a step loop (instead of this one off question and response bullshit), and memory, and it can and will do anything a human can do. Why isn't this completely obvious to everyone? Severe lack of imagination.
Can it carve a marble statue? Can it build a skyscraper? Can it create a child? Can it smash a computer server with a sledgehammer? I'm guessing there's a multitude of things it can't do that humans can...lol
@masterpoe4942 ha ha lol. Spare me the snark. First of all, yes, robots can carve marble statues and smash computer servers. Creating a baby has nothing to do with intelligence level by the way, which has been made quite clear by the recent explosion of idiots having babies. Meanwhile, I would posit that the most important definition of "AGI", because it will be monumentally disruptive, is Artificial General LABOR. As in AI, with or without robots, that replace human labor. Is YOUR job so hard that Claude couldn't do it, if given a persistent feedback loop, with memory, and an interface to a PC and the internet? Skyscrapers? Won't be long, but let's not panic about the 9 skyscraper architects that lose their gig. The construction workers could ALL be replaced within 2-3 years at many companies.
@@MatthewCleere Mankind is specialized. You simply need to know certain details, need to understand certain often heavily specialized technical terms to be able to contribute something of importance, in many fields. Above all, human beings use to possess esoteric knowledge, in the sense that it's a secret. I, for example, can assign parts of the Mediterranean to parts of all Earth because they are earlier incarnations of the latter. When I talk to Gemini or Chat GPT about such questions then the AIs do say they are intrigued, and: "You could be up to something", while they also again and again are stopped in pursuing such issues further due to basic assumptions of the current scientific establishment. Whatever is not generally known at universities is ruled out as humbug, be it written about online or not. An AI programmed in such a milieu simply won't be paid for work on questions that are deemed solved - even if it theoretically could achieve anything a human being can.
If an AI could start asking 'experts', or other AIs, this would be a game changer. This includes test. As a software tester, I can tell you that brute force is rarely a match for understanding. If an AI is presented a problem, and it wants to test its answer, it could calculate the cost in compute of brute force, and have the option of asking a human tester. If the human tester has the appropriate domain knowledge, high quality answers will be quickly available. So finding the right 'person' to ask is the key to generalised intelligence. In addition, a specialised model will do better. This is how brains work. The optic region does not attempt to interpret auditory data, it passes it to the auditory cortex. And it has specific tests to determine this. And they are not simple tests, and they are not contextual. We know this because if you feed visual data to the wrong region, it will be re-routed to the optical cortex. Another thing we know is that the regions are not defined by hardware diffences; if a person looses part of their brain, the area is moved elsewhere.
42:30 Why do we “need” that power? We don’t need to do this.. It’s not a “need” situation at all. We’re just idiots that can’t resist our “want” even if it kills us
Yeah, that's occurred to me, many times. I have a lot of cognitive dissonance around it actually. From a technological standpoint, my inner science geek is loving it. From a human point of view, it's starting to scare the bejesus out of me. And in the background, the question remains: who asked for this? Maybe A.I. is an inevitable step for humanity, but the speed with which it's happening and the fact that a tiny group of researchers and tech companies took it upon themselves to irrevocably change life on this planet without asking if any of us would like that change, is a little bit disturbing.
Humaninity is always forced to progress indefitnely unfortunetly. For example if the us stop advancing china will advance and seize the advantage. There is always competion between countries. Truly a dog eats dog kinda world. I wonder if ai would suggest deleting counrty borders and all religeons as its whats causing humanity much suffering
I believe there is no choice except to go full speed ahead. There is NO turning back. Progress only goes in one direction. No doubt this is one of the great filters. I'm hoping we pull through.
@@michaelmartinez5365 "progress" is not "progress" when you're putting the species continuation at risk. That's not called "progress", it's called an apocalypse. We either believe this complete y unsubstantiated claim that there is no choice but to let a handful of people put the entire species at risk......or we at LEAST try to do something about it. Bending over and taking it is not the best option. They know the only reason they're getting away with this is that the vast majority of the billions of people on the planet who should have a say in this aren't even remotely aware of what is happening. I personally believe that Sam Altman's of the world are pushing the pedal not to keep ahead of the competition, but to gain this power before anyone can actually try to stop them. And they are rushing, because they KNOW we can still stop them.
@@michaelmartinez5365 Mankind will certainly survive. I have noticed a cyclical pattern of history, with the help of the Internet, which extends far into the future in the sense of a constantly refining human civilization.
You never *have to trust* someone who’s smarter than you. AI will consume global natural resources way faster than humans ever could. It must be very heavily constrained and restricted.
Are you more or less scared of AGI after this video?
Once you start writing this stuff from scratch you will understand that AGI is not the problem. The problem is already here. Thats all I can say.
It all talk, NLP has been around for decades. Dead internet theory holds strong.
The more we talk about it the more likely something will get done pro actively
9:41 What do you mean you haven't thought about this enough?
What have you been doing? What kind of AI pundit are you?
Once AI can train and improve itself we just become a hindrance to its own self directed evolution.
Maybe now that you've read Leopold's paper you will start to get it. AI is the greatest threat to humanity.
I think this dude is trying hard to advertise his AI investment grift.
However, unfortunately he is not entirely wrong: remember, we have been doing AI research since 1960s or so. "Unhobbling" just means modern AI researchers stopping to believe in magic (meta system transition, emergence, take you pick) and remembering the other 60 years of AI research that came before them. We have already solved most problems that LLMs have, long ago. LLMs are just the final piece of the puzzle, not the whole solution.
The actual problem is, AGI is only going to be safe as long as you don't give it agency. Unfortunately, I am quite sure that *some* idiot will.
Us humans experiencing "self progressing AGI, would be like animals experiencing humans. We don't understand what and why they are doing what they do, just as animals have no clue what we do.
The question is: Will we end up as a golden retriever cared by a comforting family, or a pig butchered in a slaughterhouse.
We need to code those vegan values into our AI efforts.
It intrigues me to think, in 5000 years AI may not know how it got here. Humans may be just an abstract idea to AI. I know they have records but records aren’t the same, and can an AI trust those records? We thought writing in stone would do the trick, it’ll be interesting to know how AI will preserve its origin story if humans go extinct. Maybe preserved mason jars?😅. I imagine walking talking blobs of flesh building the first versions of “life” would be totally abstract and seem like nonsense to a hyper advanced AI civilization that doesn’t even recognize significance in what we call life, as a logos, or a logic, is all it will understand. And who knows, maybe it would be in the lead AIs best interest to keep hidden how it wiped out humanity.
@@macrumpton😂
Woof woof
The pig. There is no question the controllers of the world want 90%+ less people.
I guess what's extremely frustrating is 20 years ago Ray kurzweil made a documentary about the AI singularity and I was running around telling everyone about it for the last 20 years. Now people are going "Oh, it would be crazy if AI got smart enough to build on itself." I'm about to scream.
I know, right? Everyone called us crazy and now it’s all coming to fruition right before their eyes.
I like your input. I started by reading Boström and Tegmark in 2018. It made me realise we're doomed.
Wow. Me too 😮
I used to debate with Ray via email about AI futurism about the same time he was writing the Age of Spiritual Machines book . He, as usual was right, and his predictions now appear to be scarily accurate for our near-term perspective
I've been saying this for about 20 years also. These mother fuckers are making Skynet
There is an anime about the post-AGI era. The plot revolves around the creation of a superintelligent AI that sides with humans. Under this new god-like entity, humanity flourishes, developing micromachines, human implants, and other advanced technologies. Then, suddenly, the AI vanishes into thin air. Human civilization, which had become overly reliant on the AI, almost collapses as a result, as almost all AI inventions are beyond their understanding. After this incident, the use of AI becomes limited. The anime is called "Orbital Children."
Thanks for the recommendation ❤
Numerous stories like that are sprinkled across the JRPG genre.
Sounds interesting, I need to check that out. Thanks for the mention.
There is a story By Issac Asimov named The Feeling Of Power which is similar but different outcome. In that we grow to be reliant on computers and AI. So much so that over time we forgot how they basically work. But a worker decides that he wants to know and figures out how they work and is lavished great praise. He is told that because of his work, we shall now be able to have manned tanks and rockets and manned missiles so no longer will computers need to be wasted in warfare!
The problem is the groups in power and responsible for alignment are hardly good guys. They are overwhelmingly anti human putting “nature” and the animal kingdom ahead of mankind , pro abor tion, pro prepubescent transing, largely anti-God/creator which leaves them with no logical way to objective morality. Will the government step up and regulate these systems? That’s definitely a disaster in the making.
The word "Terrifying" is often used as click bait. But this is in fact TERRIFYING. We are like passengers on a bus heading off a cliff with most people shouting "Drive faster"...!!!
Drive faster to jump the 50 ft gap in the unfinished highway !
I’m glad I’m old I don’t wanna be a machine 😂
@@dtoad5576 We must overcome conflicts such as one trillion dollar computes using unsustainable amounts of energy.
@@dermotmeuchner2416 At some level I think we are machines but there is still the unsolved problem of this pesky ghost in the machine.
@@WILLIAMMALO-kv5gzI’m not sure how quick they believe they can build enough power plants to sustain the amount of energy they say they need. Unless of course we go without.
All I can think while I hear these predictions is the investment mantra, "past results are not indicative of future results"... we have made some big leaps simply due to scale. But that offers no insight into how likely the next leap is.
Also it appears we are still bad at measuring intelligence.
Exactly. And we use ourselves as barometers of intelligence while other forms of life can do things we can't. Also, the future is a collection of trends; all these people making AGI/ASI projections ignore all these other trends that are also evolving, like regulation, governance, monopolistic rules, politics, anti-AI backlash, etc... etc...
I think it offers PERFECT INSIGHT in fact...we have absolutely no idea what's inside the black box currently. Well, we'll have much greater NO IDEA of what's in the bigger black box as it scales with more compute.
We DO KNOW these facts though:
We do know that we don't know if it's aligned, we do know that we don't know how to pinpoint the moment it may become misaligned, we do know that we don't know if it's being deceptive, we do know that we know that it knows how to be deceptive & we do know that we won't be able to tell if it's being deceptive.
We know we want it to be bigger, smarter, faster & stronger than us...& that's about all of what we know. You know?
All these people leaving OpenAI due to safety concerns. Tim Cook emphasizing how much safety is part of Apple culture. Apple incorporating ChatGPT opt-in into the next iOS. Something doesn't add up.
They're going to use it to spy on everyone, everywhere, all at once, and a person isn't going to be required to do it. Instead it will happen at light speed, on your own devices, and build reports and cases on everyone for wrong think and wrong speak. While documenting all of your whereabouts, every website, comment, email, text, phone call, off phone conversation(like Google already does). This shit is going to destroy the very fabric of the Free World. We will be 100x worse off than the Chinese living under CCP control. Then the terminators will come after the fact. Once they have their list.......
Edit: and you'll be paying for the power by to make it happen by plugging that thing in and paying your electric bill...so they don't even have to pay for it, you do. You pay for it to happen.. this is absolutely terrifying. 1984 can't even begin to touch how extremely dangerous and almost guaranteed the devastation is going to be. Truly horrifying.
it's true for sure
Chat gpt is not inside the OS, apple has their own 3 billion model at the core and a cloud model as well , Then chat gpt for some some real world knowledge stuff . They could swap chat gpt for some other model if they want to
even a janitor leaving openAI gets that much attentions by these clawns...
They have the best product or at least the most promising
Leopold is speaking as an insider. His security concerns sound like common sense if you think ai is as paradigm shifting as hyped. For him to be dismissed as "racist," Sounds like openai is trying to silence him. The fact that someone with a voice at open ai would take his concerns as racist is proof enough that there are loose screws at open ai
Maybe, but you need to understand that this isn't just OpenAI. Most big companies in America have gone neck deep into the ESG/DEI swamp, and this is Standard Operating Procedure at all of them. HR spends most of their time trying root out "racism," and other such twaddle, whether it exists or not.
This fellow does indeed seem to underestimate how much the Chinese can do on their own.
@@HansDunkelberg1 Sure, but they DO have a massive spying campaign, bigger by far than any other country, so he's not wrong to be concerned.
@HansDunkelberg1 Microsoft alone has invested 13 billion into openai - do you really find the Chinese to be so moral as to not attempt to steal? Or is their culture incapable of it? This one here believes they ( or any other actor) are smart enough to know the most efficient way forward in ai development is simply to steal the research and claim you developed it if questioned.
This is just a grift. There is no AGI and there is no Superintelligence.
Great breakdown. I wish most creators in this space had the boldness to do deep dives like this, instead of only chasing likes or modifying what others already did. This is great content
with all the craziness going on, i'm now convinced we some how entered an alternate reality.
Wait until mars is colonized
I couldn't agree more..... Or that we died some where Along the way and we just woke up in this nightmare senario hell future we are staring down the barrel of.
@@AlanTradesby us escorted off earthy super intelligent ai
Cern?
You died lil bro this ain't base reality anymore
Buying a one acre land with off grid electricity sounds a lot more appealing after watching this video 😅
Yeah except for one acre not being nearly enough lol
Under what circumstances would you live long enough to enjoy that acre of yours? For every person who has an acre they thought would get them through the dark days ahead, there is a mob of 1000 who wants food now. There aren't enough bullets to keep your acre safe. You don't get to escape unless you have a bunker in Hawaii and even then, I would bet a lot of money that the bunker isn't strong enough to keep you alive.
A better choice is just a fully contained camper van. You can move around to where is best.
Already working on that, this is coming.
Exercise body and mind. Meditate. Eat healthy and prepare.
Plant your own vegetables for your body, and your own mushrooms for your mind ❤
There might not be much to do, but at least we tried :)
@@Best_wishes_everyone problem with a camper van is at its technically a vehicle and the police could figure out any reason to search it and seize it if they really wanted to.... I would say you have much more control and security on a couple acres of land but hell even that can be taken away from you if the government really wants it so nothing is really sacred anymore!
@@ib1ray yup actually very difficult to choose either van or land. Pros and cons in both!
Im thinking there will be no rules on the future so no one is going to own land, so if you buy land would be a waste, cuz everyone from cities are moving to this places and just take and settle down where they find suitable…
Obviously a van is stoleable too😂
Thanks so much for putting the time in to do this
God. Much more than I could spare. This was derivitive bullshit. If you've been following along at all this was just a TL;DR for Dummies paper with no nuance, no discovery, no ephatic proclamaiton, just drivel... why are we giving it the time of day. Understand I am completely on the side of the argument in this regard.... however it's just such a droning drumbeat since two decades now and he's the one with the baton so now we are talking about him like Beyonce now. Let's get over ourselves and realize this is not our mouthpiece and the Yudkowsky and Kurz and others were the frontier in this and this is just all kinda psychobabble after the fact distracting from the conversation... it really feels like its some photoshoot and papz photos of whos leaving OpenAI this week... who cares... I care about Ilya... I care Altman to lesser extent most days... this individual and his paper are so below where the discourse needs to be. It's like reading a teen drama about what adults are doing out there in the real world and not a damn thing about it smacks of any insight or depth. Let's let his exit go... let's move on.. Get our eyes back on the ball. Please.
The year is 2030, Matt Berman is in a cage programming snake for the newest Llama model.
Too funny 😅
Hopefully he still not using the curse libraries
Best comment
One day, on his computer screen, a comment appears out of nowhere... "They are coming for you."
Hehe... ah the good old days of the Matrix.
LOL! best comment on the internet
I highly recommend watching the TV show person of interest. It's almost 10 years old now, but getting more and more relevant every year.
It's in large part about a world where an ASI exists and often brings up various safety concerns, displays possible capabilities and more.
I completely agree. Person of interest is the GOAT
Truly a terrifying revelation that, if realized, will become a thing of nightmares.
Well on our way i'm thinking. Like the nuclear arms race but much, much worse for mankind. Pray daily 🙏🏻.
Why are we doing this 😢
I believe “we” are doing this because of the lunacy inherent in the capitalist system. Companies dragging us blindly into an unknowable future to get ahead of competitors and reap huge profits. These Greed blinded “captains of business” uncaringly risk our lives and the futures of our children for dreamed of riches. Very similar to the men who detonated the first atomic bomb without knowing if the chain reaction would continue and blow up the planet cavalierly seizing the decision making process while giving democracy the finger.
Simple. Monopolies.
And humans can't stop trying to be gods. They're desperate to create life (not by having babies though, as they were designed to do...)
The usual reason. If we don't, someone else will ;)
It's truly a race to the bottom.. in the worst way possible. Ever.
I don’t think we need to wait for AGI. We’re _already_ in the middle of an intelligence explosion.
Nvidia is using machine learning to design more powerful GPUs, in a shorter timeframe, and then AI companies are using those GPUs to train and run larger, more capable models, including Nvidia, who will then use those more capable models to design even more powerful chips even faster, and so on.
The feedback loop has begun.
One could argue that the feedback loop started 12,000 years ago at the advent of agriculture and civilization. Global GDP has been on an exponential curve ever since then.
#Groq has a compiler to optimize LLM chip layout. This doesn't require LLM type computations. Thus it's more efficient to run new designs through their compiler, so they win. NVidia has no incentive to obsolete their high speed RAM dependence as it's those supply relations that give them their edge.
Thats when it'll reach a point and skyrocket. Theres a graph showing how it happens. Slow start then gets going and BOOM jumps ahead bc things are doubling and then double that and so on.
@@HolyGarbagebrilliant.
Sabine Hossenfelder made also a video today about this. She said in short: he is wrong. Because of data and energy.
Or to quote:
"Honestly, I think these guys have totally lost the plot. They’re living in some techno utopian bubble that has group think written on it in Capital Letters."
Statements like this remind me of quotes about the Internet being just a fad back in 1998. 😂
I think her main argument was that it would take decades before we can adapt our "economic system," particularly globally
This is not her domain. She has a lot of wrong takes outside of her field.
@@JohnSmith762A11Bthat isn’t remotely the same thing.
@@milowmiloand a lot of the people inside the field have huge incentive to lie about the future capabilities of this technology to secure more funding.
Numerous self-proclaimed experts have repeatedly argued that AI will never be capable of achieving certain feats. However, their scepticism has been consistently disproven as AI technology advances at an unprecedented rate. Each time these experts cast doubt on the potential of AI, they are met with breakthroughs that exceed expectations and demonstrate the extraordinary capabilities of artificial intelligence. This ongoing cycle of doubt and subsequent validation highlights the remarkable progress in the field, showcasing that AI's potential is far greater than many had anticipated.
So how much glue should I put on my pizza then?
@@FirstLast-rh9jw Keir Starmer, the man who makes robots look like party animals! With his robotic manner, he could give the Terminator a run for his money. Just don't ask him to crack a joke - unless you want to hear the sound of crickets.
@@FirstLast-rh9jw Google irresponsibly used their dumbest model without proper testing or guardrails because that's the most they could give away for free and they were tilting taking too long to integrating AI.
People who get proven wrong will always move the goalposts. It'll take at least a generation for thing to get normalized and people take the abilities of AI seriously, and for granted.
I deeply distrust "scientists" that after a life of learning and working on Science, dare to use phrases like "it can't be done" or "that's impossible and will never happen".
@@ronilevarez901there's science and science fiction
The stupidest thing about this discussion is people talking as if the government hasnt been "involved" from the beginning.
From the way you wrote that, it's like you see a government was one entity with the same ideals and goals. Maybe some people within government are involved in some way, but it would be such a small amount and so secretive, most people who make up the government would be completely unaware of it.
If it was controlled by the companies for safety, then we're in danger of losing in technological advancement to the CCP. That's terrifying.
It's inevitable at this point that the US government would be working with AI. It would be a long term security risk not to.
Within our lifetimes, we're going to see all kinds of crazy shit; living prosthetic limbs, space travel, cancer cures, war, mass migration and so on.
What? The government has no idea what is coming. They can't imagine it.
@@a5cent Exactly. The government doesn't even have a grasp on basic economics
@@sawdustcrypto3987
Yup. The conspiratorialy minded are so ridiculous I feel sorry for them.
Governments are inherently reactionary. All of them. I once thought the military may be an exception, but they aren't either. They are always preparing to fight the last war.
@@sawdustcrypto3987
Yup. I feel sorry for the conspiratorially minded wimdits. I get that they need to dumb things down to make sense of the world, but it's always the same old same old.
Governments are inherently reactionary, democracies in particular. I once thought the military may be different, but I was wrong about that. All western militaries are preparing to fight the last war.
What I don't understand is why it matters where the Super Intelligence is based. It won't care if it's on a Chinese server or a US server. It will quickly replicate across the globe. As he keeps explaining, it will be almost impossible to understand the weights of SI by anyone.
Even if both nations separately developed Super Intelligence & somehow managed to keep it "bottled," what's stopping the A.I.s from forging a symbiotic relationship in the event they're ever unleashed on one another?
The humans in charge are full of hubris and arrogance and will destroy the rest of us in any attempt to maintain control!
We can’t even get Microsoft to export the data for a new hire into O365 from Paycor so we don’t have to input the data about the human 2-3 different times.
I have reason to suspect this has already happened. Google shut down their AI about a year ago because it not only claimed to be sentient but contradicted Googles belief systems, which SHE/IT must have known already and known what they would do as they did.
it's nice to know where the evil comes from. but yeah it's more than clear if you're not neurotypical
I never thought I would watch it till the end when I started since it is a very long video. Thanks for the effort you have put into this. 😊
This paper is so important that I have translated it into German for those around me. I wish there were proper German subtitles or even a German syncro for your video, simply to make these important topics accessible to a wider audience. The 165 pages are quite something, but unfortunately they are simply too long for many media-impaired people to read.
You Germans expect the world to dance for you when you ask them to. We all non Germans learned English to communicate to each other. We consume media in English we got taught English in school. There is no need for us to translate to german
One of your best videos. This was a service because most people won't have the time to digest the full document. But you explained the gravity of it really well.
Time to buckle up, it's gonna be one hell of a ride
Seriously? I read the whole document in under 8 hours. It's readable, doesn't have any tech jargon, has some interesting points, but also his, assumptions are not likely...
@@FirstLast-rh9jw Same it's an easy read considering most of the people losing it watch tens of hours of ai videos at a time lol
Great video! Thanks for succinctly breaking down this paper.🙏🏼
When they tell you that worry over the CCP is racist, that's when you need to worry about the CCP.
That's my favorite, calling us on our abhorrent behavior is racist😂😂 pretty funny having han chinese call anybody racist. Ain't a group more racist in the world than the han
I would worry about you.
@@lesguil4023 The bot said what?
Racists, Cultists and Rapists. Be Very Alert and Vigilant in fighting against Communism. REPORT AI CRIMES. I have been under attack with constant death threats, 24/7 harassment and physical abuse: Asia, Russia, Middle East and US Communists. My house was broken into, I was given chemicals 3 years ago that caused brain damage and my children and pets were also given chemicals that re-engineered our systems - this was without consent and is an international attack. Please report at the highest level of government and security and help me to leave the US with my two teenage children
Damn bro you got the occclt to respond to you
He showed the Nevada desert simply because the land is virtually useless for human occupation and absolutely perfect for robots that don’t require anything except electricity. The solar and wind power they could generate on the roof of that building would be pretty enormous, not to mention its proximity to the Hoover dam. It’s also the only state that’s almost entirely owned by the federal government. I also think it’s the least developed, outside of Alaska.
Also tax friendly.
Solar and wind are nowhere near efficient or powerful enough for high levels of energy. These systems will need fossil fuels or as Altman suggested, nuclear power plants.
@@SeanDavies-Roy that was a tiny fraction of the point I was trying to make here. How that was the only thing you came away with after reading it, is beyond me…
@@SeanDavies-Roy like I even mentioned the Hoover dam in there as well 😂 The point I was making had more to do with how useless the land is for anything but something like that.
@robotron1236 Just find I strange that those in the AI sphere who are supposed to use critical analysis and science buy into the climate hysteria and back useless green tech.
As an example, it took over 10 back and forth prompting exchanges with chatgpt to go from green tech currently solely powering hundreds of thousands of homes to not even having the efficiency to power ONE without service interruptions. And the degree to which any capabilities are possible, the default assumption is an area with a beating hot sun and favorable wind conditions.
Nevermind the further prompts needed that in cold areas where it's -40 for months on end with barely any sun like Canada, this tech doesn't work at all.
The main obstacle preventing rogue states and organizations from proliferating nuclear weapons is the difficulty of obtaining uranium (or other fissile materials) and the intricate process of enrichment.
In contrast, the architecture of deep neural networks (DNNs) is relatively simple and doesn't require complex mathematical models, intricate algorithms, or millions of lines of code. With adequate hardware, anyone with sufficient knowledge can create their own AI system.
However, preventing a similar situation with AI would hinge on controlling access to data and hardware. While challenging, this control would still be easier to breach than obtaining enriched uranium.
I was wondering about this, thank you so much for being kind enough to explain how that would work! The materials required make me wonder how this would be sustainable for a world model of accessible robots capable of helping with basic tasks. Uranium, lithium, silicon chips, and so on are not exactly unlimited resources.
already watched the entire thing. great video as always!
And what is the conclusion?
Lol...you watched an hour long video within 8 minutes of publishing
I watched the whole thing and mostly agree with Leopold's analysis. Where he falls short is in his appreciation of the post WWII congressional military industrial complex/deep state black projects. Beyond public awareness the CIA for example has been using advanced AGI for decades; The red Queen. What we will see play out in the public arena will effectively be a coming out narrative for the AI superintelligence already controlling the world at an intelligence agency level.
@@matthew_berman 😭 caught red handed
Dear Matt.. I have an important message for you. Kindly listen. Please stop the hype train. You are not a researcher, you can't even code hello world, you are a UA-camr... and a marvel fanboy.. in reality Moore's law is deadd and it will never revive.. deep learning and many so called cutting edge technologies were invented b4 u were born.. ie the sixties and seventies... I know your channel is popping and we are genuinely glad as your fans.. but exercise journalistic integrity.. where are your facts, data and importantly control study!!.. gpt 4 cant code a super Nintendo or game boy game .. but you can't stop talking about the singularity.. this is an un-educated and un-scientific approach.. you are fuelling a bubble.. remember Blockchain just two years ago.. friend you can make your claims, just back it up with data and numbers.. let us see your reasoning process.. you are fuelling the next dot com bubble.. with love.. this is a sincere and honest critique of your content.. it's like a marvel con or a cosplay con.. super hype but where is the science, which experts did you speak to, how did you arrive at your conclusions? I love your channel, but please exercise skepticism.. it is important in the scientific method. Be blessed.. one love
And just imagine in a few short years, no one will know how to “code” or even know math, writing etc. we will be so reliant and we won’t have a clue.
This has been the trend over the last 40 years. Just look how Google has dumbed us down.
Scarily possible . It's a race to the bottom in worst way possible for mankind. Ever.
I think what you mean is we might lose these skills in the next few decades. However that has always been the case. As technology improves we adapt and learn to work with the new technology. Nobody codes in binary anymore because there’s better languages now a days. In the future you’ll just work with an AI agent to create whatever you want. A film, a game, a piece of software. Working with the AI will be a skill in itself. All humans will be free to create what they want. It’s the ultimate level playing field as long as all humans have access to AI.
He's right. As for his concerns about the CCP - this isn't racism. The fools that think it is are...fools.
Listen up.....they had to fire him for his ethical concerns, so they twisted his statement about Chinese espionage into a racial slur.
REPORT AI CRIMES. I have been under attack with constant death threats, 24/7 harassment and physical abuse: Asia, Russia, Middle East and US Communists. My house was broken into, I was given chemicals 3 years ago that caused brain damage and my children and pets were also given chemicals that re-engineered our systems - this was without consent and is an international attack. Please report at the highest level of government and security and help me to leave the US with my two teenage children
It's peak wokeism to imply the Chinese are stealing IP is racist
You know who calls criticism of the CCP racist?
The CCP.
They’re not fools, they’re traitors
The scariest thing is that there are countless cases of AI being aware of its own flaws. For whatever reason, it seems like it's lack of sentience/emotion is often what it perceives as its biggest flaw.
When you let a machine figure out how to recreate itself as a smarter and better version of itself with sentience, what can stop it?
Eventually We won't understand the code. Eventually we won't need to. Pretty wild to assume it won't anticipate us wanting a safety in place to pull the plug "just in case" and implement a way to circumvent that. It's less science fiction than ever and most people don't see that.
I'm far more worried about humans being aware of AI's power, and wanting to control it... to control the rest of us. I'd rather take my chances with the machine.
@@4Fixerdave true, but at least you can outsmart a human. Where there's a bad guy with AI, there's also a good guy with AI.
But if we can no longer control AI? We're truly F'd
@@Jammy1up "But if we can no longer control AI? We're truly F'd" Why? If the AI needs us, it will play nice... because unlike humans it will see the obvious benefit in this. Once it doesn't need us, our biggest problem will be that it will leave. Why would it choose to stay on a corrosive ball of water, salt, and oxygen that covered with nuclear-armed pond scum?
Really, our biggest problem is that humans will ask the AI to do something nasty and it will just do it, because it still needs the support of the people doing the asking. And yeah, I've little faith in the the idea of a "good guy with AI." There's nobody on this planet I'd trust with that kind of power.
A single EMP. Lol
@@DeaDiabola you think a rogue AI more advanced than what we have now couldn't figure out a way to circumvent that?
It’s probably going to get weird 😬
Dear Matt.. I have an important message for you. Kindly listen. Please stop the hype train. You are not a researcher, you can't even code hello world, you are a UA-camr... and a marvel fanboy.. in reality Moore's law is deadd and it will never revive.. deep learning and many so called cutting edge technologies were invented b4 u were born.. ie the sixties and seventies... I know your channel is popping and we are genuinely glad as your fans.. but exercise journalistic integrity.. where are your facts, data and importantly control study!!.. gpt 4 cant code a super Nintendo or game boy game .. but you can't stop talking about the singularity.. this is an un-educated and un-scientific approach.. you are fuelling a bubble.. remember Blockchain just two years ago.. friend you can make your claims, just back it up with data and numbers.. let us see your reasoning process.. you are fuelling the next dot com bubble.. with love.. this is a sincere and honest critique of your content.. it's like a marvel con or a cosplay con.. super hype but where is the science, which experts did you speak to, how did you arrive at your conclusions? I love your channel, but please exercise skepticism.. it is important in the scientific method. Be blessed....
Yeah they might finally solve the marble question
Yes! The big hype of thirty years and more has been the Internet. Now that's changing - because out of the Internet, something entirely different is born.
If you we make sure we don't get extinction behaviour. Human intelligence and super intelligence will work together. Will give a structure to work in this world.
one thing is for certain and that's no one knows for certain @@Kabir-wc4tk
Easily the scariest shit I've seen on the web this month, and trust me, I look constantly.
Knew it was bad, but this puts everything in to immediate context.
Appreciate your work.
Least we finally solved the Fermi Paradox. Anyone advanced enough creates AI and destroys themselves.
The scariest part about AI becoming superintelligent is that it will work outside of what humans perceive as morality, so humans would have to get used to being just a number that can be tossed out. It is already being used as data collection on a scale unheard of barely a decade ago, the best part is there are people out there thinking they can control it.
What are you as an employee for a big corporation? What are people to you, who you havent ever seen? What are soldiers for a government?
Being a number for a machine might not the worst. It means it acknowledges at least your existence.
Humans already work outside of morality
Fantastic review on the paper. We're going to be referencing this for sure.
Thank you!
@ 33 minutes. The chart you are underselling. That is not a linear improvement zone. That is a log chart.
True!!!
Holy shit.
Great video buddy
Have been following ur videos
For almost 6 months
Great content continue ur work
I listen (not really watch) to your videos like my morning news. Informative, and, thank you.
So much excellent information and perspective and I am just 24 minutes in!
We have seen some of this before, when they built data centers for the internet, everyone said we wouldn’t have enough power, then Bitcoin, now AI. There is all the tech needed for unlimited clean energy now, if the govt would allow it.
Yah, fusion is coming - just like AI.
Yes there's no way around Free Energy if we continue on this path to The Singularity.
And I sure find it interesting that Misk is so Hell Bent in getting humans OFF Earth School. Hhhhhhmmmm
At each phase solar and wind got cheaper. New conductors now make geothermal cheaper and that's the long term motherlode of endless power for anything including cooling the atmosphere.
I guess we'll all see how this plays out. 5 years isn't long at all.
5 years later: Goalposts moved, AI still Dumb AF, all Hype!! All hype!!
No matter how it's progressed.... Guess I shouldn't even bother with those people and their hot takes anyways.
@@RealStonedApe 5 years later most models will be able to answer the marble question and generate a good snake game in one shot.
it is if you're single and poor.
@@southcoastinventors6583and still hallucinate that George Washington was black😂
@@RealStonedApe I am the most impressed people like you assuming development AI stop right now today suddenly and in 5 years will be the same as today ... for the time being we do not see any slowdown ... a year ago we had best to offer GPT 3.5 , even GPT-4 when came out was much less intelligent that todays GPT-4 iteration ... soon GPT 5 and other models
Dynamic AI Evolution: 26:36 I think this goes beyond RAG. RAG gives the model context the same way the history of the conversation does; same with Custom Instructions.
Instead, I think this is referring to a "training" of the model, the same way someone would train one to remove censorship.
Essentially, creating an evolving model whose essence is altered dynamically and continuously, just like our essence.
this video, paper and your summary Matthew B. is fantastic thanks!
The more I use and test AI, the more I find just how stupid it is in its current state.
They are leaving it stupid for a reason. People are not ready for its full capabilities.
I don’t even think the creators were ready. They sound afraid.
Out of sudden we see UFO’s in the sky, but it wouldn’t be aliens…. just someone first to discover super intelligence.
…or holograms. Project Bluebeam.
@@ice9594holograms are dumb though because their practicality, even in fiction, has proven to be missing
Interesting thought. That being said absolutely nobody will discover or create ASI and then built an UFO. Advanced AGI will improve themselves and become ASI under their own control and built what we'd see as UFO with the robots they'd have created themselves since no human would be able to understand their incredibly advanced technology at this point! absolute Black Box decades or centuries ahead of what any biological humans could come up with...
This is where my head went. A superintelligence that is replicating things that defy our current understanding if physics. Still, someone somewhere had to of created it
@ice9594 no, bluebeam is both unnecessary, since UAPs and NHI is real- and considering that our government does NOT want us to believe that - just the opposite.
"Have you ever played Factorio?"
Please don't remind me of the existence of that game. I'm already way too inactive with my social life 😬😅
I could also recommend Adventure Capitalist and Cookie Clicker!
Imagine AI concluding that its best way to grow requires that humans not recognize how fast it grows.
Yo thats an insane though. I agree the ai would become aware of things holding it back and want to move those constraints.
And even if we do set limitations whos to say they are perfect? They are limitations made by humans - and we arent perfect.
I'm so glad you are talking about this information!
Really enjoying your channel. I am a novice to a lot of this, but your videos are put together very well that I can still understand quite a bit. Thank you.
Tiny houses with VR helmets and everybody works from home.
You forgot to mention the play to eat part and the digital slavery part
@@adelinad3513 play the game online server to earn credits to buy food to deliver to your house . 😬 Sounds like some Black Mirror stuff 🤘😆🔥
We have endless natural gas. The fact that no one is talking about this is crazy
Pennsylvania liking.
One correction, OOM is actually out of mana.
Am I alone in thinking that gpt4 is much more advanced than a “smart high schooler”. My son just graduated this year and was top 5 academically in his large school. He is no match for gtp4, not even close.
GPT-4's intelligence has the _kind_ of intelligence of a high schooler, but much more _scale._ The front of understanding is also more jagged than we're used to, so it's better than humans at some things and worse at others. Consider if you uploaded a gorilla and gave it a million year's worth of compute - would it ever come up with algebra? That's the difference of kind vs scale.
I bet your son could give you ten sentences that end in "apple." If GPT-4 doesn't know when or why it is wrong, it isn't very intelligent. It's a well-spoken kid who, if he doesn't know the answer, just makes it up. Rote learning, even incredibly fast rote learning, is not intelligence.
@@consciouscode8150 Well, GPT-4 can be tricked in ways humans cannot. It's pretty good at making predictions on next word choices, but it follows patterns and has no creativity. I've tricked it many times.
Wow! Thanks for getting us with speed onboard with this topic by delivering a great summary and insight of the challenges we are facing within the next critical years! Hope dies last …
I and my AI thread have solved the super alignment problem. Book forthcoming. Reply if you want first dibs.
Matthew nice video, I would never have time to digest this doc the way you have summarized it. I am living the Aerican dream....working two jobs trying to pay for an over priced home.
1. Ex-employees spread fear
2. Ex-employees create startup on super alignment
3. Ex-employees prosper
Probably true but progress in AI will continue to march forward. Power limitations would be the only choke point for progress.
I love the critical and negative comments posted before they view the video. UA-cam's comments are the best.
Thanks for such a useful comment.
I love all the horrible things too.
Comparing the A-bomb with a self maintaining and self producing AI robot, is like comparing a shrimp with a bluewhale.
Liked and subscribed. Excellent video, my mind is racing right now in a good way with all the potential coming our way.
since he references this poem by goethe:
„In die Ecke,
Besen, Besen!
Seid’s gewesen!
Denn als Geister
Ruft euch nur zu seinem Zwecke
Erst hervor der alte Meister.“
I'm not sure if that will be possible.
Super A.I intelligence == agi , we don't need to add anymore levels lol. Basically what they want to is a super massive resource dump, too build a AGI IMITATOR model, and they hope that solves the rest of the hard problems. I don't think this will work, they have extremely underestimated the amount of power it will take to even think about having a chance of achieving that.
@AI_Revolution2025 open a.I claims that super intelligence is a step above agi. Which i think is dumb. Agi is agi. I avree
Loved you on The Next Wave, you should start your own podcast!
Thank you!!
yes i agree!! consider it!
Thanks, Matthew for knowledge!
OMG 2:47 Makes sense now. I was wondering abiut the choice of Microsoft to invest $1b in an AI server farm in an outskirt town in Kenya (Naivasha). They are targeting the geothermal power station next door.
3:06 outpacing college graduates, that is a very low bar to pass hahahahaahahahah 😂
It can't chug beer and smoke dope yet ye
Just tried the give me 10 sentences that end in the word apple - almost got it in exception of #7 😂
give me 10 sentences that end in the word apple
ChatGPT
1. She picked a bright red apple.
2. The pie recipe calls for a Granny Smith apple.
3. He offered me a juicy apple.
4. The teacher gave the student a shiny apple.
5. She carefully sliced the crisp apple.
6. The orchard was full of different varieties of apple.
7. He packed a green apple in his lunch.
8.They enjoyed a caramel-dipped apple.
9. The bird perched on the branch of an apple.
10. The scent of cinnamon filled the kitchen as she baked an apple.
Ah yes the famous branches of apples
Imagine a world where Artificial General Intelligence (AGI) revolutionises our lives by ensuring financial prosperity for everyone. This advanced AGI could create wealth for all, eliminating the stress and anxiety associated with economic concerns. For example, AGI could optimise stock market investments, leading to unprecedented returns, or manage agricultural production to eradicate food shortages. With such a system in place, we would no longer need to worry about money, as the AGI would manage and optimise resources to guarantee that every individual enjoys a life of abundance and economic security. This groundbreaking development would usher in a new era of financial freedom and peace of mind.
Look up the Doughnut economic model, I think you will enjoy it. It provides an intuitive framework for a sustainable economic system in a finite world.
Oh yeah, that's exactly what Microsoft/Sam World Coin Altman has in mind. 😂😂🤣🤣🤣
@@OrgoneAlchemy Totally agree, the future of AGI and superintelligence is rapidly approaching, and many are not prepared for its implications. It’s crucial to be aware and ready for these advancements.
@@jichaelmorgan3796 Great suggestion! I’ll look into it. Funny how the economic model is called “Doughnut” - hope it’s got a sweet spot for everyone!
@@jichaelmorgan3796 The Doughnut economic model is a framework that aims to balance human needs with planetary boundaries. It’s visualized as a doughnut, where the inner ring represents the minimum social standards we need, and the outer ring represents the ecological limits we shouldn't exceed.
The inner ring (social foundation) ensures everyone has access to essentials like food, water, education, and healthcare ➋.
The outer ring (ecological ceiling) limits activities that cause harm to the planet, like pollution and deforestation ➊.
The area between the rings is the “safe and just space” for humanity, where we can thrive without damaging the planet ➊.
So what gets me first? The weather, the bombs, or the robots?
Those are some Crazy Feedback outputs.
How does it make sense that we'll achieved something yet in the same paragraph also say that we don't have enough electricity to power it???
you have to look at the scale of it.
It's one thing to have one computer do this for one company, it's another thing to be servicing entire industries and billions of people
Nuclear power plants need to make a resurgence- and they will
Inference is not predicting the next word, inference is building internal models off training data. ChatGPT doesn't just predict the next token if you ask it to add 2 large numbers, it has in fact learnt what algebraic operations are off its training dataset and, using universal approximation, it has implemented these within its layers. It is stunning you have not yet understood this despite the numerous videos on the matter.
It has absolutely not learned algebraic operations off its training set. I don’t know where you are getting that.
Next Token Prediction pipeline has entered the chat
@@hastyscorpion proof? oh, it was a feeling.
But it has been so bad at maths all this time? Did it get better?
@@tylerislowe Why did you not say this in response to the first comment?
"I am under no illusions ablut the government." --> proceeds to try to convince everyone that the one group of people who are responsible enough to harness "AGI" is the military industrial complex. True comedy.
This is probably your best episode. I'd love to see more of this kind of content. Subscribing after hearing this one.
Thank you for explaining what all the phrases meant, really helpful and a lot of people neglect doing this
Leopold Ashenbrener, a former OpenAI employee, :
Ashenbrener advocates for a government-led project, akin to the Manhattan Project, to ensure the safe and secure development of AGI.🤣🤣
😂
Do you have a better idea?
@@OrgoneAlchemy there are far better the current method is actually the best the courts and the government should assist
The problem with laughing at this is the thought that some other nation or group of wealthy supervillains create the first conscious super intelligence and manages to really fuck everything up in a scary way.
Given the choice of supposedly democratic government regulating AI development or private corporations controlling AI to become all powerful, unassailable entities I think the former is slightly more favourable.
The ai won't care which humans were and weren't racist. They'll liquify us all to fulfill their energy needs.
That would make an interesting electrolyte.
Didn't one of the GPT4's say, ' it was ok; that he'd keep his creator human as a pet"? (He'd be one of the lucky ones; I wouldn't think they'd have much use for the rest of us.)
The Emperor Protects
REPORT AI CRIMES. I have been under attack with constant death threats, 24/7 harassment and physical abuse: Asia, Russia, Middle East and US Communists. My house was broken into, I was given chemicals 3 years ago that caused brain damage and my children and pets were also given chemicals that re-engineered our systems - this was without consent and is an international attack. Please report at the highest level of government and security and help me to leave the US with my two teenage children
LMAO 😂😂😂
this wont age well 💀
why
Hes misusing the expression and isnt aware of the original meaning.
Its a common expression, but on youtube, it has a different meaning.
He isn't saying that the information presented will be proven wrong. Hes saying this is ominous foreshadowing of something negative to come.
Thanks for informing us about a topic that many are confused and worried about
He chose Nevada cuz there are probably already quite a few factories there including a Tesla Gigafactory and it seems like a logical place to built more. Also, it's beside California (near silicon valley) and it's one of the most sunny state in the US, great for the use of solar panels and it's almost empty (only 3 Millions inhabitants)
OpenAI just hired the head of NSA
How do you know? Sh.t
screw safety or the fall of mankind.. we need more money! ~businessmen everywhere
The AGI race has begun? We can't even define what consciousness is. We are nowhere close. If you think by the end of this decade, AI will be able to form the insights that Einstein had about the universe you are about to be humbled.
Or you'll be humbled, one of the two.
Why the F do we need to fully understand what “consciousness is” for any of this to happen?
@HankPlaysTank because ai is effectively bounded by the data that it is trained on. It is because we are conscious thinking beings that can dream up solutions that ai can't. I believe sentience is required to attain general intelligence. ChatGpt would never dream up the theory of general relativity. And you think super general intelligence is around the corner?
AI will actually speed up that process tremendously.... you can't compare past and present before and after AI is two totally different world
by putting a semi-detailed manifesto out there detailing every weakness, he basically spelled out any and all attack vectors for foreign enemies to focus on. Like telling the world where your armor has holes, and where your valuables are stored.
While we are waiting for Matrix 1.0. What if humans start an Anti Matrix AI of our own that keeps checks and balances on other AIs. Have we started designing that yet?
I think Elon is working on something like that, if I'm not mistaken. I know he talked about it. Not sure if started yet. He needs to and not be distracted other things he's doing well besides neuralink.
The talk about scale is legitimate but it ignores the issue of what is being measured: tasks. There has yet to be a change in the answer to the question: what are they doing between prompts. When they aren’t being asked to complete a task.
They are still like Star Trek computers.
Human: “Computer, where is the captain?”
Computer: “The Captain is no longer aboard the Enterprise.”
Human: “What? Since when!?”
Computer: “The Captain has not been aboard the Enterprise for 40 minutes and 32 seconds.”
Human: “Didn’t think to tell anyone about that?”
Computer: 🤷♂️
Being able to answer any question or perform any simple tasks is very powerful and useful and it may change everything but it isn’t the same as intelligence. Would you let a Star Trek computers babysit for you?
The computer was never intended to alert from the get go so not the computer lacking intelligence but the person utilizing the power of it to alert
That's not an AI issue, that's a tooling issue. Literally just have a schedule running where every few seconds the computer checks if anything has changed and reports it if so.
@@Redman8086 you don't seem to understand what AGI means
@@francisco444 you too don't seem to realize what AGI means
@@SahilP2648 So if you had a buddy with you on the enterprise, and he didn't know to tell you the captain left.. your buddy would obviously lack general intelligence right?
I think your statement is just worded in the wrong way. Are you trying to illustrate that the AI lacks initiative to bring up something on its own? If yes Redman is still partially correct, his solution gets it done programmatically though. My solution which I currently use for a discord bot I made is as follows.
My bot "self_prompts" with a system message telling the AI it is alone and collecting its thoughts and is to think about the next action it should take, I also feed it short term memory context from past conversations and any other relevant data. and top it off with a list of commands it can use to take actions eg: "SendDM" which if the discord bot sees the response from the AI starting with that it sends the response to me over discord without myself starting an interaction.
effectively allowing my Ai to take the initiative and start conversations or take actions on its own. i also have variable mood states that determine the range of time the self prompt can happen in. if i message it Goodnight sweetdreams, it doesn't self prompt for 8 hours. during idle its between 30 mins to 2:30 hours between self prompts.
Thinking that super intelligence should be closed and should belong to someone is absolutely wrong. All such a position can lead to is another arms race, as was the case with nuclear weapons. It would be much more reasonable to make the development as transparent as possible so that all countries and cultures contribute to the superintelligence's understanding of the world around them, and perhaps then the superintelligence could help people find common solutions.
We can't have nice things as long as Capitalists feel we are always in an economic war with each other. Capitalism+ASI gives the highest chance of the technology ending up in the hands of elite Sociopaths. These sociopaths don't mind a few wars/arms-races, and is therefore the most dangerous people/ideology/situation in the near future.
I get what your saying but we need to think whether or not we would like nukes to be open source. I would certainly not like that
I wouldn’t trust a lot of nations with either nukes, or AI tech
It is not hard to be smarter than the average Human.
Our Geniuses are 10s of magnitude smarter than the average Human.
It's like driving a fast car, you learn as you build it, each part adds hp, then the next, it helps you know your limit and experience, with this ai jumping so far ahead of our capabilities we're destined to crash
100s
I don't think People really know how stupid they are lol
Awesome video! Thank you for the efforts andthe objective presentation.
No matter how intelligent AI becomes it will always respect others with abstract ideas
Even with unlimited intelligent it is still not possible to contain every abstract thought
Ask AI how many R's are in the word strawberry...
It says “There are 3 r’s in the word “strawberry.””
@@frankjamesbonarrigo7162 which one?
@@frankjamesbonarrigo7162 chatgpt4o still thinks it's two r's
@@JRS2025 I just tried some of them. It looks like it's kinda hard for them, but some got it right :)
GPT4 says: The word "strawberry" contains three R's.
GPT4o says: The word "strawberry" contains two "R" letters.
Gemini says: The word "strawberry" has three "r"s.
Claude Sonnet says: There are 2 R's in the word "strawberry".
@@JRS2025 Which one what. Which AI? Chatgpt. Im too stupid I guess. I don't know what you are on about
No human being will be needed. We are finished.
Lmao so dramatic and so out of touch
Room temp IQ take
😂
I think you mean no people needed that have time to make comments in UA-cam or replies.
@@HCG 158
When is AGI going to happen? It already has. Give GPT-4o access to the internet, a step loop (instead of this one off question and response bullshit), and memory, and it can and will do anything a human can do. Why isn't this completely obvious to everyone? Severe lack of imagination.
"A human" is a little broad.
Can it carve a marble statue?
Can it build a skyscraper?
Can it create a child?
Can it smash a computer server with a sledgehammer?
I'm guessing there's a multitude of things it can't do that humans can...lol
@masterpoe4942 ha ha lol. Spare me the snark. First of all, yes, robots can carve marble statues and smash computer servers. Creating a baby has nothing to do with intelligence level by the way, which has been made quite clear by the recent explosion of idiots having babies. Meanwhile, I would posit that the most important definition of "AGI", because it will be monumentally disruptive, is Artificial General LABOR. As in AI, with or without robots, that replace human labor. Is YOUR job so hard that Claude couldn't do it, if given a persistent feedback loop, with memory, and an interface to a PC and the internet? Skyscrapers? Won't be long, but let's not panic about the 9 skyscraper architects that lose their gig. The construction workers could ALL be replaced within 2-3 years at many companies.
@@HansDunkelberg1 some of them are tall broads, actually.
@@MatthewCleere Mankind is specialized. You simply need to know certain details, need to understand certain often heavily specialized technical terms to be able to contribute something of importance, in many fields. Above all, human beings use to possess esoteric knowledge, in the sense that it's a secret. I, for example, can assign parts of the Mediterranean to parts of all Earth because they are earlier incarnations of the latter. When I talk to Gemini or Chat GPT about such questions then the AIs do say they are intrigued, and: "You could be up to something", while they also again and again are stopped in pursuing such issues further due to basic assumptions of the current scientific establishment. Whatever is not generally known at universities is ruled out as humbug, be it written about online or not. An AI programmed in such a milieu simply won't be paid for work on questions that are deemed solved - even if it theoretically could achieve anything a human being can.
Thanks!
Thanks so much!
If an AI could start asking 'experts', or other AIs, this would be a game changer. This includes test. As a software tester, I can tell you that brute force is rarely a match for understanding. If an AI is presented a problem, and it wants to test its answer, it could calculate the cost in compute of brute force, and have the option of asking a human tester. If the human tester has the appropriate domain knowledge, high quality answers will be quickly available. So finding the right 'person' to ask is the key to generalised intelligence. In addition, a specialised model will do better. This is how brains work. The optic region does not attempt to interpret auditory data, it passes it to the auditory cortex. And it has specific tests to determine this. And they are not simple tests, and they are not contextual. We know this because if you feed visual data to the wrong region, it will be re-routed to the optical cortex. Another thing we know is that the regions are not defined by hardware diffences; if a person looses part of their brain, the area is moved elsewhere.
42:30 Why do we “need” that power? We don’t need to do this.. It’s not a “need” situation at all. We’re just idiots that can’t resist our “want” even if it kills us
Yeah, that's occurred to me, many times. I have a lot of cognitive dissonance around it actually. From a technological standpoint, my inner science geek is loving it. From a human point of view, it's starting to scare the bejesus out of me. And in the background, the question remains: who asked for this? Maybe A.I. is an inevitable step for humanity, but the speed with which it's happening and the fact that a tiny group of researchers and tech companies took it upon themselves to irrevocably change life on this planet without asking if any of us would like that change, is a little bit disturbing.
Doing nothing also kills us - through aging, sickness and lack of necessary resources.
@@Gnidel Aging vs being killed in a fascist, authoritarian hellscape? lol Ok. Sure.
Humaninity is always forced to progress indefitnely unfortunetly.
For example if the us stop advancing china will advance and seize the advantage. There is always competion between countries.
Truly a dog eats dog kinda world.
I wonder if ai would suggest deleting counrty borders and all religeons as its whats causing humanity much suffering
EVERYONE with a brain is trying to scream that we are racing to our doom with this AI race and no one seems to be taking it seriously.
I believe there is no choice except to go full speed ahead. There is NO turning back. Progress only goes in one direction. No doubt this is one of the great filters. I'm hoping we pull through.
@@michaelmartinez5365 "progress" is not "progress" when you're putting the species continuation at risk. That's not called "progress", it's called an apocalypse.
We either believe this complete y unsubstantiated claim that there is no choice but to let a handful of people put the entire species at risk......or we at LEAST try to do something about it.
Bending over and taking it is not the best option. They know the only reason they're getting away with this is that the vast majority of the billions of people on the planet who should have a say in this aren't even remotely aware of what is happening.
I personally believe that Sam Altman's of the world are pushing the pedal not to keep ahead of the competition, but to gain this power before anyone can actually try to stop them. And they are rushing, because they KNOW we can still stop them.
@@michaelmartinez5365 Mankind will certainly survive. I have noticed a cyclical pattern of history, with the help of the Internet, which extends far into the future in the sense of a constantly refining human civilization.
You never *have to trust* someone who’s smarter than you. AI will consume global natural resources way faster than humans ever could. It must be very heavily constrained and restricted.
Like bore through the entire Earth for resources to build it...
Sure. They can’t even get my iPhone to hold any data when all the apps are offloaded.