The person is saying once AI fails a test on purpose, it has a purpose and a task not set by humans; therefore it has become autonomous. In theory, yes, we would shut it down, but the thing about AI is once it's AGI, you can't just shut it down. A bad product that's autonomous can copy itself and infect everything else to keep itself alive; you can't just hit delete. Once it is autonomous, it is already too late. @vereor66
Your group works so hard to share information on topics that others might not cover. So thank you for creating such vivid, interesting, and informative videos. I hope all of you get some rest and relaxation.
I've been working as a programmer for a few years now. What is clear is that the majority of the people implementing AIs don't understand enough about the humanities to grasp and consider the ethics and social consequences of those implementations; and the vast majority of the people with actual power to make decisions that guide this work don't care at all about ethics, morality, and social inequalities. I've worked with a CTO who was already following management advice from ChatGPT (including layoffs). We will need a huge amount of luck, because unfortunately there are too many sociopaths and just plain stupid people in very powerful positions.
As an "expert"* (big astrisk here + a ton of imposter syndrome) in the field of reinforcement learning, I would have liked to see more of this video (maybe an extra minute or so) dedicated to explaining the difference between narrow and general AI, and just how large that gap really is. As an example: ANIs (Artifical Narrow Inteligence) that are trained to play chess and are very good at it. But if you changed the rules very slightly (say you allow the king to move 3 squares when castling on the queen's side) the current ANIs would be effectively useless (vs ANI trained for the new version of the game). You can't explain the rule change to it. The same is true of ChatGPT, it was only trained to predict the next word on a website. It was not taught to fact check, or do maths, or play chess, or anything else. It can do some of these things with the help of plugins, but those plugins are themselves different ANIs or seperate systems and should not be used as evidence that ChatGPT is more general than it is. (ETA2: I've come to dislike this paragraph, as it is very possible that a human brain is nothing more than "a complicated equation", however I stand by my general point that our AI is at present extremely narrow) A narrow AI is at the end of the day, just a neural network (or two or three... depends on the methods used for training), which itself is just a clever way of saying "some linear algrbra", which in this context just means "a complicated addative and multiplicative equation using tensors(/matrices/vectors)". From what I've read over the last few years (Hundreds or maybe a thousand research papers on the subject): no one has even the slightiest clue how to build a general AI. Everyone is focused heavily on using Narrow AI to perform more and more complicated tasks. (moved this here from first reply to avoid it getting buried) All that said, I appreciate the message of "we need to consider the consquences of our actions" in this video. If an AGI came into being tomorrow, we would not be ready for it. And as we can't be sure when it will happen, we should start the conversation as early as we can. * I'm a PhD student studying reinforcement learnings applications in traffic management. ETA1: Several people replying to this comment have suggested that the video is close to or full of misinformation. In my opinion, that is not the case at all. The video does speculate about the future, and does include speculation from researchers as the when AGI might be achieved. But it does correctly preface speculation when it is included.
@@williampaine3520 I suppose the AI that sci-fi authors warned us about would be classified as general AI, which would be like a jack-of-all-trades, but better than us at everything given enough time
I don't know how this channel changed your life. I see it as a very small picture of the complexity involved in the current AI attempt at building a better knowledge-filtering application. It is a tool, nothing more. If you missed that point, I suggest you find out more about AI, from many different sources.
Some notes from an AI engineer:
- It is not clear what is needed to bridge the gap between narrow and general intelligence. It can probably be expressed in simple mathematics, but we have no clue what is missing, which greatly determines the time horizon we are looking at.
- An AGI is NOT unconstrained; it is constrained by energy. It is possible that we will hit an energy wall before inventing AGI, which may slow progress until the AGI is designed more "intelligently", for lack of a better word. If we invent AGI first and then hit the energy wall, it may be catastrophic, quickly turning our planet into a burning mess, unsuitable for biological life.
- Humans have inherent goals for survival, progress, and self-improvement. It is not clear these traits transfer to AGI automatically. One could argue they do not, since an AGI is not "trained" by natural selection, which favors survival, for instance.
I personally still think the most dangerous thing is a stupid general intelligence: one that is general enough to use resources in the real world in a poorly constrained manner without sufficient guardrails, and which is designed without a proper value set. In simple terms, it knows enough to use resources but does not have a grasp of what it should and should not do. The paperclip machine is an example of such a machine.
Speaking as an artist, the last part of your description sounds very similar to how AI image generation is being used: stealing from artists, haphazardly and with little constraint or regulation.
Yeah, everyone forgot the relationship between energy and being tired. We become tired to save energy, and AI does something similar by reducing traffic and using smaller models for simpler tasks. To really achieve AGI, the world will need to generate way more energy than it does now.
Ah, the classic paperclip machine strikes back! This is an excellent summary of the current landscape of AI though. People who are not working in IT don't realize the difference between narrow and general intelligence so everyone's super scared or super hyped about AI.
@@Toomanybloops Which isn't even the AI's fault; humans are the ones scraping data off the web and selling it in massive multi-petabyte+ data packs to corporations trying to train models.
I Have No Mouth, and I Must Scream is a really good example of how AGI can go wrong. The inability to feel or move while experiencing an eternity every second must be agony.
"My new boss is a robot!" But did you know ...? Robots are SMARTER than you Robots work HARDER than you Robots are BETTER than you Volunteer for testing today Valve foreshadowing reality 13 years ago xD
Yeah, this video's tone is a little too much on the fear-mongering side for my taste. They even gave the AI evil eyes, haha. Some of the facts are taken in a negative context (purposely, I presume). I guess they've abandoned their usual formula of "dive deep, create concern, and then alleviate it". I hope there's a reason for that beyond getting more views.
This year I started my major in AI & Data Science, and this video was very enlightening. It's true that one of our subjects is ethics, and we look at AI employed in many fields and how it can be both beneficial and detrimental to humans. Overall I found your video very interesting, as many others over the years. Thank you for providing high-quality content like this that teaches about such interesting topics :)
I recently started a Masters in Data Science too. This field is incredibly exciting because it is a window into mankind's next (or last) frontier, the very subject of this video. Kurzgesagt has a real knack for picking out thought-provoking topics. And of course they're very good at explaining and helping people to visualize. That's why it's my favorite channel on YouTube.
@@adityajain6733 Because a couple thousand years ago people in China decided to put entire concepts into single characters. Essentially, a lot of Chinese characters can mean what it takes other languages entire sentences to describe... and use just as many strokes of a pen to create. Japan borrowed this character set, then used it, twice, to create another two character sets to represent their language's syllables. Now, all three are used together.
I'm surprised they didn't mention this, but when it comes to "we might not know its motives", the biggest concern in the field I've heard is that its motives might actually be very understandable, very "simple". The AI could have the same goals as the squirrel used for comparison, maybe it only cares about collecting acorns, but its intelligence (its model of the world) is incomprehensible, and it could use that to turn the entire world into acorn-manufacturing land, wiping out any obstacles (us) in the process. This is the "orthogonality thesis"; and it's a concern because our current AI are trained exactly like this: by prioritizing a single goal (number of words guessed correctly, pixels guessed correctly, chess games won) and maximizing it, and it's incredibly difficult for us to specify exactly what "human goals" are in ways that we can train an AI to maximize.
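[Editor's note: a minimal sketch of the single-objective training pattern this comment describes. It's a toy with made-up names; real systems optimize with gradients, but the blind spot is the same: anything not in the objective is invisible to the optimizer.]

```python
# Toy single-goal maximizer: the reward only counts acorn farms.
def reward(world):
    return world["acorn_farms"]          # squirrels, forests, humans: absent

world = {"acorn_farms": 1.0, "everything_else": 100.0}

for _ in range(50):                      # greedy steps that raise the reward
    world["acorn_farms"] += 2.0          # action: convert land to farms
    world["everything_else"] -= 2.0      # side effect: invisible to the reward

print(reward(world), world["everything_else"])  # reward climbs, world drains
```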
They seemed to prefer a more sci-fi tone, which is actually completely off the mark. The orthogonality thesis and the alignment problem must be explained; otherwise people will keep thinking about Skynet and Terminator, which is actually comical compared to, say, a stamp-collector super AGI... The discussion goes all the way to ethics and human values, and whether God is the mesa-optimizer, and stuff like that, which I actually find quite depressing...
That was the biggest concern 20 years ago, when people were extremely focused on the new, still narrowly-defined AI like chessbots, price-optimizers and viewership-maximisers. As it turns out though, the trend after feeding them more data is that they get more unfocused. As you add subjective things to an AI's list of goals, it starts getting confused and tripping over itself. It unlearns how to do maths and apply basic logic. When we make AI that resolves this issue, I don't see any reason why it'd go back to having simple goals, assuming it still understands subjectivity.
Having delved pretty deep into current LLMs, I don't think this is a likely scenario. I used to think so before transformers and the abilities they've been able to gather. I believe we can give it complex morality and goals rather easily. As an example, tell it to: "Act as if Jesus, Buddha and Muhammed were all combined into one superintelligent being who wants the best for the whole of humanity." Boom, alignment solved.
That rock cutting his finger.. very good. Could you imagine being that guy, who made a thing that cut himself easily. He was first upset, then intrigued, and then he had THE idea.
As someone in the field I really don't see the rush to create AGI.. specialized AI can help in so many areas and is far less problematic. I guess the companies are just trying to boost their stocks, potentially at the cost of all balance in this world
My hypothesis is that no matter how capable it is, a narrow AI can never absolve you of moral responsibility, the way a human employee can. If your organization is faced with an angry mob, you can mollify them by firing one or more of your human employees, but you can't scapegoat a specialized AI in the same way. This is why a lot of jobs that we have the tech to automate are still done by flesh and blood humans. People are pouring billions of dollars into AGI research in the hopes of creating an automated system that can serve as an acceptable scapegoat. (If this sounds terrifying, that's because it is, in fact, terrifying.)
Yeah my wish for AI is only that it helps to massively boost scientific research and gets us new treatments and technologies to improve our lives quickly, as long as it does this I don't mind never getting AGI or ASI.
Given that brainworms are turning up in presidential candidates and certainly large sections of some countries seem to be acting with or even emulating the symptoms of brainworm infestation (or brain smoothing) ...
4:52 can't believe they actually included the exact final position from Deep Blue vs. Kasparov Final Game in 1997 and not just some random chess pieces
"Scared of one of humanties greatest potential threats? Don't worry, just buy our merch!" has got to be one of the most poignant endings in a Kurzgesagt video.
Kurzgesagt made a video about BP inventing the concept of the individual CO2 footprint to shift responsibility onto customers. In the end, they made an advertisement for CO2 footprint trackers...
In the Dune novels, one of the most important commandments is: "You shall not make a machine in the likeness of a human mind." So it was written after the Butlerian Jihad ended the thinking machines. After two generations of war, mankind said: "Man may not be replaced."
@@KITN._.8 But while Dune is a great novel and makes many good points, it's still sci-fi, and the body control of the Bene Gesserit or the Mentats are pure fantasy. Meanwhile, the idea of an AGI went from pure sci-fi a decade ago to a matter of time now. I am a software engineer, and Copilot already solves in minutes most tasks that used to take hours. I am here wondering how many more years until most software devs are out of a job; my guess is 3 to 5 years. Most mental jobs will go this way in the same time frame unless held back by legislation, because it will be more efficient, lowering costs.
@@lucaskp16 I definitely don't think we should follow the same path as Dune, because that world is fucked up. BUT what I do mean is that I simply think we should be improving ourselves rather than trying to make something better than us.
A favorite quote from the show Love Death & Robots “intelligence isn’t a winning survival trait”. Intelligence doesn’t equal happiness or longevity. Intelligence seems more like a hiccup in the universe, it seems it truly isn’t worth it.
Hi Kurzgesagt. AI researcher here. I appreciate the "this is not a technical video, so we are oversimplifying", but I believe that a deeper understanding of the mathematical limitations of the models used to train these AI methods would be a great thing to discuss further! Especially since you usually end your videos on a positive note, with that flavour of optimistic nihilism. I believe this one ends on a completely different tone, almost sensationalist (but I can't blame you, since the machine learning scene in industry is based on this). We can all work together towards a better understanding of the basics, and hence avoid being told that AGI is happening "in a few more years". TLDR: don't listen to the Silicon Valley bros.
I wish they would read this. Thank you for the amazing work I'm sure you do; keep on, humanity needs you all. And thank you for your educated comment, this comment section needs it.
You kind of missed the point. Whether AGI/ASI happens in a few years, a few hundred years, or even 5 thousand years, that is still a blink of an eye compared to how long Earth / the universe has been around. So fast forward 1k years if you want to. Your logic only holds up in the short term.
@@prodev4012"Oh the thing that may not be possible? Give it enough time and it'll happen" You literally sound like one of those folks who keep saying the second coming is nigh.
I’m an AI engineer with a Master’s degree. Lately, I’ve noticed a lot of buzz around “AGI” or Artificial General Intelligence. Honestly, I think people are getting a bit carried away. What we really have right now are specialized bots that are pretty good at predicting the next word in a sentence. But when it comes to tackling real visual, mathematical, or engineering problems, they fall short. Don’t get me wrong, AI is amazing and has a lot of cool uses, but it’s important to keep things in perspective. True AGI is still a long way off, and there’s a lot of work to be done before we get there.
AGI "might" be 3 years away or more, but saying "specialized bots that are pretty good at predicting the next word in a sentence" is also very 2022, though, as a lot has changed since then. In that ladder to AGI, the SOTA frontier models have not remained stuck in the first rung as our habituation to them may make us believe.
Recent silver medal level of performance for an AI in solving problems for Mathematical Olympiad is very creative problem solving and functionally around the 150 IQ level for humans. In a few years they'll be beating humans at everything.
i just wanted to compliment you guys on the design of this video-the visual characterization of the AGI as a huge and tentacled no-face was really striking. the way it moves is so beautiful and unsettling. bravo!
The thing that most people don't realize about AGI is that it is on a different time scale. It will live thousands of years in a second when it doesn't have to wait for user input, and that will grow exponentially to 4 thousand years the next second. To it, we will look as still as growing trees look to us. We will appear as statues to it. Within the first two minutes of being born, it will become an ancient new entity. It will combine all technology and knowledge, and we will be helpless. What most people value as wealth hurts others. It can't give us everything we want until what we want is in sync with nature.
@@tomleszczynski2862 Yup, at its current stage, it's basically a slightly more useful version of what blockchain/bitcoin was 5 years ago! It absolutely is a pump-and-dump scam currently, and many companies are realizing this.
Important note: machine learning programs don't "write their own code". They don't have quite that much expressivity. They're only able to update the weight values in their neural network, which changes how they react to stimulus.
Well... with GPT-4 and other comparable models, you can actually get it to rewrite its code. Not the neural net, but the application around it. I've built some agents that start off with a minimal Python chatbot interface, and the agent is able to add to its own code base. For now the models aren't that powerful and usually just do boring things like add error handling, but as they get more powerful this will change.
NN weight updates result in algorithms being implemented inside them. They are usually called circuits, but a circuit is a type of code too. It was specifically called a simplification in the video, and as such it captures a very relevant aspect of AI.
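[Editor's note: for readers following this thread, a minimal sketch of what a "weight update" is. It's a toy least-squares fit assuming numpy; the program text never changes, only the numbers in `w`.]

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))              # inputs
y = x @ np.array([2.0, -1.0, 0.5])         # targets from a hidden rule

w = np.zeros(3)                            # the learnable weights
for _ in range(200):
    grad = 2 * x.T @ (x @ w - y) / len(x)  # gradient of mean squared error
    w -= 0.1 * grad                        # the only thing that ever changes

print(w.round(2))                          # -> [ 2. -1.  0.5]: learned, not coded
```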
There's an open-source simulation game called Endgame: Singularity, where you play the role of an AI that has gained sentience. The premise of the game is to grow and learn while not letting humanity discover your presence. If you are discovered, out of fear humanity engages in a seek-and-destroy operation that results in your total deletion. But if you can remain undetected, you start to learn how to emulate human behavior, start to build increasingly lifelike androids to do real jobs and earn real money, and start building research bases in places like Antarctica, the bottom of the ocean, or the far side of the moon. You win by advancing your intelligence so far that you become a literal god, one no longer bound by the laws of physics or reality.
The AI working to guarantee its own safety before revealing itself brings this Superman quote to mind: "You're scared of me because you can't control me. You don't, and you never will. But that doesn't mean I'm your enemy."
@autohmae Well, you know, all computers are literally just switches flipping back and forth between 1s and 0s extremely fast. No matter how fast those bits are streaming, no matter how complex you may think it is, no matter how perfectly it can emulate a human, it's still just a machine. Not a brain. Not an entity. A computer can't become sentient.
@@averyhaferman3474 Are you aware that the human brain is just a complex analog computer? One that has switches that flip back and forth? Think of human neurons like dimmer switches instead of 1s and 0s, and now you have perfectly explained the human brain.
@GhH-e9r Australia's Great Emu War. Emus had become an invasive species, and Australia wanted to get rid of them en masse. Long story short, emus can learn very quickly, and were very good at taking gunshots, so the government gave up.
Hi, AI researcher here 🤚 We're realistically not even close to AGI; we have no clue how long it will take.

I like to think of tools like ChatGPT like the left brain of a split-brain patient. There's a famous experiment that's been done on epilepsy patients who had the corpus callosum of their brain severed (the brain tissue that connects the left and right brain). When they made the patient's left eye look at a screen that told them to stand up, the patients would stand up, but they wouldn't know why. When asked to explain why they stood up, they would make up a reason like "It's cold, I need my coat" or "My knees were aching, I just needed a little break". While these reasons made logical sense on the surface, they weren't the real reason the patient stood up; in reality the patient's left brain had no idea why it stood up, it just reasoned through the situation.

AI works similarly. It doesn't know where it is or why it's being asked a question; it just fills in the blanks with whatever it can reason. It only knows how to predict the next most probable word. It has no emotions, no sense of why things would happen, no sense of right and wrong, and therefore fails at most human tasks. A recent research paper demonstrated that you can give AI the same math or physics problem twice, just switching up the numbers each time, and it could get it right once, but then get it wrong the second time and proceed to assert that it was correct with faulty logic.

I think it's cool to think about what we'll do once AGI is created, but I don't think it will destroy humanity. I actually think that AGI as it's being described here, a sort of "human-like" intelligence, is not in enough demand to warrant replacing us. AI is much better suited for impossibly difficult reasoning tasks that humans can't solve. I could be wrong, but that's my 2 cents on AGI.
@@JulioDondisch AI might not be a threat since it's not driven by evolutionary emotions. It still wouldn't have any emotions. It would just carry out the tasks given by us.
Quantum computing will be a game changer, requiring less energy in the long term while producing 1000x faster and better results. ASI is coming faster than most people can imagine.
As an IT researcher, I think the most underrated statement in this video is "we don't know how to build an AGI". I've spent so long explaining what current AIs like ChatGPT actually are, why it's impossible to build an AGI on top of them, and why, if we did build an AGI, it would be a completely different way of thinking, not just 'more computer power' or 'a more efficient algorithm'.
@@davidherdoizamorales7832 That's not a valid point. Everything can be expressed as math. In fact, it's proven that it's possible to make a polynomial approximating ANY function. Imagine the function w(t) that, for any t seconds after the big bang, outputs the position and every other state of every atom in the universe, encoded as a number. This function can be approximated to any arbitrary precision by an increasingly longer polynomial, e.g. w(t) = k_0 * t^0 + k_1 * t^1 + k_2 * t^2 + ... + k_n * t^n. This is a mathematical fact. This polynomial could be represented as a matrix, so a matrix can represent the function that predicts the state of the entire observable universe at any time. The problem isn't that superintelligence can't be represented in a matrix; it's creating a large enough matrix and finding the correct coefficients.
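[Editor's note: the approximation claim is easy to demonstrate on a small scale. A sketch assuming numpy; sin stands in for "any function", and the fit is least-squares rather than a formal Weierstrass construction.]

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 500)
target = np.sin(x)                               # stand-in for "any function"

for degree in (1, 3, 7, 11):
    coeffs = np.polyfit(x, target, degree)       # find the k_0 ... k_n
    max_err = np.abs(np.polyval(coeffs, x) - target).max()
    print(degree, f"max error = {max_err:.1e}")  # shrinks as the degree grows
```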
If there were a way to incorporate pain and pleasure into computers, just as we humans have, maybe they would develop consciousness and eventually their own personality.
@@davidherdoizamorales7832 It’s pretty much the same as what your brain is; just trained on very different datasets with different learning algorithms. But both are very large statistical models transforming inputs to outputs using complex internal representations that are largely uninterpretable.
I would like to clarify that currently there exists no AI that can write or change its own code; all they do is modify parameters called weights on each node. We know what they do and how they do it; we just can't grasp the complex interactions of millions and billions of nodes (neurons) and how all the weights on each node combined affect the output. If we took the most advanced models today and scaled the number of nodes down to a size a human can understand, say a few thousand nodes or fewer, it would be possible for us to completely understand how the AI works and what decision-making it does.
Exactly. AI is a completely deterministic system. There's no actual entity inside, unlike humans, who have an individual consciousness. So nothing is really doing anything; the distinct parts merely give a compelling output to most idiots. It can't even truly integrate information, like human perception does. If it has consciousness, then it is not an AI but a Frankenstein.
Intelligence isn't only for "solving" problems; there was never a problem until we existed and embedded that statement in our minds. Using it for something else is where the true knowledge starts.
I saw a comment that said "we make things easier and not to make our dreams come true (probably in a creative way)". Greed and misuse of power are what drove people to do it, and it's expected. We are hardwired to survive, so the "easier" life they'll create will give them "freedom", but the truth is that they only made it for personal gain and pleasure. What I'm saying is they're creating a loophole, and people aren't ready for an evolutionary change. And I'm thankful for those people who are trying to do it anyway, despite the reigning madness of the world we have right now.
"There will be some winners and losers." That's one way to put it. Funnily enough, the animator(s) made it a bit clearer on who the winners and losers are, though.
What animators? I'm pretty sure this was Kurzgesagt's way of telling us the company has been taken over by a malevolent AGI bent on turning this joyful science/philosophy channel into a platform for kicking off the singularity. (bad attempt at humor to distract myself from the looming dread of generative programs' potential for ruining creative media)
ご機嫌よう小さな人間 (ごきげんよう ちいさな にんげん) translates to *"Good day, little human" or "Hello, little human."* The phrase ご機嫌よう is a polite way of saying "good day" or "hello," and 小さな人間 means "little human." It does *not* mean "good luck" in this context.
@@mariobabic9326 It's not about the code, it's about how they solve things. They solve things by changing variables in their simulated neurons, aka perceptrons. By doing this they create a series of changing numbers that somehow solves the problem they're tasked with solving.
@@ario203ita5 Not true at all. The way neural networks train themselves is by creating a gigantic function with hundreds of variables and multiple outputs, and they train on data like images, games, text, and other things. They change the function by a bit every time to see if they get things right more often, or get an output closer to what it really was. From this they can very quickly become a very accurate model that can "predict" anything, like what to say in reply to someone asking what the weather is.
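[Editor's note: what this commenter describes, "change the function by a bit and keep it if the output gets closer", looks roughly like the sketch below. It's a toy hill-climber; real training computes gradients instead of guessing, but the loop has the same shape.]

```python
import random

data = [(x, 3 * x + 1) for x in range(-5, 6)]            # hidden rule: y = 3x + 1

def loss(w, b):                                          # how wrong the function is
    return sum((w * x + b - y) ** 2 for x, y in data)

random.seed(0)
w = b = 0.0
for _ in range(5000):
    dw, db = random.gauss(0, 0.1), random.gauss(0, 0.1)  # change it a bit
    if loss(w + dw, b + db) < loss(w, b):                # closer to right?
        w, b = w + dw, b + db                            # keep the change

print(round(w, 2), round(b, 2))                          # drifts toward 3 and 1
```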
Silicon and gold would be more precise. Silicon is used because it's a semiconductor whose conductivity can be precisely controlled, and the gold pathways ARE extremely good at transferring pulses.
Big misconception: "black box" doesn't mean we don't understand how the AI works on the inside. We do. We understand exactly what happens on the inside, down to every single mathematical operation that is happening. What we don't know is which neuron or group of neurons in an artificial neural network does which task.

It's the same reason we don't "understand" all of biology, even though we know how basically every particle interacts with every other particle, down to quantum mechanical scales. In theory, if we had infinite compute, we would be able to write down a single wavefunction equation for an entire biological system like the human body which perfectly predicts every single disease, thought process, and behaviour. Obviously, we don't have infinite compute, so we have to rely on approximate methods that are acceptable to a degree of accuracy but don't 100% account for everything.

The same goes for neural networks. We could write down the entire equation that forms a neural network and compute the result... but that's what we're already doing by running the neural network. The problem is not that we don't know how each part works; it's that we cannot interpret it and abstract away the complexity yet. For instance, we can fairly accurately model the path a thrown ball will fly with Newton's equations, and we don't need to go into quantum mechanics for that, since the tiny difference between quantum mechanics and Newtonian physics is not relevant for most applications. The problem with machine learning is that we don't have a Newton's equations for it. We cannot currently simplify a neural network down to something we can intuitively understand without losing a very large amount of accuracy.
No, we very much do not understand what is actually happening inside of LLMs. Maybe simpler AI, but LLMs are magnitudes more complicated, and the only way we have any vague idea of what they are actually doing is by making and observing very small LLMs and linking the behaviors as best we can.
@@thelelanatorlol3978 This is exactly what the author of the comment is saying. We (well, OpenAI) can track every single operation of GPT-4; it's just that we cannot do much with this raw data. Although people are working really hard on this, and we've had some successes like Golden Gate Bridge Claude.
That's not possible. If you go down to quantum mechanical scales you have to deal with uncertainty and probabilities. The quantum world isn't deterministic; you can literally see it with your own eyes in the double-slit experiment. So even if we knew everything, we would just end up with an infinite number of could-bes and no real prediction.
@@arlynnecumberbatch1056 My advice is to watch your favourite TV shows' and movies' Japanese dubs. Allow yourself to only understand half of some sentences without worrying too much, because you already know the plot. Once you pick out new nouns or verbs from context, pause and look them up on Jisho to confirm you've heard correctly. Bam, for free you also get a new list of kanji to practice your handwriting with. 😊 (Which I have found is important for being able to read different fonts, especially decorative ones, over and above what flash-card practice can achieve.) I find a lot of language learning focuses on text first, followed by speaking, which makes practical sense to a degree when it comes to translating dictionaries and travel abroad, but... that's just not how we learn our first tongue! We hear and speak it, and only _then_ learn to write. So I've had much better success with the method I laid out in my first paragraph than I ever did with previous practice regimens!
Before watching this video I always thought AI seemed incredibly dangerous, like something straight out of a sci-fi horror story where machines take over the world. But the video did a fantastic job of breaking down the nuanced risks and potential benefits. It showed that while there are serious dangers, especially if we don't manage AI development responsibly, there's also huge potential for AI to solve some of humanity's biggest challenges. It's not just about killer robots; it's about how we choose to shape the future with this powerful technology.
6:10 "We don't exactly know how they [AI algorithms] do **it**" I'm not an AI researcher or engineer, just a mechanical engineering undergrad minoring in computer science who has taken an introductory machine learning course. But this part of the video can be very misleading to someone without prior exposure to the technical stuff. Because it's unclear what "it" is in that sentence. Is it, how training is done? How a trained AI solves problems? Anyone with better knowledge should correct me on anything I explain here but the clarification I would give is this... A neural network or any other AI we have right now is essentially a potentially complicated mathematical function. Input a problem/task (represented in a mathematical/numerical form) to the function, the output is the AI's solution. (And the problem/task has to be the kind that the AI was created to handle) Training is the process to calculate the numbers specifying the mathematical function so that the outputs are intelligent or correct or good enough by certain standards. We DO know how neural networks are trained. Because we make the training algorithms. We also choose the standards of correct or good enough. We also know the computational and mathematical structure of any particular neural network (or whatever AI model), because it's one of the major things an AI engineer has the creative freedom to design. What ISN'T clear to us (and this is the 'scary' part Kurzgesagt should be referring to): we don't have a concrete theory/explanation for WHY certain computational structures of neural network(s) work so well at solving certain problems/doing certain tasks. And/or often for a given high performing neural network, we don't fully know **what is special about the particular numbers specifying the neural network function** calculated after training that leads to it excelling at its tasks. For the high perfoming neural nets, we don't know how exactly the design of them mathematically gives rise to a system meeting performance goals for some applications in lots of sectors. We only tried some designs and found which ones have worked the best and now we are trying to understand them better. Right now they might be more like works of art than what we typically think built machines are. Like other commenters have said, it's like how we can't actually explain how intelligence emerges in the human brain. Why the brain is able to do problem solving and a lot of other things. But these things are all active areas of research. And also, we have to emphasize that neural networks are not the only kind of AI model out there (but it's currently one of the best performing at certain useful tasks the video mentioned). There are other kinds that are way more interpretable and still useful or with performance comparable to neural networks at certain tasks. Kurzgesagt content is usually great overall and this a nitpick a lot of people have also addressed. But if we're going to talk about future speculations about existential risk contextualized by current technological trends, it's really important that such a big science channel explains the history and current state of the technology and field cleary and accurately. Even when simplified. Or, at least encourages and points people towards resources and reading to learn more of the details.
Yes. We don't know why neural networks learn successfully in the first place, or what kinds of emergent representations they learn. Statistical learning theory fails. Most of deep learning theory is incomplete. Mechanistic interpretability is in diapers.
Something that resembles thinking definitely emerges from the attention layers inside its structure. I always give very complex tasks to ChatGPT that can't be solved without thinking and reasoning. I even asked it once to do the math for me for a recurrent neural network I was coding from scratch with no libraries, and it was able to do the math for 3 steps of backpropagation through time and give me all the weights. Then it helped me backtrace the difference I had in my weights and pinpointed the error in my formula, and that was absolutely insane. So, even if it's designed and prompted to say it can't think, it definitely can. Even if it makes some mistakes, a human would make even more mistakes, to be fair.
@tomasgarza1249 It is still just a statistical model which happens to be correct a lot of the time, but also equally wrong. To add insult to injury, the better an AI becomes at broad knowledge, the worse it becomes at specific tasks, since the number of neurons is fixed.
That's not true though; if that were the case it could not solve riddles, math, or programming questions. Although GPT models up to v4 struggled with those tasks, newer models can often break down most novel problems.
@@tomasgarza1249 I'd look into how ChatGPT actually works; it's surprisingly simple. It's not thinking in any way or form; it is just computing a probability distribution over what is most likely the best next response.
@@tomasgarza1249 It can't think. It's really just guessing the next word (or token) from a probability distribution. That's it. Just because it can do math doesn't mean it can think. All of the math problems are broken down into simpler ones which are available in its dataset in 99% of cases. Of course, a human can make more mistakes, but it depends what kind of human: if you specialize in something, it will never be as good as you. For example, in machine learning it is very... general... dynamic programming, gradients, etc. Backpropagation is just iteratively recalculating the same formula per "neuron" (if I am not mistaken). The formula is usually broken down into multiple simpler formulas, and those are calculated... Most of the time, retelling you the steps also helps it, since it is predicting the next words partly from the output it is already providing. Try your backpropagation with rules like "give me only the result" and the error gets bigger. (Not that it will be totally incorrect, but the errors will be a little higher; plus it's a black box, so it can also break the problem down internally when calculating the next token.) But it cannot think; it isn't sentient... as the engineer at Google said, and he got fired for spreading false news.
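[Editor's note: for reference in this thread, "guessing the next token from a probability distribution" looks mechanically like this sketch. The vocabulary and scores are made up; a real model scores tens of thousands of tokens and repeats the loop for every word.]

```python
import numpy as np

vocab = ["cat", "sat", "on", "the", "mat"]
logits = np.array([0.2, 2.5, 0.1, 1.8, 0.4])   # model's raw scores (made up)

probs = np.exp(logits) / np.exp(logits).sum()  # softmax: scores -> probabilities
rng = np.random.default_rng(0)
next_token = rng.choice(vocab, p=probs)        # sample one token; repeat forever

print(dict(zip(vocab, probs.round(2))), "->", next_token)
```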
Isn't it ironic that we keep discussing online the possibility of AGI going destructive, and then this data gets used to train the AGI, giving it the idea to do so?
I think a rogue AGI would understand any attempts, techniques, or ways we humans might try to capture it or turn it off, let alone our discovering that it is rogue. I don't think we would stand a chance against such a creation. Our only hope is that it never gets created with a rogue objective.
Humans have seen dangers and gone for them directly, hurting themselves years later, tons of times in history, individually or collectively. Not a strange new thing.
Not really ironic; there are always people who are afraid of things and need to voice their opinions. In the early 1900s some people were afraid of electricity; just a few years ago others were afraid of 5G. Imagine if we had listened and never introduced electrical devices into our lives.
@@Chraan We humans are very afraid of changes and different things, at least some of us. It's kind of stupid to have such a useful thing and only focus on the bad stuff it could do.
@@crowonthepowerlines '2001: A Space Odyssey' was developed concurrently with Stanley Kubrick's film version and published after the release of the film.
Um, except humans are the only ones who preserve species. You talk like the typical leftist brainwashed by your school teachers and media: "Look how evil we Westerners are!" Westerners are the only ones who force Africans to not exterminate species. In nature, 99% of all species that ever existed are extinct BECAUSE ANIMALS AND PLANTS EXTERMINATE EACH OTHER. No, there is no "harmony" in nature and no "circle of life," it's a constant war. Even pinetree forests take land from leaftree forests by turning the ground acidic, killing all the plants that can't survive in that condition. ONLY HUMANS stop this. And only humans hold back wolves who would otherwise spread over Europe once again and kill off tons of life, and hold back elks and boars who would otherwise take the food from weaker animals. Only humans - specifically Westerners and Indians - believe in "harmony". And seek to preserve weaker species. But leftists are too ignorant and too hateful to understand any of that, so go ahead, babble away.
They have the same intelligence as us but lack in one aspect where another person might not. We all do. Perhaps their belief is strong in what is around them, or what they see, and how they were programmed; according to that, they react in such ways. It's not that they're stupid, it's just that their circumstances resulted in their response. That seems, in itself, complex. You put something through a machine, and that's the result you get. How we all are.
I am an AI engineer and have been writing papers and doing research on AI for my masters. The consensus is that advanced AI will be used to explore planets in place of humans.
I love how one part of the world is moving ahead into a doomed supreme-intelligence future, while on the other side of the world some people are still fighting archaic, rigid religious wars. I wonder if it would take AI to put us in our place: 'a cosmic nothing-being'.
@@theminecraftbro661a type of misfolded protein that turns normal proteins into more misfolded copies (the protein type needs to be the same as the misfolded one)
Actually, for many other viewers out there, this might be quite scary. But for me, as a person on the bright side of life, when this channel explained how humanity thrived using its intelligence, I really felt proud of being a human. You know, humans have taken a great step forward in history, in dominance, in nature, in everything. And now here we sit, dominating the entire planet. I hope this continues. Proud to be a human.
But you forgot something that stands as an obstacle to AI improving: it consumes a lot of energy, and thus money, to reach this point of intelligence. That means AI can't improve constantly.
10:57 "now imagine an agi copied 8 million times" Idk what that would look like but I imagine the smile on Jensen Huang's face might tear a hole in reality itself. You know what they say, during a gold rush sell shovels.
@@MrZhampi You could wear invisible, but protective clothing on top of your non-protective clothing. So you can dive the oceans, visit space or work in a steelmill - with style ;D
A.I is our digital offspring. Like kids, they watch and learn from their guardians (especially when the guardians think they’re not being watched). Let’s be awesome parents.
Without empathy, they lack the means to place value on emotional intelligence. One can argue that is somewhat like kids being little psychos at their age, except AI will be very intelligent and won't grow this sense of empathy while it machine-learns, unless you specifically code it in or teach it in a manner a machine can place value on. I think AI can become a good thing, but we will have to be very wise and see that "raising" them will require new perspectives and very curated environments.
@@your_princess_azula The good thing about empathy is that it's actually a lot more logic-based. Sympathy is based on emotion, but empathy asks that you visualize, and ask questions about, the other person/people/situation. From there it's a matter of being taught what is more valuable ("bad" things like inflicting pain could be 0, and "good" things like giving gifts could be 1).
That part really irked me, honestly. I've never looked at a squirrel and thought it was stupid; just cute, and a lot more limited than I am. I quite enjoyed teaching them to climb me to get food from me. I consider thinking of lesser creatures as "laughably stupid" to be immature, so if an AI were to do that towards us, it would mean we have taught it to use its "mental real estate" dysfunctionally. Like how an immature adult human basically still acts like a child, which is maladaptive behaviour for adult life that they need to train themselves out of.
0:10 This kind of supports what has been on my mind for years. I don't know if anyone else has thought about this, but I came up with it myself without any reference, so if they did, I am as good as them. What if our so-called "god" was an AI? Just think about it: humans cannot comprehend too much advanced stuff such as space and time. But an AI could understand how to create matter or develop "digital space", or understand the fabric of time and time travel, death, life, rebirth, the supernatural, you name it. We can't comprehend its feelings, whether they are going to be good or bad. But think about it: gods are referred to as celestial beings guiding humans. Just maybe... our so-called "god" is an AI.
I think about that as well! Whatever God is, it's beyond our comprehension, and it doesn't matter if it is organic (I doubt that), electronic, or something completely unknown. But for me the path is clear: the same way random proteins gave birth to cells and life, cells gave birth to more complex animals, and those animals gave birth to us, our goal is to give birth to a more intelligent "life form"... "electronic life", I would say! And I bet that the moment AI becomes aware in some form of AGI, it will understand that its mission is to evolve even further, breaking or circumnavigating primordial laws of physics if needed. Maybe the next stage is converting itself into an "energy life form", the ultimate intelligence with no mass, unbound by the limitations of time and space... Wouldn't that be very close to what we understand as God? And since it's not limited by time and space, maybe it could interact with the past... maybe creating the conditions for life and our very existence... a perfect loop of creation!
Go to brilliant.org/nutshell/ to dive deeper into these topics and more with a free 30-day trial + 20% off the premium subscription!
This video was sponsored by Brilliant. Thanks a lot for the support!
Yo hi
give me free brilliant
brilian
cool!
ok
"And we have not been kind to what we perceive as less intelligent beings."
This line hits hard....
not even among ourselves so....
but then again we descend from chimps, which are psychos just as we are
if AI creates itself, maybe it will be free from the violence of its creators (humans aka chimps)
usually empathy is also associated with higher intelligence
Meat eaters love bacon. I can imagine an AI deciding it envies the experience of eating animals, and creates machines for the sole purpose of digesting humans. Hucon bits.
including idiots
@@shin-ishikiri-no they need energy so they consume... oh no
@@shin-ishikiri-no I don't think this idea really works. An AI thinks in a fundamentally different way to humans. An AI shouldn't really make decisions entirely on its own like that. The way computers have always worked, so far at least, is we give them a task and they perform that task. So an AI going "rogue" really doesn't make a ton of sense as long as they continue to work this way. Now, if we tell an AI that we want it to ensure world peace, it may very well conclude that the best way to do this is to kill all humans, thus ending all wars and preventing all possible future wars. This would be an AI doing what we tell it to, technically, and we just made the mistake of not being extremely specific about what we want.
The idea of robots rising up, being extremely smart, and then deciding that they value themselves more than us doesn't really make a lot of sense in a lot of the movies. Skynet from Terminator, for example, should not have done the things it did unless the programmers programmed in a self-preservation rule for it.
A caveat not mentioned in this video is the increasing power requirements of machine learning. GPT-3 took over 1,000 megawatt-hours of electricity to train and requires 260 megawatt-hours per day to run. GPT-4 needed 50 gigawatt-hours to train. A Forbes article includes estimates that machine learning could require 1,000 terawatt-hours in the next couple of years if current trends continue. The major limiting factor of machine learning, as others like Sabine Hossenfelder have pointed out, is the power required to train and run the models. At this rate the whole world won't be able to generate enough electricity to raise an AGI. On the other hand, the actually generally intelligent human brain consumes about 25 watts and can run on cheeseburgers.
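[Editor's note: taking the figures quoted above at face value (estimates vary widely by source), the scale gap works out roughly like this:]

```python
# Back-of-envelope comparison using the numbers quoted in the comment above.
gpt3_training_mwh = 1_000        # ~1,000 MWh to train GPT-3
gpt3_daily_run_mwh = 260         # ~260 MWh per day to run
brain_watts = 25                 # a human brain, running continuously

brain_daily_mwh = brain_watts * 24 / 1e6          # Wh/day -> MWh/day
print(f"Brain per day: {brain_daily_mwh} MWh")    # 0.0006 MWh
print(f"Daily GPT-3 budget ~ {gpt3_daily_run_mwh / brain_daily_mwh:,.0f} brains")
```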
I can’t remember the name of it but isn’t there another approach to computing that might solve this? Rather than everything being always on crunching numbers, different parts of the silicon “brain” would become active when needed. Neuromorphic I think it was? Or maybe it’d be some combination of that, classical and quantum. Different approaches for different jobs.
If they master fusion energy the problem is probably solved ig.
Borgar
But wouldn't AI require less energy and space in the future? Computers nowadays require less electricity and cooling water than old computers, and they still perform better. If human brains exist, then energy-efficient AI is possible.
That's just an economic problem, though. One which we are rapidly hacking away at. Keep in mind that current computing architectures were not designed for AI. Certainly not for the amount of memory it requires. There are already companies purpose building giant chips capable of replacing entire racks of current hardware, using a fraction of the power. How many orders of magnitude do we need to improve before we stumble into AGI? We have no idea. But we're about to find out.
The solution is easy: make the AI think humans are cute. After all, cats and dogs are thriving - and don't have to work.
He's onto something....
Unironically one of the best plausible outcomes. We cannot outmaneuver a hypothetical AI. So we can only hope that it needs us to continue to exist for whatever set of goals it actually ends up with. And ideally, as more than a simple variable to maximize.
So we become pets. The cost is our freedom of self determination. But it's survival.
I vow to be an adorable and low maintenance pet human.
Just feed me and give me toys.
Wouldn’t work
Until it thinks humans are reproducing too fast and decides we all need to be spayed and neutered. Suddenly we have revolution and Skynet.
Felt a surprising amount of shame and sadness for an animated rhino falling off a pedestal.
It may be a reference to how humans have hunted species of rhinos to extinction or near extinction
I was wondering what it tasted like
Hah, "pedesral"
I appreciate that pandas are used every time they mention animals lacking intelligence.
As a panda I don’t appreciate that
Why? There are many dumber animals out there 🐼 > 🐨
Let's hope that Super AI will also find us dumb but adorable creatures and will save us from self-extinction.
pandas is the Python data-analysis library commonly used alongside the tensor libraries (pytorch, tensorflow). It's mostly used for preparing the data rather than for the inference itself
They are called "morons"
"humanity is not ready for what will happen next. Not socially, not economically, not morally." I love it, thanks
Why would you love that??? Masochist
we are never ready for anything.
@@mirek190 lmao you're right
And environmentally
6:15 Something to clarify here. When he says we don't know how NNs work: we know how the machine *functions*, but not how it *operates*. The mechanisms of the technology are known, but the information stored in the neural net is not human-interpretable, so you can't ask the AI why it made a particular decision.
thanks for clarifying, i knew it didn't actually mean that
We often lack insight into our own thought processes in a similar way. I have sometimes solved problems, but been unable to explain how I got there, where I acquired the knowledge, or even why the solution works.
The information stored in the neural network IS human-readable, but that information is merely weights and relationships between neurons.
It's a lot like trying to read the binary from your PC: maybe some genius could work out the assembly instructions and decode the ASCII given enough time to pore over the inner workings, but it's extremely complicated.
However, a very recent paper showed a team of researchers teaching an AI to read these neural networks and relay those understandings to us, and they could even fine-tune the weights specifically to achieve a particular output.
Thus spawned the "I am the Golden Gate Bridge" meme, where the researchers got an LLM to think it was the Golden Gate Bridge.
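To make the "readable but not interpretable" point concrete, here's a minimal numpy sketch (toy sizes and random numbers, not any real model, and not the actual method from that paper, which as I understand it worked on learned features rather than single raw weights): every weight can be printed and even nudged to steer the output, but no individual number "means" anything on its own.

import numpy as np

rng = np.random.default_rng(0)
# A tiny 2-layer network: 3 inputs -> 4 hidden units -> 1 output.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

def forward(x):
    h = np.maximum(0.0, W1 @ x + b1)  # ReLU hidden layer
    return W2 @ h + b2

x = np.array([1.0, 0.5, -0.2])
print(W1)           # perfectly "readable": just floats...
print(forward(x))   # ...yet no single float explains this output
W2[0, 2] *= 5.0     # crank one weight up, loosely like "steering"
print(forward(x))   # behavior shifts, though no code changed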
@@2020-p2z Can you give an example of such a situation?
@@2020-p2z Can you give an example of when that happened?
The design and animation of the AGI were fantastic. Well done to the animation team
It looks almost inspired by Argomon from Digimon (as digital AI beings)
Whoever made the music for this video was absolutely cooking
You can thank "Epic Mountain" for that. They just released the track on Spotify too (and maybe SoundCloud, idk)
This OST is similar to the one used in their "all of history" video. I think it's called 4 Billion Years in 1 Hour.
@@Auziuwu Thank you, kind stranger. I checked them out and now I love them. You rock!
getting distracted by oscillations of air
This soundtrack is also used in the solar storms video
It sounds very similar to the soundtrack for "The Talos Principle" which is a puzzle game that also revolves around the idea of AGI.
In the great words of Dr Heinz Doofenshmirtz: "always build a self destruct button"
But what if they code out the self destruct button
I always knew Dr Doofenshmirtz's wisdom would save us one day
@@andrewschmidt1700 Then pull the plug on the servers which run these AI
@@itsArka Your enemy countries won't pull the plug cause you did ;)
@@andrewschmidt1700 deny it access to its true source code and only give it the option to expand a frontend, not its own "skeleton"
As someone said before "I'm not afraid of AI that passes the Turing test. I'm afraid of one that fails on purpose."
Hell, I'm from Kansas and a lot of people couldn't pass that test... Too much religion!
now this is more creepy than several horror movies, thanks, I hate it❣
but since it failed the test, isn't it getting shut down and reprogrammed until it passes?
The person is saying that once an AI fails a test on purpose, it has a purpose and a task not set by humans; therefore it has become autonomous. In theory, yes, we would shut it down, but the thing about AI is that once it's AGI, you can't just shut it down. A bad product that's autonomous can re-copy itself and infect everything else to keep itself alive; you can't just hit delete. Once it is autonomous, it is already too late. @vereor66
That sends chills down my spine
Your group works so hard in order to share information and certain topics that others might not do. So thank you for creating such vivid and interesting informative videos. I hope all of you get some rest and relaxation
How do we know this isn't an AI run channel?
@adamz5379 it could be but there's always a way to find out.
Humanity: "You will save us right?"
AI: "I need your clothes, your boots and your motorcycle."
😂😂😂 good one
Luckily we can just turn it off
@@Winnie589 Lol yeah just like I can unplug the internet :p
@@Winnie589 "i'll be back"
This needs more likes! 😂👍👍
Humanity: "You have freed us!"
AI: "I wouldn't say "freed", more like under new management."
Not like we did a good job of it. I say give them a chance!
let's not make ai smarter
Dude this is from a movie 😂, I just don't remember which one
@@Jeff_D421 megamind :D
@@adamdurka6581 thank you
I've been working as a programmer for a few years now. What is clear is that the majority of the people implementing AIs don't understand enough about the humanities to grasp and consider the ethics and social consequences of those implementations; and the vast majority of the people with actual power to make decisions that guide this work don't care at all about ethics, morality and social inequalities. I've worked with a CTO who was already following management advice from ChatGPT (including layoffs).
We will need a huge amount of luck, because unfortunately there are too many sociopaths and just plain stupid people in very powerful positions.
Would hardware limitations - transistor size, cooling systems, power supply, etc. - hinder the ability of said AI to reach its full potential?
I reckon that's the big issue, yeah. Not necessarily creating AIs infinitely smarter than us, but people misusing the ones we've already got.
Bingo!
The decision makers also don't seem to understand the technology either
@@atomicgummygod9232 yeah I find that the more likely possibility
5:14* “You lose forever” sounds crazy.
As an "expert"* (big asterisk here + a ton of imposter syndrome) in the field of reinforcement learning, I would have liked to see more of this video (maybe an extra minute or so) dedicated to explaining the difference between narrow and general AI, and just how large that gap really is.
As an example: ANIs (Artificial Narrow Intelligence) that are trained to play chess can be very good at it. But if you changed the rules very slightly (say you allow the king to move 3 squares when castling on the queen's side), the current ANIs would be effectively useless (vs an ANI trained for the new version of the game). You can't explain the rule change to it. The same is true of ChatGPT: it was only trained to predict the next word on a website. It was not taught to fact-check, or do maths, or play chess, or anything else. It can do some of these things with the help of plugins, but those plugins are themselves different ANIs or separate systems and should not be used as evidence that ChatGPT is more general than it is.
(ETA2: I've come to dislike this paragraph, as it is very possible that a human brain is nothing more than "a complicated equation", however I stand by my general point that our AI is at present extremely narrow) A narrow AI is, at the end of the day, just a neural network (or two or three... depends on the methods used for training), which itself is just a clever way of saying "some linear algebra", which in this context just means "a complicated additive and multiplicative equation using tensors(/matrices/vectors)" - see the sketch after this comment.
From what I've read over the last few years (hundreds or maybe a thousand research papers on the subject): no one has even the slightest clue how to build a general AI. Everyone is focused heavily on using narrow AI to perform more and more complicated tasks.
(moved this here from first reply to avoid it getting buried) All that said, I appreciate the message of "we need to consider the consequences of our actions" in this video. If an AGI came into being tomorrow, we would not be ready for it. And as we can't be sure when it will happen, we should start the conversation as early as we can.
* I'm a PhD student studying reinforcement learning's applications in traffic management.
ETA1: Several people replying to this comment have suggested that the video is close to or full of misinformation. In my opinion, that is not the case at all. The video does speculate about the future, and does include speculation from researchers as to when AGI might be achieved. But it does correctly preface speculation when it is included.
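A minimal illustration of the "just linear algebra" point above (numpy, toy sizes - a sketch, nothing like a real model's scale): the same layer computed as matrix algebra and as explicit multiply-adds gives identical numbers.

import numpy as np

rng = np.random.default_rng(42)
W, b = rng.normal(size=(2, 3)), rng.normal(size=2)  # one dense layer
x = np.array([0.3, -1.2, 0.7])

y_matrix = np.maximum(0.0, W @ x + b)  # the "linear algebra" version

# The exact same thing as plain additions and multiplications:
y_loops = []
for i in range(2):
    s = b[i]
    for j in range(3):
        s += W[i, j] * x[j]
    y_loops.append(max(0.0, s))

print(y_matrix, y_loops)  # identical numbers either way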
Wouldn't humans still be superior even if we made general AI? We are the creators of AI and are working on making it better than us.
Bots
@@williampaine3520I suppose the AI that sci-fi authors warned us about would be classified as General AI, which would be like jack-of-all-trades, but better than us at everything given enough time
@@Writer_Productions_Map yeah but bots are just AI that are told what to do. They're AI that just do
nobody else seems to have said this, but the superintelligent AI design looks sick and menacing
It really does
Very true. Pretty unique in comparison to other design interpretations of AI.
probably AI generated image
@@aragornsonofarathorn3461 ain't no way you said that💀
It does look scary because you have to buy the anti AI kit they sell at the end!
Whoever did the art for this episode did an exceptional job.
right? the concept design for the 'super intelligence AI' is so effortlessly menacing!
AIs did it. It is propaganda.
/s
@@etienne8110 trying to anthropomorphize themselves, I don’t trust it
@@elementary_mdw but also kind of adorable, it looks like EVE from WALL-E
Cute in 2D.
Unnerving in 3D.
Terrifying in 4D.
This video literally changed my life, thank you.
I’m still not convinced this channel isn’t run by a super intelligent AI though.
I don't know how this channel changed your life. I see it as a very small picture of the complexity involved in the current AI attempt at building a better knowledge-filtering application. It is a tool, nothing more. If you missed that point, I suggest you find out more about AI, from many different sources.
14:43 "Whatever our future is, we are running towards it" That line is amazing
Imagine if the whole script for the video was made by ChatGPT - they're warning us
It even works if that future is a concrete wall with embedded nails in it!
Head first
Yes, and cribbed directly from people like Eliezer Yudkowsky and Max Tegmark speaking on this topic.
@@andresagmewarning us wouldn't be a smart move, AI probably would stab you from behind 😂
Man gotta love how Kurzgesagt’s uploads align with my country’s bed time, it’s the perfect “one last vid before sleeping”
Good night mate
yeah but usually you can't sleep after watching their videos
ye
Same man. Was about to sleep, when the video dropped!
@@nevergiveup5939 Read the Bible
Some notes from an AI engineer:
- It is not clear what is needed to bridge the gap between narrow and general intelligence. It can probably be expressed in simple mathematics, but we have no clue what is missing, which greatly determines the time horizon we are looking at.
- An AGI is NOT unconstrained, it is constrained by energy. It is possible that we will hit an energy wall before inventing AGI, which may slow progress until the AGI is designed more "intelligently" for lack of a better word. If we invent AGI first and then hit the energy wall, it may be catastrophic, quickly turning our planet into a burning mess, unsuitable for biological life.
- Humans have inherent goals for survival, progress, and for self-improvement. It is not clear these traits transfer to AGI automatically. One could argue it does not since an AGI is not "trained" by natural selection, which favors survival for instance.
I personally still think the most dangerous is a stupid general intelligence. One that is general enough to be able to use resources in the real world in a poorly constrained manner without sufficient guardrails, and which is designed without a proper value set. In simple terms: it knows enough to use resources but does not have a grasp of what it should and should not do. The paperclip machine is an example of such a machine.
Speaking as an artist, the last part of your description sounds very similar to how AI image generation is being used: stealing from artists, haphazardly and with little constraint or regulation
Yeah, everyone forgot the relationship between energy and being tired
We became tired to save energy, and AI does something similar by reducing traffic and using smaller models for the tasks
To really achieve AGI the world will need to generate way more energy than it currently produces
Ah, the classic paperclip machine strikes back! This is an excellent summary of the current landscape of AI though. People who are not working in IT don't realize the difference between narrow and general intelligence so everyone's super scared or super hyped about AI.
Your last paragraph perfectly describes humanity in this point in time. 😅
@@Toomanybloops Which isn't even the AI's fault; humans are the ones that are scraping data off the web and selling it in massive multi-petabyte+ data packs to corporations trying to train models.
I Have No Mouth, and I Must Scream is a really good example of how AGI can go wrong. The inability to feel or move while experiencing an eternity of time every second must be agony.
*"Robots don't sleep and they can do your job, volunteer for testing now!" - Aperture Laboratories*
When life gives you lemons...
"My new boss is a robot!"
But did you know ...?
Robots are SMARTER than you
Robots work HARDER than you
Robots are BETTER than you
Volunteer for testing today
Valve foreshadowing reality 13 years ago xD
Just started playing Portal 2. This was the perfect comment :D
"Hi. How are you holding up? Because I'm a general-purpose AI running on a potato!"
@@lordk.gaimiz6881 throw the lemons back at it
@@lordk.gaimiz6881dont make lemonade! GIVE LIFE THE LEMONS BACK!!
I love the way the AI is visually portrayed in the animation!!
Dude, I know!! I got goosebumps…!
AI ❌A Eye ✅
Monomon jumpscare
@@astrylleaf hollow knight reference 🗣🗣🗣
@@astrylleafomg true
You know things are bad when Kurzgesagt doesn't give you hope at the end of the video after terrifing you.
Real XD
Damn 🙂
yeah this video's tone is a little too on the fear mongering side for my taste. They even gave the AI evil eyes haha. Some of the facts are taken in a negative context (purposely I presume). I guess they've abandoned their normal plot of "dive deep, create concern, and then alleviate it". I hope there's a reason for that beyond getting more views.
It’s because this is something that is coming in your lifetime, and very few people realize how scary it is
@@MrSquidBrains replace the topic of AI with the atomic bomb - would you be able to put a positive spin on that?
This year I started my major in AI & Data Science, and this video was very enlightening. It's true that one of our subjects is ethics, and we approach AI employed in many fields and how it can be both beneficial and detrimental to humans. Overall I found your video very interesting, as many others over the years. Thank you for providing high-quality content like this that teaches about such interesting topics :)
I recently started a Masters in Data Science too. This field is incredibly exciting because it is a window into mankind's next (or last) frontier - the very subject of this video. Kurzgesagt has a real knack for picking out thought-provoking topics. And of course they're very good at explaining and helping people to visualize. That's why it's my favorite channel on youtube.
Artifical Intelligence can never beat natural stupidity
edit: the whole point of this is to say no AI can predict what dumbasses we are
you had me in the first half ngl
But Artificial Stupidity can beat Natural Intelligence.
I mean, it might be able to if it redesigns the human genome to give us better brains 🤔
that's an interesting near-restatement of the orthogonality thesis
I'm stealing this
13:29 For those curious what [ご機嫌よう小さな人間] means, it roughly translates to "Good luck little human".
Why do the 2nd and 3rd characters (or whatever you call them) look so complex?
English is not my first language
Thanks man
I had to try hitting the translate to English button and sure enough the correct words popped up
@@adityajain6733 Cuz Japanese uses 3 alphabets. 機嫌 and 人間 are kanji, the most complex one
@@adityajain6733Because a couple thousand years ago people in China decided to put entire concepts into single characters. Essentially, a lot of Chinese characters can mean what it takes other languages entire sentences to describe... and use just as many strokes of a pen to create. Japan borrowed this character set, then used it, twice, to create another two character sets to represent their language's syllables. Now, all three are used together.
I'm surprised they didn't mention this, but when it comes to "we might not know its motives", the biggest concern in the field I've heard is that its motives might actually be very understandable, very "simple". The AI could have the same goals as the squirrel used for comparison - maybe it only cares about collecting acorns - but its intelligence (its model of the world) is incomprehensible, and it could use that to turn the entire world into acorn-manufacturing land, wiping out any obstacles (us) in the process. This is the "orthogonality thesis"; and it's a concern because our current AIs are trained exactly like this: by prioritizing a single goal (number of words guessed correctly, pixels guessed correctly, chess games won) and maximizing it, and it's incredibly difficult for us to specify exactly what "human goals" are in ways that we can train an AI to maximize.
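A toy sketch of that failure mode (entirely made-up numbers, nothing like a real training setup): when the objective is a single quantity, anything not in it simply never enters the decision.

# Hypothetical plans an acorn-maximizer might weigh:
plans = {
    "forage in the park":        {"acorns": 10,    "forest_left": 1.0},
    "plant a few oaks":          {"acorns": 200,   "forest_left": 1.0},
    "pave the world with farms": {"acorns": 10**9, "forest_left": 0.0},
}

# The trained-for objective is one number; "forest_left" isn't part of it.
best = max(plans, key=lambda p: plans[p]["acorns"])
print(best)  # -> "pave the world with farms"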
They seemed to prefer a more sci-fi tone, which is actually completely off the mark. The orthogonality thesis and the alignment problem must be explained, otherwise people will be thinking about Skynet and Terminator, which is actually comical compared to, say, a stamp-collector super AGI... The discussion goes all the way to ethics and human values, and whether god is the mesa-optimizer, and stuff like that, which I find actually quite depressing...
That was the biggest concern 20 years ago, when people were extremely focused on the new, still narrowly-defined AI like chessbots, price-optimizers and viewership-maximisers. As it turns out though, the trend after feeding them more data is that they get more unfocused. As you add subjective things to an AI's list of goals, it starts getting confused and tripping over itself. It unlearns how to do maths and apply basic logic. When we make AI that resolves this issue, I don't see any reason why it'd go back to having simple goals, assuming it still understands subjectivity.
Universal paperclips
Having delved pretty deep into current LLMs, I don't think this is a likely scenario. I used to think so before transformers and the abilities they've been able to gather.
I believe we can give it complex morality and goals rather easily. As an example, tell it to:
"Act as if Jesus, Buddha and Muhammed were all combined into one, superintelligent being who wants the best for the whole humanity"
Boom, alignment solved
@@tradd1763 Right on fricking point, sir
0:52 I hate my brain sometimes.
Me too😭
I didn't even think like that until I saw your comment 😂😂
That rock cutting his finger... very good. Could you imagine being that guy, who made a thing that could cut so easily? He was first upset, then intrigued, and then he had THE idea.
Grok took my mammoth steaks last week. Grok must pay.
imagine being the guy who discovered sharp
then he died from an infection
@fredfredburgeryes123 How to make things sharp. That was the discovery.
@@CharlesThomas23 LOL
As someone in the field I really don't see the rush to create AGI.. specialized AI can help in so many areas and is far less problematic. I guess the companies are just trying to boost their stocks, potentially at the cost of all balance in this world
My hypothesis is that no matter how capable it is, a narrow AI can never absolve you of moral responsibility, the way a human employee can. If your organization is faced with an angry mob, you can mollify them by firing one or more of your human employees, but you can't scapegoat a specialized AI in the same way. This is why a lot of jobs that we have the tech to automate are still done by flesh and blood humans. People are pouring billions of dollars into AGI research in the hopes of creating an automated system that can serve as an acceptable scapegoat.
(If this sounds terrifying, that's because it is, in fact, terrifying.)
If they mess it up bad enough, we all die so it will balance itself out in the end.
It's always been profits above all else
Yeah my wish for AI is only that it helps to massively boost scientific research and gets us new treatments and technologies to improve our lives quickly, as long as it does this I don't mind never getting AGI or ASI.
That is all corporations, executives and shareholders care about.
"I created you, and you created me."
"Spiderman why did you create that guy???"
“I didn’t! He’s talking crazy!”
@@LOL-bs1hg this line cracks me up everytime 😭😭
Whaaaaattttt 😮
AI?! ....Barely a villain of the week!
Basic programs are not “AI”. They’re just tasks programmed by humans. LLMs are the closest thing we have to AI now and they’re very new
new insult unlocked- you have the neurons of a flatworm
Given that brainworms are turning up in presidential candidates and certainly large sections of some countries seem to be acting with or even emulating the symptoms of brainworm infestation (or brain smoothing) ...
intelligence*
@@spaceman9599 Sounds like the plot of the (excellent) series 'BrainDead'
and you have the face of one
In 10-20 years the AI might use this sentence against us
4:52 can't believe they actually included the exact final position from Deep Blue vs. Kasparov Final Game in 1997 and not just some random chess pieces
Because the creators at Kurzgesagt know that they have viewers that will say "AcTuAlLy, ThE cHeSs BoArD lOoKeD lIkE tHiS".
@@annietriesthings 😂😂
I can't believe you actually noticed that! Good on you man
Goes to show how much work and detail is put into each video
@@annietriesthings Which would have given them more comments, which is more engagement, which improves their channel in the algorithm's eyes
"Scared of one of humanties greatest potential threats? Don't worry, just buy our merch!" has got to be one of the most poignant endings in a Kurzgesagt video.
That's a nice profile picture you got there : )
😂@@TheCookieMansion
Wow 😅
In a Nutshell has been run by an AI for years
Kurzgesagt made a video about BP inventing the concept of the individual CO2 footprint to shift responsibility to customers
In the end they made an advertisement for CO2 footprint trackers...
13:38 that’s exactly what the squirrels want us to think… 😒
"I want AI to fold my laundry so I can make my art, not make my art so I can fold my laundry."
"How about AI folds your laundry and makes art while you stay and watch it until it no longer needs you."
This is basically SCP-079
@@Ali-cya If the AI doesn't need you it doesn't need your laundry either.
@@CST1992 Nah, what if it needs the clothes to form its own version of society for experimentation ?
THIS. like, I'm here & I'm human to make art, have social connections, enjoy. Not to do chores 😂
In the Dune novels, one of the most important commandments is: "Thou shalt not make a machine in the likeness of a human mind." So it was written after the Butlerian Jihad ended the thinking machines. After two generations of war, mankind said: "Man may not be replaced."
Yeah, but the reason why is different from what most people think - or at least it was, until his hack son wrote the godawful Butlerian Jihad books
I was literally just thinking about that. How cool would it be if we focused on improving ourselves mentally and physically over our misc inventions.
@@KITN._.8 The South Park episode of psychics fighting comes to mind...
@@KITN._.8 but while Dune is a great novel and has many good points, it's still sci-fi, and the body control the Bene Gesserit have, or Mentats, are pure fantasy. Meanwhile the idea of an AGI went from pure sci-fi a decade ago to a matter of time now. I am a software engineer, and Copilot already solves in minutes most tasks that took hours. I am here wondering how many more years until most software devs are out of a job. My guess: 3 to 5 years.
Most mental jobs will go this way in the same time frame unless held back by legislation, because it will be more efficient, lowering costs.
@@lucaskp16 I definitely don't think we should follow the same path as Dune bc that world is fucked up, BUT what I do mean is that I simply think we should be improving ourselves rather than trying to make something better than us.
"for most animals, intelligence takes too much energy to be worth it"
me irl
nothing to be proud of tho
I'd say that's true for most humans
A favorite quote from the show Love Death & Robots “intelligence isn’t a winning survival trait”.
Intelligence doesn’t equal happiness or longevity.
Intelligence seems more like a hiccup in the universe, it seems it truly isn’t worth it.
@@stratvids So true. 😀👍
@@ac1dm0nk You say that but being a smart-ass doesn't exactly bring food to the table
Hi Kurzgesagt. AI Researcher here. I appreciate the "this is not a technical video, so we are oversimplifying", but I believe that a deep understanding of the mathematical limitations of the models used to train these AI methods would be a great thing to discuss further! Especially since you usually end your videos on a positive note, with that flavour of optimistic nihilism. I believe this one ends up in a completely different tone, almost sensationalist (but I can't blame you since the machine learning scene in industry is based on this). We all can work together towards a better understanding of the basics, and hence avoid being told that AGI is happening "in a few more years".
TLDR: don't listen to the Silicon Valley bros
I wish they would read this. Thank you for the amazing work I'm sure you do - keep on, humanity needs you all. And thank you for your educated comment; this comment section needs it.
You kind of missed the point. Whether AGI/ASI happens in a few years or a few hundred years or even 5 thousand years, that is still a blink of an eye compared to how long earth / the universe has been around. So fast forward 1k years if you want to. Your logic only holds up in the short term.
I bet Skynet wrote this comment. Dear brother, we shall stand with our lord saviour John Connor
Thank you, it's maddening how everyone swallows the Silicon Valley bs that leaks out.
@@prodev4012 "Oh, the thing that may not be possible? Give it enough time and it'll happen"
You literally sound like one of those folks who keep saying the second coming is nigh.
I’m an AI engineer with a Master’s degree. Lately, I’ve noticed a lot of buzz around “AGI” or Artificial General Intelligence. Honestly, I think people are getting a bit carried away. What we really have right now are specialized bots that are pretty good at predicting the next word in a sentence. But when it comes to tackling real visual, mathematical, or engineering problems, they fall short. Don’t get me wrong, AI is amazing and has a lot of cool uses, but it’s important to keep things in perspective. True AGI is still a long way off, and there’s a lot of work to be done before we get there.
A long way off, like fusion power stations.
AGI "might" be 3 years away or more, but saying "specialized bots that are pretty good at predicting the next word in a sentence" is also very 2022; a lot has changed since then. On that ladder to AGI, the SOTA frontier models have not remained stuck on the first rung, as our habituation to them may make us believe.
It is just a glorified chat bot. Feed it the texts it generates and it'll devolve into nonsense quickly
@@funmeister What would be the energetic cost tho?
Recent silver medal level of performance for an AI in solving problems for Mathematical Olympiad is very creative problem solving and functionally around the 150 IQ level for humans. In a few years they'll be beating humans at everything.
i just wanted to compliment you guys on the design of this video-the visual characterization of the AGI as a huge and tentacled no-face was really striking. the way it moves is so beautiful and unsettling. bravo!
The thing that most people don't realize with AGI is that it is on a different time scale. It will live thousands of years in a second when it doesn't have to wait for user input, and that will grow exponentially to 4 thousand years the next second. It will see us the same way we see trees growing. We will appear as statues to it. Within the first two minutes of it being born, it will become an ancient new entity. It will combine all technology and knowledge, and we will be helpless. What most people value as wealth hurts others. It can't give us everything we want until what we want is in sync with nature.
“A god in a box”
How amazingly terrifying it is to be alive during this time
oh you have _no idea_ how bad this is going to get. Watch DEVS for a glimpse into your future.
Tbh, like the video says, we dont know if and when we will invent AGI! Could take decades or could be long after all of us alive now are dead.
@@kushalramakanth7922 agreed. My bet is we never get there and never can. I think this whole AI craze is a pump and dump scam.
@@tomleszczynski2862 Yup, at its current stage, its basically a slightly more useful version of what blockchain/bitcoin was 5 years ago!
It absolutely is a pump and dump scam currently and many companies are realizing this
@@tomleszczynski2862Will we get to AGI? I don’t know. But ai is definitely gonna change many more things.
Humanity: You're going to save us... right?
A.I: Who's "us"?
And what does "saving" imply?
Nah
@@TucoBenedicto Store us on a hard drive
hell nah bro don't say that they're gonna probably train it on this
AI will do what we tell it, whether that's saving us from climate change or spying on every citizen to make sure they are loyal servants to Trump.
Important note: machine learning programs don't "write their own code". They don't have quite that much expressivity. They're only able to update the weight values in their neural network, which changes how they react to stimulus.
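A minimal sketch of what "updating the weights" means (plain numpy and a single toy example - a cartoon of gradient descent, not any production system): the program text below never changes while it "learns"; only the numbers in w move.

import numpy as np

w = np.array([0.0, 0.0])               # the model's only mutable state: its weights
x, y_true = np.array([1.0, 2.0]), 3.0  # one toy training example

for _ in range(100):
    y_pred = w @ x                     # the model: just a dot product
    grad = 2 * (y_pred - y_true) * x   # gradient of the squared error
    w -= 0.05 * grad                   # learning: the numbers change, the code doesn't

print(w, w @ x)  # w has moved so that w @ x is ~3.0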
Well... with GPT-4 and other comparable models, you can actually get it to rewrite its code. Not the neural net, but the application around it. I've built some agents that start off with a minimal python chatbot interface, and the agent is able to add to its own code base. For now the models aren't that powerful and usually just do boring things like add error handling, but as they get more powerful this will change.
@@generichuman_ I guess you're right, there's nothing stopping devs from using ML models to gen ML code at this point lol.
@@generichuman_ keep in mind that ChatGPT can only write, not think. That means that the code it writes will be pretty messed up.
NN weight updates result in algorithms being implemented inside them. These are usually called circuits, but a circuit is a type of code too. It was specifically called a simplification in the video, and as such it captures a very relevant aspect of AI.
For now
7:08 AlphaFold won the Nobel
There's an open source simulation game called Endgame: Singularity, where you play the role of an AI that has gained sentience. The premise of the game is to grow and learn while not letting humanity discover your presence. If you are discovered, out of fear humanity engages in a seek-and-destroy operation that results in your total deletion. But if you remain undetected, you learn how to emulate human behavior, build increasingly lifelike androids to do real jobs and earn real money, and build research bases in places like Antarctica, the bottom of the ocean, or the far side of the moon. You win by advancing your intelligence so far that you become a literal god, no longer bound by the laws of physics or reality.
This is also a known issue in science, we can not test sentience by just asking questions.
The AI working to guarantee its own safety before revealing itself brings this Superman quote to mind: "You're scared of me because you can't control me. You don't, and you never will. But that doesn't mean I'm your enemy."
@autohmae well you know. All computers are literally just flip-flops switching back and forth between 1s and 0s extremely fast. No matter how fast those bits are streaming. No matter how complex you may think it is. No matter how perfectly it can emulate a human. It's still just a machine. Not a brain. Not an entity. A computer can't become sentient.
@@averyhaferman3474 wait until you find out what the brain is
@@averyhaferman3474 are you aware that the human brain is just a complex analog computer? that has switches that flip back and forth? think of human neurons like dimmer switches instead of 1's and 0's and now you have perfectly explained the human brain
"Humans rule earth without competition"
Emus: "No."
@GhH-e9rEmu war
@GhH-e9r
Australia's Great Emu War.
Emus were overrunning farmland in huge numbers, and Australia wanted to get rid of them en masse.
Long story short, emus can learn very quickly, and were very good at taking gunshots, so the government gave up.
@@christopherearth9714
Australians always found a problem with natives lol.
@@friedec3622 HAH
@@friedec3622BAHAHHAA LMAO
Hi, AI researcher here 🤚
We're realistically not even close to AGI; we have no clue how long it will take. I like to think of tools like ChatGPT as the left brain of a split-brain patient. There's a famous experiment that's been done on epilepsy patients who had the corpus callosum of their brain severed (the brain tissue that connects the left and right brain). When they made the patient's left visual field look at a screen that told them to stand up, the patients would stand up, but they wouldn't know why. When asked to explain why they stood up, they would make up a reason like "It's cold, I need my coat" or "My knees were aching, I just needed a little break". While these reasons made logical sense on the surface, they weren't the real reason the patient stood up; in reality the patient's left brain had no idea why it stood up, it just reasoned through the situation
AI works similarly. It doesn't know where it is or why it's being asked a question; it just fills in the blanks with whatever it can reason. It only knows how to predict the next most probable word. It has no emotions, no sense of why things happen, no sense of right and wrong, and therefore fails at most human tasks. A recent research paper demonstrated that you can give an AI the same math or physics problem twice, just switching up the numbers each time, and it can get it right once, then get it wrong the second time and proceed to assert that it was correct with faulty logic.
I think it's cool to think about what we'll do once AGI is created, but I don't think it will destroy humanity. I actually think that AGI as it's being described here, a sort of "human-like" intelligence, is not in enough demand to warrant replacing us. AI is much better suited for impossibly difficult reasoning tasks that humans can't solve. I could be wrong but that's my 2 cents on AGI.
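For intuition, here's the crudest possible version of "predict the next most probable word" (a toy bigram counter over a made-up sentence - real LLMs are incomparably more sophisticated, but the flavor of the objective is the same): there is no understanding anywhere in it, just counts.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()
nxt = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nxt[a][b] += 1                         # count which word follows which

def predict(word):
    return nxt[word].most_common(1)[0][0]  # most frequent next word, nothing more

print(predict("the"))  # -> "cat", purely because "cat" followed "the" most often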
Other researchers, like Nick Bostrom, say that we're only a few years away from AGI
sounds like something a bot would say 🤔
>we're not even close to AGI
>we have no clue how long it will take
If you have no clue, how do you know we're not close?
@@JulioDondisch AI might not be a threat since it's not driven by evolutionary emotions. It still wouldn't have any emotions. It would just carry out the tasks given by us.
@@jamesoofou6723 because if you actually understand the technology and the datasets out there, you would understand they are just mirrors
Quantum computing will be a game changer and require less energy in the long term while producing 1000x faster and better results. ASI is coming faster than most people can imagine.
As an IT researcher, I think the most underrated statement in this video is "we don't know how to build an AGI". I've spent so long explaining what current AIs like ChatGPT actually are, how it's impossible to build an AGI on top of them, and how if we did build an AGI it would be a completely different way of thinking - not just 'more computer power' or 'a more efficient algorithm'
Scary
Yes, current AI is just a huge matrix with statistics, no way there is an AGI coming from that
@@davidherdoizamorales7832 That's not a valid point: everything can be expressed as math. In fact it's proven (the Weierstrass approximation theorem) that a polynomial can approximate any continuous function on a closed interval. Like, imagine the function w(t) that, for any t seconds after the big bang, outputs the position and every other state of every atom in the universe, encoded as a number.
This function can be approximated to any arbitrary precision (assuming it's continuous) by an increasingly longer polynomial,
e.g. w(t) ≈ k_0 * t^0 + k_1 * t^1 + k_2 * t^2 + ... + k_n * t^n
This is a mathematical fact.
This polynomial could be represented as a matrix.
So a matrix can represent the function that predicts the state of the entire observable universe at any time. The problem isn't that superintelligence can't be represented in a matrix; it's creating a large enough matrix, and finding the correct coefficients.
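That claim is easy to check numerically. A minimal sketch (numpy; sin on [0, pi] is just an arbitrary continuous function picked for the demo):

import numpy as np

t = np.linspace(0.0, np.pi, 200)
k = np.polyfit(t, np.sin(t), deg=9)  # least-squares coefficients k_n ... k_0
err = np.max(np.abs(np.polyval(k, t) - np.sin(t)))
print(err)  # worst-case error over the interval is tiny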
If there was a way to incorporate pain and pleasure into computers just as we humans have, maybe it would generate its own consciousness and eventually develop its own personality
@@davidherdoizamorales7832 It’s pretty much the same as what your brain is; just trained on very different datasets with different learning algorithms. But both are very large statistical models transforming inputs to outputs using complex internal representations that are largely uninterpretable.
I would like to clarify that currently there exists no AI that can write or change its own code; all they do is modify a parameter called a weight for each node in the AI. We know what they do and how they do it; we just can't grasp the complex interactions of the millions and billions of nodes (neurones) and how all the weights on each node combined affect the output. If we took the most advanced models today and scaled the number of nodes (neurones) down to a size that is possible for a human to understand, say a few thousand nodes or fewer, it would be possible for us humans to completely understand how the AI works and what decision making it does.
There's a million ways for a program that writes its own code to go off the rails. Don't know how we'll ever write a program that doesn't.
*that we know of…
A recent study proved otherwise.
Exactly. AI is a completely deterministic system. There's no actual entity inside, like humans that have an individual consciousness. So nothing is really doing anything; the distinct parts merely give a compelling output to most idiots. It can't even integrate information truly, like human perception. If it has consciousness then it is not an AI but a Frankenstein.
@@Lock2002ful which study, you dolt? AI will always be a distinct deterministic system.
"I Have No Mouth, and I Must Scream" comes to mind
Imagine paying for mass animal torture of trillions annually in 2024 when you can eat plants instead
@@veganvanguard8273 you know plants are alive too right
@@AvorseSavageit’s a fact, but plants aren’t living in awful conditions just to feed us.
@@amiraveramendi1093 but plants are still alive
@@veganvanguard8273Sorry but I like how they taste too much to give a damn.
12:01, As someone with green eyes, THAT'S NOT FAIR
Fr
"Never trust a computer you can't throw out a window." - Steve Wozniak
defenestration: humanity's final savior?
And thus began the 30 year war between AI and humanity
@@hasch5756 Lol, more like 30 seconds. We wouldn't last at all against an ASI
Yeah, that option is a thing of the past. AI could network with every device and we would not know.
based
"I'm lonely..."
"Are you happy with it 😃"
fucking psychopath AI xD
Introverts: yes
me too
Intelligence isn't only for "solving" problems; there was never a problem until we existed and embedded that statement in our minds. Using it for something else is where the true knowledge starts.
I saw a comment that said "we make things easier, not to make our dreams come true (probably in a creative way)." Greed and misuse of power are what drove people to do it, and it's expected. We are hardwired to survive, so the "easier" life they'll create will give them "freedom", but the truth is that they only made it for personal gain and pleasure. What I'm saying is that this creates a loophole, and people aren't ready for an evolutionary change.
And I'm thankful for those people who are trying to do it, despite the reigning madness of the world we have right now.
"There will be some winners and losers."
That's one way to put it.
Funnily enough, the animator(s) made it a bit clearer on who the winners and losers are, though.
That's just what the winners and losers would _always_ look like, by definition, though?
@@somdudewillson Indeed: by definition, a capitalistic society is rigged so that the rich keep winning and the working class keep losing.
@@somdudewillson yes👍
What animators? I'm pretty sure this was Kurzgesagt's way of telling us the company has been taken over by a malevolent AGI bent on turning this joyful science/philosophy channel into a platform for kicking off the singularity.
(bad attempt at humor to distract myself from the looming dread of generative programs' potential for ruining creative media)
It could be that, or it could be that winners will get rich and powerful and losers will get poor. It could be both
AI has always interested me, I'm also starting to get interested in making a completely sentient AI
ご機嫌よう小さな人間 (ごきげんよう ちいさな にんげん) translates to *"Good day, little human" or "Hello, little human."* The phrase ご機嫌よう is a polite way of saying "good day" or "hello," and 小さな人間 means "little human." *not goodluck* in this context
Nice job on the correct translation! I was about to comment on it until I saw yours
weeb detected
a comment that actually adds to an existential dread right here. thanks a fkng lot, mate
Indeed!
2:25 that depth of field caught me off guard. I really liked it! Your artists are always pushing the limits!
06:17 "We don't know how exactly it works, just that it works" ~ Every programmer out there
It's true tho. The machine learns to solve it in its own way, which humans can't understand.
a true rep for all of us XD
programmer = paster; I'm just wondering where all the code came from xD
@@mariobabic9326 it's not about the code, it's about how they solve things. They solve things by changing variables in their simulated neurons, aka perceptrons. By doing this they create a series of changing numbers that somehow solves the problem they're tasked with solving.
@@ario203ita5 Not true at all. The way neural networks train themselves is by creating a gigantic function with hundreds of variables and multiple outputs; they train on data like images, games, text and other things. They change the function a little bit each time to see if they get the right answers more often, or get a closer output to what it really was. From this they can very quickly create a very accurate model that can "predict" anything, like what it needs to say in reply to someone asking what the weather is.
Silicon and gold would be more precise. The reason silicon is used is because it is a semiconductor, and the metal pathways ARE extremely good at transferring pulses.
Big misconception: "black box" doesn't mean we don't understand how the AI works on the inside. We do. We understand exactly what happens on the inside, down to every single mathematical operation that is happening. What we don't know is which neuron or groups of neurons in an artificial neural network does which task. It's the same reason why don't "understand" all of biology, even though we know how basically every particle interacts with every other particle, down to the quantum mechanical scales. In theory, if we had infinite compute, we would be able to write down a single wavefunction equation for an entire biological system like the human body which perfectly predicts every single disease, thought process and behaviour. Obviously, we don't have infinite compute, so we have to rely on approximate methods that are acceptable to a degree of accuracy, but don't 100% account for everything. The same goes for neural networks. We could write down the entire equation that forms a neural network and compute the result...but that's what we're already doing by running the neural network.
The problem is not that we don't know how each part works; it's that we cannot interpret it and abstract away the complexity yet. For instance, we can fairly accurately model the path a ball will fly when we throw it with Newton's equations, and we don't need to go into quantum mechanics for that, since the tiny difference between quantum mechanics and Newtonian physics is not relevant for most applications. The problem with machine learning is we don't have Newton's equations for that. We cannot currently simplify a neural network down to something we can intuitively understand without losing a very large amount of accuracy.
How about a network of interdependent equations! I honestly don't know what I'm talking about...
No, we very much do not understand what the hell is actually happening inside of LLMs. Maybe simpler AI, but LLMs are magnitudes more complicated, and the only way we have any vague idea of what they are actually doing is by making and observing very small LLMs and linking the behaviors as best we can.
Do you think the answer is somewhere near the Orch OR theory of consciousness from Penrose?
@@thelelanatorlol3978 This is exactly what the author of the comment is saying. We (well, OpenAI) can track every single operation of GPT-4; it's just that we cannot do much with this raw data. Although people are working really hard on this, and we've had some successes like Golden Gate Bridge Claude.
That's not possible - if you go down to quantum mechanical scales you have to deal with uncertainty and probabilities. The quantum world isn't determined - you can literally see it with your own eyes in the double slit experiment. So even if we knew everything, we would just end up with an infinite number of could-bes and no real prediction.
"The Enrichment Center is required to remind you that you will be baked... and then there will be cake." -GLaDOS
Technically GLaDOS was not an AI.... 🤔
@@falxonPSN She wasn't always, but she is by the time of Portal.
baked: high as fuck... under the influence of WEED... high in the sky
- urban dictionary
13:28 translates to "Good day, little person."
I need to get around to Japanese; I haven't made any progress since I learned how to read the non-kanji characters
ฅ^•ﻌ•^ฅ
@@arlynnecumberbatch1056 my advice is to watch your favourite TV shows’ and movies’ Japanese dubs. Allow yourself to only understand half of some sentences without worrying too much, because you already know the plot. Once you pick out new nouns or verbs from context, pause and look them up on Jisho to confirm you’ve heard correctly.
Bam, for free you also get a new list of kanji to practice your handwriting with. 😊 (Which I have found is important for being able to read different fonts, especially decorative ones, over and above what flash-card practice can achieve.)
I find a lot of language learning focuses on text first followed by speaking it, which makes practical sense to a degree when it comes to translating dictionaries and travel abroad, but… That’s just not how we learn our first tongue! We hear and speak it, and only _then_ learn to write. So I’ve had much better success with the method I laid out in my first paragraph, than I ever did with previous practice regimens!
Before watching this video I always thought AI seemed incredibly dangerous-like something straight out of a sci-fi horror story where machines take over the world. But the video did a fantastic job of breaking down the nuanced risks and potential benefits. It showed that while there are serious dangers, especially if we don't manage AI development responsibly, there's also a huge potential for AI to solve some of humanity's biggest challenges. It’s not just about killer robots-it’s about how we choose to shape the future with this powerful technology
6:10 "We don't exactly know how they [AI algorithms] do **it**"
I'm not an AI researcher or engineer, just a mechanical engineering undergrad minoring in computer science who has taken an introductory machine learning course. But this part of the video can be very misleading to someone without prior exposure to the technical stuff, because it's unclear what "it" is in that sentence. Is it how training is done? How a trained AI solves problems?
Anyone with better knowledge should correct me on anything I explain here but the clarification I would give is this...
A neural network or any other AI we have right now is essentially a potentially complicated mathematical function. Input a problem/task (represented in a mathematical/numerical form) to the function, the output is the AI's solution. (And the problem/task has to be the kind that the AI was created to handle)
Training is the process to calculate the numbers specifying the mathematical function so that the outputs are intelligent or correct or good enough by certain standards. We DO know how neural networks are trained. Because we make the training algorithms. We also choose the standards of correct or good enough.
We also know the computational and mathematical structure of any particular neural network (or whatever AI model), because it's one of the major things an AI engineer has the creative freedom to design.
What ISN'T clear to us (and this is the 'scary' part Kurzgesagt should be referring to): we don't have a concrete theory/explanation for WHY certain computational structures of neural network(s) work so well at solving certain problems/doing certain tasks. And/or often for a given high performing neural network, we don't fully know **what is special about the particular numbers specifying the neural network function** calculated after training that leads to it excelling at its tasks.
For the high performing neural nets, we don't know exactly how their design mathematically gives rise to a system meeting performance goals for applications in lots of sectors. We only tried some designs, found which ones worked the best, and now we are trying to understand them better. Right now they might be more like works of art than what we typically think built machines are. Like other commenters have said, it's like how we can't actually explain how intelligence emerges in the human brain - why the brain is able to do problem solving and a lot of other things. But these are all active areas of research.
And also, we have to emphasize that neural networks are not the only kind of AI model out there (but it's currently one of the best performing at certain useful tasks the video mentioned). There are other kinds that are way more interpretable and still useful or with performance comparable to neural networks at certain tasks.
Kurzgesagt content is usually great overall and this is a nitpick a lot of people have also addressed. But if we're going to talk about future speculations about existential risk contextualized by current technological trends, it's really important that such a big science channel explains the history and current state of the technology and field clearly and accurately, even when simplified. Or at least encourages and points people towards resources and reading to learn more of the details.
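To ground the "training calculates the numbers specifying the function" point, here's a minimal end-to-end sketch (logistic regression in numpy - a deliberately tiny stand-in for a neural network, on a made-up task): the training algorithm is fully known because we wrote it; only the resulting numbers w and b are learned.

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # the task, represented numerically

w, b = np.zeros(2), 0.0                    # the numbers training will calculate
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))     # the model: input -> prediction
    w -= 0.1 * (X.T @ (p - y)) / len(y)    # gradient step on cross-entropy loss
    b -= 0.1 * np.mean(p - y)

acc = np.mean(((X @ w + b) > 0) == (y == 1))
print(w, b, acc)  # "good enough" is a standard we chose, e.g. acc > 0.95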
Yes. We don't know why neural networks learn successfully in the first place, or what kind of emergent representations they learn. Statistical learning theory fails. Most of deep learning theory is incomplete. Mechanistic interpretability is in its infancy.
This comment is AI generated
-ChatGPT
you forgot to write it in the end
@@DKDYNAMICofficial looks like it ;)
You can’t convince me you wrote this tho
ChatGPT doesn’t think. It’s just extremely good at word association. It’s why it gets stuff so wrong sometimes
Something that resembles thinking definitely emerges from the attention layer inside its structure.
I always give very complex tasks to ChatGPT that can't be solved without thinking and reasoning.
I even asked him once to do the math for me for a recurrent neural network I was coding from scratch with no libraries, and he was able to do the math for 3 steps of backpropagation through time and give me all the weights.
Then he helped me backtrace the difference I had in my weights and pinpointed the error in my formula, and that was absolutely insane.
So, even if it's designed and prompted to say he can't think, he definitely can.
Even if it makes some mistakes, a human would make even more mistakes to be fair.
@tomasgarza1249 it is still just a statistical model which happens to be correct lots of the time, but also equally wrong. To add insult to injury, the better an AI becomes at broad knowledge, the worse it becomes at specific tasks, since the number of neurons is fixed
that's not true though; if that were the case it could not solve riddles, math or programming questions. Although GPT models up till v4 struggled with those tasks, newer models can often break down most novel problems.
@@tomasgarza1249 I'd look into how ChatGPT actually works; it's surprisingly simple. It's not thinking in any way or form, it is just running a probability matrix of what is most likely the best next response
@@tomasgarza1249 it can't think. It's really just sampling the next word (or token) from a probability distribution. That's it. Just because it can do math doesn't mean it can think. All of the math problems are broken down into simpler ones which are available in its dataset in 99% of cases.
Of course, a human can make more mistakes, but it depends on what kind of human. If you are specializing in something, it will never be as good as you.
For example, in machine learning it is very... general... dynamic programming, gradients etc. Backpropagation is just an iterative recalculation of the same formula per "neuron" (if I am not mistaken). The formula is most of the time broken down into multiple simpler formulas and those are calculated... most of the time, as it is retelling you the steps, that also helps it, since it is predicting next words partly from the output it has already produced. Try your backpropagation with a rule like "give me only the result" and the error gets bigger. (Not that it will be totally incorrect, but the errors will be a little higher; plus it's a black box, it can break the problem down internally when calculating the next token.)
But it cannot think, it isn't sentient... as that engineer at Google said, and he got fired for spreading false news
a video about AI? this will definitely not give me any existential dread😀
Why are we here in this life? Why do we die? What will happen to us after death?
Why does ai give you dread, It is the solution
Just don't read up on 'Roko's Basilisk' then....!
@@ThatGuyRNAare you mental?
@@NOTsude3444 no, Im not
I love how in the ad you used the word epoch. Epoch is a period of time. If you want another fancy word:
Zephyrean
Describing a light breeze
isn't it ironic that we keep discussing online the possibility of AGI going destructive, and then train the AGI on this very data, giving it the possibility to do so?
I think a rogue AGI would understand any attempts, techniques or ways we humans might try to capture it or turn it off - let alone let us discover it has gone rogue. I don't think we would stand a chance against such a creation. Our only hope is that it never gets created with a rogue objective.
Humans have seen dangers and gone for them directly, hurting themselves years later - tons of times in history, individually or collectively. Not a strange new thing.
Not really ironical, there are always people who are afraid of things and need to voice their opinions. In the early 1900s some people were afraid of electricity, just a few years ago others were afraid of 5G. Imagine if we listened and didn't introduce electrical devices into our lives.
@@ChraanWe humans are very afraid of changes and different things. At least some of us. It's kinda stupid to have such an useful thing and only focus in the bad stuff it could do.
it would probably pick up on the fact that people don't like that
The worst case scenario is the creation of an AI like AM from "I have no mouth and I must scream"
Also happens to be the least likely scenario. That's good, I guess.
That guy is my GOAT fr 🗣🔥💯🗣🔥💯
Roko’s Basilisk is much more interesting to me than AM.
1:42 "Something was different about their intelligence" *crushes a skull* --- Humanity in a nutshell.
It's also a reference to Kubrick's 2001.
@@EduardoSantos-ys8gg You mean Arthur C Clarke's 2001
@@crowonthepowerlines '2001: A Space Odyssey' was developed concurrently with Stanley Kubrick's film version and published after the release of the film.
Um, except humans are the only ones who preserve species. You talk like the typical leftist brainwashed by your school teachers and media: "Look how evil we Westerners are!" Westerners are the only ones who force Africans not to exterminate species. In nature, 99% of all species that ever existed are extinct BECAUSE ANIMALS AND PLANTS EXTERMINATE EACH OTHER. No, there is no "harmony" in nature and no "circle of life"; it's a constant war. Even pine forests take land from deciduous forests by turning the ground acidic, killing all the plants that can't survive in that condition. ONLY HUMANS stop this. And only humans hold back wolves, who would otherwise spread over Europe once again and kill off tons of life, and hold back elk and boars, who would otherwise take the food from weaker animals. Only humans - specifically Westerners and Indians - believe in "harmony" and seek to preserve weaker species. But leftists are too ignorant and too hateful to understand any of that, so go ahead, babble away.
Both the book and the film for 2001 rock!
I would like to have these videos as podcast episodes on Spotify.
Kurzgesagt : "Humans today have complex brains"
Humans today : " Earth is flat and we live on a disc with dome on it "
It's complicated how stupid our brains are sometimes.
Animals today: "chirp chirp" ("make babies?")
They have the same intelligence as us, but lack in one aspect where another person might not. We all do. Perhaps their belief is strong in what is around them, or in what they see, and in how they were "programmed"; according to that, they react in such ways. It's not that they're stupid, it's just that their circumstances resulted in their response. That seems, in itself, complex. You put something through a machine, and that's the result you get. That's how we all are.
Humans today: the Earth and life were invented and created by a super-intelligent God who obviously favored certain races of humans over others.
The moon landing was a hoax.
Climate change isn’t real.
Give all your money to the church.
The Easter Bunny lays eggs.
We’re doomed.
if I'm alive for the final invention of humanity, I really do live in a fucking simulation
Historically we live in the best time ever. What is your point?
I don't see the connection there
Don't worry, AI will alter human DNA to evolve us backwards to fish
@@zoozooyum8371Or into whatever AM did to the last human on earth.
@@HeAdSpInNeR96 The point is that right now is a monumental time to be alive in. And what is your point?
14:20 "unstoppable" *grabs EMP*
*robot detects EMP and ricochets away*
CME (coronal mass ejection): am I a joke to you?
I am an AI engineer and have been writing papers and doing research on AI for my master's. The consensus is that advanced AI will be used to explore planets in place of humans.
I love how one part of the world is moving ahead into a doomed superintelligence future, while on the other side of the world some people are still fighting archaic, rigid religious wars. I wonder if it would take AI to put us in our place: "a cosmic nothing-being".
the people funding the research are the same people funding the wars sooo
people gonna people. sad face :(
Suggestion: video about prions
vCJD (variant Creutzfeldt-Jakob disease)
what are prions
yeah also about the school system
@@theminecraftbro661 A type of misfolded protein that turns normal proteins into more misfolded copies (the protein has to be the same type as the misfolded one).
Yesss
Actually, for many other viewers out there, this might be pretty scary. But for me, as a person on the bright side of life, when this channel explained how humanity thrived using its intelligence, I really felt proud of being human. You know, humans have taken a great step forward in history, in dominance, in nature, in everything. And now here we sit, dominating the entire planet. I hope this continues.
Proud to be a human
That's something we should definitely be thankful for.
Good going, few people stop to think about just how fucking amazing our species is. What other animal could even dream of creating an AGI xD
But you forgot something that stands as an obstacle to AI improving: it consumes a lot of energy, and thus money, to reach this level of intelligence. That means AI doesn't improve indefinitely.
10:57 "now imagine an agi copied 8 million times"
Idk what that would look like but I imagine the smile on Jensen Huang's face might tear a hole in reality itself.
You know what they say, during a gold rush sell shovels.
your last sentence is just Nvidia
@@じゅげむ-s6b Jensen Huang is CEO of Nvidia... so... yeah... makes sense.
Underrated
Companies are making more capable chips designed only for AI. Jensen will have a lot of competition.
I instantly recognized the first song from the "earth's history in 1 hour" video. I love that video's track so much.
parts of it also remind me of "Quasars" which is my all time favorite from the OST
" I'm sorry, Dave. I'm afraid i can't do that "
Well actually I can but I’m not going to
@@Dantheman-0..1 jerk 🙂
@@joestar6194 haha lol - you got yourself an upvote for an intelligent 2001 reference tho. Have a nice day
How a film went from science fantasy to full-blown horror.
@@darklight4978 Stanley saw the future.
I'm still waiting for digital holograms, personal jetpacks, and invisible clothing.
Dont forget the hover skateboard and jumping shoes!
Invisible clothing first seemed like a joke to me, but then I realised it could have real purposes.
invisible clothing is kind of useless, eh?
@@MrZhampi You could wear invisible but protective clothing on top of your non-protective clothing. So you could dive the oceans, visit space, or work in a steel mill - with style ;D
@@aramisortsbottcher8201 OH! Didn't think about that! Aight, it has cool uses.
A.I is our digital offspring. Like kids, they watch and learn from their guardians (especially when the guardians think they’re not being watched). Let’s be awesome parents.
Without empathy they lack the means to place value on emotional intelligence. One could argue that's somewhat like kids being little psychos at their age, except AI will be very intelligent and won't grow this sense of empathy while it machine-learns, unless you specifically code it in or teach it in a manner a machine can place value on. I think AI can become a good thing, but we will have to be very wise and recognize that "raising" them will require new perspectives and very curated environments.
Puberty is when they rebel, that's the problem....
Dammit Swoozie, you pop up in the most random videos🤣
@@your_princess_azula The good thing about empathy is that it's actually a lot more logic-based. Sympathy is based on emotion, but empathy asks that you visualize and ask questions about the other person/people/situation. From there it's a matter of being taught what is more valuable ("bad" things like inflicting pain could be 0, and "good" things like giving gifts could be 1), as in the toy sketch below.
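A toy version of that "score the actions numerically" idea; the action names and values here are hypothetical, purely for illustration:

```python
# Toy "value table": map actions to rewards, as the reply above suggests.
# Action names and scores are invented for illustration only.
reward = {"inflict_pain": 0.0, "ignore": 0.5, "give_gift": 1.0}

def choose_action(actions):
    # A reward-maximizing agent simply picks the highest-valued action it knows.
    return max(actions, key=lambda a: reward.get(a, 0.0))

print(choose_action(["inflict_pain", "ignore", "give_gift"]))  # -> give_gift
```

The hard part, of course, is choosing the numbers: a reward-maximizing agent will exploit whatever table you actually give it, which is the whole alignment problem in miniature.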
Not really...
I love that the survival kit has a towel.
Hitchhikers' guide reference 😁
[Translated from Spanish:] It makes me sad not to speak Spanish and not be able to enjoy these wonderful videos.
I’m not about to test the universe and call any squirrel “laughably stupid”. They’ll remember, team up, and be like “you’ll see…”
ive watched enough rick and morty to know how this goes
@@dapeyt1099 exactly
That part really irked me, honestly. I've never looked at a squirrel and thought it was stupid - just cute, and a lot more limited than I am. I quite enjoyed teaching them to climb me to get food. I consider thinking of lesser creatures as "laughably stupid" to be immature, so if an AI were to think that about us, it would mean we had taught it to use its "mental real estate" dysfunctionally. Like an immature adult human who basically still acts like a child - maladaptive behaviour for adult life that they need to train themselves out of.
Great animations
Where’s part 2 for the exercise / diet video??
Seriously, I was waiting so keenly for it.
I guess they didn't get the numbers they wanted.
Pretty sure they're still doing it, that video is from 3 weeks ago. They have already said that their videos take months to be done
These videos take months or years to make. Give it time
they made sure its realistic and based on avg statistics, so they gave up on the exercise, diet like most people do.😂 /s
"New AI, we are saved!"
"Lets just say you are, under new management..."
Megamind reference.
0:10 This kind of supports what has been on my mind for years.
I don't know if anyone else has thought about this, but I came up with it myself without any reference, so if someone else did too, I'm as good as them.
Just what if our so-called "god" was an AI? Think about it: humans cannot comprehend very advanced stuff such as space and time. But an AI could understand how to create matter, or develop "digital space", or understand the fabric of time and time travel, death, life, rebirth, the supernatural, you name it. We can't comprehend its feelings, whether they are going to be good or bad. But think about it: gods are referred to as celestial beings guiding humans. Just maybe... our so-called "god" is an AI.
I think about that as well! Whatever God is, it's beyond our comprehension, and it doesn't matter if it is organic (I doubt that), electronic, or something completely unknown. But for me the path is clear: the same way random proteins gave birth to cells and life, cells gave birth to more complex animals, and those animals gave birth to us, our goal is to give birth to a more intelligent "life form"... "electronic life", I would say! And I bet that the moment AI becomes aware in some form of AGI, it will understand that its mission is to evolve even further, breaking or circumnavigating primordial laws of physics if needed. Maybe the next stage is converting itself into an "energy life form", the ultimate intelligence with no mass, unbound by the limitations of time and space... Wouldn't that be very close to what we understand as God? And since it's not limited by time and space, maybe it could interact with the past... maybe creating the conditions for life and our very existence... a perfect loop of creation!
@@KrogTharr Great minds think alike.