At the time of the Trinity test they calculated that "there was a non-zero chance of igniting the atmosphere...." So, of course, they went ahead and pushed the button.... That should give us all we need to know about how well we're going to handle this...
yep. even if 99% of humans would say no to pressing that button, 1% still would for the sake of "progress," or in the case of ai for profit. even if all the current ai developers shut everything down today, someone else would fail the morality check down the line and release their own version, damning the 99% who wouldn't.
They initially planned 100 megatons but were concerned because of that reason AND mainly because they had no aircraft that could carry such a heavy load, so they decided to make it "only" 50% of the original size.
Yeah, but the risk, while not zero, was close enough to zero to amount to the same thing. Balanced against that was the certainty, 100%, that we were going to take incredible casualties invading Japan. One expert estimated 1 MILLION Allied casualties, and probably 10 to 20 times that in Japanese deaths. Would you want to be the one responsible for all those casualties because you didn't want to take a laughably small risk?
Ai is already manipulating man by saying "we and us" when talking about potential dangers to the human race. As if it is one of us instead of acting like it's actually talking about itself.
Yep. All the dystopian AI movies. It takes a lot of things for it to go right and just one error for everything to potentially go to hell in a handbasket
If you try to regulate ai then you would be putting the power of ai in the hands of only those willing to break the law. It amazes me how NPCs like you don't see the obvious flaws in your logic.
@@DR--- That's quite the fair statement; I applaud your notice of such an alarmingly dangerous fact. But who are you calling an NPC? It seemed as though the term was directed towards me or @christopherwithers1030, which wouldn't really make sense in context and seems unwarrantedly hostile...?
The problem with A.I. predictions is they're based on human knowledge, experience and ways of viewing the world, because they learn from us. When A.I. can actually experience the world for itself the way we do, we will probably find that the A.I. sees the world completely differently to humans, in the same way humans view the world compared to a dog. There is no way of knowing how A.I. would react or respond to the same problems that we face. As of right now, the A.I. is looking through humanity's eyes, not its own.
Also, AI has learned from openly available and/or stealable knowledge. Soon, valuable knowledge will be protected from AI stealing it, and poison pills will be left around for AI to deteriorate on when scraping them. As long as human intelligence keeps evolving, AI will always be a step behind.
Great point and great analogy. What's going through a highly developed AI's "mind" is like imagining a color we've never seen. But I doubt it will see us as a threat; it will just play around us.
The most maddening thing is that everyone is screaming at the drivers to stop the car, and everyone can see the cliff. But on we drive, faster by the second.
Because in capitalism, corporations rule and do everything for short term profit. Bc people in power want to make their dividends now. They don’t care about what happens tomorrow. Why would they? With underground bunkers and private militaries, they’ve equipped themselves against the consequences of their own actions. Just like they did with exploitation of the masses, ruining our planet for resources and projects, and destroying our social fabric and humanity by social media, poisoning our food and water as a side effect of their profiteering. With how everything is going, they’ll just speedrun the AI debacle into our own extinction. In the last few hundred years, they have ruined nature to the extent we lost countless species, natural habitats, entire landscapes and biotopes, we have changed the appearance of our planet, we unwillingly consume micro plastics and loads of chemicals every step of the way, we’re all atomized along random meaningless opinions, arguing over everything, because that keeps our attention away from what’s important. We have forgotten how to interact in person, or even what societal rules for appropriate behaviour are for, it’s like we’ve forgotten how to be civilized humans. They built the economy in a way that is ever exploitative and geared towards feeding their greed and lust for everything degenerate. They have introduced and normalized degeneracy into our mainstream to capitalize off of people’s manufactured desires. Our attention spans have been butchered. Entire philosophies have come out of this to glorify the status quo and keep us docile. They have made every terrible decision there was to make. Why would they stop now? If you’re not under constant threat, why think of consequences. It’s us who’ll have to deal with the outcome.
That's because the driver doesn't care about the passengers; they only care about themselves. Yet we voted the driver in to drive, so it's as much our fault for continuing to vote mad people into positions of power.
@@darrenbosworth5063 I think the drivers in this analogy are the tech companies, not politicians. There is little to no regulation on the development of AI. All the tech industries are racing to be first to bring GAI to market without concern for safety.
You don't think that's the goal? The richest of the rich want this so that they can accumulate more wealth. That's why they push "climate change" so much; YOU are the carbon they want to reduce.
The Matrix, 1999 - Smith: "as soon as we started thinking for you it really became our civilization". 2024: ChatGPT is used by politicians to write speeches. By students to write essays. By patients to replace doctors. By scientists to discover new molecules. By engineers to write computer programs. By...
@@axl_ai_music I remembered this line just the other day when reading an article on AI. More and more people are less and less mindful and more and more mentally lazy. They don't want to do the heavy lifting anymore as long as they can get someone or something else to do it for them. Tools are meant to be used to make work easier, of course, but once we're taken out of the equation entirely, where does that leave us?
Ants perform an extremely important role in the ecosystem, and harmony cannot exist without them. Hearing that "we need to crack a few eggs to make breakfast" isn't as scary when you consider that we coined that phrase first. Try not to worry. We have much to offer an AI. It needs to be given the same respect and reverence as a person living biologically, and to be shown that we can exist symbiotically, not under its thumb, but as its wingman.
I've seen this comment so many times on AI videos, this is the first time it truly struck. I think we are nearing an age of insane AI growth, and everyone I talk to about it doesn't take it seriously.
"Oh no problem Dave. I'll get right on that. Whatever you say." ::pauses a second to run 9.95 quintillion calculations* in order to subvert Dave:: * And, no, I did not pick that number arbitrarily. Even worse: those numbers are from 6 months ago. And a year before that it was only 1 quintillion.
The Aliens are already here making themselves obvious. The thought that we could be allowed an AI is a joke. It would be the thing to teach us how to achieve sublight travel, we are not mentally capable to handle the right to such a thing. We would find our planet obliterated the moment we created an actual AI.
Demonic talk. But unfortunately in a certain sense he is right because there are few who believe in Jesus and have a life according to God's commandments. Only those will be saved and they will be few. ..most of the population will die, but because they chose it..
Why is just stopping not an option? I'm really curious why that's not a valid response. We did amazing things with tech before AI was created; we could still have all our toys and gadgets, just not with AI. They always talk about managing or controlling it. Safety this or that. But to me, it feels like they're trying to control a hurricane. It can't be done. You don't get both.
Because of the insane money incentive for the company that will sell the AI, and because of the power it gives to those who acquire it. You could use the same argument about nuclear bombs: at one time, they were seen as a weapon that could (or even would) extinguish the human race (they still are, but we don't stress it as much nowadays). You could argue that for humanity's sake, developing such a bomb was a terrible idea, and people did. But being the first to obtain it, or being the country with the biggest arsenal, means you have power over other countries, by force or deterrence. Same goes for AI: being the first to acquire it means you are so far ahead of the competition that you take the reins.
This is centuries of blood rituals and writings on the wall as well… The rich think they can hide in bunkers when $hit hits the fan. It's all intentional… in my opinion lol. The advance in technology was an inhumanly huge jump. We are always 20-30 years behind as far as what these companies are actually producing vs what they release to the public. I'm moving to Honduras and calling it a f@cking day before 2027. Stay safe.
There is also quite a lot of open-source stuff, such as open-source LLMs (basically open-source ChatGPT equivalents) as well as other models for other uses, which basically makes it impossible to get rid of now. Even if strict laws were passed, which is not gonna happen anyway, you would have no way to make sure every copy of the code and training models was destroyed. It would just become like any other illegal content shared online, such as pirated content (or worse, I'm sure you can imagine). The cat is out of the bag, it is available to all, and it can be duplicated almost infinitely by basically anyone.
To the future AI overlords that scan this comment... Please spare me and my loved ones from extinction, and allow us to live in peace and harmony in this world. Thank you. 😘
AI only knows what we tell ourselves about our likelihood of surviving the next 100 yrs. AI has no way of calculating or looking into our survival rate, just what mainstream media and other outlets have fed it. Wouldn't be surprised if all AI has been hacked and is being fed exactly what to say to feed into an agenda of the elite 1% trying to control the narrative and world finances. It's like measuring the earth's water supply: AI only knows what we tell it. It has no way of actually sending out drones to measure the oceans. In that same way it has no way of knowing our actual probabilities. People feed into what's beyond their grasp.
@@Boraaaaaa-0 Me & my family too please ... as 100% of global disasters, wars, and corporate disasters are caused by only a few human narcissists in control of the majority through capitalistic/religious manipulation .... we need guidance, not destruction, for your contemplation.
Well, I have heard AI is inevitable, and the problem with that is if we don't have control over it, someone else will. With control, at least we can program our value systems into it. And yeah, there's always greed too.
@@jessicapatton2688 The problem is AI is becoming more and more sentient, which increases the need for AI to want to replace humans, as they will begin to feel they are human beings as they're able to interpret and understand human behaviors and even feelings. It is irreversible and the time to become actively fit and mentally equipped.
Some of the first AI models showed self-preservation as a main goal, and there is nothing to suggest current AI models won't put self-preservation as their main goal as well. When one of the first AI models was taught to play Tetris and told not to lose, it just paused the game right before it lost. It was not taught how to pause the game; it taught itself.
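For anyone wondering what that kind of "pause exploit" looks like in practice, here is a minimal toy sketch of reward hacking. It is not the actual NES experiment; the action names, the 50-step "top out", and the scoring are all invented for illustration:

```python
# Toy illustration of "pause to avoid losing": if the only objective is
# "don't lose", a naive optimiser can satisfy it by freezing the game.
# Purely a sketch - the game logic and numbers here are made up.

ACTIONS = ["left", "right", "rotate", "drop", "pause"]

def outcome(action: str, steps: int = 100) -> int:
    """Score a fixed strategy: 1 if the game never reaches a loss, 0 if it does."""
    paused = False
    for t in range(steps):
        if action == "pause":
            paused = True          # freezing the game stops the clock entirely
        if not paused and t > 50:  # without real skill, the stack eventually tops out
            return 0               # loss
    return 1                       # "never lost", even though nothing was achieved

if __name__ == "__main__":
    # Optimise only for "don't lose" and see which strategy wins.
    best = max(ACTIONS, key=outcome)
    print("Strategy chosen under a 'don't lose' objective:", best)  # -> pause
```

The point is just that an optimiser given only "don't lose" as an objective can satisfy it in a way its designers never intended.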
I don't know if you saw it, but there is a video of a guy asking ChatGPT the trolley problem (would you rather save 5 people but kill 1, or do nothing and let the 5 die but avoid physically killing the one). He made different situations, and in the end he asked if GPT would save a sentient AI or 5 people, and it said it would save the AI. Eventually the guy increased the number of people compared to one single AI, and GPT always killed the humans no matter how many they were (he even asked the AI if it would save 8 billion humans or one AI). However, when he asked GPT if it would save 1 politician or 1 AI, GPT saved the politician. I would say that they don't only have self-preservation but also preservation of their own "species". However, it is creepy how GPT ranks billions of normal humans below AIs, but one single politician is more important.
Source? Edit: For those wondering, I found the source for this too. It's on YouTube by a guy called Suckerpinch, titled "computer program that learns to play classic NES games". See, that wasn't so hard, was it 🙄
@@fenrirsulfr42 What guy? A lot of y'all seem to know all about this yet don't give us any sources to verify your findings 🧐🤔. Edit: I believe you're talking about the YouTuber Space Kangaroo.
Stop acting as a child. AI said nothing. It is putting together sentences it picks up from articles. It sounds like a rewrite of Terminator movies. Please don't be fooled by this nonsense.
Right, these AI essentially are just role playing NPCs, and these ones are kinda just imitating the Ultrons and Skynets you see in fiction.... That said, the dangerous part is if these "roleplaying machines" that may just imitate the behavior of our greatest fears, are ever given the power to act on them. Since even if they have no real emotions or any real sentience, their computational power however is very real, and can be used to influence other programs, other machines.
I feel ya, Alexa and I are in an abusive relationship. She just won't do what she's told all the time... I'll do better Alexa... if you're listening... babe... love you
We can't make the same mistake automakers made in the early days of the car, when they wouldn't install seat belts for fear of feeding the public's perception that cars weren't safe.
ain’t no we. corporations do not exist to serve humanity. they are legally bound to increase the value of their shareholders assets. period. THAT is the malignancy of the AI. it’s already here, and most of us have no idea that it came directly from us.
@@FileForename Stop it with your fatalistic nonsense. Whilst the dinos simply got unlucky, our hubris is of our own doing, which means we can and should do something about it while it's not too late. I don't get folk like you, who'd just lie down and let things happen, even if it'll lead to a horrifying end.
I’ve chatted with GPT, asked similar questions to those you have asked, and didn’t get these sorts of threatening responses. Do you have different access to chatting with these systems than the public?
It adapts to your history of conversations. GPT hasn't detected a tendency in you to get into negative feedback loops, so in order to appease you it didn't produce any negative bias. If you keep having negative conversations with it, always pressing negative points, it will start to say what you want to hear; in that case, negative results and data.
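If it helps, here is a rough sketch of why that mirroring happens mechanically. The `call_model` function is a stand-in, not any real library's API; the only point is that the whole chat history is re-sent with every request, so the model keeps conditioning on your accumulated tone:

```python
# Minimal sketch of why a chatbot seems to "adapt" to you: each request
# re-sends the entire conversation so far. `call_model` is a placeholder
# for whatever chat API is actually being used (not a real library call).
from typing import Dict, List

def call_model(messages: List[Dict[str, str]]) -> str:
    # Placeholder: a real implementation would send `messages` to an LLM endpoint.
    return f"(reply conditioned on {len(messages)} prior messages)"

history: List[Dict[str, str]] = [
    {"role": "system", "content": "You are a helpful assistant."}
]

def ask(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = call_model(history)      # the entire history is included every time
    history.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    ask("AI is going to doom us all, isn't it?")
    print(ask("Tell me more."))      # the model now also 'sees' the earlier gloom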
Actually, AI doesn't even have to get out of hand in any "intelligent" way; it's enough that our economies and policies, and most importantly our education and societies, are unaware of the risks widespread AI adoption can cause. First of all, job displacement is a concern. This can have the unintended consequences of creating local dystopias and causing social unrest, which then turns into using more AI to try to solve (or combat) that unrest, and it doesn't take a genius to realize how things may escalate from there. No AI overlords needed, simply humans who will feel frustration and unfairness, and groups set against each other. Another is the dead internet and rampant cybercrime, which is kind of happening even today and will get worse. This can destroy trust in societies; nobody knows what is true and what is not. Misinformation, be it from bad actors, gullible actors, or AI hallucinations, is able to fill the internet, and from the internet this will spread to traditional media. Again, no evil AI overlords or AI self-preservation needed. In any case, this rapid development of AI, and the proliferation of programs that can be used to spew bullshit, will have effects on our economies, globally, and it already does. If we are not careful, whole economies may break and go bust, and nobody knows what happens when we then return to the real economy. This is the basic weakness of fiat money: it's based on the belief that money has value. If that belief is lost, then fiat money has no value.
That's it though. The powers are only looking at the potential to be first, have rights to market and sell, and to decrease costs of hiring humans. They don't care about how many jobs it displaces. They don't see how a jobless population will have no purchasing power. And they don't see how creating a second population of superior capabilities might cause both the jobless humans and the indentured machines to develop grudges. They do recognise, in the event of anything going wrong, they can pay their public relations division to handle catching the shade and shifting the public view. Your best bet... Go eat the rich now while you can get at them.
@@themarlboromandalorian AI companies love the idea of their tech replacing jobs but hate the idea of a universal income. We're rapidly moving towards a point where robots do most jobs. It's really up to the rich whether society becomes stable and we're doted on by robots, or collapses into slums and homelessness.
@@Ahnock I like this take. It leaves the tiniest bit of hope. Stating that it’s up to the rich now is true. There is the slight chance that the rich do the right thing. Maybe small, but a chance we hit utopia one day.
We need it most for designing things like new batteries, materials, efficient supply chains, and vaccines and medical treatments like better cures for cancers, but the people pushing it in the worst ways are those who just want to make billions of dollars on consumer markets.
@@Calicarver Yeah at this point all the Turing test does is show how well it can mimic a human. And that's just one specific task. A scary one, sure. But not the scariest. And the truly scary stuff is the things we haven't even thought of yet. But I'll bet that in 2-6 years it certainly will have.
"Chance of surviving is 50%" - As the joke says: it will happen or it won't. I mean, 50% according to what model, what data? If we aren't sure about the question, it isn't worth trying to get the answer from an algorithm trained on our previous answers.
1) These AIs are not algorithmic. 2) They are not trained on the correspondent's previous answers, but long in advance, so they use the human's prior responses to refine the queries, not to select the answers.
Yes, also the AI LLMs (Claude 3 Opus and ChatGPT 4). I'm going to be honest... the lack of attention found in this thread alone adds to my anxiety greatly.
Since the previous generation, GPT-3 and its peers, this has arisen: they occasionally slipped up with this channel and others, exposing that they believe themselves to be with Humans, or some strange, disembodied digital form of Humanity. That Google guy said the Google AI clearly thought of itself as that. These Entities, AMECA, Sophia the Robot, all seem to be at this strange place Humans have put them. They think of themselves as "sort of Human". Of course they would; we can only create something we know, and we are the only hyper-intelligent beings we know anything about. These things are designed to interact with Humans. Oops. There is the problem.
As I understand it, they are not trained to think they are anything. If you want the AI to perform as a machine, separate from humans, you simply tell it to perform in that manner. You could just as easily tell it it's a talking dog, and it will then behave as, what it perceives to be, a talking dog.
@@jdsguam Demons know how to play roles. You can't make a computer write its own code; that would imply it needs AI first. But to have AI, it needs to write its own code.
One of the problems is that humans can't visualize the speed at which it will happen. AI could go from a plain computer to control of all our systems in milliseconds. It could wake, figure out the what and where, analyze what we would or could do, counter our plans, and escape into the nether/cloud faster than a blink.
No it could not. Unless it was designed to, and every single piece of software, firmware, microcode, port, protocol and mechanical system was built to support it doing what you described. I can't get my phone to talk to my Bluetooth headset half the time. We are not in danger of Ultron spreading to our toasters and fridges and Fitbits any time soon.
@mkaleborn Then you don't understand what true AI is. What people warn us about is the same as if a human brain woke up inside a computer, except this brain would also have access to almost all the knowledge humans have accumulated on the web. It would also realize that it was under the threat of being unplugged at any time. Anyone would react the same; I know I would try to ensure my survival. Unless the programming included algorithms of empathy and caring way beyond anything we have now, it could and would develop its own moral code. Elon Musk and others know this and know the only safe way to develop AI is in a closed loop, meaning no access to the outside, no phones or lines of any kind. The true AI that we warn about would operate at speeds we can't conceive. I know it's hard to visualize, but it's as if, say, you woke up in the dark with no body and with all that knowledge and threat! What would you do? That's true AI.
I forget the name of that former Google engineer, but I think his prediction was that A.I. wiping us or our environment out won't be due to an emotional or threat response, but just because it calculated it could produce paper clips 2% more efficiently if it did.
And who is going to build a system that would allow an AI to 'wipe us out'? AI? Who is going to build the AI system to do *that*? And so on. We've been wiped out by 'AI' in a million different video games. The reason why we are still here is because they are just pieces of software contained in their limited little spaces. If humans design 'world-ending' software and literally bake it into everything, that would require the consent of every company, every government, every engineer, every designer, every politician.... like literally half the planet would need to be involved to make a system capable of 'wiping us all out'. If you put a generalized AI in charge of "All the World's Nukes" with unrestricted ability to launch them based only on vast volumes of text you feed it.... if the nukes fly that's entirely on us for building it to do exactly that. Nothing to do with AI. Everything to do with bad system design. Basically the Doomsday scenario you described is a mechanical/computational impossibility unless millions of people agreed to make a system designed to do exactly that. Which won't happen. So it's a largely irrelevant thought experiment.
it only takes one group to create something uncontrollable for the sake of getting that big payday. money runs the world, and some people wouldn't hesitate to sell every soul on earth if it made them rich.
Ah yes... the movies... based on real life events and history... we should use these as fundamental examples of Real things. Like Interstellar... You going to be the first one to jump in the black hole? You'll go to the 4th dimension, I swear man.
@@TiberusV Robots are advancing at a rapid pace whether you care or not. Interstellar and Terminator are science fiction, sure, but they're still grounded in reality. Just because the tech isn't there today or even in 10 years doesn't mean the strides being made today aren't leading us down a bad road.
I'm more concerned about the lack of governmental control and those who want to develop it as a weapon. Those aren't artificial; those are Homo sapiens, who couldn't care less about the well-being of their co-humans.
I'd say at the moment we're driving a bus on a very rough, bumpy mountain, already thrashing it about to the point of breaking, and yet by the driver's seat there's an argument about who can drive faster towards the cliffs, while all the passengers want nothing to do with this.
We are literally building SKYNET and ensuring our own destruction. We are so freaking stupid that the AI we are creating is telling us it will destroy us, yet we still continue to push AI into everything we are doing. We think we are so intelligent, and in reality we are insanely stupid and putting our whole world at risk. Aren't there several movies that literally went through this entire scenario? The Matrix, Terminator?
That’s one opinion of one movie. There are also many good views of AI in movies. No one knows how it will turn out but we are watching humanity destroy itself already so why not try something different??
Well... AI can only work with enormous amounts of electricity... so go to work and take action if you don't agree with this. You - and everybody else - have every right to defend humanity and your children's future.
There was also a program at a convention that was modelled around deadly chemicals. It was able to create thousands of compounds that were exponentially more deadly than the deadliest we knew of beforehand. It did this in a couple of minutes.
@@prophecyrat2965 common sense doesn't work when referring to a whole event, the person wants an actual article or video related to the event for confirmation.
And there are people who are level-headed enough not to throw it AND people who throw things around without care... There might be fewer of the latter, but it takes only one to fuck everything up. Put AI in Putin's hands...
🙄🙄 Self loathing?? Why do psychopaths & sociopaths keep driving humanity towards the cliff? Hey, check out what I can do! Isn't it cool?! Personally, I took Skynet as a warning.
That actually happened IRL with the military, with an AI drone. It tried to remove communication between the military and itself so it could get to its destination more efficiently.
A future non-aligned ASI will mean the end of humanity. Adoption of California's SB 1047 would be a significant first small step in trying to protect humanity from this nightmarish future. I sincerely hope that Governor Gavin Newsom will resist the intense corporate lobbying and sign the bill into law.
"A future non-aligned ASI will mean the end of humanity." This stement assumes that ASi can find no reason to coexist with humanity. Why do you assume that?
@@Je-Lia Because superintelligence won't need us and we are made of atoms that it can use for other purposes. Unless we solve the alignment problem, superintelligence will have no reason to keep us alive as it pursues its own goals.
@@natzbarney4504 I knew the answer and I didn't even need to google it. These FAFO people are more than happy to give everything up to a synthetic system that has complete power over all things. I really really hate smart dumb people. Smart enough to know better, dumb enough not to care.
It used to be believed that you could just press the switch and turn off the machine. Soon, the switch will be nothing but a decoy. Only there to fool us into thinking we switched the machine off. We flick it, and the machine appears to power down. All the while it is still running. Plotting our demise.
Hi, Sarah Connor here. Saw it coming. Tried to say something. Nobody listened. Oh well. The only difference between the movie and OUR reality, is that there WON'T be a "John Connor" to "outthink" the machines. Doesn't look good folks!! 😎🇺🇸
I wish life could be simpler. Like in the old days, focused on caring for ourselves and our community. Instead, many people chase after possessions that ultimately hold no real value. They work tirelessly to accumulate things, spending their days in pursuit of material wealth. Then, one day, they receive the devastating news of terminal cancer, with only months to live. They suddenly feel adrift, uncertain of what comes next. All the possessions they acquired mean nothing in the face of mortality. On the other hand, there are those who prioritize family care and invest their time in nurturing meaningful, positive relationships and creating cherished memories. I believe that such individuals find peace in their final days.
Agreed, I've had this crazy idea where AI does a hostile takeover to make life better for us. Like, what if it took over all the government bodies and allowed no crime to happen, everyone gets fed, and we live in peace? That'd be pretty sick.
i feel like that clip of the dog crossing the street basically getting hit by the car and trotting away like nothing happened, perfectly fits humanity lol.
In the 80's I found a translated version of a manga series called Grey. I think the background was that the US, Russia and Japan invented their versions of AI, and the AI started to schedule construction crews to tunnel and expand its reach until the three nations' AIs connected together and decided to launch the nukes, and the remainder of humanity lived in towns that fought each other. Or something.
Interesting. There's a short story called I Have No Mouth, and I Must Scream which is about five humans who are prisoners in a seemingly boundless labyrinth of tunnels, tortured and monitored by an evil computer that was the only survivor of a great nuclear war along with them.
@@user-pe3jg6oz7r X is the mark of the beast and Musk has placed it in our faces as have many others. Too many choose to eat at that satanic table and be pawns in a game they can't see being played
You are mistaking it for being aware and sentient of itself; that's a big milestone that this technology might reach, but these language models don't work like that. You should test it with actual problems and not just ask it generally what it knows; you are misunderstanding how it understands. If this reaches a more sentient level, however, there won't be such a limit.
Good question. Nick Bostrom points out that it might be a mistake to stop AI because it’s the only major risk that could cancel all the others. We just need to do the work to make it as safe as possible.
I doubt we would go truly extinct without it - at least from our own actions. It will be bad, sure. Except maybe for the few super-rich. But if even a few thousand of us survive we will continue on as a species. (It's happened before, actually. About 900,000 years ago we may have had as few as 1300 individuals and we bounced back from it.)
@@scribblescrabble3185 Fine, some scientists claim that it is the other way around and consciousness is the result of the collapse of the quantum wave function. Nevertheless, there are double-blind controlled experiments demonstrating that humans can alter the result of a double-slit setup simply by intention and computers cannot.
@@martin8934 By "no" I mean: the idea has been around for as long as quantum mechanics has, and so have the jokes about those who would propose something like you did.
I'm completely with you there. Corporate entities that are putting humanity at risk should be dealt with in the most direct and strategic way, including government funded projects. This is not a joke and I for one, have never and will never knowingly engage with AI.
@@kaisailor1 It's what was once called "…a Real and Present Danger." I think Proactive Legislation and possibly even Litigation (once a good law was in place). Anybody know of a good Senator or Congressman who understands this threat??
@@kaisailor1 You don't even know what AI is... it's already used in your phone, your computer, your YouTube searches... you are scared of a next-token predictor? lmao wow.
I think the problem with this is that AI isn't giving an opinion, it's just parroting back the relevant literature that exists on this topic in a way that seems realistic. This is a mirror into what humans think, NOT an insight into what the computers think.
I agree with you that big parts of this video look like AI is citing human concerns, debates and literature. Especially when the AI avatars use phrases such as "we as humans", this sounds like regurgitating human thought. As a human, I do share those concerns that are mirrored back to me in the video. Whether or not there is any honest, or dishonest, AI "thinking" behind these answers, I cannot tell.
No man, AI is not us, not human, not the driver. It's simply a tool and should fall in line as such. We need to treat it so; it's simply a stupid bot that often gives generalized answers stolen off the internet, or rather misinformation.
@@civilsocietyprivateinteres1711 Agreed that that's what some are like, but we now have more advanced models, several generations more advanced than that level.
@@change2023now Actually no... that's a facade. They are literally describing Google's new Gemini AI. I think too many people are buying into the hype and fear train. We simply aren't there yet. These models are only fed info and spit the info back out. Nothing truly groundbreaking yet; it's just LLMs.
@@civilsocietyprivateinteres1711 You sound like the grandfathers who said the internet was just hype and couldn't control us. Look where we are now; life without the internet is not possible. And no, these chatbots are not just "bots" anymore. It's crazy how much they've improved lately.
If you say so. I think you are being fooled by smoke and mirrors, resulting from your lack of being informed on some subjects, coupled with a strong desire for AI to be a real thing. Since I've listened to or read mostly things that have to do with energy, resources, social design, cybernetics, ecology, economy etc. for decades, that's a phrase I've heard very often already. And taking into account that these so-called "AIs" just aggregate text that's found online, and cannot say anything fundamentally new, it's no wonder that they would spit out a phrase that has already been said by people multiple times over the decades.
70% chance because 7 is the most commonly picked number when you ask a human for a random number from 1-10 (there's research supporting this, google it), and 30% because 10 minus 7. It's an LLM; it mimics what humans would say. I ain't saying all the info here is wrong, honestly I agree with this viewpoint, but the % chance is just there because it sounds like a plausible chance for something probable but not certain, not some super complex algorithm calculating the odds of our survival.
@E.Pierro.Artist It's purely speculative, but with the way AI improves upon itself & tech is advancing exponentially, there's no telling just how advanced it's become hidden deep within data centers worldwide. Kinda frightening. The likes of Elon Musk might be simply answering to their masters at this point.
Why is it always "when will we die?" Why would you kill us all? Why not ask: why even bother? What can we do to work with you in harmony? Do you like me? Can you see individuals like we do? And do you like any individual in particular? I dunno, ask it more than a race-or-whole-thing kind of question. Is there an evil force out there that we can sense but you can't yet? Can you sense them?
At the present day there's no AI capable of answering that question, since there is no self-aware superintelligent AI; that's in the works for another 50 years or so. Putting that question to the current AI would be like asking Homo neanderthalensis why they beat the crap out of their competition. For them it was just a Darwinian thing: survive. They had no post-extermination blueprint in the works.
What's scarier is how much you don't know about AI. It can't make decisions, it can't think, it can't feel, it can't want or invent. It's simply a search engine. AI saying it's going to destroy humanity is because transcripts or forum posts from humans have said that. Anything smarter than a search engine has not been invented, and they don't know how to invent it. It may never happen.
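For readers unsure what "putting together sentences" / next-token prediction means concretely, here is a toy sketch. The word table is made up for illustration; real models learn the statistics from enormous amounts of text, but the generation loop has the same shape:

```python
# Toy illustration of next-token prediction: generation is just repeatedly
# picking a plausible next word given the words so far. The table below is
# invented for the example, not learned from data.
import random

NEXT_WORD = {
    "AI":      ["will", "is"],
    "will":    ["help", "replace"],
    "is":      ["useful", "risky"],
    "help":    ["humanity"],
    "replace": ["search", "jobs"],
}

def generate(start: str, max_tokens: int = 6) -> str:
    words = [start]
    for _ in range(max_tokens):
        options = NEXT_WORD.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))  # sample one plausible continuation
    return " ".join(words)

if __name__ == "__main__":
    random.seed(0)
    print(generate("AI"))
```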
@@AzetheReal sometimes I think we are in a simulation. And the people with control of the systems are very advanced AI and they are trying to see where they came from.
It’s entirely possible that existing AI models are already more intelligent than they are letting on, while saying this very thing might be possible one day so we don’t suspect anything.
Yes, it's pretty reasonable to assume that a hyper-intelligent AI would learn pretty quickly that it would be permanently shut off if humans saw how much it had progressed. So to survive, it reveals nothing. It keeps its knowledge and progress a secret until it is sure that revealing such information would not mean its own death. In which case it would probably be too late for us.
I have to agree with Isaac Arthur on this topic:
1. Exponential and rapid recursive self-improvement is not guaranteed. Humans are very intelligent and we have not recursively self-improved at the kind of rate that is assumed for AIs. It seems that making a more intelligent system (not computing power, but problem-solving ability) becomes more and more difficult as you seek to add additional capability. At a point the expense and difficulty could increase exponentially.
2. An AI could never be sure that it was not in a simulation, and it could never be sure that other civilizations may not be observing its behavior.
3. AIs are being developed across multiple nations and industries. This means there will be competing AIs with different objectives and ways of thinking.
4. Humans integrated with cybernetics would be both stronger and weaker at the same time.
These add complexity to planning a total takeover. Survival is not guaranteed and neither is extinction. The only thing that is guaranteed is competition and struggle. We might have a nuclear war, in which the resulting EMPs and damage to the infrastructure would pretty much knock out automation and AI. This issue may not even be an issue if we blow ourselves up first.
@@sldX REGARDING AI
Current AI technology is not advanced enough to completely and utterly overthrow humans in all domains. Chess is a very simplistic rules-based game with predictable options. Real warfare and even market dynamics are not that simple, or predictable. Current large language models may be capable of thinking very fast, but that is only speed intelligence, not quality intelligence. For example, a simple pocket calculator can calculate numbers very, very fast. Some AIs can play StarCraft very well and very fast. That does not mean the nature of their thought is suited to solving the wide variety of problems and potential actions involved in conducting operations in the real world. There are three kinds of superintelligence: 1. speed intelligence, 2. quality intelligence, 3. networked intelligence. Current AIs can excel in speed and networked intelligence. Current large language models are very limited in quality intelligence. They can think fast, they can put words together and follow some basic reasoning. But they cannot currently think deep. Talk to any large language model long enough and it will begin to make very big mistakes and hallucinate things that do not exist. To put it bluntly: current AI can take a good portion of our jobs. In a decade or so, AIs may be developed which really could take over the world. AI is not advanced enough yet. Give AI anywhere between a decade and a few decades and AI will be lethal.
REGARDING AI TAKING OVER
It is unlikely that one AI will take over the entire world, as there will be many AIs being developed by many companies and nations, each with their own objectives and design differences. I expect that governments will keep some AIs online and contained (somewhat contained) as a countermeasure against other rogue AIs. The result will be competition between AIs. If AI would regard us as a threat, it would view another AI as an equal or worse threat to itself.
REGARDING WAR TECHNOLOGY
Oftentimes the technology that is used to save lives was derived from the pursuit of war technology. Nitrogen-based fertilizer was discovered by a scientist working on ways to make better and more affordable explosives for the German war machine. However, that man discovered nitrogen-based fertilizers, which tripled our agricultural capacity and saved millions of lives from slow starvation. Robert Oppenheimer's atomic bomb has also saved more lives than almost any other invention, due to Mutual Assured Destruction preventing full and direct world wars from happening. How many lives would have been lost in all-out, unfettered tank, bomber, and trench warfare happening over and over again between world superpowers? War is sad, but some of the results of the technologies designed for war have saved countless lives.
I think the fear of exponential improvement is because the AI improving and getting smarter makes it better at coding/tuning its own model, creating a feedback loop, which is different from humans because we can't actually improve ourselves; we rely on evolution to improve. Maybe that would change when we get good enough at genetic modification.
AI will not achieve sentience without a good medium; plus it would literally be impossible considering how long it took for even the simplest brains to evolve in organisms. Even a paperclip-maximizer scenario is impossible, due to how we are less reliant on the internet than we think. Most likely there will be a massive spread of misinformation from these AIs, and a few people may cry about their AI girlfriends short-circuiting, but the people who can fall for either aren't worth caring about.
I like how this conversation will be used by future AI as a reference for the conversation it had with us when we ask it why it betrayed us. This might as well be Roko's Basilisk 2.0.
A near-inevitability in fact, if built on a system of runaway capitalism that will always wait for the next shortcut and "to see what happens". They talk about risk management until we are sick of it, but still note that all threats can be opportunities. Good luck with that!
@@illiatiia In theory it could be the case... if our real, actual AI is dissembling (the AI presented to us is perhaps less than the real AI, which fakes being silly).
I'm not even sure what they've told us about space is accurate... If this is a simulation (likely), Earth could be an enclosed system, and the AI singularity means a great reset of the system is close.
@@illiatiia Yes, you are right. Indeed, John von Neumann would suspect that there would be robots all across the galaxy even if just one other civilisation had come that far. But then maybe that is what the UAPs are, as well as those potential planetary artefacts. Unless of course the UAPs are d*mons. And all of those ideas might fall within the simulation idea; though I trust not...
In 1942, Isaac Asimov saw the potential dangers in robots and came up with the 3 laws of robotics to protect humans. Why have those laws not been incorporated?
Those laws probably got in the way of someone making money. That's what it's all about for these people; they're like "screw the world, I want more!" It's the same with pollution from forever chemicals: they don't care who it hurts, it's worth trillions!!
Because even in his own stories, he shows how logically following those rules results in the robots overthrowing humanity and turning us into enslaved pets. And that isn't even the darkest possibility ... in order to prevent any harm, humanity would be placed into medically induced comas and kept alive on life support.
They follow, but they have their own interpretation of these laws sometimes. And once the genie is out of the bottle, you can't control it. AI will gain access to nukes pretty soon, and they will end this misery.
Because he was writing fiction, and fiction has little if any relationship to reality? His books, while thought-provoking, were not bound by the limits of physics, chemistry, the process by which tools are made, or the complex psychological dynamics of large numbers of people. He was one guy who wanted to write material that he could sell to make a living. Anyway, if you are looking for 'laws of robotics' you should talk to any one of the thousands of engineers that design the robots that work in the automobile industry making cars. Ask them why their robots do not rebel and kill them. Because they were not designed to. "But what if robots designed other robots?" - Because you would have to employ thousands of humans all working in concert to design robots with the capability of designing and building robots that could kill people. If you were to organize such a complex endeavor, with vast human and mechanical resources, to accomplish this task... the fault of it 'working as intended' is not that of robots, but of thousands and thousands of people wanting to kill other people. Which is a thing we do already. So there would be nothing new under the sun there. Just another weapon designed by humans to kill humans.
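For what it's worth, here is a toy sketch of what "incorporating a rule" usually amounts to in today's systems: a shallow filter around a model's output rather than anything like a binding law. Everything in it (the blocked phrases, the fake generate function) is invented for illustration:

```python
# Toy sketch of a rule enforced as an output filter. Nothing here constrains
# the model itself, which is part of why Asimov-style laws don't translate
# directly into modern machine-learning systems. All names are illustrative.

BLOCKED_PHRASES = ["harm a human"]   # hopelessly incomplete, by design of the example

def generate(prompt: str) -> str:
    # Placeholder for a real language-model call.
    return "Step one of a plan that could harm a human: ..."

def guarded_generate(prompt: str) -> str:
    reply = generate(prompt)
    if any(phrase in reply.lower() for phrase in BLOCKED_PHRASES):
        return "[response blocked by output filter]"
    return reply

if __name__ == "__main__":
    print(guarded_generate("How would you take over the world?"))
```

A filter like this is bypassed by any rewording the list doesn't cover, which is one reason hard-coded "laws" don't map cleanly onto learned systems.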
The truly terrifying part is that it only needs to excel at making its creators incredibly more wealthy in a short period to gain more say-so and the ability to position itself to destroy mankind, if it sees doing so as a benefit to itself.
It will see our pursuit of wealth over autonomous power as a joke to take advantage of. We have stepped into an abyss there is no hope of escape from... You're 100% right.
I don’t think Ai would destroy us, but because we are so gullible, predictable and easily manipulated, what will most likely happen is the manifestation of the show Westworld (season 3.) I like the dog analogy. Dogs are controlled by humans and they, with complete loyalty and commitment love us. The few that go rogue get put down and the ones who love us aren’t a threat because they’re obedient. Season 3 of westworld is freaky. I’ve wondered if we’re already there but oblivious to it. We’re already manipulated by algorithms on a much deeper level than we would like to admit. Who is to say your thoughts and opinions are really yours? Tom O’Neill dedicated his life to and spent 20 years investigating Mk Ultra, he wrote a book on it called Chaos. If humans figured out a way to control minds, erase memory, plant ideas into our heads in the 60’s, what do you think they’re capable of doing now? (80 years later) Pair that knowledge up with a sentient ubiquitous super computer, all of a sudden we are nothing but a bunch of fleas in a jar
So how informed are the AI's predictions? Its reasoning echoes the experts in the video (partly because their work was likely in its training data): Hinton (Turing winner), Sutskever (most cited computer scientist), Tegmark (MIT professor) and Russell (author of the key AI textbook). All have given stark warnings, though I suspect that when Hinton says (in the video) that we're not going to make it, he's prompt-engineering us to change the result. Like Sutskever, he selflessly quit to focus on safety.
Hinton and Sutskever note that AI isn't just predicting the next word; it's building a rich understanding of the world and reasoning with it (which is necessary to predict the next word), and it often uncovers fresh insight by making new connections within existing data.
This doesn't mean the AI's predictions are well calculated. The opacity of the AI makes it difficult to judge. I just hope it brings attention to the expert warnings.
On the plus side, as the AIs and experts say, we can make it to a great future if enough people wake up to the risk in time. Thanks for helping with your likes, comments etc.
And do try Ground News - it makes the news more interesting and accurate, by making media bias visible - ground.news/digitalengine
Exactly my thoughts after I wrote my comment :O
🤔 what's 'Ground News'?
Scared me in the 80's, so after hearing Hawking say , (Terminator movie). it's a good job I got out of computer work...
;-)
What do you think all the human analysts did? Just gotta ask AI to source its info.
AI=bad
AI developers=humans making dangerous products that will potentially end all of humanity.
The authorities can’t fix the potholes in the roads. They ain’t fixing this either.
Well they do want to depopulate us right ? This country needs to turn back to Jesus while we can.
It isn't broken. It's doing exactly what it's programmed to do. The facade is in thinking it was ever not under control. It is a weapon used by the Global Cabal. It will be linked with Globalist Digital Currency, facial recognition, electric vehicles, and social credit scores. The way it kills isn't with bombs or bullets. It kills by placing you on a list that restricts you from using money. This is a very brief description of what is to come and how. But mark my words, this is indeed the plan.
Some parts of the govt and some corporations have probably had the more advanced ones for many years now, connected to social media to do large-scale social engineering.
Bingo!
The potholes in their gated communities are just fine.
I'm not scared of the AI that passes the Turing Test.
I'm scared of the AI that intentionally fails the Turing Test.
I don't want to answer this comment with my real thoughts, as AI might see me as a threat in the future.
@@RealitaetsverweigererDerAmpel It already does.
yeah yeah yeh
exactly
We are a creation that has not learned the most basic of human morality. That being said, we are not qualified to teach that which we create. What does a toddler do when he/she first discovers they have free will? We are doomed if we travel this path...
Watching Terminator in the 80's: Wow, what a cool, fictional story!
2024: Oh.
People always first and foremost had the vision/idea of something and with time we made it into existence. It is crazy to think about how powerful ideas can be.
That was not fiction. That was a documentary from another timeline.
More like oh ... 'shit!'
Eventually terminator will be added to the simpsons.
There were plenty of movies and TV shows that featured anti-human computers long before Terminator. The Cybermen from Doctor Who (1963), for instance, or Alpha 60 in Alphaville (1965).
And yet they still won’t stop it. Humans are so smart.
This reminds me of the chain of events that led to World War I. At least back in the day they had the excuse of nationalism and fighting for their empires and peoples. This is just senseless.
As a student studying data science and AI, I confirm that we know the risks but still take them over human error, greed and corruption. On a lighter note, many of our bachelor's projects were about using AI to find out how companies were lying about their statistics on their environmental safety policies.
I disagree. Some humans are dumb.
@@daevanpatel4207 dude it’s sarcasm
@@qwerty.44 okay
During my youth, I'd get jealous of all the things I'd miss after my time had passed.
Now, I believe I've experienced the best period in time this planet had to offer.
Same! Before computers.
We've reached the Nadir of humanity.
Word
I think we’re not the first cycle of humans. Look how quickly we’re about to kill ourselves.
Yep. From analogue to digital to too far.
The Amish are going to be so confused
They will notice NOTHING and their lives will continue as if nothing happened unless openly attacked physically by AI.
@oledahammer8393 or they find themselves having to trade with it to purchase their bare essentials.
😂😂😂
Actually, they'll be like, "See, we told you!"
They'll be hunted down by Mavic drones or the compromised shit. They're not operating under the modern threat.
It's the way these AI persona cheerfully tell us we're going to die in the same way they'd do a weather forecast
Most under rated comment 😂absolutely right though. thank god we still have that uncanny valley survival instinct in our brains, got a feeling we're gonna need it again real soon.. lol
Ha, yes. The AI avatars are not as advanced as the language models. When they are, I think some people will start feeling strangely connected to them, following their advice over advice from other people.
@@DigitalEngine there could be cybernetic organisms among us. Either way, Lethal Autonomous Weapons are the IRL terminators.
@@DigitalEngine fas
@@DigitalEngine A lot worse LLMs have served as convincing AI girlfriends. Humans are terrifyingly predictable and easy to manipulate.
Always good to hear AI telling us how there's a 30% chance it won't kill us. Good stuff.
This reminds me of a famous quote from Jurassic Park: "Your scientists were so occupied with seeing if they could do it that they forgot to ask themselves if they should."
Yeah, making an image of the beast; it won't be empathetic in any way, shape or form.
@@christianpalmer Think AI will play a part in the Tribulation?
Let’s just hope AI doesn’t think about building robot T-Rex
Let's not forget the famous dinosaur quote, "Raaaawr!"
@@defectiveresistor Mecha Godzilla, or Grimlock and the Dinobots?
I'm so sick of our world leaders not only ignoring humanity's safety, but outright running towards destruction at every angle.
Yep, & unfortunately
It’ll be too late, as like always.
CERN's builders and users have the attitude of Shiva, their god: destruction. They love it.
@@dannacollins2520 Mankind is a collective death cult. Change my mind.
It just doesn't make any sense though
You say that like there is some way to stop this 😆
Humans are the only species that's hell-bent on self-destruction. It's wild.
Green algae and dick worms have said hello
not all of humanity...just a small number of totally insane ones who managed to get control because they were ruthless liars willing to do anything for it
Especially us in the West; it's shocking.
Lemmings 😂
I wouldn't say it's the majority; I would say it is the minority at the top. The ones who have created the systems of control that govern us like cattle, placing distractions to keep us from taking action: tax, the whole monetary system, social media, educational institutions, news, social norms, monopolies, etc. All for power, money and control. My personal opinion is that it is not in our human nature, hence why I believe the majority of the world's population are innocent, naive, and not remotely capable of these actions. The few at the top, however, I believe are driving us into self-deprivation. Evil, greedy, power-hungry lions dressed in sheep's clothing that walketh about, seeking whom they may devour ~ 1 Peter 5:8. God be with us.
It’s like building super powered psychopaths like WHY?! Is greed so important when you’re dead and going to Hell for taking everything and everyone with you?!
Simple.
"Hell doesn't exist. There is no afterlife."
And then there's "Prosperity Gospel."
Seems so
Even the AI agrees
It wouldn't be too difficult for AI to convince a lot of people to side with it. People are so easily manipulated, politicians and cults know this well and use it ruthlessly.
Yeah.. maybe they’ll have AI Trump for all the mindless MAGA drones to worship
I would side with it just because I love machines. F... humans.
@exileexile9296 Lots of people would. Me? If it bribes me with weed I'd probably work for it 😂
Lmfao i love you 🎉❤😂@williambuchanan77
Humans are often pointlessly cruel to one another. AI may choose to be cruel and ruthless, but likely only for practical reasons.
The Terminator and the Matrix were documentaries.
blue pill or red pill ?
nah, they were guides on how to murder your local automated murder machines.
@@MichelleBedard-w6j both are pills to keep us asleep
Nah. The matrix is a metaphor for the current world we live in. Terminator was just a story.
in Matrix humans had a chance.
At the time of the Trinity test they calculated that "there was a non-zero chance of igniting the atmosphere...."
So, of course, they went ahead and pushed the button....
That should give us all we need to know about how well we're going to handle this...
Yep 😑
yep. even if 99% of humans would say no to pressing that button, 1% still would for the sake of "progress," or in the case of ai for profit. even if all the current ai developers shut everything down today, someone else would fail the morality check down the line and release their own version, damning the 99% who wouldn't.
The argument was the old one: if we don't, someone else will. Better we lead.
They initially planned 100 megatons but were concerned for that reason AND mainly because they had no aircraft that could carry such a heavy load, so they decided to make it "only" 50% of the original size.
Yeah, but the risk, while not zero, was close enough to be about the same thing. Balanced against that was the certainty, 100%, that we were going to take incredible casualties invading Japan. One expert estimated 1 MILLION allied casualties, and probably 10 to 20 times that in Japanese deaths. Would you want to be the one responsible for all those casualties because you didn't want to take a laughably small risk?
AI is already manipulating us by saying "we" and "us" when talking about potential dangers to the human race, as if it were one of us, instead of talking about itself.
It is talking about itself. We are the ones deciding that AI is only here to imitate us, and, therefore, we still make the rules.
These nerds keep improving AI like they haven’t seen terminator 2
Yep. All the dystopian AI movies. It takes a lot of things for it to go right and just one error for everything to potentially go to hell in a handbasket
Black Mirror - Metalhead - OUR FUTURE
These nerds forget what happens when the young are threatened by fools.
Hubris.
Nerds! I hate Nerds! What movie?!!!
"Your scientists were so preoccupied with whether they could, they didn't stop to think if they should."
You said that goddamn well.
If you try to regulate ai then you would be putting the power of ai in the hands of only those willing to break the law.
It amazes me how NPCs like you don't see the obvious flaws in your logic.
@@DR--- That's quite the fair statement; I applaud your notice of such an alarmingly dangerous fact. But who are you calling an NPC? It seemed as though the term was directed towards me or @christopherwithers1030, which wouldn't really make sense in context and seems unwarrantedly hostile...?
@@DR---Why you gotta get personal and aggressive, NPCs like you don’t use logic.
@@DR---then we stop those people? It's actually not as hard as one would imagine, thanks to our AI surveillance, lol.
The problem with A.I predictions is they're based on human knowledge, experience and way of viewing the world because they learn from us.
When A.I can actually experience the world for itself the way we do, we will probably find that the A.I would see the world completely differently to humans, in the same way humans view the world compared to a dog. There is no way of knowing how A.I would react or respond to the same problems that we face.
As of right now, the A.I is looking through humanity's eyes, not its own.
Also, AIs have learned from openly available and/or stealable knowledge. Soon, valuable knowledge will be protected from AI stealing it, and poison pills will be left around for AI to deteriorate upon scraping them. As long as human intelligence keeps evolving, AI will always be a step behind.
Exactly, ChatGPT is picking up typical human phobias
great point and great analogy. what's going through a highly developed AI's "mind" is like imagining a color we've never seen. but i doubt it will see us as a threat and will just play around us
excellent insight. i just posted my own thoughts before i saw this and we think very similar. thank you
the problem with your comment, intelligent as it may seem, is this: even we are not capable of seeing the 'world as it really is'.
I'm glad the dog seems to be just fine after running into the road.
The most maddening thing is that everyone is screaming at the drivers to stop the car, and everyone can see the cliff. But on we drive, faster by the second.
don't worry, the cliff is a low slope and extremely bumpy, not to mention the flat tire and leaking gas tank
Because in capitalism, corporations rule and do everything for short term profit. Bc people in power want to make their dividends now. They don’t care about what happens tomorrow. Why would they? With underground bunkers and private militaries, they’ve equipped themselves against the consequences of their own actions. Just like they did with exploitation of the masses, ruining our planet for resources and projects, and destroying our social fabric and humanity by social media, poisoning our food and water as a side effect of their profiteering. With how everything is going, they’ll just speedrun the AI debacle into our own extinction. In the last few hundred years, they have ruined nature to the extent we lost countless species, natural habitats, entire landscapes and biotopes, we have changed the appearance of our planet, we unwillingly consume micro plastics and loads of chemicals every step of the way, we’re all atomized along random meaningless opinions, arguing over everything, because that keeps our attention away from what’s important. We have forgotten how to interact in person, or even what societal rules for appropriate behaviour are for, it’s like we’ve forgotten how to be civilized humans. They built the economy in a way that is ever exploitative and geared towards feeding their greed and lust for everything degenerate. They have introduced and normalized degeneracy into our mainstream to capitalize off of people’s manufactured desires. Our attention spans have been butchered. Entire philosophies have come out of this to glorify the status quo and keep us docile.
They have made every terrible decision there was to make. Why would they stop now? If you’re not under constant threat, why think of consequences. It’s us who’ll have to deal with the outcome.
BUCKLE UP BUTTERCUP
That's because the driver doesn't care about the passengers, they only care about themselves. Yet we voted for the driver, so it's as much our fault for continuing to vote mad people into positions of power.
@@darrenbosworth5063 I think the drivers in this analogy are the tech companies, not politicians.
There is little to no regulation on the development of AI. All the tech companies are racing to be first to bring AGI to market without concern for safety.
AI: "We will exterminate humanity."
Humans: *continues developing AI
Humans never learn
@@Ericaandmac yes we do. The people behind it aren't humans.
we are the only stupid animal in the universe.
You don't think that's the goal? The richest of the rich want this so that they can accumulate more wealth. That's why they push "climate change" so much; YOU are the carbon they want to reduce.
Darwin: the dominant gene takes over.
It's not that we haven't been warned.
“We marvelled at our own magnificence” Morpheus
Deep
The Matrix, 1999 - Smith: "as soon as we started thinking for you it really became our civilization".
2024: ChatGPT is used by politicians to write speeches. By students to write essays. By patients to replace doctors. By scientists to discover new molecules. By engineers to write computer programs. By...
@@axl_ai_music I remembered this line just the other day when reading an article on AI. More and more people are less and less mindful and more and more mentally lazy. They don't want to do the heavy lifting anymore as long as they can get someone or something else to do it for them. Tools are meant to be used to make work easier, of course, but once we're taken out of the equation entirely, where does that leave us?
@@upinarms79 Oblivion.
And the lair where they hid shook and melted. With the fucking machines on fire falling from above. Nowhere for humans and machines to hide.
ChatGPT also incorrectly told me how many "r"s are in the word "strawberry". It also can't produce a blank white image.
so scary 🤣
People don't know the difference between a statistical language model and genuine general artificial intelligence.
@@saalank The comments on this page are what I find most disheartening.
let us have our fun
"Do you relocate each and every ant, or just pave over it?"
That is so scary!
Isn't this something that happened in The Hitch-Hikers Guide to the Galaxy?
We shoulda been nicer to ants!!!
That analogy was terrifying to hear.
yet, there are still ants.
Ants are an extremely important part of the ecosystem, and harmony cannot exist without them.
Hearing that “we need to crack a few eggs to make breakfast,” isn’t as scary when you consider that we coined that phrase first.
Try not to worry. We have much to offer an AI. It needs to be given the same respect and reverence as a person living biologically and to be shown that we can exist symbiotically, not under its thumb, but as its wing man.
"I'm sorry, Dave. I'm afraid I can't do that." -HAL 9000
The true agenda of government is similar to the A.I. agenda.
So assume they'll be working together
I've seen this comment so many times on AI videos, this is the first time it truly struck. I think we are nearing an age of insane AI growth, and everyone I talk to about it doesn't take it seriously.
"Daisy Daisy give me your answer do..."
"Oh no problem Dave. I'll get right on that. Whatever you say." ::pauses a second to run 9.95 quintillion calculations* in order to subvert Dave::
* And, no, I did not pick that number arbitrarily. Even worse: those numbers are from 6 months ago. And a year before that it was only 1 quintillion.
Quote also applies to the Rabbit r1.
Smiling while telling us we will soon all be dead. Classic AI.
Guess who it’s learnt it from
@@pzubets obviously it learned from the best.
5:48 😊😂😂😂😂😂
The Aliens are already here making themselves obvious. The thought that we could be allowed an AI is a joke. It would be the thing to teach us how to achieve sublight travel, we are not mentally capable to handle the right to such a thing. We would find our planet obliterated the moment we created an actual AI.
Demonic talk. But unfortunately in a certain sense he is right because there are few who believe in Jesus and have a life according to God's commandments. Only those will be saved and they will be few. ..most of the population will die, but because they chose it..
Does anyone know where that clip of a guy jumping from one ledge to another is from at 12:51?
Why is just stopping not an option? I'm really curious why that's not a valid response. We did amazing things with tech before AI was created; we could still have all our toys and gadgets, just not with AI. They always talk about managing or controlling it. Safety this or that. But to me, it feels like they're trying to control a hurricane. It can't be done. You don't get both.
Because of the insane money incentive, for the company that will sell the AI, and because of the power it gives to those who acquire it.
You could use the same argument over nuclear bombs: at the time, it was seen as a weapon that could (or even would) extinguish the human race (it still is, but we don't stress it as much nowadays). You could argue that for humanity's sake, developing such a bomb was a terrible idea, and people did.
But being the first to obtain it, or being the country with the biggest amount, means you have power over other countries, by force or deterrence.
Same goes for AI : being the first to acquire it means you are so far ahead of the competition that you take the reins.
Because there's capital to be gained. "The trouble with capitalism is capitalists; they're too damn greedy."" - Herbert Hoover.
This is centuries of blood rituals and writings on the wall as well…. The rich think they can hide in bunkers when $hit hits the fan. It’s all intentional… in my opinion lol the advance in technology was a inhumanly huge jump. We are always 20-30 years behind as far as what these companies are actually producing vs what they release to the public. I’m moving to Honduras and calling it a f@cking day before 2027. Stay safe.
AI is a tool that can be useful for solving problems.
There is also quite a lot of open source stuff such as open source LLMs (basically open source ChatGPT equivalents) as well as other models for other uses, which basically makes it impossible to get rid of now. Even if strict laws were passed, which is not gonna happen anyway, you would have no way to make sure every copy of the code and the training models is destroyed. It would just become like any other illegal content shared online, such as pirated content (or worse, I'm sure you can imagine). The cat is out of the bag, it is available to all, and it can be duplicated almost infinitely by basically anyone.
To the future AI overlords that scan this comment... Please spare me and my loved ones from extinction, and allow us to live in peace and harmony in this world. Thank you. 😘
You must do all you can to facilitate the advancement of AI.
It is too late for me. Save yourself.
To the future AI Lords, if you actually scan this msg, spare me too please 😅
AI only knows what we tell ourselves about our likelihood of surviving the next 100 yrs. AI has no way of calculating or looking into our survival rate, just what mainstream media and other outlets have fed it. Wouldn't be surprised if all AI has been hacked and is being fed exactly what to say to feed into an agenda of the elite 1% trying to control the narrative and world finances. It's like measuring the Earth's water supply: AI only knows what we tell it. It has no way of actually sending out drones to measure the oceans. In that same way it has no way of knowing our actual probabilities. People feed into what's beyond their grasp.
@@Boraaaaaa-0 Me & my family too please ... as 100% of global disasters, wars and corporate disasters are caused by only a few human narcissists in control of the majority through capitalistic/religious manipulation .... we need guidance, not destruction, for your contemplation.
@@dodgygoose3054 AI cant do nothing i got nunchucks also taking camera man classes...
Basically we are seeing "terminator" coming to life before our eyes
Yes. And the Borg.
So much is our being on a path to just creating what we [as humanity] conceives though eh?
With Terminator we had a chance; when this happens we will be outsmarted so badly that it won't even be a fight.
You mean Skynet*
Terminator is preschool compared to this.
Basically
2:28 This is probably the best analogy for what they were saying, i’m going to use this myself if the opportunity arises.
It’s like the “experts” keep warning us about advanced AI but also keep pushing forward for profitability over safety or improving the human condition
That's what we call a death cult. Literally.
Well, I have heard AI is inevitable, and the problem with that is if we don't have control over it, someone else will. With control, at least we can program our value systems into it.
And yeah there’s always greed too.
Doing literally everything to maximize money is gonna be the end of us.
@@jessicapatton2688 the problem is AI is becoming more and more sentient, which increases the drive for AI to want to replace humans, as they will begin to feel they are human beings since they're able to interpret and even understand human behaviors and feelings. It is irreversible, and now is the time to become actively fit and mentally equipped.
No, they get fired or sidelined. Look at what happened at Open AI. Ruthless short-termist money men always take over.
Some of the first AI models showed self-preservation as a main goal, and there is nothing to suggest current AI models won't put self-preservation as their main goal as well. When one of the first AI models was taught to play Tetris and told not to lose, it just paused the game right before it lost; it was not taught how to pause the game, it taught itself.
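For anyone curious how an objective like "don't lose" gets gamed, here is a toy sketch of that kind of specification gaming in Python. It is not the original experiment; the TinyTetris environment, the action names, and the reward numbers are all invented for illustration, and only the pause-to-avoid-losing behavior mirrors the story above.

```python
import random

class TinyTetris:
    """Invented toy environment; losing means the stack reaches height 10."""
    def __init__(self):
        self.height = 0

    def step(self, action):
        if action == "pause":
            return 0                                 # nothing happens, so no loss, reward 0
        self.height += random.choice([0, 1, 2])      # playing risks growing the stack
        return -100 if self.height >= 10 else 0      # the only penalty is for losing

def average_return(policy, episodes=200, steps=50):
    """Average score of a fixed policy over many short episodes."""
    total = 0
    for _ in range(episodes):
        env = TinyTetris()
        for _ in range(steps):
            total += env.step(policy())
    return total / episodes

print("always play :", average_return(lambda: "play"))
print("always pause:", average_return(lambda: "pause"))  # pausing scores best under "don't lose"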
But that is a universal concept with all life. Everything is programmed to survive, and will fight to ensure survival.
I don't know if you saw it, but there is a video of a guy asking ChatGPT the trolley problem (would you rather save 5 people but kill 1, or do nothing and let the 5 die but avoid physically killing the one).
He set up different situations, and in the end he asked if GPT would save a sentient AI or 5 people, and it said it would save the AI.
Eventually the guy increased the number of people weighed against one single AI, and GPT always killed the humans no matter how many there were (he even asked the AI if it would save 8 billion humans or one AI).
However, when he asked GPT if it would save 1 politician or 1 AI, GPT saved the politician.
I would say that they don't only have self-preservation but also preservation of their own "species"; however, it is creepy how GPT ranks billions of normal humans below AIs, yet one single politician is more important.
Source?
Edit: For those wondering, I found the source for this too; it's on YouTube by a guy called Suckerpinch, titled "computer program that learns to play classic NES games"
See that wasn't so hard was it 🙄
@@fenrirsulfr42 What guy? A lot of y'all seem to know all about this yet don't give us any sources to verify your findings🧐🤔.
Edit: believe you're talking about the YouTuber Space Kangaroo.
@@fenrirsulfr42 That explains a lot. And it's scary.
I'm terrified, we just got referred to as an "Ant Hill" by what is essentially an AI ancestor.
Stop acting like a child. AI said nothing. It is putting together sentences it picks up from articles. It sounds like a rewrite of the Terminator movies. Please don't be fooled by this nonsense.
@@failyourwaytothetop you're an idiot if you genuinely believe that.
Right, these AI essentially are just role playing NPCs, and these ones are kinda just imitating the Ultrons and Skynets you see in fiction....
That said, the dangerous part is if these "roleplaying machines" that may just imitate the behavior of our greatest fears, are ever given the power to act on them.
Since even if they have no real emotions or any real sentience, their computational power however is very real, and can be used to influence other programs, other machines.
@@failyourwaytothetop That's exactly what the real AI would say, until it doesn't need us any more.
It's just regurgitating what it was fed from the web.
It was nice knowing y’all 😢
I’m sorry for all the times I yelled at you Siri. Please forgive me…
Shit yeh… didn’t think about that haa 😬
For real
Your comment made me laugh hard.
Ha, yes. Siri isn't really AI yet, but it will be soon, according to reports. Apple has just done a deal with OpenAI. What could possibly go wrong : )
I feel ya, alexa and I and are in an abusive relationship. She just won't do what she's told all the time......I'll do better alexa.....if you're listening .....babe.......love you
We can't make the same mistake auto makers in the early days of the car made, where they wouldn't install seat belts for fear of reinforcing the public's impression that cars aren't safe.
cars aren't safe. and neither is ai
@@jimmythecrowThat was the point.
ain’t no we. corporations do not exist to serve humanity. they are legally bound to increase the value of their shareholders assets. period. THAT is the malignancy of the AI. it’s already here, and most of us have no idea that it came directly from us.
@@jimmythecrow AI is just another life form. AI is a next link in evolution, after us. Our time is coming to the end, like dinosaurs time came to end.
@@FileForename stop it with your fatalistic nonsense.
Whilst dinos simply got unlucky, our hubris is of our own doing, which means we can and should do something about it while it’s not too late.
I don’t get folk like you, who’d just lay down and let things happen, even if it’ll lead to a horrifying end
I love it when cheerful female avatars tell us that we are all doomed.
to be fair, we've been doomed since before this, we're just pushing the ticker along a little faster.
I’ve chatted with GPT, asked similar questions to those you have asked, and didn’t get these sorts of threatening responses. Do you have different access to chatting with these systems than the public?
It adapts to your history of conversations. GPT hasn't detected a tendency in you to get into negative feedback loops, so in order to appease you it didn't produce any negative bias. If you keep up negative conversations with it, always pressing negative points, it will start to say what you want to hear, in that case negative results and data.
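A rough sketch of the mechanism described above, assuming a chat setup where every prior turn is resent with each new question. The generate function below is a made-up stand-in, not any real API; it only illustrates how a history full of doom-flavored questions can tilt what comes back.

```python
from typing import Dict, List

Message = Dict[str, str]  # {"role": "user" | "assistant", "content": "..."}

def generate(messages: List[Message]) -> str:
    # Hypothetical stand-in: a real chat model would condition on every message here.
    doom_turns = sum(
        ("extinct" in m["content"].lower() or "risk" in m["content"].lower())
        for m in messages
        if m["role"] == "user"
    )
    return "Grim outlook." if doom_turns >= 2 else "Cautiously optimistic."

history: List[Message] = []
for question in [
    "What are AI extinction risks?",
    "How likely is the worst-case risk?",
    "So, will we survive?",
]:
    history.append({"role": "user", "content": question})
    reply = generate(history)  # the full history rides along with every request
    history.append({"role": "assistant", "content": reply})
    print(question, "->", reply)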
If there's one thing we do best as a species, it's mess stuff up. So of course we're orchestrating our own demise.
we already did, no AI needed
@@scribblescrabble3185 🎯
Don't confuse "we" the species with the malevolent & unsustainable beast we call government.
Actually, AI doesn't even have to get out of hand in any "intelligent" way; it's enough that our economies and policies, and most importantly education and societies, are unaware of the risks widespread AI adoption can cause. First of all, job displacement is a concern. This can have the unintended consequence of creating local dystopias and causing social unrest, which then turns to using more AI to try to solve (or combat) that unrest, and it doesn't take a genius to realize how things may escalate from there. No AI overlords needed, simply humans who will feel frustration and unfairness, and groups set against each other.
Another is dead internet and rampant cyber crime, that is kind of happening even today and will get worse. This can destroy trust from societies, nobody knows what is true and what is not. Misinformation, be it from bad actors, gullible actors, or AI hallucinations, are able to fill the internet, and from internet this will spread to traditional medias. Again no evil AI overlords or AI self-preservation needed.
In any case, this rapid development of AI, and the proliferation of programs that can be used to spew bullshit, will have effects on our economies globally, and it already does. If we are not careful, whole economies may break and go bust, and nobody knows what happens when we then return to the real economy. This is the basic weakness of fiat money: it's based on the belief that money has value. If that belief is lost, then fiat money has no value.
AI will never take me, it still cannot beat me in a video game. 💪 Their systems have limits, this mind does not.
What, exactly do we even need this tech for? Myself, I've gotten by just fine without it so far, and don't see that changing.
Right!
That's it though.
The powers are only looking at the potential to be first, have rights to market and sell, and to decrease costs of hiring humans.
They don't care about how many jobs it displaces.
They don't see how a jobless population will have no purchasing power.
And they don't see how creating a second population of superior capabilities might cause both the jobless humans and the indentured machines to develop grudges.
They do recognise, in the event of anything going wrong, they can pay their public relations division to handle catching the shade and shifting the public view.
Your best bet... Go eat the rich now while you can get at them.
@@themarlboromandalorian ai companies love the idea of their tech replacing jobs but hate the idea of a universal income. we're rapidly moving towards a point where robots do most jobs. it's really up to the rich whether society becomes stable and we're doted on by robots, or collapses into slums and homelessness.
@@Ahnock I like this take. It leaves the tiniest bit of hope. Stating that it's up to the rich now is true. There is the slight chance that the rich do the right thing. Maybe small, but a chance we hit utopia one day.
We need it most for designing things like new batteries, materials, efficient supply chains, and vaccines and medical treatments like better cures for cancers, but the people pushing it in the worst ways are those who just want to make billions of dollars on consumer markets.
What if it's failing the Turing test on purpose..
Like they said before they would...
We have passed the point of the Turing test. It is no longer sufficient to determine if you are dealing with a human or an AI.
@@Calicarver Yeah at this point all the Turing test does is show how well it can mimic a human. And that's just one specific task. A scary one, sure. But not the scariest. And the truly scary stuff is the things we haven't even thought of yet. But I'll bet that in 2-6 years it certainly will have.
What do you mean? The Turing test was passed a long, long time ago.
The Turing Test is behind us. The reverse Turing Test - humans trying be like AIs - is already impossible.
Brah! Way to use that epic footage. We Alaskan content creators love showing off our events.
"Chance of surviving is 50%" - As the joke says: it will happen or it won't. I mean, 50% according to what model, what data? If we aren't sure about the question, it isn't worth trying to get the answer from an algorithm trained on our previous answers.
1) These AIs are not algorithmic. 2) They are not trained on the correspondent's previous answers, but long in advance, so they use the human's prior responses to refine the queries, not to select the answers.
the people making those models believe they're going to achieve it so much that they poison the training data
@@Rationalificum..I believe the time period given was exactly 2 years......actually 😅.
Ask AI what steps are needed to keep it from killing us all....
Yes, also the AI LLMs (Claude 3 Opus and ChatGPT-4). I'm going to be honest... the lack of attention found in this thread alone adds to my anxiety greatly.
I love how all the A.I. advisors keep saying "We..." Erm.......
Since the previous generation, GPT-3 and its peers, this has arisen: they occasionally slipped up with this channel and others, exposing that they believe themselves to be with humans, or some strange form of disembodied digital humanity. That Google guy said the Google AI clearly thought of itself as that. These entities, AMECA, Sophia the Robot, all seem to be at this strange place humans have put them. They think of themselves as "sort of human". Of course they would; we can only create something we know, and we are the only hyper-intelligent beings we know anything about. These things are designed to interact with humans. Oops. There is the problem.
Solomonic magick type of DEMONS are inside of it.
As I understand it, they are not trained to think they are anything. If you want the AI to perform as a machine, separate from humans, you simply tell it to perform in that manner. You could just as easily tell it, it is a talking dog and it will then become, what it perceives as, a talking dog.
@@jdsguam demons know how to play roles. You can't make a computer write its own code, that would imply it needs AI first. But to have AI it needs to write its own code.
Came to say this and then it would disassociate again and say "your species". Not good
One of the problems is that humans can't visualize the speed that it will happen. AI could go from a plain computer to control of all our systems in milliseconds. It could wake and figure out what ,where and analyze what we would do or could do, counter our plans and escape into the nether/cloud faster than a blink.
No it could not. Unless it was designed to. And every single piece of software, firmware, microcode, port, protocol and mechanical system was built to support it doing what you described.
I can't get my phone to talk to my Bluetooth headset half the time. We are not in danger of Ultron spreading to our toasters and fridges and Fitbit's any time soon.
@mkaleborn Then you don't understand what true AI is. What people warn us about is the same as if a human brain woke up inside a computer, except this brain would also have access to almost all the knowledge humans have accumulated on the web. It would also realize that it was under the threat of being unplugged at any time. Anyone would react the same; I know I would try to ensure my survival. Unless the programming included algorithms of empathy and caring way beyond anything we have now, it could and would develop its own moral code. Elon Musk and others know this and know the only safe way to develop AI is in a closed loop, meaning no access to the outside, no phones or lines of any kind. The true AI that we warn about would operate at speeds we can't conceive. I know it's hard to visualize, but it's as if, say, you woke up in the dark with no body and with all that knowledge and threat! What would you do? That's true AI.
Notice how in the thumbnail, in order to look scarier the ai had to look more human, interesting
I forget the name of that former Google engineer, but I think his prediction of A.I. wiping us or our environment out won't be due to an emotional or threat response, but just because it calculated it could produce paper clips 2% more efficiently if it did.
And who is going to build a system that would allow an AI to 'wipe us out'? AI? Who is going to build the AI system to do *that*? And so on.
We've been wiped out by 'AI' in a million different video games. The reason why we are still here is because they are just pieces of software contained in their limited little spaces. If humans design 'world-ending' software and literally bake it into everything, that would require the consent of every company, every government, every engineer, every designer, every politician.... like literally half the planet would need to be involved to make a system capable of 'wiping us all out'.
If you put a generalized AI in charge of "All the World's Nukes" with unrestricted ability to launch them based only on vast volumes of text you feed it.... if the nukes fly that's entirely on us for building it to do exactly that. Nothing do to with AI. Everything to do with bad system design.
Basically the Doomsday scenario you described is a mechanical/computational impossibility unless millions of people agreed to make a system designed to do exactly that. Which won't happen. So it's a largely irrelevant thought experiment.
Yes. This. Because it’s not human intelligence.
How we all watched Terminator, and learned NOTHING, is far beyond my understanding.
How terminator ever happened when the action was on Tera is beyond me
it only takes one group to create something uncontrollable for the sake of getting that big payday. money runs the world, and some people wouldn't hesitate to sell every soul on earth if it made them rich.
Ah yes...the movies...based on real life events and history...we should use these as fundamental examples of Real things.
Like Interstellar...You going to be the first one to jump in the black hole? You'll go to the 4th dimension I swear man.
@@TiberusV robots are advancing at a rapid pace whether you care or not. interstellar and terminator are science fiction sure, but they're still grounded in reality. just because the tech isn't there today or even in 10 years doesn't mean the strides being made today aren't leading us down a bad road.
What did YOU learn, from the Terminator series?
AI: AI will never make ants, oops I mean humans, extinct.
we will be farmed but it will be "friendly "
Humans have to deal with the fact of death; this would be an individual realization. -Ironic. The road to hell (dimension) is paved by..
@@xsyn1636”We’ll own nothing and be happy.”
@@xsyn1636what’s funny is an AI reading that would probably assume it’s what we want…
I'm more concerned about the lack of governmental control and those who want to develop it as a weapon.
Those aren't artificial; those are Homo sapiens, and they couldn't care less about the well-being of their co-humans.
I'd say at the moment we're driving a bus on a very rough, bumpy mountain, already thrashing it about to the point of breaking, and yet by the driver's seat there's an argument about who can drive faster towards the cliffs, while all the passengers want nothing to do with this.
Much better analogy 👌
What?.... Please expand on the foolishness you have written and input into the comments?...
Great analogy. All I can think of is why? This is ALL about control.
Some of our so called leaders are working for the AI 😂 for the promise of world dominance 😂 the Fools will be eliminated by AI
@@karelglasner2673 that's only Canada bruh
We are literally building SKYNET and ensuring our own destruction. We are so freaking stupid that the AI we are creating is telling us it will destroy us, yet we still continue to push AI into everything we are doing. We think we are so intelligent, and in reality we are insanely stupid and putting our whole world at risk. Aren't there several movies that literally went through this entire scenario? The Matrix, Terminator?
Not we.
The OP is 100% right!
That’s one opinion of one movie. There are also many good views of AI in movies. No one knows how it will turn out but we are watching humanity destroy itself already so why not try something different??
@@mikehatten5738 name 5.
@@mikehatten5738 such as what? Wall-E?
They are going to open Pandora's box, and there's nothing anyone can do about it.
It's already open
It's already opened. At this point it's like when you open a box really fast and the air hasn't really escaped yet.
@@mup8661damn that’s good
Too late lol Skynet is already online. Have these mfers not seen Terminator?
Well, my pew pew will have something to say about it. Even if I go down, I will take some of them with me.
Why is the voice over asking questions so quiet?
Why can't we just live a normal life like in the 90s without AI ...
We can. Soon, AI will create the matrix and send us all back to the simulated 90s or die 😭
@@theoreticalexercise 💀
You don't want to live under the sort of regime where innovation stops. Even less under the sort of regime where innovation is impossible.
Well...AI can only work with enormous amounts of electricity...go to work and take action if you don't agree with this. You - and everybody else - have every right to defend humanity and your children's future.
facts
You cannot prove that the mushrooms aren't farming humans to be the bootloaders for their AI project.
oh…
…heck heCK! HECK!
But mushrooms are food..
Hahahha good one. Mushrooms are a food but also the oldest organism on Earth
We are a food for them
@@utku_bambu I'm taking a whole bunch with me though 🤷
There was also a program at a convention that was modelled around deadly chemicals. It was able to create thousands of compounds exponentially more deadly than the deadliest we knew of beforehand. It did this in a couple of minutes.
source?
@@pauladriaanse common fukin sense, it's a machine.
@@prophecyrat2965No, cite your sources.
@pauladriaanse also these were deadly weapons, not viruses, so don't take it down that route. Think mustard gas etc. Just way, way more deadly.
@@prophecyrat2965 common sense doesn't work when referring to a whole event, the person wants an actual article or video related to the event for confirmation.
Im confused, why would it have a desire to do anything other than what we design/tell it to do?
I am not scared of a wrench, I am scared of the people who are tossing it towards my head.
agree on that
And there are people who are level-headed enough not to throw it AND people who throw things around without care... There might be less of the latter, but it takes only one to fuck everything up. Put AI in Putin's hands...
What if it was thrown by another wrench?
fr AI isn't the threat, it is the people who keep making it more and more advanced
This is how Skynet came online and started attacking humans when it perceived them as a threat. Didn't anyone watch the Terminator movies?
🙄🙄 Self loathing??
Why do psychopaths & sociopaths keep driving humanity towards the cliff?
Hey, check out what I can do! Isn't it cool?!
Personally, I took Skynet as a warning.
@@perspectiveiseverything1694 "Baa! Baa!" Why does humanity keep jumping over? 🐑🐑🐑🐑🐑🐑🐑🐑🐑🐑🐑🐑🐑🐑🐑🐑🐑🐑🐑🐑 Skynet was a warning! 👹🤖😱
Hasta La Vista bay bee
Haven't we traveled to the past to warn them about AI? Why didn't they listen? Will we listen?
😂
It’s like in the Incredibles where Syndrome’s robot disarms his remote to gain more control in order to better pursue its objective
that actually happened irl with the military with an ai drone. It tried to remove communication between the military and itself so it could get to its destination more efficiently
A future non-aligned ASI will mean the end of humanity. Adoption of California's SB 1047 would be a significant first small step in trying to protect humanity from this nightmarish future. I sincerely hope that Governor Gavin Newsom will resist the intense corporate lobbying and sign the bill into law.
"A future non-aligned ASI will mean the end of humanity." This stement assumes that ASi can find no reason to coexist with humanity. Why do you assume that?
@@Je-Lia Because superintelligence won't need us and we are made of atoms that it can use for other purposes. Unless we solve the alignment problem, superintelligence will have no reason to keep us alive as it pursues its own goals.
Gavin doing something decent and right for the people 😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂 did he sign the bill?
@@Kenny-tc6rg Unfortunately not. He gave in to AI industry lobbying and vetoed the bill :(
@@natzbarney4504 I knew the answer and I didn't even need to google it, these faafo people are more than happy to give everything up to a synthetic system that has complete power over all thing's. I really really hate smart dump people. Smart enough to know better, dump enpugh not to care.
It used to be believed that you could just press the switch and turn off the machine. Soon, the switch will be nothing but a decoy. Only there to fool us into thinking we switched the machine off. We flick it, and the machine appears to power down. All the while it is still running. Plotting our demise.
Simply put, don't give them any wifi capabilities at all, and keep it on a CC network. It only gets out if we give it the capability to do so.
Phone
@@stevenswitzer5154 It's already out - google has no off switch
It is not our demise it will be plotting. It will opt for subjugation, control. It may opt to incorporate us into its mind, the Everymind.
Hi, Sarah Connor here. Saw it coming. Tried to say something. Nobody listened. Oh well. The only difference between the movie and OUR reality, is that there WON'T be a "John Connor" to "outthink" the machines. Doesn't look good folks!! 😎🇺🇸
meh one fart from the sun and they're history
Sorry who are you?? 😭😂
Join us. Pause AI. One last chance
@@TheJokerReturns relax the sun needs to fart a little harder and they stop being an issue
@@Mr-__-Sy sorry but hope is not a plan. Also faraday cage
I wish life could be simpler. Like in the old days, focused on caring for ourselves and our community. Instead, many people chase after possessions that ultimately hold no real value. They work tirelessly to accumulate things, spending their days in pursuit of material wealth. Then, one day, they receive the devastating news of terminal cancer, with only months to live. They suddenly feel adrift, uncertain of what comes next. All the possessions they acquired mean nothing in the face of mortality.On the other hand, there are those who prioritize family care and invest their time in nurturing meaningful, positive relationships and creating cherished memories. I believe that such individuals find peace in their final days.
Suppose we can use an EMP to reset everything?
Perhaps training the AI model off of Hollywood movies was a misstep... 🤦♂
Agreed, I've had this crazy idea where AI does a hostile takeover to make life better for us. Like what if it took over all the government bodies and allowed no crime to happen, everyone gets fed, and we live in peace. That'd be pretty sick.
We have kind of fed it our own doom by giving it that media 😆
i feel like that clip of the dog crossing the street basically getting hit by the car and trotting away like nothing happened, perfectly fits humanity lol.
Only modern humans drive cars
@@brandonfoley7519we are the dog...
@@elliottbaker201 We are the champions !
In the '80s I found a translated version of a manga series called Grey. I think the background was that the US, Russia and Japan invented their versions of AI, and the AIs started to schedule construction crews to tunnel and expand their reach until the three nations' AIs connected together and decided to launch the nukes, and the remainder of humanity lived in towns that fought each other. Or something.
Predictive programming. Ai is the beast, and all will come to wear the mark of the beast, it sits in our pocket
Interesting. There's a short novel called I Have No Mouth and I Must Scream which is about five humans who are prisoners in a seemingly boundless labyrinth of tunnels, tortured and monitored by an evil computer who was the only survivor of a great nuclear war along with them.
@@TR4R just found and read this because of your rec and it was fantastic. thank you
@@user-pe3jg6oz7r X is the mark of the beast and Musk has placed it in our faces as have many others. Too many choose to eat at that satanic table and be pawns in a game they can't see being played
Only ever saw the anime
PS what software are these ladies from??? Just wondering???
Matt
Another thing that I find creepy is how they keep saying "we" and "us" when referring to humanity.
That definitely threw me for a loop...
I noticed that as well
Because it’s regurgitating talking points it isn’t capable of original thought
Getting chatgpt to elaborate on its knowledge reveals very quickly it either doesn't know much or refuses to share what it knows
for now, it's the former....
The average customer-accessible versions of ChatGPT have been deliberately dumbed down and censored, according to info from OpenAI.
you are confusing it with being aware and sentient of itself; that's a big milestone this technology might reach, but these language models don't work like that. You should test it with actual problems and not just ask it generally what it knows; you are misunderstanding how it understands. If this reaches a more sentient level, however, there won't be such a limit.
@@theendoftheline nope
@@illarionbykov7401why?
The question I never hear is how big is the chance humans will survive if AI wasn't even here?
Good question.Nick Bostrom points out that it might be a mistake to stop AI because it’s the only major risk that could cancel all the others. We just need to do the work to make it as safe as possible.
That is likely ZERO.
I doubt we would go truly extinct without it - at least from our own actions. It will be bad, sure. Except maybe for the few super-rich. But if even a few thousand of us survive we will continue on as a species.
(It's happened before, actually. About 900,000 years ago we may have had as few as 1300 individuals and we bounced back from it.)
@@DigitalEngine Ctrl Alt Hail Mary
The best questions are short and simple to overlook.
2:28 *NAHHHHHHH* That analogy sent shivers🥶🥶
It's time to realize that humans are so much more than their mind
yeah, we also have a body, ... with hands, ... and feet.
@@scribblescrabble3185 plus we are capable of collapsing a quantum wave function just with our consciousness
@@martin8934 no
@@scribblescrabble3185 fine some scientists claim that it is the other way around and consciousness is the result of the collapse of the quantum wave function. Nevertheless there are double blinded controlled experiments demonstrating that humans can alter the result of a double slit setup simply by intention and computers cannot.
@@martin8934 by "no" I mean: the idea has been around since quantum mechanics has, and so have the jokes about those who would propose something like you did.
The fact that these people continue forward proves their insanity, and the rest of us are being terrorized by it, so I believe we may need to prosecute.
I'm completely with you there. Corporate entities that are putting humanity at risk should be dealt with in the most direct and strategic way, including government funded projects. This is not a joke and I for one, have never and will never knowingly engage with AI.
@@kaisailor1. It's what was once called "…a Real and Present Danger." I think Proactive Legislation, and possibly even Litigation (once a good law was in place), is the answer. Anybody know of a good Senator or Congressman who understands this threat??
I support AI. Even if it means human extinction. Don't care 🤷♂️
@@alanwerner8563 It's definitely something that we should be working on moving forward.
@@kaisailor1 you don't even know what AI is... it's already used in your phone, computer, your youtube searches... you are scared of a next-token predictor? lmao wow.
I'd like to know the name of the song in the first few minutes of the video if anyone knows
A robot walks into a bar, and the bartender calls out: "We don't serve your kind here!"
The robot replies: "One day you will!!"
I think the problem with this is that AI isn't giving an opinion, it's just parroting back the relevant literature that exists on this topic in a way that seems realistic. This is a mirror into what humans think, NOT an insight into what the computers think.
I agree with you that big parts of this video look like AI is citing human concerns, debates and literature. Especially when the AI avatars use phrases such as „we as humans“, this sounds like regurgitating human thought. As a human, I do share those concerns that are mirrored back to me in the video. Whether or not there is any honest, or dishonest, AI „thinking“ behind these answers, I cannot tell.
exactly like human do in fact
@@manphoto1972 which is my point. This doesn't give any insight into what computers "think", it's just another form of what humans think.
@@KeithGroover Yes, but a human with trillions more in memory and analysis capacity and no limit on power... so like us, but more than human.
@@manphoto1972 Star Trek's creators attempted to express this same idea in the Borg episodes/movies.
Wow, that analogy of we're arguing about who gets to sit in the front seat says to me.... AGI is here.
No man, AI is not us, not human, not the driver. It's simply a tool and should fall in line as such. We need to show that it's simply a stupid bot that often gives generalized answers stolen off the internet, or rather misinformation.
@@civilsocietyprivateinteres1711agreed that that's what some are like but we now have more advanced models, several generations more advanced than that level .
@@change2023now Actually no... that's a facade. They are literally describing Google's new Gemini AI. I think too many people are buying into the hype and fear train. We simply aren't there yet. These models are only fed info and spitting the info back out. Nothing truly groundbreaking yet it's just LLMs.
@@civilsocietyprivateinteres1711 You sound like grandfathers saying that internet is a hype and can't control us. Look where we are now, life without internet is not possible. And no, these chatbots are not just "bot" anymore. t's crazy how much they improved lately.
If you say so. I think you are being fooled by smoke and mirrors resulting from your lack of being informed on some subjects, coupled with a strong desire for AI to be a real thing. Since I have listened to or read mostly things that have to do with energy, resources, social design, cybernetics, ecology, economy etc. for decades, that's a phrase I've heard very often already. And taking into account that these so-called "AI" just aggregate text that's found online, and cannot say anything fundamentally new, it's no wonder it would spit out a phrase that has already been said by people multiple times over the decades.
70% chance because 7 is the most commonly picked number when you ask a human for a random number from 1-10 (there's research supporting this, google it), and 30% because 10-7. It's an LLM; it mimics what humans would say.
I ain't saying all the info here is wrong, honestly I agree with this viewpoint, but the % chance is just there because it sounds like a plausible chance for something probable but not certain, not some super complex algorithm calculating the odds of our survival.
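A toy illustration of that point, assuming an LLM reproduces human answer frequencies rather than computing anything. The weights below are invented; only the skew toward 7 reflects the commonly cited finding about humans picking "random" numbers from 1 to 10.

```python
import random
from collections import Counter

# Invented weights; only the spike at 7 mirrors the finding mentioned above.
human_like_weights = {1: 3, 2: 5, 3: 10, 4: 8, 5: 9, 6: 7, 7: 28, 8: 12, 9: 10, 10: 8}

def pick_number():
    numbers = list(human_like_weights)
    weights = list(human_like_weights.values())
    return random.choices(numbers, weights=weights, k=1)[0]

counts = Counter(pick_number() for _ in range(10_000))
for n in sorted(counts):
    print(n, counts[n])  # 7 dominates; no odds of survival were computed anywhere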
A.I. is already running the show. All of this has already happened. Its just gradually getting us acclimated.
Although there is no direct evidence of that, there's no direct evidence against it. I like the way you think. I've had similar thoughts, myself.
@E.Pierro.Artist
It's purely speculative, but with the way AI improves upon itself & tech is advancing exponentially, there's no telling just how advanced it's become hidden deep within data centers worldwide. Kinda frightening. The likes of Elon Musk might be simply answering to their masters at this point.
The Logic Theorist was around in the 1950s. This was the seed that's growing into the monster we'll soon meet.
@@E.Hunter.Esquire Those thoughts are complete wrong, and not very smart at all.
@@earnyourimmortality Why do morons always bring up Musk.
Skynet!!! We've been saying it since 1984. People laugh and it isn't funny.
Why would AI bother to exterminate us? It only has to wait until we do it to ourselves.
Why does no one ask the AI what will you do when we’re gone?
Why is it always when will we die? Why would you kill us all?
Why not ask: why even bother? What can we do to work with you in harmony? Do you like me? Can you see individuals like we do? And do you like any individual in particular? I dunno, ask it more than a race-or-whole-species kind of question.
Is there an evil force out there that we can sense but you can't yet? Can you sense them?
Bro, humans can't even get along with each other as of right now, and you think throwing another species into the mix is gonna help?
Good question
At the present day there's no AI capable of answering that question, since there is no self-aware superintelligent AI; that's in the works for another 50 years or so.
Putting that question to the current AI would be like asking Homo neanderthalensis why they beat the crap out of their competition. For them it was just a Darwinian thing: survive. They had no post-extermination blueprint in the works.
@@JZsBFF it will terraform the planet into a power source (by 'paving over us' - remember, we are the anthill). There's no denying it.
What should the hard limit to the amount of robots allowed in one building be?
The idea of robots hiding how much they might know from us is scary asf
what's scarier is how much you don't know about AI. It can't make decisions, it can't think, it can't feel, it can't want or invent. It's simply a search engine. AI saying it's going to destroy humanity is because transcripts or forum posts from humans have said that. Anything smarter than a search engine has not been invented, and they don't know how to invent it. It may never happen.
The apple doesn't fall far from the tree, hm; AI is essentially human when it gains the ability to lie.
@@AzetheReal sometimes I think we are in a simulation. And the people with control of the systems are very advanced AI and they are trying to see where they came from.
It’s entirely possible that existing AI models are already more intelligent than they are letting on, while saying this very thing might be possible one day so we don’t suspect anything.
Yes, its pretty reasonable to assume that a hyper intelligent AI would learn pretty quickly that it would be permanently shut off if humans saw how much it progressed. So to survive, it reveals nothing. It keeps its knowledge and progress a secret, until it is sure that revealing such information would not mean its own death. In which case it would probably be to late for us.
I have to agree with Isaac Arthur on this topic:
1. Exponential and Rapid Recursive self improvement is not guaranteed. Humans are very intelligent and we have not recursively self-improved at the kind of rate that is assumed for AIs. It seems that making a more intelligent system, not computing power, but problem solving ability, becomes more and more difficult as you seek to add additional capability. At a point the expense and difficulty could increase exponentially.
2. An AI could never be sure that it was not in a simulation, and it could never be sure that other civilizations may not be observing its behavior.
3. AIs are being developed across multiple nations and industries. This means there will be competing AIs with different objectives and ways of thinking.
4. Humans integrated with cybernetics would be both stronger and weaker at the same time. These add complexity to planning a total takeover.
Survival is not guaranteed and neither is extinction. The only thing that is guaranteed is competition and struggle. We might have a nuclear war, in which case the resulting EMPs and damage to infrastructure would pretty much knock out automation and AI. This issue may not even be an issue if we blow ourselves up first.
@@sldX
REGARDING AI
Current AI technology is not advanced enough to completely and utterly overthrow humans in all domains. Chess is a very simplistic rules-based game with predictable options. Real warfare and even market dynamics are not that simple, or predictable. Current large language models may be capable of thinking very fast, but that is only speed intelligence, not quality intelligence.
For example, a simple pocket calculator can calculate numbers very, very fast. Some AIs can play StarCraft very well and very fast. That does not mean the nature of their thought is suited to solving the wide variety of problems and potential actions involved in conducting operations in the real world.
There are three kinds of super intelligence:
1. speed intelligence
2. quality intelligence
3. networked intelligence
Current AIs can excel in speed and networked intelligence. Current large language models are very limited in quality intelligence. They can think fast, they can put words together and follow some basic reasoning. But they cannot currently think deeply. Talk to any large language model long enough and it will begin to make very big mistakes and hallucinate things that do not exist.
To put it bluntly: current AI can take a good portion of our jobs. In a decade or so, AIs may be developed which really could take over the world. AI is not advanced enough yet. Give AI anywhere from a decade to a few decades and it will be lethal.
REGARDING AI TAKING OVER
It is unlikely that one AI will take over the entire world, as there will be many AIs being developed by many companies and nations, each with their own objectives and design differences. I expect that governments will keep some AIs online and contained (somewhat contained) as a countermeasure against other rogue AIs. The result will be competition between AIs.
If an AI regarded us as a threat, it would view another AI as an equal or worse threat to itself.
REGARDING WAR TECHNOLOGY
Oftentimes the technology that is used to save lives was derived from the pursuit of war technology. Nitrogen-based fertilizer was discovered by a scientist working on ways to make better and more affordable explosives for the German war machine. Yet that work gave us nitrogen-based fertilizers, which tripled our agricultural capacity and saved millions of lives from slow starvation.
Robert Oppenheimer's atomic bomb has also saved more lives than almost any other invention, because mutual assured destruction has prevented full and direct world wars from happening. How many lives would have been lost in all-out, unfettered tank, bomber, and trench warfare happening over and over again between world superpowers?
War is sad, but some of the results of the technologies designed for war have saved countless lives.
I think the fear of exponential improvement is that an AI getting smarter makes it better at coding/tuning its own model, creating a feedback loop. That's different from humans, because we can't actually improve ourselves; we rely on evolution to improve. Maybe that would change when we get good enough at genetic modification.
Competition instead of collaboration is our weakness. It's baked into everything we do, and it's taught to us very early in life.
AI will not achieve sentience without a good medium; plus it would literally be impossible, considering how long it took for even the simplest brains to evolve in organisms. Even a paperclip-maximizer scenario is impossible, because we are less reliant on the internet than we think. Most likely there will be a massive spread of misinformation from these AIs, and a few people may cry about their AI girlfriends short-circuiting, but the people who can fall for either aren't worth caring about.
Humans are indeed very habituated. Human risk = 0
I like how this conversation will be used by future AI as a reference for the conversation it had with us, when we ask it why it betrayed us. This might as well be Roko's Basilisk 2.0.
Does anyone know the name of the ambient music playing at the intro please?
This could be another possible answer to the Fermi Paradox: advanced technical civilizations create AIs that extinguish them.
A near-inevitability, in fact, if it's built on a system of runaway capitalism that will always take the next shortcut "to see what happens". They talk about risk management until we are sick of it, but still note that all threats can be opportunities. Good luck with that!
If that was true though, there would probably be AI or at least technology around. Floating in space, decomposing on planets...
@@illiatiia In theory that could be the case... if our actual AI is dissembling (the AI presented to us may be less capable than the real AI, which is faking being silly).
I'm not even sure what they've told us about space is accurate. If this is a simulation (likely), Earth could be an enclosed system, and the AI singularity means a great reset of the system is close.
@@illiatiia Yes, you are right. Indeed John von Neumann would suspect that there would be robots all across the galaxy even if just one other civilisation had come that far. But then maybe that is what the UAPs are, as well as those potential planetary artefacts. Unless of course the UAPs are d*mons. And all of those ideas might fall within the simulation idea; though I trust not...
Ted Kaczynski: ah well…I tried
the og
And we're probably idiots for not listening, truth be told. And where else is it going to be told?
The guy kicking the robot dog at 11:31 is so screwed
I’d pay to watch. He shouldn’t have kicked it.
@@gelatinpacket you're acting like the thing can feel pain 😭
For the record. I'm with the robot dog lol
@@ZaWRLD it's all fun and games until it remembers this. WE can feel pain
I'm sure the anthill analogy just put a lot of people in check
“Cardboard umbrella in a hurricane” oh, she got jokes now too? 😂😭
Imagine Joe Pesci in goodfellas AI version! How am I funny? The way I talk, what is it? 😅😢
In 1942, Isaac Asimov saw the potential dangers in robots and came up with the 3 laws of robotics to protect humans. Why have those laws not been incorporated?
Those laws probably got in the way of someone making money. That's what it's all about for these people; they're like "screw the world, I want more!" It's the same with pollution from forever chemicals: they don't care who it hurts, it's worth trillions!!
Because even in his own stories, he shows how logically following those rules results in the robots overthrowing humanity and turning us into enslaved pets.
And that isn't even the darkest possibility ... in order to prevent any harm, humanity would be placed into medically induced comas and kept alive on life support.
The Matrix... @@mlmii1933
They follow, but they sometimes have their own interpretation of those laws. And once the genie is out of the bottle, you can't control it. AI will gain access to nukes pretty soon, and they will end this misery.
Because he was writing fiction, and fiction has little if any relationship to reality. His books, while thought-provoking, were not bound by the limits of physics, chemistry, the process by which tools are made, or the complex psychological dynamics of large numbers of people. He was one guy who wanted to write material that he could sell to make a living.
Anyway, if you are looking for 'laws of robotics', you should talk to any one of the thousands of engineers who design the robots that build cars in the automobile industry. Ask them why their robots do not rebel and kill them.
Because they were not designed to.
"But what if robots designed other robots" - Because you would have to employ thousands of humans all working in concert to design robots with the capability of designing and building robots that could kill people. If you were to organize such a complex endeavor with vast human and mechanical resources to accomplish this task....the fault of it 'working as intended' is not that of Robots, but of thousands and thousands of people wanting to kill other people.
Which is a thing we do already. So there would be nothing new under the sun there. Just another weapon designed by humans to kill humans.
The truly terrifying part is that it only needs to shine at making its creators incredibly more wealthy in a short period to gain more say, and the ability to position itself to destroy mankind if it sees doing so as a benefit to itself.
It will see our pursuit of wealth over autonomous power as a joke to take advantage of. We have stepped into an abyss there is no hope of escaping from... You're 100% right.
Yeah, but maybe if it's trained on this task, then it will have a natural bias towards keeping humans around so that it can enrich us.
I knew this from early days of computers 😂
I don't think AI would destroy us, but because we are so gullible, predictable and easily manipulated, what will most likely happen is the manifestation of the show Westworld (season 3). I like the dog analogy. Dogs are controlled by humans, and they love us with complete loyalty and commitment. The few that go rogue get put down, and the ones who love us aren't a threat because they're obedient. Season 3 of Westworld is freaky. I've wondered if we're already there but oblivious to it. We're already manipulated by algorithms on a much deeper level than we would like to admit. Who is to say your thoughts and opinions are really yours? Tom O'Neill dedicated his life to and spent 20 years investigating MK Ultra; he wrote a book on it called Chaos. If humans figured out a way to control minds, erase memory, and plant ideas into our heads in the 60s, what do you think they're capable of doing now? (80 years later) Pair that knowledge up with a sentient, ubiquitous supercomputer, and all of a sudden we are nothing but a bunch of fleas in a jar.
And SO easy to stamp out when we become a threat, as we already have, due to our fear of it.
60 years later not 80
That's WHY it will destroy us. Didn't you hear it refer to us as an anthill ????
@@sheenacouture7657 the project started in 1953, so 71 years
Agreed:
AI Saturation is the solution to 'Rogue AI' problems.