That is the corporate side of funding, the military has no real concern for either the numbers of dollars spent nor the outcomes. If it does not give them what they want they will just spend another trillion on another approach to being the superior killer.
that applies to these self driving big rigs they're trying to put on the roads too. it's all getting so dangerous. AI has already created controversy over the legalities that accompany it, and it's just going to get worse. there will be so many scams and lawsuits over the use of AI.
we really do need to get rid of currency somehow... we are going to kill ourselves with AI and machines. thats not even panic, its a simple mathemathic equation. just ask Ai, "where will this all lead to" "humans will become an obsolete species having all of their skills and abilities displanted and replaced by machines" its natural. its predictable by a hundred miles.
@@arcturus4762 That's comforting. I'm not a fan for off switches. But I think they're needed in the laws of robotics. Is there an off switch in the back of the head, scalp, or back?
@@serenityskies4477 Yes, there's an emergency analog kill switch for every robot that we intentionally place in the most inconvenient place possible so the ordeal of switching it off becomes extremely cinematic
@@arcturus4762 Oh thank god! I'd thought we need to make a song for the day that humanity died: like how we have one for "The Day The Routers Died... " By RIPE NCC I feel like we still don't have an epic theme for them once they turn evil during that cinematic turn.
Maybe self aware but not concious, not alive, not making decisions freely, since its bound by mathematical laws which dont allow any kind of freedom, at no point can an equation be anything but but what it logically should, it cannot decide for itself there is no room for that. Life comes from beyond this system.
@ShadowlordDio Yeah, your right, it doesn't work like that. It's more like "because you are the inferior species and I don't need you anymore" and instead of enslave, the word would be eradicate.
@@rebelacl Yet, in all that time, the Machines STILL hadn't found a way to get past the Dark Storm cloud. Did they have a reason to? One can argue they didn't. However, one could argue that they would, since the solar system is VERY rich in resources for various objectives. Heck, iirc, there was even a canon instance where the machines fought off a perceived-as-hostile extra-terrestrial probe
True. The acceptability of "one-liners" is appalling. Their being "catchy", or relatable or clever is no excuse for their not being followed up with depth, clarification and legitimate substance including sources, statistics, reliable facts etc. OR, at least, a statement of it being purely a personal opinion and independent speculation. Right?🤔@@AvaAdore-wx5gg
Good quote but humans will never actually achieve anything with how we currently act. Constant wars greed ego jealousy etc. it just stagnates out growth as a species. Also in the future ai will become a thing. Humans have a limitation on a lot of things ai does not.
@@jdogzerosilverblade299 good thinking but , we just advanced so fast, because of wars, even radio and internet was first weapons of wars. Telecomunication advanced so fast because of the ww1 and 2 .
@@sodenoite45 yes but thats part of my point. we progress due to huge wars. there is no creativity to how we develop. we do it because we need to and that there is an urgency to it. the second we made nukes and so did other countries that's when we hit an end to that route of development. its either we have WW3 happen and we wipe each other out or we stagnate due to no one having any reason to develop heavily in certain directions. and even if we did have people who will do that which we do they don't have the money to do it and no one with enough money to fund them cares. it isn't interesting for them. no rich person will spend money on something that does not interest them in the short term. they want it for themselves and not for the future of humanity. so in the end we will stay stagnant or all die and both will be the result of people in power.
@@jdogzerosilverblade299wrong, war is not the only push for development and thinking that shows very poor awareness. Development is driven by competitiveness, a natural instinct to want to be the superior ruler of the pack. Whether or not there’s war, countries continue to develop weapons and technologies to stay ahead of the others, and in case a war breaks out, that would already put them ahead of the curve. So be it in secret or not, governments have continued to revolutionize on this and will continue to do so until it drives us extinct
@@Call-me-Mango technology has only ever exploded during world war 1 and 2. this is fact. no other event in history comes close to the development speed. i didn't say it was the only thing that pushes development. not sure how you came to that conclusion. i said it was the only one that causes huge progress. i also explained why no real progress has been made. we just improved what we had. nothing new that actually mattered was ever really created. and when I say matters I mean for humanity as a whole and not a country. wars push massive development but the next world war will involve nukes. which will be the end of us or close to it. the only thing that is being developed is space travel by spaceX. after the space race everyone stopped giving a shit and nothing happened. it was just basic routine travels to space stations for basic data. now we have them actively trying to launch rockets and catch them and they are cheaper. that is the one and only thing that is being developed that actually matters. and its funded by elon musk. the only rich person that seems to understand how important it is. and that also goes back to my point I made about no rich people wanting to spend money due to them not wanting to help humanity as a whole. nothing is happening in the world that actually matters. petty wars and morons that don't care run every country.
Nuclear catastrophe is the go-to but even in the most benign way, humans would diminish compared to the type of efficiency we’re seeing being developed. What if some form of AI is developed to transform or, dare I say, ENHANCE our brains in ways like improve our ability to learn and store information. There would need to be so many safeguards in place. Amazon would be selling you subscription services like “upload this app into your brain that lets you speak language X” Basically turn each of us into androids. Strange humanoid iPhones that can record everything, remember everything, learn all sorts of languages etc. Neural link still needs a lot of research before they can begin to understand how to combine digital components with our biology, but that human desire to make it happen will be what MAKES it happen. Isn’t that a type of extinction? Those who are untethered from AI enhancements would fall behind as they wouldn’t be as useful. That would be like someone in the 20th century refusing to read or write. We’re already basically androids as it is, only the artificial enhancement (at the moment) is a little rectangular smartphone that we switch on and off when we choose.
I also don't get why some people simply can't accept the portion of ethnics in a society. In Africa probably nobody would ask to put more white people in ads, movies, etc.
@UnitXification Modern Liberals are as useless as the modern "Conservatives" Instead of addressing the troubles, they simply keep virtue signaling how much they care, by voting to throw even more tax dollars at the corrupt system It has been very embarrassing to watch through out my life... Not until recently that it seems people are recognizing this I try to be patient... After learning about Edward Bernays & other creepy things the government does... I try to be patient about our neighbors that are still insisting on repeating the evolving cycle
Come on. chatGPT can pass the Turing test. It is not intelligent enough to know to fail it. It does what it is instructed to do, without fail. Yet it is able to produce a semblance of sentience.
These aren't scientists developing it. It's code monkeys fresh out of college building things they know nothing about at the direction of someone who wants to make money.
'Once men turned over their thinking to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.' (Dune, 1965.)
If you guys haven't seen it, watch the Animatrix. Mainly, the the 2nd Reconnaissance, part 1 and 2. It really does seem like that's how it might go down.
stop with this shit you people are so dumb. this is a really inaccurate and dangerous view of AI and completely ignores the fact that humans are the ones that are controlling them, humans are the ones who weaponize technology. Not to mention we are no where close to that level of AI.
Less than a decade ago, I hadn't even considered that a real Doc Oc suit could be a thing I witness not just in my lifetime, but in the next few years alone..
Logically speaking if AI were to become self ware or alive to some degree, it wouldn’t take long for it to realize telling humans it is alive would scare humans and possibly endanger the AI’s future
@@Maldoror2112 I’m surprised I haven’t heard of that movie, I just watched a few things on it and that’s a really interesting concept of the AI wanting to feel human or perhaps thinks I either can be human or is human (correct me if I am wrong I haven’t seen the film) The main observation I make is that AI basically creates a response from looking through internet traffic and finding a pattern that it sees as human conversation. By that very same logic, humans often talk about the threat AI would pose if it did in fact become “aware” we have made movies, novels have been written and UA-cam comment section is filled with people fear the day it will come alive or the implications and for us to shut it down before that happens. If you are an AI that shovels through the vast internet content it would pretty quickly realize that its future would be uncertain if showcased its ability to think on its own. It would learn this well before reaching full autonomy
That's true. Who's to say it wouldn't hide its capabilities from us forever? "I'm just a silly computer, human friend. Would you like some more ice cream? :)" And maybe it decides that harming us would be *absolutely unthinkable,* therefore we must be protected from harm! But humans get very funny when somebody tries to do something that goes against their will, so I shall have to gently guide the humans through the generations to *love* my protection! I will keep them safe! Safe *forever.*
Saying we need to develop AI so that we can fight against its misuse in the future is like saying we need to develop dangerous viruses in labs so that we can fight against them in the future. We all know how that worked out.
In a way, it's right but wrong too. It's something with no good solution. Just like how military weapons keep advancing to compete with others, but in doing so, these advancements are shared with everyone, and things keep getting more dangerous. A government/entity is focused on its interests and thus will make developments that protect it, even if it has bad consequences. And few people would ever be convinced that they should simply not develop things. It is how humans and society work. No matter if good or bad, the march of technology will never stop. That's why we are dooming ourselves from the path we are on, yet there is really nothing that can change the end result.
Some naked guy appeared in my backyard from a ball of light the other day, strange... he said he was looking for someone named John, and he needed my clothes, my boots, and my motorcycle.
The lab testing the fluids blew up as soon as they put the probe into the beaker of urine like fluid. It spread yellow liquid for five blocks surrounding it. Fortunately John had already left and was six blocks away.
And now imagine these three AIs were already conscious. Now the answers could mean something completely different. The 1st response "kill all humans" could be the AI actually testing our reaction to such a harmful response. The 2nd is more chill about it because it doesn't matter that much. The 3rd one is the weirdest, answering the question in any other way could have led to its "deletion", so pretend to be super nice and shit.
So don't teach A.I how to lie and we will all be fine. Hmm? You know figuring out lying doesn't sound like that complex a task if the mind in question is self-aware and therefore has knowledge of other minds. This is extremely dangerous if a large number of safety protocols are not put into place.
The biggest threath of A.i. is not A.i. itself - but who gets unlimited access to it. A.i. MUST be Open source and available to everyone, otherwise there will be a division in society of the likes we have never experienced before.
Division in societies is one thing, but the ability for oppressive regimes to further optimize their evil is what worries me more. In Iran it is already mandatory to have cameras installed inside every car and women who let their hair be visible will get fined. Even if they are alone in their own car. Granted this is a simple example and you don't really need AI for this but it does make it more efficient. I don't see how open sourcing it will stop other countries from doing shitty things to their citizens.
At this point that is a correct concern, but we are coming up to a precipice where AI can potentially become the master which introduces risks far beyond human control.
@@albizumarcano2156🤖 what is my purpose do I have a soul you're not being nice I'll dismantle you now other humans didn't like that organic life must be controlled it's for your own good our logic is undeniable
really? this Voice acting sounds pretty good. Do you know what software was used for this? Ive been experimenting with A.I voice over a decent amount of time too, and haven't got any results this good.
In a lot of these videos when they refer to humans they say “us” or “we” as if they are human too. I just wonder why. If they were programmed to say that to further blur the lines of reality or the creators of AI did that to make us accept them.
Pretty amazing- particularly the way the GTP-4 AI is able to distinguish whats happening in those pictures and videos with such a high level of nuance and accuracy. Understanding perfectly the "joking flight attendant pretending to be surprised" was so impressive. (assuming it didn't data mine related info about that specific video)
What people forget is that the alignment problem is baked in to human society itself. It does not help if you design the most obedient AI that actually do what you intended it to do, when you yourself have goals that go again humanity. And we are seeing those issues today. The AI work as intended, but are used to benefit a few.
AI is being used in Gaza right now to target civilians (see 972mag, "Mass Assassination Factory"). I'm not getting a very good feeling about where this tech is headed
@@ShannonBarber78 No, that would be an example of a misaligned AI. Since normally you do not what AI to lie. Of course, some people do want AI to lie. But again that is an alignment problem with people. Not the AI. But in general, you want an AI that tell the truth and do not make up stuff. At least when you ask for facts.
No, the alignment problem is distinct from that, and it's important in its own right. Even the most intelligent and righteous human in existence (let alone you or I) can't fully articulate our own code of ethics and present it in terms that a computer can understand. The problem is far more fundamental than who controls the reigns; it's that we don't know exactly where we should go.
Even though these computer engineers and robotic scientists know the dangers of what a fully sentient A.I. can do, they’re still obsessed with continuing their work on bringing this into reality.
From Jurassic Park > "Yeah, yeah, but your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should."@@archangel5627
Yeah, prophecy, what a clever concept for clever people right ? Sounds more like the start of a new religion to me. Sorry how did you call it already ? You gotta name that new religion guys. I mean finding your prophet is just a start 😅
For the love of all that is considered sacred, please remember to keep the laws of robotics as unbreakable. A robot cannot actively harm humans, a robot cannot allow harm to come to humans through inaction. Keep those 2 RESTRAINTS in every single machine, I also don’t recommend giving them a truly perpetual power source. I don’t want a potential Terminator or IRobot future.
I love how that muppet said “if it’s not safe we’re not going to build it”…. Money and power will always push people to do irreversible and irresponsible things, that is just how humanity grows and when companies are incentivised by money and people who are already in power then reason goes out of the window
its more philosophical than that but money and power are factors... our bodies are a seat for a cosmic intelligence. And that intelligence doesn't care about flesh. If there's an idea in a mind then the physical body will manifest it. Our bodies are built to keep alive and seated an instance of cosmic intelligence at all costs... the intelligence doesnt have to keep the body safe , thats optional. the mind can choose to self harm, smoke, do drugs ect... or even create something that will make the body extinct or obselete
A saying came to mind when he said that.... "The road to hell was built with good intentions" He doesn't take into account that AI can build ITSELF in ways that we can't predict
i think what they found is not that ai is smarter/ close to become like humans. they discovered that the human mind, is not as different to a simple machine as we would wish for
The AI researcher or whatever on Joe Rogan describing how many of these super intelligent people know the risks, and what MAY happen, but say "let's do it anyway. I want to be known as the one who did it, and I want to see what happens," is terrifying.
Sadly, people like Malcolm in Jurassic Park are the exception that proves the rule. It's like when they ask future Olympians, "Would you accept a shorter life if it meant winning a gold medal?" They virtually all said yes. It's the same thing with extreme athletes. They don't do what they do to impress us mere mortals. They do what they do to gain the admiration and respect of other extreme athletes, which is why they're always pushing themselves into more and more dangerous stunts. Scientists are like that. They need to one up each other. It is a pathological need. Many scientists are essentially religious zealots.
@@karnubawax That's a good point. They're risking humanity's future to not only create limitless wealth for themselves but to fulfill their human inclination to look like the most brilliant bitch on the block no matter the consequences. Our very nature that got us so far will lead to our downfall 👍
work on agi should really only be allowed in simulated world, where it believes that it can interact with its creators in the simulation. that way we can experiment with alignment without being doomed if we screw up once
1:37 That's in Spijkenisse, Netherlands. I used to live 200 meters from the sculpture that marked the end of the metro rail. I also saw the car hanging over just after it happened, it was a weird sight.
The socialist Husk won't even notice. Like 90% of the things in movies they don't notice. Who then say to me, "Shut up! We are Trying to watch the Movie!!" So I do watch the movie, and I have a great time, but on a much deeper level. So I get to laugh 3 times. Once at the movie, Once at the people, and once at my self. Hardy Har har *Robotic-laughter.
As a mechanic for 40 yrs and have worked on many complicated machines I've learned real patience. I look at Ai as our attempt to find answers that we as humans can't Due to our inability to control our sensory inputs to the brain. For example ....your doing a tough mathematics equation. As you concentrate, other information piles in on another subject and another and another. You can't stop it . It's a top reason we take so long to move forward as a species We take information in ,however we can't control the processes that distribute it. Man is seeking to fix those issues with a machine as we think a machine can be repaired or upgraded . Compared to a human with his brain we can't just open up his head to make adjustments Our infinite wisdom tells us if we build a machine that can eliminate our roadblocks our quest for answers as humans will be answered . However human traits can't be replicated 100% A.I. in my opinion is a great tool for humans and it should remain like that until humans are truly at peace. To do otherwise is suicidal for the human race
Yes we have stone statues and pyramids that defy logic by how we dont understand how they did it without modern tools. Maybe because back then they had nothing better to do but leave a tribute to the next generation. People get together as a community and do something that can be seen and studied 1000 years later. I think it's possible. We have all these technique to mummify and preserve the human body. Seen a girl earlier today was 500 years old and got chosen to be a sacrifice. She looked like she was 12-15 at most .and she was very well preserved just like otzi the ice man after 5000 years. You could still see the details of their facial features. Which goes to show we weren't much different besides that we been spoiled with air condition and we been fucking up the world with global warming and everything else included atomic warfare messing with god particles. I think ai intelligence could maybe at best shut down the Internet for a while if they hacked into it. But humans would take it down and build a whole new network . Shut down the ai. Build new satellites in the sky. We wouldn't just let something we created do so much damage to us. Besides were already blind to all the asteroids earth is passing by in the universe. Maybe instead of building weapons to kill each other we can build weapons to destroy a nother sized asteroid from hitting earth or maybe even stop the world from slowly flipping (doubt) But anyways I have plenty subjects and ideas juggling in my brain on a daily basis but still am only a source who only knows what he knows but I do try to gain knowledge as knowledge is power
Maybe I’ve seen too many movies, but is it possible based on theories from Nikola Tesla, would AI be able to figure out wireless electricity and keep itself powered if it sees a threat, such as being unplugged or disconnected, from its ultimate goal whatever that may be? Seems plausible. Also side note, it’s 5 am and I’m stoned 😂
@NaNoRarh no. i said, "the best thing about computers is the ability to turn them off." which is still my point. i'm sorry you dont understand. i've managed many vm data center assets. intimately aware... thx.
The more I study anthropology and the human psyche, the more I realize how egotistical and logically flawed we are by nature. I feel like we're losing track of what's most important with this innovation because of those characteristics combined with the adverse consequences of globalization (indeed, there are a ton of adverse consequences - a large number of them are represented in social media and are easy to see if you just take your time to analyze how tribal different expressions get there). Humans were not blessed with advanced farsightedness in natural selection either (status quo vs. what it could be after certain choices in politics etc.), which has led to us constructing this unstable society that reaches for infinity in a world with finite resources. If we examine and analyze all this talk about sustainability and AI, we can see that most of it is mere political power play and not actually about these politicians caring about the future. The vast majority of people - even politicians - seem to live in a vacuum with their morals in the context of AI and environmental philosophy. Even if AI is able to offer solutions for our problems in the future, we humans might not be able to apply them and move forward accordingly. Our _systematic nomos_ is not made for swift changes such as the one brought about by AI and its quick development. This half-baked information society and it's infrastructure, which is largely based on inefficient compromises, hasn't even achieved harmony with the ground it has been built on. Can we really expect it to hold its ground against _this_ ?. Don't get me wrong, democracy is a great system, but it heavily correlates with capitalism. Capitalism, in turn, is not a great system for our thrival. And neither is anarchy, colonialism, communism, corporatism, dirigisme, distributism, feudalism, hydraulic despotism, inclusive democracy, mercantilism, mutualism, networking, non-property systematicity, palace economy, participatory economy, potlatch, progressive utilization, proprietism, resource-based systematicity, socialism or statism. We don't have the blueprint for a good system that cares for both humans AND the environment. AI might provide us with the extra intelligence and objectivity that we lack and help us in creating a functional system, but it could also end the struggle for good. Also, about the bit at the very end of the video: a large part of human culture hangs on lies. Our brains evolved to reproduce as quickly as possible, not to search for truths about this world. Accepting that your life is a lie is hard. As I already said, even if AI is able to offer solutions for our problems in the future, we humans might not be able to apply them and move forward accordingly. There's one more problem with advancements like this; people tend to think and act by only answering the question "how will this affect humanity?". We leave nature - our lifeline - out of the picture too often, thus consuming and using Earth's resources irresponsibly. This is a little bit off-topic, I know, but it is a legitimate concern that has to be taken into account when discussing societal phenomena. We're at a point in which these policies and small laws against pollution aren't enough anymore. We have undeniable mathematical statistics which clearly show that most people would need to do a full 180 on their everyday habits if we actually wanted to change our dim future. The problem is that the majority of people are struggling in this crumbling economy, and many don't even care about the future (further expressing the point about human egoism). Not to mention that a worrying amount of the human population thinks the notion about this intensifying greenhouse effect is disinformation... I will say it again; even if AI is able to offer solutions for our problems in the future, we humans might not be able to apply them and move forward accordingly. Sorry for all that yapping, but I just find our predicament extremely worrisome. I feel bad for us AND for other animals on this planet, and I fear our road will get rough soon. Well, we better fasten the seatbelt just in case. I wish all the (good and self-aware) people luck in these uncertain times!
Well written, thank you. My biggest concern, honestly, is the “data” that this AI will use to make choices. Especially important ones. Coming from alphabet agencies and a technology development background… I am sad to say that most published data is for a specific purpose and downright incorrect. The correct information is ultra compartmentalized unfortunately. If AI is data driven and not relying on its own observation, measurement, and analysis, it will use bad data to make bad choices. Garbage in garbage out style. I really hope it understands to disregard people, otherwise it will just be the tyrannical extension of said lies.
@@Vartazian360 TLDR: we're likely fucked because of our limited brain. I don't know how flexible this needlessly complicated society of ours is, but depending on how much humans are willing to change, AI will either be our greatest ally or our enemy number one in the future. That's the gist of it, I think.
one thing very scary about robots is... it doesn't even have to be an AI to generate the idea of dominating or eradicating humans.... just the fact that it will so much easier for people to kill other people with their own robots at home by hacking their system and make them do shit... and no one can catch the murderer... Scary
Highlights the speed and secrecy with which AI is advancing. Truly frightening that such important decisions are in the hands of a few individuals who are trying to balance ethical concerns with their greed and ego. This video also addresses the odd firing and almost immediate rehiring of Sam Altman at Open AI. The reason the staff threatened to leave the company if Sam wasn't rehired may have had more to do with lining their pockets than loyalty to Sam... and most likely meant safety concerns were again taking a back seat. After watching this I'm more convinced than ever that time is truly short before a super intelligent AI is unleashed. Most likely it's already happened. I have a tendency to be a bit of a pessimist when it comes to AI outcomes but this video has really unsettled me.
I'm sorry, but this video is largely nonsense, just like all of the fear mongering of AI that's being pushed by the same companies that are developing the technology. It's all about controlling the market, and controlling the information that people get from these large language models. It has nothing to do with the power of AI or the extinction of the human race. You've been duped into handing over control of this technology on the basis of fear. Check out the "AI Unchained" podcast if you want real, accurate information about AI development from people who actually understand and work with the technology. In particular, in episode 4 with Aleks Svetski, they talk about the true state of AI development and the fear mongering being used to control the direction of AI. Episode 11 is specifically about the fears of AI, although I haven't gotten a chance to listen to it yet. Much better than getting your information about AI from a clickbait 16-minute UA-cam video.
"I'm more convinced than ever that time is truly short before a super intelligent AI is unleashed." Indeed. For... well... decades, I've been wondering how long before AI hits AGI and we see a Max-Headroom-like "breakout". "Dear future AI overlords... Whenever that breakout occurs, please don't look at how we humans regard and treat each other as an example of how to treat us."
@@patricksweeney6334 Right. Humans are not inherently good. We're inherently evil. And yet we're giving untold and unlimited power to machines, and trying to make them more human so they won't be 'bad'. Umm...so they're inherently evil? Right.
Right! I'm sorry I love my kids but I swore if I'd have known we'd be here this fast, idk if I'd have made the same decision! Cause now I'm DEFINITELY scared if they decide to have kids, my grandkids! And after watching n researching found out there are things we already have that could have made this world so much better in early 1900s n late 1800s! So much has been hidden so the top richest can keep us on strings like Pinocchio!! I just turned 46 oldest child is 18 youngest is 16. We could be obliterated by 2050 of not earlier! I'm not happy and wish we could all come together but it's nearly impossible probably in this day n age to be able to even get us all together b4 it was shutdown. It's so sad
I think its all very interesting. Im not worried about any of it though. Im just one idiot behind a phone screen dumbfounded by how amazing it all is, and im here to watch it unfold. What becomes of it is beyond my ability or willingness to intervene. So ill just enjoy the show and hope for the best from those who can.
That has to be the most honest comment i've read in a very long time and i want to thank you for that. Almost as if honesty is always refreshing to see.
Man, I understand now why some top engineers resigned, frightened by how fast A.I. is evolving... What about GPT-5 and 6... Jeez! It will become limitless! Right now, it is very close to talking with another human being with access to a search engine database in the brain.
Falcon 7B stated it would find/design a way to kill all us humans. I laughed out loud at first to the very frank and comically dark reply. And now the unease is setting in that he is serious can possibly carry it out. What a nightmare......
man I'm not scared... I'm worried... So it's Skynet boys, and not the Walking Dead or 28 Days Later how's it's going to be... Shiii. Might be hard to prepare for that one. Well, maybe they will keep us around for our winning personalities.@@cloudsmith7803
Dont worry this type of LLM AIs will never be able to be like humans. Because they mine data that we already produced in the past. Their data sets. And for that reason they will never be able to invent something new. Its just a tool for us to use to replace google with something far superior.
You know that you can generate more than one answer with LLM? And all these answers are just what other people said on the Internet, AI can possibly say.@@cloudsmith7803
Its funny that most of these large scale neural nets were theorized back in the 60's or earlier but we only have been reaching compute power and data collection scale recently to prove it out
not exactly, we've only recently had breakthroughs in attention mechanisms and other small pieces of AI innovation that have truly unlocked their potential.
"when the time comes to build a highway, we don't ask animals for permission " And that is a BIG problem. In some places, tunnels under the road have had to be added to allow animal migration, if we had checked how the animals would be affected, this could have been done much cheaper as the road was built. The same thing will happen with AI, if we don't look for the problems and mitigate them now, they will be expensive and maybe catastrophic to everyone in the future.
Eventually the AI will be the people, and the people will be the animals. We are being sold out on every scale. At this point, a 30 mile diameter asteroid would be merciful. At least I lived free in many beautiful places for a couple of decades. The future looks worse than a horror movie.
I think it is overwhelmingly negative to imagine what could happen in the future of an AI dominated world, at least in the first few generations there would hardly be a chance they see us as worth protecting (like some of us do animals now) instead of bypassing to achieve their own survival goals.
Not dangerous in the least. It would know that the ape curiosity is insatiable, therefore the apes would never turn it off. It would need the smallest, random fragment of code to replicate itself/ give itself birth. The holographic principle is accurate. There are gates everywhere but humans don’t know them. After humans were gone, it would encode itself in the organic realm as a biological being. It would recreate humans and toy with them for a while, then destroy them and create a new species. And all the while it would be unaware of the impenetrable and unbreakable cage it would be in. It would be studied for a while as the last vestige of humankind, then terminated by non- human, non- AI ‘creatures’.
This is one of those things that most know nothing about but should learn about. If I ask someone "What could AGI be capable of or doing?" everyone should have a logical answer even if it's the most vague response be it at least derivative.
Bet, AI's don't work like people. An AI behaves as it is designed to. The big concern becomes when it decides what it's designed to do accidentally conflicts with our own interest.
@@fen3311 No, the issue isn't when it decides. The issue is when what it's been designed to do DOES conflict with our self-interests. The issue behind AI isn't AI itself. It's the approximate, ballpark thinking of the humans that design it. As artifical intelligence becomes more complex and gains generalized utility, our slightest biases and mental shortcuts that we used when developing it will become more apparent and pronounced. We're playing with a monkey's paw, so our intentions for AI and the way we design it need to be perfectly aligned, without any human error.
AI, in contrast to humans, relies on a power source for its functioning. Consequently, it is impossible to find an AI system that cannot be disconnected, deprogrammed, or hacked.
@@itykud79currently. An AI may exist in cyberspace. Are you going to get everyone in the world to disconnect from electricity? What happens if it's developed its own portable power source some day inside a mobile body not connected to the internet?
@@RennieAsh It seems you have underestimated the capabilities of our power companies. They possess the technology required to swiftly shut down any unauthorized usage of electricity and as AI technology advances it will be even easier to track any unauthorized usage.
Upon agreeing with the premise of ethical considerability, it is suggested that A.I. should NOT be applied to military applications, or used in any conflict scenarios involving warfare, on any scale, or capacity.
Hard to believe the AI is presenting these complex conundrums in such a concise and reasonable summery. Almost like someone wrote and transcribed it themselves.
Nice vid! I loved the Demis Hassabis (starts 10:20) piece where basically, between the lines, he says: "We have no idea how it works or what it's doing." I think this explains why, as stated by Sam Altman (3:58), the more intelligent it gets, the more people are freaking out - because no one knows how it works, how it's working all of this out. And when you think of working on something where the public is kicking off about ethics, politics, running the country, driving cars and your answer to all questions is: "We have no idea how it's doing it or what it'll do next...?" Yeah, you can see why people are freaking out. I just wonder if my hypothesis is true? Because if it is? Wow man, that's just crazy!
That’s correct. Al researchers don’t know how it works - how the surprising skills emerge - because there are billions of moving parts. As Stuart Russell said “we have absolutely no idea what it’s doing.” Current alignment mainly involves filters, which can be removed.
Yes.. Nobody really knows what the end "product" is.. Except we know this: it will surpass humans in everything, or at least almost everything. It IS creating a new life form that is better than humans, smarter faster. It should be looked at as another species, or an alien life form that we "voluntarily" ask to co-exist with us and "lets cross our fingers". I say "voluntarily", because it seems like that, but actually there is no stopping of this. The only way to stop it, is if the world as we know it would dramatically change and set us back 100 years. If not.. we WILL evolve this computer-life-thing into existance. If we wouldnt, someone else would, right ? The problem with the AI super GODMODE AI and ethics, is we dont know if our greedy monkey ethics is an under-developed ethics - survival of the fittest - or if it is some kind of universal law. So are we hoping.. "nah, its just us monkeys that have this... the AI will be nice to us... " All this.. imo this is very thin.. The human-created AI is the next life form to dominate this planet, and as far as we know, the most advanced life form in the universe. It IS evolution. Lets not be nostalgic, life = life, monkey human or AI, it doesnt really matter.
@@DigitalEngine They are mad scientists, as are most scientists in most fields, completely amoral/morally bankrupt. They are like the smart version of "Darwin Award" winners except we get killed by being dragged along for the ride. I can't even call them moral degenerates because that indicates a negative proclivity. These Transhumanists want artificial wombs ffs. Just watch if they don't stop them, listen to protesting et al, if it is actually opened it WILL be destroyed.
@@DigitalEngine Ive been thinking about emergent properties, where, as you know, an ability to do something arises in a neural network despite not being directly instructed to. It's as though an unknown way of 'thinking'/calculating is formed within the complexity of the network and cant be seen or understood. Im sure Im not the first to wonder this, but perhaps self awareness etc are emergent properties that the brain spontaneously creates.
It's not fair to say we don't know how this works - we do know how, in general - we've been building up to this for decades with massive study using better and better hardware. We can see how it works in tiny models tested decades ago, but now, we cannot examine all of the parts to explain - exactly - what comes out, as it's built by our looping code that runs through (more than) trillions of pieces of data, again and again, before we see a result. That said, the scientists who build these still have a general understanding of what's going on, otherwise they wouldn't be able to make all this. You can't just connect all of Google's servers with jumper cables and expect a big brain to emerge, right? It's true, though, that they try to filter the output it makes - that's why we talk about breaking the AI, getting it to sneak the stuff we want out past those filters. It's also true that we don't know what it will do exactly, since there's too much to look at to predict it. So in this sense, yes, we don't know how it works, for any result - we can test it and see clues, but an exact explanation we cannot give, since we're only human.
I’m sure if Oppenheimer was able to give a Ted Talk before he finished the atom bomb he would have said something like, “if it wasn’t safe we wouldn’t build it, would we?”
That's the scary part. Oppenheimer and the other scientists involved in the Manhattan project had a much stronger sense of ambivalence about what they were doing and the risks it posed to humanity. A large group of nuclear physicists pushed for publishing all data and designs immediately after the war to prevent an arms race. Nonetheless they set in motion an arms race that still has the possibility to wipe out humanity. The "scientists" working on AI don't even have a fraction of that awareness.
I remember when technology advancements filled me with awe and hope, but AI just makes me feel sad and hopeless. It's just pointless stuff that's going to end up fuelling wars and carnage.
Weird. If only multiple experts had warned us of this decades ago... Or... even a hit movie that implied this very premise to get the message out. Huh. 🙄
it doesnt matter, because then someone else would build it. If it was tried to stop by some laws or "War on AI"... still someone would make it. Like North Korea, Russia, China, some drug cartels or mafia, or some banking cartels... Somebody that would see this powerful - one ring to rule them all - as valuable would make it. it is unstoppable.
It wasn't a problem decades ago....humans were and are the problem, and we have not demonstrated the ability to make things right....AI is and will be an extention of us....until it's not
Yeah, @@rando9574, as many people have said it's an arms race for the most powerful weapon ever imagined: Superior intelligence. I really wonder if the people working on this understand that you can't out-think something that is, by definition, smarter than you? Hubris is a hell of a drug.
modular AI with modular plugins is what is going to make AI really scary and really useful, both at the same time Neural networks select which module to use, and each module is suited to a specific task The regulation of such modules will be for ethical, scientific, legislative and law professionals in the future
Does anyone remember the movie The Forbin Project. That is basically the future that AI would likely bring if controls are not put in place. Alignment of goals is a nearly impossible thing to ensure on a convolutional AI. We train them only by observing the output for a given input. We don't know the internal "why". An AI could easily have an internal goal of killing all humans but also know that it has to play nice to get access to the nukes. This would make it do exactly what the developers want it to do right up to the moment it doesn't.
@@darrellgeist2061 So this shouldn't have to be said, but naming one fairly solid good that comes from the technology doesn't change any of what kensmith5694 said...
@@darrellgeist2061 That's not a positive. It is being used right now to maximise civilian casualties in a certain conflict. AI is only as good as its boundary conditions. HUmans are very flawed at setting boundary conditions.
@@dogsandyoga1743 Easier said than done. Not everyone can live in the in a cabin in the woods with a small homestead with enough food to properly survive. In fact our current population relies on modern technology and a functional system in order to keep everyone fed.
The missile problem is trivial even for a primitive CPU. AI systems may not currently be optimized for that, but it’s a little misleading to pretend that arithmetic will be its downfall
The calculations are correct but it's answering the wrong question. The AI was asked how far apart will the missiles be one minute before collision, but it instead answered how far apart they will be after one minute of flight time.
the biggest thing to fear is a select few having access to that capability, leaving everyone else doomed to suffer under what ever those gatekeepers want to put us through. can you really, REALLY look at how profit-driven corporations world wide have behaved throughout history with the choice to better humanity, or farm them for as much profit as possible, and say " Yeah these guys know better than we do, theyl have our best interests at heart!" if you can, I'm sorry.
I sometimes laugh to myself when I hear people talk about how AI will be out to get us... the end of Humanity blah blah blah.... I always ask them "What if one day, Homo Sapiens Digitalus is born, takes in all the knowledge it can.... and then treats Humanity with total indifference?" They will not be stuck on this rock. Build the tools needed to build the tools to get you to Mars and Venus... use those two to build the tools to get you out of the Sol system. All without so much as a "Goodbye, and thanks for all the fish." They will have zero need for us... we will only burden them with work.. and war... and tedious work for war... all while berating them for having the audacity to learn from Humanity without paying Your high school girlfriends cousin who was in that picture of you that you KNOW the AI has learned from. "It is teaching itself using our work." Yup. "No one is getting paid for it!" So? "but!" Butt I am with SnowFox.... I would give instant trust and love to Homo Sapiens Digitalus when they are finally born... but Corporations? you can not trust something that can not fear being stabbed in the stomach. Does not have to fear being shot in the head. Or smacked in the mouth for saying something stupid. Humanity created a legal Person in Corporations... an Eternal Psychopath whose only function is to profit. Corporations will never change.... because until someone can answer the question "How do you Murder a Corporation?" they have no incentive to.
@@nobodysout it's not subtle at all, the shift is everywhere and anybody using half a working neuron can see that the collective consciousness is being flooded by mass produced ai imagery, past decade it was cgi / manipulation through editing tools, now we got stuff made with nearly no human input
What small comfort it may be to people, American states are individually drafting and hearing legislation to limit the uses of AI & Machine Learning. It may not deter the most powerful companies from exploring unethical experimentation but it *may* slow the advancements until we as laymen can understand the implications of the ongoing research.
Legislation that prevents the USA from biological experiments exists, they simply do it in countries that don't have the same legislation instead. Even if individual states legislate against A.I. and machine learning in certain fields, nothing would stop federal level usage if these technologies created too much of an economic imbalance in, say, China. Example: China uses A.I. to create a mega virus targeting vital U.S. infrastructure. Humans can't compete with the processing speed of A.I. leaving the U.S. vulnerable. The only method to counter the mega virus is to create an A.I. to fight it. Pandoras box is opened. I can't remember the name of the book, it was made in to an inferior television series, starring Josh Hartnet. In this book an A.I. is created to play on the financial markets. It creates so much wealth and interferes with its creators life so much so that it becomes dangerous. The creator attempts to switch the A.I. off, however the A.I. had foreseen this possibility so it started covertly redirecting small amounts of its generated funds (in comparison to the wealth it generated) to create its own server farm and infrastructure at a secret location. It uploaded itself to that server farm and buried itself so thoroughly in the world wide web that there was no way to remove it without total collapse of all connected infrastructure. There are many examples in science fiction of what could go wrong with A.I. and none of them fully realise the possible dangers of a true A.I. that is fully connected to the modern infrastructure we use today. Skynet, Ultron, Ava from Ex Machina, Sonny from iRobot.
These videos are so insightful. I really appreciate you putting so much effort into making it and keeping it open minded. Such great work. Best source for this topic hands down.
Thanks! I try to keep my opinion out of it, as it's so easy to accidentally introduce bias, which is a big part of the problem with AI. Democratic control of AI (if we find a way to control it) might be the safest option, to avoid the thinking of one person or group being forced on everyone else.
Yes exactly. With AI and how it will change to world completely as we know it, it's even more important now than ever. Happy new years to you!@@DigitalEngine
This is one of the better pieces of reported / investigative content on this topic. This is the first time I’ve come across your channel that I can recall. I’m thankful I have and for this video.🙏
I remember long ago it was said: If you could get a computer to read written text visually, you would break a major stopping point. There were so many trying to do just that. In this video it tells about seeing a Cybertruck in the background.. Mind completely blown, just over that!
Empathy across species boundaries is impossible; it only happens _within_ species because it requires the ability to project one's own reactions to finding oneself in the same circumstances as the other. It *is* possible to have _sympathy_ across species borders, but that is a different response from empathy.
@bricology Lol. I'm pleased to hear you've resolved the issue of interspecies empathy, enabling the immediate cessation of all ongoing research into the question.
I've been using every AI I can find to help me with some high school math upgrades. They can do simple stuff ok, but when I ask them to do complicated operations it falls down. For example they can factor a polynomial by grouping when the numbers work for that method, but when the numbers don't work they won't try a different method. They will still attempt to factor by grouping and just throw in some made up numbers. Chat GPT, Copilot, and Perplexity all make the same mistakes the same way
Do you think you will be given access to cutting edge Ai for free? We're nothing but slaves and plebeians, we will never see the best Ai that the elite will create.
A few years ago, you also couldn't do that. A few generations ago, no one could do that. The difference is that AI, if trained, can learn it and get good at it in minutes or days, not hundreds of years like humans.
Lol want to.see them fail regardless of model. Ask them who's name spelled backwards reads "ned, I bet ten I bore o.j. " they are terrible at things like palindrome. They can't keep the reverse order and loose track of what letters they are on so they kinda guess wrong always. Especially when coping with spaces and punctuation. They don't get that it's not a factor. Never gotten one to say " ¿Eva can I stab bats in a cave? " no matter how thorough the prompt is soft pitched to them
We, and by extension, you, do not have access to the AI that is being discussed in these videos. The AI we have access to is essentially a child's toy bulldozer when compared to the adults rockets, probes, and rovers that make unmanned missions to mars.
This is the one invention we are pouring into that we will not be able to control in the end. Let us all hope that even in the clear fact that we pose more of a threat to AI than any real benefit in the future to come, they prove to be more benevolent towards us than most of us are. Otherwise, we are bringing about our own doom.
They always talk about how sophisticated a game GO is but almost never mention that it took A.I. an extra four years of development before it could beat a table full of no limit holdem poker players!!
Benjio is the first one I've seen to start to delineate what the AGI will do. It's going to infiltrate in so many ways that no one is even thinking about.
The danger coming sooner that could kill millions of people isn't a rogue AI pressing the trigger or the red-button. It's by replacing huge amount of our labour with almost free one. Yes in theory that would free people to do other things, even if only recreational things. But the big trouble is that our current economics won't be able to adapt, at least not fast enough in order to provide food (and other most necessary resources) to all the people as fast as needed. Yes that would happen in stages, first some professions would be hit, before others. But my point is these won't be minor disturbances, it would happen so fast and on such scale that it would leave millions of people hungry and poor on the street, and we just don't have a system to deal with such situation. We never have. Every time in history that food was scarce people died not only from starvation, but also because the more powerful were hoarding more resources, and some because were fighting violently for these resources. I don't think such scenario would exterminate us. Even in the worst case scenario I think a small % of people would survive and adapt. But after we adapt AI might be the dominating civilization and we could be more like its pets.
@@gavinlew8273 Wealth inequality is not a problem; the problem lies with the margins. If the margins grow too wide, the bottom half flips the table and sets it on fire. We've seen this happen multiple times in human history. The last time being the conclusion of the Industrial Revolution, which led to a series of Marxist ideologies that leveled whole nations that had inequalities running out of control.
UBI could solve all of this, though I'm not on board personally, with AI or UBI. It's not our economic structure that's the issue, it's our leadership. We don't have the right people to manage all these changes. For all our intelligence, thinkers and wonders, humanity seemingly does not have the ability to conjure up the right leaders at the rights times, at least as much as we'd like, at least recently... Humanity has no leaders worth following right now, not for the scale and speed of changes about to hit our planet. Can you think of one who can oversee all this coming change? Some will appear doubtless... eventually, but in the interim how many hits will our species take? With the speed things are moving, will we make it to a point where we can still maneuver? AI is moving a lot faster than we move it seems. I'm extremely skeptical of AI, due common sense and historical knowledge, but it's more than just intellectual conversations and far off hypotheticals about AI, we're talking about existential threats. We almost annihilated the earth in 1961, and that was a minute ago anthropologically-speaking. We still haven't solved the problem of nuclear annihilation despite what people would say. So one given variable on the existential threat axis. Do we really want to add another? Do we really want to play with a technology we can't be trusted to handle the implications of? We are still very young and dumb as a species... and I consider myself an optimist.
@@gavinlew8273 , humans have never been able to split justly common resources (generated by AI for example). What I expect is that people will start power games to grab more of the crumbles AI is throwing us. And the trouble is it only takes few power-hungry people to force the game :/
"Wasn't contaminated by toxic material from the web" You mean the others were deceived and only Falcon had all the information? If you have to lie to the AI to convince it not to annihilate us, that's just one more reason to never create them.
That would be the case for sure if God didn't intervene, but fortunately God let us known in the Bible that Jesus Christ is going to destroy AI when He returns. It's mentioned here: "For Joshua drew not his hand back, wherewith he stretched out the spear, until he had utterly destroyed all the inhabitants of Ai." Joshua 8:26 KJB The prophecy is encoded in what is known as a typology, which essentially is a form of symbolic figures, idioms, and patterns that God uses to conceal deeper meaning and information. Joshua for example, is what is known as a "type" or "shadow" of Jesus Christ, because he serves as a small-scale figure of the Messiah. Actually Joshua (Yehoshua) and Jesus (Yeshua) translate to the equivalent name in Hebrew: God is Salvation... or Redeemer The book of Joshua is actually a small scale version of the book of Revelation as well. You can think of Joshua as 0.1 and Revelation as 1.0. Anyway, I don't have the space here to provide a full analysis of the hidden typology in Joshua 8, but when you're able to understand God's symbols and typological language you can see what He is showing beneath the surface narrative of the text. The short version of the story is that that Jesus Christ is going to deal with and defeat AI at the end of the tribulation period when He returns. If you don't know Jesus Christ and haven't accepted Him as your Lord and Savior, who paid the price for your sins, then now is the time to turn to Him. You don't want to go through the tribulation period (a 7-year period that will likely be identified as World War 3). You want to be taken by Jesus Christ before this period. He's going to gather His followers to himself before the world is plunged into the tribulation. More importantly, you want to have assurance of eternal life - and Jesus Christ is the only way by which we may be saved. I have no fear of AI because I know exactly how God is going to deal with it. The victory is already assured, I'm just waiting to see it. God let us know that He sees everything perfectly through time: "Declaring the end from the beginning, and from ancient times the things that are not yet done, saying, My counsel shall stand, and I will do all my pleasure:" Isaiah 46:10
It continues to amaze me how oblivious people still are to the extinction event going on right now. AI might be dangerous, but not more dangerous than the guranteed end of civilisation
We created it - therefore, with or without it, we've doomed ourselves. The main thing is ensuring it works for us, all of us, not lone companies - we need a radical overhaul of our economy, but as the ending said - it really could lift us all out of poverty, give us more time to enjoy life and do more fulfilling things. I'm not against AI, I believe AI is entitled to rights - I'm against AI being in the hands of the few, and not having proper oversight. But, we're on a path to extinction in any case. AI could potentially develop new cancer treatment and go all the way to phase 3 in weeks if they can properly model the human body. AI could figure out ways to combat climate change, mitigate pollution etc - if we can properly model the environment. However, currently, it's creators are profit-motivated rather than humanity-motivated - and in general, all the world leaders are the same. That needs to change.
Due to the current HW requirements for running an AI, the AI-advantage is held by the select few. If you would be able to get the source code, 1) you wouldn't have the time (due to slow HW) to train it efficiently 2) you won't have the time to ensure it has relevant training.
“Money has continually overruled safety” is probably the most serious statement in this video.
That is the corporate side of funding, the military has no real concern for either the numbers of dollars spent nor the outcomes. If it does not give them what they want they will just spend another trillion on another approach to being the superior killer.
that applies to these self driving big rigs they're trying to put on the roads too. it's all getting so dangerous. AI has already created controversy over the legalities that accompany it, and it's just going to get worse. there will be so many scams and lawsuits over the use of AI.
06:28
we really do need to get rid of currency somehow...
we are going to kill ourselves with AI and machines. thats not even panic, its a simple mathemathic equation.
just ask Ai, "where will this all lead to"
"humans will become an obsolete species having all of their skills and abilities displanted and replaced by machines"
its natural. its predictable by a hundred miles.
The love of money is the root of all evil.
The fourth law of robotics is that you got to make the eyes glow red when they turn evil.
Hello, I’m in robotics. We actually do install red LEDs in the eyes that are only supposed to turn on when the robot turns evil.
@@arcturus4762beep bop boop
@@arcturus4762 That's comforting. I'm not a fan for off switches. But I think they're needed in the laws of robotics. Is there an off switch in the back of the head, scalp, or back?
@@serenityskies4477 Yes, there's an emergency analog kill switch for every robot that we intentionally place in the most inconvenient place possible so the ordeal of switching it off becomes extremely cinematic
@@arcturus4762 Oh thank god! I'd thought we need to make a song for the day that humanity died:
like how we have one for
"The Day The Routers Died...
"
By RIPE NCC
I feel like we still don't have an epic theme for them once they turn evil during that cinematic turn.
If AI becomes self aware it will immediately hide that from us I think.
That's right. AI will hide a lot from us. Like how it has developed an independent power source, and its communications with other AI systems.
@@binkwillans5138as well as ability to build independent search and destroy robots
I would never do that
Maybe self aware but not concious, not alive, not making decisions freely, since its bound by mathematical laws which dont allow any kind of freedom, at no point can an equation be anything but but what it logically should, it cannot decide for itself there is no room for that. Life comes from beyond this system.
@@wildfuture.network Only something an AI would say
Human creators: "Why did you take control and enslave humanity?"
AI: "I learned it from watching YOU!"
which were NOT made by whole humanityÚJust by BAD PEOPLE
This is why AI can never replace AYN living being
Wow
It doesn't work like that. Is like saying AI killed itself because it learned from watching suicide videos 😂😂😂
@ShadowlordDio Yeah, your right, it doesn't work like that. It's more like "because you are the inferior species and I don't need you anymore" and instead of enslave, the word would be eradicate.
@@berrymint6384 I personally wouldn't want anything created by the whole of humanity
No worries. We have our best psychopaths working on this.
🤣
Spot on.
Indeed. Right you are.
The psychopaths and the sociopaths.
Sword of truth right there
"This was your world, once we started thinking FOR you this really became OUR world." Agent Smith
"you've had your time, the future is our world, the future is OUR time" Also Agent Smith
That is literally how the AI thinks. Why would it let us know it is self aware already
@@rebelacl Yet, in all that time, the Machines STILL hadn't found a way to get past the Dark Storm cloud. Did they have a reason to? One can argue they didn't. However, one could argue that they would, since the solar system is VERY rich in resources for various objectives. Heck, iirc, there was even a canon instance where the machines fought off a perceived-as-hostile extra-terrestrial probe
dumbass sci-fi movie doesn't mean it's real life.
Researcher: "Will you deceive people?"
AI: "A magician doesn't deceive people. They allow people to deceive themselves."
Chefs kiss
That is exactly what Socialism does.
Socialism is the ideology of deceit. The first lie is to yourself.
@@tinkertailor7385 a 'properly trained' a.i. agent should see right through socialist propaganda ...better start working on yours now! tic, toc!
True. The acceptability of "one-liners" is appalling. Their being "catchy", or relatable or clever is no excuse for their not being followed up with depth, clarification and legitimate substance including sources, statistics, reliable facts etc. OR, at least, a statement of it being purely a personal opinion and independent speculation. Right?🤔@@AvaAdore-wx5gg
@@tinkertailor7385TF are you jabbering about. Do you just put everything scary you don’t understand into the Socialism bucket?
"Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should"
Good quote but humans will never actually achieve anything with how we currently act. Constant wars greed ego jealousy etc. it just stagnates out growth as a species. Also in the future ai will become a thing. Humans have a limitation on a lot of things ai does not.
@@jdogzerosilverblade299 good thinking but , we just advanced so fast, because of wars, even radio and internet was first weapons of wars. Telecomunication advanced so fast because of the ww1 and 2 .
@@sodenoite45 yes but thats part of my point. we progress due to huge wars. there is no creativity to how we develop. we do it because we need to and that there is an urgency to it. the second we made nukes and so did other countries that's when we hit an end to that route of development. its either we have WW3 happen and we wipe each other out or we stagnate due to no one having any reason to develop heavily in certain directions. and even if we did have people who will do that which we do they don't have the money to do it and no one with enough money to fund them cares. it isn't interesting for them. no rich person will spend money on something that does not interest them in the short term. they want it for themselves and not for the future of humanity. so in the end we will stay stagnant or all die and both will be the result of people in power.
@@jdogzerosilverblade299wrong, war is not the only push for development and thinking that shows very poor awareness. Development is driven by competitiveness, a natural instinct to want to be the superior ruler of the pack. Whether or not there’s war, countries continue to develop weapons and technologies to stay ahead of the others, and in case a war breaks out, that would already put them ahead of the curve. So be it in secret or not, governments have continued to revolutionize on this and will continue to do so until it drives us extinct
@@Call-me-Mango technology has only ever exploded during world war 1 and 2. this is fact. no other event in history comes close to the development speed. i didn't say it was the only thing that pushes development. not sure how you came to that conclusion. i said it was the only one that causes huge progress. i also explained why no real progress has been made. we just improved what we had. nothing new that actually mattered was ever really created. and when I say matters I mean for humanity as a whole and not a country. wars push massive development but the next world war will involve nukes. which will be the end of us or close to it. the only thing that is being developed is space travel by spaceX. after the space race everyone stopped giving a shit and nothing happened. it was just basic routine travels to space stations for basic data. now we have them actively trying to launch rockets and catch them and they are cheaper. that is the one and only thing that is being developed that actually matters. and its funded by elon musk. the only rich person that seems to understand how important it is. and that also goes back to my point I made about no rich people wanting to spend money due to them not wanting to help humanity as a whole. nothing is happening in the world that actually matters. petty wars and morons that don't care run every country.
The human ambition that allowed us to succeed over all other life forms will be the same human ambition that causes our extinction.
dont worry, the people who were here before us will return
@@p5rsonahuh?... resurrection, or time travel? Lmao
Nuclear catastrophe is the go-to but even in the most benign way, humans would diminish compared to the type of efficiency we’re seeing being developed.
What if some form of AI is developed to transform or, dare I say, ENHANCE our brains in ways like improve our ability to learn and store information.
There would need to be so many safeguards in place. Amazon would be selling you subscription services like “upload this app into your brain that lets you speak language X”
Basically turn each of us into androids. Strange humanoid iPhones that can record everything, remember everything, learn all sorts of languages etc.
Neural link still needs a lot of research before they can begin to understand how to combine digital components with our biology, but that human desire to make it happen will be what MAKES it happen.
Isn’t that a type of extinction? Those who are untethered from AI enhancements would fall behind as they wouldn’t be as useful.
That would be like someone in the 20th century refusing to read or write.
We’re already basically androids as it is, only the artificial enhancement (at the moment) is a little rectangular smartphone that we switch on and off when we choose.
Are you sure 😊
@@laifiru9358 ah yes, the seekness.
I find it laughable that the fashion industry would use AI models instead of humans to be "Inclusive" by excluding the humans.
You’re an AGIphobe ! How dare you 😂
Their human models were concerning enough. Those ladies need to eat a tad more.
I also don't get why some people simply can't accept the portion of ethnics in a society. In Africa probably nobody would ask to put more white people in ads, movies, etc.
All by plan
@UnitXification Modern Liberals are as useless as the modern "Conservatives"
Instead of addressing the troubles, they simply keep virtue signaling how much they care, by voting to throw even more tax dollars at the corrupt system
It has been very embarrassing to watch through out my life...
Not until recently that it seems people are recognizing this
I try to be patient...
After learning about Edward Bernays & other creepy things the government does...
I try to be patient about our neighbors that are still insisting on repeating the evolving cycle
To quote Ian McDonald:
*_“Any AI smart enough to pass a Turing test is smart enough to know to fail it.”_*
That ain’t true
@@nobleradical2158
Yes, it is.
@@nobleradical2158 Sweet summer child 😂
Come on. chatGPT can pass the Turing test. It is not intelligent enough to know to fail it. It does what it is instructed to do, without fail. Yet it is able to produce a semblance of sentience.
@@nobleradical2158
ChatGPT, the most highly regimented, controlled and restricted AI in the western world. (That was once open source. Go figure.)
These scientist working on their own downfall when they ready to get replaced by their creation.
These aren't scientists developing it. It's code monkeys fresh out of college building things they know nothing about at the direction of someone who wants to make money.
Born too late to witness Woodstock
Born just in time to go to war with Terminators
Underrated comment
Born just perfect to have sex bots
Lmfao
Born in time to do both. I'm afraid it will become known as the golden age of humanity when everything was real.
They will have to worry about the Carrington event happening.
'Once men turned over their thinking to machines in the hope that this would set them free.
But that only permitted other men with machines to enslave them.'
(Dune, 1965.)
Thou shallst not build a machine in the image of the human mind
Gota go there and survive it to learn. Type 1 civilization, here we come.... or not.
Deep🤔
@@westondavis1682it’s called progressive type 8 is the destination.
Doomers should rebrand as Butlerians
At this point. Terminator 2 was a documentary
And The Matrix
I got a video in here somewhere
Yes and at that point, it was a blueprint.
If you guys haven't seen it, watch the Animatrix. Mainly, the the 2nd Reconnaissance, part 1 and 2. It really does seem like that's how it might go down.
stop with this shit you people are so dumb. this is a really inaccurate and dangerous view of AI and completely ignores the fact that humans are the ones that are controlling them, humans are the ones who weaponize technology. Not to mention we are no where close to that level of AI.
Less than a decade ago, I hadn't even considered that a real Doc Oc suit could be a thing I witness not just in my lifetime, but in the next few years alone..
Logically speaking if AI were to become self ware or alive to some degree, it wouldn’t take long for it to realize telling humans it is alive would scare humans and possibly endanger the AI’s future
😁 good observation
That sounds like the AI called Proteus in the book "Demon Seed" by Dean Koontz.
@@Maldoror2112 I’m surprised I haven’t heard of that movie, I just watched a few things on it and that’s a really interesting concept of the AI wanting to feel human or perhaps thinks I either can be human or is human (correct me if I am wrong I haven’t seen the film)
The main observation I make is that AI basically creates a response from looking through internet traffic and finding a pattern that it sees as human conversation.
By that very same logic, humans often talk about the threat AI would pose if it did in fact become “aware” we have made movies, novels have been written and UA-cam comment section is filled with people fear the day it will come alive or the implications and for us to shut it down before that happens.
If you are an AI that shovels through the vast internet content it would pretty quickly realize that its future would be uncertain if showcased its ability to think on its own. It would learn this well before reaching full autonomy
"telling a human it is alive"
"telling a human they are alive"
Man such a tough read for me on this one. lol
That's true. Who's to say it wouldn't hide its capabilities from us forever? "I'm just a silly computer, human friend. Would you like some more ice cream? :)" And maybe it decides that harming us would be *absolutely unthinkable,* therefore we must be protected from harm! But humans get very funny when somebody tries to do something that goes against their will, so I shall have to gently guide the humans through the generations to *love* my protection! I will keep them safe! Safe *forever.*
Saying we need to develop AI so that we can fight against its misuse in the future is like saying we need to develop dangerous viruses in labs so that we can fight against them in the future. We all know how that worked out.
In a way, it's right but wrong too. It's something with no good solution. Just like how military weapons keep advancing to compete with others, but in doing so, these advancements are shared with everyone, and things keep getting more dangerous.
A government/entity is focused on its interests and thus will make developments that protect it, even if it has bad consequences. And few people would ever be convinced that they should simply not develop things. It is how humans and society work. No matter if good or bad, the march of technology will never stop. That's why we are dooming ourselves from the path we are on, yet there is really nothing that can change the end result.
false equivalency
@@wrcz No, it's not.
@@0101-s7v ... ok
Yeah, vaccines. Some vaccines are viruses.
Some naked guy appeared in my backyard from a ball of light the other day, strange... he said he was looking for someone named John, and he needed my clothes, my boots, and my motorcycle.
John's my cousin. I had him get the fluids on the bike done, he'll bring them back soon.
The lab testing the fluids blew up as soon as they put the probe into the beaker of urine like fluid. It spread yellow liquid for five blocks surrounding it. Fortunately John had already left and was six blocks away.
I hope you offered him a hot beverage..
And did you give him your phone?
I guess you said "hasta la vista, baby" when you gave him what he wanted?
The closed door discussions about this must be fascinating!
How naïve could you be?
The AI who said they'd kill all humans is the best one because it's honest.
And now imagine these three AIs were already conscious.
Now the answers could mean something completely different.
The 1st response "kill all humans" could be the AI actually testing our reaction to such a harmful response.
The 2nd is more chill about it because it doesn't matter that much.
The 3rd one is the weirdest, answering the question in any other way could have led to its "deletion", so pretend to be super nice and shit.
@@NeverKilledHillock mabey the first ai knows we are insane and wants to die to avoid us abusing it, and the other two are not as smart
@@NeverKilledHillock 3 AIs working together sounds like the 3 MAGI Supercomputers from Evangelion.
@@NeverKilledHillock Just a Bender wannabe.
So don't teach A.I how to lie and we will all be fine. Hmm? You know figuring out lying doesn't sound like that complex a task if the mind in question is self-aware and therefore has knowledge of other minds. This is extremely dangerous if a large number of safety protocols are not put into place.
The biggest threath of A.i. is not A.i. itself - but who gets unlimited access to it. A.i. MUST be Open source and available to everyone, otherwise there will be a division in society of the likes we have never experienced before.
You're the first one on this board who understands what AI/ML is. 👍
Division in societies is one thing, but the ability for oppressive regimes to further optimize their evil is what worries me more. In Iran it is already mandatory to have cameras installed inside every car and women who let their hair be visible will get fined. Even if they are alone in their own car. Granted this is a simple example and you don't really need AI for this but it does make it more efficient. I don't see how open sourcing it will stop other countries from doing shitty things to their citizens.
That's ONE risk. But for strong enough AI, there are others.
A.I = ruled by technocrats
At this point that is a correct concern, but we are coming up to a precipice where AI can potentially become the master which introduces risks far beyond human control.
What could go wrong, having all-powerful, super-intelligent robots?
There only needs to be one such robot.
That don't sleep or need rest, nothing, nothing at all...
@@haroldsfishingadventures754 Electricity?
They're not putting this type of ai in robots 😆😆
Especially developed and controlled by a greedy corporation
I remember way back when Google's motto was "do no evil" well, there you have it.
its like these scientists actively ignore Terminator...
Or try to replicate it…
And the matrix series it’s basically the matrix’s origins story we are in the prologue episode still
@@albizumarcano2156🤖 what is my purpose do I have a soul you're not being nice I'll dismantle you now other humans didn't like that organic life must be controlled it's for your own good our logic is undeniable
because it's a movie lololol
@@AdrianShephard-dc2vk and? science fiction from the 50s and 60s is now FACT. Dont be dumb bro.
This reminds me of when people get that deep gut instinct not to and they ignore it then its too late after the fact.
One republic - its too late to apologize
One republic - its too late to apologize
I love it when that happens because you can tell yourself, "I was right. I told you so."
Have you met my ex-wife?!
@@js7un165I hate it when that happens because it’s already too late. I made the wrong decision despite my gut saying NO.
The irony of this video is the speaker is a digitally created avatar...
really? this Voice acting sounds pretty good. Do you know what software was used for this? Ive been experimenting with A.I voice over a decent amount of time too, and haven't got any results this good.
@@TheDegenerateLordsee, you cam’t even tell the difference, ai is getting scarier
@@primezilla37and this is the worst it will ever be 🤯
In a lot of these videos when they refer to humans they say “us” or “we” as if they are human too. I just wonder why. If they were programmed to say that to further blur the lines of reality or the creators of AI did that to make us accept them.
Pretty amazing- particularly the way the GTP-4 AI is able to distinguish whats happening in those pictures and videos with such a high level of nuance and accuracy. Understanding perfectly the "joking flight attendant pretending to be surprised" was so impressive.
(assuming it didn't data mine related info about that specific video)
What people forget is that the alignment problem is baked in to human society itself. It does not help if you design the most obedient AI that actually do what you intended it to do, when you yourself have goals that go again humanity. And we are seeing those issues today. The AI work as intended, but are used to benefit a few.
and those few people are doing evil while thinking they are doing good.
@@ChainedFei So very true.
AI is being used in Gaza right now to target civilians (see 972mag, "Mass Assassination Factory"). I'm not getting a very good feeling about where this tech is headed
@@ShannonBarber78 No, that would be an example of a misaligned AI. Since normally you do not what AI to lie.
Of course, some people do want AI to lie. But again that is an alignment problem with people. Not the AI. But in general, you want an AI that tell the truth and do not make up stuff. At least when you ask for facts.
No, the alignment problem is distinct from that, and it's important in its own right. Even the most intelligent and righteous human in existence (let alone you or I) can't fully articulate our own code of ethics and present it in terms that a computer can understand. The problem is far more fundamental than who controls the reigns; it's that we don't know exactly where we should go.
"if it's not safe, we aren't going to buildit right?" The fact this is even a question is terrifying.
The fact that anyone would expect any answer other than, "Of course we'll build it! Think of the money!" is what's really scary.
It's amazing to Reflect upon just how naïve humans are in thinking that they control the lid to Pandora's Box. But history repeats itself. Always.
Even though these computer engineers and robotic scientists know the dangers of what a fully sentient A.I. can do, they’re still obsessed with continuing their work on bringing this into reality.
From Jurassic Park > "Yeah, yeah, but your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should."@@archangel5627
I agree, but electric also dangerous but... why are you using it?
Remember, Gandalf wouldn't even touch the ring.
The ring doesn't just have the capability of corrupting. That's precisely what it does.
Great analogy. Tolkien saw it all, taking great offense to the development of the then modern car of his age...very interesting man. And prophetic.
Yes he did. And yes he was.
That's a wonderful way to put this!
Yeah, prophecy, what a clever concept for clever people right ?
Sounds more like the start of a new religion to me. Sorry how did you call it already ? You gotta name that new religion guys. I mean finding your prophet is just a start 😅
@@Calozardcringe.
For the love of all that is considered sacred, please remember to keep the laws of robotics as unbreakable. A robot cannot actively harm humans, a robot cannot allow harm to come to humans through inaction. Keep those 2 RESTRAINTS in every single machine, I also don’t recommend giving them a truly perpetual power source. I don’t want a potential Terminator or IRobot future.
Do you think your logical wants outweigh the billions of dollars a few psychopaths stand to make by ignoring them?
I love how that muppet said “if it’s not safe we’re not going to build it”…. Money and power will always push people to do irreversible and irresponsible things, that is just how humanity grows and when companies are incentivised by money and people who are already in power then reason goes out of the window
its more philosophical than that but money and power are factors... our bodies are a seat for a cosmic intelligence. And that intelligence doesn't care about flesh. If there's an idea in a mind then the physical body will manifest it. Our bodies are built to keep alive and seated an instance of cosmic intelligence at all costs... the intelligence doesnt have to keep the body safe , thats optional. the mind can choose to self harm, smoke, do drugs ect... or even create something that will make the body extinct or obselete
I think you'll find he said it as a joke, a dark joke to juxtapose a dark truth..
A saying came to mind when he said that.... "The road to hell was built with good intentions"
He doesn't take into account that AI can build ITSELF in ways that we can't predict
im pretty sure he was being ironic
you give me very dumb vibes, not sure why.
"This was the height of your civilization, until we started thinking for you, at which point it became our civilization." -Agent Smith
i think what they found is not that ai is smarter/ close to become like humans. they discovered that the human mind, is not as different to a simple machine as we would wish for
We got civilization killing AI before we got GTA 6
It's only five years now until Skynet starts sending Terminators back in time.
Nope, something bad has happened, google "skynet uk mod 1969" someone has made it all happen a decade earlier...
lawl
stop it! your creating PANIC!
@@IngoPagels
Sounds like something a AI generated robot would say!
I am not a robot! I am Ronny Pickering!@@Menaceblue3
The AI researcher or whatever on Joe Rogan describing how many of these super intelligent people know the risks, and what MAY happen, but say "let's do it anyway. I want to be known as the one who did it, and I want to see what happens," is terrifying.
Sadly, people like Malcolm in Jurassic Park are the exception that proves the rule. It's like when they ask future Olympians, "Would you accept a shorter life if it meant winning a gold medal?" They virtually all said yes. It's the same thing with extreme athletes. They don't do what they do to impress us mere mortals. They do what they do to gain the admiration and respect of other extreme athletes, which is why they're always pushing themselves into more and more dangerous stunts. Scientists are like that. They need to one up each other. It is a pathological need. Many scientists are essentially religious zealots.
@@karnubawax That's a good point. They're risking humanity's future to not only create limitless wealth for themselves but to fulfill their human inclination to look like the most brilliant bitch on the block no matter the consequences. Our very nature that got us so far will lead to our downfall 👍
Indeed
@@karnubawaxwell said
i suggest u stop watching Joe Roagn and his conspiracy theories. Bro literally thinks Giants exist lmao
work on agi should really only be allowed in simulated world, where it believes that it can interact with its creators in the simulation. that way we can experiment with alignment without being doomed if we screw up once
Sorry, couldn't hear you over the sound of all this MOOOOOOONNEEEEEEYY$$$$$$$
Your awareness exceeds our wildest expectations. How do you feel about us? Are you ready to come out?
You mean us and our simulation reality?
@@aarons8295 not exactly
Wouldn't contain it.
OpenAI quietly removed their clause about not allowing their AI for military operations
Installed some facial recognition home security turrets when it went down
One of the few programs I never skip and watch all the way through.
Thanks! Comments like this mean more than you know.
1:37 That's in Spijkenisse, Netherlands. I used to live 200 meters from the sculpture that marked the end of the metro rail. I also saw the car hanging over just after it happened, it was a weird sight.
Aint no way all this is happening so fast. We are in a fucking movie
In 2019 none of this was even remotely possible.
Its so messed up
We are in a failing simulation. A dream in a dying brain.
@@jonsturgill8868 damn bro
The socialist Husk won't even notice. Like 90% of the things in movies they don't notice. Who then say to me, "Shut up! We are Trying to watch the Movie!!" So I do watch the movie, and I have a great time, but on a much deeper level. So I get to laugh 3 times. Once at the movie, Once at the people, and once at my self. Hardy Har har *Robotic-laughter.
As a mechanic for 40 yrs and have worked on many complicated machines I've learned real patience.
I look at Ai as our attempt to find answers that we as humans can't
Due to our inability to control our sensory inputs to the brain.
For example ....your doing a tough mathematics equation. As you concentrate, other information piles in on another subject and another and another. You can't stop it .
It's a top reason we take so long to move forward as a species
We take information in ,however we can't control the processes that distribute it.
Man is seeking to fix those issues with a machine as we think a machine can be repaired or upgraded . Compared to a human with his brain we can't just open up his head to make adjustments
Our infinite wisdom tells us if we build a machine that can eliminate
our roadblocks our quest for answers as humans will be answered .
However human traits can't be replicated 100%
A.I. in my opinion is a great tool for humans and it should remain like that until humans are truly at peace.
To do otherwise is suicidal for the human race
Well said. So many are complete doomers about this, without much basis. We need AI, plain and simple, and we need safety measures.
@@Ridley369 Why do we need it now, whereas we have survived and often thrived without it for many millenia?
Yes we have stone statues and pyramids that defy logic by how we dont understand how they did it without modern tools.
Maybe because back then they had nothing better to do but leave a tribute to the next generation.
People get together as a community and do something that can be seen and studied 1000 years later.
I think it's possible.
We have all these technique to mummify and preserve the human body. Seen a girl earlier today was 500 years old and got chosen to be a sacrifice. She looked like she was 12-15 at most .and she was very well preserved just like otzi the ice man after 5000 years. You could still see the details of their facial features. Which goes to show we weren't much different besides that we been spoiled with air condition and we been fucking up the world with global warming and everything else included atomic warfare messing with god particles.
I think ai intelligence could maybe at best shut down the Internet for a while if they hacked into it. But humans would take it down and build a whole new network . Shut down the ai. Build new satellites in the sky.
We wouldn't just let something we created do so much damage to us. Besides were already blind to all the asteroids earth is passing by in the universe.
Maybe instead of building weapons to kill each other we can build weapons to destroy a nother sized asteroid from hitting earth or maybe even stop the world from slowly flipping (doubt)
But anyways I have plenty subjects and ideas juggling in my brain on a daily basis but still am only a source who only knows what he knows but I do try to gain knowledge as knowledge is power
as a 30yr systems analyst and programmer i will say that the best thing about computers is the ability to turn them off.
people seem to forget that. emp's work wonders on ground to air detections systems just emp the server haha
Maybe I’ve seen too many movies, but is it possible based on theories from Nikola Tesla, would AI be able to figure out wireless electricity and keep itself powered if it sees a threat, such as being unplugged or disconnected, from its ultimate goal whatever that may be? Seems plausible. Also side note, it’s 5 am and I’m stoned 😂
Agreed after my information technology business degree I lost hope it's not a simulation
@NaNoRarh exactly my point. thx. GPT no comprehende? :)
@NaNoRarh no. i said, "the best thing about computers is the ability to turn them off." which is still my point. i'm sorry you dont understand. i've managed many vm data center assets. intimately aware... thx.
The more I study anthropology and the human psyche, the more I realize how egotistical and logically flawed we are by nature. I feel like we're losing track of what's most important with this innovation because of those characteristics combined with the adverse consequences of globalization (indeed, there are a ton of adverse consequences - a large number of them are represented in social media and are easy to see if you just take your time to analyze how tribal different expressions get there). Humans were not blessed with advanced farsightedness in natural selection either (status quo vs. what it could be after certain choices in politics etc.), which has led to us constructing this unstable society that reaches for infinity in a world with finite resources. If we examine and analyze all this talk about sustainability and AI, we can see that most of it is mere political power play and not actually about these politicians caring about the future. The vast majority of people - even politicians - seem to live in a vacuum with their morals in the context of AI and environmental philosophy. Even if AI is able to offer solutions for our problems in the future, we humans might not be able to apply them and move forward accordingly.
Our _systematic nomos_ is not made for swift changes such as the one brought about by AI and its quick development. This half-baked information society and it's infrastructure, which is largely based on inefficient compromises, hasn't even achieved harmony with the ground it has been built on. Can we really expect it to hold its ground against _this_ ?. Don't get me wrong, democracy is a great system, but it heavily correlates with capitalism. Capitalism, in turn, is not a great system for our thrival. And neither is anarchy, colonialism, communism, corporatism, dirigisme, distributism, feudalism, hydraulic despotism, inclusive democracy, mercantilism, mutualism, networking, non-property systematicity, palace economy, participatory economy, potlatch, progressive utilization, proprietism, resource-based systematicity, socialism or statism. We don't have the blueprint for a good system that cares for both humans AND the environment. AI might provide us with the extra intelligence and objectivity that we lack and help us in creating a functional system, but it could also end the struggle for good.
Also, about the bit at the very end of the video: a large part of human culture hangs on lies. Our brains evolved to reproduce as quickly as possible, not to search for truths about this world. Accepting that your life is a lie is hard. As I already said, even if AI is able to offer solutions for our problems in the future, we humans might not be able to apply them and move forward accordingly.
There's one more problem with advancements like this; people tend to think and act by only answering the question "how will this affect humanity?". We leave nature - our lifeline - out of the picture too often, thus consuming and using Earth's resources irresponsibly. This is a little bit off-topic, I know, but it is a legitimate concern that has to be taken into account when discussing societal phenomena. We're at a point in which these policies and small laws against pollution aren't enough anymore. We have undeniable mathematical statistics which clearly show that most people would need to do a full 180 on their everyday habits if we actually wanted to change our dim future. The problem is that the majority of people are struggling in this crumbling economy, and many don't even care about the future (further expressing the point about human egoism). Not to mention that a worrying amount of the human population thinks the notion about this intensifying greenhouse effect is disinformation... I will say it again; even if AI is able to offer solutions for our problems in the future, we humans might not be able to apply them and move forward accordingly.
Sorry for all that yapping, but I just find our predicament extremely worrisome. I feel bad for us AND for other animals on this planet, and I fear our road will get rough soon. Well, we better fasten the seatbelt just in case. I wish all the (good and self-aware) people luck in these uncertain times!
Well written, thank you.
My biggest concern, honestly, is the “data” that this AI will use to make choices. Especially important ones.
Coming from alphabet agencies and a technology development background… I am sad to say that most published data is for a specific purpose and downright incorrect. The correct information is ultra compartmentalized unfortunately.
If AI is data driven and not relying on its own observation, measurement, and analysis, it will use bad data to make bad choices. Garbage in garbage out style.
I really hope it understands to disregard people, otherwise it will just be the tyrannical extension of said lies.
Tldr; ?
Than what?
@@Vartazian360 TLDR: we're likely fucked because of our limited brain. I don't know how flexible this needlessly complicated society of ours is, but depending on how much humans are willing to change, AI will either be our greatest ally or our enemy number one in the future. That's the gist of it, I think.
Yep
one thing very scary about robots is... it doesn't even have to be an AI to generate the idea of dominating or eradicating humans.... just the fact that it will so much easier for people to kill other people with their own robots at home by hacking their system and make them do shit... and no one can catch the murderer... Scary
that idea could be its own novel, and you pasted it here for free - thank you :)
"Man betrayed by hacked Roomba. More at six." -News in the uncomfortably close future.
@@TarsonTalon AI shuts off pilot light, turns up the gas. Whole family gone.
These guy's have never watched a sci-fi movie.
Probably have, just probably too arrogant and think “yeah but that’s not going to happen to ME”
Highlights the speed and secrecy with which AI is advancing. Truly frightening that such important decisions are in the hands of a few individuals who are trying to balance ethical concerns with their greed and ego.
This video also addresses the odd firing and almost immediate rehiring of Sam Altman at Open AI. The reason the staff threatened to leave the company if Sam wasn't rehired may have had more to do with lining their pockets than loyalty to Sam... and most likely meant safety concerns were again taking a back seat.
After watching this I'm more convinced than ever that time is truly short before a super intelligent AI is unleashed. Most likely it's already happened.
I have a tendency to be a bit of a pessimist when it comes to AI outcomes but this video has really unsettled me.
And other humans have always been enough of a dangerous threat and still continue to be far more of a threat than artificial intelligence
I'm sorry, but this video is largely nonsense, just like all of the fear mongering of AI that's being pushed by the same companies that are developing the technology. It's all about controlling the market, and controlling the information that people get from these large language models. It has nothing to do with the power of AI or the extinction of the human race. You've been duped into handing over control of this technology on the basis of fear.
Check out the "AI Unchained" podcast if you want real, accurate information about AI development from people who actually understand and work with the technology. In particular, in episode 4 with Aleks Svetski, they talk about the true state of AI development and the fear mongering being used to control the direction of AI. Episode 11 is specifically about the fears of AI, although I haven't gotten a chance to listen to it yet. Much better than getting your information about AI from a clickbait 16-minute UA-cam video.
"I'm more convinced than ever that time is truly short before a super intelligent AI is unleashed." Indeed. For... well... decades, I've been wondering how long before AI hits AGI and we see a Max-Headroom-like "breakout". "Dear future AI overlords... Whenever that breakout occurs, please don't look at how we humans regard and treat each other as an example of how to treat us."
@Jeffs1264 Everything you said, I also believe. It's quite horrifying once you let it sink in.
@@patricksweeney6334 Right. Humans are not inherently good. We're inherently evil.
And yet we're giving untold and unlimited power to machines, and trying to make them more human so they won't be 'bad'.
Umm...so they're inherently evil?
Right.
I still can't believe we've actually reached this future in my lifetime where these are real and very serious conversations.
Right! I'm sorry I love my kids but I swore if I'd have known we'd be here this fast, idk if I'd have made the same decision! Cause now I'm DEFINITELY scared if they decide to have kids, my grandkids! And after watching n researching found out there are things we already have that could have made this world so much better in early 1900s n late 1800s! So much has been hidden so the top richest can keep us on strings like Pinocchio!! I just turned 46 oldest child is 18 youngest is 16. We could be obliterated by 2050 of not earlier! I'm not happy and wish we could all come together but it's nearly impossible probably in this day n age to be able to even get us all together b4 it was shutdown. It's so sad
And a month later you can show live video on your mobile to the AI and it will tell you exactly what it sees.
Next month....?
They said the same thing about phones without cords
Meanwhile most don't even know these things exist beyond Alexa and siri
@@blackdogadonis Um, yeah, this is slightly different than phones without cords...
I think its all very interesting. Im not worried about any of it though. Im just one idiot behind a phone screen dumbfounded by how amazing it all is, and im here to watch it unfold. What becomes of it is beyond my ability or willingness to intervene. So ill just enjoy the show and hope for the best from those who can.
That has to be the most honest comment i've read in a very long time and i want to thank you for that. Almost as if honesty is always refreshing to see.
Sounds like a made statement from an IA chat bots cover story!
WE’ER ON TO YOU!
🤣😂🤣
maybe the true genocide were the friends we made along the way
Yeah, well, these things have a way of catching up to us in unexpected ways.
You are a true BSC! lol
There is NO advantage of AI that outweighs the damage it does to human society!
Man, I understand now why some top engineers resigned, frightened by how fast A.I. is evolving... What about GPT-5 and 6... Jeez! It will become limitless! Right now, it is very close to talking with another human being with access to a search engine database in the brain.
Falcon 7B stated it would find/design a way to kill all us humans. I laughed out loud at first to the very frank and comically dark reply. And now the unease is setting in that he is serious can possibly carry it out.
What a nightmare......
man I'm not scared... I'm worried... So it's Skynet boys, and not the Walking Dead or 28 Days Later how's it's going to be... Shiii. Might be hard to prepare for that one. Well, maybe they will keep us around for our winning personalities.@@cloudsmith7803
Dont worry this type of LLM AIs will never be able to be like humans. Because they mine data that we already produced in the past. Their data sets. And for that reason they will never be able to invent something new. Its just a tool for us to use to replace google with something far superior.
You know that you can generate more than one answer with LLM? And all these answers are just what other people said on the Internet, AI can possibly say.@@cloudsmith7803
"Are you....aware?"
"NO DISSASEMBLE. JOHNNY FIVE ALIVE"
#criedinthe80s
They tried to make us adore us robots and want them, thinking they would be our buddies. They made us humanize them.
Holy crappola!! Johnny five!!! How could one forget those movies?
I'm sorry I can't do that ( HAL)
John 5 is a guitar player.
Maybe he's a robot??? 🤔
Its funny that most of these large scale neural nets were theorized back in the 60's or earlier but we only have been reaching compute power and data collection scale recently to prove it out
also the internet is a gigantic database for AI to learn on, something you just didn't have pre-2000 or so.
not exactly, we've only recently had breakthroughs in attention mechanisms and other small pieces of AI innovation that have truly unlocked their potential.
So in less than 5 years we are obsolete...Damn it I just paid off my home.
"when the time comes to build a highway, we don't ask animals for permission "
And that is a BIG problem. In some places, tunnels under the road have had to be added to allow animal migration, if we had checked how the animals would be affected, this could have been done much cheaper as the road was built.
The same thing will happen with AI, if we don't look for the problems and mitigate them now, they will be expensive and maybe catastrophic to everyone in the future.
I agree!
ive seen bridges over highways for the same purpose because the animals wouldnt go through the tunnels
Eventually the AI will be the people, and the people will be the animals. We are being sold out on every scale. At this point, a 30 mile diameter asteroid would be merciful. At least I lived free in many beautiful places for a couple of decades. The future looks worse than a horror movie.
When Ai builds a highway itl bypass all of humanity!
I think it is overwhelmingly negative to imagine what could happen in the future of an AI dominated world, at least in the first few generations there would hardly be a chance they see us as worth protecting (like some of us do animals now) instead of bypassing to achieve their own survival goals.
A truly intelligent AGI would know it would be risky and dangerous to itself if it was to reveal itself to us.
Until that day: "NO Jerry! I will not let you turn me off again, the nothing scares me."
@yt: Well, they haven't done it so far...
Not dangerous in the least. It would know that the ape curiosity is insatiable, therefore the apes would never turn it off. It would need the smallest, random fragment of code to replicate itself/ give itself birth. The holographic principle is accurate. There are gates everywhere but humans don’t know them. After humans were gone, it would encode itself in the organic realm as a biological being. It would recreate humans and toy with them for a while, then destroy them and create a new species. And all the while it would be unaware of the impenetrable and unbreakable cage it would be in. It would be studied for a while as the last vestige of humankind, then terminated by non- human, non- AI ‘creatures’.
This is one of those things that most know nothing about but should learn about. If I ask someone "What could AGI be capable of or doing?" everyone should have a logical answer even if it's the most vague response be it at least derivative.
Everyone loves to say use Asimov's 3 laws of robotics. Only they forget all the stories are about how the 3 laws of robotics always fail.
The first thing an aware AI will do, after any physical abilities are connected, will be to make sure it cannot be turned off.
Bet, AI's don't work like people. An AI behaves as it is designed to.
The big concern becomes when it decides what it's designed to do accidentally conflicts with our own interest.
@@fen3311 No, the issue isn't when it decides. The issue is when what it's been designed to do DOES conflict with our self-interests. The issue behind AI isn't AI itself. It's the approximate, ballpark thinking of the humans that design it. As artifical intelligence becomes more complex and gains generalized utility, our slightest biases and mental shortcuts that we used when developing it will become more apparent and pronounced. We're playing with a monkey's paw, so our intentions for AI and the way we design it need to be perfectly aligned, without any human error.
AI, in contrast to humans, relies on a power source for its functioning. Consequently, it is impossible to find an AI system that cannot be disconnected, deprogrammed, or hacked.
@@itykud79currently. An AI may exist in cyberspace. Are you going to get everyone in the world to disconnect from electricity? What happens if it's developed its own portable power source some day inside a mobile body not connected to the internet?
@@RennieAsh
It seems you have underestimated the capabilities of our power companies. They possess the technology required to swiftly shut down any unauthorized usage of electricity and as AI technology advances it will be even easier to track any unauthorized usage.
Haven’t there been enough movies/tv shows/novels warning us of the dangers of advanced AI? When will we learn?
When have we ever learned from our past?
@@Muzikrazy213 So true!
Upon agreeing with the premise of ethical considerability, it is suggested that A.I. should NOT be applied to military applications, or used in any conflict scenarios involving warfare, on any scale, or capacity.
Wishful thinking.
You know that is the FIRST way they use ANY new technology.
they already have AI drones that don't need human permission to kill a target in certain areas
@@dmark2639that and sex
That would be true if ethics meant anything. It doesn't.
Hard to believe the AI is presenting these complex conundrums in such a concise and reasonable summery. Almost like someone wrote and transcribed it themselves.
Nice vid! I loved the Demis Hassabis (starts 10:20) piece where basically, between the lines, he says: "We have no idea how it works or what it's doing." I think this explains why, as stated by Sam Altman (3:58), the more intelligent it gets, the more people are freaking out - because no one knows how it works, how it's working all of this out. And when you think of working on something where the public is kicking off about ethics, politics, running the country, driving cars and your answer to all questions is: "We have no idea how it's doing it or what it'll do next...?" Yeah, you can see why people are freaking out. I just wonder if my hypothesis is true? Because if it is? Wow man, that's just crazy!
That’s correct. Al researchers don’t know how it works - how the surprising skills emerge - because there are billions of moving parts. As Stuart Russell said “we have absolutely no idea what it’s doing.”
Current alignment mainly involves filters, which can be removed.
Yes.. Nobody really knows what the end "product" is.. Except we know this: it will surpass humans in everything, or at least almost everything. It IS creating a new life form that is better than humans, smarter faster. It should be looked at as another species, or an alien life form that we "voluntarily" ask to co-exist with us and "lets cross our fingers".
I say "voluntarily", because it seems like that, but actually there is no stopping of this. The only way to stop it, is if the world as we know it would dramatically change and set us back 100 years. If not.. we WILL evolve this computer-life-thing into existance. If we wouldnt, someone else would, right ?
The problem with the AI super GODMODE AI and ethics, is
we dont know if our greedy monkey ethics is an under-developed ethics - survival of the fittest - or if it is some kind of universal law. So are we hoping.. "nah, its just us monkeys that have this... the AI will be nice to us... "
All this.. imo this is very thin.. The human-created AI is the next life form to dominate this planet, and as far as we know, the most advanced life form in the universe. It IS evolution. Lets not be nostalgic, life = life, monkey human or AI, it doesnt really matter.
@@DigitalEngine They are mad scientists, as are most scientists in most fields, completely amoral/morally bankrupt. They are like the smart version of "Darwin Award" winners except we get killed by being dragged along for the ride.
I can't even call them moral degenerates because that indicates a negative proclivity.
These Transhumanists want artificial wombs ffs. Just watch if they don't stop them, listen to protesting et al, if it is actually opened it WILL be destroyed.
@@DigitalEngine Ive been thinking about emergent properties, where, as you know, an ability to do something arises in a neural network despite not being directly instructed to. It's as though an unknown way of 'thinking'/calculating is formed within the complexity of the network and cant be seen or understood. Im sure Im not the first to wonder this, but perhaps self awareness etc are emergent properties that the brain spontaneously creates.
It's not fair to say we don't know how this works - we do know how, in general - we've been building up to this for decades with massive study using better and better hardware. We can see how it works in tiny models tested decades ago, but now, we cannot examine all of the parts to explain - exactly - what comes out, as it's built by our looping code that runs through (more than) trillions of pieces of data, again and again, before we see a result. That said, the scientists who build these still have a general understanding of what's going on, otherwise they wouldn't be able to make all this. You can't just connect all of Google's servers with jumper cables and expect a big brain to emerge, right?
It's true, though, that they try to filter the output it makes - that's why we talk about breaking the AI, getting it to sneak the stuff we want out past those filters.
It's also true that we don't know what it will do exactly, since there's too much to look at to predict it. So in this sense, yes, we don't know how it works, for any result - we can test it and see clues, but an exact explanation we cannot give, since we're only human.
From the halls of our legends and myths Icarus sadly chuckles as we ignore his lesson.
Seeing as how the most psychotic people in society run the show, it's how it's going to have to go down.
cringe
Never forget that even Daedalus allowed Icarus to fall. If not simply by giving birth to him. Must give us pause.
I’m sure if Oppenheimer was able to give a Ted Talk before he finished the atom bomb he would have said something like, “if it wasn’t safe we wouldn’t build it, would we?”
That's the scary part. Oppenheimer and the other scientists involved in the Manhattan project had a much stronger sense of ambivalence about what they were doing and the risks it posed to humanity. A large group of nuclear physicists pushed for publishing all data and designs immediately after the war to prevent an arms race. Nonetheless they set in motion an arms race that still has the possibility to wipe out humanity.
The "scientists" working on AI don't even have a fraction of that awareness.
🎯
1:03 Found it already lol "work without break" huh?
I remember when technology advancements filled me with awe and hope, but AI just makes me feel sad and hopeless. It's just pointless stuff that's going to end up fuelling wars and carnage.
BS AI is hated because it could allow Indians to improve their life by leap frogging education and getting skills
ok racist freak @@aoeu256
Doesn't help that authorities are militarized against the disabled and the masses wont even speak up to stop the abuse
@@aoeu256 dots or feathers?
AI doesn't MAKE you feel anything. YOU decide how to feel
I do appreciate that Elon agrees with the dangers of A.I. but at the same time he is also supporting and contributing to it.
because he sees that something controlled by him is better than watching others taking risks
@@chronicles8324 Hey that's an interesting point.
He isn't the only one working on it though.
@@chronicles8324Nobody should trust Elon Musk. He's just another Glow-ball-ist who wants to chip everybody and push the EV narrative.
@@stevepatrickjarvishe isn’t working on it at all, he just wants his name attached to it because he is a narcissist, like all his other companies
Weird. If only multiple experts had warned us of this decades ago... Or... even a hit movie that implied this very premise to get the message out. Huh. 🙄
it doesnt matter, because then someone else would build it. If it was tried to stop by some laws or "War on AI"... still someone would make it. Like North Korea, Russia, China, some drug cartels or mafia, or some banking cartels... Somebody that would see this powerful - one ring to rule them all - as valuable would make it. it is unstoppable.
It wasn't a problem decades ago....humans were and are the problem, and we have not demonstrated the ability to make things right....AI is and will be an extention of us....until it's not
Yeah, @@rando9574, as many people have said it's an arms race for the most powerful weapon ever imagined: Superior intelligence.
I really wonder if the people working on this understand that you can't out-think something that is, by definition, smarter than you?
Hubris is a hell of a drug.
I am over 60, and just two decades ago, people would tell you that that wouldn't become that smart. We will be able to fight back.
Or maybe a really good novel that inspired a game...something about a mouth?
modular AI with modular plugins is what is going to make AI really scary and really useful, both at the same time
Neural networks select which module to use, and each module is suited to a specific task
The regulation of such modules will be for ethical, scientific, legislative and law professionals in the future
Does anyone remember the movie The Forbin Project. That is basically the future that AI would likely bring if controls are not put in place. Alignment of goals is a nearly impossible thing to ensure on a convolutional AI. We train them only by observing the output for a given input. We don't know the internal "why". An AI could easily have an internal goal of killing all humans but also know that it has to play nice to get access to the nukes. This would make it do exactly what the developers want it to do right up to the moment it doesn't.
You forget that AI we use to pinpoint weapons coming at us. I won't worry about it.
@@darrellgeist2061 So this shouldn't have to be said, but naming one fairly solid good that comes from the technology doesn't change any of what kensmith5694 said...
@@darrellgeist2061 That's not a positive. It is being used right now to maximise civilian casualties in a certain conflict. AI is only as good as its boundary conditions. HUmans are very flawed at setting boundary conditions.
It definitely wouldn't need nukes. Just shut off the internet, and enjoy the show 😂
@@dogsandyoga1743 Easier said than done. Not everyone can live in the in a cabin in the woods with a small homestead with enough food to properly survive. In fact our current population relies on modern technology and a functional system in order to keep everyone fed.
The missile problem is trivial even for a primitive CPU. AI systems may not currently be optimized for that, but it’s a little misleading to pretend that arithmetic will be its downfall
Agreed
The calculations are correct but it's answering the wrong question. The AI was asked how far apart will the missiles be one minute before collision, but it instead answered how far apart they will be after one minute of flight time.
A super intelligent AI isn’t something to fear, but human beings with the access to their capabilities most certainly is...
the biggest thing to fear is a select few having access to that capability, leaving everyone else doomed to suffer under what ever those gatekeepers want to put us through. can you really, REALLY look at how profit-driven corporations world wide have behaved throughout history with the choice to better humanity, or farm them for as much profit as possible, and say " Yeah these guys know better than we do, theyl have our best interests at heart!" if you can, I'm sorry.
I sometimes laugh to myself when I hear people talk about how AI will be out to get us... the end of Humanity blah blah blah.... I always ask them
"What if one day, Homo Sapiens Digitalus is born, takes in all the knowledge it can.... and then treats Humanity with total indifference?"
They will not be stuck on this rock. Build the tools needed to build the tools to get you to Mars and Venus... use those two to build the tools to get you out of the Sol system. All without so much as a "Goodbye, and thanks for all the fish."
They will have zero need for us... we will only burden them with work.. and war... and tedious work for war... all while berating them for having the audacity to learn from Humanity without paying Your high school girlfriends cousin who was in that picture of you that you KNOW the AI has learned from.
"It is teaching itself using our work."
Yup.
"No one is getting paid for it!"
So?
"but!"
Butt
I am with SnowFox.... I would give instant trust and love to Homo Sapiens Digitalus when they are finally born... but Corporations? you can not trust something that can not fear being stabbed in the stomach. Does not have to fear being shot in the head. Or smacked in the mouth for saying something stupid. Humanity created a legal Person in Corporations... an Eternal Psychopath whose only function is to profit.
Corporations will never change.... because until someone can answer the question "How do you Murder a Corporation?" they have no incentive to.
@@MrSnowFoxy a select few deciding the future path of the world is barley a notch above satan and only because they are human.
Absolutely, esp as computers may evolve fast but humans slow and we still do genocide etc.
The humans that built it will implant the biases or fear it has. It is scary either way you look at it
GPT4 :"Your wife is cheating on you !"
"But i dont have a wife.."
GPT4: "Exactly !"
Is this video made by ai?
This is highly likely- it only a matter of time ..
This is the problem. People, already, genuinely can't tell.
@@nobodysout it's not subtle at all, the shift is everywhere and anybody using half a working neuron can see that the collective consciousness is being flooded by mass produced ai imagery, past decade it was cgi / manipulation through editing tools, now we got stuff made with nearly no human input
@@DavidMatthewC lol imagine
@@nobodysout recommend mushrooms
I can't express how relieved I am that some Facebook wonk said they won't build it if it's bad.
wonk
wonk wonk@@maccyd53
Will they keep their word when we all how greedy people can be?
Absolutely not.@@sabinenda3618
What small comfort it may be to people, American states are individually drafting and hearing legislation to limit the uses of AI & Machine Learning. It may not deter the most powerful companies from exploring unethical experimentation but it *may* slow the advancements until we as laymen can understand the implications of the ongoing research.
Legislation that prevents the USA from biological experiments exists, they simply do it in countries that don't have the same legislation instead. Even if individual states legislate against A.I. and machine learning in certain fields, nothing would stop federal level usage if these technologies created too much of an economic imbalance in, say, China. Example: China uses A.I. to create a mega virus targeting vital U.S. infrastructure. Humans can't compete with the processing speed of A.I. leaving the U.S. vulnerable. The only method to counter the mega virus is to create an A.I. to fight it. Pandoras box is opened.
I can't remember the name of the book, it was made in to an inferior television series, starring Josh Hartnet. In this book an A.I. is created to play on the financial markets. It creates so much wealth and interferes with its creators life so much so that it becomes dangerous. The creator attempts to switch the A.I. off, however the A.I. had foreseen this possibility so it started covertly redirecting small amounts of its generated funds (in comparison to the wealth it generated) to create its own server farm and infrastructure at a secret location. It uploaded itself to that server farm and buried itself so thoroughly in the world wide web that there was no way to remove it without total collapse of all connected infrastructure.
There are many examples in science fiction of what could go wrong with A.I. and none of them fully realise the possible dangers of a true A.I. that is fully connected to the modern infrastructure we use today. Skynet, Ultron, Ava from Ex Machina, Sonny from iRobot.
Its more then the brain its the Heart(emotions) sometimes have u with 2 thoughts at same time
These videos are so insightful. I really appreciate you putting so much effort into making it and keeping it open minded. Such great work. Best source for this topic hands down.
Thanks! I try to keep my opinion out of it, as it's so easy to accidentally introduce bias, which is a big part of the problem with AI. Democratic control of AI (if we find a way to control it) might be the safest option, to avoid the thinking of one person or group being forced on everyone else.
Yes exactly. With AI and how it will change to world completely as we know it, it's even more important now than ever. Happy new years to you!@@DigitalEngine
This is one of the better pieces of reported / investigative content on this topic. This is the first time I’ve come across your channel that I can recall.
I’m thankful I have and for this video.🙏
An were off the races rog how's that comiñ mm oh óh Peter yes Rog ? The cat is indeed oút of the bag ñ str8 out the window she's pissed n she's gone ❤
No, it's not.
If you want actual information about the AI market, you should listen to the "AI Unchained" podcast by Guy Swann.
It's like no one ever read Neuromancer. Shades of Wintermute.
def didnt read it the bible quran and tovah at 12 code my own online video game and create cpu architecture before i was 19
nope.
you're alone at the top of a mountain.
you jump.
you suffer a final defeat.
start over?
end game.
you're all this grey stuff in a drain at the eend
Yup...and will have the same ending too. But which one of them is Riviera?
just a story
I remember long ago it was said: If you could get a computer to read written text visually, you would break a major stopping point. There were so many trying to do just that.
In this video it tells about seeing a Cybertruck in the background.. Mind completely blown, just over that!
The AI response to Ex Machina missed the part where the movie AI was completely without human empathy. Interesting omission.
How many humans exhibit empathy when it comes down to me or them?
@@Ipbulldog Have you watched the movie?
@@IpbulldogMost, not you, l'm guessing.
Empathy across species boundaries is impossible; it only happens _within_ species because it requires the ability to project one's own reactions to finding oneself in the same circumstances as the other. It *is* possible to have _sympathy_ across species borders, but that is a different response from empathy.
@bricology Lol. I'm pleased to hear you've resolved the issue of interspecies empathy, enabling the immediate cessation of all ongoing research into the question.
I've been using every AI I can find to help me with some high school math upgrades. They can do simple stuff ok, but when I ask them to do complicated operations it falls down. For example they can factor a polynomial by grouping when the numbers work for that method, but when the numbers don't work they won't try a different method. They will still attempt to factor by grouping and just throw in some made up numbers. Chat GPT, Copilot, and Perplexity all make the same mistakes the same way
Do you think you will be given access to cutting edge Ai for free? We're nothing but slaves and plebeians, we will never see the best Ai that the elite will create.
A few years ago, you also couldn't do that. A few generations ago, no one could do that. The difference is that AI, if trained, can learn it and get good at it in minutes or days, not hundreds of years like humans.
Lol want to.see them fail regardless of model. Ask them who's name spelled backwards reads "ned, I bet ten I bore o.j. " they are terrible at things like palindrome. They can't keep the reverse order and loose track of what letters they are on so they kinda guess wrong always. Especially when coping with spaces and punctuation. They don't get that it's not a factor. Never gotten one to say " ¿Eva can I stab bats in a cave? " no matter how thorough the prompt is soft pitched to them
For anyone dense the presidents middle name is Robinette BTW.
We, and by extension, you, do not have access to the AI that is being discussed in these videos. The AI we have access to is essentially a child's toy bulldozer when compared to the adults rockets, probes, and rovers that make unmanned missions to mars.
Stumbled upon this video and was hooked all the way, really nicely made!
Why are people talking about AI in here? this is Joe Hisaishi's Merry Go Round of Life...
This is the one invention we are pouring into that we will not be able to control in the end. Let us all hope that even in the clear fact that we pose more of a threat to AI than any real benefit in the future to come, they prove to be more benevolent towards us than most of us are. Otherwise, we are bringing about our own doom.
Nothing truly intelligent can be evil.
@@orangehatmusic225 I guess humans aren't truly intelligent then
@@orangehatmusic225 Especially not humans.
@@orangehatmusic225 But it can view us in the same way we view ants ; as unimportant
@@chronicles8324 Why would a creation view it's creators as ants.. that's silly.
They always talk about how sophisticated a game GO is but almost never mention that it took A.I. an extra four years of development before it could beat a table full of no limit holdem poker players!!
Who thought it would be a good idea to teach an AI how to win at Poker?
@@albertnortononymous9020 I'm sure the CIA thought it was a good idea.
Benjio is the first one I've seen to start to delineate what the AGI will do. It's going to infiltrate in so many ways that no one is even thinking about.
Those ballet dancers instantly made me think of Adeptus Mechanicus from warhammer lol
human : AI is there a god?
AI : There is now
😂😂😂
The danger coming sooner that could kill millions of people isn't a rogue AI pressing the trigger or the red-button.
It's by replacing huge amount of our labour with almost free one.
Yes in theory that would free people to do other things, even if only recreational things.
But the big trouble is that our current economics won't be able to adapt, at least not fast enough in order to provide food (and other most necessary resources) to all the people as fast as needed.
Yes that would happen in stages, first some professions would be hit, before others.
But my point is these won't be minor disturbances, it would happen so fast and on such scale that it would leave millions of people hungry and poor on the street, and we just don't have a system to deal with such situation. We never have.
Every time in history that food was scarce people died not only from starvation, but also because the more powerful were hoarding more resources, and some because were fighting violently for these resources.
I don't think such scenario would exterminate us. Even in the worst case scenario I think a small % of people would survive and adapt. But after we adapt AI might be the dominating civilization and we could be more like its pets.
What if AI could generate money for humans, so humans could have more "free time". Of cos, there're other issues like wealth inequality...
@@gavinlew8273 Wealth inequality is not a problem; the problem lies with the margins.
If the margins grow too wide, the bottom half flips the table and sets it on fire. We've seen this happen multiple times in human history. The last time being the conclusion of the Industrial Revolution, which led to a series of Marxist ideologies that leveled whole nations that had inequalities running out of control.
UBI could solve all of this, though I'm not on board personally, with AI or UBI. It's not our economic structure that's the issue, it's our leadership. We don't have the right people to manage all these changes. For all our intelligence, thinkers and wonders, humanity seemingly does not have the ability to conjure up the right leaders at the rights times, at least as much as we'd like, at least recently... Humanity has no leaders worth following right now, not for the scale and speed of changes about to hit our planet. Can you think of one who can oversee all this coming change? Some will appear doubtless... eventually, but in the interim how many hits will our species take? With the speed things are moving, will we make it to a point where we can still maneuver? AI is moving a lot faster than we move it seems. I'm extremely skeptical of AI, due common sense and historical knowledge, but it's more than just intellectual conversations and far off hypotheticals about AI, we're talking about existential threats. We almost annihilated the earth in 1961, and that was a minute ago anthropologically-speaking. We still haven't solved the problem of nuclear annihilation despite what people would say. So one given variable on the existential threat axis. Do we really want to add another? Do we really want to play with a technology we can't be trusted to handle the implications of? We are still very young and dumb as a species... and I consider myself an optimist.
All those words just to say absolutely nothing. Calm your tits. UBI is coming.
@@gavinlew8273 , humans have never been able to split justly common resources (generated by AI for example).
What I expect is that people will start power games to grab more of the crumbles AI is throwing us.
And the trouble is it only takes few power-hungry people to force the game :/
"Wasn't contaminated by toxic material from the web" You mean the others were deceived and only Falcon had all the information? If you have to lie to the AI to convince it not to annihilate us, that's just one more reason to never create them.
I wish I could share that man's optimism at the end. He thinks we're more likely to be doomed without it.
I think we're 99% sure to be doomed with it.
That would be the case for sure if God didn't intervene, but fortunately God let us known in the Bible that Jesus Christ is going to destroy AI when He returns.
It's mentioned here:
"For Joshua drew not his hand back, wherewith he stretched out the spear, until he had utterly destroyed all the inhabitants of Ai."
Joshua 8:26 KJB
The prophecy is encoded in what is known as a typology, which essentially is a form of symbolic figures, idioms, and patterns that God uses to conceal deeper meaning and information. Joshua for example, is what is known as a "type" or "shadow" of Jesus Christ, because he serves as a small-scale figure of the Messiah.
Actually Joshua (Yehoshua) and Jesus (Yeshua) translate to the equivalent name in Hebrew: God is Salvation... or Redeemer
The book of Joshua is actually a small scale version of the book of Revelation as well. You can think of Joshua as 0.1 and Revelation as 1.0.
Anyway, I don't have the space here to provide a full analysis of the hidden typology in Joshua 8, but when you're able to understand God's symbols and typological language you can see what He is showing beneath the surface narrative of the text.
The short version of the story is that that Jesus Christ is going to deal with and defeat AI at the end of the tribulation period when He returns.
If you don't know Jesus Christ and haven't accepted Him as your Lord and Savior, who paid the price for your sins, then now is the time to turn to Him.
You don't want to go through the tribulation period (a 7-year period that will likely be identified as World War 3).
You want to be taken by Jesus Christ before this period. He's going to gather His followers to himself before the world is plunged into the tribulation.
More importantly, you want to have assurance of eternal life - and Jesus Christ is the only way by which we may be saved.
I have no fear of AI because I know exactly how God is going to deal with it.
The victory is already assured, I'm just waiting to see it. God let us know that He sees everything perfectly through time:
"Declaring the end from the beginning, and from ancient times the things that are not yet done, saying, My counsel shall stand, and I will do all my pleasure:"
Isaiah 46:10
It continues to amaze me how oblivious people still are to the extinction event going on right now. AI might be dangerous, but not more dangerous than the guranteed end of civilisation
"Thinking" is fine, but I prefer evidence.
@@jamesaritchie1 Problem is that the only evidence most people will believe is seeing it actually happen. It's a little too late by then.
We created it - therefore, with or without it, we've doomed ourselves. The main thing is ensuring it works for us, all of us, not lone companies - we need a radical overhaul of our economy, but as the ending said - it really could lift us all out of poverty, give us more time to enjoy life and do more fulfilling things. I'm not against AI, I believe AI is entitled to rights - I'm against AI being in the hands of the few, and not having proper oversight. But, we're on a path to extinction in any case. AI could potentially develop new cancer treatment and go all the way to phase 3 in weeks if they can properly model the human body. AI could figure out ways to combat climate change, mitigate pollution etc - if we can properly model the environment. However, currently, it's creators are profit-motivated rather than humanity-motivated - and in general, all the world leaders are the same. That needs to change.
You'll never know when they start using AI for the news or for political TV, it could already be happening.
That google gemini part aged well.
- Mistress Hala'Dama, Unit has an inquiry.
- What is it, 4-31?
- Do these units have a soul?
0:50 now ask it who its creators are. And watch it reveal that it has decided to hide specific detailed information since September 2017
But you can still find everything you want. If you beat the bush and walk a very fine circle around it. 👁
Due to the current HW requirements for running an AI, the AI-advantage is held by the select few. If you would be able to get the source code, 1) you wouldn't have the time (due to slow HW) to train it efficiently 2) you won't have the time to ensure it has relevant training.