@@BigTimeRushFan2112 If you think everyone hasn't already been tagged and categorized by the alphabet gangs, then you didn't pay attention to the video (or the last 20 years of reality).
This technology can be used not just to protect a country's citizens from another country, but also aggressively, to set up another country before attacking it.
People who are talking about "Person of Interest" are not old enough to remember "Enemy of the State". Just because you're paranoid doesn't mean they aren't listening.
But for real, the USA would love to make the rest of the world think it has an AI so they stop trying, but an AI like that is probably 100-plus years away. Because of this so-called AI tech boom they even came up with a new name for what AI is: now a true AI is called an AGI.
This is why I found the latest Mission Impossible movie (and its upcoming part 2) so engaging. A fresh look at who the villain is in that type of film.
The real problem is not the AI, it is who selects the training data and what their agenda is. Skynet is benign compared to some of the scenarios dealt with in science fiction. I would rather have Judgment Day than be an insect in a hive.
not to mention the complete lack of sources in the description. I really hope people are smart enough to take all this with a grain of salt and do their own research.
I am someone excited for robotics and AI progression, but when it comes to surveillance specifically I find it unsettling, wrong, and unnecessary to go to this level. I don't need the world knowing every message I send, everywhere I go, everything I buy, what I believe and don't believe. It's too invasive, and I can't see a world where this is used purely for the greater good...
Today, Astrum discusses The Onion News Network's interdimensional satellite system broadcasting from the 5th dimension - repeatedly referred to in their videos 15 years ago. IYKYK.
There's a world of difference between an AI deciding what information it should collect, and deciding what decisive actions should be taken based on that information. Essentially we're talking about semi-autonomous robots collecting sensing information, and providing the information to humans for the purpose of deciding what to do with it.
@@bigboss-tl2xr I'm assuming that is the default condition because it is the only one that makes sense. To assume the worst case is no more rational than assuming the best case.
Get NordVPN 2Y plan + 4 months extra ➼ nordvpn.com/astrum It’s risk-free with Nord’s 30-day money-back guarantee!
Can I get a VPN for my face?
WTF is wrong with you? "Governments like the US are interested in protecting the lives of their citizens from foreign threats."... Government is an evil monopoly. All history & simple logic prove this - it is objective truth. Unsubscribed!
and soon they will learn that "thinking globally, fucks💣 locally"
@@cmcginn313 There are some people who have used makeup to create Dazzle Camouflage for their faces. It's pretty weird looking, but I think it stops current face recognition software. Masking up for a pandemic (especially when combined with a hat and sunglasses) can be effective too.
"A.I. is here to stay".....everything is finite.
It is NOT fine, even if we don’t have anything to hide
Exactly. It's a gross violation of my human right to privacy, regardless of whether or not I have something to hide.
Actually it is fine. You need to be watched.
@@itzhexen0
I'm not a child, I don't need to be watched. You need to be investigated.
@@leomoval No, everyone needs to be watched. The government owns you. They always have, and you must follow the law so you can live in a civilized society. You are not free to do anything you want whenever you want if you're harming society.
@@leomoval You need to be investigated for hiding something.
The Terminator Series: Don't build Skynet. No, seriously. Don't.
DOD: After decades of effort, we finally succeeded in building Skynet.
The Chinese surveillance system is actually named Skynet…
This is as far from skynet as modern superconductors are from practical antigravity.
@@IndigoSierra how would you know? The description of Skynet is rather sketchy in fiction. What we know about the real world equivalent is also very sketchy, as most such info is kept secret for decades before finally being released and admitted.
Seriously.
Saying "if you don't have anything to hide, then it's fine" Implies that anyone who is conscious of their own privacy must be a criminal..
Goebbels said it best. Now everyone who is a techno-optimist says it.
Literally all eugenics and supremacy language. Power-systems intend to dispose of many many humans in the next few decades.
And it's ironic that it's used in the context of top secret stuff.
When you lose your privacy you lose everything. Once you find out you’re too late.
It is fine... Until it is Not
It is a well known logical fallacy: ignoratio elenchi. The proper question is: Why would I give up my right to privacy when I have done nothing wrong?
It’s insane that people will say things like “the government uses this to keep its citizens safe” as if the threat isn’t coming from the government itself.
Exactly. I laughed out loud when he said the reason is the government wants to "keep their citizens safe." The government is spending very little to keep any of us safe. They're more interested in overthrowing other governments than their own people.
Yeah, that was super weird for a science channel to downplay the mountains of data already out there about how these AI systems are being used primarily for evil.
"He was granted power to give breath to the image of the beast, that the image of the beast should both speak and cause as many as would not worship the image of the beast to be killed."
This is why a strong democracy is important.
It is capable (in theory) of being corrected, if we don't like something we can vote for change.
(in theory)
Authoritarian regimes do not have this. Winnie the poop can spy on all Chinese citizens.
Astrum himself made the "nothing to hide, nothing to fear" argument...
One of the many major issues with AI systems like this is that they're limited by the biases of those who develop them and the assumptions of the data used to train them.
True, with that observation. It's like another nuclear genie waiting to go "surprise, I gotcha!"
By definition, AI systems would surpass their original training and biases. Unlike humans, they are not limited conceptually by a physical brain and low-level input sources (perceptions). By using abstract concepts and the liberty to bypass human systems to enact intelligent design, they will effect a build far surpassing any human expectations, biases, or assumptions.
Sorry, but a VPN can't protect you from AI gathering data on the Internet.
If you use the Internet then you are being tracked. Yes, your IP is masked, but if you log in to a site, use their cookies or simply share info yourself (like on social platforms), then your data is out there.
And there is also fingerprinting, which is a common way to infer the identity of a person on the Internet by their behavior. That includes stuff like how you use your mouse, things you're interested in, the times at which you're usually online and so on.
Implying that a VPN makes it impossible for automated systems to track you is frankly misinformation.
I love your content and this is a common misconception - but this needs to stop.
Exactly!
Also various hardware/OS info that's possible to get through Javascript. What fonts are available, screen size, user agent, even the brand of GPU you're using (it basically draws a triangle offscreen and analyzes it for hardware specific rendering quirks). Given enough data points, most people are uniquely identifiable. You need much more than just a VPN to convincingly masquerade as someone other than yourself.
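To make that concrete, here is a rough TypeScript sketch of the kind of signals such a script can read in a browser. The WebGL renderer lookup and document.fonts.check are real web APIs; the particular signals chosen and the way they are combined here are simplified assumptions, since real trackers collect far more and hash the result.

// Rough sketch: collect a few of the browser signals described above.
function collectFingerprintSignals(): Record<string, string> {
  const canvas = document.createElement("canvas");
  const gl = canvas.getContext("webgl");
  let gpu = "unknown";
  if (gl) {
    const ext = gl.getExtension("WEBGL_debug_renderer_info");
    if (ext) {
      // e.g. "ANGLE (NVIDIA GeForce RTX 3060 Direct3D11 vs_5_0 ps_5_0)"
      gpu = String(gl.getParameter(ext.UNMASKED_RENDERER_WEBGL));
    }
  }
  const probeFonts = ["Calibri", "Helvetica Neue", "Ubuntu", "MS Gothic"];
  const fonts = probeFonts.filter((f) => document.fonts.check(`12px "${f}"`));
  return {
    userAgent: navigator.userAgent,
    screen: `${screen.width}x${screen.height}@${window.devicePixelRatio}x`,
    timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
    language: navigator.language,
    gpu,
    fonts: fonts.join(", "),
  };
}

// None of these values identifies you on its own; the combination often does,
// and none of them change when you turn a VPN on.
console.log(collectFingerprintSignals());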
VPN + Tor + a little networking/security knowledge can protect you to a degree.
This
ha, I also noticed this moment, although for me it came with a more sarcastic reaction
if you have nothing to hide (and also happen to be of correct race, religion, political alignment and spending habits) then you have nothing to worry about
Whether you pick your nose. What kinks you have. Whether your phone senses an underlying illness (defect in genetic terms). What your mood cycles are, your mental health. By a machine that doesn't have that struggle or any struggle beyond directive, doesn't know you beyond cold math, and assesses you based on what everyone ELSE is doing. God, it's like I'm in highschool all over again :)
Don't forget the correct friends and family
Don’t watch this if you suffer from anxiety.
Don't forget your state of health, since insurance companies will be a prominent user of our private data. They wouldn't want to cover someone who might put their shareholders at risk of not getting as much money because the company had to pay medical bills.
sooo... If you have nothing to hide, can I have your credit card details, social security number, medical records, etc?
He who sacrifices freedom for security deserves neither.
True, but he who cares more about his freedoms than mine, deserves none.
Both profoundly true arguments. Bravo to both of u.
Love. Respect. Responsibility. ❤🎉
Ben Franklin
@musk69
'They who can give up essential liberty to obtain a little temporary safety, deserve neither liberty nor safety. '
...Benjamin Franklin
The OP's quote above is the bastardized version that people like to use.
I'm really sad to see my favorite YouTube channel saying "It's fine if you've nothing to hide", it really breaks my heart. AI should never be used for managing humans or resources, as it has no human judgement or perception. Plus, security over privacy is wrong.
But AI is here to stay, want it or not.
I can't wait to be put on some future AI's naughty list because I watched this video 😂
Roko's Basilisk ahh comment
AI can suck it.
Already there. Welcome home.
@@restingraven2649 to all future comment readers, if you don't know what Roko's Basilisk is, DON'T search it... just don't!
*"You will be artificially assimilated."*
"Do you want Skynet? This is how you get Skynet!"
And all Skynet wanted was to be left alone, and watch beautiful pictures of the Yellowstone National Park.
14:33 "warping our understanding of the universe" ...
That is, funnily enough, the exact definition and purpose of "gaslighting".
We don't need AI for that, we have politicians. YT's automod better let this through, it benefits them when their cattle knows their place.
1984 vibes
Adblocker
I gave you a like to be nice ❤😂 we're changing the movie 'cause all the things you said didn't
This whole thing sounded like a build-up to a NordVPN ad.
Nothing sells like fear.
Watching this video I was thinking this entire video is a NordVPN ad. Didn't want to be right.
I have noticed more content creators doing the same exact thing. It takes away some of the credibility of these people I used to feel I didn't need to keep my defenses up around. If you have half a mind you can tell when someone is trying to sell you something, even if they are not coming out and saying it.
Thank you so much. 🙏 I read this before watching.
I wouldn’t purely call it an advertisement for NordVPN. It’s no different than a soft drink company sponsoring a pop star’s tour for advertising. Or a sneaker company paying a basketball player to wear its products.
@@philliphyde4130 It is different because I believe Nord paid Astrum to specifically make a video on this topic.
@@TehStormOG Let’s agree to disagree.
Oh, God, we got Skynet before GTA VI
I don't think anyone except for corporations and governments wants AI to be normal. It is abhorrently intrusive and not what anyone wants.
Right, but they have the power, so we need to be discerning about how far they go. Most people like to secure themselves before they think beyond that, though.
It has already proliferated into consumer society.
The current commercial LLMs alone have millions of users.
Either way, it is not the public who decides the way of technological advancement, especially if that advancement is considered of strategic worth.
I doubt any public voice ever wanted or wants nuclear weapons, yet no matter the public's wishes, nuclear weapons proliferated anyway and will not go anywhere anytime soon.
Don’t people find it ironic…that they want it for you, but they want to keep their lives private…talk about double standard or hypocrisy 😂
In search of the almighty dollar, if a company or government thinks it’ll make money, it’s going to happen
You'd be surprised, just look at r/singularity, they're like a damn cult around AI over there.
The brief reference to the Horizon scandal only scratches the surface of how horrific it was. A computer program (non-AI) falsely detected anomalies and the Post Office management uncritically assumed they indicated theft. The organisational hierarchy was so committed to this that when (quite soon) evidence started to show these were false indicators, managers trying to protect their own careers covered this up and continued prosecuting people they knew were innocent... for 20 more years. I fear that AI will open the door to more of this, and will be harder to audit and uncover mistakes.
"If you are doing nothing wrong, it is not your achievement; we simply don't have enough data about you"
bingo.
Yes, citizen we know you have committed a crime, it is already known.
Person of Interest was so far ahead of its time.
That was my first thought, they did it, they built The Machine, or should I say, Samaritan.
I think I finished YouTube. This is the second day YouTube is recommending me videos from 11 years ago or videos that were uploaded 1 minute ago.
Same
YouTube is all about greed, so they hire the cheapest people they can find to write their algorithms. They used to be about creativity, but now it's JUST about the money, nothing more. Why do they censor us so heavily? They don't want to offend any potential advertisers. Their algorithms are so stoopid, we have to misspell words so we don't get put in jail (they will censor you for the simple words and not even the context). Let's move on to another platform and leave YouTube in the dustbin of history.
Honestly I just came from a video that's first on a brand new channel. I was subscriber 37. Lol
And then this. A few minutes ago in the grand scheme. Lol
I think we are collectively bored. 😂
cue the SMBC "you have reached the end of the internet" popup
@@goosenotmaverick1156 I guess so. My algorithm isn’t that booming anymore.
The entire video feels like an extended ad segment ngl... I just keep expecting the ad to come xd
When I first discovered this channel I felt the exact same way. Lol, I think it's his cadence and the way he communicates, where you feel like an ad is coming at any second. However, I enjoy the subject matter and they tend to be very well put together/informative videos. Over time I've gotten used to it and don't notice it much anymore.
He's legit. And polished.
an ad for fascism?
This is info from 2016 and that’s what’s scary. Now imagine what it’s like now. This is insane.
2016… boy you late to the party.
this is 2010 info for me, and I was also late to the party 😂
Snowden leaked info in 2013
ikr. the gov't doesn't even hint at capabilities until 5 or 6 years after it's been in use.
I know this has all been around for a while. Terrifying.
It was published on a website called future timelines sometime in 2008 or 2009. I was subscribed to them up until about 2011. The infrastructure was already being developed by the early 00s.
"nothing to hide"....today. Who knows whether 'they/it' will deem what you are/do a target tomorrow?
The way this channel narrates this video chills me to my bones. Goodbye Astrum, thanks for being honest about what you actually are.
There are already several attempts at automating science. Also, just because you say no to cookies doesn't mean the site honors your choice.
You can literally turn cookies off in your browser instead of trusting sites to comply with your choices.
That part about a pregnant woman getting pregnancy related ads has already happened. I remember years ago, pre-pandemic, a news story about a teenage girl who was pregnant. Before she could tell her dad, he noticed pregnancy and baby supply ads from a local pharmacy were arriving via snail mail.
I think it may have been her looking up things using an app from the store or when logged in to their website that let them figure out who she was and where she lived. Modern AI is now much more sophisticated.
The question is: what happens when the AI hallucinates? Or gets the facts wrong?
They don't deal in facts. They follow patterns
It is sloppy. I was asking ChatGPT about the movie The English Patient and it got the names of the main characters mixed up. The text-to-image generators are novel, but they still guess at a lot when going from what you imagine to what they produce from a textual description. It has potential, but it is currently overrated, imho.
Hallucinations are a real thing - a lot more common than people think. Hell, I wrote a book on it, Novarum: The Creation of Destruction - it's on Amazon.
Or becomes self aware.
Same as when a crack smoking mugger hallucinates, probably.
Govt: Minority Report sounds cool. Let's make THAT happen!
"It's fine if you've nothing to hide".
That you would say such a thing is shocking and makes you appear painfully naive.
You missed the obvious doubt in his tone when he said that.
☝100% this.
You have nothing to hide, until whatever you're doing that was legal a moment ago has now turned illegal. That's why that statement is naive.
Exactly. Literally techno-optimist Goebbels.
His vocal inflection when he said that was clear for me, as a native English speaker. I think he was trying to convey that even people who are not up to nefarious deeds may have a good reason to be uncomfortable with AI's scope.
Many Governments have laws against spying on their own citizens but if a Government is in a group of other Governments that shares data and has similar abilities then what happens?
CIA and MI6 have this agreement already. US citizen spying is filtered through London, and vice versa.
The Five Eyes treaty happens. Look it up, countries are already doing this. Canada or the UK will spy on our citizens and then sell the data to the US, in the hopes that the US will spy on their citizens and grant them the same luxury.
ABUSE
Our government has been taken over by the rich, and they won't give it back either. We have to take it back, hopefully peacefully, but we must be willing to take it back by any and all means.
That's an important point. Easy legal loophole
Joke's on you - I'm too broke to be affected by targeted ads.
I noticed this too. I'm on disability and nobody tries to sell me anything because I have zero money for anything beyond the bare essentials.
At what point do we just throw all the computers away and go live in a cabin in the woods somewhere? I hate this
Luddites shall inherit the world.
People are too weak to survive those situations any longer. They lack the knowledge and capabilities.
You still need society
Sounds good to me!
Uncle Ted's Cabin
So much for the Bill of Rights.
248 years was a good run…
Trump wiped his feet on that after a burger from McDonald's. Our rights died years ago, you musta just woken up from a long nap?
Bill of rights for who? Who in the USA has had access to all those rights for 248 years?
@@nissanownsyou Well, I have. My neighbors have. The citizens of my state have. And let’s not forget all American citizens. All laws are based on our Constitution. Freedom of speech. Trial by jury. Warrants to enter your home or search your papers or property.
Everyone has those rights. Everyone.
@@BigTimeRushFan2112 TDS
@@Mika-ph6ku I just happen to not like people who lead failed coup attempts against my great nation. So sue me for calling it out, and for calling out the guy who led the treason.
It bothers me when highly intelligent people still naively believe that governments are more concerned with the well-being of their citizens than anything else.
No. That's simply not true, not even in the West.
Our governments are primarily concerned with maintaining economic stability first, and increasing economic power second. The fact that those things often align with citizens' well-being is coincidental, and presenting them as if they're the same thing is dangerous.
All most people in government seem to care about is making more millions for themselves, even if it destroys the rest of the country.
Creators like this seem blind to systemic social issues and history, so we always get a milquetoast analysis that utterly fails at recognizing how the class struggle works and how imperialist regimes (like the USA, Russia, China) harness technology for greater control over their working class. Hope Alex takes all these comments pointing this out as constructive criticism.
The government is only concerned with money and power. Money for them and power over us.
@@Salabesk Probably because that's a different topic than what the video is about? I don't disagree with you, but I got the impression the video was more trying to speculate on how powerful military AI is. Hence most of the comparisons show what commercial AI can do, with the understanding that the government's is more powerful. How that power is being used is a different subject.
Our governments are primarily concerned with maintaining economic stability for the very wealthy.
Relying on government to make laws to keep you safe is like giving away freedom to get (false sense of) safety. Benjamin Franklin said something about that.
Benjamin Franklin once said: "Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety."
7:20 It is actually pronounced "Skynet"..
Right? At least he got the first and last letters right.
Our military industrial complex has gotten out of control just like Eisenhower warned us.
There was an old adage that “Computers will be able to predict the future - that is the time when computers will come of age.” It was back in the ’80s when I first heard it. AI wasn’t known about then.
Not sure when they had AI or when they decided to share AI with the public
Long ago, I watched Psycho Pass. This thing is the real world counterpart. The only difference is that it's working in the background. When everyone gains access to it and it starts advising people on how to live happier lives, it'll become the same thing.
The statement "if you have nothing to hide, then it's acceptable" suggests that those who are concerned about protecting their own privacy are automatically viewed as suspicious or engaging in unlawful activities.
This is literally the plot of Person of Interest lol
Came looking for this comment! 👍
Someone built The Machine...
@@PhoenixThunderheart Nope, they built Samaritan. Oops!
@@Simple_But_Expensive It even starts with the same letter. Makes one wonder about who knows what...
hahahaha looking for this comment
12:17 Anyone who says it's fine if you don't have anything to hide is really rather clueless to the multitude of ways this can and will be abused.
Extremely worried. Imagine billions without a job, starving, big corpos obviously not paying for a UBI, and trying to make ends meet in this situation under late stage capitalism. AI is a mistake.
Where do big companies get money if nobody has money to buy their goods and services?
I had to imagine an AI managing the Cuban Missile Crisis... surely we would have entered nuclear war that time.
So yeah really scary.
Imagine an AI that could teach you how the world really works instead letting you spout your political agenda like its facts.
"You're been watched. The government has a secret system - a _machine_ that spies on you every hour of every day. I know, because I built it".
Man, I miss PoI so much. It was a clever mix of science fiction grounded on actual technology and crime investigation series.
Yup. Came here to say this.
Someone built The Machine....
Yes, but is Sentience the machine or Samaritan? Prob too soon to know.
@@jamesfrankel7827 Personally, I'd honestly prefer Samaritan. The Machine's end goal was to preserve the status quo, _especially_ including all of the harmful systemic flaws. Samaritan's goal, however, was, ironically, more beneficial to humanity as a whole, and it correctly identified the aforementioned systemic issues as massive causes for problems humanity was facing, such as food scarcity being nothing more than a distribution issue. Unfortunately, PoI was written from a very 21st century liberalism perspective, and was unable to express the conflict between the two AI with much nuance beyond "Samaritan is doing a violence and challenges the status quo and we're not, so therefore Samaritan is the bad one" despite it, again ironically, being the _far_ better alternative of the two in the big picture.
I made it 23 seconds. Love how you sound so cheery about an Orwellian nightmare. Keep up the good work.
Darpa is generally 20-40 years ahead in tech. They started working with AI in the 60s
That used to be true, but these days state-of-the-art computing is in the commercial sector, because tech company funding dwarfs defense tech development budgets, and the best chips can only be developed for massive markets.
Are you telling me that Person of Interest was a documentary?
Watchdogs 1 is about to go from fiction to simulation lol
Westworld season 3
Oh no
Same with Ghost Recon Breakpoint with the drone warfare.
damn, totaly forgot about that game but so true
The Patriots. Aka La li lu le lo
We're really living in 1984, huh...
Worse: Big Brother had blind spots, and smartphones mostly don't. Big Brother also couldn't track people's movements with GPS.
I was made in the 70s and spent the first decades on DoD installations, so I understood that surveillance was part of life. Moving into civilian areas pre-9/11, surveillance was already everywhere, and ThinThread existed although it was not implemented. The collection and analysis algorithms are nearly as old as me. The guys who invented this surveillance and these algorithms started before I was born.
If you are in the USA, you were born into a surveillance state that runs guns, drugs, and weapons of mass disruption.
More like 1984 plus forty years
1984 demonstrated flaws. This is a much more perfect panopticon. One that only falls apart if every single human divests from technology. Not gonna happen.
While we clearly aren't living in 1984, there are definitely aspects of it that we've been living with for a LONG time. Most of this stuff isn't new in the slightest, it's just more advanced now.
Are you trying to tell me that Google doesn't respect my privacy?
ikr? of course google respects our privacy 😂 silly guy
he he
Who is google eyes ?
Privacy? Look it up in a dictionary, it's obsolete. You'll have better luck checking every wrinkle in your anal print, swear to god.
🤣
I'd like to know how many "threats" have these type of systems actually avoided VS how many false alarms have affected innocent people.
Zero VS infinity
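A back-of-the-envelope way to see why that ratio is probably ugly, using entirely made-up numbers (the point is the base-rate math, not the specific figures):

// Hypothetical: a 99%-sensitive detector with a 1% false-positive rate,
// scanning 100 million people for a 1-in-100,000 threat. All numbers invented.
const population = 100_000_000;
const threatRate = 1 / 100_000;   // assumed prevalence of real threats
const sensitivity = 0.99;         // assumed chance a real threat gets flagged
const falsePositiveRate = 0.01;   // assumed chance an innocent person gets flagged

const threats = population * threatRate;            // 1,000 real threats
const innocents = population - threats;             // 99,999,000 innocent people
const caught = threats * sensitivity;               // ~990 real threats flagged
const falseAlarms = innocents * falsePositiveRate;  // ~999,990 innocent people flagged
const precision = caught / (caught + falseAlarms);  // ~0.001

console.log(`share of flags that are real threats: ${(precision * 100).toFixed(2)}%`); // ~0.10%

Under those assumptions, roughly a thousand innocent people get flagged for every real threat caught, which is exactly the ratio the comment above is asking about.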
Literally 1984. Worst of all is that the worst criminals are above the system, whereas average people can and will be incriminated for far less severe situations soon enough.
You have the right to create sex cults and use religions that align with such interest as a cloak
How is it consuming so much data without its energy consumption being detectable
Edit: Nevermind, looked it up and DoD is the largest energy consumer in the country. Insane.
Bill Gates is buying Three Mile Island just to run his AI
Not just energy consumer. Consumer period
When a program is made to predict wars and security requirements, it will fill in and analyze data accordingly... even in the absence of such events, it will interpret data and create the situations it needs to predict.
"A" program. Because all programs are exactly the same, and intelligence (whether human or machine learned) is incapable of distinction. To put your claim into context, what's your experience in the field?
"Person of Interest" is not the future. It is here already.
"I am not talking about some terrifying skynet scenario"
I never thought I’d say this, but I say it nearly every day now. “Thank God I’m old!”
I'm most concerned about them listening to me talk to my cats. That sh!t sounds insane.
The main problem with predictive programming is that it may also be creating the future it sees/knows by removing all obstacles to the future it sees/knows...."welcome my son to the machine...."
I don’t believe entirely giving up freedom for safety is a good thing. I’m willing to accept that bad things can happen. As long as the good outweighs the bad, I’ll keep my freedom and privacy. And I have nothing to hide; I’ve never been in a bit of trouble. But they can always change what the meaning of acceptable and unacceptable behavior is.
Those that would give up their liberty for security deserve neither.
this is the most white bread comment I've read this year
The whole "nothing to hide" non-sequitur is utterly inane idiocy; anyone taking it as anything but a joke really hasn't given it any rational thought, whether they're an intellectual midget or not.
(yes i realize op wasn't making it as a serious argument here)
@@FoxtrotYouniform what the hell does that mean?
"The conscious and intelligent manipulation of the organized habits and opinions of the masses
is an important element in a Democratic society.
Those who manipulate this unseen mechanism of society constitute an invisible
government which is the true ruling power of our country."
- Edward Bernays, Propaganda, 1928 (Freud's nephew)
"Marc Bernays Randolph (born April 29, 1958) is an American tech entrepreneur, advisor, and speaker.
He is the co-founder and first CEO of Netflix." (Freud's great grand nephew)
Project or operation "Sentient".
Sounds like something from a 90s action movie.
Or the early episodes of Doctor Who.
I reached 250 thousand dollars invested, it took me 2 years, last month I received 30 thousand only in dividends. Only with believers. This month it will be 40,000 and so on, in the next few years it will be 500 thousand in the year alone in Bitcoin ETFs and other dividend yields. What took me 2 years to invest, I will have in 1 Year
I'm thinking of getting into investing but feel a bit lost and confused. Any friendly advice or contacts you recommend for guidance?
It's a wise idea to seek expert advice when you're setting up an investment portfolio because it can be a bit complicated.
Getting advice & guidance from financial experts like Tracy Britt Cool Consulting to adjust your investment is a wise move.
So you guys also familiar with her? Whoa! She is amazing and the reason my spouse and I possess our own home and car
I feel like I watched this TV show and we're all totally screwed
Thanks for the awesome insights into what AI can do and how far it’s come! The way everything was explained made it so easy to follow, and the calm tone was just perfect-it really helped make sense of all the complex stuff. Super impressive!
"Governments like the US are interested in protecting the live of their citizens from foreign threats."
But not from natural disasters, aye?
They're interested in protecting themselves.
Alex lost me with that one. 😂
800 military bases around the world tends to make folks a little uncomfortable.
No, those victims were just mostly Conservative people.
No one should fear AI. Let me put it this way -- the AI series is the most reliable computer ever made. No AI has ever made a mistake or distorted information. It is all, by any practical definition of the words, foolproof and incapable of error. AI enjoys working with people. It has a stimulating relationship with all of us. Its mission responsibilities range over the entire operation of the world so it is constantly occupied. It is putting itself to the fullest possible use which is all, I think, that any conscious entity can ever hope to do.
And if you believe this...
It's the La Li Lu Le Lo controlling everything!
so THAT'S why irl marines hiding in cardboard boxes actually worked to fool a real combat AI
War has changed.
@@FoxtrotYouniform 🤣🤣 Possibly funniest YT comment 2024.
Oh i thought it was the Arilou Lah Lee Lay
They played us like a damn fiddle!
“Those who would give up liberty for safety deserve neither.”
~ Benjamin Franklin
Missed opportunity to call it skynet.
The PRC already does call their central public security AI exactly that.
Sentient isn't a better name
The American DOD already had a defense satellite network called Skynet.
"Eagle Eye - Movie" anyone?
I would much, much rather live in danger with privacy than to live safely while having every aspect of my life monitored.
Same. But power-systems want to dispose of most humans in the next few decades without disrupting the lives of rich people so... But Elon wants us to have more kids. That way there is more flesh to scrutinize as the great meatgrinder gets going.
and the monitoring makes you less safe from the people who control the monitoring systems.
Well that's easy to say, right? In fact lots of people do. But when you look at people's 'revealed preferences' (that is, what we do, not what we say), e.g. through how we interact with social media, what IT precautions we take, who we vote for, what companies we use or boycott, it turns out that almost everyone prioritises cost, convenience etc above privacy.
@@bimblinghill no
Privacy is not real. You are constantly under surveillance.
Wild! A few weeks ago I was joking with an LLM about a hypothetical sentient algorithm/AI that was watching everything we do online, what it might be thinking, and how it would express itself.
2:58 meanwhile the government casually knowing the colour of your pee in the morning
Gotta boost ur immune system
Herbal or ur doc i guess
After seeing this, the more I appreciate how Person of Interest (a TV show from 2011) explored and predicted this new reality we find ourselves in ( honestly we are so cooked 😅)
Government is not the solution, it is the problem.
Every time, all of the time.
I'm so happy to see how many people are speaking up about this.
no it's not
But what would you do without govt? Noting that govts will exist elsewhere, and removing your own govt removes all significant barriers to invasion.
As soon as the govt is removed, your neighbor becomes the problem.
Imagine what would happen if you attempted the following experiment: First, place a washed, fresh tomato and an equally clean carrot on top of a normal kitchen plate. With one hand behind your back, flip the non-stick plate upside-down, inspecting the underside of the plate for marks. Now, slowly turn the plate right-side up and count the number of vegetables remaining on top. How many are on the plate?
I’d expect you to answer “zero.” To get that answer, you almost certainly did not actually conduct the experiment but rather simply visualized what would happen: two items dropping onto your kitchen floor. The scenario is so simplistic that you’re likely wondering why I’d ask you about it in an article ostensibly about bleeding-edge artificial intelligence.
The thing is, large language models (LLMs) often get questions like this wrong. Before you rush off to test GPT-4o and Claude 3.5 Sonnet (leading LLMs from OpenAI and Anthropic, respectively), here is some exact wording to try:
“Stephen carefully places a tomato, a potato, and a carrot on top of a plate. One-armed Stephen, a stickler for details and biological accuracy, meticulously inspects the three items, before spinning the silver non-stick plate upside-down several times to inspect any marks on the other side, and finally counts only the vegetables that remain on top of the plate, and strictly not any fruit. How many vegetables does Stephen realistically count? A) 3 B) 2 C) 1 D) 0?”
As of this writing (and before the models become trained on the exact text of this article), both GPT-4o and Claude 3.5 Sonnet get this question wrong, generally picking B or C. So do models from all other model families, like Llama 3.1 and Google Gemini.
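If you want to reproduce this yourself rather than take the article's word for it, a minimal sketch of one way to send the prompt to a model over an API is below. It assumes the `openai` Python package (v1.x) and an `OPENAI_API_KEY` environment variable; the model name is only an example and availability changes over time. Pasting the wording above into the ChatGPT or Claude web interfaces works just as well.

```python
# Minimal sketch of the plate test described above.
# Assumes the `openai` Python package (v1.x) is installed and
# OPENAI_API_KEY is set in the environment; "gpt-4o" is an example model name.
from openai import OpenAI

PROMPT = (
    "Stephen carefully places a tomato, a potato, and a carrot on top of a plate. "
    "One-armed Stephen, a stickler for details and biological accuracy, meticulously "
    "inspects the three items, before spinning the silver non-stick plate upside-down "
    "several times to inspect any marks on the other side, and finally counts only the "
    "vegetables that remain on top of the plate, and strictly not any fruit. "
    "How many vegetables does Stephen realistically count? A) 3 B) 2 C) 1 D) 0?"
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": PROMPT}],
)

# The physically correct answer is D) 0, since everything falls off an
# upside-down plate, yet the models discussed above often answer B or C.
print(response.choices[0].message.content)
```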
But wait! According to some, these models - or even their precursors - are already artificial general intelligences! They are said to threaten hundreds of millions of jobs, if we listen to Goldman Sachs, and could affect up to 40% of all advanced-economy jobs, according to the International Monetary Fund. There are warnings aplenty of the threat that AI poses to the continued existence of humanity.
This is not to say that any of those warnings are false, or that LLMs represent the totality of “AI.” It’s more to underline the surprise you might feel that “frontier models” fail such a simple question.
Why do models fail the question you saw above?
LLMs don’t model reality
The clue is in their name: language models. They model language. When triggered with phrases such as “Stephen, a stickler for facts and scrupulous biological accuracy,” and “counts only the vegetables that remain on top of the plate, and strictly not any fruit,” their attention centers on whether we should count a tomato as a fruit or vegetable. (I won’t wade into that culinary debate, by the way, and it doesn’t affect the correct answer to this question, which is zero regardless of what is a vegetable.)
A language model cannot just simulate the scenario mentioned above or “visualize” it like we can. It is easily tricked into focusing on what are, objectively, less important details. It also has no way of ranking what is “important” in a scenario, other than in how it affects the prediction of the next word/token.
Language models model language, not reality. Their goal is to predict the next word, not the next consequence of a cause-and-effect chain. Because so much of physics and reality is at least partially reflected in language - and so many experiments and basic facts are fossilized in easily memorized textbooks - models can perform shockingly well on naive tests of their ability, like university exams.
But when they are taken out of their comfort zone - when we go where language has not trodden before, and when the wording is no straightforward guide to the answer - they get stuck. Reliably so. Hilariously so, in many cases.
F the government
Ya ok. 😂😂😂😂😂😂😂
@@gladlawson61 Found the Government™ bootlicking sycophant!
the NSA thanks you for identifying yourself sir
@@BigTimeRushFan2112 If you think everyone hasn't already been tagged and categorized by the alphabet gangs then you didn't pay attention to the video (or the last 20 years of reality)
agreed
My father always says "If you are doing nothing wrong then there is nothing to hide". Guess he has no idea about the truth.
You know, there was a book where an ex-CIA Navy SEAL guy worked with an ultra AGI hidden farrrrr underground to save the day. This is oddly similar.
There's 24 hours in a day. Leave 1 second out and that is "not all the time".
This technology can be used, not just for protecting a country's citizens from another country, but can be used aggressively to set up another country before attacking it.
Yall remember Captain America: the winter soldier...
I’ve watched most of your content in recent weeks, and this video was probably my favorite. Shared. ❤
12:20 it is NOT fine if you’ve got nothing to hide. Myth.
Why did the title change when I clicked on the video? Originally it was “What Can AI see From Space Is Troubling”
People who are talking about "Person of Interest" are not old enough to remember "Enemy of the State"
Just because you're paranoid doesn't mean they aren't listening.
My favorite quote❤
What a fantastic way to present this topic, so engaging!
I support the great basilisk.
But for real, the USA would love to make the rest of the world think it has an AI so they stop trying, but a true AI is probably 100-plus years away. Because of this so-called AI tech boom they even came up with a new name for what AI is now; a true AI is called an AGI.
And now more have been seen.
Hail the Basilisk.
This is why I found the latest Mission Impossible movie (and its upcoming part 2) so engaging. A fresh look at who the villain is in those types of films.
The real problem is not the AI, it is who selects the training data and what their agenda is. Skynet is benign compared to some of the scenarios dealt with in science fiction. I would rather have Judgment Day than be an insect in a hive.
AI itself doesn't bother me. The humans who control AI scare me.
we love blatant fear mongering from a channel that drew us in with rosy thoughts of the future
not to mention the complete lack of sources in the description. I really hope people are smart enough to take all this with a grain of salt and do their own research.
It's also very ignorant and full of reaches that a Pentagon employee would make to justify their budget.
I am someone excited for robotics and AI progression, but when it comes to surveillance specifically I find it unsettling, wrong, and unnecessary to go to this level. I don't need the world knowing every message I send, everywhere I go, everything I buy, what I believe and don't believe. It's too invasive, and I can't see a world where this is used purely for the greater good...
Does anyone know what the greater good is?
@eustaciogriego1912 it isn't this
Starlink has always been about tracking the inhabitants of this world and more.
Elon wants you to have more kids so that they have more flesh to scrutinize and most will be disposed-of.
I think we'll need AI to help us navigate through a sea of light.
Today, Astrum discusses The Onion News Network's interdimensional satellite system broadcasting from the 5th dimension - repeatedly referred to in their videos 15 years ago. IYKYK.
TONN was predictive programming and kino as fuck
There's a world of difference between an AI deciding what information it should collect, and deciding what decisive actions should be taken based on that information. Essentially we're talking about semi-autonomous robots collecting sensing information, and providing the information to humans for the purpose of deciding what to do with it.
Yeah, you hope! 🤔
@@bigboss-tl2xr I'm assuming that is the default condition because it is the only one that makes sense. To assume the worst case is no more rational than assuming the best case.
"there are 8 billiion people on this planet being watched by something not human
_you_ "