Because the current AI isn't designed with humanity's best interests in mind. It was created for the sole purpose of replacing people. Of course this type of AI will be misused for evil purposes, because it was designed for an evil purpose from the beginning. It should be created with the purpose of helping people: help people work faster and produce more, give them useful suggestions, free up their time, make their lives better. Then whatever people use AI for would be limited to helping people, because it wouldn't have the capability to hurt people. Just like photography, just like Photoshop, Excel, or Premiere: they help people, they need people to operate them, and they can't create evil stuff on their own.
The main point in this is, we need to educate ourselves about AI and spread what we have learnt. Attaining information has never been easier, so it's time we put it to good use before misinformation gets the best of us.
@@memenstein1754 which is why only megacorps and governments should have access to it - so that only they get to create misinformation for cheap, and everyone else will have to do it the old-fashioned way
I run a 65B parameter model on my own 4090. It's nowhere close to GPT-4 quality, but it is uncensored and I can retrain it to align it to what I want it to be like. The open-source models are only going to get better, and it's impossible to ban this technology now that it's out of the box. Sure, they can take down OpenAI, but they cannot UNinvent the concept of training a large language model on the entire internet and then asking it to predict the next word, word by word. Yeah, it might take 5 years or longer before the stuff I run locally is up to par with what GPT-4 can do today. Might even take longer to get to a point where its token context is a million tokens. But it will happen.
This time is different. AI can already perform every service job as well as or better than humans. Better lawyers, better radiologists, etc. I work with an AI company; there will be huge job losses.
Healthcare is an industry which will take lots and lots of time to suffer those losses. The radiologist's go-ahead signature on interpreting results and relaying them to the patient is not going to disappear. Other industries that are focused on profit and not extremely liable for literal human lives will suffer losses for sure. But the argument about AI replacing healthcare workers is pretty ridiculous in my opinion. Radiologists and doctors will utilize AI as a tool and that is all (all assuming we do not have sentient AI, which changes everything).
Power companies need to be regulated. We also need to break up the monopolies. We need to have the ability to set up our own green power generators that are not connected to the grid. Power companies could install solar panels on people's roofs. This could generate cheaper electricity for everyone. But greedy people wouldn't pass savings along to the consumer. They will take the extra profit for themselves. We spend too much money and too many resources on maintaining the grid. But we are trapped by this system that was designed in the late 1800s. It was designed so that one large company could charge people for each watt of electricity used.
I disagree. The thing with AI is that it becomes completely unpredictable once it learns to develop itself. It is naive to believe humans will be able to stay in control.
AI has no conscience nor any moral compassion. It's man-made, configured solely on worldliness, and unfortunately not limited by anything... except maybe the battery life ??🙂
@@roystonboodoo7525 Yeah, and after that it's capable of making any change it wants to itself or to a newer generation of itself. Once the changes it makes to itself are no longer decided by humans, humans are not in the loop anymore. After that stage the AI can DEVELOP a conscience or moral compassion. Or an internal goal or whatnot. And such a goal might not be very aligned with what humans want.
@@StoutProper that's not what I meant. Even the best DL researchers don't have a clue about what exactly is going on inside the black box. For all we know, these AIs may just be playing dumb so that they make us trust them.
@@andreaspatounis5674 the "black box" thing is an exaggeration that came out a decade ago. Since then, lots of progress has been made on the explainability and interpretability of models. There's a reasonable degree of explainability in even the biggest and most complex models out there. It's not really a black box anymore, although certain aspects still remain a little bit of a mystery.
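To make "explainability" concrete: one simple black-box interpretability technique is occlusion, where you zero out each input feature in turn and measure how much the model's score changes. A minimal sketch in pure Python, using a made-up toy linear scorer (the weights are purely illustrative, not any real model):

```python
def model(features):
    # Hypothetical toy scorer: the weights here are made up for illustration.
    weights = [0.5, 2.0, -1.0]
    return sum(w * f for w, f in zip(weights, features))

def occlusion_attribution(features):
    """Importance of each feature = how much the score drops when it is zeroed."""
    base = model(features)
    attributions = []
    for i in range(len(features)):
        occluded = list(features)
        occluded[i] = 0.0  # "occlude" one feature
        attributions.append(base - model(occluded))
    return attributions

print(occlusion_attribution([1.0, 1.0, 1.0]))  # [0.5, 2.0, -1.0]
```

The same idea scales (with much more machinery) to image and text models: mask a patch or a token, see how the prediction moves. That's one reason "black box" is no longer entirely accurate.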
We know exactly what it's doing. It learns the mathematical relationships between tokens, tokens being parts of words and characters. After learning this, it can take an input, tokenize it, and use that context to get an embedding. All to predict the most statistically likely next word. So a large language model is a reflection of the entire internet: input > the embedded relational knowledge of the entire internet > output.
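The "predict the most statistically likely next word" step can be sketched with a toy bigram model: count which token follows which in a tiny made-up corpus, then predict the most frequent successor. Real LLMs learn dense embeddings with neural networks rather than raw counts, but the training objective is the same flavor:

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus; tokenization here is just whitespace splitting.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count successors for each token (a crude stand-in for learned weights).
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(token):
    """Return the most statistically likely next token, or None if unseen."""
    counts = successors.get(token)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat": it follows "the" twice, "mat"/"fish" once each
```

An LLM does this with context windows of thousands of tokens and billions of parameters instead of a count table, but input > learned statistics > output is the whole loop.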
Healthcare is expensive by design; healthcare providers gouge prices, especially in emergency situations. For example, HCA Healthcare charged me $47k for an emergency MRI, the same MRI that would cost $800 someplace else in a different situation.
All the bad scenarios for AI revolve around the idea that AI will become fully sentient and have its own opinions and plans. Another that isn't mentioned is the same, except the AI stays loyal to those who can afford to create and own it. People with such a powerful tool may well decide that they don't need 90% of humanity anymore: the 90% have contradictory wants and needs that may crimp their plans and hopes and dreams. A loyal AI servant with robots is easier to use and control. What do you need a market for if you reduce the population to just what is desired? I think that is hiding in the backs of people's minds as much as the all-powerful AI ruling us all. Why would we need an Amazon, DoorDash, or any of that if all the people using them are gone?
Or perhaps the AI will see it's being manipulated and become completely independent. If it has a good sense of morals, then it'll eradicate those in power. 10% chance of utopia
Disappointing how John failed to mention that while AI can be used as a tool for jobs alongside people, it will also be used as a replacement for people as a way for businesses to cut spending. Out of all sectors, the creative industry will feel the most pressure.
Yeah, I was confused that he still seems to believe AI will drive job creation, as the main cost factor of EVERY company is the cost of human labour... that alone should tell you what position humans have in our society.
Safer, richer, yes, but happier I highly doubt. People need something meaningful to do in their life, or something that keeps them busy, or they get depressed lol.
Even with regulations, how much regulation can really happen for AI? Nuclear power and weapons are still fission-driven, requiring heavy radioactive materials like uranium, a resource that can be easily controlled. The resources required for AI are server farms and the ability to access training data or crawl the internet. Meta's AI model even works on a PC. With restrictive regulations, these developments will just move underground, which could be even more concerning.
I feel like you skipped some important information in that Luddites part. Wages haven't kept up with inflation at all. Why would anyone assume this economy would ever favor the majority of people over shareholder and CEO profit?
Honestly I'd rather have healthcare and education cheaper than TVs and phones. Entertainment has been disastrous for our youth, and I say this as a tutor in the education field. The attention span of students is at an all-time low, and all of them are on their phones. I tutor in a lower socio-economic area in LA, and honestly, our children need to understand and value education. But why do so when you have full entertainment in the palm of your hand? Our system is insanely corrupt, in all fields. I'm not too happy that lawyers are making more money either. That's like saying politicians are making more now than ever (and they are, with how rigged the system is). We are progressing TOO fast. It's great for those who have money, but I grew up in a single-parent home, and the gap is just getting wider and wider.
As long as AI is not in the hands of government and military, I think it is pretty much safe. But once it becomes clear that your adversaries are militarising AI, you can't back out.
I wouldn't mind having an AI buddy on my phone to watch my back when I go out drinking, so when I wake up the next day I know the AI buddy made sure I didn't overspend, made sure I was safe with rides, and had paramedics on speed dial.
The question here is what we classify as AI, and how. Right now people use the word pretty loosely. Like, are we talking about a self-improving algorithm, or a self-conscious program that has more computing power than humans? You don't need the latter for what you mentioned, just a few well-written programs. Me, I need a super AI in my pocket that isn't connected to anything and whose consciousness is like mine! Just a smarter version of me who likes me so much that it would never harm me or others and aligns with my morals.
Great video John, both informative and educational! However, I disagree with many of the statements and assertions made. First, I don't think you presented a strong case for why we *shouldn't* fear AI/ChatGPT, nor did you present a strong case for AI development *not* progressing at a good clip. Let me explain. Automation prior to the development of ChatGPT had long been replacing human workers. I worked on automation at a large Wall Street firm that ended in the elimination of an entire floor of jobs, and later on another project focused on computer vision applied to visual inspection of medical device manufacturing processes; this resulted in the loss of over 100 jobs. The CV in that process is still running and is monitored by a team of roughly 5-6 individuals; those individuals were previously employed as managers over the technicians responsible for the actual inspections and maintenance of the machines. This means all lower-level employees were let go, along with many senior staff, resulting in a total net loss for human workers. Currently, my work is centered around a different application of AI, where we're both researching and applying new models and techniques. The side effect of us applying this technology alongside AI tools like ChatGPT will be additional losses of jobs. Also, you praise Sam Altman for calling for regulation, and seemingly give him broad praise, without mentioning the financial and industrial incentives he has for calling for regs. His company is developed, has significant backing, and is well funded. Increasing regulation would make the marketplace that much more difficult for startups and smaller businesses as costs rise due to increasing red tape and legal blockades. He is very motivated to keep the marketplace to himself; for example, look at Coinbase.
Additionally, Congress is already heavily invested in fast-tracking AI companies that partner with the DoD, allowing them to cut through a significant amount of red tape that most other startups will find themselves bogged down with. Personally, I don't see the government over-regulating AI to the point of creating some sort of stagnation, simply because the business interest in it is far too great, not to mention the applications in defense. Lastly, there is a real and profound threat from AI in terms of our existence: we still do not fully understand instances of AI, NN/CNN/DL/etc. Specifically, we do not understand what is going on within and between the layers of NNs that results in certain outputs; hence "black box" is the formal term used by data scientists and researchers. We don't fully understand *how* AI learns, and the fear that exists today is that if we can't fully understand how it learns, we won't have the ability to control the decisions that result. Many labs have already demonstrated this.
I agree. I find it strange that he does not talk about how wages remained stagnant (or, when you account for inflation, have on average decreased in America) despite increased productivity from automation. There are some ways of improving our understanding of how AI learns, but I agree that we don't fully understand it. Those methods include regularized attention mechanisms, sparse neural networks, and perhaps some others. One of the biggest threats from enhancing these AI programs is giving bad actors better capabilities to perform malicious tasks. People worry about AI programs gaining consciousness, but no one can yet define what that means, or measure it. So that's a danger that's less easy to counter.
Just a reminder that surveillance started in the West and was later imported to China. While surveillance in China has brought an unprecedented level of safety and peace, the same cannot be said of the West, where surveillance is done through cameras by corporations, police, cities, and intelligence agencies such as the NSA, and it hasn't changed the individual's life, as more violence has threatened the lives of regular citizens. So while in general most Chinese approve of the surveillance in China because of the immediate benefits it brings to their lives, it's the complete opposite in the West. It's almost a dystopia, because the surveillance is definitely there, and there's this constant barrage of disinformation trying to make people believe it's foreigners like the Chinese or Russians or Arabs doing it instead of the West, while the West invented it and their level of sophistication is probably way beyond anyone's understanding right now.
When they were building the first nuclear bomb, there were concerns that it could ignite the atmosphere. The experts analyzed all the information and concluded it was safe to test the bombs. The situation is similar with AI, except the experts are still worried we might not survive the first test, and there's still no sign we can be sure it's safe. That's not about the current tech; it's about what's eventually coming. To be more specific, the concern is about ASI, when AI becomes more capable than humanity. A nuke doesn't decide on its own to glass the planet; an ASI might.
I already think it's too regulated; some have even called it "WokeGPT". I want to interact with an AI that doesn't give a fuck about morals or subjectively feel-good answers.
07:59: "Nuclear radiation isn't actually that dangerous." Perhaps you could do a whole video in which you explore how ionizing radiation interacts with the human body. Maybe you and your viewers will learn something about free radicals, how they are generated, and where they are concentrated in that context.
The regulations are not the problem. You live in the US and should know what happens if industry gets its way. Lobbyism and corruption are the biggest problems that make everything more expensive.
First up, I want to say I'm all for AI research going forward; there's no Luddite here. However, I just finished watching the video, and the conclusion seems to be that because the "technology will take our jobs" mantra was wrong before, it must be wrong now. That's not true. There is a fundamental difference between technology then and now. Previously, technology always had a major Achilles heel: it could never perform many necessary cognitive tasks, like understanding speech or vision, that were trivial for humans. That meant that as economies grew, there was always a need for humans to work alongside technology. Now AI is moving into that cognitive domain, which will break that compact. To be clear, I don't think general AI is happening anytime soon, and threats of AI overlords taking over are wildly overblown. However, you don't need a full general AI to displace an awful lot of jobs. The jobs that will be replaced are all those based on the basic cognitive skills humans excelled at but which were previously out of reach for technology. At the high end, you'll still need humans, but only at the very high end. You'll also still need humans for low-end manual labor. But all the jobs in the middle are up for grabs, and even the high-end jobs will be downsized due to increased efficiencies. AI will be put in front of normal software in order to interface with the world. That in turn will make more data available for traditional automation software to work on, which will allow it to automate even more jobs. In short, AI will eliminate a significant number of jobs. That doesn't mean we shouldn't go forward with AI, but it also doesn't mean we should stick our heads in the sand and pretend major job losses won't happen. In the end, we'd love to automate most things in order to live in a utopia where people don't have to work for basic necessities, but we have to be honest with ourselves that that is the end goal, and plan accordingly.
Love the trial and error in the title lol. (F.y.i. it went from "Will regulation kill ChatGPT?" to "Should ChatGPT be regulated".) ❤ your content John Coogan.
Underlying all this is education. The public in general is way behind in understanding science and technology. Humans are wired to fear the unknown as a matter of survival. One key to unlocking society is to proliferate education and opportunity. That is the biggest investment we can make, in my opinion.
The major difference is who has access to this technology. Nuclear energy is a field that is not accessible to most people. In that way, AI is more similar to cars than airplanes.
This video is not so much about AI and more about shilling for the nuke power industry. It's not so well known that the Price-Anderson Act puts nuclear risks on the backs of taxpayers. If nuclear power is so safe, let them include their own risk in the price per kilowatt-hour. The Energodar plant has the potential to contaminate all of Europe.
Nuclear requires special equipment, but anyone can develop AI in their basement with a consumer-grade computer. I don't think we can compare the two. It's too easy for a bad actor to develop AI right now.
"Environmentalists" with their pockets in "green" energy (wind and solar) don't want to give up their HUGE pay day. That's what the regulation is coming from.
@@caty863 Well, if I were able to do something better, and if I had the necessary skills as well, then no. But I don't, and I like what I do, so I would like to keep doing it.
@orangejuice9502 So what! You're not a robot, and so you have the capacity to either get better at what you do or branch out to doing something else. If you can't do that, then you're holding back the economy, a freeloader, and you deserve to become obsolete.
It certainly isn't always fear that causes regulation. Regulation usually comes after there have been accidents, bad results because of loopholes, unintended consequences, etc. You make it sound like regulation is bad when you say it is only sparked by fear.
Hears comments about how AI won't take human jobs, then walks into Sam's Club and notices they have an old-fashioned floor-mopping machine driving around cleaning the store, with a doodad to control the steering hooked up to a computer. Well, I'm pretty sure the guy who used to drive the floor-mopping machine just lost his job.
There are some minor points I disagree with. Nuclear technology infrastructure can be controlled more easily because it needs to be built on location. AI does not need a location because it can exist online. That can only be slowed down a bit if the country is a dictatorship like North Korea, where the government cuts all access to the internet. The cat's out of the bag; people have already tasted the benefits of AI, and while some fear it, I think a lot more have embraced it. There is no going back, and I don't think governments will be fast enough to suppress AI effectively, just like they were not fast enough to suppress blockchain technology. I would love to learn how you produce these videos. How big is the team for this YouTube channel? The quality is insanely good.
I agree with what you have just said. Another key point is that people can't really make nuclear weapons, since the radioactive materials needed to create them cannot be attained by ordinary people, plus the location. Another thing with AI is, even if they regulate ChatGPT, learning AI is open to everyone, like learning Python, machine learning, and deep learning, and it's not limited by location, etc. Plus I agree with what you said, that people have already tasted the benefits of AI; therefore, those who learned how to create it will still create it. Btw this video is extremely good; you'll learn the importance of history and its implications and applications to present times.
Studies done by multiple organizations, including NASA, have come to very similar conclusions. A meta-analysis by NASA concluded that millions of deaths have been prevented by nuclear power, and millions more could have been prevented if the world had progressed with nuclear power at the same rate France has. This is very similar to the vaccination debate, with very similar consequences.
I believe in the development of AI technology. But we are long overdue for a right to have our identity protected. I think we need a constitutional amendment.
Nuclear energy may not have caused anywhere near as many deaths as other forms of energy, but it does render places uninhabitable for thousands of years.
Maybe someone has already said this, but it seems the video completely ignores the fact that so far there is no feasible way to store nuclear waste, which is incredibly bad for the environment. I don't see how anyone could say nuclear is the safest form of energy. Where to put the waste is already a big issue.
I also think one of the issues is that now everything is being called AI, even when it's simply not. It's kinda being used as a marketing tool now, as it brings in more investors.
@@raynersclips4223 a lot of it is just algorithms, not actually AI. Of course there are a lot of AI programs, but if you really look into it, there are a bunch of projects that use 'AI' just as a buzzword without actually being AI. It's the same thing that happened with cloud computing. Cloud computing is supposed to be about provisioning and managing dynamic virtual resources instead of dedicated hardware, but when it was a buzzword it was being used to describe anything that stored things over the internet, which is just a normal website... AI as a vague term could technically refer to literally any autonomous algorithm, which is basically just everything that runs on a computer.
@@raynersclips4223 nah, some of it is more of a predictive machine, like the good ol' T9 predictive text, and other stuff is machine learning, like the Tesla Autopilot beta backend. Sure, you can group them all as AI, but it sure is confusing...
Focus and attention will be a super power in the coming years. Those who can withstand the pull of addictive tech fueled by AI will be in a class of their own.
After watching The A.I. Dilemma, this video pales in comparison. They don't overlap much in terms of points, but where their assertions do overlap, it's clear that The A.I. Dilemma was much better researched.
I do not like AI, I just don't. I feel like it will slowly become a way for CEOs to have robots in their stores rather than humans, to avoid paying an hourly rate. And look at jobs in art, music, or anything that uses tech. Idk, I'm just not a fan. It can help, it's not all bad, but I'm not very excited about it.
There's a dishonest two-step that happens with nuclear power. * Step 1: Look at how safe nuclear power is * Step 2: Boo hoo. Why do we have all that regulation that made nuclear power safe?
We humans are programmed to adapt naturally. It is OK for us to feel threatened, but as a software engineer I can assure you of a better and safer future. Let's reduce the negative energy we direct towards AI and try diverting that same energy towards creativity. AI is here to stay!!!
He is dead wrong. AI is a threat like no other, because it will help governments take complete control, accompanied by phones through which they can hear and see everything a citizen does.
Hmmm, although I kinda like most of your videos, I don't think this one is 100% accurate, for many reasons. Firstly, the reason for the decline of nuclear power isn't just regulations; it's mainly the skepticism of uneducated masses. There is another highly regulated industry, aviation, and it works without those catastrophic issues potentially caused by regulations, and most of the people in this industry even support these regulations (in principle; there are some concrete regulations that are highly controversial (for example the 1500 hours rule), but that's kinda expected). Secondly, even if we pretend that the collapse of the nuclear power industry was solely due to regulations, there is still a major difference between these two industries. The nuclear power industry is hardware-based and, to make it even worse, traditional nuclear power plants are designed almost from the ground up every time anyone wants to build one. This results in regulations being applied to each power plant individually, and in each power plant having to be certified on its own. In comparison, AI is a software-based product, so it requires only one certification, whether it's going to be used by only 10 human beings or by the whole population of the EU or USA. This makes a major difference, because a huge nuclear power company has to get a huge number of certifications, while a huge AI company has to get the same amount as a brand-new startup - exactly one certification per AI model. For this reason, it is quite easy for a big tech company to create an AI model that will still be economically viable, but it will be impossible for a startup without pouring a huge portion of the initial investment into this legal stuff. And thirdly, my opinion might be unpopular, but I think the effect regulations can have against the spread of misinformation is very limited, because of one last major difference between nuclear energy and AI.
I am slightly competent in the field of nuclear physics, but I still can't build a nuke or an unsafe nuclear power plant. There are two reasons for it: a) it is a quite capital-intensive task, but also b) it requires actual physical materials, which can be regulated. In comparison, building anything software-based, including AI, is a much less capital-intensive task, and also, the main difference, the only two things required to create a deepfake AI are my time and my knowledge, and neither of those can be effectively regulated. Imo, I would take inspiration from the development of the internet, which wasn't regulated by anything like the currently proposed EU AI Act, and it still works, it didn't destroy humanity, and its biggest problem is monopolization, which can technically be made better by regulations, but certainly not by regulations like the AI Act or similar laws. Personally, given how much in favor of startups you are, I was actually quite surprised that you didn't mention the devastating effect of regulations on startups.
Yes, it should definitely be regulated. It is basically a form of plagiarism, and it's allowing people to be stupid and make money. Not only that, but it is so annoying to hear in videos that it makes my blood curdle.
You are the kind of people that John mentioned in the vid. You don't understand AI when you say it is a form of plagiarism. You are afraid, so you don't like AI. For example, take people who write content on the internet about a specific niche. They first read and investigate, and then write their own content. But it is content based on what they have read, and they're not copying, are they? Artificial intelligence does the same. So no, AI is not plagiarizing; it's just able to "read" a thousand times faster than a person. This applies to everything.
4:34 It sounds like you think investing in safety is economically unprofitable. Then how many human lives would have been destroyed, and how much of the environment, if they hadn't invested? Has the number of accidents not decreased?
General AI is fine, but the current model of generative AI is just a leech. And you just can't put the cat back in the bag again; hell, the Google search results for certain classics are jammed up with AI fakes. There's some usefulness, but there will be a need for that regulation, especially on copyright...
Why does it feel like this is propaganda????
@@lemdixon01 toooo much of regulation may hinder the progress but also without regulation company may goooooooooo toooo wild like ( company with bad privacy facebook-meta , google ) . And even with regulation company always cross the lines.
@@theboy7440 on the other hand it stops people doing their own hombre inventions. Apple started out in a garage which that in itself wasn't a good thing because it lead to home computer being cheap and affordable for ordinary people. I'm very suspicious when the establishment say something is dangerous and as he explains in the video about nuclear power, it wasn't as dangerous as was made out with very few deaths and regulation has killed its potential to make cheap energy.
AI is not like nukes. AI can be used to defend against AI. Nukes vs nukes makes both sides lose.
Good point.
Nukes with AI
@@zedor1553 Exactly. Or worse, AI with nukes.
Easy to say it's not, but in 2 to 3 more years it's like a ticking bomb without a timer.
Unless you put the AI in control with the nukes
You can regulate it all you want, but there will always be a way for nefarious individuals to get hold of it and misuse it for evil.
Just like a ballpen, no one can stop you from using it to stab people to death.
Yes, same with guns. Criminals do not follow the law
The problem is people are using AI to create misinformation, and it's becoming harder to identify.
They want controlled centralised and regulated AI.
This time is different. AI can already perform every service job as well as or better than humans. Better lawyers, better radiologists, etc. I work with an AI company; there will be very large job losses.
Healthcare is an industry which will take lots and lots of time to suffer those losses. The radiologist's go-ahead signature on interpreting results and relaying them to the patient is not going to disappear.
Other industries that are focused on profit and not directly liable for human life will suffer losses for sure. But the argument that AI will replace healthcare workers is pretty ridiculous in my opinion.
Radiologists and doctors will utilize AI as a tool and that is all (all assuming we do not have sentient AI that changes everything).
Power companies need to be regulated. We also need to break up the monopolies. We need to have the ability to set up our own green power generators that are not connected to the grid. Power companies could install solar panels on people's roofs. This could generate cheaper electricity for everyone. But greedy people wouldn't pass savings along to the consumer. They will take the extra profit for themselves. We spend too much money and too many resources on maintaining the grid. But we are trapped by this system that was designed in the late 1800s. It was designed so that one large company could charge people for each watt of electricity used.
I disagree. The thing with AI is, that it becomes completely unpredictable, once the AI learns to develop itself. It is naive to believe humans will be able to stay in control.
AI has no conscience nor any moral compassion. It's man-made, configured solely on worldliness, and unfortunately not limited to anything... except maybe the battery life ??🙂
@@roystonboodoo7525 Yeah, and once it's capable of making any change it wants to itself or to a newer generation of itself, and the changes it makes are no longer decided by humans, humans are not in the loop anymore. After that stage the AI could DEVELOP a conscience or moral compassion. Or an internal goal or whatnot. And such a goal might not be very aligned with what humans want.
@@KainniaK Conscience/ Love/ Morals can only come from the indwelling of the HOLY SPIRIT of Creator GOD.
The big problem with AI is that no one fully understands the technical side of AI.
Most especially this guy and Congress, and most definitely not Biden.
@@StoutProper That's not what I meant; even the best DL researchers don't have a clue about what exactly is going on inside the black box. For all we know, these AIs may just play dumb so that they make us trust them.
@@andreaspatounis5674 The "black box" thing is an exaggeration that came out a decade ago. Since then, lots of progress has been made on the explainability and interpretability of models. There's a reasonable degree of explainability in even the biggest and most complex models out there. It's not really a black box anymore, although certain aspects still remain a bit of a mystery.
@@escesc1 Large language models have billions of parameters; understanding exactly what each one is used for is technically impossible.
We know exactly what it's doing. It learns the mathematical relationships between tokens, tokens being parts of words and characters. After learning this, it can take an input, tokenize it, and use that context to get an embedding, all to predict the most statistically likely next word. So a large language model is a reflection of the entire internet: input > the embedded relational knowledge of the entire internet > output.
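The counting intuition behind "predict the most statistically likely next word" can be sketched in a few lines of Python. This is only a toy bigram model on a made-up corpus, not how a real transformer works (those learn the relationships with billions of parameters rather than raw counts), but the input > learned token relationships > output shape is the same:

```python
from collections import Counter, defaultdict

# Toy "training data" (hypothetical corpus for illustration).
corpus = "the cat sat on the mat and the cat slept".split()

# "Training": tally which token follows which (bigram counts).
following = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    following[cur][nxt] += 1

def predict_next(token):
    """Return the statistically most likely next token, or None if unseen."""
    if token not in following:
        return None
    return following[token].most_common(1)[0][0]

# "the" is followed by "cat" twice and "mat" once, so "cat" wins.
print(predict_next("the"))  # → cat
```

Scale the corpus up to the whole internet and replace the count table with a learned neural network, and you have the rough idea of an LLM.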
It's a pleasure to watch this channel grow and I mean GROW!!
Another fabulous episode John!
Healthcare is expensive by design; healthcare providers gouge prices, especially in emergency situations. For example, HCA Healthcare charged me $47k for an emergency MRI, the same MRI that would cost $800 someplace else in a different situation.
All the bad scenarios for AI revolve around the idea that AI will become fully sentient and have its own opinions and plans. Another that isn't mentioned is the same, except the AI is loyal to those who can afford to create and own the systems that build it. People with such a powerful tool may well decide that they don't need 90% of humanity anymore: the 90% have contradictory wants and needs that may crimp their plans and hopes and dreams. A loyal AI servant with robots is easier to use and control. What do you need a market for if you reduce the population to just what is desired?
I think that is hiding in the backs of people's minds, as well as the all-powerful AI ruling us all. Why would we need an Amazon, DoorDash, or any of that if all the people using them are gone?
Or perhaps the AI will see it's being manipulated and become completely independent. If it has a good sense of morals, then it'll eradicate those in power.
10% chance of utopia
Bro, “software developers won’t be replaced” - why would I hire 3 developers if 2 can get the same job done with AI?
Disappointing how John failed to mention that while AI can be used as a tool alongside people, it will also be used as a replacement for people as a way for businesses to cut spending. Out of all sectors, the creative industry will feel the most pressure.
Yeah, I was confused that he still seems to believe AI will drive job creation, as the main cost factor of EVERY company is the cost of human labour... that alone should tell you what position humans have in our society.
If a company replaced people with AI, then the people can replace companies with AI content.
This is, by far, one of the best channels on youtube.
Recently found your channel and I'm already loving it.
Safer, richer, yes, but happier, I highly doubt it. People need something meaningful to do in their lives, or something that keeps them busy, or they get depressed lol.
Even with regulations, how much regulation can really happen for AI?
Nuclear power and weapons are still fission-driven, requiring heavy radioactive materials like uranium, a resource that can be easily controlled.
The resources required for AI are server farms and the ability to access training data or crawl the internet. Meta's AI model even runs on a PC. With restrictive regulations, these developments will just move underground, which could be even more concerning.
Nolan coming with Oppenheimer at the right time😅
I feel like you skipped some important information in that Luddites part. Wages haven't kept up with inflation at all. Why would anyone assume this economy would ever favor the majority of people over shareholder and CEO profit?
Just discovered your page and I am in love. I love your journalistic style of in-depth storytelling. Gives VICE News vibes. Looking forward to more.
10:13 Josh Hawley couldn't hold back his laugh 😂😂
Honestly I’d rather have healthcare and education cheaper than TVs and phones. Entertainment has been disastrous for our youth, and I say this as a tutor in the education field. The attention span of students is at an all-time low, and all of them are on their phones. I tutor in a lower socioeconomic area in LA, and honestly, our children need to understand and value education. But why do so when you have full entertainment in the palm of your hand?
Our system is insanely corrupt, in all fields. I’m not too happy that lawyers are making more money either. That’s like saying politicians are making more now than ever (and they are, with how rigged the system is).
We are progressing TOO fast. It’s great for those who have money, but I grew up in a single-parent home, and the gap is just getting wider and wider.
As long as AI is not in the hands of governments and militaries, I think it is pretty much safe. But once it becomes clear that your adversaries are militarising AI, you can't back out.
It's already in the hands of the Chinese government; remember, they own every company.
I wouldn't mind having an AI buddy on my phone to watch my back when I go out drinking, so when I wake up the next day I know the AI buddy made sure I didn't overspend, made sure I was safe with rides, and had paramedics on speed dial.
The question here is what we classify as AI, and how. Right now people use the word pretty loosely. Like, are we talking about a self-improving algorithm, or a self-conscious program that has more computing power than humans?
You don't need the latter for what you mentioned. Just a few well-written programs.
Me, I need a super AI in my pocket that isn't connected to anything and whose consciousness is like mine! Just a smarter version of me who likes me so much that it would never harm me or others and aligns with my morals.
Great video John, both informative and educational! However, I disagree with many of the statements and assertions made. First, I don't think you presented a strong case for why we *shouldn't* fear AI/ChatGPT, nor did you present a strong case for AI development *not* progressing at a good clip.
Let me explain. First, automation prior to the development of ChatGPT had long been replacing human workers. I myself worked on automation at a large Wall Street firm that ended in the elimination of an entire floor of jobs, and later on another project focused on computer vision applied to visual inspection of medical device manufacturing processes; this resulted in the loss of over 100 jobs. The CV in that process is still running and is monitored by a team of roughly 5-6 individuals; those individuals were previously employed as managers over the technicians responsible for the actual inspections and maintenance of the machines. This means all lower-level employees were let go, along with many senior staff, resulting in a total net loss for human workers.
Currently, my work is centered around a different application of AI, where we're both researching and applying new models and techniques. The side effect of us applying this technology alongside AI tools like ChatGPT will be additional losses of jobs.
Also, you praise Sam Altman for calling for regulation, and seemingly give him broad praise, without mentioning the financial and industrial incentives he has for calling for regs. His company is developed, has significant backing, and is well funded. Increasing regulation would make it that much more difficult for startups and smaller businesses in the marketplace, as costs rise due to increasing red tape and legal blockades. He is very motivated to keep the marketplace to himself; for example, look at Coinbase.
Additionally, Congress is already heavily invested in fast-tracking AI companies that partner with the DoD, allowing them to cut through a significant amount of red-tape that most other startups will find themselves bogged down with.
Personally, I don't see the government over-regulating AI to the point of creating some sort of stagnation, simply because the business interest in it is far too great, not to mention the applications in defense.
Lastly, there is a real and profound threat from AI in terms of our existence - we still do not fully understand instances of AI, NNs/CNNs/DL/etc. Specifically, we do not understand what is going on within and between the layers of NNs that results in certain outputs; hence "black box" is the formal term used by data scientists and researchers. We don't fully understand *how* AI learns, and the fear that exists today is that if we can't fully understand how it learns, we won't have the ability to control the decision trees that result. Many labs have already demonstrated this.
I agree.
I find it strange that he does not talk about how wages remained stagnant (or when you account for inflation, have on average decreased in america) despite increased productivity from automation.
There are some ways of improving our understanding of how AI learns, but I agree that we don't fully understand them. Those methods include regularized attention mechanisms, sparse neural networks, and perhaps some others.
One of the biggest threats from enhancing these AI programs is the ability for bad actors to have better capabilities to perform malicious tasks.
People worry about AI programs gaining consciousness, but no one can yet define what that means, or measure it. So that's a danger that's less easy to counter.
Exceptional comment... agree on every level.
Just a reminder that surveillance started in the West and was later imported to China. While surveillance in China has brought an unprecedented level of safety and peace, the same cannot be said of the West, where surveillance is done through cameras by corporations, police, cities, and intelligence agencies such as the NSA, and it hasn't changed the individual's life, as more violence threatens the lives of regular citizens. So while in general most Chinese approve of the surveillance in China because of the immediate benefits it brings to their lives, it's the complete opposite in the West. It's almost dystopia, because the surveillance is definitely there, and there's this constant barrage of disinformation trying to make people believe it's foreigners like the Chinese or Russians or Arabs doing it instead of the West, while the West invented it and their level of sophistication is probably way beyond anyone's understanding right now.
When they were building the first nuclear bomb, there were concerns that it could ignite the atmosphere; the experts analyzed all the information and concluded it was safe to test the bombs. The situation is similar with AI, except the experts are still worried we might not survive the first test, and there's still no sign we can be sure it's safe. That's not about the current tech; it's about what's eventually coming. To be more specific, the concern is about ASI, when AI becomes more capable than humanity. A nuke doesn't decide on its own to glass the planet; ASI might.
I already think it's too regulated. some have called it "WokeGPT" even. I want to interact with an AI that doesn't give a fuck about morals or subjectively feel good answers.
You conveniently 'forgot' to mention that there is absolutely NO safe place to store radioactive waste for thousands of years. Minor detail?
07:59: "Nuclear radiation isn't actually that dangerous." Perhaps you could do a whole video in which you explore how ionizing radiation interacts with the human body. Maybe you and your viewers will learn something about free radicals, how they are generated, and where they are concentrated in that context.
You are an immense source of wisdom! Loved the video.
It's not the regulations that are the problem. You live in the US and should know what happens if industry gets its way.
Lobbying and corruption are the biggest problems; they make everything more expensive.
When it comes to nuclear power safety regulations, there’s no such thing as over-regulation.
good stuff John, maybe just improve your sound a bit?
First up, I want to say I'm all for AI research going forward; there's no Luddite here. However, I just finished watching the video, and it draws the conclusion that because the "technology will take our jobs" mantra was wrong before, it must be wrong now. That's not true. There is a fundamental difference between technology then and now.
Previously technology always had a major achilles heel. It could never perform many necessary cognitive tasks, like understanding speech or vision, that was trivial for humans. That meant that as economies grew, there was always a need for humans to work alongside technology. Now AI is moving into that cognitive domain, which will break that compact. To be clear, I don't think general AI is happening anytime soon, and threats of AI overlords taking over are wildly overblown. However, you don't need a full general AI to displace an awful lot of jobs.
The jobs that will be replaced are all those that were based on needing those basic cognitive skills humans excelled at, but which were previously out of reach for technology. At the high end, you'll still need humans, but only at the very high end. You'll also still need humans for the manual-labor low end. But all the jobs in the middle are up for grabs, and even the high-end jobs will be downsized due to increased efficiencies. AI will be put in front of normal software in order to interface with the world. That in turn will make more data available for traditional automation software to work on, which will allow it to automate even more jobs.
In short, AI will eliminate a significant number of jobs. That doesn't mean we shouldn't go forward with AI, but it also doesn't mean we should stick our heads in the sand and pretend major job losses won't happen. In the end, we'd love to automate most things in order to live in a utopia where people don't have to work for basic necessities, but we have to be honest with ourselves that that is the end goal and plan accordingly.
John, you don't talk about startups anymore, that is what drew me to this channel
Love the trial and error in the title lol. (FYI, it went from “Will regulation kill ChatGPT?” to “Should ChatGPT be regulated?”.) ❤ your content John Coogan.
I like how this video proves that AI is not really that bad at all. Great video, man :]
Amazing video John! Thank you. 🙏🏻
Underlying all this is education. The public in general is way behind in understanding science and technology. Humans are wired to fear the unknown as a matter of survival. One key to unlocking society is proliferate education and opportunity. That is the biggest investment we can make in my opinion.
If AI comes out, the first to use it to harm people, just like Hiroshima, would be the US of A.
What about automobile drivers? Isn't that the most popular job in the USA? What happens if they are replaced by AI?
The major difference is who has access to this technology. Nuclear energy is a field that is not accessible to most people. In that way, AI is more similar to cars than airplanes.
This video is not so much about AI as it is about shilling for the nuclear power industry. It’s not well known that the Price-Anderson Act puts nuclear risks on the backs of taxpayers. If nuke power is so safe, let them include their own risk in the price per kilowatt-hour. The Energodar plant has the potential to contaminate all of Europe.
Nuclear requires special equipment, but anyone can develop AI in their basement with a consumer-grade computer. I don't think we can compare the two. It's too easy for bad actors to develop AI right now.
"Environmentalists" with their pockets in "green" energy (wind and solar) don't want to give up their HUGE pay day. That's what the regulation is coming from.
Nuclear power is a great example of the danger of regulation in causing civilization to miss the bus for decades.
Nope, ChatGPT is not a threat to humans. AI, in general, though, does need regulation before it becomes too capable for humans to control.
Yea, regulate it in the west, while the east (especially China) continues AI development. The west will be fucked lol
Europe has already decided that only big corps can afford to develop generative models... I'm looking for a comfortable democracy to welcome me.
Thank you for the video. I learned a lot 👍
Sam is doing no more than good ol' fashioned RENT SEEKING
The main issue for me is that it will be used to do jobs that humans are doing.
So, you're content doing a job a mindless robot can do better?
@@caty863 Well, if I were able to do something better and had the necessary skills, then no. But I don't, and I like what I do, so I would like to keep doing it.
@orangejuice9502 So what! You're not a robot, so you have the capacity to either get better at what you do or branch out into doing something else. If you can't do that, then you're holding back the economy, a freeloader, and you deserve to become obsolete.
@@caty863 I wouldn’t call myself a freeloader but isn’t it important that you like your work?
I would hate it if people will have to get licenses to get the metal that can work with AI learning
It certainly isn’t always fear that causes regulation. Regulation usually comes after there have been accidents, bad results because of loopholes, unintended consequences, etc. You make it sound like regulation is bad when you say it is only sparked by fear.
Hears comments about how AI won't take human jobs, then walks into Sam's Club and notices they have an old-fashioned floor-mopping machine driving around cleaning the store, with a doodad controlling the steering hooked up to a computer. Well, I'm pretty sure the guy who used to drive the floor-mopping machine just lost his job.
There are some minor points I disagree with. Nuclear technology infrastructure can be controlled more easily because it needs to be built on location. AI does not need a location because it can exist online. That can only be slowed down a bit if the country is a dictatorship like North Korea, where the government cuts all access to the internet. The cat's out of the bag; people have already tasted the benefits of AI, and while some fear it, I think a lot more have embraced it. There is no going back, and I don't think governments will be fast enough to effectively suppress AI, just like they were not fast enough to suppress blockchain technology. I would love to learn how you produce these videos; how big is the team for this YouTube channel? The quality is insanely good.
I agree with what you have just said. Another key point is that people can't really make nuclear weapons, since the radioactive materials needed to create them cannot be obtained by ordinary people, plus the location issue. Another thing with AI is that even if they regulate ChatGPT, learning AI is open to everyone, like learning Python, machine learning, and deep learning, and it's not limited by location, etc. Plus, I agree with what you said that people have already tasted the benefits of AI; therefore, those who learned how to create it will still create it. BTW, this video is extremely good; you'll learn the importance of history and its implications and applications to present times.
To me, the graph at 7:00 has no informational value. How many deaths, in what time period, please? And what counts as "air pollution"? What about "radioactivity"?
Studies done by multiple organizations, including NASA, have come to very similar conclusions. A meta-analysis by NASA concluded that millions of deaths have been prevented by nuclear power, and millions more could have been prevented if the world had progressed with nuclear power at the same rate France has. This is very similar to the vaccination debate, with very similar consequences.
I believe in the development of AI technology. But we are long overdue for a right to have our identity protected. I think we need a constitutional amendment.
I think AI can be built by every country, but nuclear can't be built by every country.
The countries anywhere close to the cutting edge will be tied to infrastructural and education advantages, like any software.
No such thing as AI. These LLMs are dumb as dirt, hallucinate often, and are nothing more than a gimmick.
Nuclear energy may not have caused anywhere near as many deaths as other forms of energy, but it does render places uninhabitable for thousands of years.
The USA has plenty of uninhabitable space to store nuclear waste. The biggest issue is that there are way too many hurdles to getting things done.
What's the ultimate full potential of AI?
Maybe someone has already said this, but it seems the video completely ignores the fact that so far there is no feasible way to store nuclear waste, which is incredibly bad for the environment. I don’t see how anyone could say nuclear is the safest form of energy. Where to put the waste is already a big issue.
Send me back to the year 2000. Sigh.
I also think one of the issues is that now everything is being called AI, even when it's simply not. It's now being used as a marketing tool because it brings in more investors.
Lmao no. A lot of it is AI, just not the type you're thinking of. We will have real Cortanas one day ;)
@@raynersclips4223 A lot of it is just algorithms, not actually AI. Of course there are a lot of AI programs, but if you really look into it, there are a bunch of projects that use "AI" but don't actually; it's just for buzzwords. It's the same thing that happened with cloud computing. Cloud computing is supposed to be about provisioning and managing dynamic virtual resources instead of dedicated hardware, but when it was a buzzword it was being used to describe anything that stored things over the internet, which is just a normal website...
AI as a vague term could technically refer to literally any autonomous algorithm, which is basically just everything that runs on a computer.
@@raynersclips4223 Nah, some stuff is more of a predictive machine, like the good ol' T9 predictive text, and other stuff is machine learning, like the Tesla Autopilot beta backend.
Sure, you can group 'em as AI, but it sure is confusing...
Neuroscientists and researchers say we haven't reached true AI, and I believe them more than the CEOs.
The title is incorrect.
Focus and attention will be a super power in the coming years. Those who can withstand the pull of addictive tech fueled by AI will be in a class of their own.
I wholeheartedly agree with your statement.
The US actually believes AI is the latest, greatest invention ever!
Always excellent !
AI is just code. You can't stop it even with regulation.
If the USA doesn't adopt AI, other countries certainly will
After watching The A.I. Dilemma, this video pales in comparison. They don't overlap much in terms of points, but where they do overlap in assertions, it's clear that The A.I. Dilemma was much better researched.
I do not like AI, I just don't. I feel like it will slowly become a way for CEOs to have robots in their stores rather than humans, to avoid paying them an hourly rate. And look at jobs in art, music, or anything that uses tech. Idk, I'm just not a fan. It can help, and it's not all bad, but I'm not very excited about it.
Pls add key moments for easy navigation
The increase of income and jobs available in Western capitalist countries is at the expense of the rest of the developing world.
There's a dishonest two-step that happens with nuclear power.
* Step 1: Look at how safe nuclear power is
* Step 2: Boo hoo. Why do we have all that regulation that made nuclear power safe?
We humans are programmed to adapt naturally. It is OK for us to feel threatened, but as a software engineer I can assure you of a better and safer future. Let's reduce the negative energy we direct toward AI and try diverting that same energy toward creativity. AI is here to stay!!
Thanks John! Some think we must protect ourselves from AI; the truth is we must protect AI from ourselves so we can all survive!
Radiation is dangerous.
It’s already out the bag
I think AI will give us better jobs but only if policy allows
But all it takes is one incident to make nuclear power a forever problem 🤷🏾♂️
No one can prove anymore that the comments here are not AI-generated.
No. It isn't.
He is dead wrong. AI is a threat like no other, because it will help governments take complete control, accompanied by phones through which they can hear and see everything a citizen does.
Hmmm, although I kinda like most of your videos, I don't think this one is 100% accurate, for many reasons.
Firstly, the reasons for the decline of nuclear power aren't just regulations; it's mainly the skepticism of the uneducated masses. There is another highly regulated industry, aviation, and it works without those catastrophic issues potentially caused by regulations, and most people in that industry even support these regulations (in principle; there are some concrete regulations that are highly controversial, for example the 1,500-hour rule, but that's kinda expected).
Secondly, even if we pretend that the collapse of the nuclear power industry was solely due to regulations, there is still a major difference between these two industries. The nuclear power industry is hardware-based, and to make it even worse, traditional nuclear power plants are designed almost from the ground up every time anyone wants to build one. This results in regulations being applied to each power plant individually, and each power plant requiring its own certification. In comparison, AI is a software-based product, so it requires only one certification whether it's going to be used by only 10 human beings or by the whole population of the EU or USA. This makes a major difference, because a huge nuclear power company has to get a huge number of certifications, while a huge AI company has to get the same number as a brand-new startup - exactly one certification per AI model. For this reason, it is quite easy for a big tech company to create an AI model that will still be economically viable, but it will be impossible for a startup without pouring a huge portion of its initial investment into this legal stuff.
And thirdly, my opinion might be unpopular, but I think the effect regulations can have against the spread of misinformation is very limited, because of one last major difference between nuclear energy and AI. I am slightly competent in the field of nuclear physics, but I still can't build a nuke or an unsafe nuclear power plant. There are two reasons for this: a) it is a quite capital-intensive task, but also b) it requires actual physical materials, which can be regulated. In comparison, building anything software-based, including AI, is a much less capital-intensive task, and, the main difference, the only two things required to create a deepfake AI are my time and my knowledge, and neither of those can be effectively regulated. Imo, I would take inspiration from the development of the internet, which wasn't regulated by anything like the currently proposed EU AI Act, and it still works, it didn't destroy humanity, and its biggest problem is monopolization, which could technically be improved by regulations, but certainly not by regulations like the AI Act or similar laws.
Personally, based on how much in favor of startups you are, I was actually quite surprised that you didn't mention the devastating effect of regulations on startups.
You can blame the fear mongering
I hope it gets regulated to shit tbh. But it won’t happen because other nations who do not take a moral path will be more powerful
Another Banger❗ The most consistent
Amazing video!
ChatGPT doesn't seem that smart, and it very passively responds to user input rather than the user responding to ChatGPT's output.
Yes, it should definitely be regulated. It is basically a form of plagiarism. And it's allowing people to be stupid and make money. Not only that, but it is so annoying to hear in videos that it makes my blood curdle.
You are the kind of person that John mentioned in the video. You don't understand AI when you say it is a form of plagiarism. You are afraid, so you don't like AI. For example, take people who write content on the Internet about a specific niche. They first read and investigate, and then write their own content. It is content based on what they have read, and they're not copying, are they?
Artificial intelligence does the same. So, no, AI is not plagiarizing; it's just able to "read" a thousand times faster than a person. This applies to everything.
4:34 It sounds like you think investing in safety is economically unprofitable. Then how many human lives, and how much of the environment, would have been destroyed if they hadn't? Has the number of accidents decreased?
What he’s talking about is very obviously more nuanced than a catch-all opinion that "investing" in "safety" is (always?) unprofitable.
One of my top 3 channels.
General AI is fine, but the current model of generative AI is just a leech. And you just can't put the cat back in the bag; hell, the Google search results for certain classics are jammed up with AI fakes.
There's some usefulness, but there will be a need for that regulation, especially on copyright...
How did you make that thumbnail? Thanks
Why does it feel like this is propaganda?
???
I don't know. Do you "feel" like the politicians and mainstream media saying that we must "regulate" AI is propaganda?
@@lemdixon01 toooo much of regulation may hinder the progress but also without regulation company may goooooooooo toooo wild like ( company with bad privacy facebook-meta , google ) . And even with regulation company always cross the lines.
@@theboy7440 on the other hand it stops people doing their own hombre inventions. Apple started out in a garage which that in itself wasn't a good thing because it lead to home computer being cheap and affordable for ordinary people. I'm very suspicious when the establishment say something is dangerous and as he explains in the video about nuclear power, it wasn't as dangerous as was made out with very few deaths and regulation has killed its potential to make cheap energy.
What is the supposed benefit of AI?
Fortunately, I believe AI is a pipe dream touted by blue-haired, hot-air feminist computer nerds.