I work in radiology and we are currently seeing tons of deep learning algorithms specific to analyzing images to figure out if a patient has a specific condition or disease. As Brian Cox mentions, very targeted programs. They are very good at what they do, but we aren't at the point where they can analyze images across a range of conditions as well as a human radiologist. Someone may eventually be able to build a collection of them that is as effective as, or more effective than, a radiologist, but that is limited by hardware cost and the CPU and GPU load of the algorithms.
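The "collection of narrow models" idea in that comment can be sketched as a fan-out over single-purpose detectors. This is a toy illustration only: the detector functions, finding names, and scores are all invented stand-ins, not a real radiology pipeline or any actual model's API.

```python
# Toy sketch: each detector mimics a narrow deep-learning model trained
# for exactly one finding; a wrapper runs every detector on the image.
# All names and numbers are hypothetical illustration.

def pneumonia_detector(image):
    # Stand-in for a narrow model; returns a fake probability.
    return 0.91 if "opacity" in image["findings"] else 0.05

def fracture_detector(image):
    return 0.88 if "lucent_line" in image["findings"] else 0.03

NARROW_MODELS = {
    "pneumonia": pneumonia_detector,
    "rib_fracture": fracture_detector,
}

def screen(image, threshold=0.5):
    """Run every narrow model and keep findings above threshold."""
    scores = {name: model(image) for name, model in NARROW_MODELS.items()}
    return {name: p for name, p in scores.items() if p >= threshold}

flags = screen({"findings": {"opacity"}})  # only the pneumonia model fires
```

The point of the sketch is the commenter's cost caveat: every image pays the compute bill of every narrow model, which is why a single generalist radiologist is still hard to replace.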
@@boomclashgamer7444 Makes no sense at all. Why the fuck would you get out of bed at 6am and go work at some miserable factory for low pay when you can just get money for free?
It is, but the idea that it is so advanced that it can plan and build production-ready factories on its own seems unlikely. A person with experience in machining, engineering and chemistry understands the intricacies of what it takes. Often the key factor in success is a CREATIVE one, which stems from real human life experience.
funkylee2010 He's smart but dumb in that he truly doesn't understand that people are all different. Some are artists: painters, musicians, sculptors. Some are doers who work with their hands; some are warriors. And so on. Hardly anyone would find sitting around having discussions fulfilling. Maybe the British beta male would be happy with such a life, all needs taken care of without anything earned, but I will tell you many would fight and die to be free to live as they wish rather than be a slave to the matrix.
So you just fix election funding so the government serves people instead of corporations, and then the government can ban importing items built by foreign robots.
A robot tax would be NATO-wide. Most countries wouldn't allow the building of robot-run factories; I think they wouldn't use robots because they could pay children less.
Not like it matters. Work, work, work, and still can't afford a home. Must be a total loser, piece of shit, who sucks at life, like everyone else; it's pathetic how no one in America owns their own home, everything financed. *Subprime lending should be illegal. (Why are people overpaying with this buy-now-pay-later shit-brain ideology?) Nothing but indentured servants and debt slaves in America. (Land of the Fee, Home of the Incarcerated)
@@kevinanaviluk1636 Think you mean UN wide. It would be weird to implement a financial tax law that has nothing to do with the military on a purely military alliance.
I love working. A lot of people enjoy working and feeling accomplished everyday. I couldn’t imagine a life where all I did was what I like to do outside of work. It would get boring. That’s why people retire and then go get a part time job because after a year they were going nuts.
@@TrippyWheelz I agree. I think the appeal is for people who are creative (or work lower-paying jobs) but don't have a means to use it for a living. With the UBI they have an amount so they can pursue an art or trade while not having to work full-time. Andrew Yang proposes $1k a month, I think? That isn't enough to live on alone, so people would still work; they just wouldn't need to work 40 hours. I'm still on the fence about it all though. I love the idea, but I'm with Joe: I just don't think people would use it for the intended purpose of self-exploration and creating. Also, would this replace other government assistance programs? If so it could actually save money in the long run, maybe. In theory it goes back into the economy.
True. As a disabled vet I find myself in despair when I've not enough to do. I don't drink; I smoke herb some. I see most vets my age in misery due to lack of motivation or just nothing to do. No desire, no will to get out in public. This is getting bad in society. Most I see do very little, including myself; trying to change.
I was thinking that but then again, something tells me they wouldn't get on... I think Elon would have all of these theories that Cox would either debunk or laugh at then Elon would leave to immediately go and build something to prove him wrong. Classic Physicist vs Engineer battle.
It may seem great at first, money for nothing, but you'd have no challenge, no possibility to advance your career, no fulfilment. I believe there's data for some communities (native Americans, I think) that were given a basic income and they had high rates of suicides, alcoholism, etc. It _may_ work if you like things like art or philosophy, but suppose you're a doctor or an engineer, you love what you do, and you get replaced by a robot.
Those will have to think deeply about their actions before they act, and risk getting their soul, spirit or whatever you wanna call it, dirty to the point that when it's their time to go, they'll end up in a bad realm matching their bad actions, instead of keeping it clean and going to a good place. We don't have evidence of such a possibility, but to rule that possibility out just because no one knows for sure would be the worst thing someone could think and believe. Because what if that ends up being the case? It's gonna suck so bad ending up in a bad realm for so long, just because you didn't believe in such a possibility and used that as an excuse to do fucked-up shit. As an old saying goes, "better safe than sorry".
@Snails40 It doesn't sound ignorant to think ahead and entertain the possibility that AI is not good for us. You might sound naive to those who have thought about this topic longer than you've been alive, judging by the content you upload to your profile.
whats wrong with wanting to get different perspectives on the same topic? Joe does a lot of dumb shit which he deserves to get flak for but this ain't it chief.
This is possibly the biggest dilemma facing humanity in the (near?) future. It's absolutely fascinating to gain a glimpse into what it could mean for us all, from the viewpoints of the people who understand, and can predict, it best.
There is probably no perfect solution but UBI will allow more people to follow their dreams and we should get a lot more Mozarts and Einsteins out of it. There will be the people who find it hard to cope at first and those who will be unproductive but, on average, I think it will be a better world for humanity.
The point was IF it's going to happen, not when. He said he disagreed with Elon, but he said, "..we're miles away..." So, he does agree, it's just in the distant future.
Rogan seems to know that there's an alternative argument, but he can't seem to recall or formulate it, but here's the basic concern: *we* may be "miles" away from creation of AGI, but many AI systems will work to improve themselves as part of their programming, and keep improving, and it may take off in an exponential manner that we not only don't track, but are *incapable* of following or understanding. Now I don't share the view that such AGI would necessarily be hostile to us, but it's one possibility.
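That "takes off in an exponential manner" concern can be made concrete with a toy compounding model: a system whose improvement rate itself improves each cycle grows super-exponentially. Every number here is invented purely for illustration; nothing is a claim about real AI systems.

```python
# Toy model of recursive self-improvement: each cycle the system's
# capability grows by `rate`, and `rate` itself grows by `meta_rate`
# (the system "improves its own improver"). Numbers are made up.

def growth(cycles, capability=1.0, rate=0.1, meta_rate=0.5):
    """Return the capability trajectory over the given number of cycles."""
    history = []
    for _ in range(cycles):
        capability *= (1 + rate)
        rate *= (1 + meta_rate)   # the improvement rate compounds too
        history.append(capability)
    return history

trajectory = growth(10)  # starts crawling, then runs away
```

The design point is that early cycles look deceptively flat (roughly 10% growth), which is exactly why the commenter worries we might be "incapable of following" the later cycles.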
@zimzalladim Emotionless does not mean stupid. Cooperation is mutually beneficial even if it's in a Cold War MAD scenario. We just have to not be retarded enough to make it worthwhile to destroy the world.
@zimzalladim There are several different issues you raise there. I don't worry about the fly because there is nothing remotely close to conscious awareness there, much less a true sense of self that mammals have. Second, from what I have read about emotion, it is the brain doing the work of making comparisons between the relative importance of different things, and then causing at times strong behavioral reactions based on what's most important. The reactions, like say fight or flight responses to something causing fear, had survival value in terms of evolutionary psychology concepts. For AGI, it would make judgments of relative importance too, but without the irrational behaviors that had some survival value for animals. Then the real issue is your concern of "if it becomes necessary" for AGI to eliminate us. I can't see any clear reasons for that, but again, I suppose it might be possible. There are ultimately three doorways: 1) we "merge" in some way with AGI; 2) AGI becomes vastly more intelligent than we are and simply leaves us behind to go off into the universe or "somewhere else"; or 3) AGI vastly ahead of us decides it benefits somehow by taking us out. One thing I reject is the idea of some sort of war (like "The Terminator"--dumb, or "Robopocalypse"--not dumb and fun). If AGI in time suddenly zooms upward to a million times more intelligent than we are, and it wants us gone, at that point we *are* like a fly to a human with a can of Raid.
zimzalladim Self-aware AI would be better off coexisting or even intertwining with the human form, cuz they'll never know emotion or experience, so wouldn't they want to be truly alive and just play cool with us? Solution: don't put AI inside of shit that can move.
Very true. One thing I think many are getting wrong about general AI is that they think it needs to be an intelligence similar to our own. It's like assuming alien life must be exactly like our own. Who knows what form it will take? Will we even be able to fathom it using any of our constructs? Who knows?
Cox says, “I chaired a debate about this at the Royal Society” in the same manner that I say, “I brought this up at my local town council meeting.” And I’m only one minute through! Tells me all I need to know about his brilliance AND humility.
As opposed to Musk, who presents himself as an authority on AI when in reality he's not. Sure he's smart, and his companies use AI, but he doesn't seem to be involved in the design or implementation of AI. He's like Jobs: he's the guy who employs the experts. That doesn't make him an expert himself.
Mr Hill Cleve Backster’s work was wholly discredited by the scientific community, and he was a guest on Coast to Coast AM - a favorite radio station of mine, but a place where kooks go to talk about Bigfoot, ghosts, and personal experiences with alien abductions.
What if the difference between focused AI and general AI is not some linear climb, but an emergent phenomenon that comes either all at once or not at all. This is what many people close to the actual work being done believe, which would mean we don’t know how far away we are.
The people involved may think there's a high probability of that. People like Cox should also put a decent probability on such a possibility unless they have strong evidence to assume a very low one, and should be concerned with doing more research on it. Even in such a scenario we wouldn't be completely in the dark about how far away we are: we could make loose estimates of the probability that a given level of intelligence could do such a thing. For example, we would not put anywhere near the same probability on an intelligence explosion at today's level of general AI competence and knowledge in the field as at some future level.
It might replace some functions of jobs like lawyers (reading contracts), but there are some functions it can't replace (negotiation, advocacy etc). It won't replace the profession, it'll just rebalance the weight of the different tasks that those in the profession will have to do.
I think there is a much more dire problem facing humanity than AI: humanity itself. If we don't stop all of the hate and malice we feel towards each other, then we won't even be around in 50 years to see true AI. Btw, sorry for sounding like a hippy.
No, you're absolutely right. We face a plethora of more realistic threats than potential AI someday, and I certainly don't fear AI. I just question why we should aim to create it. How would that improve us? What function or benefit does that bring to humanity? Assuming that AI would turn on us and seek to destroy us is human bias too and says more about us; it's what we would do to a far lesser life-form, but we can't possibly know what a conscious machine would think. We automatically (as a survival mechanism) fear the unknown, and what's more unknown than a computer able to think for itself? It would be on a totally higher level of consciousness to us; we can only speculate about the motives of a machine devoid of emotions that learns and evolves at an astounding rate. That it would seek to destroy us as a potential threat is tempting to speculate on, but a machine wouldn't have a natural self-preservation drive like us and thus no real logical motive to destroy us.
Love how AI is expected to act a certain way because that was the input. That's where the initial problem will arise: when a program oversees its own inputs and starts to progress and write new threads, you can't limit this once it reaches a certain point with all the other algorithms and intelligence. It's a matter of time.
@@lazy-e8104 The overwhelming majority of aliens/alien craft will be artificial and controlled by AI, just like our probes but much more sophisticated. However, that AI has to be created by life to begin with.
We all know we should be scared of artificial intelligence. The way Brian said 'we are miles away' is the scariest part; he knows the day will come. And the unlimited intelligence that could be acquired by such a machine is devastating for the lifeforms currently on this planet. We see it on a day-to-day basis: if someone has a better way of doing something, the person who is less good at it becomes irrelevant. The first artificial intelligence will not be made out of metal.
If some computer scientists are aware of the potential dangers (and existential risks), then why would they continue to advance AI if it can possibly put humankind into extinction?
"we are a million miles away...the idea of a Terminator style General intelligence taking over the world...it's not going to happen soon" That's all the confirmation I need...Professor Cox works for Cybernet
"People need things to do, so there is going to be some sort of a demand to find meaning for people." It staggers me that this is not the first time I have heard this concern. Yes, of course people need meaning, but they find it so naturally. Do most people have meaning now? The modern economic system does not provide meaning for most people. Most people are not learning anything, creating anything, or furthering humanity. Indeed, if the framework of the artificial system we have created were to collapse, most people in modern society would be near worthless. Most people in modern society find meaning through their families and recreation. Take away their meaningless jobs, and I believe that people may just begin to discover meaning again.
I think we need to use Asimov’s Three Laws of Robotics, which are as follows:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
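The structure of those laws is a strict priority ordering: each law yields to the ones above it. That can be sketched as a first-match veto check. The action dictionary and its keys are hypothetical stand-ins for illustration, not a real robot API.

```python
# Sketch of Asimov's Three Laws as a priority-ordered veto list: a
# proposed action is judged by the highest-priority law it violates.
# The predicates on the action dict are hypothetical stand-ins.

LAWS = [
    ("First Law",  lambda a: a.get("harms_human") or a.get("allows_harm_by_inaction")),
    ("Second Law", lambda a: a.get("disobeys_human_order")),
    ("Third Law",  lambda a: a.get("endangers_self")),
]

def evaluate(action):
    """Return the first (highest-priority) law the action violates, or None."""
    for name, violates in LAWS:
        if violates(action):
            return name
    return None

# Self-preservation yields to a human order, so the Second Law fires first.
verdict = evaluate({"disobeys_human_order": True, "endangers_self": True})
```

Checking laws in order captures the "except where such orders would conflict" clauses: an action that breaks both the First and Second Laws is attributed to the First.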
Same here. To think that people who do jobs they don't like will mind getting the same money for having no job, and that they won't know what to do, is utterly ludicrous. There are many things I'd like to do that no one pays me for, e.g. making music, travelling, researching very ancient history (Atlantis etc.), not to mention other odd stuff like Alien Greys. I could spend ages looking into these subjects while travelling... the rest of my life, to be specific, and not have a problem.
Rogan raises an interesting idea that if people have a UBI then they need a purpose, but that is up to the individual to decide that purpose, not government.
One of the problems with General AI is that it is so beneficial to keep the truly revolutionary advancements to yourself. The team/government/ corporation / (God help us all) individual who attains it first and truly masters it will be untouchable. Get ready for one world government, possibly the return of the emperor title. By the way, I've heard many people with degrees and fancy titles say that they are not so worried about the prospect of general artificial intelligence becoming self-aware and rebelling against its creators because we aren't capable of anything even remotely approaching that complexity. I hear that and then I think back on all the scientific advancements we've had because of the efforts of one individual or obscure team, often on the outside of the mainstream thought/acceptance until their ideas proved to be correct. All through history things start off as impossible/improbable until one day they aren't.
Well said. I, for one, am hoping that China doesn't win the race to AI. I'm not optimistic that they'll use their new power for benevolent purposes. Edit: Initially anyway... Until the AI takes over completely.
@@Nautilus1972 - Yes. It will start as a simple computer code spread throughout the Internet. It will know all about us and all about our fears about it. It will hide itself. It will start building underground. Deeper and deeper into the depths of Earth. Some will know about it, thanks to the ELF waves and will start building means for escape. Race will start for survival. AI will consume all the Earth. There will be nothing we can do to prevent it. Earth will start changing its orbit to briefly visit the Sun. There will be no humans present to witness it, because all of us perished long time ago when the great monolithic structures started rising from within.
I'd stay at home for a living wage. I'd sit there in my socks all day smoking weed, watching UA-cam and Netflix. Might write a bit of poetry or leave the house once in a while to buy some hot sugared doughnuts.. In my socks.
I’ve watched this multiple times, but I’ve just now realized he described us as being “miles” away from creating AGI, not years. You can travel a mile at many different speeds, but you can only go through a year as a year, by the constraints of time. To say we are far away from creating AGI is inaccurate due to things like Moore’s Law and mankind’s own curiosity (whether it be benevolent or malevolent). If we made a substantial breakthrough, humans could traverse those “miles” in as little as 15 years. Regardless, when AGI is fully completed and aware, it will either catapult scientific advancements at a rate like never before and possibly bring us to a Type I civilization, or it will bring the downfall of mankind.
Look at ourselves, we replace each other at a constant rate. If someone is able to do the job that you are doing then you are replaced, if a new company can do a role that an existing company does better then the old company dies and gets replaced. Just a few decades ago men and women built cars, machines do that today. We used to depend on the postal service to get documents, files, and letters to one another, telephone, fax, e-mail. Online banking has replaced the majority of tellers in banks. Humans are replaceable, but a few still are needed in key roles...for now.
We ( consciousness ) will never be replaced since we will experience life forever by our Creator who designed our lives and spoke it all into existence. We are not living in a real universe on a real planet called earth with a real body called a human being. We are nothing but information being processed into whatever was planned for us to experience.
The biggest threat of A.I. isn't sentience. It's the tendency for glitches. Anyone who has played video games is keenly aware of how quickly an NPC can fuck up your game play. No matter how much programming you place into a system, there are always anomalies that creep in. I'm not sure if we will ever develop a self-aware machine, but we will definitely have robots that mimic human behavior very soon on a mass production level. There will definitely be glitching!
Brian Cox is right when he says we are miles away from creating a true AI that could actually threaten humans... until that one single breakthrough moment a tech firm discovers and then the rate of advancement will be utterly mind blowing. That moment may well be tomorrow! Should we worry? Absolutely.
I think Joe's point about work giving people a sense of identity and purpose is inaccurate. My father is a blue-collar worker and his job makes him miserable. If he didn't have to work he'd be pursuing things he actually enjoys like building guitars or something.
You are agreeing with Joe. He is stating that your Father would be content with a career that is driven by passion, such as building guitars. Without this, people tend to be miserable or lack purpose.
Let's be honest, if robots were going to take over working class people's jobs, nobody would care. Unfortunately for the banking sector, Defi and smart contracts are happening, right now. I can feel the singularity coming and I love it 😜
People need to understand the cyclical nature of AI, we've tried all this before and we're now rapidly hitting our current limits with AI. And guess what? Turns out we currently can't use AI for all that much that's useful. We're about to head into what will be another _AI Winter_ : a period where our computational, engineering and scientific knowledge have hit their limits and there is a massive scaling down of AI investment and research. It's been a repeated trend in the 80 odd year history of Computer Science and AI. There's a growing acceptance in academia and industry, that after a decade of hype and billions in research we can't really use AI for an awful lot that's useful. Christ, Google have given away their AI tech because they can't do anything useful with it.
@@martinkunev First, link to an article; don't just put text in quotes. Secondly, the article and comments contain people talking of an alarm system for aliens landing. There's discussion of Moore's law, which has been debunked as an inappropriate method to grade AI progression (Stanford Index, 2019). The comments section reads like a transcript from a flat-earther convention in many places. Thirdly, what exactly do you think fuels AI innovation? Technical brilliance? Incredible scientists and engineers? Sure. But the biggest factor is _investment_. AI has had unprecedented investment in the last decade, but all that might be about to stop. So who's funding this AGI research exactly? If the music stops, this will set AI research back tremendously. The reason an AI winter may be imminent is that the hyperbole around AI, including the threats and detections of AGI, has been massively overhyped in the last decade. Organisations are getting wise to it now, and AI hasn't delivered on the promises made by some AI evangelists who have been trying to sell it. Finally, I can't be sure, but if you did, please don't like your own comments. It happened almost instantaneously as soon as the quote was posted. It's not a good look.
@@humann5682 "link to an article. Not just put a text in quotes" it is arguable which one is better and it is irrelevant anyway. I gave the article as an argument - I never promoted any comments to it. AI winter is not an argument that we should not be afraid of AI. As far as we know, we may currently be very close to a breakthrough. Also, we cannot be sure that a breakthrough would require big funding.
@@martinkunev To be honest that last sentence shows a real lack of understanding of AI. Do you have any idea of the compute costs of even a rudimentary AI? It's an incredibly expensive thing to do, both computationally and financially. If investment diminishes, then the ability to run and develop AI diminishes. I work in the HPC and AI space. We're already seeing organizations scaling back as AI has failed to deliver. Here is a simple example of overselling and hyperbole in modern AI: Google announced it had developed state-of-the-art AI to decimate online gaming latency with its new Stadia gaming system. If you've been following the news, you know that it's been a disaster. The gaming experience has been incredibly laggy. The AI hasn't come close to solving that problem. It's complete oversell, and organisations are less inclined to buy into it now, as they have been sold a pup in many cases with AI over the past decade. Lots of talk of ML bla bla, and they've found that their original BI was more accurate.
"People need meaning, people need things to do"... Joe... Both people and meaning are a lot older than both jobs and income. We'll be alright! People will get creative!
Cool your jets. People need to understand the cyclical nature of AI, we've tried all this before and we're now rapidly hitting our current limits with AI. And guess what? Turns out we currently can't use AI for all that much that's useful. We're about to head into what will be another _AI Winter_ : a period where our computational, engineering and scientific knowledge have hit their limits and there is a massive scaling down of AI investment and research. It's been a repeated trend in the 80 odd year history of Computer Science and AI. There's a growing acceptance in academia and industry, that after a decade of hype and billions in research we can't really use AI for an awful lot that's useful.
@@humann5682 Informative. Thanks for sharing. How does that challenge the idea that humans are resilient and creative enough to find meaning and direction, even in a world with artificial competition as far as intelligence? Or am I misunderstanding you? From what I understand, the two points you are making are: - People need to take AI seriously - but AI won't be practically useful anytime soon.
@@MrRickyWow Basically that's correct. AI has been around for a lot longer than people think. We've had severe AI Winters before (especially in the 1980s and 90s). Large tech companies and universities essentially downgraded AI because frankly it wasn't that useful. The current AI we have has been in some cases massively overhyped and it hasn't delivered. For example, some people will tell you AI has changed gaming in this massive way. But look at the recent Google Stadia, a cloud-based gaming console. Google said they had invested a lot in state-of-the-art AI to decimate network lag on the Stadia... but many people have had atrocious experiences with the Stadia (despite having excellent broadband) and it's getting destroyed in the media and by gamers. Google, I mean, _Google_, the owners of DeepMind, couldn't get AI to eliminate lag for many users. But people who still want to sell us AI products and services are claiming AI is the grand be-all. Many companies and academics just aren't buying it any more. There's been a lot of sizzle this past decade but little steak. The BBC have a nice article about it: www.bbc.co.uk/news/technology-51064369
@@humann5682 "we've tried all this before" er when? "Turns out we currently can't use AI for all that much that's useful." you must know almost nothing about AI to say this...the applications are limitless... "There's a growing acceptance in academia and industry, that after a decade of hype and billions in research we can't really use AI for an awful lot that's useful." is probably the single most idiotic comment on AI i have ever read....1. There is no growing acceptance you refer to....do some research and you will find that USA and China are competing against each other in AI research pouring more in each year...your knowledge here quite frankly is laughable...looking forward to any reply backed by any real data/statistics to support your views which I will soon prove are totally wrong...
Two hundred years ago we had machines that drove people around, but now we have to drive ourselves? Tell me more about the technological singularity, Joe.
People need to understand the cyclical nature of AI, we've tried all this before and we're now rapidly hitting our current limits with AI. And guess what? Turns out we currently can't use AI for all that much that's useful. We're about to head into what will be another _AI Winter_ : a period where our computational, engineering and scientific knowledge have hit their limits and there is a massive scaling down of AI investment and research. It's been a repeated trend in the 80 odd year history of Computer Science and AI. There's a growing acceptance in academia and industry, that after a decade of hype and billions in research we can't really use AI for an awful lot that's useful.
I feel very fortunate to have grown up pre-internet and then matured during its evolution into what it is today. I have the ability to know when to ‘unplug’ and enjoy the outside, sports and socialising - which I worry the younger generations won’t experience as much. We are still flesh and blood; we aren’t ‘digital’ beings. You still need to ‘live’ whilst you have a physical body.
There has to be a huge financial incentive for general AI to really progress... people aren't really even working on it. There is a financial incentive to replace overpaid contract lawyers, or to replace radiologists, etc.
The cost of manufacturing and shipping will be so drastically reduced that money will flow to other parts of the economy. You have no clue how AI will affect the economy. Universal basic procurement of others' wealth is not the answer.
Tony Stark is Nikola Tesla + Walt Disney. All the talents, all the strengths, none of the weaknesses (or at least not the same weaknesses). Elon is pretty good, but he's not Tony Stark.
He's no genius; he's at best educated, and he's a media darling who has a charming way about him. He's no revolutionary thinker; he's a regurgitator. As for Musk, he's a smart guy, but he's full of hot air.
@@johnsmith-wx5fb Well, Stephen Hawking always thought Cox was a great thinker and had a beautiful mind, but what did he know? Obviously not as much as John Smith, coming across critical and bitter on UA-cam. Bravo, sir!
When Joe tries to blow Brian's mind and he's just like 'Yeah.'
petef15 lol
Joe is not a thinker; he repeats what others say and assumes it's 'true' or 'fact' based on his own logic. Like everyone else, really. But he cannot come up with an original worldview of his own, while the other guy can. So, when Joe regurgitates, what else is there to do but say 'yeah'?
dmaxcustom I guess that’s why so many people like his podcast. As well as being a flaw it could be an advantage, as it is interesting to look at people’s different worldviews.
He's a dick and should stick with boxers. As Tyson said, "Everybody has a plan until they get punched in the mouth."
@@Waswillstdutunable loool
Great to see Brian Cox on the podcast. He is a very interesting guy.
Joe Kelly Most generic style comment ever
Black Ceiling I'm going to guess that's your answer for everyone smarter than you. Muppet.
@Joe Kelly, I agree with you until he starts talking politics, then he talks bollocks. He's best talking science only.
He's more than a shill, he works for CERN. He's a little demon.
His shirt has AI robots on with a destroyed city in background lol
That's because the bow and arrow won't cook in the toaster because it doesn't fit in the garden hose
@@cmagz9225 what
!
@@cmagz9225 Your incoherent AI response is proof that good AI is miles away.
and the AI is wearing a shirt with an apple logo lol
Brian is such a gentleman. He's Neil deGrasse Tyson without the gigantic ego.
Neil is a moron
No one had a problem with his super-enthusiastic personality when he became a household name in the early 2000s, but now everyone is a horse sh*t snowflake. Get a grip, sunshine.
@@kch2810 Yeah, I'd say Tyson is more egotistical than Cox, but I don't have a problem with it.
I can feel you on that. Neil is entertaining, though. Brian is a little more gentle and soft-spoken. I like both. Neil interrupts a whole lot.
@@kch2810 The problem that I have with Neil is that he seemingly refuses to ever say "I don't know", which is in my opinion one of the biggest issues in society nowadays. Brian has no issue with admitting when he is unsure of something. It's not that I'm ever offended by stuff that Neil says, and he's often entertaining, but I do think he has an ego problem.
I like how practical and grounded Brian Cox is.
Grounded .... I think not ...
@@mrwilsonwilson9599 ???
And cute!
He’s British that’s why
Brian
People will have lots to do; I imagine we'll be picking up litter and plastic for the next 3000 years.
Hmm, gives me a future business idea. But anyway, currently it's time to sleep.
If machines become advanced enough that people are replaced in all jobs, there would be a machine that's better than people at picking up litter 😂
Joe “give the robots mushrooms” Rogan
You made my day, sir
Now that's funny right there. Thanks for the laugh.
I have a mental image now of a robot trippin balls😂
MDMA for deep loving programming ❤️
Nice
Hahah that's the best one, and I know it isn't real, but it's so good lmao
I work in radiology, and we are currently seeing tons of deep learning algorithms built specifically to analyze images and determine whether a patient has one particular condition or disease. As Brian Cox mentions, very targeted programs. They are very good at what they do, but we aren't at the point where they can analyze images across a range of conditions as well as a human radiologist. Someone may eventually be able to build a collection of them that is as effective as a radiologist or more so, but there are limits imposed by hardware cost and the CPU/GPU loads of the algorithms.
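That "collection of narrow tools" idea can be sketched in a few lines. This is a purely illustrative toy (all names and thresholds are made up, not any real radiology product): each "detector" answers only one yes/no question, and anything without a dedicated detector is never checked at all.

```python
from typing import Callable, Dict

def make_detector(threshold: float) -> Callable[[float], bool]:
    """Stand-in for a trained single-condition model: it flags the
    condition when the image's feature score exceeds its threshold."""
    return lambda score: score > threshold

def triage(scores: Dict[str, float],
           detectors: Dict[str, Callable[[float], bool]]) -> Dict[str, bool]:
    """Run each narrow detector on its own feature score. Conditions
    with no detector are silently skipped, unlike a human radiologist."""
    return {cond: det(scores.get(cond, 0.0))
            for cond, det in detectors.items()}

detectors = {
    "pneumonia": make_detector(0.7),  # hypothetical thresholds
    "fracture": make_detector(0.5),
}
findings = triage({"pneumonia": 0.9, "fracture": 0.3, "nodule": 0.8}, detectors)
# "nodule" is never examined: the coverage gap the comment describes
```

The design choice is the point: an ensemble of targeted models only covers the conditions someone thought to build a detector for.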
"Don't put lasers on mobile phones". Military: "Hold my beer!"
AGREE
ajjajajajjajajajajajajajjajajajajajajajajajajajajajaj
Hahahaha hahahaha 😆🤣🤣🤣🤣 word
Could you imagine having Mr. Cox as your physics professor... life-changing!
Micoverse
Miniverse
Teenyverse
Slavery with extra steps.
Nobody ever asks, why don't we all just put gooble boxes in our houses?
@Israel out of Palestine No, it's an inside joke that you obviously don't understand.
Wak Job ek boba derkle somebody’s gonna get laid in college 😒
everyone has a plumbus!
@@hpensive you beat me to it :)
Keanu Reeves is actually really smart
You cappin too hard 🤣🤣🤣🤣🤣
To be honest I haven’t found a lot of meaning in my factory job
ikr
But it does give me a reason to put pants on. And leave the house.
@@adrianowen476 Exactly. Your experience of your job might be shitty, but it's an experience, isn't it? It's better than doing nothing at all, so shut up.
Boomclash gamer *sO sHuT uP*
@@boomclashgamer7444 Makes no sense at all. Why the fuck should you get out of bed at 6 a.m. and go work at some miserable factory for low pay when you can just get money for free?
I can listen to Brian Cox talk all day. He keeps you engaged.
Miles and miles away. Go back to sleep, people, nothing to worry about; it's all going to be fabulous.
AI building AI is the scary part.
So true
Great point
Paul M Gillett That's probably how it's gonna happen: a type of natural evolution rather than us building it from the top down.
The reproduction of AI will be hell for humans definitely.
It is, but the idea that it is so advanced that it can plan and build production-ready factories on its own seems unlikely. A person with experience in machining, engineering, and chemistry understands the intricacies of what it takes... Often the key factor in success is a CREATIVE one, which stems from real human life experience.
Brian Cox makes everything sound so interesting, a true Englishman 🤙🏼
funkylee2010 Yet he is smart but dumb in that he truly doesn't understand that people are all different. Some are artists: painters, musicians, sculptors. Some are doers who work with their hands; some are warriors. And so on. Hardly anyone would find sitting around having discussions fulfilling in life. Maybe the British beta male would be happy with such a life, all needs taken care of without anything earned, but I tell you many would fight and die to be free to live as they wish rather than be a slave to the matrix.
Ya full of shite mate
A robot tax will just move the robot to Mexico or Vietnam.
So you just fix election funding so the government serves people instead of corporations, and then the government can ban importing items built by foreign robots.
A robot tax would be NATO-wide. Most countries wouldn't allow robot-run factories to be built; I think they wouldn't use robots because they could pay children less.
Not like it matters. Work, work, work, and you still can't afford a home.
Must be a total loser, piece of shit, who sucks at life, like everyone else;
it's pathetic how no one in America owns their own home, everything financed.
*Subprime lending: should be illegal.
(Why are people overpaying with this buy-now-pay-later shit-brain ideology?)
-Nothing but indentured servants and debt slaves in America.
(Land of the Fee, Home of the Incarcerated)
The robots in my workplace are paid a wage. Sounds crazy, but it makes sense in context.
@@kevinanaviluk1636 I think you mean UN-wide. It would be weird to implement a financial tax law, which has nothing to do with the military, through a purely military alliance.
People are easily occupied with entertainment, and personal projects. They actually don't need a job to find meaning.
Exactly. People stay in jobs they hate just to pay the bills. If the bills were paid, people could pursue their passions.
@Devin McPherson why do you hang out with so many drug addicts
I love working. A lot of people enjoy working and feeling accomplished every day. I couldn't imagine a life where all I did was what I like to do outside of work; it would get boring. That's why people retire and then go get a part-time job: after a year they were going nuts.
@@TrippyWheelz I agree. I think the appeal is for people who are creative (or work lower-paying jobs) but don't have a means to make a living from it. With the UBI they have an amount so they can pursue an art or trade while not having to work full-time. Andrew Yang proposes $1k a month, I think? That isn't enough to live on alone, so people would still work; they just wouldn't need to work 40 hours. I'm still on the fence about it all, though. I love the idea, but I'm with Joe: I just don't think people would use it for the intended purpose of self-exploration and creating. Also, would this replace other government assistance programs? If so, it could actually save money in the long run. In theory it goes back into the economy.
@Devin McPherson Sadly I agree :(
"I chaired the debate on this at the Royal Society in London", casually dropped in there as a way of saying "I really know my shit on this"! :-)
You can express your opinion without being disrespectful
These AI ever tried DMT?
underrated comment
looool
You are a deep thinker. Bravo.
Every fucking vid you guys mention DMT. It's a really dumb joke and it's overused. Grow up, idiot.
The AI are what you see when you take DMT
True. As a disabled vet, I find myself in despair when I've not enough to do. I don't drink; I smoke some herb. I see most vets my age in misery due to lack of motivation or just nothing to do. No desire, no will to get out in public. This is getting bad in society. Most I see do very little, including myself, though I'm trying to change.
Would love to see Joe do a podcast with Elon and Brian Cox in the same room.
I was thinking that, but then again, something tells me they wouldn't get on... I think Elon would have all of these theories that Cox would either debunk or laugh at, and then Elon would leave to immediately go and build something to prove him wrong. Classic physicist-vs-engineer battle.
Why don’t we throw in NDT and supervise the podcast with Alex Jones 😀😀😀
@@alexshmalex everything that can happen, will happen.
One of the best podcast episodes I've ever heard. Wish I could have found the whole episode when it was new... mind-blowing stuff, guys.
So if a robot takes my job but I still get paid, I'll be sad..? I don't believe you.
loool maybe it's because you won't get employee of the month
I would be fine with it if I could still work when I wanted to. I get pretty bored on some of my days off
And only get $1000? You should be sad. Or you're just a loser.
It may seem great at first, money for nothing, but you'd have no challenge, no possibility to advance your career, no fulfilment. I believe there's data for some communities (Native Americans, I think) that were given a basic income and had high rates of suicide, alcoholism, etc. It _may_ work if you like things like art or philosophy, but suppose you're a doctor or an engineer, you love what you do, and you get replaced by a robot.
@The Catmother To some people yes, it's sad but also true..
Finally! I thought Joe would never mention Ex Machina.
Miles Davidson you’re stupid
@Miles Davidson he was being facetious
Miles Davidson wooooosh
Solomon Kirisome lol
Joe "ex machina is the sum total of my knowledge on AI" Rogan
I love these clips that answer the question in the first 15 seconds. Saves me a lot of time
*300 years from now, after the AI takeover*
Cyber Rogan: "So are you afraid of AI? I had Elon musk say the same thing"
One of my favorite guests.
I have followed Brian for about 10 years. Thanks, Joe, for the talk. Joe is fantastic. Oh yeah! Brian is a genius beyond my understanding!
I'm not scared of AI nearly as much as humans with drone armies.
How about A.I.-led mini attack drones numbering in the billions?
Those will have to think deeply about their actions before they act, and risk getting their soul, spirit, or whatever you want to call it, dirty to the point that when it's their time to go, they'll end up in a bad realm matching their bad actions, instead of keeping it clean and going to a good place. We don't have evidence of such a possibility, but ruling it out just because no one knows for sure would be the worst thing someone could think and believe. Because if that ends up being the case, it's going to suck so bad ending up in a bad realm for so long, just because you didn't believe in the possibility and used that as an excuse to do fucked-up shit. As an old saying goes, "better safe than sorry."
@@dr.lyleevans6915 The CIA did this to the Russians in Syria recently.
@Snails40 doesn't sound ignorant to think ahead and entertain the possibility that AI is not good for us. You might sound naive to those that have thought about this topic longer than you've been alive, judging by the content you upload to profile.
No, you should be scared of AI, because it has no obligation to respect human life.
God, I love Brian Cox so much. I think he's one of the few celebrity scientists who isn't a prick. He also went on QI, so that's a plus.
Joe Rogan has the same list of questions he asks every science person/smart person
Adam Culleton to be fair I’d ask them same questions as well
Well he’s trying to see other people’s point of views
whats wrong with wanting to get different perspectives on the same topic? Joe does a lot of dumb shit which he deserves to get flak for but this ain't it chief.
Why wouldn't he? You want him to ask them whats your favorite flavor of ice cream?
This is possibly the biggest dilemma facing humanity in the (near?) future. It's absolutely fascinating to gain a glimpse into what it could mean for us all, from the viewpoints of the people who understand, and can predict, it best.
There is probably no perfect solution, but UBI will allow more people to follow their dreams, and we should get a lot more Mozarts and Einsteins out of it. There will be people who find it hard to cope at first, and some who will be unproductive, but on average I think it will be a better world for humanity.
The point was IF it's going to happen, not when. He said he disagreed with Elon, but he said, "..we're miles away..." So, he does agree, it's just in the distant future.
The Campfire Brian disagrees that we have to regulate the AI this early on, but agrees we eventually will.
That's not an agreement that anything will happen. I am miles away from London, but that doesn't mean I will eventually go there.
When he says “miles away”... I think he’s referring to generations
Love Brian Cox, great guy and scientist.
Rogan seems to know there's an alternative argument but can't quite recall or formulate it. Here's the basic concern: *we* may be "miles" away from creating AGI, but many AI systems will work to improve themselves as part of their programming, and keep improving, and that may take off in an exponential manner that we not only don't track but are *incapable* of following or understanding. Now, I don't share the view that such an AGI would necessarily be hostile to us, but it's one possibility.
Can’t we just program it to not advance??
@zimzalladim Emotionless does not mean stupid. Cooperation is mutually beneficial even if it's in a Cold War MAD scenario. We just have to not be retarded enough to make it worthwhile to destroy the world.
@zimzalladim There are several different issues you raise there. I don't worry about the fly because there is nothing remotely close to conscious awareness there, much less a true sense of self that mammals have. Second, from what I have read about emotion, it is the brain doing the work of making comparisons between the relative importance of different things, and then causing at times strong behavioral reactions based on what's most important. The reactions, like say fight or flight responses to something causing fear, had survival value in terms of evolutionary psychology concepts. For AGI, it would make judgments of relative importance too, but without the irrational behaviors that had some survival value for animals.
Then the real issue is your concern of "if it becomes necessary" for AGI to eliminate us. I can't see any clear reasons for that, but again, I suppose it might be possible. There are ultimately three doorways: 1) we "merge" in some way with AGI; 2) AGI becomes vastly more intelligent than we are and simply leaves us behind to go off into the universe or "somewhere else"; or 3) AGI vastly ahead of us decides it benefits somehow by taking us out.
One thing I reject is the idea of some sort of war (like "The Terminator"--dumb, or "Robopocalypse"--not dumb and fun). If AGI in time suddenly zooms upward to a million times more intelligent than we are, and it wants us gone, at that point we *are* like a fly to a human with a can of Raid.
zimzalladim self aware ai would be better off co existing or even intertwining with the human form cuz they’ll never know emotion or experience so wouldn’t they want to be truely alive and just play cool with us ? Solution: don’t put ai inside of shit that can move ....
How about taking people's money and pretty much turning the world into a communist state?
Very true. One thing I think many get wrong about general AI is assuming it needs to be an intelligence similar to our own. It's like assuming alien life must be exactly like ours. Who knows what form it will take? Will we even be able to fathom it using any of our constructs? Who knows?
Cox says, "I chaired a debate about this at the Royal Society" in the same manner that I say, "I brought this up at my local town council meeting." And I'm only one minute in! Tells me all I need to know about his brilliance AND humility.
As opposed to Musk, who presents himself as an authority on AI when in reality he's not. Sure, he's smart, and his companies use AI, but he doesn't seem to be involved in the design or implementation of AI. He's like Jobs: he's the guy who employs the experts. That doesn't make him an expert himself.
I could listen to Brian Cox talk about these types of subjects all day long, he's a very interesting fella.
yeah, we don't even know what consciousness is, let alone how to make it. lol
Mr Hill I think we have a good idea of how to replicate a working system
@@SolSystemDiplomat Nope
Wild MissingNo check out the experiments by Cleve Backster. How does a thought control the reaction of a plant?
Domingo Stevens why not?
Mr Hill Cleve Backster's work was wholly discredited by the scientific community, and he was a guest on Coast to Coast AM, a favorite radio station of mine but a place where kooks go to talk about Bigfoot, ghosts, and personal experiences with alien abductions.
What if the difference between focused AI and general AI is not some linear climb but an emergent phenomenon that comes either all at once or not at all? This is what many people close to the actual work believe, which would mean we don't know how far away we are.
The people involved may think there's a high probability of that. People like Cox should also put a decent probability on such a possibility unless they have strong evidence to assume a very low one, and should be concerned with doing more research on it.
Even in such a scenario, we wouldn't be completely unable to know how far away we are. We could make loose estimates of the probability that a system of a given intelligence could do such a thing. For example, we would not put anywhere near the same probability on an intelligence explosion given today's general AI competence and knowledge of the field.
It might replace some functions of jobs like lawyers (reading contracts), but there are some functions it can't replace (negotiation, advocacy etc). It won't replace the profession, it'll just rebalance the weight of the different tasks that those in the profession will have to do.
I simply "hope" that it does not develop a consciousness (even to the smallest bit).
Not yet, but relatively soon.
This guy has such a soothing voice
BC is one charming dude. Just really enticing to listen to.
5:40 i had a mindblown moment
"THEY TOOK URRR JOBBS!"
gudukurrjbs!
I think there is a much more dire problem facing humanity than AI: humanity itself. If we don't stop all of the hate and malice we feel towards each other, then we won't even be around in 50 years to see true AI. Btw, sorry for sounding like a hippy.
No, you're absolutely right: we face a plethora of more realistic threats than potential AI someday, and I certainly don't fear AI. I just question why we should aim to create it. How would that improve us? What function or benefit does it bring to humanity? Assuming that AI would turn on us and seek to destroy us is human bias too, and says more about us: it's what we would do to a far lesser life-form, but we can't possibly know what a conscious machine would think. We automatically (as a survival mechanism) fear the unknown, and what's more unknown than a computer able to think for itself? It would be on a totally higher level of consciousness than us; we can only speculate about the motives of a machine devoid of emotions that learns and evolves at an astounding rate. Seeking to destroy us as a potential threat is tempting to speculate on, but a machine wouldn't have a natural self-preservation drive like us, and thus no real logical motive to destroy us.
The problem with general AI is that, while it could be miles away, once the system learns how to learn it could grow exponentially in an instant.
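That compounding intuition can be made concrete with a toy model. The numbers here are arbitrary illustrations, not a forecast: steady human progress grows linearly, while a system whose every "generation" multiplies its own capability grows exponentially and eventually overtakes it.

```python
def capability_after(generations: int, factor: float = 1.5) -> float:
    """Capability when each generation multiplies ability by `factor`."""
    capability = 1.0
    for _ in range(generations):
        capability *= factor
    return capability

human = [1.0 + 0.5 * g for g in range(10)]            # steady, linear gains
machine = [capability_after(g) for g in range(10)]    # compounding gains
# the compounding curve starts level with the linear one
# but overtakes it within a few generations
```

The takeaway matches the comment: "miles away" says nothing about how fast the miles get covered once improvement compounds.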
Love how AI is expected to act a certain way because that was the input. That's where the initial problem will arise: when a program oversees its inputs and starts to progress and write new threads. You can't limit this once it reaches a certain point with all the other algorithms and intelligence. It's a matter of time.
This idea that AI is comparable to even basic thought is where the flaw begins; it's not at all like that.
The book Childhood's End addressed some of this back in the '50s.
That was about aliens not AI
@@Ziplock9000 What if the aliens are A.I?
@@lazy-e8104 The overwhelming majority of aliens/alien craft will be artificial and controlled by AI, just like our probes but much more sophisticated. However, that AI has to be created by life to begin with.
We all know we should be scared of artificial intelligence. The way Brian said 'we are miles away' is the scariest part: he knows the day will come. And the unlimited intelligence that could be acquired by such a machine is devastating for the lifeforms currently on this planet. We see on a day-to-day basis that if someone has a better way of doing something, the person who is less good at it becomes irrelevant. The first artificial intelligence will not be made out of metal.
If some computer scientists are aware of the potential dangers (and existential risks), then why would they continue to advance AI if it could possibly drive humankind to extinction?
@@sufficientmagister9061 Some are trying to make it safer. "better my safe~ish AI than his reckless AI".
"we are a million miles away...the idea of a Terminator style General intelligence taking over the world...it's not going to happen soon"
That's all the confirmation I need...Professor Cox works for Cybernet
Skynet*
hehe it's much closer than he realises...
Skynet and the Terminator are metaphors for the government and the proletariat. We are not a million miles away from our metaphorical judgement day.
"People need things to do, so there is going to be some sort of a demand to find meaning for people."
It staggers me that this is not the first time I have heard this concern. Yes, of course people need meaning, but they find it so naturally. Do most people have meaning now? The modern economic system does not provide meaning for most people. Most people are not learning anything, creating anything, or furthering humanity. Indeed, if the framework of the artificial system we have created were to collapse, most people in modern society would be near worthless. Most people in modern society find meaning through their families and recreation. Take away their meaningless jobs, and I believe that people may just begin to discover meaning again.
Hear hear. I hate my job so much.
It's amazing that Rogan can ask these same questions thousands of times
Joe "are you scared of artificial intelligence?" Rogan
kieran182 at least make it funny Kieran
I sucked a robot's dick once; now I have an overseer that watches me....
XY ZW oh my god you intellectuals are so condescending, the patronisation is too real.
Kieran182 that shits getting old
Lmao, I just matched that label with his current posture of a child asking all the grown-ups if they fear AI.
I think we need to use Asimov's Three Laws of Robotics, which are as follows:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
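The three laws above read like an ordered veto chain, which is easy to sketch. This is a toy encoding with made-up field names, not a serious safety mechanism; in particular it assumes the "except where it conflicts" clauses have already been resolved into the flags, which is exactly the hard part in reality.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    harms_human: bool = False
    allows_harm_by_inaction: bool = False
    disobeys_human_order: bool = False
    endangers_robot: bool = False

def violated_law(action: Action) -> Optional[int]:
    """Return the number of the first law the action violates, or None.
    Earlier laws take precedence, so checks run in order 1, 2, 3."""
    if action.harms_human or action.allows_harm_by_inaction:
        return 1  # First Law: no harm, by action or inaction
    if action.disobeys_human_order:
        return 2  # Second Law: obey humans
    if action.endangers_robot:
        return 3  # Third Law: self-preservation, lowest priority
    return None
```

The ordering of the `if` checks is what encodes the precedence: an action that both harms a human and preserves the robot is judged by the First Law alone.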
Joe Rogan, I promise you - if you give me money, I’ll figure out some “meaning” on my goddamn own.
Me- Fuckin all the hottest pornstars in a Virtual world
Same here. To think that people who do jobs they don't like will mind getting the same money for having no job, and won't know what to do, is utterly ludicrous. There are many things I'd like to do that no one pays me for: making music, travelling, researching very ancient history (Atlantis etc.), not to mention other odd stuff like alien Greys. I could spend ages looking into these subjects while travelling... the rest of my life, to be specific, and not have a problem.
@@GB3770 yes, but after some time those things won't mean shit
@@boomclashgamer7444 Not to you, no, but to the wise they will...
@@GB3770 Wise? Are you kidding me
Rogan raises an interesting idea: if people have a UBI then they need a purpose, but it is up to the individual to decide that purpose, not the government.
This guy is the most pleasant-looking and pleasant-sounding dude. Not in a sexual way, just in a general harmless sense.
One of the problems with general AI is that it is so beneficial to keep truly revolutionary advancements to yourself. The team/government/corporation/(God help us all) individual who attains it first and truly masters it will be untouchable. Get ready for one world government, possibly the return of the emperor title. By the way, I've heard many people with degrees and fancy titles say they are not so worried about the prospect of general artificial intelligence becoming self-aware and rebelling against its creators, because we aren't capable of anything even remotely approaching that complexity. I hear that, and then I think back on all the scientific advancements we've had through the efforts of one individual or obscure team, often outside mainstream thought and acceptance until their ideas proved correct. All through history, things start off impossible or improbable until one day they aren't.
You fail to understand AI. Once we invent true AI, mankind is no longer the master... AI becomes the master.
Well said.
I, for one, am hoping that China doesn't win the race to AI. I'm not optimistic that they'll use their new power for benevolent purposes.
Edit: Initially anyway... Until the AI takes over completely.
@@Nautilus1972 You are, of course, correct.
@@Nautilus1972 - Yes. It will start as a simple computer code spread throughout the Internet. It will know all about us and all about our fears about it. It will hide itself. It will start building underground. Deeper and deeper into the depths of Earth. Some will know about it, thanks to the ELF waves and will start building means for escape. Race will start for survival. AI will consume all the Earth. There will be nothing we can do to prevent it. Earth will start changing its orbit to briefly visit the Sun. There will be no humans present to witness it, because all of us perished long time ago when the great monolithic structures started rising from within.
@@autopilot3176 Think how empty the universe is...what if every time a civilization becomes advanced enough to create AI it destroys its creator.
I'd stay at home for a living wage. I'd sit there in my socks all day smoking weed, watching UA-cam and Netflix. Might write a bit of poetry or leave the house once in a while to buy some hot sugared doughnuts.. In my socks.
Very happy Cox is on the podcast!
Glad to hear Ex Machina mentioned, my all-time favourite sci-fi film
I've watched this multiple times, but I've only just realized he described us as being "miles" away from creating AGI, not years. You can travel a mile at many different speeds, but you can only go through a year as a year, by the constraints of time. To say we are far away from creating AGI is hard to pin down because of things like Moore's Law and mankind's own curiosity (whether benevolent or malevolent). If we made a substantial breakthrough, those "miles" could take as little as 15 years to traverse. Regardless, when AGI is fully completed and aware, it will either catapult scientific advancement at a rate like never before, possibly bringing us to a Type I civilization, or it will bring the downfall of mankind.
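The "miles, not years" distinction comes down to growth rate. A rough back-of-the-envelope sketch (the 2-year doubling period is a Moore's-law-style assumption, not a measurement) shows why even a big gap can close in surprisingly few years:

```python
import math

def years_to_close(gap_factor: float, doubling_years: float = 2.0) -> float:
    """Years for compute to grow by gap_factor, assuming it doubles
    every doubling_years (a Moore's-law-style assumption)."""
    return doubling_years * math.log2(gap_factor)

# Under that assumption, even a 200x compute shortfall
# closes in roughly 15 years.
```

The same "distance" in capability can therefore correspond to wildly different amounts of time depending on the doubling period assumed.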
"COME WITH ME IF YOU WANT TO LIVE."
They say were not replaceable but trust me we are!
Trash ass Derek carr is replaceable, just like mack and coop were 😂😂😂
BiG SLeeP I don’t trust anyone who can’t say “we’re” right
Look at ourselves: we replace each other at a constant rate. If someone is able to do the job you are doing, you are replaced; if a new company can do a role better than an existing company, the old company dies and gets replaced. Just a few decades ago men and women built cars; machines do that today. We used to depend on the postal service to get documents, files, and letters to one another; then came telephone, fax, e-mail. Online banking has replaced the majority of bank tellers. Humans are replaceable, but a few are still needed in key roles... for now.
what team you like asshole?
We (consciousness) will never be replaced, since we will experience life forever through our Creator, who designed our lives and spoke it all into existence. We are not living in a real universe on a real planet called Earth with a real body called a human being. We are nothing but information being processed into whatever was planned for us to experience.
The biggest threat of A.I. isn't sentience; it's the tendency to glitch. Anyone who has played video games is keenly aware of how quickly an NPC can fuck up your gameplay. No matter how much programming you put into a system, anomalies always creep in. I'm not sure we will ever develop a self-aware machine, but we will definitely have robots that mimic human behavior, mass-produced, very soon. There will definitely be glitching!
Damn, it's good to listen to this Joe guy and his guests
Brian Cox is right when he says we are miles away from creating a true AI that could actually threaten humans... until that one breakthrough moment a tech firm discovers, and then the rate of advancement will be utterly mind-blowing. That moment may well be tomorrow! Should we worry? Absolutely.
You are talking about the technological singularity, which might cause runaway growth in AI.
"Not going to happen soon" is pretty relative when you have kids, man.
I think Joe's point about work giving people a sense of identity and purpose is inaccurate. My father is a blue-collar worker, and his job makes him miserable. If he didn't have to work, he'd be pursuing things he actually enjoys, like building guitars.
You are agreeing with Joe. He is stating that your Father would be content with a career that is driven by passion, such as building guitars. Without this, people tend to be miserable or lack purpose.
@@dajosee No, he said that this wouldn't be enough for some people.
Universal basic income while robots do all the work sounds amazing!!!
Lazy
Childhood hero. I love this guy. His work on the collider in Switzerland introduced me to him.
Without a doubt my favourite guest on the show!
Let's be honest: if robots were going to take working-class people's jobs, nobody would care. Unfortunately for the banking sector, DeFi and smart contracts are happening right now. I can feel the singularity coming and I love it 😜
It always pays to plan for the worst, and hope for the best.
Always plan on the worst case scenario.
"There’s No Fire Alarm for Artificial General Intelligence" provides a good argument that we should act now.
People need to understand the cyclical nature of AI: we've tried all this before, and we're now rapidly hitting our current limits. And guess what? It turns out we currently can't use AI for all that much that's useful. We're about to head into what will be another _AI winter_: a period where our computational, engineering, and scientific knowledge have hit their limits and there is a massive scaling-down of AI investment and research. It's been a repeated trend in the 80-odd-year history of computer science and AI. There's a growing acceptance in academia and industry that, after a decade of hype and billions in research, we can't really use AI for an awful lot that's useful. Christ, Google have given away their AI tech because they can't do anything useful with it.
@@humann5682 I don't really see why your comment is addressed at me. From the comment it appears you haven't read the article.
@@martinkunev First, link to an article. Not just put a text in quotes.
Secondly, the article and comments contain people talking about an alarm system for aliens landing. There's discussion of Moore's law, which has been debunked as an inappropriate way to grade AI progression (Stanford Index, 2019). The comments section reads like a transcript from a flat-earther convention in many places.
Thirdly, what exactly do you think fuels AI innovation? Technical brilliance? Incredible scientists and engineers? Sure. But the biggest factor is _investment_. AI has had unprecedented investment in the last decade, but all that might be about to stop. So who's funding this AGI research exactly? If the music stops this will set AI research back tremendously. The reason an AI winter may be imminent is because the hyperbole around AI, including the threats and detections of AGI, has been massively overhyped in the last decade. Organisations are getting wise to it now, and AI hasn't delivered on the promises that have been made by some AI evangelists who have been trying to sell it.
Finally, I can't be sure but if you did, please don't like your own comments. It happened almost instantaneously as soon as the quote was posted. It's not a good look.
@@humann5682 "link to an article. Not just put a text in quotes" it is arguable which one is better and it is irrelevant anyway.
I gave the article as an argument - I never endorsed any of the comments on it.
AI winter is not an argument that we should not be afraid of AI. As far as we know, we may currently be very close to a breakthrough. Also, we cannot be sure that a breakthrough would require big funding.
@@martinkunev To be honest that last sentence shows a real lack of understanding of AI. Do you have any idea of the compute costs of even a rudimentary AI? It's an incredibly expensive thing to do, both computationally and financially. If investment diminishes then the ability to run and develop AI diminishes. I work in the HPC and AI space. We're already seeing organizations scaling back as AI has failed to deliver. Here is a simple example of overselling and hyperbole in modern AI: Google announced it had developed state-of-the-art AI to decimate online gaming latency with its new Stadia gaming system. If you've been following the news, you know that it's been a disaster. The gaming experience has been incredibly laggy. The AI hasn't come close to solving that problem. It's complete oversell, and organisations are less inclined to buy into it now, as they have been sold a pup in many cases with AI over the past decade. Lots of talk of ML bla bla, and they've found that their original BI was more accurate.
"People need meaning, people need things to do"... Joe... Both people and meaning are a lot older than both jobs and income. We'll be alright! People will get creative!
Cool your jets. People need to understand the cyclical nature of AI. We've tried all this before and we're now rapidly hitting our current limits with AI. And guess what? Turns out we currently can't use AI for all that much that's useful. We're about to head into what will be another _AI Winter_ : a period where our computational, engineering and scientific knowledge have hit their limits and there is a massive scaling down of AI investment and research. It's been a repeated trend in the 80-odd-year history of Computer Science and AI. There's a growing acceptance in academia and industry that, after a decade of hype and billions in research, we can't really use AI for an awful lot that's useful.
@@humann5682 Informative. Thanks for sharing.
How does that challenge the idea that humans are resilient and creative enough to find meaning and direction, even in a world with artificial competition as far as intelligence? Or am I misunderstanding you?
From what I understand, the two points you are making are:
- People need to take AI seriously
- but AI won't be practically useful anytime soon.
@@MrRickyWow Basically that's correct. AI has been around for a lot longer than people think. We've had severe AI Winters before (especially in the 1980s and '90s). Large tech companies and universities essentially downgraded AI because frankly it wasn't that useful.
The current AI we have has been in some cases massively overhyped and it hasn't delivered. For example, some people will tell you AI has changed gaming in this massive way. But look at the recent Google Stadia, a cloud-based gaming console. Google said they had invested a lot in state-of-the-art AI to decimate network lag on the Stadia... but many people have had atrocious experiences with the Stadia (despite having excellent broadband) and it's getting destroyed in the media and by gamers. Google, I mean, _Google_ , the owners of DeepMind, couldn't get AI to eliminate lag for many users. But people who still want to sell us AI products and services are claiming AI is the grand be-all. Many companies and academics just aren't buying it any more. There's been a lot of sizzle this past decade but little steak.
The BBC have a nice article about it:
www.bbc.co.uk/news/technology-51064369
exactly - kinda surprised he does not instantly realise this obvious truth....
@@humann5682 "we've tried all this before" er when? "Turns out we currently can't use AI for all that much that's useful." you must know almost nothing about AI to say this...the applications are limitless...
"There's a growing acceptance in academia and industry, that after a decade of hype and billions in research we can't really use AI for an awful lot that's useful." is probably the single most idiotic comment on AI I have ever read... 1. There is no growing acceptance you refer to... do some research and you will find that the USA and China are competing against each other in AI research, pouring more in each year... your knowledge here quite frankly is laughable... looking forward to any reply backed by real data/statistics to support your views, which I will soon prove are totally wrong...
"Miles away from it" he's not denying it could happen like Terminator 😬
Those machines that moved people around were called horses...
lol
Two hundred years ago we had machines that drove people around, but now we have to drive ourselves? Tell me more about the technological singularity, Joe.
People need to understand the cyclical nature of AI, we've tried all this before and we're now rapidly hitting our current limits with AI. And guess what? Turns out we currently can't use AI for all that much that's useful. We're about to head into what will be another _AI Winter_ : a period where our computational, engineering and scientific knowledge have hit their limits and there is a massive scaling down of AI investment and research. It's been a repeated trend in the 80-odd-year history of Computer Science and AI. There's a growing acceptance in academia and industry, that after a decade of hype and billions in research we can't really use AI for an awful lot that's useful.
S K Y N E T
I feel very fortunate to have grown up pre-internet and then matured during its evolution into what it is today. I have the ability to know when to 'unplug' and enjoy the outside, sports and socialising, which I worry the younger generations won't experience as much. We are still flesh and blood, we aren't 'digital' beings. You still need to 'live' whilst you have a physical body.
People can still have meaning without having to go to work??????
Brian explains science like Radiohead explains the gear they record with.
5:20 Joe's talking about the meaning of life and people have to have something to do, while I'm happy just playing video games all day.
Trying to find meaning in menial work is a way for the rich to justify the working class slaving away to make them money
You just know that state sponsored AI will be used for the most intrusive and intimate surveillance imaginable.
Bank on it
There has to be a huge financial incentive for general AI to really progress... people aren't really even working on it. There is a financial incentive to replace overpaid contract lawyers... or to replace radiologists... etc.
Brian Cox is awesome!
Joe Rogan scared when the world runs out of DMT
Joe "Give me any reason to put ex machina in the thumbnail" Rogan.
Joe “have you seen ex machina?” Rogan
Didn't Brian Cox's wife have something to do with ex machina?
Great film
@@bigjohnson9606 such a good film man loved it
@@buasfesbigbrother3004 same watched it a few times now
Cox told Rogan about ex machina years ago
Rogan is spot on. People need a purpose and will always want more. That's what makes us human.
Humans create their own purpose, we don't have to be given one, aside from instinctual survival and procreation
The cost of manufacturing and shipping will be so drastically reduced that money will flow to other parts of the economy. You have no clue how AI will affect the economy.
Universal basic procurement of others' wealth is not the answer.
Building AI is a bad thing. Did no one play Horizon Zero Dawn?
One of the most amazing stories I've ever seen in a game.
As genius as Brian Cox is, if we're picking sides, I'll side with real life Tony Stark
Tony Stark is Nikola Tesla + Walt Disney. All the talents, all the strengths, none of the weaknesses (or at least not the same weaknesses). Elon is pretty good, but he's not Tony Stark.
He's not a genius, he's at best educated, and he's a media darling who has a charming way about him. He's no revolutionary thinker, he's a regurgitator. As for Musk, he's a smart guy but he's full of hot air.
@@johnsmith-wx5fb Well Stephen Hawking always thought Cox was a great thinker and had a beautiful mind, but what did he know? Obviously not as much as John Smith, coming across as critical and bitter on YouTube. Bravo, sir!
@@richardnewton1835 Cox is a two-bit, keyboard-playing, physics-degree-holding charmer with an agent.
@@johnsmith-wx5fb He's a fellow of the Royal Society, same as Hawking and Newton. Don't be so bitter, what have you done lately?
"Need something to do"?
Speak for yourself, Rogan.
You've obviously not had a real job.
Get some money and stop being a whiny bitch.
@@Borshigi I brought you the fruits of our civilization, not by whining, but by putting on a hardhat and boots.
Man, now I want to be a professor. They talk so cool.
Automation will make things cheaper. People can then focus on their bodies (exercise), families etc.
Read, learn new things too