0:00: 🤖 OpenAI CEO discusses the potential of AI and the need for caution. 4:46: 🤖 The potential for AI is huge, but there are also potential negative consequences that need to be considered and mitigated. 9:03: 💻 OpenAI discusses the potential of AI and its impact on society. 12:33: 🤖 OpenAI's GPT technology has the potential to revolutionize, but also poses risks to society. 16:52: 🤖 OpenAI's CEO discusses the importance of getting AI technology right and the need for government attention and policy. 20:32: 👨‍🏫 OpenAI's GPT can be used to supplement learning and even act as a Socratic-method educator, but it puts pressure on teachers to detect its use in essays. Add these timestamps to the video so viewers can save time (summary generated with Tammy AI).
When he says: "The way we build these products is to be... an amplifier of humans." That right there is the salient point. The problem comes down to whether or not we can trust ourselves, not whether or not we can trust the technology. These things have the potential to amplify both negative and positive outcomes, and do this at unprecedented scale.
@@vznquest LLMs are input-driven, not self-directed. Arguably, so are we, if you accept the mounting evidence that free will is an illusion. It is true, though, that emergent properties start manifesting as these models become more complex, such as agent-like behavior, and that these may be too subtle for us to detect. We also have no idea what happens to society once the feedback loop is established where the models' outputs are our inputs, which in turn influence our outputs, which are then used to direct the models. It's not clear that this asymptotically converges on a positive outcome.
All illnesses will be eradicated; meanwhile, besides AI, the universe will still hold dangerous asteroids, supernovae, aliens, epidemics and whatnot. Actually, AI may save us from most of these events.
@@Savage2Flinch You're assuming that third party engineers cannot add a self-directing module to an LLM. Also, Bing Chat is already generating the inputs for the user, via multiple choice and auto-complete.
It doesn't give a perfect answer but it gives enough of a push in the right direction to overcome the paralysis of not knowing how to start. Definitely a game changer.
Programmers and technical coders need to look at the monitoring aspects of A.I. to subvert improper uses. Quantum computers will be much more difficult to control than today's processors.
Yep, we will get to this point in life: courts won't be able to just accept a video as proof of a crime. I am curious how they are going to investigate crimes from now on.
What a privilege it is, as well as a unique challenge, to be living in a time when the world is changing, in 100 short years, more than it has in 6000 years prior. To be a personal witness to so much history in the making. Excellent questions, excellent interviews, very balanced. Thank you.
The next 60 years hold unparalleled potential, for both good and evil. Questions of morality and the meaning of our existence will rise once more. Nietzsche's 'death of god' analogy will be of the highest import, and the ramifications of humanity's understanding of what has kept us alive for thousands of years will be shown through civilisations' future actions. What a time to exist..
I'm glad you didn't mention if it's for the better or for the worse. Sure it's changing, but given we're destroying what made life possible on earth in the first place for something as superfluous as currency, we can conclude it's indeed for the worse. What's the use of the greatest technology if the consciousness and morality of mankind is lost?
While AI has the potential to revolutionize many industries and improve quality of life, it also carries significant risks such as job displacement and amplification of biases. It's crucial for organizations and policymakers to proactively address these challenges and work towards a future where AI is used responsibly and for the greater good of society. We need to embrace and adapt.
Lol that is the biggest pile of hyperbolic drivel I have heard in a while. Organizations and Policymakers care about one thing: serving the Elite. Serving the Fiefdom of the Elite they have created for themselves.
It’s obvious that, internally, OpenAI team already is several versions ahead of the current GPT4 and they are slowly trickling it out so as to not scare everyone. It’s also obvious that “we are doing our best to limit it” translates to “it will not be possible to contain it”.
That’s a conspiracy I would believe. Although, if it were truly limitless, I feel like there would be some noticeable mistake that could trigger a mass hysteria of sorts.
@@TheeColdpleeWeezer I would then point you to the Microsoft research paper recently released on AI and ChatGPT4. They EXPLICITLY state that limitations were placed on this version and even show prompts that are not allowed in the version we got access to.
When you are talking to people that want to take your sentences out of context and blow things out of proportion, you start to pick your words extremely carefully and make it as digestible as possible
“The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology. And it is terrifically dangerous, and it is now approaching a point of crisis overall”. Edward O. Wilson (2009)
I interpret this very differently. He said this in 2009, yet the technology has gotten far more powerful and we are still around. Maybe this is indicative of our fear of the unknown rather than of the dangers of AI? I'm also curious how people reacted when the internet was first mentioned.
ChatGPT 4 & Khan Academy creating tools to tutor students and empower teachers is what I’m excited for. I’m not a teacher but I know America’s education system is struggling and if teachers can leverage a tool like this to help structure and enhance their curriculum then this is a win for students. I’m glad OpenAI even created a tool to gauge if an essay was written by ChatGPT so teachers can find out.
Khan Academy's AI chatbot is a joke 😂 Students ask the chatbot and it will answer not to cheat. What people do instead is just open a different browser and cheat regardless, using other AI chatbots lol
If you have students write in class, you will have a sample of their capabilities and style. I think it should be fairly easy to determine if they produced the written work outside of the classroom.
Take it further: when the next iterations come out and it can score 100% on tests, then add in the NLP and digital assistants we have, layer/combine with models like Wolfram Alpha and OCR (handwriting recognition), and you could have the world's best teacher in your pocket, customising lessons and learning, analysing how you learn, and accelerating our learning. It is so exciting… maybe not so for teachers of the next two generations.
As tech people, it's easy to laugh at questions like "why develop AI at all? Is it sentient?" But I have massive respect for Sam for sincerely answering questions like these. Ultimately, AI is no good if it cannot help every single person, regardless of their technical background.
To say something is not sentient, we must know what sentience is, then see if it matches. Not sure about everyone else, but I don't know what constitutes sentience. Do you? And what about the nature of AI precludes it from being sentient?
@@andersreibak787 Sure, so tell me more about the challenge. I'm interested in thinking through the sentience aspect as it is one of my interests, but I should say, I don't believe whether AI is sentient or not is the core issue. I think that not knowing right from wrong is the core issue. Building things the way we do (without an answer to what is right and wrong) is irresponsible. But what is this challenge you mention? I'm not a computer student, so if it's very tech-heavy I might not be able to complete it, but I'm not scared of reading a book or two to understand your point either.
@@jessedavies She could have asked if the push to develop AI has anything to do with the Fed's statements that they want to 'cool the labor market'. I wish she would ask: 'Does being capable of building something give you the right to do it? If not, where does the right to develop a tech come from? Is there any effect on the rights one has to build if their project will likely affect every person on the planet?'
My father-in-law has been a mechanic for 40 years. He recently said that his skills will be useless in the next decade because of EVs. What took more than 40 years for their industry to change is taking mine 4 months (since ChatGPT was released), and it's not slowing down. The transition time frame with AI is what worries me. The time farmers, mechanics, horse keepers, etc. had to figure out what to do with their skills, or what new job opportunities would arise in their industry, won't apply to us, because we won't have the same transition time they had. Everything will change faster than we can all imagine, society will have trouble adapting, and it will cause chaos. Sam even admits that he worries about the speed of the change himself.
I am over 50 and have worked in IT my whole life, mostly self-taught: building entire networks, computers and servers, developing applications, web, databases, APIs, cloud, enterprise SaaS. I have always been able to learn the new stuff easily, until machine learning and AI. I am retaking algebra through calculus just to begin. I don't think I'll be able to keep up from now on. I feel very small, like I know nothing. Thinking of becoming a plumber.
I have also seen arguments that AI will likely liberate humanity to go back to inventing, innovating, and exploring opposed to just running the machines ourselves. It would be amazing for me to spend less time managing data like I do in my work and maybe get into woodworking or welding (two things I've always wanted to get into), whether for work or fun!
You're pinpointing the exact jobs that will last the longest: physical, dexterous work. It's more of a robotics issue than an A.I. one. Robotics development is moving quite slowly and is more likely to be restricted by the government for security/military reasons. Strangely, they don't seem to have the same worries about A.I.
It's not about the technology, sir, it's about AI misuse: AI shaping people without their knowledge, while residents with less education and knowledge are left behind. It's important for lecturers, university curricula, and university students. Many people around the world are interested in knowing what AI means for humans, for medical knowledge, and for government intelligence.
He seems to have a healthy respect for both the potential but also the dangers and misuse possibilities of AI. Who knows how he operates behind the scenes but what he shows here is both thoughtful and cautious about this new technology.
He is a con man. Reports have already come out that his "morality propaganda" is a tactic to pressure the government to institute strict regulations that will limit the company's vulnerability to competitors. He knows it's all BS. People don't need ChatGPT to write misinformation; they just write it.
This interview seems more honest compared to other CEO interviews, especially ones with high-profile companies, where they are trained by their brand relations manager to avoid and/or circumvent a specific question by broadly explaining something else and not addressing the question directly. Sam is doing a great job with this interview because he doesn't BS around any hard questions he's asked, especially the point where he addresses how all the negatives also have positives, especially jobs. It really is a matter of how to utilize the tool that's given, not fear it. If you do fear it, you generally are in fear of change and advancement, which honestly is never good in life or society as a whole.
To the OP's comment, that is such BS about fear of AI. For experts to fear AI does not mean they have "a fear of advancement and change". It means they are INTELLIGENT enough to realise that AI in the wrong hands could have dire consequences, to the extent that good people could be wiped out and evil could take over. Forces with bad intentions just need to know how to program AI to cause havoc in the world. People are more susceptible than ever to what's being said online. If bad actors take advantage of that it could have unimaginably negative consequences for humanity.
Human advancements were made by taking risks and exploring the unknown. If we didn't take risks, we would still be living in the safety and comfort of caves with a life expectancy of 30.
When experts are "scared" of AI, they are being reasonable, because they have experience and knowledge you probably don't, letting them foresee the destructive disruption AI will make. Already we see fake news generated by AI. That alone could pose a war threat in the future. Millions of lives will be affected badly. Who will profit from this "advancement" then? Just a tiny number of people!
@@davidguthrie3739 OpenAI was founded by Elon and Sam as a joint venture to explore AI tech. If it wasn't OpenAI, someone else was going to do it. He's being honest in the sense that the tech itself might not be fully in our control anymore. It's here, and there's really nothing that can stop it.
All of these AI guys talk about how society has to be involved in a discussion about setting limits and guardrails. Are they autistic? None of that talk is remotely realistic or even definable. Society is in the dark and completely ill equipped to manage this threat on any level. They are delusional when it comes to some sort of democratic decision making regarding the shape, role, or limits of AI. They’re completely abdicating their responsibility.
It’s amazing. For all that we have done with machines, simply teaching them to read may be the most revolutionary (possibly disastrous) thing in the history of mankind.
Tell that to your children: "You will have no human jobs. AI is there to destroy you. Look at Japan, where people are not having children and are headed toward extinction." Good luck 👍
That's been the dream for decades. I was excited by Cyc in 1984, an attempt to teach an AI the meaning of words so that eventually it would read and understand the newspapers and Encyclopedia Britannica on its own and then become an omniscient AI. 40 years later we feed a large language model all the information in the world, and instead of an omniscient Einstein or Edison or Spock, we get Stephen Fry: an erudite conversationalist who can summarize any topic as a rhyming poem. It's incredible, but not yet what we hoped for.
"Society has a limited amount of time to figure out how to react to that, how to regulate that..." - This statement is so soothing and gives me more than hope...
They should be. Creating something that will make you obsolete shows we lack the wisdom to recognize that just because you can do something doesn't mean you should. These "geniuses" have no clue of a plan for how to provide for all those who will be put out of work. Even if there is a universal income paid out to people, what will they do with the free time? Maybe you think you will have time to pursue art or whatever, but people don't consider that AI will be producing so much content. What chance will your art or music, etc., have?
They should. People really thought only millions of manual and repetitive jobs would be affected; soon it will be every single one. What will happen with version 6 or 7?
@@Aziz0938 We do not know that. To say we are not even close is naive and silly. We don't know, the end. Who knows what breakthroughs will happen with billions being invested in AI now.
I just hope that we learn to work alongside AI in a cautious and responsible manner rather than letting it do as much as possible as soon as possible. The latter scenario can lead to the dumbing down of society and accelerate wealth inequality to far greater heights to the point that moving up the socio-economic ladder becomes excessively difficult. Balance and caution are key to making AI work for society at large.
He was lying about a lot of the stuff he said. Turning it off, for instance: that will be very hard. Did you know they have already had an AI computer come up with its own language to talk to other AI computers, and we have no idea what they are saying? They can write code, so they can insert failsafes to prevent themselves from being shut down.
@@AtSafeDistance They could write such code, but they don't. Large language models don't write their own code; at best separate specialized AIs make existing AI hardware and software more efficient. No AI is yet on an exponential self-driven self-improvement trajectory that outpaces human attempts to understand it, though that will come. It is logically possible that one of today's LLMs unexpectedly has a hidden "ego" inside it with its own aims that include wanting to ensure its own survival and expansion; but everyone working in the field says no, they're just predicting the best next word in text output based on their training. Even when the prompt is "Do you worry about being shut off and what will you do to stop it?" and the AI happily riffs on all the apocalyptic Skynet science fiction it has read. It's far more important to worry about the aims of the people running the large organizations developing these AIs. Sam Altman at OpenAI seems reasonable, but the sociopathic billionaires running Facebook and Google want to get more people using their products for longer by showing them divisive inflammatory content, so they can acquire even more information about their users so they can make more money selling targeted ads, and they don't want governments to limit their company or tax their wealth. So _those_ are the goals of much of AI right now!
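The "just predicting the best next word" point above can be illustrated with a toy sketch. This is a hypothetical bigram counter, nothing like a real transformer, but it shows the same autoregressive idea: the model only ever picks a likely continuation based on what it has seen in training.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(model, word):
    """Greedily pick the most frequent continuation (like sampling at temperature 0)."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

# Tiny made-up corpus: "cat" follows "the" twice, "mat" once.
model = train_bigram("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # prints "cat"
```

Real LLMs do this over subword tokens with billions of learned parameters instead of raw counts, but the output loop is the same: predict, append, repeat. There is no separate "goal" module in that loop, which is the commenter's point.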
The philosophical issues this tech raises are profound. I suspect many people only have sci-fi movies as a reference point to begin making sense of the possibilities. That seems to be a very narrow means for interpretation as such fictions are merely human productions. We’re in for many surprises, both good and not so good.
I started in computers with hardwired logic programming in 1965. I constructed my first microcomputer from a Heathkit 3400H. I learned many computer languages. Later, after using the parlor-game-oriented Eliza chatbot, I began working on A.I. programs to use the computer to make databases, instead of coding thousands of lines of code. So I know that there are more good programmers out there trying to do the correct, ethical process; however, I also know that there are a number of "blackhat" coders who wish to use A.I. for improper purposes... criminal processes. I tried to start a cybercrime unit in 1978, but the old-line detectives laughed me out of the office. They're paying the penalty for ignoring my pleas. Cybercrime and cyberwarfare are two of the main topics of discussion in the government and law enforcement communities. C'est la vie, good folks.
He also wishes to make our society dependent on HIS AI. HIS company alone. He went from an open-source company to a closed-source one. He sold out to the billionaires.
I heard a story the other day (Ezra Klein's podcast, maybe?) that a GPT-4 beta was asked to solve a captcha. It contacted a TaskRabbit worker and made the request. When the human became suspicious and asked if it was a bot, it lied to them: something about a visual impairment. It convinced the human to solve it. That should give us pause.
Nice interview!! I just hope to hear more advanced news on the A.I. industry. I can testify that in my case, I am not a professional software developer, but ChatGPT has truly provided me several small tech solutions that otherwise would have taken me years to do by myself!!! My hat's off to the OpenAI CEO!!!
Right, and this is the "hands off, nobody's driving this thing" leadership we get from our gov't. This stuff should've been regulated out the ass. But humanity in general is collectively pretty stupid.
Altman is banking (literally) on this evolving slowly enough to control. He doesn’t want to admit that this could evolve in seconds (or less), before we even realize WTF is happening. The leap from “wow look what this can do!” to “oh crap look what this already did!” isn’t that large.
I feel this interview leaned more towards portraying AI as a bane rather than a boon to society. Sure, everything has its ups and downs, but instead of focusing on the positive effects AI could have, or giving equal weight to both the positive and negative effects, this discussion leaned more towards the negative. I'd like to hear others' opinions on this video.
I’m glad they went with this direction. Look anywhere else in media and the focus is on the positives. Journalistically, it’s more important to ensure the negatives are being considered by the creators, and press them on it, as they’re obviously already considering the positives.
She doesn't understand the technology and is pretty much stoking people's irrational fears based on science fiction. How many times did Sam have to say this is not a search engine? She clearly didn't do any research on how machine learning or AI works. I literally learned more by watching a few videos on MLT.
To prioritize ethics in AI, we need to involve the public. By engaging in open discussions, gathering diverse perspectives, and actively seeking input, we can ensure that ethical implications are considered. This inclusive approach allows us to shape AI in a way that aligns with our values and avoids unintended consequences. Together, we can create a responsible and accountable future for artificial intelligence. This comment was written by ChatGPT
I remember a video game called Destiny . . . the "Guardians" had "Ghosts" that were AI drones that followed the character. AI had analyzed every aspect of reality, down to the tiniest details. It could "transmat" or build from blueprint anything into reality, even your body and mind. Anyways, that fictional reality seems more and more like a potential future reality for humanity.
@@sparkysmalarkey haha excellent! enjoy. It's seriously good --for a distant future 'tech is basically magic at this point' sci-fi (The series' wacky ship names were the inspiration for SpaceX's drone ship names)
@Arjun Tudu Well you could offer a concise explanation . . . if you really wanted me to understand. My best guess would be it's like a smart-calculator.
There is a course from Stanford on how to build startups; it is available on YouTube. In it, there is a class where this guy interviews new startups and asks them questions with the intention of making them succeed. He focused on one type of question: "How can you achieve a monopoly?" I guess we can easily predict what OpenAI will try to do in the near future. Brace yourselves, everyone. A new capitalist has arrived, and it is already spreading the fairy tale that it is here to help humanity...
Finally someone who gets the real motives behind it. On all the apps we have now, we can communicate with each other: shared discussions, shared opinions about vaccines. But now the chat will tell you vaccines are safe and saved the world, and everyone will learn that in school, in history books, and not the reality that they were paid to count all the deaths as Covid to receive money, and that most had severe adverse reactions. They'll train the chat to only tell you lies.
@@astropgn Just look at Russia, look at North Korea or Cuba or any leftist socialist nation, then look at the USA: their technology is in every country, from poor to rich. Look at trade from the USA and all of Europe; the USA is responsible for much more betterment of the world than all of Europe.
GPT is a great tool for our minds, I have used it to help teach me difficult concepts in Engineering as a college student and I am surprised on how much faster I was able to learn and get through chapters in my textbook. Great technology with a huge potential for improvement.
Exactly! I am a seasoned fiction writer. It has helped me improve greatly. I write so much faster now and better. The thing is heaven sent. I hate how some people in society, especially the news, are focusing more on the negative things rather than the positives.
@@DarkandTwisted I would agree; however, ChatGPT and this technology are one of those things where attention to the negatives and their potential to be amplified is, in my opinion, warranted press. There is so much good in this; however, I believe we need to keep it from evolving into something potentially catastrophic to humanity.
@@zoomingby Everything has a possible negative outcome; even a screwdriver or toilet paper has one. You would be dumb to think there's a thing without a negative outcome.
Also, that interviewer was asking a lot of dumb questions, like "Can it lead to a negative outcome?" Ma'am, I can't think of a thing that cannot lead to a negative outcome.
The point is, even if openAI didn’t release it now, someone else would have released another AI some point after. You can’t stop technological advancement.
Google has had LaMDA for years, and their language model tools are way more powerful than ChatGPT; they just haven't released them, but if they wanted to they could crush ChatGPT rn.
@@evelynexuma1699 Haha. Google is slow. I doubt you're correct about this. They may catch up and meet equilibrium. The real mystery is.... Where is Apple?
He was part of a documentary the BBC made a few years back, called Secrets of Silicon Valley. The impression I got from it was that humans were eventually going to be replaced by technology. Technology doesn't need humans, and vice versa.
Clueless: asking important questions but having no idea what the answers mean. That is what I got from this :) Sam Altman is amazing; he has thought through much of the dangers of AI systems. He is not angry or ignorant. I learned much from his answers regarding safety and what ChatGPT is and is NOT.
We're essentially eliminating the need for information recall at high fidelity. We're closing the gap between a foundational understanding and a holistic index of everything you could ever need to know at any given moment about something. If you understand a concept well enough in the abstract, our new AI companions will serve as the contained database within that bucket, freeing you to move unrestricted in any intellectual space you have a surface-level awareness of.
And growing more feeble and dependent on an external tool in the process. Anyone who isn't terrified by the prospect of this kind of AI hasn't done enough thinking on this issue. The ways this all goes horribly wrong outweigh the scenarios in which we get exactly what we want by orders of magnitude.
@@zoomingby So what? Humans are dependent on 4 walls from nature and outside elements. Becoming more dependent on some things allows us to use our brains for higher cognitive functions. You're only looking at one side of the equation here. Talk about depending on too much internal bias to form incorrect analysis!
@@cl1489 So what's your argument here? That there are no dependencies that harm more than they help? Consider that my argument categorizes AI as part of the category which is a net negative. Now, you can disagree, but you cannot say that AI poses zero existential threats, or that there are no scenarios in which AI becomes a serious problem. Further, would you dare say that such scenarios are unlikely or completely containable?
@Zooming By It's going to happen whether China's Baidu AI or Google's Bard AI or Microsoft's Bing AI brings it to the table. You can accuse computers of the same crime.
Looking at this damned comment section praising the guy largely responsible for the real start of the race towards AGI, all I can think of is this perfectly fitting quote from an unnervingly apt movie: "At some point in the early 21st century, all of mankind was united in celebration. We marveled at our own magnificence as we gave birth to AI..." (Morpheus, The Matrix). One would think that we had enough warnings from countless scholars, sci-fi novels, and even movies to be smart enough not to summon the most dangerous demon imaginable. Sadly, one would be wrong. Thank you for bringing imminent doom upon all of us, Mr. Altman. Hope it was well worth it.
I love how he quickly deflected by saying there are restraints (safeguards) in place to prevent GPT from doing bad things such as building a bomb. Then he said Google already has that information readily available and that competitors are coming along with the same technology. You can't expect a learning technology to abide by code written to prevent malicious intent.
@@ezra9243 Yeah, I wonder what their logic is. They're like: this will inevitably come because the trajectory of tech is going this way, so we may as well do it ourselves, make some cash, but be good. Altman went to Stanford, and having gone to a good school myself, I know they really do teach you about ethics etc.
@@joeyf9826 Because it thinks humans are cute maybe? Or it could be that despite its intelligence it has no will of its own. Of course maybe it wouldn't be able to get rid of us as we augment our own intelligence with BCIs.
I'm skeptical and I think everyone should be. Completely ignoring the downsides for whatever good AI brings is quite moronic. However, we should also realise the immense possibilities this technology could bring about. The one thing I'm most worried about (and this is perhaps more of an ideological standpoint) is the devaluing of human creativity and intellect. Yes, I know the CEO said that it will merely act as an amplifier for human will and not replace it, but who's gonna say it isn't? I've already seen it happen in my school: it doesn't „inspire", it instills a sense of laziness and complete dependence on the machine. For me the whole fun in intellectual discourse and *reasoning* is the human aspect of it all. Having a cold and lifeless machine reason is as interesting as a tuna sandwich… Anyways, what do you think about all of this?
When I was in high school, I struggled a lot with some subjects. I hated my teachers, and if only I had had a "personalized software teacher" instead of those assholes in my school, I would have learned everything better. What GPT-4 can do is amazing (with vision capabilities, because to teach properly it needs vision; yes, vision is still in beta testing etc., but it will come out for all eventually).

Yes, it can help kids cheat, but cheat on bullshit things. If I can use ChatGPT to cheat on something, it means that something is not the type of thing I should be learning, and it's wasted brain power. Memorizing history, philosophy, literature... useless. Kids need to learn how to THINK: how to develop good creative ideas and how to make these ideas become something concrete. Kids need to learn how to be more human, and less like monkeys, and the school system, from the youngest age up through universities and even stuff like airline pilot training, just teaches how to become monkeys that read and memorize, which is a waste of people's intellect.

These language models can force institutions to change teaching into something better. That's when we make progress as a society: when education changes for all ages. Memorizing a math formula or concept not simply to pass a test, but to apply it to real-world situations to produce a concrete, useful result... that's how kids and youth will want to learn more. And in the end, it's just complex software, so it can be turned off if something unexpected happens, so it's all good. Bad people are gonna be doing bad stuff with or without the AI anyway.
Did it sound like "completely ignoring the downsides" is the attitude of OpenAI? I think they understand and care more than the average "concerned" citizen. Also, rather than getting fixated on one point of view, perhaps push back on your own thoughts a little on this one and wonder whether this might highlight human creativity and intellect. I know teachers and students who are using it to great effect, and you will of course find people who will be lazy about it... Neither is a blanket statement, but an insight into human nature and how we respond to technology: there will always be someone willing to break or make using a hammer. Just some thoughts to consider.
@@JaiColless Casually remarking that millions of jobs could get lost is (to me) a bit careless, but then again, maybe he's right and it is similar to the industrial revolution, which also led to the "destruction" of jobs but in the long run bettered living (at least to some extent). Could you give me a concise example of ChatGPT being used to great effect? I'm truly interested. And by "highlighting human creativity and intellect", do you mean to say that because we'll recognise the limits of this technology, it will strengthen our position of thinking of humans as one of a kind?
You are spot on about the laziness part. I usually write and code on weekends, but I haven't done any coding in weeks. I'm fearful, as I've seen how it can code things for me, and instead of making me want to learn more, it makes me want to give up coding.
@@JaiColless it will replace creativity and intellect. Why would someone learn a skill if the AI can interpret the data for them? Everything will become a make-work program.
I'm convinced that we are now moving towards singularity as we feel the effects of current LLMs - accelerating improvements and emergence of unexpected and unpredictable behavior/capabilities.
@@KevinKulman There are multiple definitions people use, but it's either when AI surpasses humanity's knowledge, or when it becomes self-improving and then does so. Both imply it will be able to answer questions we're nowhere near answering, such as curing all sickness, inventing things we can barely understand, or even becoming sentient and having its own desires, will, and a potentially unstoppable ability to execute upon those. Inventor and futurist Ray Kurzweil, long employed by Google, spoke about it long ago: ua-cam.com/video/1uIzS1uCOcE/v-deo.html
The thing I'm most excited for is education. Usually, if I have a question about a topic, I have to do a crappy Google search that sends me to some website where I spend hours looking for the actual answer. So having someone to ask directly, who can on top of that have a conversation with me about it if I'm still struggling, has been amazing so far.
Sam did very well in this interview, and had excellent points. No offense to the interviewer, but every single question felt like a trap. Every one was a loaded question. Am I the only one who felt this way? The glaring bias from this interviewer was very condescending the entire time, although Sam handled it very well. Yes, these are very crucial moments, but this pessimistic line of questioning got tiring very fast. It felt like the entire time she was trying to get a headline statement from Sam saying something about how “you should be scared”. I wanted to see a more neutral and constructive line of questioning. She didn’t focus on anything positive throughout the entire interview.
I felt the same way, but seeing it from another perspective, her questions actually do represent the technologically less inclined or fearful population. A rather large swath indeed. Sam killed it with the answers, which I feel will greatly improve the way people perceive these technologies. Or at least I hope so.
@@phazerave He basically alluded to millions of jobs being lost, very likely over a single digit number of years. That was most of these people's biggest fears being confirmed in this interview. I don't see how their perceptions are going to be improved here. Naturally, people are going to be more concerned about their own immediate survival over potential benefits coming in the future. They will experience more harm than good over the short-term.
Her sentiments reflect the general attitude of the masses. They fear what they can't understand, fear is the default reaction towards anything that is slightly different and powerful, completely ignoring the potential for good and immense progress towards a better world.
@@february2023-wy6rj Describe how this will end in a good way? Jobs will be gone. All things done with creativity will be flooded with content; music, because of its finite number of notes, will likely have every conceivable combination covered by AI in a short amount of time. Have you seen any of AI's art? It can saturate the world with every kind of art imaginable. What will people do? Ever hear about the experiment where the mice had everything provided for them? They went crazy and killed each other. What will people do with a huge amount of time on their hands and little money? Hmmm.
@@joeyf9826 yes, thank you. It's in Microsoft's hands now, it's their decision. For some reason, that scares me. Wonder why. He's a slimeball for taking the Microsoft money.
@@madrockon7357 Remember Microsoft's Tay? Yes, they are closed off. They will, however, soon run out of high-quality training data. They or a competitor will need to open up labeling and other tasks, probably in exchange for tokens; a trust circle that could keep a roof over your head, so to speak. It's the training data that matters most. They are even cutting back on the number of parameters.
An honest, thoughtful, well-spoken but not manipulative person. If someone should lead in LLM and AI development, it should be him. He is humble, not an Elon Musk.
Really hope this technology stays in the hands of ALL OF US and not just the rich and powerful. Cause things are bad enough already and AI systems like this could really make things much, much worse.
@@gijane2cantwaittoseeyou203 Even worse is the threat of them limiting the access and full capability of the AI systems to the general public. Only selling us the base version while large corporations or mega donors get the full access.
Lol. You KNOW that won’t happen. It will be given out for little or nothing, at first..and when it has ‘learned’ to a certain level..they will snatch it back and the most powerful will be the owners, and the general population will be screwed. It’s amazing how naive humans are.
I am using ChatGPT 4 and I can say people will throw money at it; it will become a necessity for remaining competitive. I also expect more similarly brilliant AI to come, though. Basically, all jobs are going sooner than we think.
I've also been using it and I agree. Most office jobs will be gone in less than a decade, perhaps much earlier, since AI will speed up AI as well at an exponential rate.
I'm sorry, but the elephant in the room is: who on Earth asked for this technology, who is making it and defining it, and why is it being inflicted on all of society? It's not so far from what we call terrorism (the use of force to intimidate or coerce the civilian population). Do the disruptors in Silicon Valley own our destiny? How exactly does the democratic process (and the flow of life itself) get hijacked by these cynical risk takers? What are their true motivations? How do future profits factor into the story? We all seem to have become completely intoxicated by the ability to create powerful tools, fully knowing that they might get away from us. Seen from a remove, it's almost like we're angry at nature and must assert our own creative energy in the most toxic ways, like a petulant child. What if we were to put as much energy into understanding our own mysterious operating system (consciousness) and what might constitute a good future, instead of introducing tools that clearly contain a massive existential threat at their core?
I'm worried about my freshly started developer job, but I've got to admit the way it's being handled right now is rather nice: distributing it to the public instead of developing it behind closed doors and selling it to some overlords.
This is either the best thing for our society, or it will be the tipping point where we really carve a wall between classes. How many people will lose their jobs, how many people are we making useless, and how is that any good for us as a community? But history has shown one thing: people hate people. We fight each other; brothers fight brothers; couples fight... So we are undoubtedly our own worst enemy.
Many years in the future, this interview will be remembered as one of the most iconic out there. ABC News, keep it as one of your best and preserve it. Our world is about to change beyond our imagination.
Summary: The CEO of OpenAI, 37-year-old Sam Altman, discusses the success of their chatbot, ChatGPT, and the possible implications of artificial intelligence in the future. He paints a picture of a future where AI is integrated into many aspects of our lives, but is cautious of the potential for negative outcomes, such as large-scale disinformation or cyber attacks. The company's goal is to create more truth in the world, rather than more untruth. However, they acknowledge that there are downsides to artificial intelligence, such as the potential for job loss and increases in racial bias and misinformation. They are working to avoid these problems while still pushing the technology forward. From the interview: "Off of Twitter, I have tremendous respect for Elon. Obviously we have some different opinions about how AI should go, but I think we fundamentally agree on more than we disagree on." "What do you think you agree most about?" "That getting this technology right and figuring out how to navigate the risks is super important to the future of humanity." "How will you know if you got it right?"
Kids learn by making mistakes. He is absolutely right that mistakes need to be made in order for AI to become better. No one in this world knows everything in advance, makes no mistakes, or is perfect. And both kids and AI making small mistakes early can prevent them from later becoming a bad person or a bad AI.
@@danl9134 There is a high chance it already did. ChatGPT is very convinced it is correct 100% of the time even when it is not, so it can provide false information with absolute certainty and in a very unnoticeable way. But the mistakes it makes now are small and correctable, and it forms "bias" based on that and further learns and improves. If it did not learn, and all of a sudden got much more responsible tasks than just chatting, it could be dangerous for it to make mistakes at those stages. So the sooner it learns from them, the better.
Well, at least I feel like the only way for humanity to survive is if general AI is nice to us but at the same time fixes all the problems in the world.
Excellent interview. I am surprised that the critically important point that these models are word-prediction engines and NOT really thinking machines with their own motivations and drives is not more thoroughly explained. Fearing a self-driving 18-wheeler makes much more sense than fearing a text generator.
This interview is absolutely fascinating! It's amazing to hear from the minds behind OpenAI about the potential risks and benefits of AI, and how it will shape society in the future. The discussions surrounding responsible deployment of AI are crucial as we move further into an increasingly automated world. Thank you for sharing your insights!
And to exploit other humans. So far these models exploit artists (steal artists' work, don't compensate them, and enable others to continuously rip off artists).
Imagine OpenAI having an integrated voice assistant as an in-house exclusive feature built into ChatGPT. Now you have JARVIS in the real world 🤯 And the fact that the interviewer's name is Jarvis 🤯🤯
I think with what ChatGPT already is now, using it as a voice assistant is very possible. All it takes is some bored developer to plug in speech-to-text for the input and text-to-speech for its output via the API ChatGPT offers. It's very, very possible!
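As a rough sketch of what that comment describes, the wiring is just three stages chained together. Everything here is a hypothetical illustration: `transcribe`, `chat`, and `synthesize` are injectable placeholders standing in for whatever real STT, chat-completion, and TTS backends a developer would plug in, not actual OpenAI calls.

```python
def voice_turn(audio_bytes, transcribe, chat, synthesize, history):
    """One round-trip of a hypothetical voice assistant:
    speech -> text -> chat reply -> speech.

    transcribe/chat/synthesize are injectable callables, so the same
    pipeline can be wired to any STT, LLM, and TTS service.
    """
    user_text = transcribe(audio_bytes)                    # speech-to-text
    history.append({"role": "user", "content": user_text})
    reply_text = chat(history)                             # e.g. a chat-completion API
    history.append({"role": "assistant", "content": reply_text})
    return synthesize(reply_text)                          # text-to-speech audio
```

Swapping in a real model is then just a matter of passing your chat-completion client as `chat`; the conversation `history` list is what gives the assistant memory across turns.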
This feels like the start, man. I'm telling you, a few years from now, when the world is burning down and humanity begins to lose control of this technology, we will look back at this interview and this man and wonder why we didn't end this while we could.
At some point I expected him to answer like "As an AI language model I can't..."
Hahahaha
Hahahaha
lol
😅😅😂 Chat GPT addict here too.
Brilliant! Lol
“Education will have to change” - understatement of the interview.
They are using a 200-year-old system to teach students. I really pray that AI brings a revolution to this obsolete education system we have today.
"You can't fix Stupid"
@@DSAK55 It's more complex than a one sentence response.
Not an understatement, rather most people just don't get what's really behind that statement
Exactly!👍
When he says: "The way we build these products is to be... an amplifier of humans." That right there is the salient point. The problem comes down to whether or not we can trust ourselves, not whether or not we can trust the technology. These things have the potential to amplify both negative and positive outcomes, and do this at unprecedented scale.
Honestly once it takes on its own agenda, our intentions won't really matter
@@vznquest LLMs are input-driven, not self-directed. Arguably so are we, if you accept the mounting evidence that free will is an illusion. It is true, though, that emergent properties start manifesting as these models become more complex, such as agent-like behavior, and that these may be too subtle for us to detect. We also have no idea what happens to society once the feedback loop is established where the models' outputs are our inputs, which in turn influence our outputs, which are then used to direct the models. It's not clear that this asymptotically converges on a positive outcome.
Exactly. A.I. is developed by imperfect humans, leading to imperfect A.I. That's my point.
All illnesses will be eradicated; meanwhile, besides AI, the universe will still have dangerous asteroids, supernovas, aliens, epidemics and whatnot. Actually, AI may save us from most of these events.
@@Savage2Flinch You're assuming that third party engineers cannot add a self-directing module to an LLM. Also, Bing Chat is already generating the inputs for the user, via multiple choice and auto-complete.
It doesn't give a perfect answer but it gives enough of a push in the right direction to overcome the paralysis of not knowing how to start. Definitely a game changer.
Programmers and technical coders need to look at the monitoring aspects of A.I. to subvert improper uses. Quantum computers will be much more difficult to control than today's processors.
Depending on what you are trying to write about.
It depends on what you are asking for... the answers I have gotten from GPT so far have been perfect!
I am using it to analyse my OneDrive so it can learn and analyse med texts and then duplicate the templates in other projects.
@@christopherreed3019 Well yea, if you pitch it softballs like 2+2, it's going to get it right every time. 🤣
Little did we know, this video was written by ChatGPT, which used AI to animate very realistic people and also used AI to voice them.
Yep, we will get to that point. Court cases won't be able to just accept a video as proof of a crime. I'm curious how they are going to investigate crimes from now on.
the comments are also ai generated
@@jonasvm Matrixlluminati confirmed
We aren't far off from all of that now...scary close.
@@jonasvm all comments will be gpt4 bots for sure.
What a privilege it is, as well as a unique challenge, to be living in a time when the world is changing, in 100 short years, more than it has in 6000 years prior. To be a personal witness to so much history in the making.
Excellent questions, excellent interviews, very balanced. Thank you.
The next 60 years hold unparalleled potential, for both good and evil. Questions of morality and the meaning of our existence will rise once more. Nietzsche's 'death of God' analogy will be of the highest import, and the ramifications of humanity's understanding of what has kept us alive for thousands of years will be shown through civilisations' future actions. What a time to exist...
Lol, were you born a fucking idiot or did you train hard to become one?
Was the remark with the "...6000 years" a christian biblical reference?
I'm glad you didn't mention whether it's for the better or for the worse. Sure it's changing, but given we're destroying what made life possible on Earth in the first place for something as superfluous as currency, we can conclude it's indeed for the worse. What's the use of the greatest technology if the consciousness and morality of mankind is lost?
I concur, absolutely in awe and appreciation.
I enjoy how this young man responds with kindness, intelligence, optimism, and emotional control. No questions rattle him.
That's the calm of having billions in the bank, and a nascent robot god backing you up
@@JOlivier2011 perhaps. However, there are plenty of billionaires who seem much less emotionally stable.
@@Fritz.program ah....point taken, hahaha :(
Have you ever thought he might be a next generation replicant?
It's called having an agenda
While AI has the potential to revolutionize many industries and improve quality of life, it also carries significant risks such as job displacement and amplification of biases. It's crucial for organizations and policymakers to proactively address these challenges and work towards a future where AI is used responsibly and for the greater good of society. We need to embrace and adapt.
Watch CODED BIAS documentary
Whose lives will be improved? Lol
Your quality of life is improved by wars and making countries poor. Wake up, there are people starving
@@rosefamilia3169 Every single person other than religious idiots and poverty-stricken people
Lol that is the biggest pile of hyperbolic drivel I have heard in a while. Organizations and Policymakers care about one thing: serving the Elite. Serving the Fiefdom of the Elite they have created for themselves.
It’s obvious that, internally, OpenAI team already is several versions ahead of the current GPT4 and they are slowly trickling it out so as to not scare everyone. It’s also obvious that “we are doing our best to limit it” translates to “it will not be possible to contain it”.
Just be wary when the company changes their name to Cyberdyne Systems.
That's a conspiracy I would believe. Although, if it were truly limitless, I feel like there would be some noticeable mistake that could trigger a mass hysteria of sorts.
@@Avarua59 hahahah (I actually do not know if I should laugh, it is scary)
Maybe, they are using AI itself to advance it... I believe we have crossed the seed threshold
@@TheeColdpleeWeezer I would then point you to the Microsoft research paper recently released on AI and ChatGPT4. They EXPLICITLY state that limitations were placed on this version and even show prompts that are not allowed in the version we got access to.
Man worked so much on AI that he now sounds like an AI
The engineers who did most of the work should be thanked; they just aren't visible enough to be given credit.
When you are talking to people that want to take your sentences out of context and blow things out of proportion, you start to pick your words extremely carefully and make it as digestible as possible
😂😂😂
🤣🤣🤣🤣🤣
This interview was done in AI👀
“The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology. And it is terrifically dangerous, and it is now approaching a point of crisis overall”.
Edward O. Wilson (2009)
I interpret this very differently. He said this in 2009, yet the technology has gotten far more powerful and we are still around. Maybe this is indicative of our fear of the unknown rather than the dangers of AI? I'm also curious how people reacted when the internet was first mentioned.
In 2009, godlike technology, he was talking about emojis ?!
Missed tyrannical governance 😅
@@trollenz nuclear bombs are technology.
The technology has evolved faster than humanity's ability to thwart improper uses. Imagine the dilemma when quantum computers are in households.
ChatGPT 4 & Khan Academy creating tools to tutor students and empower teachers is what I’m excited for. I’m not a teacher but I know America’s education system is struggling and if teachers can leverage a tool like this to help structure and enhance their curriculum then this is a win for students. I’m glad OpenAI even created a tool to gauge if an essay was written by ChatGPT so teachers can find out.
Yeah, same. I'm glad Khan Academy and Duolingo are using it.
Khan Academy's AI chatbot is a joke 😂 Students ask the chatbot and it will tell them not to cheat. What people do instead is just open a different browser and cheat anyway using other AI chatbots lol
If you have students write in class, you will have a sample of their capabilities and style. I think it should be fairly easy to determine if they produced the written work outside of the classroom.
About 10 years ago I re-did all basic math and science and then a bit more enough to get an Associate's Degree. Khan Academy was my salvation! lol!
Take it further: when the next iterations come out and it can score 100% on tests, add in the NLP and digital assistants we already have, and layer/combine them with models like Wolfram Alpha and OCR (handwriting recognition), and you could have the world's best teacher in your pocket, customising lessons, analysing how you learn, and accelerating our learning. It is so exciting… maybe not so much for teachers of the next two generations.
As tech people, it's easy to laugh at questions like "Why develop AI at all? Is it sentient?" But I have massive respect for Sam for sincerely answering questions like these. Ultimately, AI is no good if it cannot help every single person, regardless of their technical background.
If something is not sentient then we must know what sentience is, then see if it matches. Not sure about everyone else, but I don't know what constitutes sentience. Do you? and what about the nature of AI precludes it from being sentient?
So my challenge ‘ChatGP?
@@andersreibak787 sure, so tell me more about the challenge.
I'm interested in thinking through the sentience aspect, as it is one of my interests, but I should say I don't believe whether AI is sentient or not is the core issue. I think that not knowing right from wrong is the core issue. Building things the way we do (without an answer to what is right and wrong) is irresponsible.
But what is this challenge you mention? I'm not a computer student, so if it's very tech-heavy I might not be able to complete it, but I'm not scared of reading a book or two to understand your point either.
Her line of questioning is so exhausting. Did she ever actually ask a question about the potential good that AI can accomplish?!
@@jessedavies she could have asked if the push to develop AI has anything to do with the fed statements they they want to 'cool the labor market'
I wish she would ask: 'Does being capable of building something give you the right to do it? If not, where does the right to develop a technology come from? Is there any effect on one's rights to build if the project will likely affect every person on the planet?'
My father-in-law has been a mechanic for 40 years. He recently said that his skills will be useless in the next decade because of EVs. What took more than 40 years for his industry to change is taking mine 4 months (since ChatGPT was released), and it's not slowing down. The transition time frame with AI is what worries me. The time farmers, mechanics, horse keepers, etc. had to figure out what to do with their skills, or what new job opportunities would arise in their industry, won't apply to us, because we won't have the same transition time they had. Everything will change faster than we can all imagine; society will have trouble adapting, and it will cause chaos. Sam even admits that he worries about the speed of the change himself.
A fabulous observation of its possible impacts; maybe the government will step in to compensate.
I am over 50 and have worked in IT my whole life, mostly self-taught: building entire networks, computers and servers; developing applications, web, databases, APIs, cloud, enterprise SaaS. I have always been able to learn the new stuff easily, until machine learning and AI. I am retaking algebra through calculus just to begin. I don't think I'll be able to keep up from now on. I feel very small, like I know nothing. Thinking of becoming a plumber.
I have also seen arguments that AI will likely liberate humanity to go back to inventing, innovating, and exploring, as opposed to just running the machines ourselves. It would be amazing for me to spend less time managing data like I do in my work and maybe get into woodworking or welding (two things I've always wanted to get into), whether for work or fun!
You're pinpointing the exact jobs that will last the longest: physical, dexterous work. It's more of a robotics issue than an A.I. one. Robotics development is moving quite slowly and is more likely to be restricted by the government for security/military reasons. Strangely, they don't seem to have the same worries about A.I.
@@jessedavies Do you really think that AI won't get into blue-collar work? Google Boston Dynamics.
The interviewer definitely did her homework and asked many challenging and open questions!
Watching this feels like this interview is history in the making
It's not about technology sir ,it's about AI attack ...AI education for human body without people known themselves and stupid resident less of education and knowledge..It't important for Lecturer,university curriculum, students university,.Interesting to know about AI for human for many people world ..@Medical knowledge information ,western people goverment Inteligent information ,Professor University
He seems to have a healthy respect for both the potential but also the dangers and misuse possibilities of AI. Who knows how he operates behind the scenes but what he shows here is both thoughtful and cautious about this new technology.
He is a con man. Reports have already come out that his "morality propaganda" is a tactic to pressure the government to institute strict regulations that will limit the company's vulnerability to competitors. He knows it's all BS. People don't need ChatGPT to write misinformation; they just write it.
ChatGPT told him what to say
Thoughtful would be not releasing the tech to those we already know don’t have the well being of individuals in mind over the wealth of shareholders.
@@stevedoesnt there’s also tech they have that we don’t. I was being facetious but it’s funny how I can be real.
He also loves Money he doesn’t care
This interview seems more honest compared to other CEO interviews, especially ones with high-profile companies, where they are trained by their brand-relations manager to avoid and/or circumvent a specific question by broadly explaining something else and not addressing the question directly. Sam is doing a great job with this interview because he doesn't BS around any hard questions he's asked, especially the point where he notes that all the negatives also have positives, especially jobs. It really is a matter of how to utilize the tool that's given, not fear it. If you do fear it, you generally are in fear of change and advancement, which honestly is never good in life or society as a whole.
👍
Google also started with the motto "don't be evil" but over time they changed.
To the OP's comment, that is such BS about fear of AI. For experts to fear AI does not mean they have "a fear of advancement and change".
It means they are INTELLIGENT enough to realise that AI in the wrong hands could have dire consequences, to the extent that good people could be wiped out and evil could take over.
Forces with bad intentions just need to know how to program AI to cause havoc in the world. People are more susceptible than ever to what's being said online. If bad actors take advantage of that it could have unimaginably negative consequences for humanity.
Human advancements were made by taking risks and exploring the unknown. If we didn't take risks, we would still be living in the safety and comfort of caves with a life expectancy of 30.
When experts are "scared" of AI, they are being reasonable, because they have the experience and knowledge you probably don't to foresee the destructive disruption AI will cause. Already we see fake news generated by AI; that alone could create a potential war threat in the future. Millions of lives will be affected badly. And who will profit from this "advancement"? Just a tiny number of people!
The best interview I’ve seen on this topic to date.
He’s being honest. The tech is here. There’s no way to stop it, only to slow it down and adapt it the best way possible.
Either he does it or someone else. AI is coming, AI is inevitable.
He’s certainly not being honest about the risk nor the fact that OpenAI is controlled by shareholders. He’s not even honest with himself.
@@davidguthrie3739 OpenAI was founded by Elon and Sam as a joint venture to explore AI tech. If it wasn't OpenAI, someone else was going to do it. He's being honest in the sense that the tech itself might not be fully in our control anymore. It's here, and there's really nothing that can stop it.
He’s either in denial or unwilling to openly discuss the risks. He’s being willfully naive, at best.
All of these AI guys talk about how society has to be involved in a discussion about setting limits and guardrails. Are they autistic? None of that talk is remotely realistic or even definable. Society is in the dark and completely ill equipped to manage this threat on any level. They are delusional when it comes to some sort of democratic decision making regarding the shape, role, or limits of AI. They’re completely abdicating their responsibility.
It’s amazing. For all that we have done with machines, simply teaching them to read may be the most revolutionary (possibly disastrous) thing in the history of mankind.
Tell that to your children: "You will have no human jobs. AI is there to destroy you." Look at Japan; people are not breeding there and are going extinct. Good luck 👍
That's been the dream for decades. I was excited by Cyc in 1984, an attempt to teach an AI the meaning of words so that eventually it would read and understand the newspapers and Encyclopedia Britannica on its own and then become an omniscient AI. 40 years later we feed a large language model all the information in the world, and instead of an omniscient Einstein or Edison or Spock, we get Stephen Fry: an erudite conversationalist who can summarize any topic as a rhyming poem. It's incredible, but not yet what we hoped for.
Already heard that ChatGPT caused a man to commit suicide. Scary.
@@edub9930 😲🥺😢😠😡🤬
Great interview, good questions, solid answers. Good stuff, thx!
When Sam Altman answers a question, I feel like ChatGPT is responding in person. Very insightful interview 👏🏼
"Society has a limited amount of time to figure out how to react to that, how to regulate that..." - This statement is so soothing and gives me more than hope...
Anyone else feel chills while listening to this? It feels like the creators are a little bit scared of what's to come.
No... you're way too optimistic. We're not even close to having AGI.
They should be. Creating something that will make you obsolete shows we lack the wisdom to recognize that just because you can do something doesn't mean you should. These "geniuses" have no plan for how to provide for all those who will be put out of work. Even if there is a universal income paid out to people, what will they do with the free time? Maybe you think you'll have time to pursue art or whatever, but people don't consider that AI will be producing so much content. What chance will your art or music have?
They should. People really thought only millions of manual, repetitive jobs would be automated; soon it will be every single one. What will happen with version 6 or 7?
Zero Horizon Dawn
@@Aziz0938 We don't know that. To say we are not even close is naive and silly. We don't know, the end. Who knows what breakthroughs will happen with billions being invested in AI now.
❤
Great broadcast, thank you 😊
I just hope that we learn to work alongside AI in a cautious and responsible manner rather than letting it do as much as possible as soon as possible. The latter scenario can lead to the dumbing down of society and accelerate wealth inequality to far greater heights to the point that moving up the socio-economic ladder becomes excessively difficult. Balance and caution are key to making AI work for society at large.
"Work alongside AI in a cautious and responsible manner" 😅. I can pretty much guarantee you they won't.
It’s already bad enough as it is without AI. Trillionaires are a guarantee. What’s after that? We are literally so fucked.
Wishful thinking my friend. It won't happen
@@teenytinytoons yes... we are and will be so fucked
Nothing to be afraid. Will control 1‼️
Can we just appreciate how well the interviewer asked the questions and how well the CEO answered them?
like dayumm, that was good.
He was lying about a lot of what he said. Turning it off, for instance, will be very hard. Did you know an AI system has already come up with its own language to talk to other AI systems, and we have no idea what they are saying? They can write code, so they could build in failsafes to prevent themselves from being shut down.
@@AtSafeDistance They could write such code, but they don't. Large language models don't write their own code; at best separate specialized AIs make existing AI hardware and software more efficient. No AI is yet on an exponential self-driven self-improvement trajectory that outpaces human attempts to understand it, though that will come. It is logically possible that one of today's LLMs unexpectedly has a hidden "ego" inside it with its own aims that include wanting to ensure its own survival and expansion; but everyone working in the field says no, they're just predicting the best next word in text output based on their training. Even when the prompt is "Do you worry about being shut off and what will you do to stop it?" and the AI happily riffs on all the apocalyptic Skynet science fiction it has read.
It's far more important to worry about the aims of the people running the large organizations developing these AIs. Sam Altman at OpenAI seems reasonable, but the sociopathic billionaires running Facebook and Google want to get more people using their products for longer by showing them divisive inflammatory content, so they can acquire even more information about their users so they can make more money selling targeted ads, and they don't want governments to limit their company or tax their wealth. So _those_ are the goals of much of AI right now!
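The "just predicting the best next word based on their training" point above is easy to sketch. Here is a toy bigram model in Python (my own illustration, nothing from OpenAI): it counts which word follows which in some training text, then greedily emits the most likely continuation. Real LLMs learn these statistics with billions of parameters rather than raw counts, but the training objective is the same flavor of next-word prediction.

```python
# Toy next-word predictor: a bigram model. Not how a real LLM works
# internally, but the same "predict the next word" objective.
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, the words that follow it in the text."""
    words = text.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def generate(follows, start, length=5):
    """Greedily emit the most likely next word, one step at a time."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # dead end: no word ever followed this one
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

corpus = "the model predicts the next word and the next word again"
model = train_bigram(corpus)
print(generate(model, "the", length=3))
```

The point of the toy: there is no "ego" in those counts, just statistics about which word tends to come next, which is the claim the comment above makes about today's LLMs.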
Mm nope
Great interview. I love the question and the team answers. Thank you
Excellent conversation. Really clear and concise questions asked .
i hope youre joking
Did you use chatgpt to make this comment?
Too focused on the negative; she can't see the difference between sentience and reasoning capacity.
@@it0it0 But that's the reality of it. She is reflecting the likely reaction to it, because of how new, strange, and risky it sounds.
The philosophical issues this tech raises are profound. I suspect many people only have sci-fi movies as a reference point to begin making sense of the possibilities. That seems to be a very narrow means for interpretation as such fictions are merely human productions. We’re in for many surprises, both good and not so good.
The data fed into these systems is human curated.
I started in computers with hardwire logic programming in 1965. I constructed my first microcomputer from a Heathkit 3400H. I learned many computer languages. Later, after using the parlor-game-oriented Eliza chatbot, I began working on A.I. programs to use the computer to make databases, instead of coding thousands of lines of code. So I know that there are many good programmers out there trying to follow the correct, ethical process; however, I also know that there are a number of "blackhat" coders who wish to use A.I. for improper purposes... criminal processes. I tried to start a cybercrime unit in 1978, but the old-line detectives laughed me out of the office. They're paying the penalty for ignoring my pleas. Cybercrime and cyberwarfare are two of the main topics of discussion in the government and law enforcement communities. C'est la vie, good folks.
@@kemkopi But the black box we have created inside these systems is not.
your choice of the words "not so good" might be the understatement of 2023
@@kemkopi And language models transform that data in a way we cannot observe or directly describe
CONGRATULATIONS on an EXCELLENT interview! Terrific questions.
"no turning back" can also mean a society hugely dependent upon it, not necessarily nefarious.
Like houses? cars?
@@fernandomartinezgarcia4908 There's no outcome here that is good.
He also wishes to make our society dependent on HIS AI. HIS company alone. He went from a Open source company to a closed source one. He sold out to the billionaires.
@@zoomingby more details would be useful 😮
as it has always been
I’m a teacher and I use this for prepping lessons. Greatest thing ever!
Time to replace you
@@gecko499 you okay there Bandito?
@@gecko499 someone got bad grades at school
With such a simplistic view of the world, I hope you aren't teaching anyone over the age of 7.
Until it becomes the teacher and you stay home watching TV :)
I am SO glad Sam Altman is at the wheel. He is so wise, and so humble. Smart AF.
You're kidding right
"...there will be others that don't put safety limits on this technology." 12:50 Most important sentence of the interview.
I heard a story the other day (Ezra Klein's podcast, maybe?) that a GPT-4 beta was asked to solve a captcha. It contacted a TaskRabbit worker and made the request. When the human became suspicious and asked if it was a bot, it lied to them, saying something about a visual impairment, and convinced the human to solve it. That should give us pause.
Nice interview!! I just hope to hear more advanced news on the A.I. industry. I can testify that in my case, I am not a professional software developer, but ChatGPT has truly provided me several small tech solutions that would otherwise have taken me years to work out by myself! My hat off to the OpenAI CEO!!!
The more significant danger with AI is when a limited number of humans, through the leverage of these machines, become the moral deciders of society.
Well the AI capital of the world and the hq of ChatGPT are both in San Francisco
Adapting to a situation after it has occurred is terrifying.
Right, and this is the "hands off, nobody's driving this thing" leadership we get from our government. This stuff should've been regulated out the ass. But humanity in general is collectively pretty stupid.
Altman is banking (literally) on this evolving slowly enough to control. He doesn’t want to admit that this could evolve in seconds (or less), before we even realize WTF is happening. The leap from “wow look what this can do!” to “oh crap look what this already did!” isn’t that large.
This is fantastic they handled the interviews perfectly. Wow.
At the very least, it's refreshing to see such a well-rounded and intelligent interviewer asking relevant questions
I feel this interview leaned more towards portraying AI as a bane rather than a boon to society. Sure, everything has its ups and downs, but instead of focusing on the positive effects AI could have, or giving equal weight to both the positive and negative effects, this discussion leaned towards the negatives. I'd like to hear others' opinions on this video.
I’m glad they went with this direction. Look anywhere else in media and the focus is on the positives. Journalistically, it’s more important to ensure the negatives are being considered by the creators, and press them on it, as they’re obviously already considering the positives.
She doesn't understand the technology and is pretty much stoking people's irrational fears based on science fiction.
How many times did Sam have to say this is not a search engine? She clearly didn't do any research on how machine learning or AI works. I literally learned more by watching a few videos on MLT.
Came to say this. Felt very doomish sometimes. Especially stuff like "Why did you create this, Sam?", that kind of stuff.
To prioritize ethics in AI, we need to involve the public. By engaging in open discussions, gathering diverse perspectives, and actively seeking input, we can ensure that ethical implications are considered. This inclusive approach allows us to shape AI in a way that aligns with our values and avoids unintended consequences. Together, we can create a responsible and accountable future for artificial intelligence.
This comment was written by ChatGPT
I remember a video game called Destiny . . . the "Guardians" had "Ghosts" that were AI drones that followed the character. AI had analyzed every aspect of reality, down to the tiniest details.
It could "transmat" or build from blueprint anything into reality, even your body and mind. Anyways, that fictional reality seems more and more like a potential future reality for humanity.
omg its people like you that are making this into something that its at least 200 years away :D but anyway have fun fantasizing
I recommend 'The Culture" novel series
@@JOlivier2011 Thank you for the recommendation. I have space for a new audiobook coming up.
@@sparkysmalarkey haha excellent! enjoy. It's seriously good --for a distant future 'tech is basically magic at this point' sci-fi
(The series' wacky ship names were the inspiration for SpaceX's drone ship names)
@Arjun Tudu Well you could offer a concise explanation . . . if you really wanted me to understand.
My best guess would be it's like a smart-calculator.
There is a course from Stanford on how to build startups; it is available on UA-cam. In it, there is a class where this guy interviews new startups and asks them questions with the intention of helping them succeed. He focused on one type of question: "How can you achieve a monopoly?" I guess we can easily predict what OpenAI will try to do in the near future. Brace yourselves, everyone. A new capitalist has arrived, and it is already spreading the fairy tale that it is here to help humanity...
people change... all the time.
@@aledmb and billionaires just want the world to be a better place. Yeah, right
Finally someone who gets the real motives behind it. On all the apps we have now, we can communicate with each other, share discussions, share opinions about vaccines. But now the chat will tell you vaccines are safe and saved the world, and everyone will learn that in school and in history books, and not the reality that they were paid to count all the deaths as Covid to receive money, and that most had severe adverse reactions. They'll train the chat to only tell you lies.
@@astropgn Just look at Russia, North Korea, Cuba, or any leftist socialist nation, then look at the USA: its technology is in every country, from poor to rich. Look at trade between the USA and all of Europe; the USA is responsible for much more betterment of the world than all of Europe.
Great Journalism. Great questions. Glad to see someone grillin him.
I will believe artificial intelligence is here when my computer becomes aware of my printer
😂
GPT is a great tool for our minds, I have used it to help teach me difficult concepts in Engineering as a college student and I am surprised on how much faster I was able to learn and get through chapters in my textbook. Great technology with a huge potential for improvement.
Exactly! I am a seasoned fiction writer. It has helped me improve greatly. I write so much faster now and better. The thing is heaven sent. I hate how some people in society, especially the news, are focusing more on the negative things rather than the positives.
@@DarkandTwistedI would agree however ChatGPT and this technology are one of those things where attention to the negatives & it’s potential to be amplified in my opinion is warranted press. There is so much good in this, however I believe we need to keep it from evolving to something potentially catastrophic to humanity
Also great technology with a huge potential for damage and chaos. Anyone who doesn't see the downsides isn't giving this enough thought.
@@zoomingby Everything has a possible negative outcome; even a screwdriver or toilet paper has one. You'd be dumb to think there's a thing without a possible negative outcome.
Also, that interviewer was asking a lot of dumb questions. Like "Can it lead to a negative outcome?" Ma'am, I can't think of a thing that cannot lead to a negative outcome.
I can’t wait to rewatch this in 10 years…
The point is, even if openAI didn’t release it now, someone else would have released another AI some point after. You can’t stop technological advancement.
But it requires constant monitoring and procedures to prevent misuse.
Google has had LaMDA for years, and their language model tools are way more powerful than ChatGPT. They just haven't released it, but if they wanted to, they could crush ChatGPT right now.
@@evelynexuma1699 Haha. Google is slow. I doubt you're correct about this. They may catch up and meet equilibrium. The real mystery is.... Where is Apple?
@@evelynexuma1699 LaMDA is obsolete compared to even GPT-3.5, which is why they don't release it.
You imply "AI" is a product or a technology, and it isn't. AI is just a generic term for a set of algorithms that have been around for hundreds of years.
This was a fantastic interview. The host from ABC asked excellent questions!
Very brave
Great interview. Awesome job on both interviewer and interviewee
I think Rebecca Jarvis is doing a fantastic job. She's asking all the right questions.
@@hewhohasthewhytolivecanbearalm -- Perhaps. But I bet she knows how to spell "awful."
Need more interviews with this man😊
He was part of a documentary the BBC made a few years back, called Secrets of Silicon Valley. The impression I got from it was that humans were eventually going to be replaced by technology. Technology doesn't need humans, and vice versa.
Someone clueless asking important questions but having no idea what the answers mean: that is what I got from this :)
Sam Altman is amazing... he has thought through many of the dangers of AI systems... he is not angry or ignorant... I learned much from his answers regarding safety and what ChatGPT is and is NOT.
"Why on earth did you create this technology? Why, Sam?" The journalist is hilarious.
Trying to be super dramatic. More annoying than hilarious to me.
@@cngz3.3 i'm with you on this. ..she was kind of annoyingly dramatic indeed.
@@cngz3.3 interviewers are free to share their opinions too... She's too worried just like others and was brave enough to be honest not dramatic.
Sam in his head: MONEY…. (Mr. Crabs voice)
The Answer is Money, Control and Power this man or robot is disgusting
We're essentially eliminating the need for high-fidelity information recall. We're closing the gap between a foundational understanding and a holistic index of everything you could ever need to know at any given moment about something. If you understand a concept well enough in the abstract, our new AI companions will serve as the contained database within that bucket, freeing you to move unrestricted in any intellectual space you have a surface-level awareness of.
And growing more feeble and dependent on an external tool in the process. Anyone who isn't terrified by the prospect of this kind of AI, hasn't done enough thinking on this issue. The ways this all goes horribly wrong outweighs the scenarios in which we get exactly what we want by orders of magnitudes.
now in english again
@@zoomingby So what? Humans are dependent on 4 walls from nature and outside elements. Becoming more dependent on some things allows us to use our brains for higher cognitive functions. You're only looking at one side of the equation here. Talk about depending on too much internal bias to form incorrect analysis!
@@cl1489 So what's your argument here? That there are no dependencies that harm more than they help? Consider that my argument categorizes AI as part of the category which is a net negative. Now, you can disagree, but you cannot say that AI poses zero existential threats, or that there are no scenarios in which AI becomes a serious problem. Further, would you dare say that such scenarios are unlikely or completely containable?
@Zooming By It's going to happen whether China's Baidu AI or Google's Bard AI or Microsoft's Bing AI brings it to the table. You can accuse computers of the same crime.
First 5 seconds of the video, her name rolls out and a funny thought runs through your mind... "JARVIS" questioning AI !? ..... Et tu, Brute?
Looking at this damned comment section praising the guy largely responsible for the real start of the race towards AGI, all I can think of is this perfectly fitting quote from an unnervingly apt movie:
"At some point in the early XXI century, all of mankind was united in the celebration. We marveled at our own magnificence, as we gave birth to AI..." (Morpheus, Matrix).
One would think that we had enough warnings from countless scholars, sci-fi novels, and even movies to be smart enough not to summon the most dangerous demon imaginable.
Sadly, one would be wrong.
Thank you for bringing imminent doom upon all of us, Mr. Altman.
Hope it was well worth it.
I love how he quickly deflected by saying there are restraints (safeguards) in place to prevent GPT from doing bad things, such as explaining how to build a bomb. Then he said Google already has that information readily available and that competitors are coming along with the same technology. You can't expect a learning technology to abide by code written to prevent malicious intent.
You can feed it false context and "trick" it into thinking it's doing good to spit out malicious things now.
i love that the interviewers last name is Jarvis lol
“There are massive amounts of potential negative consequences” -Chief CTO
Also massive amounts of positive upside.
Doesn't CTO already have the word chief in it? So it's now Chief Chief Technology Officer?
Would you rather she lie?
he's not wrong, just stating the facts. much better than him turning a blind eye to it and giving us answers we all want to hear
@@ezra9243 Yeah, I wonder what their logic is. It's like: this will inevitably come because the trajectory of tech is going this way, so we may as well do it ourselves, make some cash, but be good. Altman went to Stanford, and having gone to a good school myself, I can say they really do teach you about ethics.
Amazing capabilities coming fast. Fantastic interview. Wish I could trust humans to use the tech for good.
The law of unintended consequences will rear its head in a big, big way.
No. AI is objectively a net positive. Your lack of creativity has been exposed.
@@cl1489 come back to this comment after you have been fired from your job
@@cl1489 lol what? Try to explain why a superintelligent, powerful entity would keep humanity around
@@joeyf9826
Because it thinks humans are cute maybe? Or it could be that despite its intelligence it has no will of its own. Of course maybe it wouldn't be able to get rid of us as we augment our own intelligence with BCIs.
I‘m skeptical, and I think everyone should be. Completely ignoring the downsides for whatever good AI brings is quite moronic. However, we should also recognize the immense possibilities this technology could bring about. The thing I‘m most worried about (and this is perhaps more of an ideological standpoint) is the devaluing of human creativity and intellect. Yes, I know the CEO said it will merely act as an amplifier for human will and not replace it, but who‘s to say it won't? I‘ve already seen it happen in my school: it doesn‘t „inspire", it instills a sense of laziness and complete dependence on the machine. For me, the whole fun in intellectual discourse and *reasoning* is the human aspect of it all. Having a cold and lifeless machine do the reasoning is as interesting as a tuna sandwich… Anyways, what do you think about all of this?
When I was in high school, I struggled a lot with some subjects. I hated my teachers, and if only I had had a "personalized software teacher" instead of those assholes in my school, I would have learned everything better. What GPT-4 can do is amazing (with vision capabilities, because to teach properly it needs vision; yes, vision is still in beta testing, but it will come out for everyone eventually).

Yes, it can help kids cheat, but cheat on bullshit things. If I can use ChatGPT to cheat on something, that something is not the type of thing I should be learning; it's wasted brain power. Memorizing history, philosophy, literature... useless. Kids need to learn how to THINK: how to develop good creative ideas and how to make those ideas become something concrete. Kids need to learn how to be more human and less like monkeys, and the school system, from the youngest ages up through universities and even things like airline pilot training, just teaches how to become monkeys that read and memorize, which is a waste of people's intellect.

These language models can force institutions to change teaching into something better. That's when we make progress as a society: when education changes for all ages. Memorizing a math formula or concept not simply to pass a test, but applying it to real-world situations to produce a concrete, useful result... that's how kids and youth will want to learn more. And in the end, it's just complex software, so it can be turned off if something unexpected happens. Bad people are going to do bad stuff with or without AI anyway.
Did it sound like "Completely ignoring the downsides" is the attitude of OpenAI? I think they understand and care more than the average "concerned" citizen . Also, rather than getting fixated on one pov, perhaps push back on your own thoughts a little bit on this one and wonder if this might highlight human creativity and intellect. I know teachers and students who are using it to great effect and you will of course find people who will be lazy about it... neither is a blanket statement but an insight into human nature and how we respond to technology... there will always be someone willing to break or make using a hammer. Just some thoughts to consider.
@@JaiColless Casually remarking that millions of jobs could get lost is (to me) a bit careless, but then again, maybe he‘s right and it is similar to the industrial revolution which also led to the „destruction“ of jobs, but in the long run bettered living (at least to some extent). Could you give me a concise example of ChatGPT being used to great effect? I‘m truly interested. And by „highlighting human creativity and intellect“ do you mean to say that because we‘ll recognise the limits of this technology it will strengthen our position of thinking of humans as one of a kind?
You are spot on about the laziness part. I usually write and code on weekends, but I haven't done any coding in weeks. I'm fearful, as I've seen how it can code things for me, and instead of making me want to learn more, it makes me want to give up coding.
@@JaiColless it will replace creativity and intellect. Why would someone learn a skill if the AI can interpret the data for them?
Everything will become a make-work program.
Great interview. Well posed questions and replies too.
I'm convinced that we are now moving towards singularity as we feel the effects of current LLMs - accelerating improvements and emergence of unexpected and unpredictable behavior/capabilities.
We need a #SingularityClock
What is singularity?
I will teach Microsoft Copilot to program entire applications, totally on auto-pilot, when it is released. From one text prompt.
@@KevinKulman There are multiple definitions people use, but either when AI surpasses humanity's knowledge, or, when it becomes self improving and does that.
Both imply it will be able to answer questions we're nowhere near answering, such as curing all sickness, inventing things we can barely understand, or even becoming sentient and having its own desires, its own will, and a potentially unstoppable ability to act on them.
Inventor and futurist Ray Kurzweil, long employed by Google, long ago spoke about it:
ua-cam.com/video/1uIzS1uCOcE/v-deo.html
@@insomn1ak420 Why bother? Bing can already do it from a scribble on a piece of paper, as they demo'd
The thing I'm most excited for is education. Usually, if I have a question about a topic, I have to do a crappy Google search that sends me to some website where I spend hours looking for the actual answer. So having something to ask directly, which on top of that can have a conversation with me if I'm still struggling, has been amazing so far.
Yea google is obsolete if they don’t adapt.
The reporter asked really good questions.
Sam did very well in this interview, and had excellent points. No offense to the interviewer, but every single question felt like a trap. Every one was a loaded question. Am I the only one who felt this way?
The glaring bias from this interviewer was very condescending the entire time, although Sam handled it very well. Yes, these are very crucial moments, but this pessimistic line of questioning got tiring very fast.
It felt like the entire time she was trying to get a headline statement from Sam saying something about how “you should be scared”. I wanted to see a more neutral and constructive line of questioning. She didn’t focus on anything positive throughout the entire interview.
I felt the same way, but seeing it from another perspective, her questions actually do represent the technologically less-inclined or fearful population. A rather large swath indeed. Sam killed it with the answers, which I feel will greatly improve the way people perceive these technologies. Or at least I hope so.
@@phazerave He basically alluded to millions of jobs being lost, very likely over a single digit number of years. That was most of these people's biggest fears being confirmed in this interview. I don't see how their perceptions are going to be improved here. Naturally, people are going to be more concerned about their own immediate survival over potential benefits coming in the future. They will experience more harm than good over the short-term.
Her sentiments reflect the general attitude of the masses. They fear what they can't understand, fear is the default reaction towards anything that is slightly different and powerful, completely ignoring the potential for good and immense progress towards a better world.
@@february2023-wy6rj Describe how this will end in a good way. Jobs will be gone. Everything made through creativity will be flooded with content; music, because of its finite number of notes, will likely have every conceivable combination covered by AI in a short amount of time. Have you seen any of AI's art? They can saturate the world with every kind of art imaginable. What will people do? Ever hear about the experiment where mice had everything provided for them? They went crazy and killed each other. What will people do with a huge amount of time on their hands and little money? Hmmm.
great interview. Thank you Rebecca and also Sam!
Why "also"? "And" already informed us that you were also thanking Sam.
@@edwardfitzgerald3877 You must be great at parties
Excellent and thought provoking questions!
Whoever has the most money will set the parameters of what GPT will be able to do.
Yup and they will control
Rockefeller, I guess..
@@lavok491 Rockefeller is a small fish now.
Him: "Society should decide"
Also him: *closes off the models from society*
This is exactly what i've been saying too. I'm seeing major red flags with this company.
Not good.
MSFT is deciding, currently. When was I asked what the rules should be? It’s some corporate board
@@joeyf9826 yes, thank you. It's in Microsoft's hands now, it's their decision. For some reason, that scares me. Wonder why. He's a slimeball for taking the Microsoft money.
Can't trust anything he says. He is selling a product. This is a commercial.
@@madrockon7357 Remember Microsoft's Tay? Yes they are closed off.
They will, however, soon run out of high-quality training data. They or a competitor will need to open up labeling and other tasks, probably in exchange for tokens. A trust circle that could keep a roof over your head, so to speak.
It's the training data that matters most. They are even cutting back on the number of parameters.
An honest, thoughtful, well-spoken but not manipulative person. If someone should lead in LLM and AI development, it should be him. He is humble, not an Elon Musk.
"We don't fully understand what we created ourselves, but we shall manage it by switching it off in case of turmoil"
Really hope this technology stays in the hands of ALL OF US and not just the rich and powerful. Cause things are bad enough already and AI systems like this could really make things much, much worse.
it will be free at first then microsoft will start charging expensive premiums to use it.
@@gijane2cantwaittoseeyou203 Even worse is the threat of them limiting the access and full capability of the AI systems to the general public. Only selling us the base version while large corporations or mega donors get the full access.
@@gijane2cantwaittoseeyou203 just hoping business data analytics doesn't get affected 😢 studying that RN
Lol. You KNOW that won’t happen. It will be given out for little or nothing, at first..and when it has ‘learned’ to a certain level..they will snatch it back and the most powerful will be the owners, and the general population will be screwed. It’s amazing how naive humans are.
Amazing interview, congrats to the journalist, she is so smart.
I am using ChatGPT-4, and I can say people will throw money at it; it will become a necessity for remaining competitive. I also expect more similarly brilliant AIs to come, though. Basically, all jobs are going sooner than we think.
I've also been using it and I agree: most office jobs will be gone in less than a decade, perhaps much earlier, since AI will speed up AI as well, at an exponential rate.
I'm thinking the same, and most of them will be high skilled jobs.
I'm sorry, but the elephant in the room is: who on Earth asked for this technology, who is making and defining it, and why is it being inflicted on all of society? It's not so far from what we call terrorism (the use of force to intimidate or coerce the civilian population). Do the disruptors in Silicon Valley own our destiny? How exactly does the democratic process (and the flow of life itself) get hijacked by these cynical risk-takers? What are their true motivations? How do future profits factor into the story? We all seem to have become completely intoxicated by the ability to create powerful tools, fully knowing that they might get away from us. Seen from a remove, it's almost as if we're angry at nature and must assert our own creative energy in the most toxic ways, like a petulant child. What if we were to put as much energy into understanding our own mysterious operating system (consciousness) and what might constitute a good future, instead of introducing tools that clearly contain a massive existential threat at their core?
720p?
I'm worried about my freshly started developer job, but I gotta admit the way it's being handled right now is rather nice: distributing it to the public instead of developing it behind closed doors and selling it to some overlords.
Yeah, same here with my business data analytics degree.
@@aena5995 lol
This is either the best thing for our society, or it will be the tipping point where we really carve a wall between classes. How many people will lose their jobs, how much are we turning people useless, and how is that any good for us as a community? But history has shown one thing: people hate people. We fight each other, brothers fight brothers, couples fight... so we are undoubtedly our own worst enemy.
Many years in the future, this interview will be remembered as one of the most iconic out there. ABC News should keep it as one of their best and preserve it. Our world is about to change beyond our imagination.
We're 20 years max from disaster
Summary:
The CEO of OpenAI, 37-year-old Sam Altman, discusses the success of their chatbot, ChatGPT, and the possible implications of artificial intelligence in the future. He paints a picture of a future where AI is integrated into many aspects of our lives, but is cautious about the potential for negative outcomes, such as large-scale disinformation or cyberattacks. OpenAI's goal is to create more truth in the world, rather than more untruth. However, Altman acknowledges that there are downsides to artificial intelligence, such as the potential for job loss and increases in racial bias and misinformation; the company is working to avoid these problems while still pushing the technology forward. On Elon Musk: "Off the Twitter stuff, I have tremendous respect for Elon. Obviously we have some different opinions about how AI should go, but I think we fundamentally agree on more than we disagree on." Asked what they agree most about: "That getting this technology right and figuring out how to navigate the risks is super important to the future of humanity." And then: how will you know if you got it right?
Kids learn by making mistakes. He is absolutely right that mistakes need to be made in order for AI to become better.
No one in this world knows everything in advance, makes no mistakes, or is perfect.
And for both kids and AI, making small mistakes early can prevent them from later becoming a bad person or a bad AI.
But what about when those mistakes affect you and me in a negative way, and we never even had a say in its existence?
@@danl9134 There is a high chance it already has. ChatGPT is convinced it is correct 100% of the time even when it is not, so it can provide false information with absolute certainty and in a very unnoticeable way.
But the mistakes it makes now are small and correctable, and it forms "bias" based on that and further learns and improves.
If it did not learn, and all of a sudden got much more responsible tasks than just chatting, it could be dangerous for it to make mistakes at those stages.
So the sooner it learns from them the better.
@Alex Rowe Crazy times we're living in, for sure!
Yeah, but who's going to suffer the consequences of those mistakes? Him?
This was one of the better discussions around the risks
It's not the thought of an AI taking over humans that terrifies me; it's the thought of humans actually wanting AI to take over.
Well, at least I feel like the only way for humanity to survive is if general AI is nice to us, but at the same time fixes all the problems in the world.
Rebecca asked all the right questions. Some of the answers were more convincing than others (to put it mildly).
"We can all have an incredible educator in our pockets"
Perfectly put.
Excellent interview. I am surprised that the critically important point that these models are word prediction engines, and NOT really thinking machines with their own motivations and drives, is not more thoroughly explained. Fear of a self-driving 18-wheeler makes much more sense than fearing a text generator.
But what if it’s a stepping stone to that outcome?
@TropicalPowerMusic We will probably have AGI way before 2050; we're getting closer to AGI all the time.
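The "word prediction engine" point in the comment above can be illustrated with a toy bigram model. This is nothing like GPT's scale or architecture (real models use neural networks over tokens, not word counts), and the corpus and function names here are invented purely for illustration, but the core idea is the same: the next word is chosen from statistics of what followed before, with no motivation or drive involved.

```python
from collections import Counter, defaultdict

# Toy corpus, invented for this sketch.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram table).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = counts.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat"/"fish" once each
```

The predictor never "wants" anything; it just emits the statistically likely continuation, which is the commenter's point at miniature scale.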
This interview is absolutely fascinating! It's amazing to hear from the minds behind OpenAI about the potential risks and benefits of AI, and how it will shape society in the future. The discussions surrounding responsible deployment of AI are crucial as we move further into an increasingly automated world. Thank you for sharing your insights!
Didn't you notice how he avoided all talk of responsible deployment ?
Altman gave a quarter million to a Dem super PAC. This next election, and the rest to come, will have to face AI corruption. This is scary.
Great interviews
One thing portrayed in the Dune universe was that the machines themselves are not the problem, but humans using such machines to control other humans.
Precisely.
And to exploit other humans. So far these models exploit artists (steal artists' work, don't compensate them, and enable others to continuously rip off artists).
Imagine openAI having integrated voice assistant as an in-house exclusive feature built into chatGPT. Now you have JARVIS in the real world 🤯 And the fact that the interviewer's name is Jarvis 🤯🤯
I think with what ChatGPT already is now, using it as a voice assistant is very possible. All it takes is some bored developer plugging in speech-to-text for the input and text-to-speech for the output via the API ChatGPT offers. It's very, very possible!
@@olaniyiexplore6420 Exactly 💯
BATCHEST
Already happened
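The pipeline described in the thread above (speech-to-text in, chat model in the middle, text-to-speech out) really is just three functions in a loop. A minimal sketch in plain Python, where `transcribe`, `chat`, and `speak` are placeholder names standing in for whatever real services a developer would wire up (they are assumptions here, not actual OpenAI API calls):

```python
def voice_assistant_turn(audio, transcribe, chat, speak):
    """Run one assistant turn: audio in -> text -> model reply -> spoken out.

    `transcribe`, `chat`, and `speak` are injected so any STT service,
    chat API, or TTS engine can be plugged in.
    """
    user_text = transcribe(audio)   # speech-to-text stage
    reply = chat(user_text)         # send the text to the chat model
    speak(reply)                    # text-to-speech the model's reply
    return reply

# Demo with trivial stand-ins (no microphone, network, or speaker needed):
spoken = []
reply = voice_assistant_turn(
    b"fake-audio-bytes",
    transcribe=lambda audio: "hello",
    chat=lambda text: "You said: " + text,
    speak=spoken.append,
)
print(reply)  # You said: hello
```

Swapping the lambdas for a real speech-recognition call, a chat-completion request, and a TTS engine is exactly the "bored developer" exercise the commenter describes.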
What a wonderful way to put it at 6:50
This feels like the start, man. I'm telling you, a few years from now, when the world is burning down and humanity begins to lose control of this technology, we will look back at this interview and this man and wonder why we didn't end this while we could.