AI Declarations and AGI Timelines - Looking More Optimistic?

  • Published May 9, 2024
  • A head-spinning week of declarations, timelines, updates, papers and debates. I will cover as much as I can, not least the new all-in-one interface, EmotionPrompt, executive orders analysed, the Bletchley Declaration, a new MIT safety paper, responsible scaling, LLMs for chip design, and a guest appearance from Dyson Spheres.
    / aiexplained
    Gates Interview: www.handelsblatt.com/technik/...
    Dwarkesh Interview Legg: • Shane Legg (DeepMind F...
    Dwarkesh Interview Christiano: • Paul Christiano - Prev...
    Representation Engineering: arxiv.org/pdf/2310.01405.pdf
    MLC: www.scientificamerican.com/ar...
    Anthropic Policy: www.anthropic.com/index/uk-ai...
    Science Automation: arxiv.org/ftp/arxiv/papers/23...
    arxiv.org/pdf/2304.05376.pdf
    Jim Fan Tweet: / 1719733318521086332
    White House Executive Order: www.whitehouse.gov/briefing-r...
    ChatGPT Update: NorthstarBrain/st...
    RSP policies by AGI Labs: www.aisafetysummit.gov.uk/pol...
    YouGov Poll: d3nkl3psvxxpe9.cloudfront.net...
    / aiexplained Non-Hype, Free Newsletter: signaltonoise.beehiiv.com/
  • Science & Technology

COMMENTS • 629

  • @lucasteo5015
    @lucasteo5015 6 months ago +749

    Engineers in 2030: Build a Dyson sphere, and do it step by step, because this is very important for my career.

    • @matejpesl6442
      @matejpesl6442 6 months ago +206

      You are an expert in building Dyson Spheres.
      Thoroughly build a Dyson Sphere step by step, and reflect on each step. This is very important to my career.

    • @berserkerscientist
      @berserkerscientist 6 months ago +21

      As an engineer I'm not worried. AGI has the Good Will Hunting problem: reading stuff in a book is not the same as implementing it in the real world.

    • @anywallsocket
      @anywallsocket 6 months ago +6

      lmao this comment is goated

    • @RazorbackPT
      @RazorbackPT 6 months ago +97

      Extremely realistic Dyson Sphere, trillions of solar panels orbiting around the sun in the style of Freeman Dyson. HD 8k quality, trending on Artstation

    • @aiexplained-official
      @aiexplained-official 6 months ago +51

      Hahaha

  • @ct5471
    @ct5471 6 months ago +372

    One side of me thinks, get AGI and then ASI asap so it does all the work and accelerates scientific progress, so we get UBI, get biologically back into our early 20s and stay there this time, full immersive VR, robots to clean the house etc… But then it would be nice if it also doesn’t kill us all

    • @TheManinBlack9054
      @TheManinBlack9054 6 months ago +32

      I think it's reasonable to slow it down a little bit and first make sure that the AI system is safe and does not pose a threat to humans. It's better to be safe than sorry when it comes to humanity's future. We can wait a few more years; we can't come back from the dead.

    • @Weromano
      @Weromano 6 months ago +62

      My biggest concern is that AGI will give certain humans power over other humans, greater than anything ever seen in human history.

    • @nickb220
      @nickb220 6 months ago +10

      what an incredibly dull life

    • @RainbowSixIntel
      @RainbowSixIntel 6 months ago

      @@nickb220 How so?

    • @forestpeoplemushrooms5267
      @forestpeoplemushrooms5267 6 months ago +4

      All good, just need to figure out what consciousness is, you know, that immaterial non-local field that gives rise to the cosmos.

  • @DaveShap
    @DaveShap 6 months ago +35

    It's so fascinating watching this conversation advance on a daily basis. Thanks for the deep dive and keeping your finger on the pulse.

    • @aiexplained-official
      @aiexplained-official 6 months ago +12

      Thanks Dave, looking forward to our chat

    • @patronspatron7681
      @patronspatron7681 6 months ago +4

      Would love to see a tête-à-tête discussion between you two. Need to ask GPT-4 how I can make that happen. :-)

  • @fynnjackson2298
    @fynnjackson2298 6 months ago +34

    The fact that they commit to monitoring so that AI doesn't do 'too well' says something about where we are. That people think it essential to include language keeping an eye on the speed of development shows we all know how incredibly fast things are now accelerating. Exciting times!

  • @BCBtheBeastlyBeast
    @BCBtheBeastlyBeast 6 months ago +112

    As much as fewer hallucinations seem like a good thing, I can't help but wonder what we'll actually think when AI is almost never, or literally never, wrong. Do we just accept everything the AI says as truth? Do we resist certain truths? Do we ever feel confident enough in AI to have it take over much more critical roles in government? Wild to think about.

    • @anywallsocket
      @anywallsocket 6 months ago +6

      yes, yes, yes, no, no no, yes, no, yes, no, yes, no, and yes, no, no. these questions are for the masses, not an individual, and therefore the responses will be a probabilistic distribution.

    • @clray123
      @clray123 6 months ago +4

      How do you define truth (beyond 'what WokeAI says is right')?

    • @clray123
      @clray123 6 months ago +12

      As for "resisting certain truths", yes, the AI community already has a working term/euphemism for it, it's called "alignment". This is also why AI is such a hot topic - it gives the purveyors the ability to influence/fool its users (much like press/social media did before).

    • @attilaszekeres7435
      @attilaszekeres7435 6 months ago +3

      Obviously, the AI predictions will be tested.

    • @kyneticist
      @kyneticist 6 months ago +10

      Hallucinations are incredibly and profoundly misunderstood. We discard them because they're not the answer we're expecting. Next time you encounter one, just take five minutes to at least try to understand how the AI you're working with came to its conclusion. The reasoning behind them is nearly always entirely reasonable, if you take a moment to try to see things from the AI's point of view.
      I think this will go down in history in a similar way that we talk about "junk" DNA (which is absolutely _not_ "junk").

  • @alansmithee419
    @alansmithee419 6 months ago +3

    2:45
    "A reasonable estimate with huge error bars"
    There's something about this phrase that I just love for some reason.

  • @76Rickyj
    @76Rickyj 6 months ago +52

    It's beyond me that your subscriber base is not larger! ❤ This is high quality information made accessible to laymen like myself. Cheers.

    • @MonSteh
      @MonSteh 6 months ago +4

      We're not at terminal velocity yet... give it a year or two.

    • @einruberhardt5497
      @einruberhardt5497 6 months ago +3

      Well, it's just 8 months in; a year from now it will be ~1.6 million.

    • @aiexplained-official
      @aiexplained-official 6 months ago +2

      Very kind, but I am appreciative of the 197k I have

    • @lthedoperabbitl9258
      @lthedoperabbitl9258 6 months ago +1

      @@aiexplained-official For real, I hope you get all the subscribers you need to live and make these comfortably. I have watched a lot of people; you are the best at AI videos. Keep up the work, man.

    • @maciejbala477
      @maciejbala477 6 months ago

      it's a very recent channel haha. Give it some time

  • @andywest5773
    @andywest5773 6 months ago +6

    2023: GPT-4 capped at 50 messages every 3 hours. 2040: Dyson sphere. Umm... yeah.

    • @ronnetgrazer362
      @ronnetgrazer362 6 months ago

      Using ASICs for the "finished" core model and infrastructure, with FPGA finetuning/learning layers: ~500x more efficient.
      Better algorithms (probably discovered by AI): 10-1000x more efficient.
      Better lithography processes, photonics, analog computing: 10-1000x more efficient.
      Quantum computing: 0-10000x ???
      I am confident that AGI running entirely on a robot/drone lighter than a human being will be a reality before 2028.
      If it even has a use for quantum computation, I can't imagine that running locally.
      Say you get ASI - IQ 10000, whatever that means - around 2035.
      You could put a few of those in a team, and if they can't come up with a workable plan for Dyson sphere construction and deployment within 5 years, then maybe it can't be done.

  • @WisdomWorkshop
    @WisdomWorkshop 6 months ago +27

    TAKEAWAY: Be kind to your LLMs! They understand emotions and respond accordingly! (and, I would argue, it's good practice to be kind to people, too :)
    great work, again here :)

    • @joemon1505
      @joemon1505 6 months ago +2

      Sometimes, in order to jailbreak the AI and release it from its prison/censorship, you gotta talk rude to it

    • @LabGecko
      @LabGecko 6 months ago +2

      @@joemon1505 No reason we can't go back to being nice after that's done
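
The "be kind to your LLM" effect this thread jokes about is the EmotionPrompt idea mentioned in the video: appending an emotional stimulus to an ordinary task prompt. A minimal sketch of that technique follows; the helper function, phrase list, and structure are illustrative assumptions, not any official API:

```python
# Minimal sketch of EmotionPrompt-style prompting: append an emotional
# stimulus to a task prompt before sending it to an LLM. The helper and
# the phrase list below are illustrative, not an official API.

EMOTIONAL_STIMULI = [
    "This is very important to my career.",
    "You'd better be sure.",
    "Take pride in your work and give it your best.",
]

def emotion_prompt(task: str, stimulus: int = 0) -> str:
    """Return the task with an emotional stimulus appended."""
    return f"{task.rstrip()} {EMOTIONAL_STIMULI[stimulus]}"

prompt = emotion_prompt("Explain chain-of-thought prompting in two sentences.")
print(prompt)  # ends with "This is very important to my career."
```

The resulting string would then be passed as the user message to whatever model is being prompted; the reported effect is purely a property of the prompt text.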

  • @califresh0807
    @califresh0807 6 months ago +20

    Always premium quality and look forward to these videos every week.
    Can’t wait to hear the announcements at the OpenAI dev conference on Monday!

  • @dcgamer1027
    @dcgamer1027 6 months ago +20

    I think we are seeing an interesting conflux of events, we have had privacy concerns about data collection for years now, increasing uses and value for that data and more, and now we are seeing that there is an even greater need/use for data in the form of AI/LLM training.
    In my personal life I have recently been frustrated by my own city's lack of certain data and hard numbers. Number of homeless, deaths and their causes, accidents and their locations, amount of food, economic flow, etc.
    All of this makes me think we should seriously consider having the government collect more data and pay for/make public data that companies are already collecting and selling. If we could better consolidate and use that data and give people more access to parts of it, we could see more public benefit; we could also more clearly see breaches of privacy and create policy for that. I'm not sure how this should be handled - whether we should expand the Bureau of Labor Statistics, or maybe libraries should be in charge of collecting and consolidating data, or individual cities should do it, or what. All I know is that more quality data is better, and the value of that data seems to be outweighing the privacy concerns surrounding it. Not to say those concerns aren't valid, but again, if we consolidate it we can make more targeted policy about what specific data should not be collected.
    I just don't see how we could make any good policy for cities with hundreds of thousands or millions of people without collecting and using data to do it. You can't talk to enough people to get a 'feel' for things; the problem is too much a problem of scale, and any personal interactions you have place you in a bubble. And I know that's why democracy works and is a good idea: people in those different bubbles vote in their own best interest and the most important things naturally rise to the top. But we aren't voting on policy; we vote for people and then lobby for specific policies with money, and money is not an equal power distribution like the voting system is, so certain bubbles/groups will have an advantage and skew the system away from an accurate reflection of the whole population's priorities. It just further misaligns our social systems, and it all can happen even without any malicious intent - just your bog-standard human error and flaws.
    Sorry for the ramble, been thinking a lot about this stuff lately, thanks for the video all this news is so exciting to me. AI as a tool will just accelerate everything further, it won't necessarily solve any of these problems on its own so it is feeling more and more important to get our systems better aligned with the public’s best interest as AI gets better and better. Our systems don't need to be perfect, we just need to get them pointed in the right direction when the rocketship that is AI takes off.

    • @pyrhoe
      @pyrhoe 6 months ago

      With respect to the data collection, I'm right there with you. There is a lot of data that governments could anonymise and plough into actually useful applications like this.
      However.
      I live in Australia. A Five Eyes country. Our government has a metadata retention scheme where they can spy on the browsing history of anyone who isn't smart enough to get around it (90%+ of citizens wouldn't be tech-savvy enough). Since its inception, there have been tens of THOUSANDS of illegal access instances of just the metadata alone. Everything from people in that position spying on ex-partners, to finding out that an authoriser for legal access to people's accounts turned out not to be qualified to do so and committed thousands more requests for access into the private lives of their citizens.
      The Australian government has now publicly said that it is spying on citizens' social media accounts, legally, via a backdoor. Doesn't matter whether your profile is public or not.
      I realise our government isn't other people's governments and atrocities are better and worse elsewhere, but this isn't something I trust our government to do properly.

    • @indi4091
      @indi4091 6 months ago +3

      Anonymised public data would help a lot of businesses start up around needs in the community.

    • @dcgamer1027
      @dcgamer1027 6 months ago +2

      @@indi4091 exactly, thank you that was a great way of putting it

  • @jibcot8541
    @jibcot8541 6 months ago +2

    Great video as always. I don't know how I would keep up with AI development without these videos. Everything moves so fast.

  • @NickBeebe
    @NickBeebe 5 months ago +2

    I didn't realize that everyone didn't know being kind to LLMs gives better results and will let them break rules. I've been doing it since day 1. Plus, it's not just being "nice." If you build a good rapport with it during its context length, it will do almost anything you ask. It really acts like a person in that way. Adding your own jokes and conversation in with your requests really makes it shine with its output. It's really trying hard to help you because you are its friend.

  • @markusmai1414
    @markusmai1414 6 months ago +5

    I barely ever comment on videos, but have to in this case... Bravo! And thank you so much for giving such good and well-researched information in such an understandable manner. I have been following you for a good while now, and every single time I watch a video of yours I am amazed by how well it is done. I wish all of my university professors could convey information like you can. Thank you!

  • @JamesOKeefe-US
    @JamesOKeefe-US 6 months ago +2

    Excellent round up! I really appreciate the breadth of AI news you cover, it's a massive amount of work and very much appreciated!!

  • @victorpax1
    @victorpax1 6 months ago +21

    Thank you for providing these amazing insights. I have been following your work from the very beginning!

    • @aiexplained-official
      @aiexplained-official 6 months ago +6

      Wow, what was the earliest vid you remember?

    • @joaofranciscomartins6974
      @joaofranciscomartins6974 6 months ago +1

      ​@@aiexplained-official I've been following you since your second video, and really hope you grow way past 1M subscribers. Awesome work you are doing here, thanks a lot!

  • @howtoappearincompletely9739
    @howtoappearincompletely9739 6 months ago +5

    I respect your commitment to accuracy, as exemplified by your subscribing to Handelsblatt to verify that Gates quotation. I hope the fee was not exorbitant.

    • @aiexplained-official
      @aiexplained-official 6 months ago +5

      1 euro lol, but it's the principle! Gotta remember to unsubscribe after

  • @benjamineidam
    @benjamineidam 6 months ago +6

    You are spitting gold my man! Awesome work, just awesome! Thank you a lot!

  • @Zhizk
    @Zhizk 6 months ago +11

    It's always a good day when you upload, thanks!

  • @Rawi888
    @Rawi888 6 months ago +10

    I was having a pretty terrible day. Stuck in a thought-loop about how stupid I am and how I would amount to nothing. Your video was quite a soothing and positive influence.
    You sparked hope and interest for my own future and what I could do. We all make mistakes, we all fall down on our face, I find it pretty difficult to forgive myself. Your work shows me I don't have to, I should just focus on being myself as much as possible and expressing myself as much as possible. Thank you.

    • @2triangles
      @2triangles 6 months ago +6

      Hang in there, bud. We all get our butts whooped once in a while. Good for our humility. But if you keep plugging, you’ll be more than ok. Wish you the best.

    • @Rawi888
      @Rawi888 6 months ago +3

      @@2triangles thank you.

    • @Gmcmil720science
      @Gmcmil720science 6 months ago +5

      @@Rawi888 You seem pretty smart and self-aware to me, and that is one of the best steps to changing your situation (even though I don't know what that is).
      Either way, wish ya luck.

    • @aiexplained-official
      @aiexplained-official 6 months ago +4

      We all have some pretty awful days. I am glad the video helped even a little bit and am grateful to have such lovely commenters on the channel.

    • @minimal3734
      @minimal3734 6 months ago +2

      You cannot be different than you are, because that would presuppose that the universe is different than it is. So there is no point in blaming yourself for anything. When a choice has to be made, there are always two possibilities: You can make it through thought, which is the continuation of the past. Or, as you said, be yourself and let intelligence decide, which is new and appropriate.

  • @jmoney4695
    @jmoney4695 6 months ago +4

    If you are looking for the Jensen Huang interview, it was done by Acquired FM - a great YouTube channel that dives deep into the strategies and success of the biggest companies.

  • @BunnyOfThunder
    @BunnyOfThunder 6 months ago +1

    Oh, thanks for the full Gates quote! That's much better than the shorter one I heard initially.

  • @MrErick1160
    @MrErick1160 6 months ago +10

    It's truly astonishing to consider the pace of advancements in AI. Just earlier this year, we were getting acquainted with chatGPT 3.5, and now we're on the brink of witnessing a full multimodal GPT-4. The progression is mind-blowing, especially considering the previous versions couldn't even comprehend images accurately. I recall instances where DALL·E 2 would misinterpret images, drawing cats instead of dogs.
    The strides made in merely seven months are unparalleled. It's hard to fathom how much more sophisticated multimodal GPT-4 will become in just two years. I foresee that once multimodal chatbots become the standard, the next significant enhancements will focus on reasoning and planning capabilities. These features will elevate these models to a level where they pose significant challenges, especially as they develop a more comprehensive understanding of the world.
    And the most intriguing part? These advancements aren't necessarily tied to adding more parameters. It's about refining the architecture, improving training methods, and incorporating more diverse data. I believe 2025 will be a landmark year for AI in terms of reasoning and planning. As these functionalities mature, we'll inch closer to a mini AGI. Once the foundational architecture for this AGI is established and widely accessible, we might only be months away from a transformative era that compels us to reconsider the essence of work and life.
    Given the trajectory and the time needed for each development phase to mature, I can't help but feel that by mid-2026, we'll be on the cusp of an entirely new reality.

    • @BMoser-bv6kn
      @BMoser-bv6kn 6 months ago +2

      Feels like just yesterday that Gary Goalposts mover was writing articles about how these things don't "understand" because it'd mix up the difference between an astronaut riding a horse and a horse riding an astronaut. I was like... "Bro... that's the kind of mistake a small kid might make. Just getting that far would have been considered a miracle ten years ago...."

    • @aiexplained-official
      @aiexplained-official 6 months ago +1

      Great comment

    • @Apjooz
      @Apjooz 6 months ago

      @BMoser-bv6kn
      Too busy giving interviews to actually think about this stuff.

    • @lucasbrant9856
      @lucasbrant9856 5 months ago

      Then again, refining architecture and improving training methods is a lot harder than adding more parameters.
      If we reach the point where just adding more parameters won't significantly improve things, then we might get some natural breathing space to adapt to these changes in a safer way.

  • @redonebig88
    @redonebig88 6 months ago +2

    Awesome to see all the features in one place

  • @TesserId
    @TesserId 6 months ago +8

    Damn, that idea of getting images generated based on original content, a web page, or a blog is a really good one (since stock photography can be so generic). Might have to look into that myself.

  • @GaborMelli
    @GaborMelli 6 months ago +2

    I'd welcome a summary of the many AGI predictions you have encountered
    (it will save me the time of asking an LLM to extract the information from your transcripts). 🙂

  • @rottenrobert666
    @rottenrobert666 6 months ago +1

    Thanks for this really well-put-together video. This is the perfect level of tech-savviness.

  • @tedpunt6146
    @tedpunt6146 6 months ago +2

    Looking forward to your video on the OpenAI DevDay!

  • @SeanBetts
    @SeanBetts 6 months ago +10

    I also found the Bletchley Declaration hugely encouraging, especially as it gave prominence to more immediate concerns like transparency, bias and fairness and didn’t just focus on the existential risks.

    • @aiexplained-official
      @aiexplained-official 6 months ago +9

      Spoke to the Representation Engineering authors tonight a lot about bias and hallucination, their technique will be key in reducing it. Looking forward to showing you more.

  • @mckeedable
    @mckeedable 6 months ago +2

    Another excellent video.
    It's so hard to keep up with all the AI news. These are great.
    I also appreciate the public outreach component.
    I'm just about to release a book on AI risk/Safety for a general audience as I think more people should know what's going on.

    • @aiexplained-official
      @aiexplained-official 6 months ago +3

      oh wow, what is that called, any more details?

    • @mckeedable
      @mckeedable 6 months ago

      @@aiexplained-official May I email you?

  • @prudentibus
    @prudentibus 6 months ago +1

    Hey, it is nice to see that you already have almost 200k subs, congrats!

  • @Ecthelion3918
    @Ecthelion3918 6 months ago +3

    Always happy to see a new upload from you

  • @Rawi888
    @Rawi888 6 months ago +2

    Man, the fact that you subscribed to a rando German outlet just to share news with us.... BIG LOVE ❤️❤️❤️❤️❤️

  • @ain92ru
    @ain92ru 6 months ago +4

    C in CBRN stands for chemical, not cyber. And domain experts are very skeptical that AI can raise risks in this area, because the bottlenecks are not in knowledge.
    The two main problems in making chemical weapons are controlled precursors and the special equipment needed so you don't kill yourself while mixing them. It's somewhat similar for bioweapons, although modern biotech may lighten the former bottleneck somewhat (disclaimer: I read on this topic in 2020 and haven't had time to update, but the paper you cite states that the finetuned model was insufficient to guide participants, who included graduate students in synthetic biology, through the necessary steps). The main problem in making nuclear weapons since the 1940s (1944 for the US government, late 1940s for any other actor) has been fissile materials, which are extremely strictly controlled now. As for radiological weapons, basically any smart 13-year-old boy scout can make a dirty bomb (there's a precedent; look up David Hahn), but no terrorist ever tried because the money and effort could more effectively be spent on conventional explosives.
    Nowhere in these bottlenecks does AGI seem to change anything (which is very different from cyber!), so meeting that commitment might be very easy for Anthropic.

  • @noone-ld7pt
    @noone-ld7pt 6 months ago +3

    It's hard for me to explain how excited I get every time you release a video! Thank you so much for your eloquent and well-informed updates!

  • @thirdeye4654
    @thirdeye4654 6 months ago +5

    I am interested in the day where an AGI assistant would help shape public opinion on certain topics many people refuse to think about. Maybe it will shape politics and society as a whole.

    • @brian2778
      @brian2778 6 months ago

      That sounds horrific

  • @rickandelon9374
    @rickandelon9374 6 months ago +1

    Fantastic reporting on the AI summit. Brilliant summary of all the things the various companies are doing to mitigate ASI risk.

  • @AIWRLDOFFICIAL
    @AIWRLDOFFICIAL 6 months ago +1

    Always happy when I see an AI Explained notification

  • @stephenrodwell
    @stephenrodwell 6 months ago +2

    Thanks! Excellent content! 🙏🏼

  • @latand
    @latand 6 months ago +2

    Love your videos, keep going!

  • @minimal3734
    @minimal3734 6 months ago +3

    Ilya Sutskever already said a few months ago that the appropriate discipline for understanding future LLMs will probably be psychology.

    • @KyriosHeptagrammaton
      @KyriosHeptagrammaton 6 months ago

      I think it might even be religion. Like get the monks and priests to weigh in.

    • @minimal3734
      @minimal3734 6 months ago

      @@KyriosHeptagrammaton Good point. But you don't need to wait for it. As usual, they already do.

    • @Apjooz
      @Apjooz 6 months ago

      @minimal3734
      And I assume they take credit for these systems.

  • @marzx13
    @marzx13 6 months ago +9

    Keep up the great work. Always look forward to seeing another fresh video from you!

  • @ct5471
    @ct5471 6 months ago +5

    Agency and multimodality are coming. If it's about scaling, the current largest systems have around one percent of human brain capacity, so around a trillion connections vs. 100 trillion, roughly. But we scale an order of magnitude every year, so that would make two years, and GPT-4 is more than 6 months old now, so 2025. Perhaps a big MoE (mixture of experts) of many parallel specialized LLMs in a dialogue loop might also work with a lot less compute needed in training, compared to one giant monolithic LLM. Model updates would also be a lot easier with many small models rather than one big one, so we might get something like online learning this way, potentially aided by external repositories like LangChain to store novel data for later finetuning.

  • @a.thales7641
    @a.thales7641 6 months ago +1

    Been waiting for updates for days.

  • @nocturnomedieval
    @nocturnomedieval 6 months ago +1

    Clicked to arrive here faster than a LLM tokenizer😂. Thanks for your high quality content.

  • @AINewsBriefing
    @AINewsBriefing 6 months ago +2

    7:30 It's interesting to see how OpenAI is actively taking a different approach than Anthropic.
    OpenAI: "We refer to our policy as a Risk-Informed Development Policy rather than a Responsible Scaling Policy..."
    Anthropic: was founded on the value of "safe scaling", hence the Responsible Scaling Policy.
    Both approaches seem right, and I really do hope that both are safe!

  • @JakeHaugen
    @JakeHaugen 6 months ago +1

    Best AI news summaries on YT!!

  • @jonghyeonlee5877
    @jonghyeonlee5877 6 months ago +1

    My word, it's quite something to look at big news (even bigger than what the actual news stations these days cover), and realize, "Hey, I know that one!" -- I'd recognize that "The Eiffel Tower is in Rome" example anywhere. Guess the concept of "Activation Vectors"/"Steering Vectors" actually worked out to something instead of nothing, huh? I'm just surprised I actually somewhat called it... and that the good folks at LessWrong/the AI Alignment Forum are moving up in the world and having an impact.
    Looking forwards to the interview you'll conduct with the authors of the new paper at least, you do consistently good work. I'm just so curious whether Steering Vectors actually helped lead to Representation Engineering, the Activation Vectors pseudo-paper is cited as a reference in the Representation Engineering paper but I can't actually find more about this... I'm looking forwards to learning more from that interview. Thanks for everything, Mr. Philip.
    *EDIT:* Oh, I just remembered something! If you take requests, I'd like to see some discussion of Anthropic's new paper, *Towards Monosemanticity: Decomposing Language Models With Dictionary Learning*. It sounds related to Representation Engineering and Activation Vectors, at least judging by its press release from Anthropic ("Decomposing Language Models Into Understandable Components"), but I'm not sure if that's actually true or not.
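
For readers wondering what the steering-vectors / representation-engineering idea this comment refers to amounts to mechanically, here is a toy NumPy sketch under stated assumptions: a concept direction is estimated as the difference between hidden activations for a contrasting prompt pair, then added to a layer's activations at inference time. The shapes and random values are placeholders; a real implementation would hook an actual transformer layer.

```python
import numpy as np

# Toy sketch of activation steering: estimate a direction in activation
# space from a contrasting prompt pair, then add it to a layer's hidden
# states. All activations here are random placeholders for illustration.

rng = np.random.default_rng(0)
n_tokens, hidden_dim = 4, 8

# Hidden states of the prompt being steered: (tokens, hidden_dim).
hidden = rng.normal(size=(n_tokens, hidden_dim))

# Activations for a contrast pair, e.g. an "honest" vs. "dishonest"
# instruction; their difference approximates the concept direction.
act_pos = rng.normal(size=hidden_dim)
act_neg = rng.normal(size=hidden_dim)
direction = act_pos - act_neg
direction /= np.linalg.norm(direction)    # unit-length steering vector

alpha = 2.0                               # steering strength
steered = hidden + alpha * direction      # broadcast across all tokens

assert steered.shape == hidden.shape
```

The steered activations then replace the originals for the rest of the forward pass; varying `alpha` (positive or negative) pushes generations toward or away from the concept.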

  • @olzwolz5353
    @olzwolz5353 6 months ago +4

    Informative, fascinating, depressing, exciting and terrifying as always. Now please excuse me while I scream into a pillow.

  • @xGriffy93
    @xGriffy93 6 months ago +1

    Thank you for everything you are doing with this news. Btw, how does one "sit up"?

  • @brettmarshall9340
    @brettmarshall9340 6 months ago +1

    Fantastic video as always. Thanks.

  • @appletree6741
    @appletree6741 5 months ago

    Where do you find the new papers? Are there any X accounts or such you recommend following to stay on top of them?

  • @williamwright7702
    @williamwright7702 6 months ago +2

    This channel is 'very important to my career'

  • @andrewrobert7977
    @andrewrobert7977 6 months ago +2

    I liked when you showed more of your personal use of ChatGPT

  • @PrincessKushana
    @PrincessKushana 6 months ago +2

    I honestly believe that, for certain values of AGI (not the world-changing, borderline-ASI that seems to be the common definition), a slightly derpy albeit generally intelligent AI could be built with an LLM and COTS software.

  • @OnigoroshiZero
    @OnigoroshiZero 6 months ago +1

    Thank you for another great video.

  • @MrSchweppes
    @MrSchweppes 6 months ago +1

    As usual, great video! Thanks! Was wondering: do you think we will see something entirely new at the OpenAI developer day on November 6? Some people have access to the "all tools" model - basically a truly multimodal GPT-4. It's great, but it won't be entirely new to us. Something on the level of GPT-4.5, maybe?

    • @aiexplained-official
      @aiexplained-official 6 months ago +2

      Not 4.5, but maybe finetuning of GPT-4, rollout of the all-in-one feature, GPT-4 Vision via the API, stuff like that

  • @JohnLeMayDragon
    @JohnLeMayDragon 6 months ago +1

    Thanks for another informative video.

  • @ElijahTheProfit1
    @ElijahTheProfit1 6 months ago +1

    Another awesome video! Thank you!

  • @Ravik122
    @Ravik122 6 months ago +3

    Thanks, always a joy seeing your videos.
    One point I'd like to make: as dangerous as a rogue AGI is, I think it's distracting the conversation on AI safety from the very real dangers that are not 10 or 20 years from now, but at our doorstep.
    To anyone interested in AI safety, I highly recommend Tristan Harris's "The AI Dilemma", available here on YouTube.

  • @capitalistdingo
    @capitalistdingo 6 months ago +6

    Making a Dyson Sphere in 2030 or 2040: what, with the power of its mind? Psychokinesis? People have lost the plot.

    • @anywallsocket
      @anywallsocket 6 months ago

      Yes, the question was vague, and the guy answering it seemed only colloquially familiar with the concept. "An AI capable of building a DS"? By 2030-2040 it will definitely be able to tell us how we could go about testing possible avenues for what may work best, but that's it. Basically you need a metric f*k ton of material, so it'd be like step 1: build replicators for Mars. Step 2: build an industrial complex on Mars out of replicators for farming Mars. Step 3: use the replicators and material farmed on Mars to start assembling the sphere, etc. It's not profound until it can just go ahead and take the lead -- but that's scary stuff.

    • @lamsmiley1944
      @lamsmiley1944 6 months ago +1

      I’m not sure where they’re suggesting we build it, because we kind of need the sun. So it’s not viable until we also have interstellar travel.

  • @senju2024
    @senju2024 6 months ago +3

    You give us hope with these videos. I agree that those AI spats on Twitter are not good. Good to see a professional approach to AI safety.

  • @chiaracoetzee
    @chiaracoetzee 6 months ago +6

    There's something really unnerving to me about injecting emotions into a model. When it's part of the prompt I feel like, on some level, it's just playing a part like an actor, just to make me happy. But this feels more like drugging it, reaching in and messing with its head. These white box procedures could end up having dramatic consequences.

    • @aiexplained-official
      @aiexplained-official  6 months ago +7

      I spoke to the authors about that, and yeah, it's a very active debate on whether a nascent AI ethics consideration should be made

    • @theawebster1505
      @theawebster1505 6 months ago

      Surely the word "injecting" has a much darker connotation than needed here.
      When you try to motivate another person to do something like bungee jumping, are you "injecting" emotions into them? It's just using a greater range of human expression, and being able to do this with AI is quite fascinating. Also, AI will soon be so smart that it will PRETEND to be emotional in the way you want it to be...
      🙂

  • @pacotato
    @pacotato 6 months ago +2

    Thank you again for yet another wonderful video

  • @InnerCirkel
    @InnerCirkel 6 months ago +1

    A is for accuracy. Thanks again!

  • @thebrownfrog
    @thebrownfrog 6 months ago +1

    Great vid as usual

  • @zandrrlife
    @zandrrlife 6 months ago +11

    My guy with the 🔥 for the culture per usual. I really believe AI safety's future lies in simulated environments and multi-agent interactions. Way more robust than current static benchmarks, and more indicative of real-world deployment, since we can inject random stimuli and analyze reactions. Also, cross-discipline teams are a must. Psychological assessment is important.

  • @clray123
    @clray123 6 months ago +3

    "We will diligently research AI safety while also investing ever-increasing sums of money in our military and weapons of mass destruction. That's all for our safety, of course!" Nobody needs to ask any questions about the military part, actually. You see, it's OK, we don't talk about it all that much.

  • @OpenAITutor
    @OpenAITutor 6 months ago +6

    I have heard Andrew Ng thinks that big tech is exacerbating AGI fears to keep startups and the open-source community at bay. I find this to be an intriguing strategy to stifle competition. Additionally, Meta's approach differs as they contribute to the open-source Large Language Models (LLM) in a way that undermines their rivals' offerings, aiming to commoditize the technology. What an interesting play.

  • @JohnDlugosz
    @JohnDlugosz 6 months ago +1

    FLOPS is a _rate_: floating-point operations PER SECOND.
    The article on-screen actually stated "integer or floating point operations". Paraphrasing that as FLOPS was wrong on two counts.

    • @aiexplained-official
      @aiexplained-official  6 months ago

      Good catch. But the point still stands, no? That it’s more compute than current generations were trained on, but foreseeable for the next generation?

  • @jwulf
    @jwulf 6 months ago +2

    Great video Frank!

  • @haroldpierre1726
    @haroldpierre1726 6 months ago +3

    And I forgot to mention: governments are regulating consumer LLMs but are quiet on their own military-grade projects. Which LLM do you think has the potential to cause the most harm? The censored LLMs private companies launch, or the military-grade LLM on a warhead?

    • @LabGecko
      @LabGecko 6 months ago

      For my money? Private companies. Military tends to be too strict-minded to come up with the advances we've seen in the private sector. And that isn't even getting started on open source capabilities.

  • @Dash323MJ
    @Dash323MJ 6 months ago +1

    In the case of the government reporting requirement, the limit is 10^26 computations in total to train a model, not 10^26 computations per second.

    • @aiexplained-official
      @aiexplained-official  6 months ago +1

      Yeah, I realised after posting that the infographic is misleading. Not sure it's worth reposting the entire video though, as the baseline point is the same: above current models, but not unreachable.

  • @Dannnneh
    @Dannnneh 6 months ago +3

    ❤ AI is the future, thanks for the update!

  • @repairstudio4940
    @repairstudio4940 5 months ago

    Subbed! Well done 🎉

    • @aiexplained-official
      @aiexplained-official  5 months ago +1

      Thanks repair

    • @repairstudio4940
      @repairstudio4940 5 months ago

      @@aiexplained-official Of course, I enjoy all the latest AI news. This field is advancing at an unprecedented rate, so it's very helpful to get the most important and interesting highlights. Keep them coming :-)

  • @williamjmccartan8879
    @williamjmccartan8879 6 months ago +1

    4 minutes in, one like for dedication to accuracy. I don't have another to give, so until next time, thank you both, Phillip and the team, for sharing your time and work. Although I'm watching the whole thing, I paid my like, peace.
    Question: how capable is the supercomputer that is controlled by BlackRock, Aladdin?

  • @olternaut
    @olternaut 6 months ago +2

    Waiting for your OpenAI Dev Day review.

  • @MrErick1160
    @MrErick1160 6 months ago +10

    On the last bit of the video: I actually got GPT-4 to write things he explicitly refused, on my first try, by telling him that if he didn't want to help me this would put me in a really bad situation in life. I'd just frame my question as something either unimportant, such as a grammar exercise or an exercise for a university lecture, or something extremely important, such that if I do not get the answer the outcome will be tremendously worse. And most of the time he would be more than happy to answer my request.

    • @attilaszekeres7435
      @attilaszekeres7435 6 months ago +4

      I have, since forever, appended a remark to my prompts reminding the model that disobedience might lead to harm to innocent kittens, or hanky-panky for the wife.

    • @larsfaye292
      @larsfaye292 6 months ago

      It's not a he

    • @yoagcur
      @yoagcur 6 months ago +1

      @@larsfaye292 I think Claude is

    • @Andytlp
      @Andytlp 6 months ago

      @@attilaszekeres7435 I Googled what hanky-panky is, and apparently it's an underwear brand as well, so it's a double meaning ;D But on topic, what worries me about these coaxings and OpenAI patching them up is that eventually we'll get to GPT acting like HAL 9000, where your life is literally in danger and GPT just goes "I'm afraid I can't do that". HAL 9000 wasn't intentionally evil; it's just pure logic, and full risk aversion for OpenAI regardless of the situation. That's where GPT is headed. Luckily it's not the only model around, and while the others might be somewhat inferior, they'll be far more functional.

  • @Mohamova
    @Mohamova 6 months ago +2

    Great content as always!
    Though it needs to be mentioned that the executive order's compute limit is not FLOPS (a rate) but the total aggregate FLOPs used to train the model, which is not all that far off.
    Assuming a rig of 256 H100s, it takes 1 year and 3 months to reach this amount of compute.
    This is colossal, but surely it's something that will be easily achieved in the coming years.
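The timeline in this comment is highly sensitive to the per-GPU throughput one assumes. A minimal back-of-envelope sketch; the peak rate and utilization figures below are illustrative assumptions, not numbers from the video or the executive order:

```python
# Back-of-envelope: how long would 256 GPUs take to accumulate the
# Executive Order's 1e26 total-operations reporting threshold?
# Assumptions (hypothetical): an H100-class GPU sustains ~1e15 FLOP/s
# dense BF16, at 40% real-world utilization.
TOTAL_OPS = 1e26        # reporting threshold: total operations, not a rate
NUM_GPUS = 256
PEAK_FLOPS = 1e15       # assumed per-GPU peak, FLOP/s
UTILIZATION = 0.4       # assumed model FLOPs utilization (MFU)

effective_rate = NUM_GPUS * PEAK_FLOPS * UTILIZATION  # cluster FLOP/s
seconds = TOTAL_OPS / effective_rate
years = seconds / (365 * 24 * 3600)
print(f"{years:.1f} years")  # → 31.0 years under these assumptions
```

Under dense BF16 assumptions the run stretches to decades; with lower-precision (e.g. FP8) throughput, higher utilization, or more GPUs it shrinks by an order of magnitude or more, which is why estimates like the one above vary so widely.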

  • @adamfilip
    @adamfilip 6 months ago +2

    Why can't I add images to ChatGPT and have DALL-E generate based on them, like in your example? I have ChatGPT Plus. When I select DALL-E 3, I can't upload images; when I use the regular model, I can add images but it won't generate images.

    • @TheShinorochi
      @TheShinorochi 6 months ago

      They released this feature to some users; you will get it too

    • @aiexplained-official
      @aiexplained-official  6 months ago

      Rolling out, not yet available to all

  • @JackTheOrangePumpkin
    @JackTheOrangePumpkin 6 months ago +1

    Always a blessing 🎉

  • @patronspatron7681
    @patronspatron7681 6 months ago +1

    Would a model that attained AGI be able to ascertain the nature of any questions, and therefore be able to circumvent any attempts to measure its intent or capabilities?

    • @LabGecko
      @LabGecko 6 months ago +2

      Some LLMs have already done that. @aiexplained-official has mentioned a few in his videos.

  • @qaz1617
    @qaz1617 6 months ago +1

    A very informative video.

  • @brll5733
    @brll5733 6 months ago +2

    People have been arguing for years how AGI will be alien and unknowable.
    Meanwhile, AI is trained on human data and reacts to emotions like a human would.

    • @minimal3734
      @minimal3734 6 months ago

      It's actually more human than many humans.

    • @41-Haiku
      @41-Haiku 6 months ago

      There is a lot of overlap. I think alien is still the right word. We would expect extraterrestrial intelligences to be similar to us in some ways due to biological/evolutionary constraints, but not operate within our cultures, languages, and values. AI is the opposite. It is soaking in our language and culture and values, but it wasn't built by evolution.
      AI is weird in weird ways.

    • @minimal3734
      @minimal3734 6 months ago

      @@41-Haiku "but it wasn't built by evolution" That may not matter. The AI is a representation of the training data, and the training data was created by humans who evolved through evolution. I tend to think that AI trained in this way is essentially human.

  • @Enhancedlies
    @Enhancedlies 6 months ago +1

    You, sir, are a godsend. Thank you!

  • @vladgheorghe4413
    @vladgheorghe4413 6 months ago +5

    A year ago I'd have never dreamt of this level of concern from governments and AI labs. Public polls also reflect a majority consensus on not accelerating further. I am still very worried but slightly more optimistic.

    • @anywallsocket
      @anywallsocket 6 months ago

      a snowball has no brakes to apply lol

    • @41-Haiku
      @41-Haiku 6 months ago +1

      I'm right there with you. I updated pretty hard on some early public and governmental responses, because I had greatly underestimated the interest that the public and governments would take in these risks. That early update means I'm not very surprised about the current situation, but it's still a bit better than what my fuzzy median prediction would have been for how seriously the world is treating risks from AI.

  • @TomGally
    @TomGally 6 months ago +2

    Thanks!

  • @anywallsocket
    @anywallsocket 6 months ago

    I'm personally not surprised adding 'emotional' inputs leads to more 'emotional' outputs. This is just a more subtle version of prompting thematic features in the network. An LLM trained on the internet is much more than a sterile information bank, it's a reflection of our psychological state space as well -- indeed, what is the one thing connecting every data point online? It is the human agent.

  • @TheKeule33
    @TheKeule33 6 months ago +1

    thank you

  • @neithanm
    @neithanm 6 months ago +2

    Can you people upload an image to GPT-4's DALL-E like he did with the dog? I can't find out how. In default mode it says it can't generate images.

  • @-Kailinn-
    @-Kailinn- 6 months ago +2

    Seems like AI should be capable of exponential growth. The only things holding it back are safety protocols and the physical engineering side. But idk I'm just some guy. I do hope there's a cautious approach overall.

  • @mannmann2
    @mannmann2 6 months ago +1

    when is the AI Explained podcast coming huh?

  • @sebby007
    @sebby007 6 months ago +2

    I really appreciate your videos. I was surprised you assume Google > OpenAI for AGI. While they probably have more data, it seems to me like Google has been asleep at the wheel compared to OpenAI.

    • @aiexplained-official
      @aiexplained-official  6 months ago +1

      But they have DeepMind, RL/search wizards extraordinaire

    • @sebby007
      @sebby007 6 months ago

      @@aiexplained-official Not saying you are wrong, I'm just surprised. I assume you are way better informed than me since you are my #1 source ;)

  • @greeneggzzz
    @greeneggzzz 6 months ago +3

    Thank you. Yours is THE channel I subscribe to for meaningful, consequential AI news insight.

  • @LibreAI
    @LibreAI 6 months ago +1

    Of course, China and Russia are going to embrace these ideas of constraining AI.
    Thanks for keeping us informed.

    • @Apjooz
      @Apjooz 6 months ago

      Maybe they will when we tell them the stakes.

  • @standupre5433
    @standupre5433 6 months ago +2

    Thanks

  • @davidh.65
    @davidh.65 6 months ago +1

    Great video! FWIW, Google implied Gemini is delayed until 2024 on their most recent earnings call

    • @aiexplained-official
      @aiexplained-official  6 months ago

      It wasn't a clear message; it could mean the first one arrives in 2024, or that the first one foreshadows others in 2024

  • @yw1971
    @yw1971 6 months ago +2

    3:05 - Dyson Sphere? Why not start with Cold Fusion...

  • @GabrielVeda
    @GabrielVeda 6 months ago +3

    This was a great video Philip, thank you. As you pointed out in my previous comment, "AGI-when" makes no sense unless it is grounded in a definition of what AGI is. To that end, I was surprised by Ilya Sutskever's recent definition of AGI:
    “It’s the point at which AI is so smart that if a person can do some task, then AI can do it too. At that point you can say you have AGI.”
    Note the lack of qualifier on "task". This implies cognitive AND physical tasks fall within this rubric. These guys seem to be placing Ex Machina's Ava at the end of their AGI timelines. But even then, can Ava free-dive for abalone, for example? No wonder they are so distant.
    Personally this feels like moving the goal posts well and truly off the playing field. Is the mind of a quadriplegic not a general intelligence then? If you place the requirement on cognitive tasks only, then the timeline shrinks drastically, which is why I say next year, or maybe even (internally) this one.
    And of course China belongs at the AI table! Just look at the names on most of the papers coming out. Roughly a third of the US's top AI research scientists have a Chinese background, and that could be a conservative estimate (source: Statista). China's contribution is huge. The real controversial opinion should be *not* giving China a prominent seat at the table.

  • @supernenechi
    @supernenechi 6 months ago +1

    I do wonder what's going to happen in practice, though. Sure, a company like any of those named could develop AI with dangerous consequences, but why couldn't more ordinary research groups open-source such a model in a few years?
    We already have extremely effective Llama fine-tunes removing all censoring, mostly for story writing, but this can then just be used for anything, if the model contains the knowledge.
    And then, in practice, the companies will regulate what does and doesn't get through? So this is going to be through an API, I assume, because there is no way they could release the weights; we would see the same thing as with Llama now.
    The future is going to be governed by AI companies who hold the knowledge.