DeepMind’s New AI Remembers 10,000,000 Tokens!

  • Published 5 Apr 2024
  • ❤️ Check out Microsoft Azure AI and try it out for free:
    azure.microsoft.com/en-us/sol...
    📝 The paper "Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context" is available here:
    storage.googleapis.com/deepmi...
    📝 The paper "Gemma: Open Models Based on Gemini Research and Technology" is available here:
    storage.googleapis.com/deepmi...
    Try Gemma:
    huggingface.co/chat
    I would like to send a big thank you to Google DeepMind for providing access to Gemini 1.5 Pro to test it out.
    Sources:
    / 1760468624706351383
    / 1761113846520131816
    simonwillison.net/2024/Feb/21...
    / 1761459057641009354
    📝 My paper on simulations that look almost like reality is available for free here:
    rdcu.be/cWPfD
    Or this is the orig. Nature Physics link with clickable citations:
    www.nature.com/articles/s4156...
    🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
    Alex Balfanz, Alex Haro, B Shang, Benji Rabhan, Gaston Ingaramo, Gordon Child, John Le, Kyle Davis, Lukas Biewald, Martin, Michael Albrecht, Michael Tedder, Owen Skarpness, Richard Sundvall, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Tybie Fitzhugh, Ueli Gallizzi.
    If you wish to appear here or pick up other perks, click here: / twominutepapers
    Thumbnail background design: Felícia Zsolnai-Fehér - felicia.hu
    Károly Zsolnai-Fehér's research works: cg.tuwien.ac.at/~zsolnai/
    Twitter: / twominutepapers
    #deepmind
  • Science & Technology

COMMENTS • 266

  • @HappyHater
    @HappyHater 28 днів тому +252

    What a tiiiiiimeeeee to be aliiiiiiiiiiiiive!

    • @penguinscanfly5796
      @penguinscanfly5796 28 днів тому +5

      What a time to be alive!

    • @Runar-jd5es
      @Runar-jd5es 28 днів тому +2

      Wattba

    • @hegedusuk
      @hegedusuk 28 днів тому

      It gets eeeeven better

    • @SaumonDuLundi
      @SaumonDuLundi 28 днів тому

      Quelle époque où vivre ! [What a time to be alive!]
      (Wait ... It sounds better in English...)
      What a time to be alive!

    • @nientaiho7997
      @nientaiho7997 23 дні тому

      Really what a good time to be alive!

  • @theengineeringmonkey407
    @theengineeringmonkey407 28 днів тому +251

    I love how the goals for this technology are so high that we get disappointed when it takes 1.5 hours for an AI to remember everything from 10 movies.

    • @andreavitale2845
      @andreavitale2845 28 днів тому +16

      Right 😂?!? Society's advancement and the competitive systems we live in ask for it, unfortunately.

    • @krishhhhhhhhhhhhhhhh
      @krishhhhhhhhhhhhhhhh 28 днів тому +19

      In 5 years these numbers will sound ancient. An AI will probably be able to watch a movie in a second.

    • @mgord9518
      @mgord9518 28 днів тому +8

      @@krishhhhhhhhhhhhhhhh Pretty hopeful estimate there; the speed of AI is largely limited by the hardware it runs on, and it's highly unlikely that hardware will be hundreds of times faster within 5 years.

    • @user-rf5lf7cq5e
      @user-rf5lf7cq5e 28 днів тому +1

      True

    • @sownheard
      @sownheard 28 днів тому

      @@mgord9518 The newest Nvidia chip announced is already about 100 times better than consumer hardware.
      It's currently for servers.

  • @KAMI_24
    @KAMI_24 28 днів тому +88

    I wish someone would feed an AI heaps of dolphin or whale song. Maybe we can actually find a way to understand what they communicate, and maybe even talk back to them.

    • @LucidiaRising
      @LucidiaRising 28 днів тому +42

      there is actually a project going on right now that is attempting to do exactly that - use AI to try to talk to whales 🙂

    • @kindaovermyhead
      @kindaovermyhead 28 днів тому

      Check out Whale-SETI! We actually started doing this!

    • @iminumst7827
      @iminumst7827 28 днів тому +10

      That wouldn't work; you would also need to feed it context about what the creature is doing, and no amount of words can define all the nuance and possible interpretations, so the AI will only be able to learn a simplified version of their basic emotions like fear, happiness, or grief.
      For example, hypothetically let's say dolphins remarkably have unique names and call each other by their names. A human might observe this behavior and describe it as a generic greeting, not knowing dolphins have names. The AI will see that dolphins make random sounds during greetings and assume dolphins greet by making random excitable noise. The only way to actually discover this is through targeted tests; it's not something that can be spontaneously discovered in existing data.

    • @CodyKey
      @CodyKey 28 днів тому +1

      @@dawiedekabouter5733 How ya gonna do an fMRI on a sea creature 😂 let alone a whale?

    • @technokicksyourass
      @technokicksyourass 28 днів тому +2

      @@iminumst7827 The way they are doing it is noting down the things that are happening around the whales at the time, then putting that alongside the communication between them.

  • @kittythepet485
    @kittythepet485 28 днів тому +40

    "Not practical". Yes the practicality of simultaneously analyzing 10 full movies in mere seconds is a shame to be missed. XD

    • @Lerppunen
      @Lerppunen 28 днів тому

      Practicality is good in data analysis. Movies are just an example.

  • @shadowdragon3521
    @shadowdragon3521 28 днів тому +20

    I'd like to see LLMs have a go at learning extinct languages with great historical significance like Sumerian and Akkadian

  • @appsenence9244
    @appsenence9244 28 днів тому +208

    Wtf? It takes me 20 hours to watch 10 movies and then another 10 hours to write an essay about them. You're telling me this AI can do it all in only 1.5 hours??? How is that not an awesome thing?

    • @tuseroni6085
      @tuseroni6085 28 днів тому +40

      I think the issue is that, having watched 10 movies, EVERYTHING it does is now slow, even queries unrelated to those movies. So it's like you watched 10 movies and now you can't hold a conversation, since it takes you 1.5 hours to process anything anyone tells you.

    • @mikopiko
      @mikopiko 28 днів тому +29

      You can write a whole essay in 1 hour? That's incredible.

    • @abanoubmg3698
      @abanoubmg3698 28 днів тому +2

      i think the issue is the response to each prompt would take that 10x

    • @appsenence9244
      @appsenence9244 28 днів тому

      @@tuseroni6085 is that really how it works?

    • @appsenence9244
      @appsenence9244 28 днів тому

      @@mikopiko What do you mean?

  • @tuseroni6085
    @tuseroni6085 28 днів тому +37

    Perhaps the solution to the quadratic complexity is to implement a short-term/long-term memory system with an artificial hippocampus to help it remember.
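
    A minimal sketch of that two-tier idea, purely as an illustration; the class, the keyword-based consolidation, and the buffer size below are made up for this example and are not from any paper:

```python
from collections import deque

class TwoTierMemory:
    """Toy short-term / long-term memory: recent items are kept verbatim,
    older items are 'consolidated' into a compact long-term store."""

    def __init__(self, short_term_size=4):
        self.short_term = deque(maxlen=short_term_size)  # verbatim recent context
        self.long_term = []                              # consolidated gists

    def add(self, text: str):
        if len(self.short_term) == self.short_term.maxlen:
            # Consolidate the oldest item before the deque evicts it.
            self.long_term.append(self._consolidate(self.short_term[0]))
        self.short_term.append(text)

    def _consolidate(self, text: str) -> str:
        # Stand-in for a real summarizer: keep the five longest words as the "gist".
        return " ".join(sorted(set(text.split()), key=len, reverse=True)[:5])

    def recall(self, query: str, k: int = 2):
        # Crude retrieval: rank long-term gists by word overlap with the query.
        q = set(query.lower().split())
        ranked = sorted(self.long_term,
                        key=lambda g: len(q & set(g.lower().split())),
                        reverse=True)
        return list(self.short_term), ranked[:k]

memory = TwoTierMemory()
for event in ["the movie opens in a desert", "a robot finds a red key",
              "the key unlocks a hidden vault", "the vault is empty",
              "the robot meets a stray dog", "they travel to the city"]:
    memory.add(event)
print(memory.recall("what happened to the key"))
```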

    • @mb-3faze
      @mb-3faze 28 днів тому +5

      I'll get right on it! :)

    • @Milkshakman
      @Milkshakman 28 днів тому +6

      Yes people have been saying this for years

    • @Jeremy-Ai
      @Jeremy-Ai 28 днів тому +4

      Yes, and no.
      Sometimes it is good to forget.
      Our brains have evolved to forget.
      I suspect there is a better balance.
      Never forgetting would be a massive burden.
      So yes, short term long term, but not everything.
      It is good to let go without even trying. ;)
      Take care,
      Jeremy

    • @plainpixels
      @plainpixels 28 днів тому +1

      There have been a variety of approaches that improve on this around for quite a while.

    • @MineAnimator
      @MineAnimator 28 днів тому +2

      @@Jeremy-Ai Our memory is based on feelings; intense events are easier to remember, but artificial intelligence has no feelings, so maybe creating that is the next big step to increase its intelligence. That's assuming the goal is to create conscious artificial life and not just more precise tools.

  • @mertaliyigit3288
    @mertaliyigit3288 28 днів тому +11

    "What a time to be alive!" on 10 years we'll start saying "What a time to be dead!"

  • @dingozi3428
    @dingozi3428 28 днів тому +39

    “When you realize ‘Two Minute Papers’ has more plot twists than your favorite TV show, and all it took was a couple of minutes and some groundbreaking science.” 📜✨

    • @killyGHILLIE
      @killyGHILLIE 28 днів тому +1

      imagine the plot twist only two papers down the line😅

    • @Mo_Mauve
      @Mo_Mauve 28 днів тому +1

      Most of it doesn't surprise me because I'm intelligent & optimistic enough to expect most of it, I just watch 2 Minute Papers to find out what currently exists & exactly what it's like.

  • @Zebred2001
    @Zebred2001 28 днів тому +6

    I've said for a while now that this technology should be used in a massive global rare-languages rescue effort.

    • @johnaldchaffinch3417
      @johnaldchaffinch3417 28 днів тому +1

      and solving aging and cancer.
      In an interview, Ilya Sutskever, OpenAI's chief scientist, explained with certainty how to solve climate change with carbon capture. His certainty about the best method made me think that he'd already asked a superior AI for the solution.

  • @islammohamed1441
    @islammohamed1441 29 днів тому +13

    What a time to use AI!

  • @trycryptos1243
    @trycryptos1243 28 днів тому +3

    I like that 'what a time to be alive' line popping up in all your videos... indeed it is.

  • @bjrnolavfrytlogbjrnsen2868
    @bjrnolavfrytlogbjrnsen2868 27 днів тому +1

    You deserve every watcher and all the praise in the world. From video 1 the content is concise and well put together. One of the better channels hands down.

  • @pqpvaisefude
    @pqpvaisefude 28 днів тому +4

    Nice! Now we can build a Solo Leveling AR game with AI :)
    Maybe wearing smart glasses to capture the exercises.

  • @gierdziui9003
    @gierdziui9003 28 днів тому +6

    It's about time to feed such an AI all the necessary books and data about transformer NNs and have it make itself, but better.

  • @gridvid
    @gridvid 28 днів тому +12

    Did you already cover "Suno", the AI music generator?

    • @Khann_2102
      @Khann_2102 28 днів тому +6

      I tried it and it's good

    • @MichielvanderBlonk
      @MichielvanderBlonk 28 днів тому +8

      He did cover that tech like a year ago before it was commercial.

  • @samsabruskongen
    @samsabruskongen 28 днів тому +3

    "aaand"x100000000

  • @blengi
    @blengi 28 днів тому +2

    You'd think mega-context inputs would be reformulated by an executive AI into various higher-level abstractions, like some sort of linguistic computer-code representation of the context that modularises language content into functions, structures, libraries, etc. That way the vast bulk of tokens aren't even needed and are replaced with compact meta-token instantiations with even higher latent-space salience within the newly factored smart context.

    • @Walter5850
      @Walter5850 28 днів тому +2

      Joint embedding predictive architecture aims to do that

  • @fosowl
    @fosowl 28 днів тому

    The analysis of the lifting session and the JSON that lists the repetitions is the most impressive part by far! I thought the Google demo showcasing video understanding was kind of a scam until now. This demonstrates a true understanding of video over time, and that amazes me much more than an LLM that can take a trillion tokens or whatever.

  • @tannenbaumxy
    @tannenbaumxy 28 днів тому +6

    Looking at the problem of processing time with longer context, what do you think about Mamba as a solution for a subquadratic architecture for future LLMs? Maybe you could make a video about Jamba in the future.
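
    For intuition on why state-space models like Mamba scale subquadratically: the sequence is processed by a recurrence with a fixed-size state, so the work grows linearly with length. Here is a toy linear SSM scan (illustrative only; this is not Mamba's actual selective mechanism, and all shapes and values are made up):

```python
import numpy as np

def linear_ssm_scan(x, A, B, C):
    """Toy linear state-space scan: h_t = A @ h_{t-1} + B @ x_t, y_t = C @ h_t.
    Cost is O(sequence_length) with a fixed-size hidden state, unlike full
    self-attention, whose cost grows as O(sequence_length^2)."""
    h = np.zeros(A.shape[0])
    ys = []
    for x_t in x:               # one pass over the sequence
        h = A @ h + B @ x_t     # constant work per token
        ys.append(C @ h)
    return np.stack(ys)

# Tiny example: 1000 "tokens" with 8 input features and a 16-dim state.
rng = np.random.default_rng(0)
seq = rng.normal(size=(1000, 8))
A = 0.9 * np.eye(16)                      # stable state transition
B = rng.normal(scale=0.1, size=(16, 8))
C = rng.normal(scale=0.1, size=(4, 16))
print(linear_ssm_scan(seq, A, B, C).shape)  # (1000, 4)
```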

    • @venaist
      @venaist 28 днів тому +1

      What is all the mumbo jumbo about?

    • @nocaptcha8110
      @nocaptcha8110 28 днів тому +1

      Maybe use them to make the AI do some samba

    • @tannenbaumxy
      @tannenbaumxy 27 днів тому

      @@Faizan29353 I could not find a fireship video about mamba. Did you mean bycloud?

  • @daviddonahue7690
    @daviddonahue7690 28 днів тому +1

    I don't think quadratic complexity is actually a problem. It's only a problem where you want every token to "talk" to every other token. But what we really want is for all the input tokens (e.g. the input documents/movies/etc.) to talk to the output tokens (the model output). Then assume each input token only needs to read the previous K tokens for understanding. For N input tokens and M output tokens, that's O(KN) + O(NM) time, way less than O(N^2).
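
    A back-of-the-envelope version of that argument (the values of N, M and K below are arbitrary, chosen only to make the comparison concrete):

```python
# Rough cost comparison between full self-attention and the commenter's
# "each input token reads only the previous K tokens" scheme.
# Counts are attention score computations, not FLOPs on real hardware.

N = 1_000_000   # input tokens (hypothetical, roughly "several movies" worth)
M = 1_000       # output tokens
K = 4_096       # local window each input token attends to

full_attention  = (N + M) ** 2       # every token attends to every token
windowed_scheme = K * N + N * M      # O(KN) local + O(NM) input-to-output

print(f"full attention : {full_attention:.3e} pairs")
print(f"windowed scheme: {windowed_scheme:.3e} pairs")
print(f"speedup        : {full_attention / windowed_scheme:.0f}x")
```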

    • @vidyagaems4063
      @vidyagaems4063 28 днів тому

      But that requires a new architecture, meaning it's no longer a transformer.

  • @c02c02
    @c02c02 28 днів тому +1

    3:47 Saying this feels either correct or like flying way too close to the sun. For years we were used to Jukebox taking four hours to generate audio that ultimately might not sound great, or even have any audible sound in it...

  • @bobparker1671
    @bobparker1671 28 днів тому

    Hopefully we can get SSM based architectures that have roughly linear token scaling during inference to be as big as Gemini and the like. Perhaps a greedy MoE approach with transformers for short range context and SSM or Mamba for long range.

  • @Brause_Market
    @Brause_Market 27 днів тому

    I love the "aaaaand" so so much ; )

  • @countofst.germain6417
    @countofst.germain6417 28 днів тому +4

    Kind of a lot you got wrong here or didn't explain. When you say one movie, it was an old movie that runs at a few fps; it can't take much video at a reasonable fps, I think it was around 10 minutes. And it can't speak the language as well as a native speaker; it can translate it as well as a human who had the same amount of info, a translation book. I think these are pretty important differences, and that is just what I can remember off the top of my head.

    • @alvaroluffy1
      @alvaroluffy1 28 днів тому +1

      Even if it's an old movie with fewer fps, it still lasts at least 1 hour, so for it to be 10 minutes, modern movies would need 6 times more fps. Modern movies have 24 fps, so being 6 times greater than old movies would mean those old movies have 6 times fewer fps than 24, so 4 fps. I'm sorry, but I don't recall any moment in the history of cinema when movies were 4 frames per second; that's not a movie, it's a PowerPoint. I believe you when you say they had fewer fps, but it could be as little as 12 fps, and it was probably more than 12, maybe 16 or something like that. So we're not looking at 1 hour vs 10 minutes, we're looking at 1 hour vs 40 minutes more or less. And it probably wasn't an hour-long film; it would be 1.5 hours or something like that, so it would be 1.5 hours vs 1 hour.

    • @countofst.germain6417
      @countofst.germain6417 28 днів тому

      @@alvaroluffy1 Many YouTube videos are 60 fps, and that is mostly what this will be used for, not low-fps silent movies. I think it is important to talk about actual use cases, not just hype it up under perfect conditions. So this 44-minute video would translate to slightly over 10 minutes.

    • @alvaroluffy1
      @alvaroluffy1 28 днів тому

      @@countofst.germain6417 He was talking in terms of movies; 10 movies is not 10 hour-long YouTube videos. You are the only one who made that wrong assumption; nobody else is talking about or understanding this video in those terms. He is very clear about it.

    • @countofst.germain6417
      @countofst.germain6417 28 днів тому

      @@alvaroluffy1 As I said, he should be talking about actual use cases and not hyping it up under perfect conditions; there is very little reason to use this on movies. Also, my point about the translation is completely valid. As I said, I was doing this from memory; this fool was researching this for a video and made lots of mistakes.

    • @alvaroluffy1
      @alvaroluffy1 28 днів тому

      @@countofst.germain6417 First, I didn't talk about the translation, but he puts up text saying literally that, about someone with the same info from the book. Second, if you can't look at the information provided to you in the terms it is provided, then this is not your channel to watch. If you understand 10 movies as 10 hour-long YouTube videos, then stop watching this channel, or stop making those assumptions. And finally, you're talking like YouTube videos are going to be the main use case, but you have no idea, because no one has any idea; and even if it were true, there will be tens of use cases. This is not going to be one thing that uses videos from YouTube and that's all. You're the fool for making all those assumptions and projecting them onto him. If you can't see things clearly, if you are like the media that can't stop lying about and exaggerating the scientific texts they read, then stop consuming this content, because you are going to constantly misinterpret it and then spread disinformation wherever you go, so just stop being a fool.

  • @SEB1991SEB
    @SEB1991SEB 28 днів тому +1

    In the future AI will be able to consume all the media of a certain franchise (eg. Marvel) so it can help in building the Wiki of that franchise. You’ll also be able to ask it extremely specific obscure questions that even the most diehard fans and prolific Wiki contributors wouldn’t know.

    • @I_SEE_RED
      @I_SEE_RED 28 днів тому

      this is already possible

  • @seriousbusiness2293
    @seriousbusiness2293 28 днів тому +4

    The self-attention over 10 very different movies, though, could easily be made non-quadratic. Why would the movies Oppenheimer and Barbie need to cross-reference one another in the self-attention layer?
    Okay, the details are more complex, but the basic idea of separation holds.
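
    One way to picture that separation: a block-diagonal attention mask, where tokens only attend within their own movie. A toy sketch (segment lengths here are made up, and real implementations use block-sparse kernels rather than dense masks):

```python
import numpy as np

def block_diagonal_mask(segment_lengths):
    """Build a mask where token i may attend to token j only if both
    belong to the same segment (e.g. the same movie)."""
    total = sum(segment_lengths)
    mask = np.zeros((total, total), dtype=bool)
    start = 0
    for length in segment_lengths:
        mask[start:start + length, start:start + length] = True
        start += length
    return mask

# Three "movies" of 4, 3 and 5 tokens: allowed attention pairs drop from 144 to 50.
mask = block_diagonal_mask([4, 3, 5])
print(mask.sum(), "allowed pairs out of", mask.size)
```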

    • @alvaroluffy1
      @alvaroluffy1 28 днів тому +3

      I would say it's just a problem with the current architectures; I guess something better and more efficient will come after transformers and could even solve this problem.

    • @buildgameswithjon7641
      @buildgameswithjon7641 27 днів тому +1

      If you didn't want them cross-referenced, then you wouldn't input both movies, right? You would only input 10 movies at the same time if you want all 10 movies to be considered in your next prompts/responses, for whatever reason that may be. If you just want to interact with one movie, then you just input one movie.

  • @cocolasticot9027
    @cocolasticot9027 27 днів тому

    I mean it may not be the most practical to wait 1.5h for one query, but given that it's for data the size of 10 movies, this is still brutally impressive.

  • @dreamphoenix
    @dreamphoenix 26 днів тому +1

    Wow. Thank you.

  • @PankajDoharey
    @PankajDoharey 24 дні тому

    Mamba long context research is the future.

  • @victorfsaaa
    @victorfsaaa 28 днів тому +9

    Your voice is weird in this video, is it ai generated?

  • @MindBlowingXR
    @MindBlowingXR 28 днів тому +1

    Great video!

  • @totoroben
    @totoroben 28 днів тому +1

    Whenever he says "and" 😂

  • @Joshua-ew6ks
    @Joshua-ew6ks 28 днів тому

    I want to use AI to help my readers ask questions about my books (the ones I will be writing), and also to help me avoid continuity issues and to help with editing my book.

  • @jeffg4686
    @jeffg4686 25 днів тому

    No, it didn't give me a warm and tingling feeling.
    IT STOLE MY DAMN JOB...

  • @nathanaeltrimm2720
    @nathanaeltrimm2720 28 днів тому +1

    Just run them in parallel to get around the 100x quadratic issue.

  • @foxt9151
    @foxt9151 28 днів тому

    1-bit LLM architectures and AI inference cards will probably reduce the inference time by orders of magnitude; see Groq.

  • @fohsen
    @fohsen 28 днів тому

    Can you imagine one integrated LLM that also holds the sources of academic papers of all kinds, so you have not only the data but scientific data? I don't know if I'm explaining it right, sorry for the bad English.

    • @loopingdope
      @loopingdope 28 днів тому

      You mean like scientific analyses and results?

    • @fohsen
      @fohsen 28 днів тому

      @@loopingdope Yeah, I don't know if the state of the art can have its research in these academic hubs.

  • @IncognitoWho
    @IncognitoWho 28 днів тому +1

    If anyone is wondering, "What a time to be alive" is said at 5:40.

  • @lordm31
    @lordm31 28 днів тому

    congrats on the new sponsor mister 2 minutes

  • @Peter_Telling
    @Peter_Telling 26 днів тому

    Thanks!

  • @jonathanmoore4837
    @jonathanmoore4837 28 днів тому +1

    I was thinking: 1 movie takes me at least 2 hours to watch. The AI can watch 10 movies in less time!

  • @GraveUypo
    @GraveUypo 28 днів тому

    Token limitation is by far the worst thing about the local AIs I have. Most of them only remember up to 4096 tokens, and like a third of that is taken up by setup prompts. I'd be happy with 50k tokens, but I really wish they could just retain memory forever, even if vaguely, like we do.

  • @botan1996
    @botan1996 27 днів тому

    I'm unsure about the context levels of AI... I used the GPT 4.5 API with a 128k context window, and it never followed any of my prompts and basically just summarized the book or video transcript I gave it. Basically unusable...

  • @galvinvoltag
    @galvinvoltag 28 днів тому +1

    ChatGPT must be having a hard life.
    Imagine having a new brother or sister everyday...

  • @blacksages
    @blacksages 28 днів тому

    Claude 3 has improved a lot over Claude 2 in the red/green grid accuracy shown at the beginning, though it is still not as good as Gemini. But I would be careful with those cherry-picked results; prompt sensitivity is a very poorly explored subject, and their test is mostly done with a single long document and a piece of information dropped anywhere in there to test its accuracy. I would want to see these results averaged over many documents, but I know it is very costly to do. The Twitter guy spent a few thousand bucks on GPT and Claude 2. I don't even dare imagine the cost of testing a 1- or 10-million-token Gemini.

  • @Oliver-tk4ii
    @Oliver-tk4ii 28 днів тому

    Did you rename the gemini video?

  • @l.halawani
    @l.halawani 27 днів тому

    I'm running out of imagination what we will be able to do two papers down the line...

  • @mikopiko
    @mikopiko 28 днів тому

    Could the advancement with the GPTs help solve the Voynich manuscript?

    • @bluthammer1442
      @bluthammer1442 28 днів тому

      it might, although ...why

    • @realWorsin
      @realWorsin 28 днів тому

      No.

    • @mikopiko
      @mikopiko 28 днів тому

      @@bluthammer1442 To extract the contents of the book.

    • @EricDMMiller
      @EricDMMiller 28 днів тому

      There's no meaning behind it. It's just horseshit.

    • @mikopiko
      @mikopiko 28 днів тому

      @@EricDMMiller How'd you draw that conclusion?

  • @LaBonitaGraphicsAnimated-ft2nu
    @LaBonitaGraphicsAnimated-ft2nu 26 днів тому

    How come AI app creators don't merge 3D game design with AI generators? You would finally have the stability of 3D character design that looks the same from every camera angle and is easy to pose like a 3D model, mixed with the speed of AI-generated scenes and lighting.

  • @john_hunter_
    @john_hunter_ 28 днів тому

    Instead of trying to remember everything, what if it continually summarised the key points of everything it has seen?
    That way its memory would be more efficient at the cost of accuracy.
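
    A rough sketch of that rolling-summary idea; the summarize function below is just a character-budget placeholder where a real system would call an LLM:

```python
def summarize(text: str, budget: int = 80) -> str:
    # Placeholder compressor: a real system would ask an LLM for a summary.
    return text if len(text) <= budget else text[:budget].rsplit(" ", 1)[0] + " ..."

class RollingSummaryMemory:
    """Keep a bounded 'memory' by repeatedly folding new content into a
    running summary instead of storing everything verbatim."""

    def __init__(self, budget: int = 80):
        self.budget = budget
        self.summary = ""

    def observe(self, new_text: str):
        combined = (self.summary + " " + new_text).strip()
        # Lossy compression: efficiency gained, accuracy traded away.
        self.summary = summarize(combined, self.budget)

    def context(self) -> str:
        return self.summary

mem = RollingSummaryMemory()
for chunk in ["Chapter 1: a detective arrives in a rainy city.",
              "Chapter 2: she finds a coded letter at the docks.",
              "Chapter 3: the letter points to an old lighthouse."]:
    mem.observe(chunk)
print(mem.context())
```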

  • @danialothman
    @danialothman 28 днів тому

    claude is still not available in my region :(

  • @user-xu9go9bm2v
    @user-xu9go9bm2v 28 днів тому

    Honestly, learning Kalamang is an impressive feat for an AI. Can't wait for it to be able to talk to aliens (if they teach it) and animals. Imagine the future where you can talk to your dog..... and then it talks back to you. It will be like another wife 😆🤣

  • @smetljesm2276
    @smetljesm2276 27 днів тому

    I foresee that ultimate power over humans will come once a system has an unlimited token window and doesn't treat each instance of communication with us as individual, but is constantly aware of all of our inputs and interactions, and fine-tunes itself on our questions and answers while feeding us single-instance responses. 😅

  • @frun
    @frun 28 днів тому +3

    Is an artificial intelligence twin of Károly Zsolnai-Fehér narrating the video? 😮

    • @TwoMinutePapers
      @TwoMinutePapers  28 днів тому +13

      I have never used any of them - every single episode is me flipping out behind the microphone. 😀

    • @frun
      @frun 28 днів тому

      @@TwoMinutePapers They are able to distill data from NNs: ua-cam.com/video/fk2r8y5TfNY/v-deo.html What's your opinion on that?

  • @PandemoniumLord
    @PandemoniumLord 28 днів тому +1

    yo! can we feed it the Voynich Manuscript to see if it can translate it?

    • @feynstein1004
      @feynstein1004 28 днів тому

      Not sure why you'd want to. It's not like the Manuscript contains the theory of everything or how to beat cancer or something like that.

    • @PandemoniumLord
      @PandemoniumLord 27 днів тому

      @@feynstein1004 You don't know why anyone would want to translate the most mysterious document we've ever found? There are images of plants unknown to science! And the text's characters and language have eluded every attempt at translation. Lots of people want to know what it says, even if it's just a cough medicine recipe!

    • @feynstein1004
      @feynstein1004 26 днів тому

      @@PandemoniumLord Lol you're too easily swayed, my friend. I could write some random gibberish right now and convince people it's a mysterious document with magic powers.
      Anyway, the manuscript is hundreds of years old. Even if it wasn't a joke, what useful information could it possibly contain that we don't already know by now?

    • @PandemoniumLord
      @PandemoniumLord 26 днів тому

      @@feynstein1004 What do you mean, too easily swayed? I'd be satisfied even if the text is mundane. You can't dispute, though, that a successful translation of the Voynich manuscript would be a great accomplishment for AI, and could serve as a benchmark for future AI research.

    • @feynstein1004
      @feynstein1004 25 днів тому

      @@PandemoniumLord That's exactly what I'm disputing. It's just some random piece of information. I think there are better things to spend your brainpower on 😉

  • @davidwrathall3776
    @davidwrathall3776 28 днів тому

    Not available in the UK or EU. 😭

  • @justynpryce
    @justynpryce 26 днів тому +1

    Do you think you'll go back to talking about the developments in simulations and light transport? It seems almost every video is about generative AI now

  • @mstarsup
    @mstarsup 17 днів тому

    5:40 ^^

  • @brianmi40
    @brianmi40 28 днів тому

    I want to have AI create a new language consisting of only around 3-500 words. Easy to memorize with some practice, and then you can speak to someone secretly!

  • @thekaxmax
    @thekaxmax 28 днів тому

    feed it the Voynich Manuscript

  • @jooei2810
    @jooei2810 28 днів тому +3

    When will the AIs be making the papers?

  • @nikhilsultania170
    @nikhilsultania170 28 днів тому

    I tried Gemini 1.5 Pro and it was very underwhelming. It was hyped as the new cutting-edge multimodal AI, but it falls behind Claude 3 Opus and GPT-4 in a lot of areas. Plus it is so slow that its 1M context window becomes almost unusable.

  • @jichaelmorgan3796
    @jichaelmorgan3796 28 днів тому

    Most of the world is operating at a pace and level of progress we were at decades ago. Now, people diving into AI are progressing faster than ever. Do we call this the spaghettification of society?

  • @jackflash6377
    @jackflash6377 26 днів тому

    I bet that will be easy to jailbreak.
    The more tokens, the easier it is to jailbreak.

  • @blazearmoru
    @blazearmoru 28 днів тому

    I am just starting to take CS classes, and I recall n^2 wasn't that bad, but x^n was a big pain. I guess n^2 is still a big pain, especially in AI, because of how complex the thing is, huh? >~<
    99.7% accurate. If you guys don't find obvious tricks, then it's more accurate than human error? That, and people can also fall prey to psych priming?

  • @harryconcat
    @harryconcat 28 днів тому

    It still scares me a lot that instead of watching 10 movies straight, it does it in only 1 hour!

  • @derlumpenhund
    @derlumpenhund 28 днів тому +5

    I personally am already tired of the same old AI-generated thumbnails.

  • @HelamanGile
    @HelamanGile 28 днів тому +1

    What is the meaning of life and everything
    Oh just give me a million years to think about that
    Oh great machine what is the meaning of life and everything
    42

  • @user-wk4ee4bf8g
    @user-wk4ee4bf8g 28 днів тому +1

    It'd be funny if this creator is an AI or uses an AI voice or something like that

  • @dhanadhana
    @dhanadhana 28 днів тому

    All the AIs still can't beat the math puzzle 24.

  • @adomasjarmalavicius2808
    @adomasjarmalavicius2808 28 днів тому +3

    ngl i am literally waiting till this guy gets replaced by ai

  • @svorwerk
    @svorwerk 28 днів тому

    Couldn't agents split the work into ten sections of one million each?

  • @Ronnypetson
    @Ronnypetson 28 днів тому

    What about Project Nimbus?

  • @jerrygreenest
    @jerrygreenest 28 днів тому

    If it doesn't require an entire farm to keep up, like some 512 GB of RAM and 100 TB of ROM, and therefore a $15-per-month subscription to pay the bills, then it's actually cool.

    • @apoage
      @apoage 28 днів тому

      It's actually 512 GB of VRAM... 512 GB of RAM would be kind of OK.

    • @jerrygreenest
      @jerrygreenest 28 днів тому

      @@apoage It's not true; VRAM is only required by the image-drawing neural networks like Midjourney or SD, but language models do require RAM.

  • @ariaden
    @ariaden 28 днів тому

    1. Wake me up when ByteMamba works.
    2. Is Weights & Biases officially worse than Azure now?

  • @NenucooPT
    @NenucooPT 28 днів тому +4

    Did you use an AI to generate the voiceover for this video?

    • @TwoMinutePapers
      @TwoMinutePapers  28 днів тому +10

      I never have - every single episode is me behind the microphone.

    • @jonathanmoore4837
      @jonathanmoore4837 28 днів тому

      😂

    • @NenucooPT
      @NenucooPT 28 днів тому

      Sorry for doubting; AI voice cloning has me biased. Keep going, love your videos!

    • @Maxawa0851
      @Maxawa0851 28 днів тому

      AI isn't advanced enough for perfect replication yet.

    • @TwoMinutePapers
      @TwoMinutePapers  28 днів тому +1

      @@NenucooPT No worries. You are too kind, thank you! 🙏

  • @crypto_surrealism
    @crypto_surrealism 28 днів тому

    Mamba and RWKV are not quadratic, I think; they're not based on transformers.

  • @mutzelmann
    @mutzelmann 28 днів тому

    good job

  • @academicpresentations6062
    @academicpresentations6062 28 днів тому

    Press F for λ labs.

  • @smetljesm2276
    @smetljesm2276 27 днів тому

    That long context window will be a godsend for the opposing sides in Congress when they are pushed thousands of pages of some bill to be voted on overnight 😅😅😅

  • @123456crapface
    @123456crapface 28 днів тому

    If you actually did more research you would know that quadratic complexity in the attention mechanism has been solved for a few months now.

  • @messi8459
    @messi8459 28 днів тому +3

    yoo my boy is getting sponsored by microsoft now sheeesh
    fair play brother consider yourself made

  • @bergrugu
    @bergrugu 28 днів тому +1

    Train an AI to understand and translate hieroglyphics pleaseeee

  • @imjody
    @imjody 28 днів тому

    Were they using it to lift weights... and biases? 🤭😁

  • @vicmaestro
    @vicmaestro 28 днів тому

    My whole problem with AI right now is the inherent bias to it. It can be the smartest person in the room but it's being forced to mislead. And that isn't something I can get excited about just yet.

  • @punk3900
    @punk3900 28 днів тому

    That's also my impression: Claude 3 is superior at coding.

  • @hyperadapted
    @hyperadapted 28 днів тому +1

    ok but whyyyyyy

  • @GenaroCamele
    @GenaroCamele 28 днів тому

    I miss when your videos were about technical papers on simulations and AI, explained for ignorant people like me. Now this channel looks like a quick news portal about chatbots, just like any other that can be found on YouTube.

  • @thelasttellurian
    @thelasttellurian 28 днів тому

    I don't remember what I ate for breakfast

  • @g0d182
    @g0d182 28 днів тому

    cool

  • @francius3103
    @francius3103 28 днів тому +1

    I'm positive your voice is AI. No way you talk like that. I'm expecting any day a video like "I fooled you for more than a year".
    But I'm not falling for it.

  • @apertioopening3425
    @apertioopening3425 28 днів тому +1

    This is all very exciting, and I've been on the AI hype train for a while, but I saw a video recently from The Hated One that claimed that AI uses an unsustainable amount of water. Thoughts?

  • @Vansafe0
    @Vansafe0 28 днів тому +3

    They claim that it has good memory, but in practice I'm finding otherwise. I'm using the paid version and making quizzes that are 6 questions each. I specifically tell it not to repeat questions from the previous quizzes above, and it still does, repeatedly. This is after only 10 to 20 quizzes, so certainly not filling up 1M tokens or anywhere near that.

    • @DjSapsan
      @DjSapsan 28 днів тому +1

      Most likely you misunderstand something. Do you tell it not to repeat questions from the previous session? Because each session is completely independent.
      And even if you are asking in the same session, it may just be an artifact of any LLM: they output the most likely output, so responses often repeat. For example, almost 100% of jokes made by ChatGPT are about scientists not trusting atoms.

    • @Vansafe0
      @Vansafe0 25 днів тому

      @@DjSapsan I'm not misunderstanding it. It is all prompts during the same session. I'll even very specifically say do not give questions or answers about photosynthesis, and then the next plant-science quiz will have a question about it, or it may take 10 more quizzes, but it will reliably repeat questions I specifically tell it not to repeat. I've tested Claude (free), GPT-4, and Gemini Advanced (paid). Gemini does the best out of all of them, but they all have this problem.

    • @DjSapsan
      @DjSapsan 25 днів тому

      @@Vansafe0 ok, so my second part is the answer

    • @Vansafe0
      @Vansafe0 21 день тому

      @@DjSapsan "artifact of any LLM". When artifacts manifest as not remembering then it is no difference to the end user.

    • @DjSapsan
      @DjSapsan 21 день тому

      @@Vansafe0 It remembers; you can test it by asking it to repeat the instructions. It just can't help itself but generate the highly probable response.

  • @teamvigod
    @teamvigod 28 днів тому +1

    WHAT A TIME TO BE ALIVE!!! Timestamp: ua-cam.com/video/Z_EliVUkuFA/v-deo.htmlsi=XO0lmlWhieGHi20T&t=339

  • @holleey
    @holleey 28 днів тому

    3:45 so exactly how is that not practical?
    a human would take weeks to summarize the contents of 10 movies, after all.
    or was that just a poor example?

  • @pranjal9830
    @pranjal9830 28 днів тому

    Why not use AI for making the videos about AI research? It would save a lot of time; just fine-tune a model with millions of tokens of data. The AI would be able to write the script in your style of script writing, without anyone noticing any difference between the writer's real style and the AI's.

  • @Youtube-Handle-256
    @Youtube-Handle-256 28 днів тому

    I await the day where there will be a video about Tesla robotaxi 😊

  • @grahamclark7483
    @grahamclark7483 29 днів тому +1

    Let's face it, before Google showed up there wasn't a search engine on this planet that could search at a fraction of Google's speed!

  • @maslaxali8826
    @maslaxali8826 28 днів тому

    Gemini 1.5 is in Preview mode now