Introducing Gemma - 2B 7B 6 Trillion Tokens

  • Published 22 Aug 2024

COMMENTS • 85

  • @shApYT 6 months ago +118

    Prepare for Gemma-Orca-Wizard-Falcon-Hermes-7B-Uncensored

  • @amandamate9117 6 months ago +27

    If you want to test reasoning, try this slightly changed riddle: "I hang 7 shirts out to dry in the Sun. After 5 hours all shirts are dry. The next day i hang 14 shirts out to dry. The conditions are the same. How long will it take to dry 14 shirts? take a deep breath and proceed step by step" 99% of LLMs will say it needs 10 hours, including gemma-7B.
    If you change the prompt by adding an example riddle (a 1-shot prompt) with a similar structure, the AI can learn the pattern. For example, a riddle about 3 t-shirts drying in 3 hours, then 6 t-shirts also drying in 3 hours, will help the AI understand that 14 t-shirts would only need 5 hours to dry.

    • @David_Box 6 months ago +10

      According to chatgpt, it takes "≈21.43 minutes", so obviously it knows something we don't

    • @savvyvideos6454 6 months ago +4

      also, just try removing the "take a deep breath and proceed step by step" from your original prompt...

    • @amandamate9117 6 months ago +2

      @@savvyvideos6454 Removing "take a deep breath and proceed step by step" won't change the output. I tried on several models.

    • @akhileshchander5307 6 months ago

      >>> i hang 7 shirts out to dry in the Sun. After 5 hours all shirts are dry. The next day i hang 14 shirts out to dry. The conditions are the same. How long will it take to dry 14 shirts? take a deep breath and proceed step by step
      gemma2b:
      The total time taken to dry 7 shirts is 5 hours.
      Since the shirts are hanging in the same conditions, we can assume that the drying process follows the same rate.
      Therefore, to dry 14 shirts, it will also take 5 hours.

    • @WeebLabs 6 months ago +5

      GPT 4 responds correctly to this riddle.
      "If 7 shirts dry in 5 hours under certain conditions, and the next day the conditions are exactly the same, 14 shirts will also dry in 5 hours, provided they all receive the same exposure to the drying conditions."

  • @MikewasG 6 months ago +6

    🎉🎉🎉 Can’t wait for the fine-tune video! Thanks for sharing!

  • @EstebanAstudillo 6 months ago +12

    Gemma is available with Ollama, FYI.

  • @JosephLiaw 6 months ago +8

    It would be exciting to see if Gemma can become as popular as Llama.

  • @maharishicoding440 5 months ago

    00:00 Introduction of various open source language models
    01:19 Google has open-sourced Gemma, a suite of models
    02:34 Introducing Gemma - 2B 7B 6 Trillion Tokens
    03:46 Models trained on TPU V5e with impressive benchmarks.
    04:57 Gemma's terms of use and access request process
    06:02 Using Keras 3.0 and Keras NLP for NLP models
    07:11 Gemma 2B 7B 6 trillion tokens model's potential for multilingual fine-tuning.
    08:18 Gemma 2B 7B 6 Trillion Tokens for NLP
    Crafted by Merlin AI.

  • @amandamate9117 6 months ago +3

    Top video again. I hope we get some monster fine-tuned version by the end of the week.

    • @samwitteveenai 6 months ago

      Give it a few days, but yes, I think a lot of cool models are coming.

  • @2beJT 6 months ago +5

    Google: "Gemma"
    Me: Gimmie
    Google: NO, GEM-MA.. GEMMA!
    Me: Gimmie Gimmie

  • @dataprospect 6 months ago

    Don't forget the StarCoder and SantaCoder models. They are among the earliest open-source models that standardized data quality checks and pipelines, and they inspired so many new models.

  • @fonylew 6 months ago

    So fast! Very informative, many thanks!

  • @proterotype 6 months ago +1

    Looking forward to the Hugging Face video and what the community is gonna do with this

  • @ThoughtLineQuotes 6 months ago

    Really cool. I thought there were 1 million tokens. Thanks for the video.

  • @picklenickil 6 months ago

    My guy's going total Pokémon on this.
    Evolution after evolution

  • @micbab-vg2mu 6 months ago

    Thank you for the great video:)

  • @user-qr4jf4tv2x 6 months ago +2

    6T? You mean I can just plug an entire book into a single prompt?

    • @user-qr4jf4tv2x 6 months ago

      Oh nevermind

    • @samwitteveenai 6 months ago +1

      No, it is trained on 6T tokens, as compared to LLaMA 2 being trained on 2T tokens.

  • @user-yd4tl9vw1r 5 months ago

    thanks!

  • @chiaracoetzee 6 months ago

    FYI, you say the weights are English-only, but in my tests it was able to respond to queries in French. It's possible they were going for an English-only dataset but accidentally brought in some other-language data.

    • @samwitteveenai 6 months ago

      Yeah, this is quite common, especially with languages like French, Spanish, etc. A lot of other languages appear even in English text, and when you have 6 trillion tokens that can add up to a lot. Also, the tokenizer is a multilingual tokenizer (like the full-size Gemini models), so that can help as well.

  • @avi7278 6 months ago +3

    Gemmani?

  • @hidroman1993 6 months ago

    "It's hard to pronounce Gemma instead of Gemini" is a feature, not a bug

  • @yusufnzm 6 months ago +2

    Can you provide the KerasNLP thing link?

    • @samwitteveenai 6 months ago +1

      Sure, here: ai.google.dev/gemma/docs/get_started

  • @mshonle 6 months ago +1

    I wonder if Gemma is quantized?

    • @samwitteveenai 6 months ago +6

      There are quantized versions of it, but it is a full-resolution model they have released.

  • @MrErick1160 6 months ago +1

    Can you give some practical applications of such a model? I'm a data science student looking at how to use these models for meaningful purposes.

    • @jmann277 6 months ago +1

      Smaller models can fit on smaller devices. They're also cheaper. Out of the box they might not work great, but maybe you can fine-tune them for your task.

  • @igor1591 6 months ago

    nice timing!

  • @hqcart1 6 months ago +2

    It's a simple fact: when your model is NOT cutting edge, they never open-source it.
    Seems Gemma is going to be used on Android, that's that.

  • @user-nm2gz5ce3q 6 months ago

    nice

  • @ShanyGolan 6 months ago +1

    Tried 2b. Wow, it sucks. 😅😅
    I asked it for the derivative of x^3 and it couldn't do it. Lol. What??

  • @IdPreferNot1 6 months ago

    Ollama already has it on its model page. Just pick the one you want and run it on Ollama with 3 words.

  • @pigeon_official 6 months ago +1

    I'm just waiting for LLaMA 3 :(

    • @samwitteveenai 6 months ago +1

      I think it may keep getting delayed as the other open models getting released are raising the bar.

    • @pigeon_official 6 months ago

      @@samwitteveenai Wasn't LLaMA 3 supposed to be really powerful and almost a really, really primitive "AGI"? That's what I got from that little Zuckerberg speech.

    • @pylotlight 6 months ago

      @@samwitteveenai I don't quite understand LLaMA vs Gemma. Aren't they both models? Why does it sound like Gemma would run on top of LLaMA, or how does llama.cpp allow any model to be run on it? I don't understand the layers here.

    • @samwitteveenai 6 months ago +1

      @@pylotlight It is just a model (2 different sizes of models). There are versions for cpp and other frameworks, so it can run on various frameworks, but at the end of the day both Gemma and LLaMA are models.

  • @stickmanland 6 months ago +1

    Woah! Opensource? Google?

    • @samwitteveenai 6 months ago +2

      Maybe not fully open source, but certainly a good step in the right direction.

    • @clray123 6 months ago +1

      The answer is "no".

    • @NicolasEmbleton 6 months ago

      It's open weights, not open source. Still nice, but not all the way.

    • @clray123 6 months ago

      @@NicolasEmbleton Not even open weights; the proprietary license comes with strings attached, just as for LLaMA 2.

    • @blender_wiki 6 months ago +1

      Maybe you don't know, but Google has open-sourced a lot of code in its history, and also ML models. 🤷🏿‍♀️

  • @123arskas 6 months ago

    For a minute I thought the context window is 6 trillion tokens. Good content.

    • @samwitteveenai 6 months ago +1

      now that would be nice lol

    • @123arskas 6 months ago

      @@samwitteveenai The Hugging Face version works now.

  • @user-lv5kh8lb7f 6 months ago

    Like your video 😃

  • @Wanderer2035 6 months ago +1

    It’s censored so it’s not really that good

    • @samwitteveenai 6 months ago

      The instruct models are like that, but you can fine-tune the base model to be however you want.

  • @russelllapua4904 6 months ago

    why tf did they name this Gemma?

  • @just.play1ng 6 months ago +1

    Is this real 😂?

  • @davk 6 months ago +1

    Gemini is getting significantly worse now. The same happened with GPT-3, which despite upgrades lost a lot of quality.

    • @samwitteveenai 6 months ago

      Worse in what way, and on which Gemini are you noticing it?

    • @blender_wiki 6 months ago

      What are you talking about? The public chat or Gemini 1.5 on Google AI Studio?