Prepare for Gemma-Orca-Wizard-Falcon-Hermes-7B-Uncensored
And make it double
Imagine if it's going to be a model like Mixtral 8x7B, with around 47 billion parameters in total instead of just 7.
Dolphin-Gemma
Totally!!!
If you want to test reasoning, try this slightly changed riddle: "I hang 7 shirts out to dry in the Sun. After 5 hours all shirts are dry. The next day i hang 14 shirts out to dry. The conditions are the same. How long will it take to dry 14 shirts? take a deep breath and proceed step by step" 99% of LLMs will say it needs 10 hours, including Gemma-7B.
If you change the prompt by adding an example riddle (a one-shot prompt) with a similar structure, the AI can learn the pattern. For example, a riddle about 3 t-shirts drying in 3 hours, then 6 t-shirts also drying in 3 hours, will help the AI understand that 14 t-shirts would still only need 5 hours to dry.
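The one-shot setup described above can be sketched in code. This is a minimal, framework-agnostic example of assembling such a prompt; the exemplar wording is illustrative, not taken from any particular library:

```python
# Build a one-shot prompt: a worked exemplar riddle with the correct
# "parallel drying" answer, followed by the actual question.
EXEMPLAR = (
    "Q: I hang 3 t-shirts out to dry in the Sun. After 3 hours all are dry. "
    "The next day I hang 6 t-shirts out in the same conditions. "
    "How long will they take to dry?\n"
    "A: The shirts dry in parallel, so the number of shirts does not matter. "
    "They take 3 hours.\n"
)

QUESTION = (
    "Q: I hang 7 shirts out to dry in the Sun. After 5 hours all shirts are dry. "
    "The next day I hang 14 shirts out to dry. The conditions are the same. "
    "How long will it take to dry 14 shirts?\n"
    "A:"
)

def one_shot_prompt(exemplar: str, question: str) -> str:
    """Concatenate the worked exemplar before the real question."""
    return exemplar + "\n" + question

prompt = one_shot_prompt(EXEMPLAR, QUESTION)
print(prompt)
```

The resulting string would then be sent to whatever model you're testing; the exemplar gives it the parallel-drying pattern to imitate.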
According to ChatGPT, it takes "≈21.43 minutes", so obviously it knows something we don't
also, just try removing the "take a deep breath and proceed step by step" from your original prompt...
@@savvyvideos6454 Removing "take a deep breath and proceed step by step" won't change the output. I tried it on several models.
>>> i hang 7 shirts out to dry in the Sun. After 5 hours all shirts are dry. The next day i hang 14 shirts out to dry. The conditions are the same. How long will it take to dry 14 shirts? take a deep breath and proceed step by step
gemma2b:
The total time taken to dry 7 shirts is 5 hours.
Since the shirts are hanging in the same conditions, we can assume that the drying process follows the same rate.
Therefore, to dry 14 shirts, it will also take 5 hours.
GPT 4 responds correctly to this riddle.
"If 7 shirts dry in 5 hours under certain conditions, and the next day the conditions are exactly the same, 14 shirts will also dry in 5 hours, provided they all receive the same exposure to the drying conditions."
🎉🎉🎉 Can’t wait for the fine-tune video! Thanks for sharing!
Gemma is available with Ollama, FYI.
00:00 Introduction of various open source language models
01:19 Google has open-sourced Gemma, a suite of models
02:34 Introducing Gemma - 2B 7B 6Trillion Tokens
03:46 Models trained on TPU V5e with impressive benchmarks.
04:57 Gemma's terms of use and access request process
06:02 Using Keras 3.0 and Keras NLP for NLP models
07:11 Gemma 2B 7B 6 trillion tokens model's potential for multilingual fine-tuning.
08:18 Gemma 2B 7B 6Trillion Tokens for NLP
Crafted by Merlin AI.
It would be exciting to see if Gemma can become as popular as Llama
Top video again. I hope we get some monster fine-tuned version by the end of the week
Give it a few days, but yes, I think a lot of cool models are coming
So fast! Very informative, many thanks!
Don't forget the StarCoder and SantaCoder models. They were among the earliest open-source models to standardize data quality checks and pipelines, and they inspired so many new models.
Can you provide the KerasNLP thing link?
Sure, here: ai.google.dev/gemma/docs/get_started
I wonder if Gemma is quantized?
There are quantized versions of it, but what they have released is a full-resolution model.
Gemmani?
Google: "Gemma"
Me: Gimmie
Google: NO, GEM-MA.. GEMMA!
Me: Gimmie Gimmie
lol !!
Thank you for the great video:)
Really cool, I thought there was a 1 million token context. Thanks for the video.
Alas no, not 1M for this one
FYI, you say the weights are English-only, but in my tests it was able to respond to queries in French. It's possible they were going for an English-only dataset but accidentally brought in some other-language data.
Yeah, this is quite common, especially with languages like French, Spanish, etc. A lot of other languages appear even in English text, and when you have 6 trillion tokens that can add up to a lot. Also, the tokenizer is a multilingual tokenizer (like the full-size Gemini models), so this can help as well.
Can you give some practical applications of such a model? I'm a data science student and looking at how to use these models for meaningful purposes
Smaller models can fit on smaller devices, and they're also cheaper. Out of the box they might not work great, but maybe you can fine-tune one for your task.
Looking forward to the Hugging Face video and what the community is gonna do with this
nice timing!
6T, you mean I can just plug an entire book into a single prompt
Oh nevermind
No, it is trained on 6T tokens, as compared to LLaMA 2 being trained on 2T tokens
My guy's going total Pokémon on this.
Evolution after evolution
"It's hard to pronounce Gemma instead of Gemini" is a feature, not a bug
It's a simple fact: when a model IS cutting edge, they never open-source it.
Seems Gemma is going to be used on Android, that's that.
Woah! Opensource? Google?
Maybe not fully open source, but certainly a good step in the right direction
The answer is "no".
It's open weights. Not open source. Still nice but not all the way.
@@NicolasEmbleton Not even open weights; the proprietary license comes with strings attached, just as for LLaMA 2.
Maybe you don't know, but Google has open-sourced many, many codebases in its history, and also ML models. 🤷🏿♀️🤷🏿♀️🤷🏿♀️
thanks!
I'm just waiting for LLaMA 3 :(
I think it may keep getting delayed as the other open models getting released are raising the bar.
@@samwitteveenai Wasn't LLaMA 3 supposed to be really powerful and almost a really, really primitive "AGI"? That's what I got from that little Zuckerberg speech
@@samwitteveenai I don't quite understand LLaMA vs Gemma. Aren't they both models? But why does it sound like Gemma would run on top of LLaMA, or how does llama.cpp allow any model to be run on it? I don't understand the layers here.
@@pylotlight It is just a model (in 2 different sizes). There are versions for llama.cpp and other frameworks so it can run in various places, but at the end of the day both Gemma and LLaMA are models.
Tried 2b. Wow it sucks. 😅😅
I asked it for the derivative of x^3; it couldn't do it. Lol. What??
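For reference, the expected answer is d/dx x³ = 3x², so at x = 2 the derivative is 12. A quick stdlib-only sanity check via a central finite difference (the function names here are just for illustration):

```python
def f(x: float) -> float:
    return x ** 3

def numeric_derivative(g, x: float, h: float = 1e-6) -> float:
    # Central difference approximation of g'(x): (g(x+h) - g(x-h)) / 2h.
    return (g(x + h) - g(x - h)) / (2 * h)

# d/dx x^3 = 3x^2, so at x = 2 the derivative should be 12.
print(numeric_derivative(f, 2.0))  # ≈ 12.0
```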
Instruct model or base model?
like your video😃
Ollama already has it on its model page; just pick the one you want and run it on Ollama with three words.
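Assuming Ollama is installed and the model tags match what its library page lists, the "three words" look like this (the `gemma:2b` tag is what Ollama published for the 2B variant):

```shell
# Pull and chat with the 2B variant; weights are downloaded on first run.
ollama run gemma:2b
```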
It’s censored so it’s not really that good
The instruct models are like that, but you can fine-tune the base model to be however you want.
nice
why tf did they name this Gemma?
Gemma - Gemini🥴
Gemini is nice :D
For a minute I thought the context window was 6 trillion tokens. Good content
now that would be nice lol
@@samwitteveenai Hugging face version works now
Is this real 😂?
very real
Gemini is getting significantly worse now. The same happened with GPT-3, which despite upgrades lost a lot of quality.
worse in what way and which Gemini are you noticing it on?
What are you talking about? The public chat, or Gemini 1.5 on Google AI Studio?