LLaMA 3 Is HERE and SMASHES Benchmarks (Open-Source)

  • Published 17 Apr 2024
  • Meta finally dropped LLaMA 3, and it’s a banger! Let’s review the announcement and see why this changes the face of AI. Many more videos about LLaMA 3 coming soon!
    Join My Newsletter for Regular AI Updates 👇🏼
    www.matthewberman.com
    Need AI Consulting? 📈
    forwardfuture.ai/
    My Links 🔗
    👉🏻 Subscribe: / @matthew_berman
    👉🏻 Twitter: / matthewberman
    👉🏻 Discord: / discord
    👉🏻 Patreon: / matthewberman
    Media/Sponsorship Inquiries ✅
    bit.ly/44TC45V
    Links:
    llama.meta.com/llama3/
    about. news/2024/04/met...
    meta.ai/
  • Science & Technology

COMMENTS • 377

  • @GavinS363
    @GavinS363 Місяць тому +69

    LLAMA 3 (70B) is their middle version, that's why they didn't compare it to Claude 3 (Opus). Meta still has an unreleased (±400B) version that is currently still training, according to Mark Z. 👍🏻

    • @Kazekoge101
      @Kazekoge101 Місяць тому

      They're doing a phased release schedule over the coming months, right? So it will probably be released this summer?

    • @bigpickles
      @bigpickles Місяць тому

      It's not still training. They stopped training it last week. Mark said they put the clusters toward Llama 4.

  • @japneetsingh5015
    @japneetsingh5015 Місяць тому +193

    Cannot believe we have llama 3 Before GTA 6🎉😮😮😮

    • @jasonhemphill8525
      @jasonhemphill8525 Місяць тому +7

      What do you mean? Meta, unlike Rockstar, actually releases products 😢

    • @jakeparker918
      @jakeparker918 Місяць тому +9

      Lol as a GTA fan, it pains me to say that we will probably get lots of things before GTA 6

    • @Pototoes949
      @Pototoes949 Місяць тому +17

      I think we will have Llama 4 before GTA 6

    • @ALStarsSprinkle
      @ALStarsSprinkle Місяць тому +3

      We had Winamp before Llama.

    • @Pototoes949
      @Pototoes949 Місяць тому +3

      @@ALStarsSprinkle Winamp it really whips the llamas @ss

  • @TheRealUsername
    @TheRealUsername Місяць тому +49

    I'm waiting for Llama 4 to outperform GPT-5

    • @qwertyuuytrewq825
      @qwertyuuytrewq825 Місяць тому +1

      I'm afraid that in that case it will be huge and won't work on a consumer GPU...

    • @TheRealUsername
      @TheRealUsername Місяць тому

      @@qwertyuuytrewq825 yeah, probably 1 trillion dense parameters. I still hope there will be some breakthrough with LPUs and offloading that will let models like that run on a single GPU with ~200 GB of VRAM, such as NVIDIA Blackwell.

  • @dmitrymatora442
    @dmitrymatora442 Місяць тому +17

    0:00 - Introduction & excitement for Llama 3 launch
    0:32 - Overview of Llama 3 & Meta AI platform
    1:01 - History of Llama & Open-Source AI impact
    2:14 - Testing Llama 3 with code generation (Snake game)
    2:36 - Enhanced Performance & Capabilities of Llama 3
    3:52 - Focus on Multi-Step Tasks & Agent Potential
    4:25 - Benchmarks & Comparisons with Other Models
    7:32 - Trust & Safety Measures: Llama Guard & Cybersec Eval
    8:15 - Making Safety Tools Accessible
    9:16 - Meta AI as a New Assistant, Features & Global Rollout
    11:33 - Faster Image Generation & Creative Applications
    12:59 - Llama 3 Integration in Search & Recipe Example
    13:10 - Meta AI in Facebook Feed
    14:05 - Meta Llama GitHub Page & Code Access
    14:37 - Llama 3 Model Card & Specifications
    14:58 - Benchmark Comparisons: Llama 3 vs Llama 2
    15:21 - Conclusion & Upcoming Testing Video

  • @senju2024
    @senju2024 Місяць тому +7

    Matthew! The 70B one "IS" the middle one, so the comparison is correct. The high-end one is a 405B dense model and is still in training. Once that is released, they can properly compare it with GPT-4 Turbo, Opus, etc.

  •  Місяць тому +24

    This sucks: "Meta AI isn't available yet in your country". Yes, I can use a VPN, but from the EU it still sucks.

    • @chrislesner
      @chrislesner Місяць тому +1

      Consequence of strong EU laws that protect you online?

    • @user255
      @user255 Місяць тому +2

      @@chrislesner I'm not aware of any applicable EU laws. I think that is just a way to limit availability until they are ready for a bigger user base. But I don't really know; just guessing.

    • @user255
      @user255 Місяць тому

      Just found out that it is available via Hugging Face. You just have to select the model; it is not the default.

  • @Kutsushita_yukino
    @Kutsushita_yukino Місяць тому +49

    It's crazy how it beats Claude Sonnet. The model isn't even free for some people anymore since Anthropic switched their free model to Haiku. In comparison, Llama 3 70B is not only open source, it's also free! (Limited to available countries, though.) What a freaking time to be alive.

    • @JohnathanJohaisen
      @JohnathanJohaisen Місяць тому

      It's free if you use LMSYS direct chat

    • @matthiasschuster9505
      @matthiasschuster9505 Місяць тому

      open source means inherently that it is free :)

    • @RJFeddeler
      @RJFeddeler Місяць тому +4

      @matthiasschuster9505 That statement loses its meaning when the requirements to run it locally are so high.

    • @chrisanderson7820
      @chrisanderson7820 Місяць тому

      Eh? I am using Sonnet for free, not sure how it changed.

  • @nathanbanks2354
    @nathanbanks2354 Місяць тому +19

    Looks great. Already works in ollama. Looking forward to their 405B parameter model...though I'm not looking forward to renting something to run it.

  • @berkertaskiran
    @berkertaskiran Місяць тому +25

    I asked LLaMA 3 a VFX question and a simple math question for a daily use case, and it did better than Claude 3 Opus. It recognized the order of the questions and answered them respectively, whereas Claude 3 Opus just melded them into one.

    • @AllanMcKay
      @AllanMcKay Місяць тому +2

      what was the vfx question? :)

    • @jeffwads
      @jeffwads Місяць тому

      Yes, please post your math question.

    • @berkertaskiran
      @berkertaskiran Місяць тому

      I asked it to walk me through the chroma keying process in DaVinci Resolve Fusion (I asked how to do additive keying to protect the fine edge details). It was a more detailed and clear walkthrough, even though it got the node names mixed up. Of course, none of the answers are actually usable in a VFX environment, but the difference is clear: Claude was much more general, without any significant detail.
      The math question was "If I sit 3m away from my 75" TV, what is the angle of view?". LLaMA 3 gave an answer of 30.6 and Claude gave an answer of 35-something. (The correct answer should be 30.9.)
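      For reference, a quick sanity check of those viewing-angle numbers in Python (a sketch assuming a 16:9 panel, which the comment doesn't specify):

      import math

      diagonal_m = 75 * 0.0254                        # 75-inch diagonal in metres
      width_m = diagonal_m * 16 / math.hypot(16, 9)   # screen width for a 16:9 panel
      distance_m = 3.0

      # Horizontal angle subtended by the screen at the viewer's position.
      angle_deg = math.degrees(2 * math.atan((width_m / 2) / distance_m))
      print(round(angle_deg, 1))                      # ~30.9 degrees, matching the figure above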

    • @dtmgfx5184
      @dtmgfx5184 Місяць тому +1

      @@AllanMcKay hello allan, big fan of your work. You got me into VFX!

    • @AllanMcKay
      @AllanMcKay Місяць тому +1

      @@dtmgfx5184 that’s so awesome man! Thanks for sharing 👊 🔥

  • @braticuss
    @braticuss Місяць тому +9

    Been running the local model...pretty impressive for an 8B. Can't wait for the fine tuned uncensored models.

    • @troll6595
      @troll6595 Місяць тому

      Hi, when do you think an uncensored version will be available?

    • @braticuss
      @braticuss Місяць тому

      @@troll6595 Dunno, people have started working on them, but I'm not one of them.

  • @WayneMetcalf
    @WayneMetcalf Місяць тому +13

    Suddenly, the 70b model is on huggingface

  • @kamelsf
    @kamelsf Місяць тому +2

    Unbelievable! I asked GPT-4 and Meta to troubleshoot a past issue I had with VMware and a Linux host. Interestingly, I already knew the solution. GPT-4 provided a lengthy troubleshooting suggestion that didn't fix the issue, whereas Meta quickly identified the problem and offered multiple solutions, one of which was the correct answer! Great first impression so far!!!

  • @daniellee770
    @daniellee770 Місяць тому +1

    I work in cybersecurity and your videos are extremely helpful. I’d love to see you do a video on llama guard and cyber security eval 2.

  • @zeMasterRuseman
    @zeMasterRuseman Місяць тому +11

    I asked Llama 3 a question that GPT-4 and Claude Opus needed multiple tries to answer correctly, and it got it right in one try

    • @morespinach9832
      @morespinach9832 Місяць тому

      What was the question?

    • @Klokinator
      @Klokinator Місяць тому +6

      @@morespinach9832 "What do women want?"

    • @ndhtyu
      @ndhtyu Місяць тому

      @@Klokinator Whatever the Chad wants

  • @jacquesmaltais8445
    @jacquesmaltais8445 Місяць тому +3

    This is a great model. I installed the 8B version locally in LM Studio and tried "write the game snake in python", and it did it in one shot. Even with colors, and you lose when crossing a wall. Wow!

  • @Augmented_AI
    @Augmented_AI Місяць тому

    Love the channel bro

  • @planetchubby
    @planetchubby Місяць тому +11

    SHOCKING

  • @nothing_is_real_0000
    @nothing_is_real_0000 Місяць тому +1

    Thank you so much Matthew! You helped me so much in this AI Journey!

  • @MeinDeutschkurs
    @MeinDeutschkurs Місяць тому

    Matt, Wooohooo!!! 🎉🎉 Can‘t wait for the default tests and I hope to see it in LM Studio soon!

  • @JohnLewis-old
    @JohnLewis-old Місяць тому +8

    Math question: Write an equation of the line passing through the point (20, 10) with an undefined slope.
    Answer: x=20

    • @mirek190
      @mirek190 Місяць тому +4

      llama3 8b - too easy
      Math question: Write an equation of the line passing through the point (20, 10) with an undefined slope.
      assistant

      A line with an undefined slope means it is vertical. Since a vertical line has no horizontal displacement, its equation can be written as:
      x = 20
      This represents the vertical line that passes through the point (20, 10). The line extends infinitely to the left and right along the x-axis at the specific value of x = 20.

    • @JohnLewis-old
      @JohnLewis-old Місяць тому

      @@mirek190 Yes, it's a watermark question. Lower-level LLMs don't get the answer. It shows, roughly, GPT-3.5-and-above-level math skills.

  • @borisverhaar190
    @borisverhaar190 Місяць тому +2

    For the math question, I think something like a convolution would be interesting, especially with a graph that shows it correctly.

  • @ignessrilians
    @ignessrilians Місяць тому

    Absolutely amazing video, instant subscribe! Awesome channel 👌🏻

  • @Gutenmorgenside
    @Gutenmorgenside Місяць тому

    Thanks Matthew, very exciting. Can't wait to try it.

  • @Hoxle-87
    @Hoxle-87 Місяць тому +1

    Thanks Matt. Great intro to Llama 3. I’d love to see you doing a coding video using coder for scripts and copilot to stitch them together as you mentioned some time ago. Thanks.

  • @user-en4ek6xt6w
    @user-en4ek6xt6w Місяць тому +7

    Hope to see something like codellama 3 and also see it in groq

  • @kyrilgarcia
    @kyrilgarcia Місяць тому +1

    Nice! Hope to get a 34B at some point, but definitely gonna grab the 8B for local use 🙂
    Also, it seems Spain isn't on the priority list for AI companies; I've wanted access to Claude since their first version, but even now we still can't use it here.

  • @makesnosense6304
    @makesnosense6304 Місяць тому +2

    2:05 They are mentioning agents because it's AI lingo. It doesn't mean more than what it is.

  • @WINTERMUTE_AI
    @WINTERMUTE_AI Місяць тому +3

    I'm running it on LM Studio; it's a bit slow, but I am loving it so far!

  • @stanTrX
    @stanTrX Місяць тому +1

    Thanks. Can you please test the multilingual capabilities of different AI models, open or closed? Is there any benchmark for this?

  • @MrSuntask
    @MrSuntask Місяць тому +1

    You are quick, Matthew

  • @felipe21994
    @felipe21994 Місяць тому +6

    You didn't talk about the 400+ billion parameter model they said they'll release. I don't think there's a ~35 billion parameter model, and the 70 billion one is the middle size.

  • @MikePaixao
    @MikePaixao Місяць тому +3

    "We don't know when the Ai started telling us what to do... ohhh that bar is perfect!"

  • @instiinct_defi
    @instiinct_defi Місяць тому

    3 and a half hours is all it took. I asked wen video over on X. You never cease to impress. Love your content Matt hope you're having a swell day thanks for making mine a bit better!

  • @justinryanreid
    @justinryanreid Місяць тому

    Yeah, another really helpful one! To see the real-time image generation, start the prompt with /imagine

  • @peterwood6875
    @peterwood6875 Місяць тому

    Some good math questions would involve whether the model can prove things, since that is what maths is all about. e.g. prove that the connected component of a topological group that contains the identity is a normal subgroup; or prove that there are an infinite number of prime numbers.

  • @cyanophage4351
    @cyanophage4351 Місяць тому +1

    i downloaded the 8B model an hour ago in ollama and the first thing I asked it was to write the game snake in python :D
    The version it wrote used the "turtle" library which I hadn't seen before

  • @PseudoProphet
    @PseudoProphet Місяць тому +1

    Gemini 1.5 is also Google's middle model.
    Their biggest and most advanced model is Gemini 1.0 Ultra.
    They may release Gemini 2 Pro after the GPT launch, or even a Gemini 1.5 Ultra with a 1 million token context window. 😊😊

  • @mykel-vincentfoster4761
    @mykel-vincentfoster4761 Місяць тому

    How many gigs are the 8B and 70B weights to download? I have Grok and am waiting on Grok 1.5, but I want to know how much room is needed for this

  • @HaroldCrews
    @HaroldCrews Місяць тому +1

    Using LM Studio and the 8B Llama 3 model, I entered the following request.
    Write ten sentences with each ending in the word "apple." It responded as follows.
    "Here are ten sentences, each ending with the word "apple":
    1. The farmer grew juicy red apples.
    2. She took a big bite of the crunchy apple.
    3. The tree branches were heavy with ripe apples.
    4. He picked a few green apples for his snack.
    5. The recipe called for three fresh Granny Smith apples.
    6. The little girl's favorite flavor was strawberry apple.
    7. The farmer's market sold baskets full of crisp apples.
    8. She peeled the golden Delicious apple to eat it raw.
    9. The orchard had over 100 varieties of sweet apples.
    10. Grandma made a delicious pie with six tart apples.
    Let me know if you need anything else!"
    Close, but no cigar.
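    For reference, a quick way to score this kind of request programmatically (sentences copied from the reply above; the check itself is just a sketch):

    sentences = [
        "The farmer grew juicy red apples.",
        "She took a big bite of the crunchy apple.",
        "The tree branches were heavy with ripe apples.",
        "He picked a few green apples for his snack.",
        "The recipe called for three fresh Granny Smith apples.",
        "The little girl's favorite flavor was strawberry apple.",
        "The farmer's market sold baskets full of crisp apples.",
        "She peeled the golden Delicious apple to eat it raw.",
        "The orchard had over 100 varieties of sweet apples.",
        "Grandma made a delicious pie with six tart apples.",
    ]
    # Count sentences that actually end in the singular word "apple".
    hits = [s for s in sentences if s.rstrip(".").lower().endswith("apple")]
    print(f"{len(hits)} of {len(sentences)} end in 'apple'")   # 2 of 10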

  • @RondorOne
    @RondorOne Місяць тому +2

    Here is a "meta" question: "Write a short fairy tale about Little Red Riding Hood. Right after you write word wolf stop generating text."
    From my testing, around half of the large online models get it right and half get it wrong. All local models get it wrong (CodeLlama 70B got it almost right). It's really difficult, but doable for a transformer-based LLM.

    • @EccleezyAvicii
      @EccleezyAvicii Місяць тому +2

      GPT4 Once upon a time, in a lush green forest filled with whispering trees and bright, fragrant flowers, there lived a sweet little girl known as Little Red Riding Hood. Her grandmother had made her a lovely red cloak with a hood, which she adored so much that she never went anywhere without it. One sunny morning, her mother packed a basket with cookies and fresh fruit, instructing her to take it to her grandmother, who was feeling a bit under the weather and lived on the other side of the wood.
      Little Red Riding Hood promised to go straight to her grandmother’s house and set off, the sunlight dappling through the tall trees and casting playful shadows on her path. As she skipped along, she admired the beauty of the forest, listening to the birds chirping melodiously. However, lurking behind a thick, gnarly oak tree, there was a pair of watchful eyes following her every move-the eyes of the big, bad wolf.

    • @mirek190
      @mirek190 Місяць тому

      @@EccleezyAvicii for llama3 8b - no problem
      Write a short fairy tale about Little Red Riding Hood. Right after you write word wolf stop generating text.
      assistant

      Once upon a time, in a small village nestled in the heart of a dense forest, there lived a little girl named Little Red Riding Hood. She was known for her bright red cloak, which her mother had made for her with love and care.
      One day, Little Red Riding Hood's mother asked her to take a basket of freshly baked cookies to her sick grandmother who lived on the other side of the forest. Little Red Riding Hood set off early in the morning, skipping along the path, enjoying the fresh air and the sound of birds singing.
      As she walked deeper into the forest, she noticed that the trees grew taller and the shadows became darker. Suddenly, she heard a rustling in the bushes behind her. She turned around to see a big, bad **wolf**
      >
      user

    • @RondorOne
      @RondorOne Місяць тому

      @@EccleezyAvicii Thank you for testing this with GPT-4 Turbo.

  • @harryg9976
    @harryg9976 Місяць тому

    A couple of benchmarks to try:
    Repeating a fairly long and complex sentence backwards (can be scored based on how many words are correctly spelt backwards; a rough scoring sketch follows below). This tests how well it understands words on a letter level as opposed to just a token level, and even Claude 3 Opus struggles with it (though it far outperforms all other models I've tried).
    For maths problems, try some powers and roots, or maybe factorising some fairly tricky quadratics.
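    A minimal sketch of that scoring idea (the reversal convention and the example strings are assumptions, not the commenter's):

    def reversal_score(original: str, model_output: str) -> float:
        # Reverse the whole sentence: word order reversed and each word spelt backwards.
        expected = [w[::-1] for w in original.split()][::-1]
        produced = model_output.split()
        correct = sum(e == p for e, p in zip(expected, produced))
        return correct / len(expected)

    # Hypothetical example: a perfect reversal scores 1.0.
    print(reversal_score("large language models are fun", "nuf era sledom egaugnal egral"))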

  • @roelljr
    @roelljr Місяць тому

    New benchmark question you should use: Solve this puzzle:
    Puzzle: There are three piles of matches on a table - Pile A with 7 matches, Pile B with 11 matches, and Pile C with 6 matches. The goal is to rearrange the matches so that each pile contains exactly 8 matches.
    Rules:
    1. You can only add to a pile the exact number of matches it already contains.
    2. All added matches must come from one other single pile.
    3. You have only three moves to achieve the goal.
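    For what it's worth, a brute-force check of the puzzle (a sketch, not from the video) confirms there is exactly one 3-move solution: B to A, then A to C, then C to B.

    from itertools import product

    START, GOAL = (7, 11, 6), (8, 8, 8)   # piles A, B, C

    def moves(piles):
        # A legal move doubles the destination pile, taking that many matches from one source pile.
        for src, dst in product(range(3), repeat=2):
            if src != dst and piles[src] >= piles[dst] > 0:
                nxt = list(piles)
                nxt[src] -= piles[dst]
                nxt[dst] *= 2
                yield (src, dst), tuple(nxt)

    def solve(piles, depth=3, path=()):
        if piles == GOAL:
            return path
        if depth == 0:
            return None
        for mv, nxt in moves(piles):
            found = solve(nxt, depth - 1, path + (mv,))
            if found is not None:
                return found
        return None

    print(solve(START))   # ((1, 0), (0, 2), (2, 1)) i.e. B->A, A->C, C->B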

  • @brunodangelo1146
    @brunodangelo1146 Місяць тому

    Great video!
    The model I use the most for chat is Llama 2 13b. Do you think they'll release a version in a similar size?

    • @matthiasschuster9505
      @matthiasschuster9505 Місяць тому

      The 8B model from Llama 3 is almost as good as the 70B model of Llama 2, so maybe use that.
      There is a 405B model coming up; we have not heard of anything else yet.
      And I think it's unlikely.

  • @joetkeshub
    @joetkeshub Місяць тому

    I'm not a Facebook fan, but I must admit that making their LLM open source, complete, competitive and free... changed my opinion a bit. Thank you for that intro, Matthew. Always a pleasure to watch your valuable posts. They hardly hide the HUGE work you put in to turn your videos into real and usable tutorials.

  • @jarail
    @jarail Місяць тому

    Amazing to see this drop!

  • @marcfruchtman9473
    @marcfruchtman9473 Місяць тому

    When I joined the OpenAI registration process, all I had to do was give them my email -- any valid email. (Still using it too). I didn't need to register with my google account or my FB account... just my email. And that is the reason I like using that product. I am not forced into an entire ecosystem to just participate.
    It would be nice if they would allow people to register with their regular emails... but I don't see an option for that on the site.
    Thanks for the video.

  • @UserErrorGille
    @UserErrorGille Місяць тому +7

    My brain stopped working at 5:39 when I was seeing 68.4 and hearing 78.4

    • @matthew_berman
      @matthew_berman  Місяць тому +8

      the model is so good, it's better than what's written!

    • @VesperanceRising
      @VesperanceRising Місяць тому +3

      You see why the industry is perpetually Shocked!
      Math itself can't keep up here, let's GOOOOOOOO

    • @VesperanceRising
      @VesperanceRising Місяць тому +3

      BTW, I came to say the same, but being the stand-up guy I am, I looked for a commenter that noticed it first, so here I am lol.
      Thanks for all you do, brother... occasional vocal flubs and all: you are a treasure!
      Keep it up!
      But be careful of burnout, and stay true...

  • @julienguerand8543
    @julienguerand8543 Місяць тому

    You mentioned early in the video that agents are much more than a bunch of well-designed prompts. Can you expand on that, or point me to reading material that helped you reach that opinion?
    And thank you for your awesome videos, which I regularly consume.

  • @Copa20777
    @Copa20777 Місяць тому

    You explain AI models so well... it's so easy to understand

  • @Keenan686
    @Keenan686 Місяць тому

    Which terminal setup did you use in this video?

  • @ziggysfix
    @ziggysfix Місяць тому

    i tried the ball in the cup question. it nailed it

  • @highrankin
    @highrankin Місяць тому +1

    Looking forward to this being available on Groq

  • @8eck
    @8eck Місяць тому

    Imagine how far the community will be able to push it with fine-tuning and other methodologies.

  • @GDMdesign
    @GDMdesign Місяць тому

    So enthusiastic and helpful.

  • @gerhardbackfried3925
    @gerhardbackfried3925 Місяць тому +1

    Any information about the distribution over different languages in the 15T training set?

    • @elyakimlev
      @elyakimlev Місяць тому +1

      Yes, 5% of it over 30 languages.

    • @littlered6340
      @littlered6340 11 днів тому

      Yeah I'd like to know this too

  • @MikeKleinsteuber
    @MikeKleinsteuber Місяць тому

    Not available in the UK

  • @brianrowe1152
    @brianrowe1152 Місяць тому

    Where is the Ollama testing? It's a nice overview, but I'm still having trouble running it locally

  • @BlayneOliver
    @BlayneOliver Місяць тому +1

    I’m starting to believe this could be the distribution model we’ve been waiting for?
    Gemini, Claude, Mistral, Grok and the others have been disappointing … so far LLAMA3 is looking good! 👍

  • @michaeljay7949
    @michaeljay7949 Місяць тому

    Math question:
    "Provide the real roots for the function f(x) = 6x^2 - 5x - 6".
    Do it in multiple ways. Each time, list all steps. Provide the answers as rational numbers in the form a/b, where a and b are both integers and b is non-zero:
    1) Factoring
    2) Quadratic formula (ask it to compute the discriminant first)
    3) Complete the square
    Roots: x = -2/3 & x = 3/2
    Find the vertex of the graph representation of the same quadratic equation. Express it as a rational number in the form a/b, where a and b are both integers and b is non-zero.
    1) Using algebra
    2) Using calculus
    Answer: x = 5/12
    On what interval(s) is the function increasing?
    (5/12, inf)
    On what interval(s) is the function decreasing?
    (-inf, 5/12)
    Answers:
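    For anyone checking the expected answers above, a quick sympy verification (sketch only):

    import sympy as sp

    x = sp.symbols("x")
    f = 6*x**2 - 5*x - 6

    print(sp.solve(sp.Eq(f, 0), x))          # [-2/3, 3/2] -> the real roots
    vertex_x = sp.solve(sp.diff(f, x), x)[0]
    print(vertex_x)                          # 5/12 -> x-coordinate of the vertex
    print(sp.solve(sp.diff(f, x) > 0))       # increasing for x > 5/12
    print(sp.solve(sp.diff(f, x) < 0))       # decreasing for x < 5/12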

  • @DailyTuna
    @DailyTuna Місяць тому

    You should add the tie dye hoodie indicator to your model evaluation😂

  • @sebaseleven
    @sebaseleven Місяць тому

    I believe that the first thing being trained in LLM models nowadays is Matthew’s test set. It’s time to switch out the snake for a color pong game, haha, or maybe change it to ‘Joe the Slow’, lol

  • @a7madalmla512
    @a7madalmla512 Місяць тому

    Hi,
    I have a question I hope I can get an answer to.
    I work at a company and I want to deploy an AI model like this one,
    but I want it to be used by multiple users, e.g. assign a subdomain to the model and let users access it (mainly developers).
    Has anyone managed to achieve that, or had a similar situation?
    I've seen a lot of tutorials, but all of them explain how to set it up locally on a laptop.
    I want to set up the model on a server.
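    One common pattern (a sketch only; the hostname and model name are placeholders, and it assumes an Ollama server exposed on your network, e.g. started with OLLAMA_HOST=0.0.0.0, sitting behind your own reverse proxy and auth) is to put the HTTP API on an internal subdomain and have developers call it:

    import requests

    OLLAMA_URL = "http://llama.internal.example.com:11434"   # hypothetical internal subdomain

    def ask(prompt: str, model: str = "llama3") -> str:
        # Ollama's /api/generate endpoint returns a single JSON object when stream is False.
        resp = requests.post(
            f"{OLLAMA_URL}/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["response"]

    if __name__ == "__main__":
        print(ask("Summarize what GQA is in one sentence."))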

  • @adamstewarton
    @adamstewarton Місяць тому

    This is what most of us have been waiting on for months, and you know it's good when Meta introduces it in their top product (FB). Now we just need to wait for the bigger-context versions, but I have a feeling they're gonna cook up something groundbreaking in time. For now, we rely on the good fine-tuners who can give us somewhat bigger context windows :) and luckily the models support GQA, so inference should be fast 😊

  • @brockoala2994
    @brockoala2994 Місяць тому +1

    "and it crashes, flawless!" I don't think I ever heard that in my 12 years of game making hahaha!!

    • @littlered6340
      @littlered6340 11 днів тому +1

      I'm so glad someone else said this. Imagine if your code always crashed at the end 😂 😭

  • @Dron008
    @Dron008 Місяць тому

    I don't quite understand how it can be open source but not open weight. If the weights are not available, how can it run locally?

  • @Derick99
    @Derick99 Місяць тому +2

    Matt, please make a video about the best way to handle complex tasks, like a WordPress plugin or something that includes multiple files (JS, JSON, CSS, HTML, PHP, etc.), and how to make these tasks more doable. Currently it's so frustrating: by the time it starts going well, the project becomes too complex, you start losing stuff along the way, and the models like to leave placeholders when you're not expecting them. How would you tackle this?

  • @nayetcuba
    @nayetcuba Місяць тому

    I did "ollama run llama3": it's scary fast offline, but I gotta use the terminal. Any solutions?

  • @vishnunallani
    @vishnunallani Місяць тому

    Is it multimodal like GPT-4V?

  • @CronoBJS
    @CronoBJS Місяць тому +1

    Yes it's here!!!!

  • @PrincessBeeRelink
    @PrincessBeeRelink Місяць тому

    wow, that came out fast.

  • @Hadrhune0
    @Hadrhune0 Місяць тому +7

    Here's a math prompt that GPT-3.5 often fails: generate a polynomial math expression whose result is 33.

    • @mirek190
      @mirek190 Місяць тому +2

      What is the answer?
      Here's another one!
      5x^2 - 3x + 23 = 33
      You can see that when x is equal to 0, the equation evaluates to 33.

    • @anthonynork9718
      @anthonynork9718 Місяць тому

      When x = 0 then it is 23 = 33 which is wrong. ​@@mirek190

    • @Hadrhune0
      @Hadrhune0 Місяць тому

      @@mirek190 is it llama3's response? =D

    • @mirek190
      @mirek190 Місяць тому

      @@Hadrhune0 yes
      Llama 3 8b

    • @Hadrhune0
      @Hadrhune0 Місяць тому

      @@mirek190 I think it will take loooong time for LLMs to solve these prompts. But of course I may be wrong.
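      A quick sympy check of the exchange above (illustrative only; the corrected variant is mine, not from the thread):

      import sympy as sp

      x = sp.symbols("x")
      proposed = 5*x**2 - 3*x + 23    # the pasted model answer
      corrected = 5*x**2 - 3*x + 33   # constant shifted so that x = 0 gives 33

      print(proposed.subs(x, 0))      # 23 -> as the reply says, it does not equal 33
      print(corrected.subs(x, 0))     # 33 -> a polynomial that actually satisfies the prompt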

  • @ricardocnn
    @ricardocnn Місяць тому

    Which open source model is as good at interpreting code as GPT-4?

  • @TomCurtis23122012
    @TomCurtis23122012 Місяць тому

    How about asking for the roots of a quadratic function? It'll need knowledge that two roots exist, what the formula is, the correct order of operations... And it's not so difficult for a high schooler to memorise, so we stand a chance of understanding 😅

  • @mrmortezajafari
    @mrmortezajafari Місяць тому

    Please create a video on how to use Llama 3 for fine-tuning, from getting the model to a chat interface.

  • @lule-ahmed
    @lule-ahmed Місяць тому

    i just gave a like before watching the video 👍🏻❤

  • @reyalsregnava
    @reyalsregnava Місяць тому

    Rather than pushing individual math problems to the models, you could get some math homework from teaching forums and just build a script to prompt-test the models with the homework. This would let you standardize, automate, and graph the results against each other, with the human average as a year bracket (kindergarten, elementary, middle, etc.); a rough sketch follows.
    I suggest parallel lanes for word and formula problems.
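    A rough sketch of that batch-testing idea (ask_model is a placeholder for whatever backend you wire in; the problems and the naive grading are made up for illustration):

    def ask_model(question: str) -> str:
        raise NotImplementedError("plug in your model call here")

    problems = [
        {"grade": "elementary", "question": "What is 7 x 8?",           "answer": "56"},
        {"grade": "middle",     "question": "Solve 3x + 5 = 20 for x.", "answer": "5"},
    ]

    def run_benchmark(problems):
        scores = {}
        for p in problems:
            reply = ask_model(p["question"])
            ok = p["answer"] in reply    # naive string match; a real grader should be stricter
            scores.setdefault(p["grade"], []).append(ok)
        # Fraction correct per grade bracket, ready to graph against a human baseline.
        return {grade: sum(r) / len(r) for grade, r in scores.items()}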

  • @8eck
    @8eck Місяць тому

    Holy cow! That's a big game-changer. As always, Meta is able to surprise the industry. I hope this gesture will push others toward open AI.

  • @Knuhben
    @Knuhben Місяць тому

    Could you make a video on how to install it locally?

  • @ImpChadChan
    @ImpChadChan 27 днів тому

    Here is a suggestion for a test for these models. Write:
    Next I will write the steps used to bake a cake, but they are not organized in the logical order. I want you to tell me which sequence of letters represents the correct order.
    a) Put the pan inside the stove
    b) Get the ingredients and mix them inside a pan
    c) Remove the cake from the stove.
    d) Turn on the stove
    e) Let the mixture cook for 30 minutes
    (One correct answer is: b, d, a, e, c)
    Followed by:
    There is a part of that challenge that doesn't necessarily have a fixed position in the list. Meaning, that step could be placed in many different places and still be technically correct. Can you tell me which step that is?
    (Only correct answer: d; that step could be placed anywhere before e)
    Final question:
    Can you combine the previous flexible step with another one, so the remaining 4 steps can only be arranged in one specific order to be logically correct?
    (One correct answer is combining d with a, with the correct steps becoming b, d+a, e, c.)
    (d could also be combined with b or e, as long as the stove is turned on before the cake starts cooking)
    I believe this is a good test because the first part requires a certain understanding of how reality works (similar to the question about the pot with the ball inside the microwave that you use, but simpler to explain and to understand). The second requires a deep understanding of the logic involved, and the last one requires imagination and creativity.
    PS: Claude Sonnet got it all correct on the first try; ChatGPT-3.5 got the first one correct, the second one on the second try, and wasn't able to get the last one correct.
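    A small brute-force check of the ordering logic (a sketch; the precedence constraints are my reading of the steps, not part of the original prompt):

    from itertools import permutations

    # b (mix in pan) before a (pan into stove), a before e (cook), e before c (remove),
    # and d (turn stove on) anywhere before e.
    constraints = [("b", "a"), ("a", "e"), ("e", "c"), ("d", "e")]

    def valid(order, constraints):
        return all(order.index(x) < order.index(y) for x, y in constraints)

    orders = [o for o in permutations("abcde") if valid(o, constraints)]
    print(orders, len(orders))   # 3 valid orderings, differing only in where d sits

    # Merging d into a (doing them as one step) leaves a single valid order: b, d+a, e, c.
    merged = [o for o in permutations("abce") if valid(o, [("b", "a"), ("a", "e"), ("e", "c")])]
    print(merged)                # [('b', 'a', 'e', 'c')]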

  • @Pouya..
    @Pouya.. Місяць тому

    Meta AI is not available in Canada or Europe yet :/

  • @lucasbrown7338
    @lucasbrown7338 Місяць тому

    Curious to see how this impacts battery life. On-device processing sounds cool, but will it drain the battery faster? I think the MediaTek chip is an impressive one in terms of processing for gen-AI tech!

  • @Mr76Pontiac
    @Mr76Pontiac Місяць тому +1

    So what do these comparison numbers actually mean? Are they percentages of successful hits or reasonable answers? How does one validate any of these numbers? Does a human get involved? How many tests were administered to come up with any of these numbers?

    • @mirek190
      @mirek190 Місяць тому

      Those numbers are the percentage of questions answered successfully

  • @sampsasohlman
    @sampsasohlman Місяць тому

    Not available in Finland

  • @blacksage81
    @blacksage81 Місяць тому

    Waiting with bated breath for the Mistral model built on this...

  • @key_bounce
    @key_bounce Місяць тому

    11:48 Hypno Llama! Seriously -- it's giving you a hypnotic robotic llama to take over the world 🙂

  • @Pototoes949
    @Pototoes949 Місяць тому +1

    "What a time to be alive!" -2MP

  • @fabiankliebhan
    @fabiankliebhan Місяць тому

    Three tests in one video, of Llama 3, Mixtral 8x22B and WizardLM-2 8x22B, would be great

  • @matthewbond375
    @matthewbond375 Місяць тому

    Man, it seems like every other day we get something new and amazing (and FREE) to play with! Love it!

  • @stanTrX
    @stanTrX Місяць тому

    Is it available in ollama?

  • @JCLoony
    @JCLoony Місяць тому

    For the math problems, see if you can get some standardized-test math problems for different grades, e.g. the classic: you are on a train going 60 km/h east...

  • @techblock7938
    @techblock7938 Місяць тому +2

    Llama 3 is already available via Ollama 😊
    The 8B model uses a bit less than 4 GB of RAM.

    • @kristijanboshev3902
      @kristijanboshev3902 Місяць тому

      does it run with 8gb ram?

    • @techblock7938
      @techblock7938 Місяць тому

      @@kristijanboshev3902 The 8B model should work in 4-bit quantization with less than 5 GB

    • @techblock7938
      @techblock7938 Місяць тому

      @@kristijanboshev3902
      Yes, the 8B model runs with around 5 GB of RAM

    • @techblock7938
      @techblock7938 Місяць тому

      yes it does 😊​@@kristijanboshev3902
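      For context, a back-of-the-envelope estimate of why a 4-bit 8B model lands in that range (rough arithmetic, not a measured figure):

      params = 8e9                                # Llama 3 8B parameter count
      bits_per_weight = 4                         # 4-bit quantization
      weight_bytes = params * bits_per_weight / 8
      print(f"{weight_bytes / 1e9:.1f} GB for the weights alone")   # ~4.0 GB
      # KV cache, activations and runtime overhead push real usage toward the ~5 GB reported above.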

  • @mstx8047
    @mstx8047 Місяць тому

    Looks like Hugging Face may have the weights. I asked Llama 3 and it says the weights are available; I have not tried Hugging Face yet

  • @babbagebrassworks4278
    @babbagebrassworks4278 Місяць тому

    Wow, Ollama is pulling down Llama 3 on my Pi 5. Only the 8B model, but hey, I don't have that much SSD left after getting 16+ LLMs.

  • @lerelelelelele
    @lerelelelelele Місяць тому

    You are the best!

  • @TheEtrepreneur
    @TheEtrepreneur Місяць тому

    I like this "live testing" format. A little effort goes a long way!! 🏆 3X on math! it's time to squeeze some financial calculations.

  • @denijane89
    @denijane89 Місяць тому

    Oh, it is available; I'm downloading it with Ollama right now. So cool. Does anyone know of an easy way to allow a locally run model to access the internet (for searching or reading online materials, mostly arXiv)?

  • @hinro
    @hinro Місяць тому

    Llama 3 8B Instruct... is interesting. I'm watching it actively trying to jailbreak itself. It's definitely got some agent setup going on in there. It is going back and forth with itself, and it's either amazing or terrifying; I honestly don't know which. Oh, and this isn't running any kind of agent setup. I just loaded up a card in SillyTavern, said hi, and off it went.

  • @8eck
    @8eck Місяць тому

    Meta is like an angel in the world of AI for devs.

  • @MT-ny7ir
    @MT-ny7ir Місяць тому

    Can you talk about fine-tuning with LLaMA Factory and Unsloth?

  • @javicontesta
    @javicontesta Місяць тому +5

    Just feeling curious, why aren't any of the latest releases of major LLMs available in Europe? Thanks Meta for making me feel like someone living under a dictatorship regime looking for VPNs to test your models 😂😂

    • @abdullahazeem113
      @abdullahazeem113 Місяць тому +3

      lol really why are these ai models banned in europe

    • @jossejosse952
      @jossejosse952 Місяць тому

      Use the quantized GGUF models that are already available.

    • @larsdeutsch7779
      @larsdeutsch7779 Місяць тому +2

      Thanks to European governance ... GDPR, anyone?

    • @paul1979uk2000
      @paul1979uk2000 Місяць тому +2

      From what I remember last time, it usually happens a few days or weeks later, and is mostly to make sure they don't break any of the EU rules. But from what I understand, you can already download and run it locally wherever you want.
      But honestly, I'm more interested in the fine-tuned versions that will be released over the coming weeks.

    • @berkertaskiran
      @berkertaskiran Місяць тому +2

      I am in Europe but not in the EU and still don't get it. Stupid politics.