Why is llama-3-8B 8 billion parameters instead of 7?

  • Published 20 Apr 2024
  • llama-3 has ditched its tokenizer and has instead opted to use the same tokenizer as gpt-4 (tiktoken, created by openai); it's even using the same first 100K-token vocabulary.
    In this video Chris walks through why Meta has switched tokenizer and the implications for the model size, the embeddings layer and multilingual tokenization.
    He also runs his tokenizer benchmark and shows how it's more efficient in languages such as Japanese (a rough sketch of that kind of comparison follows this description).
    repos
    ------
    github.com/chrishayuk/embeddings
    github.com/chrishayuk/tokeniz...
  • Science & Technology
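
A minimal sketch of the kind of token-count comparison described above, assuming the `tiktoken` Python package; llama-3's tokenizer builds on the same cl100k_base vocabulary that gpt-4 uses, so cl100k_base serves as a stand-in here, and the sample sentences are illustrative only.

```python
# Sketch: compare how many tokens the cl100k_base vocabulary spends on
# English vs. Japanese text. Fewer tokens per character suggests the
# vocabulary covers that language more efficiently.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # GPT-4 base vocabulary

samples = {
    "english": "The quick brown fox jumps over the lazy dog.",
    "japanese": "素早い茶色の狐がのろまな犬を飛び越える。",
}

for lang, text in samples.items():
    tokens = enc.encode(text)
    print(f"{lang}: {len(tokens)} tokens for {len(text)} characters "
          f"({len(text) / len(tokens):.2f} chars/token)")
```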

COMMENTS • 11

  • @charbakcg • 22 days ago

    Excellent demonstration Chris, thanks for sharing!

  • @goodtothinkwith • 22 days ago

    Great stuff.. no-nonsense presentation style, clear and technical, as it should be 😅.. Question: is there a reason why it's not better to have common English syllables in the vocabulary? I understand "lov" being there, but I can't imagine that "el" is a very useful token as part of "Lovelace".. intuitively, I would think it should simply be tokenized as "love" and "lace"
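
A quick way to check that intuition rather than guessing: the sketch below assumes the `tiktoken` package and uses cl100k_base as a stand-in for the llama-3 vocabulary, printing the actual pieces a word is split into. BPE merges are learned from frequency statistics, so a split like "love" + "lace" is not guaranteed.

```python
# Sketch: inspect how cl100k_base actually splits a given word.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

for word in ["Lovelace", " Lovelace", "lovelace"]:
    ids = enc.encode(word)
    # decode each token id individually to see the subword pieces
    pieces = [enc.decode_single_token_bytes(i).decode("utf-8", errors="replace")
              for i in ids]
    print(f"{word!r} -> {pieces}")
```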

  • @rluijk • 21 days ago

    OK, that is all very concrete! Awesome, thanks for this. This seems like a lot of quick wins that are easy to discover, or is that hindsight because you explain it so clearly? Anyway, it's all a bit new to me. Perhaps, let's say, Norway would be wise to run this with their own tokenizer? Or is that too simplistic thinking?
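
For what it's worth, a rough version of that check, assuming `tiktoken` with cl100k_base as a stand-in for the llama-3 vocabulary and an illustrative Norwegian sample sentence: if the stock vocabulary spends noticeably more tokens per character on the local language than on English, a custom or extended tokenizer starts to look worthwhile.

```python
# Sketch: tokens-per-character comparison for Norwegian vs. English text.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

samples = [
    ("norwegian", "Jeg liker å gå en lang tur i skogen når sola skinner."),
    ("english", "I like to take a long walk in the forest when the sun shines."),
]

for label, text in samples:
    n = len(enc.encode(text))
    print(f"{label}: {n} tokens, {len(text) / n:.2f} chars/token")
```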

  • @aaravsethi6070 • 23 days ago +2

    I'm super excited to see the `llama.cpp`, `llama2.c`, etc. category implemented for llama3!

  • @leeme179 • 22 days ago +1

    great video, thank you

    • @chrishayuk • 22 days ago

      Thank you, glad it was useful

  • @leeme179 • 22 days ago +1

    What are your thoughts on including spaces in the tokenizer? I tried it once and the LLM was optimising to predict spaces, as those are easy wins for the LLM, but I like the way tiktoken has done it: keeping the space but not having the space as a token on its own....

    • @chrishayuk • 22 days ago

      I'm okay with it. If you watch my video on visualizing the embeddings layer, you'll see that words with spaces and words without spaces are so closely correlated in the initial embeddings layer that it's basically a non-issue. The cost, however, is the size of the vocabulary and therefore of the embeddings layer. It does make the model much more efficient not to handle spaces separately, so having a word with its leading space as its own token makes much more sense.
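
A small sketch of that point about leading spaces, again assuming `tiktoken` with cl100k_base as a stand-in for the llama-3 vocabulary: a common word with a leading space is typically a single token, so the space is folded into the word rather than spent on a token of its own.

```python
# Sketch: show how text with and without a leading space is tokenized.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

for text in ["token", " token", "  token"]:
    ids = enc.encode(text)
    pieces = [enc.decode_single_token_bytes(i) for i in ids]
    print(f"{text!r} -> {ids} {pieces}")
```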

  • @rogerc7960 • 22 days ago

    Why is there some PyTorch? Do finetuned or merged versions need it?