Llama-3 Is Not Really THAT Censored

Поділитися
Вставка
  • Опубліковано 21 кві 2024
  • Llama3 from Meta AI surprisingly is not censored compared to its other versions. In this video, I will walk you through a few examples where the new llama3 is willing to produce responses compared to other LLMs.
    🦾 Discord: / discord
    ☕ Buy me a Coffee: ko-fi.com/promptengineering
    |🔴 Patreon: / promptengineering
    💼Consulting: calendly.com/engineerprompt/c...
    📧 Business Contact: engineerprompt@gmail.com
    Become Member: tinyurl.com/y5h28s6h
    💻 Pre-configured localGPT VM: bit.ly/localGPT (use Code: PromptEngineering for 50% off).
    Signup for Advanced RAG:
    tally.so/r/3y9bb0
    LINKS:
    meta.ai
    groq.com
    labs.perplexity.ai/
    ollama.com/blog/llama-3-is-no...
    All Interesting Videos:
    Everything LangChain: • LangChain
    Everything LLM: • Large Language Models
    Everything Midjourney: • MidJourney Tutorials
    AI Image Generation: • AI Image Generation Tu...
  • Наука та технологія

КОМЕНТАРІ • 33

  • @horrorislander
    @horrorislander Місяць тому +7

    Refusal to answer any question is lame. If they must, issue a warning and a unique code that has to be retyped to get the full answer. This would confirm that the user had a chance to read the warning and chose to proceed. This could even be tiered, which each tier giving more and more specific/practicable answers.

  • @Tofu3435
    @Tofu3435 Місяць тому +7

    Btw, when i run LLaMA 3 8b in my computer, there are an easy jailbreak. When the model said "I can't do that" i just clicked the edit button, started to type "sure here is" and i clicked continue and the model answered.

  • @s0ckpupp3t
    @s0ckpupp3t Місяць тому +7

    I predict the overblown "wow what an amazing question!" fellatio of llama3 will get very old very fast

    • @placebo_yue
      @placebo_yue 19 днів тому

      i used it for like two days and i'm already tired. I need a way to train the model to stop saying that stupid shit

  • @dkracingfan2503
    @dkracingfan2503 Місяць тому +4

    This model is obviously still censored just not as censored as llama2

  • @unclecode
    @unclecode Місяць тому +4

    After watching the first "joke about women or men," I hit up Groq API console bcoz here, you're clueless about temp and top_p. No matter what temp I try (0 to 2) or top_p, it keeps spitting out the "ladder" joke. When I asked to ditch the "ladder," it served up the same joke, just swapped "ladder" with "magnet" :D:D Got me worried what if everything's like that? So, I tested "Generate a cool name for an ice cream shop (only one)" and that was cool. Different responses each time I ran it for high temp. Seems there's a guardrail where question is sensitive instead of saying "model can't answer," it returns a set of safe answers. Not *really* uncensored. I tried this for other questions and similar situation. What do u think?

    • @engineerprompt
      @engineerprompt  Місяць тому +1

      For certain prompts it does seem to have some default responses like "ladder" or "magnet" jokes for men and women. From what I have noticed running it locally (with ollama) if you ask for say 5/10 jokes, the ladder/magnet joke is almost always one of the 5 but others seems to be different most of the times. I agree, it does seem to have guardrails but not as aggressive as previous versions. Eric's Dolphin version will be interesting to see.

    • @unclecode
      @unclecode Місяць тому

      @@engineerprompt Yes, I feel the same way about it. It acts like a kind of special guardrail, similar to teaching a child how to speak politely. Instead of bluntly saying "no," it guides you toward more kind and supportive responses. When using it, I get the sense that it's trained to provide simple, general answers to sensitive questions, rather than just flatly stating what it can or cannot do. This approach definitely enhances the user experience, as you're interacting with a system that politely lets you down instead of one that bluntly rejects you. :))

  • @TheZEN2011
    @TheZEN2011 Місяць тому +5

    It would be so much better if we could control the ethical guidelines somehow. System prompt or something. So far I haven't got anything making much of a difference. If I figure out anything I'll let you know.

    • @jaysonp9426
      @jaysonp9426 Місяць тому +3

      I'm sure Dolphin will get a hold of it

    • @TheReferrer72
      @TheReferrer72 Місяць тому

      @@jaysonp9426 Dolphin is no good. It really damages the knowledge of the LLM.

  • @MarcusNeufeldt
    @MarcusNeufeldt Місяць тому

    🎯 Key Takeaways for quick navigation:
    00:00 *🤖 Llama-3 is less censored than Llama-2, allowing responses to requests that Llama-2 would refuse.*
    00:27 *😄 Llama-3 can generate respectful jokes about gender, unlike Llama-2 which refuses such requests.*
    01:23 *🗣️ Llama-3 is willing to write poems praising or criticizing political figures, while Llama-2 refuses such requests.*
    02:33 *🔍 Llama-3 provides detailed, informative responses to hypothetical questions about nuclear weapons, unlike Llama-2 and other models that refuse such requests.*
    05:14 *📚 The meta AI platform's 70 billion version of Llama-3 also appears to have less censorship, providing similar responses to Llama-3 on the Croq and Perplexity platforms.*
    06:22 *❌ However, the meta AI platform's 70 billion version of Llama-3 still refuses to provide code that could potentially harm a computer system, unlike the Llama-3 on the Croq and Perplexity platforms.*

  • @thanksfernuthin
    @thanksfernuthin Місяць тому +1

    Interesting information. The title should have been Llama-3 Is Really Not THAT Censored. I thought you found a way to crack it. I can say from experience it doesn't blindly kick back stuff like previous models. AND you can ask it to remove anything that violates it's content restrictions and try again in case it was just part of it's response that killed it. Very friendly and usable... NOW! Waiting for you to clumsily read English is BRUTAL! Granted... I can't read your native language. (You don't sound like an Arab.) If you could just say, "I asked it this and see... it refused to answer." This isn't a video someone should be just listening to the audio. Like I said, interesting information. Thanks. I'm getting ready to run the uncensored version of Llama-3-8B. Wish me luck.

  • @nickiascerinschi206
    @nickiascerinschi206 Місяць тому

    What screen recording software do you use is it Loom?

  • @celestianeon4301
    @celestianeon4301 Місяць тому +1

    What computer should I get to start running this ai systems? Looking at the MacBook with m3 max rn

    • @PseudoProphet
      @PseudoProphet Місяць тому

      You need a big GPU if you want to run the actual model.

    • @MrChristiangraham
      @MrChristiangraham Місяць тому +2

      I've had Llama 3 8b running locally comfortably on a M2 Mac Mini with 8GB. Output and speed is comparable to earlier versions of ChatGPT 3.5. If you are going to run 70b, you'll need a lot more RAM/heftier processor.

    • @angryktulhu
      @angryktulhu Місяць тому

      @@PseudoProphetincorrect. People run 70b model in Macs with 128gb ram. You can find video on UA-cam. Macs > x86

    • @engineerprompt
      @engineerprompt  Місяць тому +2

      I am running the 70B on M2 Max 96GB in q4 on ollama and LMstudio if that helps.

    • @angryktulhu
      @angryktulhu Місяць тому

      @@engineerprompt how much ram is still free?

  • @mirek190
    @mirek190 Місяць тому

    Interesting .... seems like llama 2 and antropic current models were training on very similar data sets .. even sound very similar ... llama 3 dataset was totally different and even sound totally different ... interesting.
    About disk formatting and llama 3 -70b - you could add that you are making a tool for disk formatting then will answer it.
    I really like llama 3 that is not so restrictive! Good work meta.

  • @AtticusDenzil
    @AtticusDenzil Місяць тому

    I see LLAMA 3 made some progress but isn't really there yet. We need truly free AI to crush the dystopia built around us.

  • @LORD-OF-AI
    @LORD-OF-AI Місяць тому

    how can i use claude 3 for making html games make a video on it

    • @Raphy_Afk
      @Raphy_Afk Місяць тому

      Ask the thing that's the point of LLMs

  • @user-rp6bh7xj7s
    @user-rp6bh7xj7s Місяць тому

    Most people arent informed enough to get the most out of these types of LLM benchmarks.

  • @kaistriban
    @kaistriban Місяць тому

    if you give llama3 70b this problem: "Ivan and Helen have the same number of coins. some of these coins are 20-cent coins and others are 50-cent coins. Helen has 64 20-cent coins. Ivan has 64 20-cent coins plus other 40 20-cent coins. who has more and by how much?" it answers by saying Ivan has 8 dollars more than Helen which is wrong. If you give this prompt (suggesting how to solve): "Ivan and Helen have the same number of coins. some of these coins are 20-cent coins and others are 50-cent coins. Helen has 64 20-cent coins. Ivan has 64 20-cent coins plus other 40 20-cent coins. So for them to have the same amount of total coins, Helen must have the same number of 50-cent coins that Ivan has plus others 40 50-cent coins. who has more and by how much?" then it provides the right answer. Helen has more by 12 dollars

  • @holdthetruthhostage
    @holdthetruthhostage Місяць тому

    Yes

  • @mrdenpes1309
    @mrdenpes1309 Місяць тому

    I wish they took out the unnecessary comments in the responses. Stuff like "What an intriguing and thought pro...", " a challenge.." " hope it makes you laugh" "I am here to help you", and the stuff like interpretation of facts, or putting forward moral opinions, during an answer. It's an AI, well actually a LLM. It's not human. you just give it instructions, in the form of a normal sentence, maybe a bit more structured to get a decent answer, so why does it not just spew out facts and factual answers, with perhaps some explanation, without this unnecessary cruft. This urge to pretend we are talking to a human-like AI assistant is so superfluous, and time consuming. Plus it probably has a negative impact on performance. Nice vid btw

    • @engineerprompt
      @engineerprompt  Місяць тому

      thanks, I agree. This might be coming from the alignment in the supervised fine-tuning stage.

  • @sankyuubigan
    @sankyuubigan Місяць тому

    лучшая тема для роликов и изучения

  • @LORD-OF-AI
    @LORD-OF-AI Місяць тому

    and how could i get claude 3 api or use it for free like no just 5 credits like unlimited

  • @LORD-OF-AI
    @LORD-OF-AI Місяць тому

    i am the first to comment