How to Get your LLMs to OBEY | Easiest Fine-tuning Interface for Total Control over your LLMs

  • Published 23 Oct 2024

COMMENTS • 8

  • @timberwofe333
    @timberwofe333 7 days ago +1

    This is great and I followed all your instructions, but I can't export the fine-tuned LLM to Hugging Face. I tried several different tokens; I even created a token with full read/write access to everything on my Hugging Face account, and it still errors on export. Are you able to export a model to your Hugging Face account, or do you also receive an error? If I can't retrieve the fine-tuned LLM, then this is only good for academic purposes. I look forward to your reply; I really enjoy your content and am considering joining your Patreon.

    • @PromptEngineer48
      @PromptEngineer48  7 days ago +1

      Thanks for trying it out.
      First you need to go to the model page on Hugging Face and get access rights. For example, for Llama 3.2 on Hugging Face, when you visit the page for the first time you will see the option to register for Llama access.
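      A minimal sketch of that access flow (the model ID below is the gated Llama 3.2 repo mentioned in this thread; the token string is a placeholder): once access has been granted on the model page, logging in with a read token is enough to pull the weights.

      from huggingface_hub import login
      from transformers import AutoModelForCausalLM, AutoTokenizer

      login("hf_...")  # paste your own Hugging Face access token

      # Gated repo: access must first be requested and approved on its model page
      model_id = "meta-llama/Llama-3.2-1B-Instruct"
      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(model_id)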

    • @timberwofe333
      @timberwofe333 7 days ago +1

      @@PromptEngineer48 Thank you for the quick reply, I have done that! I actually went through your whole video and fine-tuned the Llama 3.2 1B Instruct model retrieved from my Hugging Face account! It took over an hour to do the training, but it worked! After training I got the same prompt result you did, where the model identified itself as a "Llama Factory"-made model! But before I go and use my own datasets I wanted to test export, and it errors out a few seconds after the export starts!

    • @timberwofe333
      @timberwofe333 7 days ago +1

      @@PromptEngineer48 I now have it working! What I had to do was stop the Llama Board process on the Colab page and then update my Hugging Face key by re-running the command I had entered into the Colab page manually, as you alluded to in your video:
      from huggingface_hub import login
      # Replace the string below with your actual Hugging Face access token
      login("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")
      After re-running the above command in the Colab page (running on the "Python 3 Google Compute Engine backend"), I restarted the Llama Board process and was then able to export successfully!! Awesome!! We trained an LLM using LoRA!!! Thank you!
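      For anyone hitting the same export error, a hedged sketch of a manual fallback (the folder path and repo name below are placeholders, not from the video): load the merged model that the export step writes locally and push it to your own Hub repository with a write-enabled token.

      from huggingface_hub import login
      from transformers import AutoModelForCausalLM, AutoTokenizer

      login("hf_...")  # this token needs write access

      local_dir = "exported-model"                      # folder the export step wrote
      repo_id = "your-username/llama3.2-1b-finetuned"   # destination repo on the Hub

      model = AutoModelForCausalLM.from_pretrained(local_dir)
      tokenizer = AutoTokenizer.from_pretrained(local_dir)
      model.push_to_hub(repo_id)
      tokenizer.push_to_hub(repo_id)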

    • @PromptEngineer48
      @PromptEngineer48  7 days ago

      Thank you for sharing the feedback and success story. We are really inspired. I will bring more content like this. Now back to the techno hunt.

  • @BorisMancov
    @BorisMancov 17 hours ago +2

    Hi, how can we add our own dataset?

    • @PromptEngineer48
      @PromptEngineer48  17 hours ago

      ua-cam.com/video/MQis5kQ99mw/v-deo.html
      Here you go

    • @BorisMancov
      @BorisMancov 16 hours ago

      @@PromptEngineer48 I have created my own dataset and uploaded it to Hugging Face, but I want to use it on "gradio live" and I couldn't find it there. How can I add my dataset?
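      A rough sketch of one way to do this, assuming the standard LLaMA-Factory layout (the dataset name, Hub repo, and column names below are placeholders): register the Hub-hosted dataset in data/dataset_info.json, then restart the Llama Board / Gradio session so it shows up in the dataset dropdown.

      import json

      # Placeholder entry for an alpaca-formatted dataset hosted on the Hugging Face Hub
      entry = {
          "my_dataset": {
              "hf_hub_url": "your-username/your-dataset",
              "formatting": "alpaca",
              "columns": {"prompt": "instruction", "query": "input", "response": "output"},
          }
      }

      path = "LLaMA-Factory/data/dataset_info.json"     # adjust to your clone's location
      with open(path) as f:
          info = json.load(f)
      info.update(entry)
      with open(path, "w") as f:
          json.dump(info, f, indent=2)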