Is Falcon LLM the OpenAI Alternative? An Experimental Setup with LangChain

  • Published 2 Oct 2024
  • 👉🏻 Kick-start your freelance career in data: www.datalumina...
    The Technology Innovation Institute in Abu Dhabi has launched Falcon, a new, advanced line of language models, available under the Apache 2.0 license. The standout model, Falcon-40B, is the first open-source model to compete with existing closed-source models. This launch is great news for language-model enthusiasts, industry experts, and businesses, as it opens up many new use cases. In this video, we compare the new Falcon-7B model against OpenAI's text-davinci-003 model to see whether open source can take on paid models.
    🔗 Links
    huggingface.co...
    github.com/dav...
    huggingface.co...
    Introduction to LangChain
    • Build Your Own Auto-GP...
    Copy my VS Code Setup
    • How to Set up VS Code ...
    👋🏻 About Me
    Hey there, my name is @daveebbelaar and I work as a freelance data scientist and run a company called Datalumina. You've stumbled upon my YouTube channel, where I give away all my secrets when it comes to working with data. I'm not here to sell you any data course - everything you need is right here on YouTube. Making videos is my passion, and I've been doing it for 18 years.
    While I don't sell any data courses, I do offer a coaching program for data professionals looking to start their own freelance business. If that sounds like you, head over to www.datalumina... to learn more about working with me and kick-starting your freelance career.

COMMENTS • 43

  • @daveebbelaar
    @daveebbelaar  A year ago +3

    👋🏻I'm launching a free community for those serious about learning Data & AI soon, and you can be the first to get updates on this by subscribing here: www.datalumina.io/newsletter

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w A year ago +10

    Can you try fine-tuning Falcon in a future video?

  • @noelwos1071
    @noelwos1071 A year ago

    UNFAIR ADVANTAGE
    What do you think: as a European citizen, would you have grounds to sue the EU for hindering the progress offered by artificial intelligence, thereby causing enormous damage as Europe lags behind the rest of the world? Isn't the EU a responsible institution?

  • @aditunoe
    @aditunoe A year ago +1

    In the special_tokens_map.json file of the HF repo there are some special tokens defined that differ a little from what OpenAI and others use. Integrating those into a prompt template of the chains seemed to improve the results for me (I also wrote an example in the HF comments). Three interesting ones in particular:
    >>QUESTION<<, >>SUMMARY<<, >>ANSWER<<
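
    A minimal sketch of what this commenter describes. The token names come from Falcon's special_tokens_map.json; the exact template layout below is an assumption, not the commenter's actual prompt:

```python
# Hypothetical template wrapping a question in Falcon's Q&A special tokens
# before sending it through a chain (e.g. as a LangChain PromptTemplate string).
FALCON_QA_TEMPLATE = ">>QUESTION<<\n{question}\n>>ANSWER<<\n"

def build_falcon_prompt(question: str) -> str:
    """Format a question with Falcon's >>QUESTION<< / >>ANSWER<< tokens."""
    return FALCON_QA_TEMPLATE.format(question=question)

print(build_falcon_prompt("What license is Falcon released under?"))
```

    Whether this improves results likely depends on the task, as the commenter notes.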

  • @esakkisundar
    @esakkisundar 11 months ago

    How do you run the Falcon model locally? Does providing a key run the model on the Hugging Face server?
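
    For what it's worth: yes, supplying an API token routes the computation to Hugging Face's hosted Inference API rather than your machine. A rough sketch of the request that wrapper builds under the hood (the URL and payload shape are assumptions based on the public Inference API, not the video's code; the request is built but never sent here):

```python
import json

HF_INFERENCE_URL = "https://api-inference.huggingface.co/models/{model_id}"

def build_inference_request(model_id: str, prompt: str, api_token: str) -> dict:
    """Build (but do not send) a request for the hosted Inference API.
    The token authenticates you with Hugging Face; no weights are
    downloaded to your machine on this path."""
    return {
        "url": HF_INFERENCE_URL.format(model_id=model_id),
        "headers": {"Authorization": f"Bearer {api_token}"},
        "body": json.dumps({"inputs": prompt}),
    }

req = build_inference_request("tiiuae/falcon-7b-instruct", "Hello Falcon", "hf_xxx")
print(req["url"])
```

    Running locally instead means downloading the weights (e.g. via transformers), which needs substantial GPU memory, as other comments here discuss.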

  • @xXWillyxWonkaXx
    @xXWillyxWonkaXx A year ago +1

    Hey man, love your videos. Two questions:
    Q1. 11:50 are you talking about embeddings?
    Q2. From your experience/observation of the LLMs on Hugging Face, can you take a model like MosaicML MPT-7B, throw QLoRA into the mix, and train it to be like GPT-4, or even slightly better in terms of understanding/alignment? Could using tree-of-thought mitigate or solve a small percentage of that?

    • @daveebbelaar
      @daveebbelaar  A year ago

      A1 - No, I don't use embeddings in this example, just plain text sent to the APIs.
      A2 - Not sure about that.

  • @datanash8200
    @datanash8200 A year ago +1

    Perfect timing, need to implement some LLM for a work project 🙌

  • @shakeebanwar4403
    @shakeebanwar4403 11 months ago

    Can I run this 7B model without a GPU? My system RAM is 32 GB.

  • @KatarzynaHewelt
    @KatarzynaHewelt A year ago +1

    Thanks Dave for another great video! Do you know if I can perhaps download Falcon locally and then use it privately, without the HF API?

    • @daveebbelaar
      @daveebbelaar  A year ago

      Thanks Katarzyna! I am not sure about that.

  • @Esehe
    @Esehe A year ago

    @17:30 interesting how my OpenAI output/summary is different from yours:
    " This article explains how to use Flowwise AI, an open source visual UI Builder, to quickly build
    large language models apps and conversational AI. It covers setting up Flowwise, connecting it to
    data, and building a conversational AI, as well as how to embed the agent in a Python file and run
    queries. It also shows how to use the agent to ask questions and get accurate results."

  • @felixbrand7971
    @felixbrand7971 A year ago

    I'm sure this is a basic question, but where is the inference running here? Is it local, or on Hugging Face's resources?

    • @vuktodorovic4768
      @vuktodorovic4768 A year ago

      That is what I wanted to ask. I loaded this model into the Google Colab free tier and it took 15 GB of RAM and 14 GB of GPU memory; I can't imagine what hardware you would need to run something like this locally. Also, I can't imagine that Hugging Face would give you their resources just like that. His setup seems very strange.

  • @mayorc
    @mayorc A year ago

    But don't you get a free amount of tokens that recharges every month with OpenAI? So unless you go over the allowance, you shouldn't get charged.

  • @Jake_McAllister
    @Jake_McAllister A year ago

    Hey Dave, love the video! How did you create your website? It looks amazing bro 👌

  • @deliciouspops
    @deliciouspops A year ago +1

    i like how degraded our society is

  • @GyroO7
    @GyroO7 A year ago

    I feel like using a chunk size of 1000 with an overlap of 200 would improve the results.
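
    For anyone wanting to try this suggestion: the idea is that consecutive chunks share a window of characters so context isn't cut mid-thought. A naive plain-Python illustration (LangChain's own text splitters do this with smarter boundary handling; this sketch is not the video's code):

```python
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
    """Split text into fixed-size chunks, where each chunk repeats the
    final `overlap` characters of the previous one."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]

chunks = chunk_text("x" * 1500)
print(len(chunks))  # 2 chunks: [0:1000] and [800:1500]
```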

  • @oshodikolapo2159
    @oshodikolapo2159 A year ago +1

    Just what I was searching for. Thanks for this, bravo!

  • @RadekVana-x8b
    @RadekVana-x8b A year ago

    Thanks man! Finally some code that worked for me 👍

  • @marcova10
    @marcova10 A year ago

    Thanks, Dave.
    With some trials, it seems that this version of Falcon works for short questions.
    I am finding that in some cases the LLM spits out several repeated sentences, so the output may need some tweaking to clean it up.
    A great alternative for certain uses.

  • @losing_interest_in_everything

    Imagine combining it with Obsidian, Notion, or other similar software.

  • @VaibhavPatil-rx7pc
    @VaibhavPatil-rx7pc A year ago

    Excellent detailed information

  • @luis96xd
    @luis96xd A year ago

    Great video! I have a doubt: what are the requirements to run Falcon-7B Instruct locally? Can I use a CPU?

    • @fullcrum2089
      @fullcrum2089 A year ago +2

      15 GB of GPU memory.

    • @luis96xd
      @luis96xd A year ago

      @@fullcrum2089 Thank you so much! That's a lot 😱
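
      As a sanity check on the ~15 GB figure: at 16-bit precision each parameter takes 2 bytes, so the weights of a 7B-parameter model alone need roughly 13 GiB, before activations and the KV cache. A quick back-of-the-envelope calculation:

```python
def fp16_weight_gib(n_params: float) -> float:
    """Approximate memory for model weights at 16-bit precision
    (2 bytes per parameter); activations and KV cache add more on top."""
    return n_params * 2 / 2**30

print(round(fp16_weight_gib(7e9), 1))  # roughly 13.0 GiB for a 7B model
```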

  • @PeterDrewSEO
    @PeterDrewSEO A year ago

    Great video mate, thank you!

  • @sree_9699
    @sree_9699 A year ago

    Interesting! I was exploring the same thing just an hour ago on HF, and ran into this video as soon as I opened YouTube. Good content.

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w A year ago

    Do you have a video on pre-training an LLM?

  • @bentouss3445
    @bentouss3445 A year ago +2

    Really great video on this hot topic of open LLMs vs closed ones...
    It would be really interesting to see how to self-host an open LLM so you don't have to go through any external inference API.

    • @daveebbelaar
      @daveebbelaar  A year ago

      Thanks! Yes, that is very interesting indeed!

  • @ko-Daegu
    @ko-Daegu A year ago

    How are you running a .py file as a Jupyter notebook on the side like that? How are you getting each line into its own interactive block in the side panel? This setup looks neat.

    • @daveebbelaar
      @daveebbelaar  A year ago

      Check out this video: ua-cam.com/video/zulGMYg0v6U/v-deo.html

    • @ko-Daegu
      @ko-Daegu A year ago

      @@daveebbelaar Thanks!

  • @fdarko1
    @fdarko1 A year ago

    I am new to data science and want to learn more to become a pro. Please mentor me.

    • @daveebbelaar
      @daveebbelaar  A year ago

      Subscribe and check out the other videos on my channel ;)

  • @pragyanbhatt6200
    @pragyanbhatt6200 A year ago +1

    Nice tutorial Dave, but isn't it unfair to compare two models with different parameter counts? Falcon-7B has 7 billion, whereas text-davinci-003 has almost 175 billion parameters.

    • @daveebbelaar
      @daveebbelaar  A year ago +1

      It's definitely unfair, but that's why it's interesting to see the performance of a much smaller, free-to-use model.

  • @ingluissantana
    @ingluissantana A year ago

    As always GREAT video!!!! Thanks!!!!