Analytics Camp
✅ Easiest Way to Build AI Agents With RAG & CrewAI Locally
#aiagents #crewai #rag #llama3
This tutorial is the easiest way to build AI agents with CrewAI and Llama 3 that use a Retrieval-Augmented Generation (RAG) framework, setting up a repeatable workflow for your specific tasks with your own documents. Grounding the agents in your own documents reduces hallucinations and irrelevant responses and improves the LLM's results.
Set up your virtual environment for crewai with this easy tutorial:
ua-cam.com/video/XkS4ifkLwwQ/v-deo.html
In-Context Learning (ICL) instead of RAG:
ua-cam.com/video/efEk_dsmuik/v-deo.html
GitHub link for code and process of this video:
github.com/Maryam-Nasseri/RAG-based-AI-Agents/tree/main
www.youtube.com/@analyticsCamp/videos?sub_confirmation=1
Chapters and Key Concepts:
00:00 Intro to RAG
00:29 Fine-tuning LLMs
01:00 In-Context-Learning or ICL
01:32 RAG or Retrieval-Augmented Generation
01:58 RAG paper and method
03:20 Code-along AI agents with CrewAI and a RAG system: venv, wget, llama3, subprocess with Popen, agent prompts and tasks, uses in e-commerce and business decision-making (see the sketch below)
Views: 110
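For readers who want to try it before watching, here is a minimal sketch of the idea, not the video's exact code. It assumes crewai and langchain-community are installed and Ollama is serving a pulled llama3 model; the file name my_docs.txt and the naive keyword retriever are illustrative stand-ins for a real vector store:

    # Minimal RAG-style agent sketch: pull the most relevant chunks out of a
    # local document, then hand them to a CrewAI agent backed by local llama3.
    from crewai import Agent, Task, Crew
    from langchain_community.llms import Ollama

    def retrieve(query, path="my_docs.txt", k=3):
        # Naive keyword-overlap scoring; a real setup would use embeddings
        # and a vector store instead.
        chunks = open(path).read().split("\n\n")
        words = set(query.lower().split())
        chunks.sort(key=lambda c: -len(words & set(c.lower().split())))
        return "\n---\n".join(chunks[:k])

    question = "What is our refund policy?"
    context = retrieve(question)
    analyst = Agent(
        role="Document analyst",
        goal="Answer questions strictly from the provided context",
        backstory="You ground every answer in the retrieved passages.",
        llm=Ollama(model="llama3"),
    )
    task = Task(
        description=f"Context:\n{context}\n\nQuestion: {question}",
        expected_output="A short answer grounded only in the context.",
        agent=analyst,
    )
    print(Crew(agents=[analyst], tasks=[task]).kickoff())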

Videos

Is ICL The Next Big Thing In AI? 🤯 Replace Fine-tuning LLMs
Views: 140 · 14 days ago
#ai #gpt4o #llm When GPT-4o (omni) was released, there was so much excitement about the super cool voice and visual recognition and generation that we overlooked an important aspect of it, which I discuss today based on this paper by several Stanford University researchers, including Andrew Ng: Jiang et al. (2024). Many-Shot In-Context Learning in Multimodal Foundation Models Related v...
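As a flavour of the idea (a toy sketch, not the paper's setup): many-shot ICL simply packs a large number of labelled examples into the prompt instead of fine-tuning. The sentiment examples below are made up, and a local Ollama llama3 server is assumed:

    # Toy many-shot in-context learning: the "training" lives in the prompt.
    import requests

    examples = [
        ("The plot was gripping from start to finish.", "positive"),
        ("I walked out halfway through.", "negative"),
        # ...in true many-shot ICL this list grows into the hundreds...
    ]
    shots = "\n".join(f"Review: {t}\nLabel: {l}" for t, l in examples)
    prompt = f"{shots}\nReview: The pacing dragged but the ending landed.\nLabel:"
    r = requests.post("http://localhost:11434/api/generate",
                      json={"model": "llama3", "prompt": prompt, "stream": False})
    print(r.json()["response"].strip())
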
💯 FREE Local LLM - AI Agents With CrewAI And Ollama Easy Tutorial 👆
Views: 879 · 21 days ago
#aiagents #crewai #tutorials Follow along with this super easy code tutorial to set up your local agentic workflow, which is 100% free. I will use VS Code and show you how to install CrewAI and Ollama to work in your virtual environment. Check out this video, in which I explain the Refinement method for agentic workflows: This Agentic AI Workflow Will Take Over Algorithm Papers Explained ua-cam.com...
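The gist of the setup step, as a hedged sketch (it assumes Ollama is already installed and on your PATH; the fixed sleep is a shortcut for proper readiness polling):

    # Start a local Ollama server from Python and pull llama3, roughly the
    # venv + subprocess flow the tutorial walks through.
    import subprocess, time

    server = subprocess.Popen(["ollama", "serve"],
                              stdout=subprocess.DEVNULL,
                              stderr=subprocess.DEVNULL)
    time.sleep(2)  # crude wait for the server to come up
    subprocess.run(["ollama", "pull", "llama3"], check=True)
    # ...run your agents here, then shut the server down:
    server.terminate()
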
🔴 This Agentic AI Workflow Will Take Over 🤯 Algorithm + Papers Explained
Views: 8K · a month ago
#ai #llm #aiagents #agentic What Language Model To Choose For Your Project? 🤔 LLM Evaluation: ua-cam.com/video/PXX2OO7s8wY/v-deo.html (an evaluation of Hugging Face models). Please subscribe to support this channel :) An explanation of the papers and algorithms behind LLM agents in agentic AI systems (see timestamps below), using the following concepts and papers: iterative feedback and refinement for ...
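The refinement loop at the heart of this kind of workflow can be sketched in a few lines. This is an illustrative generate-critique-rewrite loop assuming a local Ollama llama3 server; the three-round budget and the ask() helper are my own simplifications, not the papers' algorithms:

    # Schematic iterative-refinement loop: draft, critique, rewrite, repeat.
    import requests

    def ask(prompt):
        r = requests.post("http://localhost:11434/api/generate",
                          json={"model": "llama3", "prompt": prompt, "stream": False})
        return r.json()["response"]

    draft = ask("Write a one-paragraph summary of retrieval-augmented generation.")
    for _ in range(3):  # fixed refinement budget
        critique = ask(f"List concrete flaws in this summary:\n{draft}")
        draft = ask(f"Rewrite the summary to fix these flaws.\n"
                    f"Flaws:\n{critique}\nSummary:\n{draft}")
    print(draft)
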
How Sora AI works: OpenAI Text-To-Video Model and LLM
Views: 610 · 3 months ago
#ai #llm #sora The model behind Sora AI, simplified! OpenAI's text-to-video LLM. Have you wondered what the main reason is that makes Sora so good at generating videos? What ingredient, added to the base diffusion model, has taken video generation to a whole new level, setting the scene for unlimited creativity? I will discuss this text-to-image and video model in this video, along with the L...
Beginner's Tutorial: Locally Run Models With Ollama 🤫 5 Steps to Improve LLM Results
Views: 123 · 3 months ago
#ai #llm #llama #orca I will test the 5-step method for improving the results of large language models in a famous reasoning test between Llama 2 and Orca 2. I have spent quite some time getting to the bottom line so you don’t have to! Watch till the end to see the winner and how these steps help improve the models’ responses. This test is one example of the GSM8K test used to evaluate LLMs in ...
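The five steps themselves are in the video; as one illustration of the kind of prompting that helps on GSM8K-style questions, here is a hedged sketch (the word problem is made up, and a local Ollama llama2 server is assumed):

    # Asking for explicit intermediate steps often improves reasoning answers.
    import requests

    question = ("A farmer has 12 eggs. He sells 5, and his hens lay 9 more. "
                "How many eggs does he have now?")
    prompt = (f"{question}\nThink step by step, "
              "then give the final answer on its own line.")
    r = requests.post("http://localhost:11434/api/generate",
                      json={"model": "llama2", "prompt": prompt, "stream": False})
    print(r.json()["response"])
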
What Language Model To Choose For Your Project? 🤔 LLM Evaluation
Views: 457 · 4 months ago
#llm #huggingface #gpt4 #ai With more than 490,000 language models uploaded to the Hugging Face model repositories, how do you find the best language model for your personal or business projects? I have spent two weeks searching for the best models so you don’t have to. In this video, you will get to know all the details about the Hugging Face LLM Leaderboard and how it evaluates all the models objecti...
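The leaderboard itself lives on the Hugging Face website, but you can shortlist candidates programmatically. A small sketch, assuming `pip install huggingface_hub` (sorting by downloads as a rough popularity proxy, not a leaderboard score):

    # List the five most-downloaded models on the Hugging Face Hub.
    from huggingface_hub import HfApi

    api = HfApi()
    for m in api.list_models(sort="downloads", direction=-1, limit=5):
        print(m.id, m.downloads)
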
Is Mamba Destroying Transformers For Good? 😱 Language Models in AI
Views: 6K · 4 months ago
#mamba #transformers #llm #mambaai Select 1080p. The Transformer language model started to transform the AI industry, but it has one main problem that could make it go extinct even before it blasts off in 2024! Watch this easy-to-follow video, full of fun graphics, about the model architectures and performance differences of the Transformer and Mamba language models. I will compare the function...
Mamba Language Model Simplified In JUST 5 MINUTES!
Views: 5K · 5 months ago
#mamba #ai #llm Here’s a super simplified explanation of the Mamba language model with the Selective State Space Model (selective SSM) architecture. In the previous videos, I used the example of sequences of words to show how Transformers use the attention mechanism to process natural language and predict the next word in a sequence of words, e.g., a sentence. In this video, I show you how Mamb...
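To make "selectivity" concrete, here is a toy scalar recurrence (my own heavy simplification, not the paper's parameterisation): the step size, write strength, and readout all depend on the current input, so the state can choose what to keep and what to forget:

    # Toy selective-SSM step: the update weights are functions of the input.
    import numpy as np

    rng = np.random.default_rng(0)
    W_dt, W_B, W_C = rng.normal(size=3)  # input-dependent parameter maps

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    h = 0.0
    for x_t in [0.5, -1.2, 0.3, 2.0]:   # a toy input sequence
        dt = sigmoid(W_dt * x_t)        # step size chosen from the input
        A_bar = np.exp(-dt)             # how much old state to keep
        B_bar = dt * W_B * x_t          # how much new input to write
        h = A_bar * h + B_bar           # selective state update
        y_t = (W_C * x_t) * h           # input-dependent readout
        print(f"x={x_t:+.1f}  keep={A_bar:.2f}  y={y_t:+.3f}")
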
The Concept of Backpropagation Simplified in JUST 2 MINUTES! -- Neural Networks
Views: 689 · 5 months ago
A beginner-friendly, easy-to-follow explanation of backpropagation in neural networks and how it helps reduce the error in predicting the next word in a sequence of text. Related videos: Transformer Language Models Simplified in JUST 3 MINUTES! ua-cam.com/video/6n-mOFlhbGI/v-deo.html Mamba Language Model Simplified In JUST 5 MINUTES! ua-cam.com/video/e7TFEgq5xiY/v-deo.html Stick around for more...
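For the curious, the whole idea fits in a few lines for a single linear neuron with squared error (a toy sketch; the numbers are arbitrary): the backward pass is just the chain rule, dL/dw = 2*(w*x - y)*x.

    # Tiny backpropagation demo: one weight, squared-error loss.
    x, y = 1.5, 2.0       # input and target
    w, lr = 0.1, 0.05     # initial weight and learning rate
    for _ in range(20):
        pred = w * x                 # forward pass
        grad = 2 * (pred - y) * x    # backward pass: dL/dw via the chain rule
        w -= lr * grad               # gradient-descent update
    print(round(w * x, 3))           # prediction has moved close to 2.0
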
Transformer Language Models Simplified in JUST 3 MINUTES!
Views: 385 · 5 months ago
#ai #llm #transformers #attention Watch how Transformer language models work in just 3 minutes! This is Transformers simplified for beginners, based on the 2017 paper ‘Attention Is All You Need’ by Google researchers. For more information, watch my previous video about how language models work at this link: ua-cam.com/video/n_5spvz-2KI/v-deo.html In the previous video, I showed the ...
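The core of that paper is scaled dot-product attention, which fits in a few lines of numpy (a toy sketch with random matrices; real models add learned projections, multiple heads, and masking):

    # Scaled dot-product attention: scores = QK^T / sqrt(d), softmax, mix V.
    import numpy as np

    rng = np.random.default_rng(0)
    Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))  # 4 positions, dim 8

    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    out = weights @ V                               # each output mixes all values
    print(out.shape)  # (4, 8)
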
5 AI Tools and Features to Expect in 2024 That You NEED to Know!
Views: 92 · 5 months ago
This is how EXACTLY Language Models work in AI - NO background needed!
Views: 301 · 6 months ago
Is Claude better than ChatGPT? SHOCKING Claude going viral
Views: 60 · 9 months ago
ChatGPT Prompt Engineering 101: 5 Best Prompt Hacks Everyone SHOULD know!
Views: 77 · 10 months ago

COMMENTS

  • @analyticsCamp
    @analyticsCamp 11 days ago

    Thanks for all your helpful comments :) Here's a related video explaining AI agentic workflow: ua-cam.com/video/lA3Tju4VUho/v-deo.html

  • @analyticsCamp
    @analyticsCamp 11 days ago

    Some of you asked for AI agents in action; here's a video with code to use 100% free local AI agents: ua-cam.com/video/XkS4ifkLwwQ/v-deo.html

  • @analyticsCamp
    @analyticsCamp 11 days ago

    Hey, if you are new to LLMs and need to improve the responses, here's a related video that shows 5 ways to improve LLM results: ua-cam.com/video/8IC8bWvORFU/v-deo.html

  • @optiondrone5468
    @optiondrone5468 12 days ago

    All very exciting, but how long do you think it will be before everyone has access to all these new AI-based applications?

    • @analyticsCamp
      @analyticsCamp 12 days ago

      Thanks for watching :) You can use ICL with any LLM, especially the ones you can download directly from Hugging Face or via Ollama. Some other interfaces allow users to attach files to process, so you can write your prompts and instructions in those files plus any images you need to attach. I'm not sure about audio and video ICL at this moment, though.

  • @Researcher100
    @Researcher100 15 days ago

    The explanation was clear, thanks. Does this paper show how to use this method in practice? I think most LLM users don't know the ins and outs of fine-tuning, so ICL can be very helpful for ordinary users.

    • @analyticsCamp
      @analyticsCamp 15 days ago

      Thanks for the comment :) Yes, the paper comes with all those explanations. Yep, I also believe this can open the way for more ordinary AI users AND many researchers in other fields.

  • @jarad4621
    @jarad4621 19 days ago

    Sorry, another question: am I able to use LM Studio with CrewAI as well? I wanted to test some other models, and its GPU acceleration lets models run better than Ollama for me. Is it still going to have problems due to the issues you fix with the model file, or is that issue not a problem for other local servers? Or is Ollama the best way, because you can actually edit those things to make it work well? Thanks

    • @analyticsCamp
      @analyticsCamp 19 days ago

      I do not use LM Studio, so I cannot comment on that. But Ollama via the terminal is pretty sturdy, and CrewAI should work with all Ollama models, though I have not tested them all. If you run into issues, you can report them here so others can know and/or help you :)

  • @first-thoughtgiver-of-will2456
    @first-thoughtgiver-of-will2456 20 days ago

    Can Mamba have its input RoPE scaled? It seems it doesn't require positional encoding, but this might make it extremely efficient for second-order optimization techniques.

    • @analyticsCamp
      @analyticsCamp 19 days ago

      In Mamba, the sequence length can be scaled up to a million (e.g., million-length sequences). It also computes the gradient (I did not find any info on second-order optimization in their method): they train for 10k to 20k gradient steps.

  • @jarad4621
    @jarad4621 20 days ago

    Also, I'd never seen the Mistral idea, so this model would make a really good agent then, better than the others? Really helpful to know, glad I found this. Also, can you test Agency Swarm and let us know what the best agent framework is currently? Apparently Crew is not great for production?

    • @analyticsCamp
      @analyticsCamp 20 days ago

      Thanks for watching :) I have tested multiple models from Ollama, and Mistral seems to have better performance overall, across most tasks. Agent Swarm can be useful for VERY specialised tasks in which general LLMs get it totally wrong. Other than that, it will add to the time/cost of the build. But I'm not sure if I understood your question right!

  • @jarad4621
    @jarad4621 20 days ago

    Awesome, I've been looking for some of this info for ages. Best video on agents after watching dozens of vids; nobody explains the problems with other models, or fixing the model file, or how to make sure local models work. Many of these YT experts are just using local and other models and wondering why it's not working well. Can I use Phi-3 Mini locally as well, and does it need the same model setup? Also, will Llama 70B on the OpenRouter API actually work as a good agent, or does something need to be tweaked first? Nobody can answer these things, please help. Thanks!

    • @analyticsCamp
      @analyticsCamp 20 days ago

      Sure, you can essentially use any model listed in Ollama as long as you make a model file; there you can manage the temperature etc. I have used Llama 70B before but, surprisingly, it did not give better responses than its 7B and 13B versions on most tasks! I recommend Llama 3 (I may be able to make a video on it if I get some free time, LOL).
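      (For anyone following along, here is a minimal sketch of that model-file step; the model name, temperature, and system prompt are illustrative, and it assumes the Ollama CLI is installed.)

      # Write a Modelfile and register a tweaked model with Ollama, then
      # point your agents at model="my-agent-llama3".
      import subprocess

      modelfile = ("FROM llama3\n"
                   "PARAMETER temperature 0.2\n"
                   "SYSTEM You are a concise research assistant.\n")
      with open("Modelfile", "w") as f:
          f.write(modelfile)
      subprocess.run(["ollama", "create", "my-agent-llama3", "-f", "Modelfile"],
                     check=True)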

    • @jarad4621
      @jarad4621 20 days ago

      @@analyticsCamp Awesome, thanks, I'll test the smaller ones first then.

  • @optiondrone5468
    @optiondrone5468 21 days ago

    Thanks for sharing your thoughts and your practical AI agent workflow. I also believe that this agentic workflow will fuel many LLM-based developments in 2024.

    • @analyticsCamp
      @analyticsCamp 20 days ago

      Thanks for watching :) If you have a specific LLM-based development/project in mind, please let me know. With such easy agentic access, I am also surprised how many AI users are still hooked on zero-shot with paid interfaces!

    • @optiondrone5468
      @optiondrone5468 19 days ago

      @@analyticsCamp Ha ha, it also never made sense to me why people don't look into open-source LLMs 🤔 They're free, they don't limit your token size, you're free to experiment with different models, and most importantly your data (prompts) stays yours and doesn't automatically become OpenAI's property. Keep up the good work, looking forward to your next video.

  • @milanpospisil8024
    @milanpospisil8024 22 days ago

    Yeah, Pat and Mat, that's from Czech studios :-)

  • @BooleanDisorder
    @BooleanDisorder a month ago

    I really don't understand why you have so few views and subscribers.

  • @SuperHddf
    @SuperHddf a month ago

    unwatchable with headphones.

    • @analyticsCamp
      @analyticsCamp a month ago

      Apologies for the sound quality. Please watch this video using a speaker. Thanks :)

  • @doublesami
    @doublesami a month ago

    Very informative; looking forward to the in-depth video on Vision Mamba or VMamba.

    • @analyticsCamp
      @analyticsCamp a month ago

      Thanks for watching and for your suggestion. Stay tuned :)

  • @pitrgreat
    @pitrgreat a month ago

    Interesting video, good work, but it's very quiet; I had to raise the volume to more than double what other videos need. Something's wrong with the sound or the way you recorded it ;)

    • @analyticsCamp
      @analyticsCamp a month ago

      Thanks for your feedback. Sorry about the sound quality; I'll try to fix it in the next videos. Stay tuned :)

  • @blondisbarrios7454
    @blondisbarrios7454 a month ago

    Nice video, nice channel, nice voice, nice pronunciation. Easy to understand ;)

    • @analyticsCamp
      @analyticsCamp a month ago

      Glad you think so and thanks for watching :)

  • @ragibhasan2.0
    @ragibhasan2.0 a month ago

    Nice video

  • @HannesFoulds
    @HannesFoulds a month ago

    Oh come on. What if I tell you, you have been living your life wrong, and there is a much better way...

  • @thecooler69
    @thecooler69 a month ago

    Gorilla seems like the holy grail. It's strange that it seems to have reached a standstill in development.

    • @analyticsCamp
      @analyticsCamp a month ago

      Thanks for watching. Their newest development is GoEX; their paper talks about how humans can supervise autonomous LLMs. Here's the paper ref: Patil et al. (2024). GoEX: Perspectives and Designs Towards a Runtime for Autonomous LLM Applications.

    • @thecooler69
      @thecooler69 a month ago

      @@analyticsCamp Groovy, thanks for that!

    • @analyticsCamp
      @analyticsCamp a month ago

      👍

  • @erikjohnson9112
    @erikjohnson9112 a month ago

    I like your style, content and voice. Just earned a sub.

  • @NLPprompter
    @NLPprompter a month ago

    You know this is made by a human, and this human is so busy with work... so this video has mono audio on the left speaker only.

    • @NLPprompter
      @NLPprompter a month ago

      Don't bother busy people doing their busy thing. You are asking too much.

    • @analyticsCamp
      @analyticsCamp a month ago

      Thanks for your feedback! I used to record with Zoom but I switched to this new mic; will need to adjust it as you suggested :)

    • @NLPprompter
      @NLPprompter a month ago

      @@analyticsCamp Can't wait for more explanatory vids like this.

    • @analyticsCamp
      @analyticsCamp a month ago

      Thanks for your feedback!

  • @christopheboucher127
    @christopheboucher127 a month ago

    Great content!! Thx! And yes, agentic systems have been my exploration for months... It's still very difficult to master the agentic flow... sometimes it goes out of limits, and I haven't figured out yet how to fix that (with both AutoGen and CrewAI)... maybe a topic for a future video?

    • @analyticsCamp
      @analyticsCamp a month ago

      Great suggestion. Yes it can be challenging at times :) Update: Here's the video you requested: ua-cam.com/video/XkS4ifkLwwQ/v-deo.html

  • @seupedro9924
    @seupedro9924 a month ago

    The video is very informative, but it would be great to see the agents' power in action.

    • @analyticsCamp
      @analyticsCamp a month ago

      Thanks for watching :) Here's AI agents in action: ua-cam.com/video/XkS4ifkLwwQ/v-deo.html

  • @fintech1378
    @fintech1378 a month ago

    More agents are all you need, bro. Don't treat an AI agent as a human; think of it as a cell in your body (maybe this is a bad analogy), so we need hundreds of them at least to perform a certain function.

    • @analyticsCamp
      @analyticsCamp a month ago

      Thanks for your comment! Maybe you are right!

  • @eduardoramos8317
    @eduardoramos8317 a month ago

    Good video! Could you make a comparison with the Mamba architecture?

  • @TheFocusedCoder
    @TheFocusedCoder a month ago

    nice video !

  • @Researcher100
    @Researcher100 a month ago

    Very nice video! I liked the explanation of the Reflexion algorithm. Please do more on AI agents.

  • @kvlnnguyieb9522
    @kvlnnguyieb9522 2 months ago

    A great video. Next video, maybe you can explain the details of the selective mechanisms in code.

    • @analyticsCamp
      @analyticsCamp 2 months ago

      Great suggestion! Thanks for watching :)

  • @optiondrone5468
    @optiondrone5468 3 months ago

    This is an excellent explanation; however, Sora needs more improvement before it can be taken seriously as a disruptor of the film industry or other video-generation workflows.

    • @analyticsCamp
      @analyticsCamp 3 months ago

      Agreed, but let's see the open-source model behind it first, if they ever release it :)

  • @facundohannoch3675
    @facundohannoch3675 3 months ago

    Thank you!! Could not find much information about how to compare LLMs, and your video was really helpful!

    • @analyticsCamp
      @analyticsCamp 3 months ago

      Glad it was helpful! Let me know if you'd like me to cover any topic :)

  • @gangs0846
    @gangs0846 3 months ago

    Thank you

  • @optiondrone5468
    @optiondrone5468 4 months ago

    I'm new to ML, and when it comes to model selection, I've always had questions about which metrics matter during model selection. I like what Hugging Face did with their leaderboard, and I also liked your explanation. Thanks for sharing it with us.

  • @Researcher100
    @Researcher100 4 months ago

    I see what you did with the GPT answers 😏 And the humans-vs-models thing with Khabib vs McGregor was super dope 😂😂😂

  • @eugene-bright
    @eugene-bright 4 months ago

    The best tip is to use RAG (e.g. Perplexity AI)

    • @eugene-bright
      @eugene-bright 4 months ago

      AutoGen Studio + LM Studio as a quick custom solution. LangChain is best when you are familiar with Python.

  • @user-du8hf3he7r
    @user-du8hf3he7r 4 months ago

    Low audio volume.

  • @ricardofurbino
    @ricardofurbino 4 months ago

    I'm doing work that uses sequence data, but it's not language-specific. In a Transformer-like network, instead of an embedding layer for the source and target, I have linear layers; also, I send both source and target to the forward pass. In an LSTM-like network, I don't even need this step; I just use the standard torch LSTM cell, and only the source is needed for the forward pass. Does anyone have a code example of how I can do this with Mamba? I'm having difficulty working it out.

    • @analyticsCamp
      @analyticsCamp 4 months ago

      Hey, I just found a PyTorch implementation of Mamba at this link. I haven't gone through it personally, but if it is helpful please do let me know: medium.com/ai-insights-cobet/building-mamba-from-scratch-a-comprehensive-code-walkthrough-5db040c28049
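      (A minimal sketch of the linear-in, linear-out idea from the question above, assuming the mamba-ssm package and a CUDA device, which its kernels require; all sizes are illustrative.)

      # Continuous (non-token) sequences through a Mamba block: a linear
      # layer replaces the embedding, and a linear head reads predictions out.
      import torch
      from mamba_ssm import Mamba

      B, L, feat_dim, d_model = 8, 128, 6, 64
      x = torch.randn(B, L, feat_dim, device="cuda")

      proj_in = torch.nn.Linear(feat_dim, d_model).cuda()   # replaces nn.Embedding
      mamba = Mamba(d_model=d_model, d_state=16, d_conv=4, expand=2).cuda()
      proj_out = torch.nn.Linear(d_model, feat_dim).cuda()  # e.g. next-step prediction

      y = proj_out(mamba(proj_in(x)))   # shape: (B, L, feat_dim)
      print(y.shape)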

  • @raymond_luxury_yacht
    @raymond_luxury_yacht 4 months ago

    Why didn't they call it Godzilla?

    • @analyticsCamp
      @analyticsCamp 4 months ago

      Funny :) It seems to be as powerful as one!

  • @consig1iere294
    @consig1iere294 4 months ago

    I can't keep up. Is Mamba like the Mistral model, or is it an LLM technology?

    • @analyticsCamp
      @analyticsCamp 4 months ago

      Mamba is an LLM, but it has a unique architecture: a blend of traditional SSM-based models with multi-layer perceptron blocks, which helps it add 'selectivity' to the flow of information in the system (unlike Transformer-based models, which often take the whole context, i.e., all the information, to predict the next word). If you are still confused, I recommend watching the video on this channel called "This is how EXACTLY Language Models work", which gives you a perspective on different types of LLMs :)

  • @richardnunziata3221
    @richardnunziata3221 4 months ago

    A 7B to 10B Mamba would be interesting to judge, but right now it seems it's really good with long content in the small-model space.

    • @analyticsCamp
      @analyticsCamp 4 months ago

      You are right! Generally speaking, a larger parameter count gives better results. But Mamba claims that we don't necessarily need larger models, just a more efficient model design that can perform comparably to other models, even when trained on less data and with fewer parameters. I suggest their article, section 4.3.1, where they talk about "Scaling: Model Size", which can give you a good perspective. Thanks for watching :)

  • @datascienceworld
    @datascienceworld 4 months ago

    Great video.

  • @mintakan003
    @mintakan003 4 months ago

    I'd like to see this tested on larger models, comparable to Llama 2. One question I have is whether there are diminishing returns for long-distance relationships, compared to a context window of sufficient size. Is it enough for people to give up (tried and true?) Transformers, with explicit modeling of the context, for something that is more selective?

    • @analyticsCamp
      @analyticsCamp 4 months ago

      A thoughtful observation! Yes, the authors of Mamba have already tested it against Transformer-based architectures, such as PaLM and LLaMA, and a bunch of other models. Here's what they quote in their article, page 2: "With scaling laws up to 1B parameters, we show that Mamba exceeds the performance of a large range of baselines, including very strong modern Transformer training recipes based on LLaMa (Touvron et al. 2023). Our Mamba language model has 5× generation throughput compared to Transformers of similar size, and Mamba-3B’s quality matches that of Transformers twice its size (e.g. 4 points higher avg. on common sense reasoning compared to Pythia-3B and even exceeding Pythia-7B)." With regards to scaling the sequence length, I have explained a bit in the video. Here's a bit more explanation from their article, page 1: "The efficacy of self-attention is attributed to its ability to route information densely within a context window, allowing it to model complex data. However, this property brings fundamental drawbacks: an inability to model anything outside of a finite window, and quadratic scaling with respect to the window length." There's also an interesting table summarising model evaluation (Zero-shot Evaluation, page 13) of different Mamba model sizes compared to GPT-2, the H3 Hybrid model, Pythia, and RWKV, where in each instance Mamba exceeds these models' performance (check out the accuracy values on each dataset, especially for the Mamba 2.8-billion-parameter model; it is truly unique). And thanks for watching :)

  • @Kutsushita_yukino
    @Kutsushita_yukino 4 months ago

    Let's goooo, Mamba basically has memory similar to humans'. But brains do tend to forget when the information is unnecessary, so there's that.

    • @analyticsCamp
      @analyticsCamp 4 months ago

      That's right. Essentially, the main idea behind SSM architectures (e.g., having a hidden state) is to be able to manage the flow of information in the system.

  • @70152136
    @70152136 4 months ago

    Just when I thought I had caught up with GPTs and Transformers, BOOM, MAMBA!!!

  • @yuvrajsingh-gm6zk
    @yuvrajsingh-gm6zk 4 months ago

    Keep up the good work, btw you got a new sub!

  • @nidalidais9999
    @nidalidais9999 4 months ago

    I liked your style and your funny personality

    • @analyticsCamp
      @analyticsCamp 4 months ago

      Thanks for watching, I love your comment too :)

  • @MrJohnson00111
    @MrJohnson00111 4 months ago

    You clearly explain the difference between Transformer and Mamba, thank you. But could you also give the reference for the paper you mention in the video, so I can dive in?

    • @analyticsCamp
      @analyticsCamp 4 months ago

      Hi, glad the video was helpful. The reference for the paper is also mentioned multiple times in the video, but here's the full reference for your convenience: Gu & Dao (2023). Mamba: Linear-Time Sequence Modelling with Selective State Spaces.

  • @viswa3059
    @viswa3059 4 months ago

    I came here for a giant snake vs. giant robot fight.

  • @Researcher100
    @Researcher100 4 months ago

    Thanks for the effort you put into this detailed comparison, I learned a few more things. Btw, the editing and graphics in this video were really good 👍

  • @optiondrone5468
    @optiondrone5468 4 months ago

    I'm enjoying these mamba videos you're sharing with us, thanks