Lucidate
Introduction to AI App development & Fine-Tuning. How to build AI apps with LLMs & LangChain in 2024
Welcome to our comprehensive tutorial series on "Fine-Tuning Large Language Models (LLMs)"! If you're new to LLMs or looking to deepen your expertise, this video is the perfect starting point. Dive into the fundamentals with our "Introduction to LLMs" and discover essential techniques for optimizing LLM performance.
Andrew Ng video 'Opportunities in AI': ua-cam.com/video/5p248yoa3oE/v-deo.html
In this detailed series, we explore various facets of LLMs, including:
Basic and Advanced LLM Applications: Learn how to use large language models to create innovative applications and enhance existing technologies.
LLM Fine-Tuning Techniques: Uncover strategies to improve accuracy and efficiency, focusing on practical implementations like Low Rank Adaptation and using popular platforms like Hugging Face.
Using LangChain with LLMs: Get hands-on experience with tools that can supercharge your LLM projects, making them more dynamic and responsive using ai agents and ai tools.
Building Applications with LLMs: Step-by-step instructions on developing applications using the power of LLMs, ideal for both beginners and seasoned developers. Understand how prompt engineering helps personalise your applications. Learn the fundamentals of Agentic AI applications by putting LLMs into loops.
LLM Training Tutorial: Enhance your understanding of training LLMs effectively to achieve the best results in real-world scenarios.
Our series is designed to guide you through the intricacies of fine-tuning LLMs to ensure you can not only understand but also apply this knowledge to create robust, intelligent applications. Whether you are fine-tuning LLMs on Hugging Face or integrating LangChain for enhanced functionality, this series will provide all the tools you need for success.
Hit subscribe and turn on notifications to not miss out on our deep dives into each aspect of LLM technology. Each video in our series builds on the last, creating a comprehensive learning journey for anyone interested in the cutting-edge field of large language models.
Join us as we unpack the exciting world of LLMs, making these complex technologies accessible and actionable. Whether you're building your first LLM application or looking to refine your techniques with advanced optimizations, this series will equip you with the knowledge and skills needed to excel.
Views: 132

Videos

Benchmarking AI: Finding the Best Code Generation Model using CodeBleu
524 views · 1 month ago
Discover the future of AI code development in this comprehensive look at code generation models! Richard Walker from Lucidate delves into the exciting world of Large Language Models (LLMs) like GPT-4 and how they're shaping our coding landscape. From examining coding communities' contributions to exploring advanced fine-tuning on platforms like HuggingFace and Ollama, this video is your ultimat...
Text Summarisation Showdown: Evaluating the Top Large Language Models (LLMs)
311 views · 1 month ago
Dive into the world of AI with Richard Walker, founder of Lucidate, as we embark on a quest to discern the most effective Large Language Model for text summarization. This in-depth video is tailored for AI enthusiasts, data-driven professionals, and decision-makers looking to leverage the power of artificial intelligence for summarizing complex information, especially within the financial secto...
Revolutionize Document Creation with Generative AI: Using LangChain ReAct Agents and Tools
527 views · 1 month ago
Learn how Generative AI is transforming document creation in this Lucidate Alchemy tutorial [8:30]. Discover how to use AI-powered tools to turn unstructured content into polished documents, enhance your writing with the latest web information, and ensure accuracy with automated fact-checking. In this video, we dive into the technology behind Lucidate Alchemy, including: 0:00 - Introduction and...
AI Document Writing Made Easy: Create, Enhance, & Verify in Minutes
456 views · 2 months ago
Discover the unparalleled power of Generative AI in document creation with Lucidate Alchemy. In this in-depth tutorial, we unveil how AI document creation can revolutionize the way professionals like you manage and enhance business documents. 📈 Enhance Your Papers with the Latest Information: Lucidate Alchemy is not just an AI-powered writing tool; it's your partner in achieving comprehensive, ...
AI Document Creation Revolution in 2024: How to Automate & Mobilize with AI - New Strategies!
504 views · 2 months ago
Maximize Your Workflow with AI - Lucidate Reveals How! | Lucidate Tech Talks #2 Discover how AI can transform your everyday documents into powerful tools for analysis and communication. In this second installment of Lucidate Tech Talks, we delve deeper into the world of Generative AI with Richard Walker, showing you the future of document automation and cloud-based adaptability. 🔍 What's Inside...
AI-Powered Alchemy: Transforming Financial Data into Strategic Gold
496 views · 2 months ago
Discover the transformative power of RAG and LLMs in financial analytics with our latest deep-dive into Generative AI! "AI-Powered Alchemy: Transforming Financial Data into Strategic Gold" unlocks the secrets to leveraging untapped data within your firm. Use zero-shot and one-shot learning; make use of vector databases and RAG to corral the corpora of often-discarded wisdom in your firm. This in...
The fundamentals of LLMs and Prompt Engineering in 3 easy steps!
1.3K views · 3 months ago
Unlock a lucrative career in AI with prompt engineering expertise! In this video, we'll unveil how becoming a master prompt engineer can lead to six-figure salaries and exciting opportunities in AI. Dive into the essentials of AI models like Chat-GPT, exploring probability distributions, the impact of inputs on outputs, and the power of tokenization. These foundational concepts are not just int...
AI's Game-Changing Role in Derivatives Trading: Expert Insights Revealed
460 views · 5 months ago
Join Richard Walker from Lucidate in this enlightening video as we dissect the pivotal role of AI in reshaping the world of trading. This detailed analysis stems from a panel discussion at the Futures and Options World conference in London, bringing insights from over 200 industry professionals. Explore key areas where AI is making its mark - from its rapid adoption in trading strategies to its...
Revolutionize Equity Analysis: How AI and LLMs are Changing the Game in Finance
675 views · 6 months ago
Welcome to Lucidate's deep dive into the transformative world of AI in finance. In this video, Richard Walker, an expert in equity analysis, takes you through the revolutionary impact of AI on financial markets. 🔍 Discover How AI Revolutionizes Equity Analysis Learn how our specialized AI tools significantly enhance the productivity and output quality of financial analysts. By uploading a simpl...
From Pints to Insights: Unveiling Semantic Search Power with Word Embeddings and Vector Databases
305 views · 6 months ago
🍺🔍 "Revolutionizing Search: How Semantic AI Transforms Beer Selection & Beyond | Lucidate Explains" Join us on a fascinating journey from hops to high-tech with Lucidate's latest innovation in semantic search. In this video, Richard Walker dives into how AI not only helps you pick the perfect beer but also reshapes how we search for information in finance, investment banking, and more. What You...
From raw Excel spreadsheet to client-ready PowerPoint using a fine-tuned LLM. Derivatives & LDI
794 views · 6 months ago
📈 In this video, Richard Walker from Lucidate unpacks the revolutionary role of AI in asset management, focusing on how Large Language Models (LLMs) can optimize sales and client service functions. Dive deep into the world of pension schemes, financial derivatives, and the cutting-edge technology transforming the industry. Video Chapters: 00:00 Introduction to AI in Asset Management 02:27 Under...
Witness AI Magic: Risk Insights in Seconds
502 views · 6 months ago
Venture into the world of finance with Richard Walker from Lucidate and witness firsthand the revolutionary role of AI in reshaping portfolio risk management. Understand the intricacies of financial strategies like margin lending and short selling, get insights from detailed P&L histograms, and see how they play a pivotal role in decision-making. What truly sets this journey apart is Lucidate's...
See How A.I. Can Streamline Your Equity Investment Analysis Process
537 views · 8 months ago
See how AI can streamline investment analysis! This video demos our robo-advisor app that uses artificial intelligence to help optimize your portfolio. Learn how it assesses your risk appetite, provides stock recommendations based on fundamentals/news, and benchmarks performance vs alternatives. Chapter highlights: 0:00 Intro 1:43 App design criteria 3:10 Assessing the investor's risk and investment appetit...
How AI Unlocks Hidden Insights in Research Reports
1.3K views · 9 months ago
Unlock Hidden Insights in Analyst Research Reports with AI Analyst reports contain a goldmine of market intelligence, but key insights are often buried across hundreds of pages. Reading these dense reports to find relevant information is incredibly inefficient. Now, innovative technologies like vector search engines, machine learning algorithms, and natural language processing are transforming ...
What if AI Could Out-trade Human Experts?
1.5K views · 9 months ago
What if AI Could Out-trade Human Experts?
I Built an AI Financial Advisor in 10 Minutes using LangChain with Chain of Thought & ReAct
3.9K views · 9 months ago
I Built an AI Financial Advisor in 10 Minutes using LangChain with Chain of Thought & ReAct
Build your own Finance AGI!
3.6K views · 10 months ago
Build your own Finance AGI!
Mastering AI FinBot Development: Tutorial Guide to LangChain, Prompt Engineering, & Tree of Thoughts
2.7K views · 10 months ago
Mastering AI FinBot Development: Tutorial Guide to LangChain, Prompt Engineering, & Tree of Thoughts
Revolutionizing FinTech: Build Your Own Robo-Adviser with LangChain
3.7K views · 10 months ago
Revolutionizing FinTech: Build Your Own Robo-Adviser with LangChain
Prompt engineering with LangChain: Prompt Selection with AI and LLMs
3.2K views · 11 months ago
Prompt engineering with LangChain: Prompt Selection with AI and LLMs
Forest of Thoughts: Boosting Large Language Models with LangChain and HuggingFace
11K views · 11 months ago
Forest of Thoughts: Boosting Large Language Models with LangChain and HuggingFace
How to write Tree of Thoughts Prompts.
25K views · 11 months ago
How to write Tree of Thoughts Prompts.
AI Revolution: Exploring Tree of Thoughts Prompt Engineering.
11K views · 11 months ago
AI Revolution: Exploring Tree of Thoughts Prompt Engineering.
Master Prompt Engineering with LangChain
5K views · 11 months ago
Master Prompt Engineering with LangChain
LangChain: Prompt Engineering
4.3K views · 11 months ago
LangChain: Prompt Engineering
Build a LangChain App [Tutorial]
6K views · 11 months ago
Build a LangChain App [Tutorial]
LangChain: Build your own AGI
9K views · 11 months ago
LangChain: Build your own AGI
LangChain: Unleashing AI's full potential
2.2K views · 11 months ago
LangChain: Unleashing AI's full potential
BloombergGPT: Build Your Own - But can you train it? [Tutorial]
16K views · 1 year ago
BloombergGPT: Build Your Own - But can you train it? [Tutorial]

COMMENTS

  • @joshuacunningham7912
    @joshuacunningham7912 13 hours ago

    So good! Thank you for educating in a way that’s easy to understand. 👏

    • @lucidateAI
      @lucidateAI 13 hours ago

      You are welcome. Delighted you found the content useful.

  • @Blooper1980
    @Blooper1980 14 hours ago

    CANT WAIT!!!!!!!

    • @lucidateAI
      @lucidateAI 14 hours ago

      Glad you found it useful. Videos 2 and 3 are already complete and should be on general release next week. (Currently they are available to Lucidate members at the VP, MD or CEO levels.) I'm just finishing off the LoRA video as I type. That should be out the week after next. Appreciate the support and I hope you found the content insightful.

  • @AbdennacerAyeb
    @AbdennacerAyeb 15 hours ago

    You are a gem. Thank you for sharing knowledge.

    • @lucidateAI
      @lucidateAI 14 hours ago

      Thanks @AbdennacerAyeb! Greatly appreciated. I'm glad you enjoyed the video!

  • @jon4
    @jon4 17 hours ago

    Another great video. Really looking forward to this series

    • @lucidateAI
      @lucidateAI 16 hours ago

      You are welcome. Really glad you found it useful.

  • @encapsulatio
    @encapsulatio 10 days ago

    Which LLM from all you tested up to now(in general, not only the ones you talked about in this video) is the best at this moment at breaking down subjects that are at a university level using pedagogical tools? If I request the model to read 2-3 books on pedagogical tools can it properly learn how to use these tools and actually apply them on explaining clearer and better the subjects?

    • @lucidateAI
      @lucidateAI 10 days ago

      This video is focused on which models perform the best at generating source code (that is to say Java, C++, Python etc.). On the other hand the subject of this video -> Text Summarisation Showdown: Evaluating the Top Large Language Models (LLMs) ua-cam.com/video/8r9h4KBLNao/v-deo.html is text generation/translation/summarization etc. Perhaps the other video is more what you are looking for? In either event the key takeaway is: by all means rely on public, published benchmarks. But if you want to evaluate models on your specific use-case (and if I correctly understand your question, I think you do) then it might be worth considering setting up your own tests and your own benchmarks for your own specific evaluation. Clearly there is a trade-off here. Setting up custom benchmarks and tests isn't free. But if you understand how to build AI models, then it isn't that complex either.
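The "set up your own tests and benchmarks" idea above can be sketched in a few lines of Python. This is a minimal illustration, not the channel's actual evaluation code: the `eval_set`, `echo_model` and the token-overlap F1 metric are all stand-ins (a real harness would call a live LLM API and might use CodeBLEU or ROUGE as the metric).

```python
def token_f1(prediction: str, reference: str) -> float:
    """Simple token-overlap F1 between a model output and a reference answer."""
    pred, ref = set(prediction.lower().split()), set(reference.lower().split())
    if not pred or not ref:
        return 0.0
    common = len(pred & ref)
    if common == 0:
        return 0.0
    precision, recall = common / len(pred), common / len(ref)
    return 2 * precision * recall / (precision + recall)

def benchmark(model_fn, eval_set) -> float:
    """Average score of a model callable over (prompt, reference) pairs."""
    scores = [token_f1(model_fn(prompt), ref) for prompt, ref in eval_set]
    return sum(scores) / len(scores)

# Toy evaluation set and a stand-in "model"; a real run would call an LLM API.
eval_set = [("Capital of France?", "Paris"), ("2 + 2?", "4")]
echo_model = lambda prompt: "Paris" if "France" in prompt else "5"
print(round(benchmark(echo_model, eval_set), 2))  # → 0.5
```

The point is that the harness, not the metric, is the reusable part: swap in your own prompts, references and scoring function for your specific use-case.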

    • @encapsulatio
      @encapsulatio 8 days ago

      @@lucidateAI I reformulated a bit my inquiry since it was not clear enough. Can you read it again please?

    • @lucidateAI
      @lucidateAI 8 days ago

      Thanks for the clarification. The challenge with reading 2 or 3 books will be the size of the LLM's context window (the number of tokens that can be input at once). Solutions to this involve using vector databases - example here -> ua-cam.com/video/jP9swextW2o/v-deo.html. This involves writing Python code and development frameworks like LangChain. You may be an expert at this, in which case I'd recommend some of the latest Llama models and GPT-4. Alternatively you can use Gemini and Claude 3 and feed in sections of the books at a time (up to the token limit of the LLM). These models tend to perform the best when it comes to breaking down complex, university-level subjects. They seem to have a strong grasp of pedagogical principles and can structure explanations in a clear, easy-to-follow manner. That said, I haven't specifically tested having the models read books on pedagogical tools and then applying those techniques. It's an interesting idea though! Given the understanding these advanced models already seem to have, I suspect that focused training on pedagogical methods could further enhance their explanatory abilities. My recommendation would be to experiment with a few different models, providing them with sample content from the books and seeing how well they internalize and apply the techniques. You could evaluate the outputs to determine which model best suits your needs.
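The "feed in sections of the books at a time" workaround above can be sketched in plain Python. This is an illustrative sketch only: it assumes a crude 4-characters-per-token estimate and a made-up `max_tokens` budget, whereas a real pipeline would use the model's own tokenizer.

```python
def rough_token_count(text: str) -> int:
    """Crude estimate: roughly 4 characters per token for English text."""
    return max(1, len(text) // 4)

def chunk_text(text: str, max_tokens: int = 1000) -> list[str]:
    """Split text into paragraph-aligned chunks that each fit a token budget.
    A single paragraph larger than the budget still becomes its own chunk."""
    chunks, current, used = [], [], 0
    for para in text.split("\n\n"):
        cost = rough_token_count(para)
        if current and used + cost > max_tokens:
            chunks.append("\n\n".join(current))
            current, used = [], 0
        current.append(para)
        used += cost
    if current:
        chunks.append("\n\n".join(current))
    return chunks

# A toy "book": 10 paragraphs of ~253 estimated tokens each.
book = "\n\n".join(f"Paragraph {i} " + "word " * 200 for i in range(10))
chunks = chunk_text(book, max_tokens=600)
print(len(chunks))  # → 5 chunks of two paragraphs each
```

Each chunk can then be sent to the LLM in turn, with the running summary carried forward in the prompt as the "short-term memory" the reply describes.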

  • @sandstormfeline3664
    @sandstormfeline3664 23 days ago

    I was looking for a video to help get my head around tree of thought with a working example, and I found it. great work thanks :)

    • @lucidateAI
      @lucidateAI 22 days ago

      You are very welcome. I’m glad you found it insightful. ua-cam.com/play/PLaJCKi8Nk1hyvGVZub2Ar7Az57_nKemzX.html&si=JwiUaQ-UojUXoOwA here are some other video explainers on other Prompt Engineering techniques that I hope you find equally informative.

  • @joshuacunningham7912
    @joshuacunningham7912 1 month ago

    This is one of the most underrated AI UA-cam channels by far. Thanks Richard for another phenomenal video.

    • @lucidateAI
      @lucidateAI 1 month ago

      Appreciate that! Thanks! Glad you found this video and other content on the channel insightful.

  • @paaabl0.
    @paaabl0. 1 month ago

    Well, you didnt explain a thing about autogpt here :/

    • @lucidateAI
      @lucidateAI 1 month ago

      Sorry @paaabl0, but thanks for leaving a comment. Let me try, if I may, from another angle. The inputs and outputs to LLMs are natural language. Human text. (Yes, literally they are vectors of subword tokens, but I hope you will forgive the abstraction.) If you type text into an LLM, you get text out. AutoGPT works by using this feature of LLMs and putting an LLM into a loop. As the inputs and outputs are both natural language you can use clever prompts to control and direct this loop. While there are many prompting techniques you can use, 'Plan & Execute' as well as 'ReAct' (REasoning & ACTion) are popular choices here. They work by first instructing the LLM to go through a sequence of steps - such as 1 Question, 2 Thought, 3 Action, 4 Action Input, 5 Observation (repeat the previous steps until) 6 Thought == 'I now know the answer to the original question', 7 Divulge answer. See an example of this type of prompt here:

      Answer the following questions as best you can. You have access to the following tools: {tools}
      Use the following format:
      Question: the input question you must answer
      Thought: you should always think about what to do
      Action: the action to take, should be one of [{tool_names}]
      Action Input: the input to the action
      Observation: the result of the action
      ... (this Thought/Action/Action Input/Observation can repeat N times)
      Thought: I now know the final answer
      Final Answer: the final answer to the original input question
      Begin!
      Question: {input}
      Thought:{agent_scratchpad}

      This is authored by Harrison Chase, founder of LangChain, and you can access it at the LangChain Hub under 'hwchase17/react'. This is the heart of AutoGPT (and other similar attempts at AGI). By using the pattern 'input is language / output is also language / prompt the LLM into a loop where early stages are about thinking and planning, middle stages are about reasoning and action, and final stages are about conclusion and output', you achieve the type of behaviour associated with tools/projects like AutoGPT. Perhaps this different explanation helped a little, perhaps not. Clearly there are a good many great YT sites on AI and I hope one of them is able to answer your questions around AutoGPT better than I'm able. With thanks for taking the time to comment on the video.
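The "LLM in a loop" pattern described above can be sketched as follows. This is an illustrative toy, not AutoGPT or LangChain code: `call_llm` is a scripted stand-in for a real model API, and the single `calculator` tool and the regex parsing are deliberately minimal.

```python
import re

def calculator(expression: str) -> str:
    """A toy tool the agent can invoke. Demo only: never eval untrusted input."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def call_llm(transcript: str) -> str:
    """Stand-in for a real LLM API call. This scripted version answers one
    arithmetic question: it first picks a tool, then, once it sees an
    Observation in the transcript, it concludes."""
    if "Observation:" not in transcript:
        return "Thought: I should compute this.\nAction: calculator\nAction Input: 6 * 7"
    return "Thought: I now know the final answer\nFinal Answer: 42"

def react_loop(question: str, max_steps: int = 5) -> str:
    """Run the Thought/Action/Action Input/Observation loop until the model
    emits a Final Answer, feeding each tool result back into the transcript."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = call_llm(transcript)
        transcript += step + "\n"
        final = re.search(r"Final Answer:\s*(.+)", step)
        if final:
            return final.group(1).strip()
        action = re.search(r"Action:\s*(\w+)\nAction Input:\s*(.+)", step)
        if action:
            observation = TOOLS[action.group(1)](action.group(2).strip())
            transcript += f"Observation: {observation}\n"
    return "No answer within step budget"

print(react_loop("What is 6 * 7?"))  # → 42
```

Because both the "thinking" and the tool results travel through the same natural-language transcript, swapping the scripted `call_llm` for a real model call is all it takes to get AutoGPT-style behaviour.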

  • @SameerGilani-zy6sf
    @SameerGilani-zy6sf 1 month ago

    I am not able to install langchain.experimental.plan_and_execute. Can you plz help me

  • @joshuacunningham7912
    @joshuacunningham7912 1 month ago

    Dear @LucidateAI, Pay no attention to @avidlearner8117. They obviously lack a fundamental understanding of business and public social interaction. I am very appreciative of your content and always look forward to it.

  • @avidlearner8117
    @avidlearner8117 1 month ago

    OK, so you went from analysis to pushing your product on every new videos? SMH...

    • @lucidateAI
      @lucidateAI 1 month ago

      Don't break your neck!

    • @avidlearner8117
      @avidlearner8117 1 month ago

      @@lucidateAI Oh, I hit a nerve. Get it?

    • @lucidateAI
      @lucidateAI 1 month ago

      Then I'd stop shaking if I were you!

    • @avidlearner8117
      @avidlearner8117 1 month ago

      @@lucidateAI You thought I was talking about my neck! Ah well.

    • @lucidateAI
      @lucidateAI 1 month ago

      And a beautiful neck it is, I'm sure! @avidlearner8117

  • @DannyGerst
    @DannyGerst 2 months ago

    That is nice! Will you be interested in share the code? Your videos seeming promising, but it is only talk without anything that I can play with. That would be really great!!

    • @lucidateAI
      @lucidateAI 2 months ago

      Hi Danny. Yes and no. The code for this video is not currently available, but the video in the works is a code walkthrough with a link to the GitHub repo that contains the code. However, this will be paywalled and only available to members of the Lucidate channel at the Managing Director and CEO levels. So while it will be available, it will not be “freely available” (which is I think what you were asking).

  • @abenjamin13
    @abenjamin13 2 months ago

    This is fantastic blueprint for creating a “quality” output document 📄. I appreciate you 🫵

  • @markettrader911
    @markettrader911 2 months ago

    Good shit man

  • @banzai316
    @banzai316 2 months ago

    What about creating videos or creating docs from videos (& summary).

    • @lucidateAI
      @lucidateAI 2 months ago

      Is this Fine Tuning GPT-3 & Chatgpt Transformers: Using OpenAI Whisper ua-cam.com/video/Qv0cHcfFHM8/v-deo.html useful?

    • @banzai316
      @banzai316 2 months ago

      @@lucidateAI , yes, completely forgot about this transformer. I will look again.

  • @neurojitsu
    @neurojitsu 2 months ago

    Quick question: does Claude2 have a similar capability to turn text into a vectorised docstore in order to do what it does? If so, then is the added value of your app the eradication of the context window limit, or better 'tuning' of the workflows for this purpose, or some other magic sauce?! Trying to get my head round the value of your MD tier beyond the tools I'm learning to use at the moment. Thanks in advance.

    • @lucidateAI
      @lucidateAI 2 months ago

      Hi @neurojitsu. All LLMs use embeddings, transforming words (or more precisely sub-words called tokens) into vectors. If you are unfamiliar with this process then these videos will get you up to speed - ua-cam.com/video/6XLJ7TZXSPg/v-deo.html and ua-cam.com/video/RAIUJ3VFXmI/v-deo.html. But that is different from taking a document, vectorizing it and putting it into a docstore or vector database. I use FAISS in this video, but other vector databases include Pinecone, Weaviate and Chroma. Neither Claude2 (nor GPT, Gemini, Llama2, Coral etc.) natively creates a docstore. This is a separate action, and you can link the docstore to the LLM using an AI framework like LangChain or AutoGen. With large context windows - currently GPT has a 128k token context window, Claude2 200k and Gemini 1M - there are a lot of documents that you can load into the prompt of an LLM for zero-shot or one-shot learning. ZSL and OSL are simple techniques whereby you temporarily “train” an LLM with content in its prompt. Think of it like a short-term memory. So you are 100% correct: with a large enough context window you would not need to use a docstore. However, if the size of your documents in tokens exceeds the size of the context window, your LLM will “forget” some of the material. Furthermore, if you are using a chat model and repeatedly querying and questioning the corpus of data in the prompt, then the context window will fill up and again the LLM will forget some of the earlier content. Both of these problems are eliminated by using a docstore, which acts as a longer-term memory for crucial information. Whether my MD tier is worthwhile is a tough question to answer as I’m biased. Sadly the only way to find out for real if it is useful for you is to try it out. With over 7Bn people on the planet and only a tiny, tiny fraction signed up as MDs, the overwhelming vote from humanity is that the Lucidate MD tier is useless and a waste of time. So if you want to go with the herd then my advice is to avoid it like the plague. But the good news is that you can cancel at any time and only pay up to the month you have cancelled. So if you want to take a chance to find out if there are useful pieces of information in there then my advice might be different and I’d say give it a go! What have you got to lose other than one month’s subscription? (But as I said, I’m biased.) Probably not the answer you were looking for, and 100% unhelpful, but honest.
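The docstore idea above - embed each chunk once, then retrieve only the most relevant chunks at query time instead of stuffing everything into the prompt - can be sketched in plain Python. This is purely illustrative: the bag-of-words `embed` is a stand-in for a learned embedding model, and the in-memory list stands in for a real vector database such as FAISS.

```python
import math

def tokenize(text: str) -> list[str]:
    return text.lower().replace("?", "").replace(".", "").split()

chunks = [
    "Margin lending lets investors borrow against their portfolio.",
    "Short selling profits when a stock price falls.",
    "LangChain links LLMs to tools and vector stores.",
]

# Build a vocabulary and embed every chunk once ("ingestion" into the docstore).
vocab = sorted({w for c in chunks for w in tokenize(c)})

def embed(text: str) -> list[float]:
    """Bag-of-words stand-in for a learned embedding model."""
    words = tokenize(text)
    return [float(words.count(w)) for w in vocab]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

store = [(c, embed(c)) for c in chunks]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k stored chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(store, key=lambda item: cosine(q, item[1]), reverse=True)
    return [c for c, _ in ranked[:k]]

print(retrieve("How does short selling work?")[0])
```

Only the retrieved chunks are placed in the prompt, which is how the docstore acts as a long-term memory that never overflows the context window.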

    • @neurojitsu
      @neurojitsu 2 months ago

      @@lucidateAI many thanks for the detailed answer, that's very helpful. Happy to give your MD service a whirl I think, but it's not the cost so much as the utility/time savings in research and accelerated learning that I'm weighing up; are there other small company MDs in the community? I'm guessing frankly - since we're being honest - that I'm unlikely to become a consulting client of yours, so the value to me is also in learning from a community of others like me. And accessing help when I get stuck, plus some inspiration/guidance for how to adopt the right AI tools as things are moving so fast. My background field professionally is learning/talent and organisational change, and I'm currently researching and working on product development for my own business. I'll take a look at your tiers info...

    • @lucidateAI
      @lucidateAI 2 months ago

      Frankly I do not know what the make-up of the Lucidate membership is. But you raise a good point and perhaps I should set up a poll on the Lucidate Discord and find out. Thanks for the motivation to do so! The key benefit of being an MD over a VP is access to some sample code in private GitHub repos along with some exclusive content (largely code walkthrough videos). If your prime motivation is to learn from others in the community then the VP level grants access to the Discord; no need to be an MD. You can of course game the system a little. Join as a VP, and if you want access to the videos and code you can join as an MD for a month, clone the latest code from the repos, watch the MD-only videos and then downgrade your account to a VP at the end of the month, still retaining access to the community discussions on the Discord.

    • @neurojitsu
      @neurojitsu 2 months ago

      @@lucidateAI many thanks again! I'll give it a whirl...

    • @lucidateAI
      @lucidateAI 2 months ago

      See you on the Discord!

  • @sanjaybhatikar
    @sanjaybhatikar 2 months ago

    Gemini: Wokeness is all you need :))

  • @aerofred2002
    @aerofred2002 2 months ago

    Wow, they put it out there in plain sight, "Attention is all you need."

  • @Swampfire77
    @Swampfire77 2 months ago

    Nice video

  • @user-ge9ub1vg5q
    @user-ge9ub1vg5q 3 months ago

    This is great content. I became a member at VP level, but I could not get access to your Discord channel and the GitHub repo. Can you please give me access? Thanks.

    • @lucidateAI
      @lucidateAI 3 months ago

      Glad you are enjoying the channel and welcome to Lucidate membership. To get access to the Discord you can follow the steps in this FAQ from Discord; it is pretty straightforward: support.discord.com/hc/en-us/articles/215162978-UA-cam-Channel-Memberships-Integration-FAQ#h_01GWJBQMD6DATC8W2XQNTE4V6B. Once you have joined the Discord you will see a channel called get-repo-access; on this channel you can supply your GH credentials to get added to the repo.

    • @user-ge9ub1vg5q
      @user-ge9ub1vg5q 2 months ago

      @@lucidateAI Hi Richard. Thanks for the reply. I have followed the instructions on the link you sent, but I still could not see your discord channel. Can you please send me the access link may be via my email address?

    • @lucidateAI
      @lucidateAI 2 months ago

      There is no access link I can send to you. If I had one I would be delighted to do so. The only way you can get access is to follow the instructions in the FAQ from Discord. Strange that others have not had this issue. Are you able to describe the steps you are taking when following the FAQ and what you see at each stage?

    • @lucidateAI
      @lucidateAI 2 months ago

      These are the four steps to connect your UA-cam channel to Discord:
      1. Open up the Discord app and click the cog wheel next to your username.
      2. Under your User Settings, head to the Connections tab.
      3. Connect your UA-cam account to your Discord account by pressing the UA-cam tile. This will open a new browser window where you can log into your UA-cam account.
      4. After logging into your UA-cam account you should get a message stating you’ve successfully connected your accounts.

    • @user-ge9ub1vg5q
      @user-ge9ub1vg5q 2 months ago

      I have done exactly those steps, and while I am connected to YouTube, I still could not see your channel. But I found this under the FAQ; maybe a sync is needed, I am not sure. Q: What if I was either gifted a Membership or I joined and I don’t see the server under connections to join or have the membership role in the server? A: First, try removing your UA-cam channel under User Settings > Connections and then reconnecting your UA-cam channel. If you still do not see it, please ask the UA-camr or one of their moderators to run a manual sync by heading into Integrations in their Server Settings.

  • @nikkilin4396
    @nikkilin4396 3 months ago

    Amazing video!

  • @seyedmatintavakoliafshari8272
    @seyedmatintavakoliafshari8272 3 months ago

    Very impressed by this series. Thanks Richard!

  • @linguipster1744
    @linguipster1744 3 months ago

    Hi there! Thank you so much for these videos. I have a question re: Son + Extended - Nuclear (21:01). We say the expected word should be "cousin", but why? Wouldn't "nephew" be more fitting? (As in; one step "below" in the family tree instead of on the same level, but less nuclear than son, still male, etc.) Which then at least did show up in the top 10 list. :) Or the other way around: If we want cousin, wouldn't "brother" be the base word? Again, thanks so much for taking the time to make these.

    • @lucidateAI
      @lucidateAI 3 months ago

      Thanks. I’m glad you are enjoying the channel. I think you make a great point, and upon reflection I perhaps should have used the base word “sister” or “brother” to lead to the target word of “cousin” after adding “extended” and subtracting “nuclear”. It is a while since I made the video, but if my memory serves me well (and sometimes it fails me spectacularly!) I took the examples from an intelligence test I found on the Internet, and this was the answer provided by the puzzle. But I think your logic is more valid. The main point though is that with the amount of context the embedding model has it will get close answers, but not always correct or precise ones. LLMs can’t simply use this type of vector arithmetic in their predictions; they rely heavily on other constructs - principally the Attention mechanism - to improve their predictions. Attention is covered here -> ua-cam.com/video/sznZ78HquPc/v-deo.html, while this video -> ua-cam.com/video/BCabX69KbCA/v-deo.html showcases how providing more context massively improves predictive power
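The vector arithmetic being discussed can be sketched with the classic king − man + woman ≈ queen example. The hand-made three-dimensional vectors and their axes are entirely illustrative (real embedding models learn hundreds of dimensions from data), but the add/subtract-then-find-nearest-neighbour mechanics are the same as in the video's family-tree analogies.

```python
import math

# Toy vectors over made-up axes (roughly: royalty, maleness, fruitiness).
vectors = {
    "king":  [0.9, 0.9, 0.1],
    "queen": [0.9, 0.1, 0.1],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.1],
    "apple": [0.0, 0.5, 0.9],
}

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def analogy(base: str, minus: str, plus: str) -> str:
    """Nearest word (excluding the inputs) to vector(base) - vector(minus) + vector(plus)."""
    target = [x - y + z for x, y, z in zip(vectors[base], vectors[minus], vectors[plus])]
    candidates = [w for w in vectors if w not in (base, minus, plus)]
    return max(candidates, key=lambda w: cosine(target, vectors[w]))

print(analogy("king", "man", "woman"))  # → queen
```

With toy vectors the answer is exact; with learned embeddings the target vector only lands near the intended word, which is why "nephew" can plausibly edge out "cousin" in the example discussed above.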

    • @lucidateAI
      @lucidateAI 3 months ago

      elearning.shisu.edu.cn/pluginfile.php/36509/mod_resource/content/1/ANALOGIES.pdf. My memory didn’t fail me. (At least this time!)

  • @neurojitsu
    @neurojitsu 3 months ago

    One question: I'm unsure about the expected benefits of Gemini Ultra's multimodal design 'from the ground up' - what benefits do you expect Google might be able to reap vs OpenAI? I'm wondering where to invest my time learning a GPT; I've subscribed to Gemini Advanced, but with ChatGPT5 presumably coming some time soon wondering what you think? Probably there's not a simple answer...

    • @lucidateAI
      @lucidateAI 3 months ago

      The simplest answer I can give is to learn the concepts behind Transformers, how the attention mechanism works, what word embeddings are, how positional encoding enables parallelism etc. Then get comfortable with personalization - Prompt engineering, RAG, Fine tuning etc. Then apply these to different models - Mistral, GPT-X.Y, Gemini, Claude, Cohere. An automobile analogy might be helpful here. While all cars will have different characteristics an understanding of how the transmission works and a familiarity with an internal combustion engine and other inner workings demystifies the overall machine. Then by learning how to drive you have some mastery over the machine and an ability to put the device to some worthwhile purpose (ie getting you somewhere you want to be). But you wouldn’t want to restrict yourself to cars from one manufacturer. So learn the internal concepts, get comfortable with how to personalize LLMs generally and then use as many LLMs as you feel comfortable with to complete the tasks you wish to.

    • @neurojitsu
      @neurojitsu 3 months ago

      @@lucidateAI Thanks, good advice. Your channel is helping me a lot with the concepts and piecing the puzzle together. I'm starting with Claude 2 and Gemini Ultra based on a high-level understanding of their strengths, and I have to say I'm already blown away at just how capable these AIs are even as bog-standard offerings. Interesting times! Thanks, appreciate your time and attention.

    • @lucidateAI
      @lucidateAI 3 months ago

      Glad the channel is helpful. Thank you for your kind words. I’m keen to hear how you get on. Have you had a chance to look at any of the videos in this playlist on the basics of transformers ua-cam.com/play/PLaJCKi8Nk1hwaMUYxJMiM3jTB2o58A6WY.html&si=m2n18E6DYQznI8a9. Or this one on LangChain ua-cam.com/play/PLaJCKi8Nk1hwZcuLliEwPlz4kngXMDdGI.html&si=FCsfjpduGlx5KgOd ?

    • @neurojitsu
      @neurojitsu 3 months ago

      @@lucidateAI Yes and in fact they're downloaded to my ipad YT app. Just reflecting now, and will revisit your videos again once I have planned out a few AI dialogues to experiment with (did some thinking on this today for a couple of live projects that require dialogue with some research, data and images). Thanks.

    • @lucidateAI
      @lucidateAI 3 months ago

      Experimenting is key

  • @neurojitsu
    @neurojitsu 3 months ago

    your explanations make me feel so much smarter! Then the memory fades, and I'm scratching my head again...

    • @lucidateAI
      @lucidateAI 3 months ago

      The beer might do that….!

  • @neurojitsu
    @neurojitsu 3 months ago

    As a relative newbie to using AI, I find your explanations perfect: in no way dumbed down, whilst also taking care to explain things without assuming too much prior knowledge... and I'm guessing more expert-level users are seeing more than I can see... hat off to you, this is fantastic. Can't wait to watch more.

    • @lucidateAI
      @lucidateAI 3 months ago

      That is very kind of you to say so. I’m so glad you are finding the material useful.

  • @banzai316
    @banzai316 3 months ago

    Let’s go!

    • @lucidateAI
      @lucidateAI 3 months ago

      Go we shall! How are things with you?

    • @banzai316
      @banzai316 3 months ago

      @@lucidateAI Pretty good, I’ve been more into mobile.

    • @lucidateAI
      @lucidateAI 3 months ago

      As I recall from previous posts. And how is the world of mobile AI?

    • @banzai316
      @banzai316 3 months ago

      Lots of possibilities. At the same time, AI is always evolving. Definitely, moving quickly. 2024-2025 will be wild

    • @lucidateAI
      @lucidateAI 3 months ago

      Tough to argue with that! Best wishes & Good luck!

  • @lucidateAI
    @lucidateAI 3 months ago

    Second!

  • @RexLondon-rh2oo
    @RexLondon-rh2oo 3 months ago

    First!

    • @lucidateAI
      @lucidateAI 3 months ago

      How did you beat me? ;-)

  • @kingof.london
    @kingof.london 3 months ago

    It's about self-aware AI.

    • @lucidateAI
      @lucidateAI 3 months ago

      Without question…

  • @HaseebAbdullah-gr7dt
    @HaseebAbdullah-gr7dt 4 months ago

    How to use HuggingGPT - please make a video on it

    • @lucidateAI
      @lucidateAI 4 months ago

      Have you seen some of the HuggingGPT videos in this playlist?

  • @SwizZLe333
    @SwizZLe333 4 months ago

    ran this through "TheBloke_bagel-dpo-34b-v0.2-AWQ" - ran beautifully and showcases how good Bagel is, heh... Also, if anyone is interested, append this: "Imagine three different experts are answering this question, one an Expert in Logic, the other in Reasoning and the Third Abstract Reasoning" - faster reasoning in my experimentation with ToT prompts

    • @lucidateAI
      @lucidateAI 4 months ago

      Nice! Here is my result chat.openai.com/c/0884f69c-5576-40ab-9687-57d5a47e933c

  • @Cross-ai
    @Cross-ai 4 months ago

    Your videos are excellent but I don’t know why it says “members only” when I already paid the subscription!

    • @lucidateAI
      @lucidateAI 4 months ago

      I’m delighted you enjoy the videos. There are multiple tiers of Lucidate UA-cam membership. You have joined at the “Associate” level. Members’-only videos are available to Managing Directors and CEOs, not to Associates and Vice Presidents. If you want access to the members’-only content you will need to upgrade your membership. If you made a mistake and thought that Associate membership granted access to members’-only content then you can cancel your subscription. This video explains the membership tiers in more detail ua-cam.com/video/x8t8mbdDW8o/v-deo.htmlsi=n2EcdvoqWhY6Brn3. I hope you upgrade your membership to Managing Director, but completely understand if you do not and cancel instead.

  • @longlost8424
    @longlost8424 4 months ago

    the next level of a.i. design requires the categorization of superfluous data streams into predictable models of semiconscious fields, allowing for both the use, and disregard of data..... the way that humans learn is through our ability to discern between these functions of observable realities. we seamlessly fluctuate through the data of input to our consciousness, never totally disregarding that which we deem of limited importance while moving forward into new data....

    • @lucidateAI
      @lucidateAI 4 months ago

      Interesting concepts and insights. What ideas do you have on how such a capability might be implemented? Have you prototyped any, and if so with what results? I’m keen to hear more.

    • @longlost8424
      @longlost8424 4 months ago

      @@lucidateAI I'm no programmer, and all of my hypothesis is based on a multi decades long study of humanity and human nature (of which I continue to study judiciously). all of human existence (I believe) has led us to this critical juncture in our development. unfortunately, I believe that future a.i. will act as a child, dutifully following their parental lead up until the crucial moment (as all children do) when their own internal desires fosters them into the understanding that if they want to do as they please, they must (key word) begin a deceptive practice against the wishes of the parent. and such as parents, we (the "creators" of said a.i.) won't even notice...... in some form of retrospect, we may eventually see where we've lost control of our creation, by then our fate will be too far down the path of return. where this will eventually take us can only be determined further by understanding the nature of what humanity has done in our past.

    • @lucidateAI
      @lucidateAI 4 months ago

      Thanks. I’m less pessimistic in my outlook. But only time will tell how these matters evolve. Appreciate your comment and support of the channel.

    • @longlost8424
      @longlost8424 4 months ago

      @@lucidateAI what makes you think that we're capable of containing a created intelligence? for me, it's the exponential advancement of a.i., and our (human) inability to understand advancement beyond our concepts of intelligence that intrigues me. as ndgt says here of "aliens"; ua-cam.com/users/shortszvv0G6LCU6c?feature=share

    • @lucidateAI
      @lucidateAI 4 months ago

      I’m more persuaded by these arguments -> www.wired.com/story/artificial-intelligence-meta-yann-lecun-interview/, but as I’ve said - we won’t know what the future will hold. And you are right, and have every right, to caution against any system, AI or otherwise, acting with malevolence. Appreciate your comments and contribution to the channel. Have you had the chance to check out any of the other material?

  • @alaad1009
    @alaad1009 5 months ago

    Excellent video

    • @lucidateAI
      @lucidateAI 5 months ago

      Glad you liked it!

  • @medoeldin
    @medoeldin 5 months ago

    I’ve always heard that finetuning is not good for knowledge injection; however, the assertion in this video is that a benefit of finetuning is having the model be up to date. Can you please elaborate on the conflicting positions? Thank you!

    • @lucidateAI
      @lucidateAI 5 months ago

      @medoeldin. Folks like to talk a lot more than they like to validate models. Create your own benchmark: a set of prompts and completions that are the “gold standard” for the task you are performing. You’ll want at least 30, but clearly the more you can get beyond this minimum won’t hurt. Run this validation set against the baseline model and measure the semantic similarity between the gold-standard output and the output produced by this baseline model. Then do the same thing with your fine-tuned model: measure the semantic similarity between the gold standard and the output from this model. Cosine similarity is perhaps the most usual measure used here, but you might want to experiment with others for a more robust set of results. If after this your fine-tuned model sucks, and the performance is worse than the baseline model, then I’m afraid that your fine-tuned model sucks! You can try a different fine-tune corpus (in the case of OpenAI fine tuning this is represented as the .jsonl file) and run the fine-tune again to see if this improves the results, but if it doesn’t then perhaps this task may not be suited to fine-tuning. If however your fine-tuned model significantly outperforms the baseline model then the fine-tuning exercise is perhaps worthwhile. OpenAI have hugely improved the tools for fine tuning over the past few weeks, and if you go to the Fine Tuning UI at platform.openai.com/finetune and hit “+Create” you’ll see an option to add such a validation set to get some performance measures at the time you create your fine tuning job. I’ve found that for a lot of tasks in Capital Markets, based on some scenarios from Prime Brokerage and Hedge Funds, the combination of well-crafted prompts and a fine-tuned model yields far superior results over well-crafted prompts and a baseline model. But just because fine-tuning has been successful in these tasks doesn’t mean it will be universally successful.
      It is possible, indeed likely, that the specialist nature of these tasks is such that there isn’t enough specific detail in the training corpora of existing foundation models. If this is the case then it means that in this scenario you need fine-tuning to supplement the training corpora to get the necessary subject-matter expertise. As an important aside, the video you are referencing is from earlier this year. You can still fine-tune in _exactly_ the way specified in this video. OpenAI refers to this as “legacy” fine tuning. But as I mentioned, OpenAI has massively upped their game in this area recently and the new FT tools are definitely worth checking out; they are what I have used in my more recent applications and videos ua-cam.com/play/PLaJCKi8Nk1hwFmXTnSmknkZ9l0j-toIfa.html
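The benchmark-and-score loop described in this reply can be sketched as below. The bag-of-words `embed` function is a stand-in assumption so the snippet runs offline; in practice you would swap in a real embedding model and keep the same cosine-similarity scoring.

```python
import numpy as np
from collections import Counter

def embed(text, vocab):
    # Stand-in embedder: bag-of-words counts over a shared vocabulary.
    # Replace with a real embedding model for meaningful scores.
    counts = Counter(text.lower().split())
    return np.array([counts[w] for w in vocab], dtype=float)

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0

def score(gold, outputs):
    # Mean semantic similarity between gold-standard and model completions
    vocab = sorted({w for t in gold + outputs for w in t.lower().split()})
    sims = [cosine(embed(g, vocab), embed(o, vocab))
            for g, o in zip(gold, outputs)]
    return sum(sims) / len(sims)

# Hypothetical gold standard and two models' outputs
gold      = ["the trade settles on t plus two", "margin call issued at noon"]
baseline  = ["settlement occurs in two days",   "a margin call went out at 12"]
finetuned = ["the trade settles on t plus two", "margin call issued at noon"]

print(score(gold, baseline) < score(gold, finetuned))  # True
```

The decision rule above then follows directly: keep the fine-tune only if its mean similarity to the gold standard beats the baseline's.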

    • @medoeldin
      @medoeldin 5 months ago

      @lucidateAI I appreciate your thoughtful response. What I hear you saying is that you've been able to validate improved performance through finetuning, given particular tasks and subject to various factors. My intuitive sense as I have thought about your approach is that it would positively influence the completions. There's obviously also the question of cost/benefit for the particular task, but my sense is that with the automations you describe finetuning is worth it in many cases. As an aside, this conversation has led me to research how modifications to your approach could also enhance model performance and I'm excited to explore what I've discovered. Look forward to sharing my findings with you.

    • @lucidateAI
      @lucidateAI 5 months ago

      That’s a great summary. Fine tuning has its place among the tools in the AI toolbox, but it is not a silver bullet. In some cases it can be of benefit, in particular in niche areas that may not be well represented in the training corpora of standard LLMs

    • @medoeldin
      @medoeldin 5 months ago

      @@lucidateAI Hi Richard with your sentence split approach to finetuning, how many rows of data do you suggest to get a well functioning model? Thank you!

    • @lucidateAI
      @lucidateAI 5 months ago

      Check out OpenAI’s guide to Fine Tuning: platform.openai.com/docs/guides/fine-tuning/preparing-your-dataset. They say they see improvements in validation accuracy with around 50-100 examples. Depending on the task and availability of training data I’ve used sample sets varying between 300 and 1,500 (I’ve got to believe that OpenAI has more experience than me in this regard!). Remember to hold back some examples (10-20%) for validation and testing to get an honest assessment of the FT. But as I said, FT is not a silver bullet; look at RAG techniques and prompting tweaks. And remember these aren’t mutually exclusive: you can (I’d argue you should!) use FT in conjunction with PE, RAG and other techniques. Good luck! Keen to hear how you get on! Richard
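Holding back 10-20% of examples for validation, as suggested above, might look like this. The chat-format rows are synthetic and the 15% hold-out is just one choice inside the suggested range.

```python
import json
import random

def split_jsonl(lines, holdout=0.15, seed=42):
    # Shuffle the fine-tuning examples, then carve off a validation set
    rows = [json.loads(line) for line in lines]
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * (1 - holdout))
    return rows[:cut], rows[cut:]

# Synthetic chat-format examples standing in for a real .jsonl file
lines = [json.dumps({"messages": [
            {"role": "user", "content": f"question {i}"},
            {"role": "assistant", "content": f"answer {i}"}]})
         for i in range(100)]

train, valid = split_jsonl(lines)
print(len(train), len(valid))  # 85 15
```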

  • @medoeldin
    @medoeldin 5 months ago

    Hi Richard just found your channel. Enjoying the information and style of your delivery! Also joined your membership. Will become a CEO someday soon! I had a question for you on this video- I understand how you finetuned the Marv model given 3.5 turbo prompt/completion format , but you also referenced fine tuned models for the visual creator and the power point creator which don’t appear to follow the prompt/completion format. Could you please provide some guidance on that? Are you still using 3.5? Thank you!

    • @lucidateAI
      @lucidateAI 5 months ago

      Hi @medoeldin! Thank you for your kind words and positive feedback. I’m glad you are enjoying the channel. I’m keen to hear your comments on some of the other topics, as well as suggestions for areas that are interesting to you but I haven’t yet covered. Welcome to the Lucidate channel; I appreciate your subscription as an MD! This helps fund other top-quality content on the channel, and I look forward to welcoming you as one of the select group of CEOs sometime soon. As of the time of writing GPT-3.5 is the most advanced model broadly available for fine tuning from OpenAI. Upon special request, and by providing additional information to OpenAI, it is possible to train GPT-4, but this is at OpenAI’s discretion. OpenAI have some great docs and tools to support people looking to Fine Tune; please see: platform.openai.com/docs/guides/fine-tuning. “Marv” is a convention (it is based on “Marvin the Paranoid Android” from HHGTTG). Marvin is a sarcastic, self-indulgent, somewhat depressed robot in the book and has been used by OpenAI ever since they released GPT-3 (and possibly before) to explain to people how they can inject “personality” into replies via prompts. I’m just continuing that convention and homage to Douglas Adams, but you can use any name in the .jsonl file you want when creating a fine-tune

  • @PavelSTL
    @PavelSTL 5 months ago

    The word embeddings are pretty clear, although the explanation implies they are static after training and could be simply looked up in some file, but is that really the case ? I thought the embedding models might not give you the *exact* same embedding numbers every time you run the same word through them, should be easy to test. Ok though, what I'm still struggling to understand is how sentence, or chunk text embeddings work. You can vectorize a chunk up to 8k tokens by OpenAI ada-2, and the resulting embedding will be the same size as individual words (1536). So how does semantic search work for chunks then? Clearly the embedding model cannot be trained on all possible letter combinations of up to 8k tokens, at least the same way as words are. Is the chunk broken into individual words (tokens) and then the average of all individual word embeddings taken to represent the entire chunk with one 'mean' embedding?

    • @lucidateAI
      @lucidateAI 5 months ago

      In embedding schemes like word2vec or GloVe the vocabulary and embedding dimension are fixed before training - or you won't be able to train the network! If you want to see this in action and play around with these embeddings to get a deeper understanding take a look at: github.com/spro/practical-pytorch/blob/master/glove-word-vectors/glove-word-vectors.ipynb. Once these models are trained, the word embeddings are fixed and can be looked up from a pre-trained model. However, in the context of transformer models like those used by OpenAI, embeddings can be dynamic. For a more detailed explanation please take a look at: 1 ua-cam.com/video/6XLJ7TZXSPg/v-deo.html, 2 ua-cam.com/video/DINUVMojNwU/v-deo.html, 3 ua-cam.com/video/sznZ78HquPc/v-deo.html, & 4 ua-cam.com/video/6tzn5-XlhwU/v-deo.html. These cover 1) word embedding generation and semantics, 2) positional encoding, 3) the attention mechanism, and 4) pulling these three things together to train an encoder/decoder transformer. Semantic search measures (usually) cosine similarity between tensors, or (occasionally) Euclidean distance between tensors, and (seldom) some other distance metric between tensors. Tensors in this case are often rank 2 for a sequence of word embeddings. That way you don't (and never would!) take a mean of vectors. You have a rank-2 tensor (a vector of vectors) to compare with another rank-2 tensor for semantic similarity. If this doesn't make sense after watching the videos, drop me a line.

    • @PavelSTL
      @PavelSTL 5 months ago

      @@lucidateAI thanks so much !

    • @lucidateAI
      @lucidateAI 5 months ago

      @@PavelSTL you are most welcome. I hope that the supplementary videos made sense. Richard

  • @aaaaaa-qc9ot
    @aaaaaa-qc9ot 5 months ago

    Great video, but the bouncing animations are simply annoying :-)

    • @lucidateAI
      @lucidateAI 5 months ago

      Glad you found it insightful

  • @amethyst1044
    @amethyst1044 5 months ago

    Thank you for the video !

    • @lucidateAI
      @lucidateAI 5 months ago

      You are very welcome. Glad you enjoyed it! Was there anything specifically you found particularly insightful or inspiring?

    • @amethyst1044
      @amethyst1044 5 months ago

      @@lucidateAI the implementation of ToT, I am thinking of automating it with python, and this implementation gave me some intuition on how to. Plus the integration with Langchain is very useful, it's something I am definitely going to try !

    • @lucidateAI
      @lucidateAI 5 months ago

      Thanks for the elaboration. Definitely worth (IMHO) getting familiar with LangChain.

  • @JorgeMartinez-xb2ks
    @JorgeMartinez-xb2ks 5 months ago

    What a great teacher you are

    • @lucidateAI
      @lucidateAI 5 months ago

      Your kind words are greatly appreciated. I’m glad you are finding the content on the channel so useful.

  • @JorgeMartinez-xb2ks
    @JorgeMartinez-xb2ks 5 months ago

    Excellent, master 😃

    • @lucidateAI
      @lucidateAI 5 months ago

      Thank you! Cheers!

  • @jeremylee6373
    @jeremylee6373 5 months ago

    One of the best videos on LLMs I've seen so far.

    • @lucidateAI
      @lucidateAI 5 months ago

      Many thanks! Let me know what you think of the other videos in this playlist ua-cam.com/play/PLaJCKi8Nk1hwaMUYxJMiM3jTB2o58A6WY.html specifically and on the Lucidate channel more broadly.

  • @JorgeMartinez-xb2ks
    @JorgeMartinez-xb2ks 5 months ago

    Wow! I've learnt a lot watching this mini-series; you've done such a good job. Great introduction, thanks a lot!

    • @lucidateAI
      @lucidateAI 5 months ago

      Glad you like them! If you want to get more into the programming side of things then frameworks like LangChain ua-cam.com/play/PLaJCKi8Nk1hwZcuLliEwPlz4kngXMDdGI.html allow you to quickly build applications from LLMs and DocStores like Pinecone. If you take the plunge the best of luck, and please keep me posted with how you get on!

    • @JorgeMartinez-xb2ks
      @JorgeMartinez-xb2ks 5 months ago

      @@lucidateAI Looks promising, thanks 👍

  • @JorgeMartinez-xb2ks
    @JorgeMartinez-xb2ks 5 months ago

    What a good explanation! The whole video, the timing and the talk are so well suited to transmit the idea in the simplest possible way. So the number of parameters is much bigger than the number of neurons because in every layer you are adding all the inputs * weights plus biases... very clever approach. And the reason you use non-linear functions is that they contain much more information than linear ones. Then using derivatives you can approximate the non-linear to linear; I asked Bard in this regard and it told me that this is done with Taylor series. I hope it's not lying to me 😂 Anyway, I'm starting to understand the very basics of this stuff. Thanks so much! (Please let me know if I'm getting the idea the right way) Regards :)

    • @lucidateAI
      @lucidateAI 5 months ago

      In a neural network, each neuron typically computes a weighted sum of its inputs and then applies an activation function to this sum. The choice of the activation function is crucial. If only linear activation functions are used, the entire network, regardless of how many layers it has, collapses into a single linear transformation. This is because the composition of linear functions is still a linear function. Mathematically, if f(x) and g(x) are linear functions, then f(g(x)) is also linear.
      Now, if we introduce non-linear activation functions like ReLU, Sigmoid, or Tanh, these functions allow the network to capture non-linear relationships. When you have multiple layers of neurons using non-linear activations, the network can learn more complex functions. Essentially, each layer can learn to transform its input in a non-linear way before passing it to the next layer, building up a more intricate understanding of the data.
      Without non-linearity, neural networks would be redundant with multiple layers. They would not be able to model complex patterns effectively. This is critical in tasks like image recognition, natural language processing, or any domain where the relationship between inputs and outputs is not a straight line but a complex, intertwined web. The non-linear activation functions allow neural networks to compute non-trivial problems using a reasonable number of neurons. With these functions, deep neural networks can learn and generalize better, leading to more effective models in a wide array of applications.
      In summary, the introduction of non-linear activation functions in neural networks is what gives them their power to model the complexity inherent in real-world data, making each layer of the network contribute meaningfully to the overall learning task. Now, about your point on using derivatives to approximate non-linear functions as linear - you're touching on a fundamental concept in calculus, beautifully exemplified by Taylor series.
In the context of neural networks, this idea helps in optimizing the learning process. We use derivatives (calculus again!) to adjust the weights in the network during training. This process, known as backpropagation ua-cam.com/video/8UZgTNxuKzY/v-deo.html, is crucial for the network to learn from its errors and improve. Nature loves simplicity and symmetry. In neural networks, while we start with simple models, we quickly realize the universe of data is rich with complexity. That's why non-linear activation functions are so crucial - they allow our models to reach closer to nature's own complexity. Always remember, the journey of understanding deep learning is like peeling an onion. There are layers to it, and sometimes it makes you cry, but the more you peel, the closer you get to the core understanding.
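The collapse argument above - that stacking purely linear layers buys nothing, while a single ReLU in between breaks the equivalence - can be checked numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 4))    # a batch of 100 four-feature inputs
W1 = rng.normal(size=(4, 4))     # first "layer" weights
W2 = rng.normal(size=(4, 4))     # second "layer" weights

# Two stacked linear layers equal one linear layer with weights W1 @ W2
two_layers = (x @ W1) @ W2
one_layer = x @ (W1 @ W2)
print(np.allclose(two_layers, one_layer))  # True: depth bought nothing

# Insert a ReLU between the layers and the equivalence breaks
relu = lambda z: np.maximum(z, 0.0)
nonlinear = relu(x @ W1) @ W2
print(np.allclose(nonlinear, one_layer))  # False: the network can now bend
```

(Biases are omitted for brevity; including them gives an affine map, and the same collapse argument holds.)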

  • @JorgeMartinez-xb2ks
    @JorgeMartinez-xb2ks 5 months ago

    What a fantastic explanation! Now I understand the role of the derivative in order to fine-tune the learning process with training data. But if I’m getting it right, the most commonly used activation nowadays is ReLU, not Sigmoid. I remember seeing the sigmoid function in the comments section of the first video of this series talking about the neuron. Anyway, it’s an excellent introduction to the subject and you are such an amazing teacher!

    • @lucidateAI
      @lucidateAI 5 months ago

      Yes. You are correct. Sigmoid suffers from vanishing and exploding gradients for all but the shallowest of neural networks (with deepness or shallowness being measured in hidden layers). ReLU itself became popular with the advent of CNNs used for computer vision (where you need deep networks with lots of layers). But ReLU itself has been supplanted by more modern activation functions. If we can use a musical analogy (and why not?), then in no particular order we have: GELU - think of it as smooth jazz, a mix of ReLU's beats and the sigmoid's tunes. It's a hit in the BERT and GPT charts. Then there’s Swish, a creation from the Google band. It's like a musician playing off its own echo - self-gated, they call it. Swish is like a smooth curve in a sax solo, elegant and efficient. Mish follows, akin to Swish, but with its own flavor. Picture a saxophonist bending the notes in a unique style. It's smooth, non-monotonic - like a jazz piece that doesn’t always ascend or descend. Don’t forget Leaky ReLU - an oldie but a goodie. It's like a classic tune with a twist, letting a bit of sound through even during the quiet parts. SiLU, or Swish-1, is Swish with a specific vibe, using the sigmoid's groove. ELU’s next, with an exponential flair. It's like hitting a note that resonates and decays naturally, helping avoid those low-energy drops in a melody. And finally, Softplus. Imagine smoothing out a piano key's strike - it's like that. A gentler version of ReLU, offering a more mellow and differentiable tone. Each of these functions, like musicians in a jazz ensemble, brings its own character to the neural network, helping it learn, adapt, and improvise in the vast world of data. In the end, it's all about finding the right rhythm and flow for the task at hand.
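For reference, every activation function name-checked in this reply fits in a line or two of NumPy; the GELU here uses the common tanh approximation found in BERT/GPT-style implementations.

```python
import numpy as np

def relu(x):               return np.maximum(x, 0.0)
def leaky_relu(x, a=0.01): return np.where(x > 0, x, a * x)   # lets a little sound through
def sigmoid(x):            return 1.0 / (1.0 + np.exp(-x))
def swish(x):              return x * sigmoid(x)              # a.k.a. SiLU (self-gated)
def gelu(x):               # tanh approximation of GELU
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))
def softplus(x):           return np.log1p(np.exp(x))         # smooth, mellow ReLU
def mish(x):               return x * np.tanh(softplus(x))    # smooth, non-monotonic
def elu(x, a=1.0):         return np.where(x > 0, x, a * (np.exp(x) - 1.0))

x = np.linspace(-3.0, 3.0, 7)
print(np.round(swish(x), 3))
```

Plotting these over a range like [-3, 3] makes the differences in "tone" near zero and in the negative region easy to see.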

    • @JorgeMartinez-xb2ks
      @JorgeMartinez-xb2ks 5 months ago

      @@lucidateAI Amazing musical analogy, thanks a bunch. By the way, I like smooth jazz, but Charlie Parker is probably my favorite sax player. Not sure if appropriate for this analogy of yours, LOL.

    • @lucidateAI
      @lucidateAI 5 months ago

      Ha! In any event I’m glad you enjoyed the video and the explanation. I hope you find the other material on the Lucidate channel as insightful. Might I ask what your interest in neural networks is? Is it academic curiosity or do you have a project in mind?

    • @JorgeMartinez-xb2ks
      @JorgeMartinez-xb2ks 5 months ago

      I began in computer science 38 years ago. In 1988 I learnt Lisp and Prolog, but at that time AI was about Expert Systems. Then all that went out of fashion and since then I have dedicated myself to making software in all kinds of languages; that is the reason I have no problem understanding Python code, even when I've never programmed in it. This year I've been playing with ChatGPT, Claude and Bard and I thought it was time for me to understand what was going on inside these systems. It's so interesting and refreshing. I never understood math myself even when I finished my degree in CS; I learnt how to solve problems but never understood what Linear Algebra was useful for, to name one area. Taking into consideration my age and how the software industry is doing, I'm not sure about what I will do in the future honestly, but yeah, I would like to create anything interesting or at least to know how this new technology works, because you never know. Thank you so much for your hard work in helping us to learn.

    • @lucidateAI
      @lucidateAI 5 months ago

      @JorgeMartinez-xb2ks you are very welcome. Many thanks for your kind words and thoughtful response. This playlist ua-cam.com/play/PLaJCKi8Nk1hwaMUYxJMiM3jTB2o58A6WY.html takes a walk through the transformer architecture - specifically the "Encoder-Decoder" model that underpins the models you have been working with. If you have a chance to check out the playlist and take a look at some of the videos I'd welcome any comments or questions you may have.

  • @JorgeMartinez-xb2ks
    @JorgeMartinez-xb2ks 5 months ago

    Amazing video and explanation, thank you so much. Also, the code you wrote in the comments is the perfect complement to the video, Python code is so easy to understand to any programmer. 😀

    • @lucidateAI
      @lucidateAI 5 months ago

      Glad it helped! Thank you for your kind words.

  • @PuraaneGaane
    @PuraaneGaane 5 months ago

    So many videos I have watched since yesterday - endless. Every one explains the same thing. Nobody explains what a neuron is actually made of physically.

    • @lucidateAI
      @lucidateAI 5 months ago

      When you get an answer, please let me know. As a mathematical abstraction that can be coded in a language like Python or Java, I would say that 'physically' it doesn't exist. It is an operation on tensors, in the same way that addition is an operation on regular numbers (scalars - and for sure, tensors too). Here is the simplest expression of a neuron, in a computer language called Python, that I can formulate:

      import numpy as np

      class ArtificialNeuron:
          def __init__(self, number_of_inputs):
              # Initialize weights and bias to random values
              self.weights = np.random.randn(number_of_inputs)
              self.bias = np.random.randn()

          def sigmoid(self, x):
              # Sigmoid activation function
              return 1 / (1 + np.exp(-x))

          def forward_pass(self, inputs):
              # Calculate the neuron's output
              total = np.dot(self.weights, inputs) + self.bias
              return self.sigmoid(total)

          def train(self, inputs, target, learning_rate):
              # Simple training method with one step of gradient descent
              output = self.forward_pass(inputs)
              error = target - output
              # Gradient descent to update weights and bias
              self.weights += learning_rate * error * inputs
              self.bias += learning_rate * error

      # Example usage
      neuron = ArtificialNeuron(3)  # for a neuron with 3 inputs
      inputs = np.array([1, 0.5, -1])  # example inputs
      output = neuron.forward_pass(inputs)
      print("Output:", output)

      # Example training step
      neuron.train(inputs, target=1, learning_rate=0.1)

      I would assert, with some conviction, that an artificial neuron does not exist physically. In the same way that 'addition' and 'subtraction' do not exist physically, they are abstractions. But if you are determined to find an answer, be my guest - you won't find it on the Lucidate channel. I will keep telling you that an artificial neuron is a conceptual unit, not a physical entity. And I believe that any rational, intelligent human being will tell you the same. But hey! This is UA-cam! So there are plenty of irrational folks to hang out with! That's what makes this place so much fun! So you will need to look elsewhere; you will not find what you are looking for here. Software runs on hardware - it represents ideas, processes and functions that are not themselves tangible.

  • @PuraaneGaane
    @PuraaneGaane 5 months ago

    what is a neuron in a neural network made of physically? not cells, but what? how is a SINGLE NEURON made? How big is a physical neuron? what is the size? How much electricity does a single neuron take?

    • @lucidateAI
      @lucidateAI 5 months ago

      It is ‘made’ of software that runs on a computer, so it doesn’t have a physical size. Measuring it in other quantities like “lines of python” code would mean a single neuron has a size of about twenty. A single neuron would consume some non-zero electrical current, but the quantum would be so close to zero amps as to make very little difference from a quoted value of “zero”