ChatGPT & GPT-3: Foundation and Fine Tuning: NLP 6

  • Published 13 Feb 2023
  • Customize and train GPT-3 and other transformer neural networks such as BARD & BERT.
    Welcome to Lucidate's video on Foundation Models and Fine-Tuning in AI! In this tutorial, we'll explore the basics of foundation models and how they form the basis for many advanced AI applications. We'll also dive into the concept of fine-tuning, a process that enables users to tailor pre-trained AI models to meet their specific needs.
    In Part 1, we'll discuss foundation models and their importance in providing a solid base for building new AI applications. We'll also explore the challenges associated with these models, including their lack of subject matter expertise in specific disciplines.
    In Part 2, we'll explain how fine-tuning allows users to modify pre-trained AI models to better perform tasks related to a specific dataset. We'll also discuss the use of prompts and completions in the fine-tuning process, and why fine-tuning can lead to greater customization and improved performance.
    Finally, in Part 3, we'll illustrate some applications of fine-tuned AI models in finance and capital markets. We'll explore how these models can be used to analyze financial news articles and other primary sources to determine secular changes in market sentiment, and how they can automate the process of financial reporting.
    Overall, this video will provide you with a comprehensive overview of foundation models and fine-tuning in AI, and explain why they are revolutionizing the AI industry. If you're interested in learning more, be sure to subscribe to our channel for future updates!
    =========================================================================
    Link to introductory series on Neural networks:
    Lucidate website: www.lucidate.co.uk/blog/categ...
    UA-cam: ua-cam.com/users/playlist?list...
    Link to intro video on 'Backpropagation':
    Lucidate website: www.lucidate.co.uk/post/intro...
    UA-cam: • How neural networks le...
    'Attention is all you need' paper - arxiv.org/pdf/1706.03762.pdf
    =========================================================================
    Transformers are a type of artificial intelligence (AI) used for natural language processing (NLP) tasks, such as translation and summarisation. They were introduced in 2017 by Google researchers, who sought to address the limitations of recurrent neural networks (RNNs), which had traditionally been used for NLP tasks. RNNs had difficulty parallelizing, and tended to suffer from the vanishing/exploding gradient problem, making it difficult to train them with long input sequences.
    Transformers address these limitations by using self-attention, a mechanism which allows the model to selectively choose which parts of the input to pay attention to. This makes the model much easier to parallelize and eliminates the vanishing/exploding gradient problem.
    Self-attention works by weighting the importance of different parts of the input, allowing the AI to focus on the most relevant information and better handle input sequences of varying lengths. This is accomplished through three matrices: Query (Q), Key (K) and Value (V). The Query can be interpreted as the word for which attention is being calculated, while the Key can be interpreted as the word to which attention is paid. The dot product of the Query and Key vectors, scaled and passed through a softmax, gives the attention scores, which are then used to form a weighted sum of the Value vectors.
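    For readers who want to see the mechanics, here is a minimal NumPy sketch of scaled dot-product attention (a simplification of the multi-head mechanism in the paper linked above, for illustration only):

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Q, K, V: (sequence_length, d_k) matrices of query, key and value vectors.
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                            # how strongly each query attends to each key
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights = weights / weights.sum(axis=-1, keepdims=True)    # softmax over the keys
        return weights @ V                                         # weighted sum of the value vectors

    # Toy example: 4 tokens, 8-dimensional query/key/value vectors
    rng = np.random.default_rng(0)
    Q = rng.normal(size=(4, 8)); K = rng.normal(size=(4, 8)); V = rng.normal(size=(4, 8))
    print(scaled_dot_product_attention(Q, K, V).shape)             # (4, 8)

    In a real transformer, Q, K and V are produced by multiplying the token embeddings with learned projection matrices, and several attention 'heads' run in parallel.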
    =========================================================================
    #ai #artificialintelligence #deeplearning #chatgpt #gpt3 #neuralnetworks #attention #attentionisallyouneed

COMMENTS • 67

  • @labsanta
    @labsanta 1 year ago +2

    Foundation Models and Fine-Tuning: The Key to Efficient AI Application Development
    What are Foundation Models and Why are They Important?
    Foundation models are pre-trained AI models that have been trained on massive amounts of data to generate high-quality natural language representations. These models are designed to perform basic NLP tasks such as sentiment analysis, text classification, and machine translation. The importance of Foundation Models lies in their ability to provide a solid base for building new AI applications. By using pre-trained models, developers can avoid the time and computational resources required to train models from scratch and instead focus on fine-tuning the model to their specific use case.
    Fine-Tuning: Customizing Pre-Trained Models for Optimal Performance
    Fine-tuning is the process of modifying existing pre-trained AI models to meet specific needs. In this process, the user provides the model with a smaller data set that is specific to their use case, and the model is then fine-tuned to better perform tasks related to that data set. The benefits of fine-tuning a model like GPT-3 are that it allows for greater customization and improved performance. By fine-tuning a pre-trained model to a specific use case, the model can be adapted to better perform tasks that are related to that use case, leading to more accurate and relevant results.
    The Implications of Not Fine-Tuning Models in Capital Markets
    The implications of not fine-tuning models can be significant, particularly in industries such as Capital Markets, where accurate, up-to-date information is vital. Fine-tuning can provide subject matter expertise that general language models lack and can help overcome the problem of being out of date. By fine-tuning a model to the specific needs of a capital markets application, such as predicting market trends or analyzing financial news, the accuracy of the results can be greatly improved.
    The Process of Fine-Tuning Pre-Trained Models
    Fine-tuning a pre-trained model is a straightforward process that requires users to create a spreadsheet with two columns, one for prompts (the input to the encoder) and the other for completions (the input to the decoder). The completed spreadsheet is submitted to OpenAI, which does some translation of the spreadsheet into a JSON format and then runs the training algorithm to fine-tune the model. The documentation on the OpenAI site is comprehensive, and fine-tuning can be done in a couple of hours, even for large-scale customization.
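    To make the prompt/completion format concrete, here is a minimal sketch (file and column names are hypothetical) of converting such a two-column spreadsheet, exported as CSV, into the JSON Lines format used by OpenAI's legacy fine-tuning, with one prompt/completion pair per line:

    import csv, json

    # Hypothetical file names; the spreadsheet is assumed to have "prompt" and "completion" columns.
    with open("training_data.csv", newline="") as src, open("training_data.jsonl", "w") as dst:
        for row in csv.DictReader(src):
            record = {"prompt": row["prompt"], "completion": row["completion"]}
            dst.write(json.dumps(record) + "\n")    # one JSON object per line (JSONL)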
    Foundation Models and Fine-Tuning are critical components of efficient AI application development. By leveraging pre-trained models, developers can save time and resources while still achieving high-quality natural language representations. Fine-tuning enables users to customize pre-trained models to their specific needs, resulting in better performance and more accurate results. The process of fine-tuning pre-trained models is straightforward and can be done relatively quickly, making it an accessible tool for developers of all levels.
    Thanks for great videos!

    • @lucidateAI
      @lucidateAI 1 year ago +1

      You are very welcome. Thanks for engaging with the channel and for your great contribution. Appreciated.

    • @aiartrelaxation
      @aiartrelaxation 1 year ago

      People learn in different ways; I am a visual learner. So the few videos of yours I watched improved my understanding of the inner workings immensely. And your soothing background music is like a secret key to unlock the mind. Great job to your creators. 😊😊😊

    • @lucidateAI
      @lucidateAI 1 year ago +1

      Thank you for your comment, I'm thrilled to hear that my videos have helped you to better understand the inner workings of natural language processing! As you mention, people learn in different ways, and I believe that incorporating visual elements into my explanations can be an effective way to convey complex ideas and concepts.
      I'm also glad to hear that you enjoy the background music in my videos! I believe that music can be a powerful tool for creating a calming and immersive environment, and can help to unlock the mind and promote a deeper level of understanding.
      At the end of the day, my goal is to create content that is both informative and engaging, and that helps to demystify the complex world of AI and natural language processing. I am grateful for the opportunity to share my knowledge and insights with others, and I hope to continue to create content that resonates with a wide range of learners and enthusiasts. Thank you for your support! Greatly appreciated.

    • @JackandAI
      @JackandAI 1 year ago

      @lucidateAI Absolutely. There are so many talking heads out there, and many just like to hear themselves talk without having a how-to set out in their mind. Plus, they don't add the NLP inner workings to it. It takes one to know one. Keep doing what you're doing.

    • @lucidateAI
      @lucidateAI 1 year ago

      Hi @HotSecondNews. Thank you for your comment! We appreciate your feedback and we agree that there are many individuals out there who just like to talk and don't provide practical solutions or actionable steps. And there are some very good YouTubers in AI too. At Lucidate, our mission is to not only provide insights and knowledge on the latest AI technologies and advancements, but also to help individuals and businesses implement these solutions effectively.
      We believe that NLP is a key area within AI and can be used to develop intelligent conversational agents, chatbots, and other tools that can revolutionise the way we interact with language. While the field of NLP can be complex, we strive to make it accessible and understandable for everyone, regardless of their level of technical expertise.
      Thank you for recognising our efforts, and we will continue to try to provide practical guidance and solutions for individuals and businesses looking to leverage AI and NLP technologies. Comments and criticisms help keep us accountable and on that path, accordingly we welcome any comments - either good or bad, you have on our content. With thanks, Lucidate.

  • @elvinado
    @elvinado 1 year ago +8

    Thank you. This is an amazing video series.

  • @borntobemild-
    @borntobemild- 1 year ago +3

    You articulate complex concepts very well good sir

    • @lucidateAI
      @lucidateAI 1 year ago

      Glad you think so! Greatly appreciate your support of the channel.

    • @GuinessOriginal
      @GuinessOriginal 1 year ago +1

      @@lucidateAI question: When submitting fine-tuning data from an individual user or conversation to an OpenAI language model, where and how is the model typically fine-tuned on that specific data? Is the resulting fine-tuned model pertaining to that user or conversation added to the main model used by OpenAI, or is the fine-tuned model for that individual or conversational context typically used only in the context of that user or conversation? Sorry for this question, I'm not even sure if it makes sense. I did ask GPT, but I wasn't sure of the answers I was getting, whether they were right or whether I really understood them, ha ha. Thanks again for the videos, really impressive.

    • @lucidateAI
      @lucidateAI 1 year ago +2

      When you fine-tune a model you end up with a key to use that specific model. It would be a question best posed to OpenAI, but I don't _think_ that they use custom fine-tunes to update their main models. Clearly they could, but I do not think they do.
      You get back an OpenAI-generated key that allows you access to your own bespoke version of the model you have generated.
      You get a choice of which GPT-3 model you want to update. The least sophisticated is called 'Ada', then 'Babbage', then 'Curie' then 'DaVinci'. Ada is the cheapest to FT and DaVinci the most expensive.
      You can get all the details here: platform.openai.com/docs/guides/fine-tuning
      and here:
      platform.openai.com/docs/api-reference/fine-tunes
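      For illustration, a minimal sketch of calling such a fine-tuned model, assuming the pre-1.0 openai Python library that was current when this thread was written (the model identifier below is a placeholder for the one returned by your fine-tune job):

      import openai

      openai.api_key = "sk-..."   # the API key for your OpenAI account

      # Placeholder identifier: fine-tuned models are referenced by the name returned
      # when the fine-tune job completes (something like "davinci:ft-your-org-...").
      response = openai.Completion.create(
          model="davinci:ft-your-org-2023-02-13",
          prompt="Summarise the sentiment of this headline: ...",
          max_tokens=64,
          temperature=0.2,
      )
      print(response["choices"][0]["text"])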

    • @GuinessOriginal
      @GuinessOriginal 1 year ago +1

      @@lucidateAI Thank you very much indeed, this is very helpful. I've been looking into it with the help of GPT; if I'm not mistaken, I'll be able to use it to guide me through the process with some hard work and a bit of luck. Sorry to bother you again, but one final question if you don't mind? The key you get for that specific fine-tuned model: would it be possible to use that key in some way to personalise responses to a particular person, either straight from the general model or via another smaller bespoke model or function, possibly client-side? I'm thinking more of the form, presentation style and tone of communication rather than the actual content and function. I'm wondering whether you can personalise responses to individual users based on their interactions and responses, and use them and the user details you harvest about them to fine-tune the responses to them, in much the same way, say, as Google and Facebook personalise search responses and content provided to the individual based on their 10,000 or so data points? Seems to me to be the most logical next step, if it's not being done already.

    • @lucidateAI
      @lucidateAI 1 year ago +2

      Thank you for your question, and I'm glad to hear that the information was helpful. In regards to your question, the short answer is that the key is the key to one specific fine-tuned model. So unless you were going to go to the bother (not to say expense!) of creating a separate model _per person_ then the model API_KEY wouldn't be the thing that identifies a person. It will be the fine-tuned model itself.
      It is possible to use a fine-tuned model to personalize responses for a particular person (the big tech companies use this capability all the time, but don't have a separate model per user). One way this could be achieved is by using the fine-tuned model to generate responses based on a whole bunch of different users' (plural) inputs, and then using feedback from specific individual users to further refine and personalize those responses over time.
      However, it's worth noting that personalization is a complex process that involves a lot of data and computational resources. It also raises important ethical questions around data privacy and the responsible use of AI. It's important to ensure that any personalization efforts are done in a transparent and ethical way, and that users are aware of how their data is being used and have the option to opt out if they wish.
      It's also worth noting that while companies like Google and Facebook _evidently_ do use personalization to tailor search results and content to individual users, they have large amounts of data and sophisticated algorithms to do so. Implementing a similar level of personalization for a small-scale AI model may be more challenging, but it's certainly possible with the right resources and expertise.

  • @michaelraasch5496
    @michaelraasch5496 1 year ago +1

    Incredible series. Much appreciated. Richard, I live in London, so in case we bump into each other then drinks are on me.

    • @lucidateAI
      @lucidateAI 1 year ago

      Thanks Michael for your support of the channel. I'll be sure to hit you up for that drink!! ;-)

  • @benimmortal5858
    @benimmortal5858 1 year ago +1

    I am learning all that I can. Thanks for the great video.

  • @muthukamalan.m6316
    @muthukamalan.m6316 1 year ago +1

    Thanks for the amazing video.

    • @lucidateAI
      @lucidateAI 1 year ago

      You are welcome. Thanks for your kind comment and for supporting the channel. I hope you also enjoy the other videos in this series -> ua-cam.com/play/PLaJCKi8Nk1hwaMUYxJMiM3jTB2o58A6WY.html. With thanks, Lucidate.

  • @abenjamin13
    @abenjamin13 1 year ago +1

    Fantastic thank you 🙏

  • @GuinessOriginal
    @GuinessOriginal 1 year ago

    Superb again

  • @fujisonfit
    @fujisonfit 1 year ago

    Thanks for the great video, it was very helpful. I noticed that the audio volume was quite low.

    • @lucidateAI
      @lucidateAI 1 year ago

      Glad it was helpful! Sorry about the audio levels. Thanks for the feedback.

  • @MrVMA93
    @MrVMA93 1 year ago +1

    Thank you very much for the informative educational series. I apologize for the sudden questions, but would it be possible for you to kindly elaborate a bit more on the use of foundation models such as GPT-3 for text generation in Chinese or Japanese?
    1. I was wondering if transformer models are equally effective with pictographic characters as they are with English text?
    2. Despite their multilingual capabilities, I understand that the training data of models like GPT-3 may be skewed towards English texts. In your opinion, how do you think this may impact the processing of prompts and provision of answers in languages other than English?
    Thank you very much for your time and expertise.

    • @lucidateAI
      @lucidateAI 1 year ago +2

      Thank you for your comment and questions! I'm glad you found my educational series informative. To answer your questions:
      1. Transformer models such as GPT-3 can be effective for generating text in Chinese or Japanese, as well as other languages that use non-Latin scripts. However, the performance may depend on the quality and quantity of the training data available for these languages. For example, if the model has been trained on a large corpus of Chinese or Japanese text, it may be able to generate high-quality text in those languages. However, if the training data is limited or of low quality, the performance may be lower.
      2. You are correct that the training data for models like GPT-3 may be skewed towards English texts. This can impact the processing of prompts and provision of answers in other languages, as the model may not have as much exposure to those languages. However, there are approaches to mitigate this issue, such as fine-tuning the model on language-specific datasets or using multilingual models that have been trained on a variety of languages.
      Overall, while there may be some challenges when using foundation models such as GPT-3 for text generation in languages other than English, there is still significant potential for these models to be effective in generating high-quality text in a variety of languages. Thank you for your questions and interest in this topic!
      ================================
      はい、コメントと質問ありがとうございます!私の教育シリーズが役立ったことを嬉しく思います。質問に答えると、
      GPT-3などのトランスフォーマーモデルは、漢字や仮名文字などのラテン文字以外のスクリプトを使用する中国語や日本語のテキスト生成にも有効である場合があります。ただし、これらの言語のトレーニングデータの品質と量によって、パフォーマンスは異なる可能性があります。例えば、モデルが大量の中国語や日本語のテキストコーパスでトレーニングされている場合、高品質なテキストを生成できる可能性があります。ただし、トレーニングデータが限られている場合や品質が低い場合、パフォーマンスが低くなる可能性があります。
      GPT-3のようなモデルのトレーニングデータは英語テキストに偏っている可能性があるため、他の言語でのプロンプトの処理や回答の提供に影響を与える可能性があります。ただし、言語固有のデータセットでモデルを微調整したり、複数の言語でトレーニングされたマルチリンガルモデルを使用したりするなど、この問題を緩和するアプローチがあります。
      全般的に、英語以外の言語でテキスト生成にGPT-3などのファウンデーションモデルを使用する場合には、いくつかの課題があるかもしれませんが、それでも、これらのモデルは様々な言語で高品質なテキスト生成に有効である可能性があります。 このトピックに関心を持ってくださり、質問してくださり、ありがとうございました
      ====================
      当然,感谢您的留言和提问!很高兴您觉得我的教育系列有所帮助。以下是您的问题的回答:
      Transformer 模型,如 GPT-3,对于使用非拉丁文字符的语言,如中文和日文,生成文本可能是有效的。但是,这取决于这些语言的训练数据的质量和数量。例如,如果模型经过了大量中文或日文文本的训练,它可能能够生成高质量的文本。但是,如果训练数据有限或质量较低,性能可能会较差。
      您是正确的,像 GPT-3 这样的模型的训练数据可能偏向于英文文本。这可能会影响其他语言的提示处理和答案提供,因为模型可能没有接触过这些语言。但是,有一些方法可以缓解这个问题,例如在特定语言的数据集上微调模型或使用经过多种语言训练的多语言模型。
      总的来说,虽然使用 Foundation 模型(如 GPT-3)在英语以外的语言中生成文本可能存在一些挑战,但仍有可能在各种语言中生成高质量的文本。谢谢您的问题和对这个话题的关注!
      ===================

    • @MrVMA93
      @MrVMA93 1 year ago

      @@lucidateAI Thank you so much for your informative response. I truly appreciate the extra effort you put in to attach the Japanese and Chinese translations; it was very kind of you.
      Please forgive me if I have caused any offense, but I am curious to know if the Chinese and Japanese text could be translated using GPT-3 itself. I understand that the text may be your original writing, and if that is the case, I am deeply sorry for any inconvenience or discomfort this may cause.
      Although I cannot read Chinese, I did notice additional information in the Japanese text (specifically, the mention of "non-Latin scripts such as Kanji and Kana"). However, as English and Japanese are not my native languages, I find it difficult to distinguish between AI-generated text and original writing, to be honest. Witnessing such advancements in technology is both astonishing and unsettling, and it leads me to wonder how it may impact the demand for learning foreign languages.
      Anyway, thank you very much for your time and kind response. Have a nice day!

    • @lucidateAI
      @lucidateAI 1 year ago +1

      You're very welcome! I'm glad that you found my response informative, and I appreciate your curiosity about the translations. To answer your question, it is absolutely possible to translate Chinese and Japanese text using GPT-3 and ChatGPT, as the transformer models have impressive capabilities for natural language translation. However, it's important to note that machine translation tools, including GPT-3, still may not always produce accurate or reliable translations, particularly when it comes to nuances of meaning or cultural context. (Although they are getting very, very impressive and improving all the time).
      In this case, both the Japanese and Chinese translations were created by GPT-3. I apologize if my previous message was unclear on that point. As a non-Japanese and non-Chinese speaker myself I'm not in a position to finely judge how well, but by doing a simple reverse-translation back to English I was able to determine that they were pretty close.
      I agree with you that the advancements in technology are both astonishing and unsettling, and may impact the demand for learning foreign languages in the future. However, I believe that learning a foreign language has benefits beyond just being able to communicate in that language, including expanding one's cultural knowledge and empathy, improving cognitive function, and opening up new opportunities for personal and professional growth.
      Thank you for your kind words and for taking the time to reach out. Please let me know if you have any other questions. - Lucidate

  • @zyzhang1130
    @zyzhang1130 1 year ago

    Couldn't agree more with 4:26. Anyway, just curious: which specific version of GPT-3 did you fine-tune on?

    • @lucidateAI
      @lucidateAI 1 year ago +1

      @zychan. Glad you liked the video, much appreciated. Thank you for your comment. I've fine tuned models on ada, babbage, curie and davinci all the way back to GPT-3. Most recently davinci on GPT-3.5 has been the weapon of choice. I don't tend to use ada and babbage, maybe curie once in a while. Have you performed any fine tunes yourself? Any experience you can share with the channel? Once again, thanks for the comment and question. - Lucidate.

    • @zyzhang1130
      @zyzhang1130 1 year ago

      @@lucidateAI I have not, but I'm looking into Davinci. Will let you know if there is anything to share :)

    • @lucidateAI
      @lucidateAI 1 year ago +1

      Keen to hear your experiences, as I'm sure others are too. Mine have been positive. My tip (for what it is worth): it is not a bad idea to use ada for your first attempt at fine-tuning. While the results aren't nearly as impressive as davinci, it is very, very low cost, so you can afford to experiment a little before tuning the more complex (and more expensive) models. Something to think about.

  • @medoeldin
    @medoeldin 5 months ago

    I've always heard that fine-tuning is not good for knowledge injection; however, the assertion in this video is that a benefit of fine-tuning is having the model be up to date. Can you please elaborate on the conflicting positions? Thank you!

    • @lucidateAI
      @lucidateAI 5 months ago

      @medoeldin. Folks like to talk a lot more than they like to validate models. Create your own benchmark: a set of prompts and completions that are the "gold standard" for the task you are performing. You'll want at least 30, but clearly the more you can get beyond this minimum won't hurt. Run this validation set against the baseline model and measure the semantic similarity between the gold-standard output and the output produced by that baseline model. Then do the same thing with your fine-tuned model: measure the semantic similarity between the gold standard and the output from that model. Cosine similarity is perhaps the most usual measure used here, but you might want to experiment with others for a more robust set of results.
      If after this your fine-tuned model sucks, and the performance is worse than the baseline model, then I'm afraid that your fine-tuned model sucks! You can try a different fine-tune corpus (in the case of OpenAI fine-tuning this is represented as the .jsonl file) and run the fine-tune again to see if this improves the results, but if it doesn't then perhaps this task may not be suited to fine-tuning. If, however, your fine-tuned model significantly outperforms the baseline model, then the fine-tuning exercise is perhaps worthwhile.
      OpenAI have hugely improved the tools for fine-tuning over the past few weeks, and if you go to the Fine Tuning UI at platform.openai.com/finetune and hit "+Create" you'll see an option to add such a validation set to get some performance measures at the time you create your fine-tuning job. I've found that for a lot of tasks in Capital Markets, based on some scenarios from Prime Brokerage and Hedge Funds, the combination of well-crafted prompts and a fine-tuned model yields far superior results over well-crafted prompts and a baseline model. But just because fine-tuning has been successful in these tasks doesn't mean it will be universally successful. It is possible, indeed likely, that the specialist nature of these tasks is such that there isn't enough specific detail in the training corpora of existing foundation models. If this is the case, then it means that in this scenario you need fine-tuning to supplement the training corpora to get the necessary subject matter expertise.
      As an important aside, the video you are referencing is from earlier this year. You can still fine-tune in _exactly_ the way specified in this video; OpenAI refers to this as "legacy" fine-tuning. But as I mentioned, OpenAI has massively upped their game in this area recently, and the new FT tools are definitely worth checking out; they are what I have used in my more recent applications and videos ua-cam.com/play/PLaJCKi8Nk1hwFmXTnSmknkZ9l0j-toIfa.html
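      As a rough sketch of the benchmark idea above (generate and get_embedding are placeholders for whichever model call and embedding method you use; only the cosine-similarity arithmetic is spelled out):

      import numpy as np

      def cosine_similarity(a, b):
          a, b = np.asarray(a), np.asarray(b)
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

      def score_model(generate, get_embedding, validation_set):
          # validation_set: list of (prompt, gold_completion) pairs, i.e. your "gold standard".
          # generate(prompt) returns the model's completion; get_embedding(text) returns a vector.
          sims = [cosine_similarity(get_embedding(gold), get_embedding(generate(prompt)))
                  for prompt, gold in validation_set]
          return sum(sims) / len(sims)

      # Compare score_model(baseline_generate, emb, val) with score_model(finetuned_generate, emb, val):
      # a higher average similarity for the fine-tuned model suggests the fine-tune is earning its keep.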

    • @medoeldin
      @medoeldin 5 months ago +1

      @lucidateAI I appreciate your thoughtful response. What I hear you saying is that you've been able to validate improved performance through fine-tuning for particular tasks, subject to various factors. My intuitive sense, as I have thought about your approach, is that it would positively influence the completions. There's obviously also the question of cost/benefit for the particular task, but my sense is that with the automations you describe, fine-tuning is worth it in many cases.
      As an aside, this conversation has led me to research how modifications to your approach could also enhance model performance and I'm excited to explore what I've discovered. Look forward to sharing my findings with you.

    • @lucidateAI
      @lucidateAI 5 months ago +1

      That’s a great summary. Fine tuning has its place among the tools in the AI toolbox, but it is not a silver bullet. In some cases it can be of benefit, in particular in niche areas that may not be well represented in the training corpora of standard LLMs

    • @medoeldin
      @medoeldin 5 months ago +1

      @@lucidateAI Hi Richard, with your sentence-split approach to fine-tuning, how many rows of data do you suggest to get a well-functioning model? Thank you!

    • @lucidateAI
      @lucidateAI 5 months ago

      Check out OpenAI's guide to Fine Tuning: platform.openai.com/docs/guides/fine-tuning/preparing-your-dataset. They say they see improvements in validation accuracy with around 50-100 examples. Depending on the task and the availability of training data I've used sample sets varying between 300 and 1,500 (I've got to believe that OpenAI has more experience than me in this regard!). Remember to hold back some examples (10-20%) for validation and testing to get an honest assessment of the FT. But as I said, FT is not a silver bullet; look at RAG techniques and prompting tweaks. And remember these aren't mutually exclusive: you can (I'd argue you should!) use FT in conjunction with PE, RAG and other techniques. Good luck! Keen to hear how you get on! Richard
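      A minimal sketch of holding back a validation split from a prompt/completion .jsonl file (file names and the 15% hold-out are hypothetical choices):

      import json, random

      with open("training_data.jsonl") as f:
          examples = [json.loads(line) for line in f]

      random.seed(42)                      # reproducible shuffle
      random.shuffle(examples)
      split = int(len(examples) * 0.85)    # keep ~85% for training, ~15% for validation

      for path, subset in [("train.jsonl", examples[:split]), ("validation.jsonl", examples[split:])]:
          with open(path, "w") as out:
              out.writelines(json.dumps(ex) + "\n" for ex in subset)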

  • @RobertFletcherOBE
    @RobertFletcherOBE 1 year ago

    Jesus, I nearly shat myself at the very end of the video.

    • @lucidateAI
      @lucidateAI 1 year ago

      Hopefully you are better now ;-) Have you been able to watch any more of the channel - or was the experience of watching this video too harrowing?

  • @detlefeckert1132
    @detlefeckert1132 1 year ago

    Does fine tuning mean running a supervised reinforcement model?

    • @lucidateAI
      @lucidateAI 1 year ago

      Hi Detlef. Thanks for your question. I'm not sure I understand it fully, but let me answer it as best I can and please correct any misunderstanding I may have.
      In NLP, _supervised_ learning and _reinforcement_ learning are two different approaches to training models, and they are not typically used together as a "supervised reinforcement model". At least this is not a phrase that I am familiar with.
      Supervised learning is a type of machine learning where the model is trained on labeled data, with the goal of learning a mapping between input data and corresponding output labels. In the context of NLP, this involves training a model on a corpus of text with labeled examples, such as sentiment analysis or named entity recognition.
      Reinforcement learning, on the other hand, is a type of machine learning that involves training an agent to take actions in an environment to maximize a reward signal. In the context of NLP, reinforcement learning can be used to train conversational agents to generate natural language responses that maximize a reward signal, such as user engagement or task completion.
      So I'm not familiar with the term "supervised reinforcement model" in NLP or with transformers specifically. It's _possible_ that the term could be used to describe a model that uses both supervised learning and reinforcement learning in its training, but it would depend on the specific approach being used. In general, however, supervised learning and reinforcement learning are distinct and separate approaches to training models in NLP.
      Fine-tuning a language model like GPT-3 in the way described in this video typically involves training the model on a specific task or domain by updating its parameters using a supervised learning approach, which involves providing the model with labeled examples to learn from. ('Prompts' and 'Completions')
      Reinforcement learning, on the other hand, is a different approach to machine learning that involves an agent taking actions in an environment to maximize a reward signal, and adjusting its behavior based on the feedback it receives.
      Please correct any misunderstanding I have in your question. Best - Lucidate.

  • @markoalex8819
    @markoalex8819 1 year ago +1

    Could you theoretically feed it your personal email/messaging data and have it give you more personal answers? Of course privacy would certainly be an issue.

    • @lucidateAI
      @lucidateAI 1 year ago +1

      You could indeed. It would require some preprocessing and data preparation.
      To use a corpus of your own emails to generate prompts and completions, you can follow the steps outlined below:
      Collect and preprocess the email corpus: The first step is to collect a corpus of your own emails and preprocess the data to remove any personally identifiable information or sensitive data. This can be done using data cleaning techniques such as removing email headers, footers, and signatures.
      Tokenize the text: The next step is to tokenize the preprocessed text into smaller units, such as words or phrases. This can be done using natural language processing (NLP) techniques such as word tokenization or sentence segmentation.
      Select prompts and completions: To select prompts and completions from the tokenized text, you can use various techniques such as n-grams or sliding windows. For example, you can use a sliding window of three tokens to generate prompts and completions by selecting the first two tokens as a prompt and the third token as a completion. Alternatively, you can use n-grams to generate prompts and completions by selecting a sequence of tokens of a certain length.
      Filter prompts and completions: Once you have generated a set of prompts and completions, you can filter them to ensure that they are relevant and suitable for fine-tuning the AI model. For example, you can filter out prompts and completions that are too short or too long, or those that contain irrelevant or sensitive information.
      Fine-tune the AI model: Finally, you can fine-tune the AI model using the selected prompts and completions.
      To extract prompts and completions specifically, you can use the sliding window technique or n-grams to generate sequences of tokens from the preprocessed and tokenized text. For example, you can select the first n tokens in a sliding window of m tokens as the prompt and the m-n tokens as the completion. You can then filter these sequences to ensure that they are relevant and suitable for fine-tuning the AI model.
      Great question! I hope this helps. Thanks for your support of the channel. Greatly appreciated.
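      As a toy illustration of the sliding-window idea above (deliberately simplistic whitespace tokenisation; real preprocessing would also strip headers, signatures and sensitive data first):

      def sliding_window_pairs(text, prompt_len=2, completion_len=1):
          # Split text into (prompt, completion) pairs using a word-level sliding window.
          tokens = text.split()
          window = prompt_len + completion_len
          return [(" ".join(tokens[i:i + prompt_len]),
                   " ".join(tokens[i + prompt_len:i + window]))
                  for i in range(len(tokens) - window + 1)]

      print(sliding_window_pairs("please send the quarterly report by Friday"))
      # [('please send', 'the'), ('send the', 'quarterly'), ('the quarterly', 'report'), ...]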

    • @markoalex8819
      @markoalex8819 1 year ago

      @@lucidateAI Thanks a lot for your detailed answer. Keep making these great videos!

    • @lucidateAI
      @lucidateAI 1 year ago +1

      Thanks for the great question. It has given me some great ideas for some future content. Greatly appreciate the contribution to the discussion on the channel.

  • @kevinehsani3358
    @kevinehsani3358 1 year ago

    I cannot find NLP 1. Could someone please give me the link for it? Thanks.

    • @lucidateAI
      @lucidateAI 1 year ago

      Here is a link to the whole playlist, with NLP 1 as the first video in the list: Transformers & NLP
      ua-cam.com/play/PLaJCKi8Nk1hwaMUYxJMiM3jTB2o58A6WY.html

  • @queenanahita4258
    @queenanahita4258 1 year ago

    Can you record a video explaining RLHF?

    • @lucidateAI
      @lucidateAI 1 year ago

      Queen A! Great idea. (I wish I'd thought of that myself...) I'm on it! Please keep the excellent suggestions coming! Thanks for your support of, and contribution to, the Lucidate channel.

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 1 year ago

    More detail than just the high level would be good, I think.

    • @lucidateAI
      @lucidateAI 1 year ago

      Y. Understood. Here is a video with a little more detail on how you might go about creating prompts and completions ua-cam.com/video/uFiI5fK-7B4/v-deo.html
      Please let me know what you think! - Lucidate

    • @erniebert8316
      @erniebert8316 1 year ago

      Learning something new is better from the beginning; this way you can build on it layer by layer.

    • @lucidateAI
      @lucidateAI 1 year ago

      Thanks Ernest. Have you been able to access some of the introductory material that Lucidate has produced on AI and neural networks? For instance here is a neural network primer -> ua-cam.com/play/PLaJCKi8Nk1hzqalT_PL35I9oUTotJGq7a.html. And here is some high-level intro material on transformers -> ua-cam.com/play/PLaJCKi8Nk1hxM3F0E2f2rr5j6wM8JzZZs.html

  • @noobicorn_gamer
    @noobicorn_gamer 1 year ago +1

    The whole presentation (bgm, animation, etc.) is borderline horrible, but there's so much good info that I'm conflicted on how to feel about this channel…

    • @lucidateAI
      @lucidateAI 1 year ago

      Thanks for the feedback, clearly some positive commentary about the info and some constructive criticism of the music and visuals. I'd ask what you think of the rest of the channel, but I'd be very reluctant to force you to watch any of the other videos, which have the same visual and musical style, so I really don't think you'd be able to sit through too many of them. If you wish (and only if you wish) we could take any of the videos that I've done and give them a makeover. We can keep the content and voice track as they are, and (again, only if you wish) you could supply some better background music and much-improved animations. Might that be a collaboration you would be interested in? Naturally I'm always interested in improving the quality of the output and responding to requests from viewers. So this, as we say in the UK, kills two birds with one stone. (I hope that idiom translates into other languages and cultures.) No worries if it is not a project that interests you. In any event, most importantly, I appreciate the comment and the constructive feedback contained within! Very best, Lucidate.

    • @noobicorn_gamer
      @noobicorn_gamer 1 year ago

      @@lucidateAI No, don't get me wrong; you have good info to share, and for most people it's the presentation that makes the cut. You already have material you can excel with, so if you were to improve your overall presentation and edits, you'd easily make it into one of the notable AI-related contenders :) I studied all your vids and THEN left a comment, so I was eager to learn more regardless of presentation. I was just commenting for the people who aren't that patient and would otherwise be missing out on good material to learn from ;)

    • @noobicorn_gamer
      @noobicorn_gamer 1 year ago +1

      @@lucidateAI For personal recommendations to check out (obviously biased and not sponsored), look at Fireship, AI Explained, or CGP Grey. None of them are related to each other, but they are pleasant to just have on, and the material gets absorbed pleasantly. I'm sure that with a bit more research and time, you'd definitely be amongst the notable AI channels around.

    • @lucidateAI
      @lucidateAI 1 year ago

      @@noobicorn_gamer Thanks for the feedback and best wishes. I’ll be sure to check out the channels you recommend. Appreciated! Lucidate.