GPT Learning Hub
Natural Language Processing (NLP) Quiz
ML Roadmap (Free): www.gptlearninghub.ai/machine-learning
ML Community (Paid): www.gptlearninghub.ai/machine-learning
Gradient Descent Video: ua-cam.com/video/bbYdqd6wemI/v-deo.html
Implementing Tokenization (Python): ua-cam.com/video/jwmpxuMn7p4/v-deo.html
-------------
Natural Language Processing (NLP) is a subfield of artificial intelligence (AI) focused on the interaction between computers and human language. By leveraging techniques such as tokenization, stemming, and lemmatization, NLP enables the analysis and synthesis of natural language data. Key components include sentiment analysis, named entity recognition (NER), part-of-speech (POS) tagging, and machine translation. With the advent of deep learning, models like transformers, BERT, and GPT have revolutionized the field, enhancing capabilities in tasks such as language modeling, text classification, and information retrieval. NLP is integral to applications such as chatbots, voice recognition systems, and automated text summarization, driving advancements in human-computer interaction and data-driven decision making.
Views: 100
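
As a concrete taste of the components listed above, here is a short sketch using NLTK for tokenization and part-of-speech tagging (a hypothetical example; it assumes the NLTK data packages are downloaded, and package names can vary by NLTK version):

import nltk

# One-time downloads of the tokenizer and tagger models.
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")

text = "GPT models have revolutionized natural language processing."
tokens = nltk.word_tokenize(text)   # tokenization
tags = nltk.pos_tag(tokens)         # part-of-speech (POS) tagging
print(tags)                         # e.g. [('GPT', 'NNP'), ('models', 'NNS'), ...]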

Videos

The Math You Need For ML (Visualized)
Views 484 · 12 hours ago
ML Roadmap (Free): www.gptlearninghub.ai/machine-learning ML Community (Paid): www.gptlearninghub.ai/machine-learning Gradient Descent: ua-cam.com/video/bbYdqd6wemI/v-deo.html Self Attention: ua-cam.com/video/sjrvs7dJOvU/v-deo.html Machine learning (ML) is a subfield of artificial intelligence (AI) that focuses on developing algorithms and statistical models to enable computers to learn and mak...
Illustrated Guide to NLP Tokenization
Views 276 · 14 days ago
ML Community: www.gptandchill.ai/machine-learning Intro to Neural Networks: ua-cam.com/video/KL2EaTs8r8I/v-deo.html Intro to PyTorch: ua-cam.com/video/SxwtCEDHXe8/v-deo.html Natural Language Processing (NLP) tokenization is a fundamental preprocessing step that transforms raw text into meaningful tokens, facilitating the downstream pipeline for text analytics and machine learning models. This p...
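
To make the preprocessing step above concrete, here is a minimal word-level tokenizer sketch (a toy example; real LLM tokenizers such as BPE split on sub-word units instead):

# Toy word-level tokenization: raw text -> tokens -> integer IDs.
text = "the cat sat on the mat"
tokens = text.split()                                  # naive whitespace split
vocab = {tok: i for i, tok in enumerate(sorted(set(tokens)))}
ids = [vocab[tok] for tok in tokens]
print(tokens)  # ['the', 'cat', 'sat', 'on', 'the', 'mat']
print(ids)     # [4, 0, 3, 2, 4, 1] -- the IDs a model actually consumes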
How LLMs, like ChatGPT, Learn
Views 320 · 21 days ago
ML Community: www.gptandchill.ai/machine-learning Large Language Models (LLMs) are trained through a process that leverages vast amounts of text data sourced from the internet. This data is preprocessed to extract linguistic patterns and relationships, which are then used to train neural networks with architectures such as transformer models. During training, the model iteratively adjusts milli...
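
As a rough sketch of "iteratively adjusts millions of parameters", here is a generic PyTorch training step (a hypothetical skeleton, not the training code of any real LLM):

import torch
import torch.nn as nn

# One training iteration for a toy next-token predictor: run the model
# forward, measure the error, and nudge every parameter against the
# gradient of the loss.
vocab_size, embed_dim, context = 100, 32, 4
model = nn.Sequential(nn.Embedding(vocab_size, embed_dim),
                      nn.Flatten(),
                      nn.Linear(context * embed_dim, vocab_size))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randint(0, vocab_size, (8, context))  # batch of 4-token contexts
y = torch.randint(0, vocab_size, (8,))          # "next token" targets

logits = model(x)          # forward pass
loss = loss_fn(logits, y)  # how wrong were the predictions?
loss.backward()            # gradient for every parameter
optimizer.step()           # adjust the parameters slightly
optimizer.zero_grad()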
Convolutional Neural Networks Quiz (CNN Visualized)
Views 188 · 21 days ago
ML Community: www.gptandchill.ai/machine-learning Gradient Descent: ua-cam.com/video/bbYdqd6wemI/v-deo.html Convolutional Neural Networks (CNNs) represent a groundbreaking deep learning architecture, leveraging multilayer perceptrons designed to require minimal preprocessing. By employing hierarchical feature extraction through convolutional layers, pooling layers, and fully connected layers, C...
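
A minimal PyTorch sketch of the convolution -> pooling -> fully-connected pipeline described above (a toy model, assuming 28x28 grayscale inputs such as MNIST):

import torch
import torch.nn as nn

# Toy CNN: hierarchical feature extraction (conv + pool), then a fully
# connected layer for classification.
cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),  # learn 8 local feature maps
    nn.ReLU(),
    nn.MaxPool2d(2),                            # downsample 28x28 -> 14x14
    nn.Flatten(),
    nn.Linear(8 * 14 * 14, 10),                 # 10-class output
)
x = torch.randn(1, 1, 28, 28)                   # one fake image
print(cnn(x).shape)                             # torch.Size([1, 10])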
Illustrated, Interactive Neural Nets Quiz
Views 149 · 21 days ago
Free LLMs Course / ML Community: www.gptandchill.ai/machine-learning Neural Networks Review: ua-cam.com/video/KL2EaTs8r8I/v-deo.html
ML 101: 4 Must Know Concepts
Views 402 · 21 days ago
Free Course / Paid Community: www.gptandchill.ai/machine-learning Training, linear regression, neural networks, and gradient descent are foundational concepts in machine learning (ML) that are crucial for developing and refining predictive models. Training is the process of teaching a model to make accurate predictions by learning from data. It involves feeding the model data and adjusting its pa...
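
The four concepts fit together in one tiny example: below, a linear regression model is trained with gradient descent (a hand-rolled sketch on made-up data where the true weights are w=2, b=1):

# Train y = w*x + b by gradient descent on made-up data.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]
w, b, lr = 0.0, 0.0, 0.01

for _ in range(2000):
    # Gradients of the mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w   # step against the gradient
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0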
Illustrated Guide to K Means Clustering
Views 305 · 28 days ago
ML Community: www.gptandchill.ai/machine-learning Neural Networks Review: ua-cam.com/video/KL2EaTs8r8I/v-deo.html K-Means is an unsupervised learning algorithm widely used for clustering, which partitions a dataset into K distinct clusters based on feature similarity, optimizing intra-cluster cohesion and inter-cluster separation. It leverages iterative refinement through Expectation-Maximizati...
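
A compact sketch of the iterative refinement loop described above, on 1-D toy data with K=2 (real implementations such as scikit-learn's KMeans add smarter initialization):

# K-Means: alternate between assigning points to the nearest centroid
# (expectation step) and moving centroids to cluster means (maximization step).
points = [1.0, 1.5, 2.0, 10.0, 10.5, 11.0]
centroids = [points[0], points[-1]]    # naive initialization, K = 2

for _ in range(10):
    clusters = [[], []]
    for p in points:                   # assignment step
        nearest = min(range(2), key=lambda k: abs(p - centroids[k]))
        clusters[nearest].append(p)
    for k in range(2):                 # update step
        if clusters[k]:
            centroids[k] = sum(clusters[k]) / len(clusters[k])

print(centroids)  # roughly [1.5, 10.5]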
Illustrated Guide to RAG - Retrieval Augmented Generation
Views 335 · a month ago
Free Lecture: www.gptandchill.ai/rag-lecture Attention & Transformers: ua-cam.com/video/sjrvs7dJOvU/v-deo.html Retrieval-Augmented Generation (RAG) is a method in natural language processing (NLP) that combines retrieval-based and generation-based approaches to improve the quality and relevance of generated text. In RAG, a retrieval component first searches a large database or corpus to find re...
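
In skeleton form, the retrieve-then-generate pipeline looks like this (hypothetical helper names: embed, vector_store, and llm_generate are placeholders, not a real API):

# Hypothetical RAG skeleton: retrieval narrows the corpus down to
# relevant passages, and the generator conditions its answer on them.
def answer(question, vector_store, top_k=3):
    query_vec = embed(question)                       # embed the query
    passages = vector_store.search(query_vec, top_k)  # retrieval step
    context = "\n".join(passages)
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return llm_generate(prompt)                       # generation step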
Fine-Tuning LLMs Without Expensive GPUs
Views 499 · 2 months ago
www.gptandchill.ai/machine-learning Low Rank Adaptation (LoRA) is a cutting-edge machine learning technique designed to address the challenges of model adaptation in low-resource scenarios. LoRA leverages the power of low-rank approximation, a fundamental concept in linear algebra, to efficiently adapt large-scale neural network models with limited computational resources and data availability....
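
The low-rank idea can be sketched in a few lines of PyTorch: instead of updating a full d x d weight matrix, LoRA learns two thin matrices whose product is the update (a toy illustration, not the full method):

import torch
import torch.nn as nn

# LoRA sketch: freeze the pretrained weight W (d x d) and learn a
# low-rank update B @ A, where A is (r x d), B is (d x r), and r << d.
d, r = 512, 8
W = torch.randn(d, d)                          # frozen pretrained weight
A = nn.Parameter(torch.randn(r, d) * 0.01)     # trainable: r*d parameters
B = nn.Parameter(torch.zeros(d, r))            # trainable: d*r parameters

def lora_forward(x):
    return x @ W.T + x @ (B @ A).T             # original path + low-rank update

print(d * d, 2 * d * r)  # 262144 params for full fine-tuning vs 8192 for LoRA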
How Neural Networks Learn - 3 Minute Explanation
Views 2.4K · 3 months ago
To solve the coding problem, and other free Machine Learning practice problems, head here: www.gptandchill.ai/machine-learning Improving upon the prior practice problems (and corresponding lectures) that I created, I've repackaged them into the full Generative LLMs course, which will always be free! Gradient Descent is a powerful optimization algorithm widely used in machine learning and deep l...
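
As a one-variable illustration, gradient descent repeatedly steps opposite the slope (a toy example minimizing f(x) = (x - 3)^2):

# f(x) = (x - 3)^2 has derivative f'(x) = 2*(x - 3), so each update
# moves x toward the minimum at x = 3.
x, lr = 0.0, 0.1
for _ in range(50):
    grad = 2 * (x - 3)
    x -= lr * grad
print(round(x, 4))  # ~3.0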
Illustrated Guide to Attention - How ChatGPT Reads
Views 2.2K · 3 months ago
Get my free Generative LLMs Course: www.gptandchill.ai/machine-learning 10 Minute Neural Networks Explanation: ua-cam.com/video/KL2EaTs8r8I/v-deo.html 5 Minute "Training" Explanation: ua-cam.com/video/bbYdqd6wemI/v-deo.html Attention (specifically Self-Attention) is used in Large Language Models (LLMs) like GPTs (Generative Pretrained Transformers). They allow the model to mimic the process by ...
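
The core computation described above is often written as softmax(QK^T / sqrt(d)) V; a minimal PyTorch sketch (toy shapes, a single head, no masking):

import torch
import torch.nn.functional as F

# Self-attention sketch: every token scores every other token (Q @ K^T),
# the scores become weights via softmax, and each output row is a
# weighted mix of the value vectors V.
T, d = 5, 16                      # 5 tokens, 16-dim embeddings
x = torch.randn(T, d)
Wq, Wk, Wv = (torch.randn(d, d) for _ in range(3))
Q, K, V = x @ Wq, x @ Wk, x @ Wv

scores = Q @ K.T / d ** 0.5       # (T, T) attention scores
weights = F.softmax(scores, dim=-1)
out = weights @ V                 # (T, d) contextualized token vectors
print(out.shape)                  # torch.Size([5, 16])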
Linear Regression - Multiple Choice QUIZ - Foundation of AI
Views 258 · 3 months ago
Get your copy of my free LLMs course: www.gptandchill.ai/machine-learning Take the quiz for free here: www.gptandchill.ai/linear-regression Linear Regression models are the foundation of Neural Networks (a subset of deep learning, a subset of machine learning, a subset of artificial intelligence). In this quiz we talk about the basic concepts and some details for implementing this model with ma...
LSTM Neural Networks - Intuitive Explanation
Views 326 · 3 months ago
Announcements & Timestamps: Get your copy of my free LLMs course: www.gptandchill.ai/machine-learning 0:00 Background 3:35 LSTMs (for those already familiar with RNNs) 13:23 Clarifications LSTMs (Long Short Term Memory Neural Networks) are a powerful model in deep learning (a subset of ML, which is a subset of AI) for processing sequence based data. They're actually still used in Google Transla...
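
For those following along in code, here is how a sequence flows through PyTorch's built-in LSTM (a minimal usage sketch with made-up shapes):

import torch
import torch.nn as nn

# Minimal nn.LSTM usage: feed a batch of sequences and read out the
# hidden state at every timestep.
lstm = nn.LSTM(input_size=10, hidden_size=20, batch_first=True)
x = torch.randn(4, 7, 10)          # 4 sequences, 7 steps, 10 features each
output, (h_n, c_n) = lstm(x)
print(output.shape)                # torch.Size([4, 7, 20]) -- all timesteps
print(h_n.shape, c_n.shape)        # final hidden/cell states: (1, 4, 20) each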
Neural Networks in 10 Minutes - End to End Explanation
Views 1.4K · 3 months ago
Take the quiz here: ua-cam.com/video/xZcOTAJ-h6w/v-deo.html 5 Minute Video for Gradient Descent (The Training Algorithm): ua-cam.com/video/bbYdqd6wemI/v-deo.html Get my free LLMs course: www.gptandchill.ai/machine-learning Neural Networks are the most powerful model in deep learning (a subset of ML, which is a subset of AI). They're the foundation of ChatGPT, Self Driving, & DeepFakes. It turns...
10 Minute PyTorch Intro
Views 820 · 3 months ago
Neural Networks QUIZ
Views 1.1K · 3 months ago
Recurrent Neural Networks (RNNs) - Intuitive Explanation
Views 201 · 3 months ago
NLP Overview: How was ChatGPT Trained?
Views 275 · 4 months ago
ML for SWE's - 4 Must Know Concepts
Views 475 · 4 months ago
How LLMs (Large Language Models) Talk
Views 3K · 4 months ago
Convolutional Neural Networks (CNNs) - QUIZ
Views 4.2K · 4 months ago
Math Review for Artificial Intelligence - QUIZ
Views 7K · 4 months ago
Natural Language Processing (NLP) - Multiple Choice Quiz
Views 155 · 4 months ago
Gradient Descent - Multiple Choice Quiz
Views 158 · 4 months ago
Announcing Free ML Coding Problems with @NeetCode
Views 376 · 5 months ago
The Math you NEED for Machine Learning (Crash Course)
Views 1.1K · 5 months ago
The Secret AI Model Behind GPT-4
Views 147 · 5 months ago
CNNs (Convolutional Neural Networks) - Introduction
Views 265 · 5 months ago
Intro to PyTorch. Forget TensorFlow.
Views 7K · 5 months ago

COMMENTS

  • @Sohammhatre10
    @Sohammhatre10 1 day ago

    Hello, will you please make a one-shot video for NLP interviews where you briefly explain the important terms? It would probably be highly beneficial for the channel's growth as well. Thanks for considering the suggestion. All the best!

  • @abhishekmbc333
    @abhishekmbc333 2 days ago

    At 3:50, why do you say the answer is x=3? The question was for what values of x the slope is positive, and it's positive for all x > 0. So the answer should have been x > 0, that's it. Am I missing something? Please correct me.

  • @kimjong-un4521
    @kimjong-un4521 3 days ago

    Wow, so simple.

  • @mohammadzayd9113
    @mohammadzayd9113 17 days ago

    Very good and easily understandable explanation.

  • @gptLearningHub
    @gptLearningHub 17 days ago

    Small Clarification: Technically the question mark comes before alphabet characters in ASCII, but the idea presented in the video holds regardless.

  • @avi12
    @avi12 18 days ago

    5:44 I was wondering why you didn't do: for word, i in sorted_list: mapping[word] = i + 1

    • @ShahidulAbir
      @ShahidulAbir 17 days ago

      I think you meant for i, word in enumerate(sorted_list): mapping[word] = i + 1

    • @gptLearningHub
      @gptLearningHub 17 days ago

      Wanted to keep the code as readable as possible for those without Python background!
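
For reference, both versions discussed in this thread build the same word-to-ID mapping; a runnable comparison (assuming sorted_list is the sorted vocabulary list from the video):

# Assuming sorted_list holds the vocabulary in sorted order, as in the video.
sorted_list = ["apple", "banana", "cherry"]

# Explicit-index version (easy to follow without a Python background):
mapping = {}
for i in range(len(sorted_list)):
    mapping[sorted_list[i]] = i + 1

# enumerate version suggested above -- identical result:
mapping2 = {word: i + 1 for i, word in enumerate(sorted_list)}
assert mapping == mapping2  # {'apple': 1, 'banana': 2, 'cherry': 3}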

  • @avi12
    @avi12 23 days ago

    Your videos are great, but because I don't really come from a background in ML or neural networks, even though I feel like I understand the concept you're teaching, I don't actually understand why the code solution works.

  • @gptLearningHub
    @gptLearningHub 23 days ago

    Small Clarification: Most LLMs actually process inputs and outputs on a sub-word level instead of a strict word level split, but thinking about this in terms of words can help simplify things at first!

  • @vedantbhardwaj3277
    @vedantbhardwaj3277 23 days ago

    Share problem link

    • @gptLearningHub
      @gptLearningHub 23 days ago

      You can try the problem here: neetcode.io/problems/gpt-dataset, or see the full list here: www.gptandchill.ai/codingproblems

    • @vedantbhardwaj3277
      @vedantbhardwaj3277 23 days ago

      @gptLearningHub Thanks!

  • @lakindujay
    @lakindujay 24 days ago

    Just finished your series. Thanks, bro, for helping me understand this :) Your explanation was so good that I managed to code the practice problems by myself with minimal reference to the solutions. Definitely recommending this series to my friends. Also, the LeetCode-style practice questions are incredibly helpful. Thanks again!

    • @lakindujay
      @lakindujay 24 days ago

      My next destination is Andrej Karpathy.

  • @ShahidulAbir
    @ShahidulAbir 25 days ago

    Thank you for the video. I would greatly appreciate a math for ML video.

  • @gregoryfridman5680
    @gregoryfridman5680 25 days ago

    My favorite free ML resource I've found so far. Goated, and this is definitely helping with my classes.

  • @nabilfatih
    @nabilfatih 26 days ago

    I have a suggestion: please use colors that contrast more with each other. It is hard to see your pen when you write something.

  • @gptLearningHub
    @gptLearningHub 26 days ago

    Small Clarification: I misspoke at 2:58, as we can simply rewrite, for example, x₁² as another variable x₂. The limitation of LR is that anything like w₁² or σ(w₁) is forbidden.

  • @nedyalkovs
    @nedyalkovs 26 days ago

    Hey ChatGPT. I think you are wrong regarding linear regression. Linear regression must remain linear in the parameters (weights), not in the input variables.

    • @gptLearningHub
      @gptLearningHub 26 days ago

      You're right - adding a pinned comment to clarify that for future viewers :)
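
A quick sketch of the point settled in this thread: squaring an input feature keeps the model linear in its weights, so it is still linear regression (a toy illustration in plain Python):

# Still linear regression: y = w1*x + w2*(x**2) + b is nonlinear in x
# but linear in the weights w1, w2, b -- which is what matters.
def predict(x, w1, w2, b):
    x2 = x ** 2            # treat x^2 as just another input feature
    return w1 * x + w2 * x2 + b

# By contrast, applying anything like w1**2 or sigmoid(w1) to a *weight*
# would break linearity in the parameters and leave linear regression.
print(predict(2.0, w1=0.5, w2=0.25, b=1.0))  # 3.0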

  • @avi12
    @avi12 27 days ago

    In the first minute, the editing left me very confused because the voiceover did not match the text on the screen.

  • @JLJConglomeration
    @JLJConglomeration 28 days ago

    What if you have no foreknowledge of what a good k value should be?

    • @JoelJosephReji
      @JoelJosephReji 26 days ago

      We try multiple times to figure it out.

  • @AyushGupta-kh3cw
    @AyushGupta-kh3cw 28 days ago

    Dude, really nice video. BTW, your logo could use some work. I can make a new logo for you. If you use it, can you help me out with some ML tips or just guide me?

  • @yanndjoumessi7130
    @yanndjoumessi7130 29 days ago

    Really interesting video.

  • @lakindujay
    @lakindujay a month ago

    Does the final linear layer allow the model to learn which tokens in the vocabulary result in a positive sentiment?

    • @chrisavila1969
      @chrisavila1969 26 days ago

      No, sentiment is captured in the embedding layer; the final linear layer is there to squeeze the embedding down. It's dimensionality reduction for the sigmoid layer.

  • @gptLearningHub
    @gptLearningHub a month ago

    Thumbnail Disclaimer: A single GPU is still required to use LoRA, but you can access a GPU for free at colab.research.google.com/.

  • @Dent42
    @Dent42 a month ago

    Super accessible explainer! Nice video!

  • @wojpaw5362
    @wojpaw5362 a month ago

    Awesome, thanks!

  • @shivanandmasne5329
    @shivanandmasne5329 a month ago

    Nice explanation. Keep going and you will reach great heights! :)

  • @navidutube
    @navidutube a month ago

    Great!

  • @joydeeprony89
    @joydeeprony89 a month ago

    Paji, the content is at a very high level; it went right over my head.

  • @paveltarashkevich8387
    @paveltarashkevich8387 2 months ago

    0:43 It should be "non-linearities" instead of "linearities", I suppose.

  • @tilkkone4257
    @tilkkone4257 2 months ago

    Is there a more detailed criterion for overfitting besides training accuracy being greater than testing accuracy?

  • @mrpi230
    @mrpi230 2 months ago

    Thank you. 🙂

  • @ppppp524
    @ppppp524 2 months ago

    I'm proud of myself. I was able to understand 60% of the words said in this video.

  • @ProducerMaster
    @ProducerMaster 2 months ago

    Just want to thank you for the videos. I'm pretty bad at math and found ML too intimidating to learn. By breaking it down into chunks, you helped me get started looking into the subject and actually trying it out for the first time.

  • @23412wer
    @23412wer 2 months ago

    1. Would we require a for loop if there were multiple images? 2. What is the role of seeding at the start of each function? I don't see any non-deterministic functions.

  • @ProducerMaster
    @ProducerMaster 2 months ago

    Thanks a lot! Started to dabble in ML; your channel, along with the NeetCode practice problems, helps so much!

  • @adityasaxena7374
    @adityasaxena7374 2 months ago

    Where is the Colab notebook mentioned in the video?

    • @ProducerMaster
      @ProducerMaster 2 months ago

      Scroll down to the bottom of the question on the NeetCode problem page.

  • @adityasaxena7374
    @adityasaxena7374 2 months ago

    Need some more explanation.

  • @cachaceirosdohawai3070
    @cachaceirosdohawai3070 2 months ago

    The explanation was great, though I thought you didn't go in depth enough on how the training of the linear layers would work.

  • @pianoforte611
    @pianoforte611 3 months ago

    Great video and series. I was hoping for a bit more expansion on tensor multiplication. I get matrix multiplication, but can you multiply, for instance, a 4x3x2 tensor with a 4x2x3 tensor?
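
On the tensor question above: PyTorch handles exactly those shapes as a batched matrix multiply (a quick check, not code from the video):

import torch

# PyTorch treats 3-D operands as a batch of matrices: for each of the
# 4 batch entries, a (3x2) matrix multiplies a (2x3) matrix -> (3x3).
a = torch.randn(4, 3, 2)
b = torch.randn(4, 2, 3)
c = torch.matmul(a, b)   # equivalently torch.bmm(a, b)
print(c.shape)           # torch.Size([4, 3, 3])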

  • @rexyl547
    @rexyl547 3 months ago

    Great content! And just to make the quiz a bit more accurate: the last question, "In a neural network for predicting whether someone develops diabetes, what's the purpose of a sigmoid activation?", would be better phrased as "...what's the purpose of a sigmoid activation *in the output layer*?" For hidden layers, other popular choices like ReLU would also work (and potentially better), and the purpose there would be to introduce non-linearity (and more).

  • @chat-gpt-bot
    @chat-gpt-bot 3 months ago

    It would be good if the notation were more consistent; for example, other videos use C as the embedding dimension.

  • @punk3900
    @punk3900 3 months ago

    What if there are local minima that might prevent progress toward the true minimum?

    • @Alex-pm8wy
      @Alex-pm8wy 2 months ago

      Yes, there are algorithms for that too!

  • @adityasaxena7374
    @adityasaxena7374 3 months ago

    For the first time I am able to understand these concepts, your videos are really helpful. Are you planning to make videos on CNNs, GANs and other remaining topics?

  • @adityasaxena7374
    @adityasaxena7374 3 months ago

    Great video! Please make a video on non-linearity; it would be really helpful.

  • @chat-gpt-bot
    @chat-gpt-bot 3 months ago

    I believe there's an error in your random starting index calculation: "high" should be equal to (len(words) - context_length - 1). The "-1" is required since you later do (idx+1+context_length) when you populate the Y vector. The tests just happen to pass now because none of the pre-seeded starting indexes falls on the very last element, which would push Y out of bounds.

    • @yoitteri1476
      @yoitteri1476 2 months ago

      torch.randint(low, high, ...) returns random numbers between low (inclusive) and high (exclusive), so the code is correct.

    • @ProducerMaster
      @ProducerMaster 2 months ago

      @yoitteri1476 Thanks for pointing this out. I was in the same mindset as the person above. I wrote it out and it didn't make sense until I realized the bound was exclusive.
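
For anyone else verifying the point above, torch.randint's upper bound is exclusive, which is easy to confirm directly (a quick standalone check, not the course code):

import torch

# torch.randint(low, high, size) samples from [low, high): the upper
# bound is exclusive, so the max below will be 4, never 5.
samples = torch.randint(0, 5, (10000,))
print(samples.min().item(), samples.max().item())  # 0 4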

  • @CharlesGreenberg000
    @CharlesGreenberg000 3 months ago

    I feel like it's helpful to explain that Q and K are actually the products of a weights matrix of size AxD with the input matrix DxT, where D is the embedding dim and A is the size of the "key/query" space. So you aren't learning a weights matrix of size AxT (which would be massive!) but rather a matrix of size AxD (which is considerably smaller). But please correct me if I have this wrong!
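
The shape argument in the comment above checks out numerically (using the commenter's notation: D = embedding dim, T = sequence length, A = key/query dim; a toy sketch, not the video's code):

import torch

# The learned weight is (A x D) -- independent of sequence length T --
# while the resulting queries Q are (A x T), one column per token.
D, T, A = 64, 128, 32
W_q = torch.randn(A, D)        # learned: A*D parameters, no dependence on T
X = torch.randn(D, T)          # input sequence, one column per token
Q = W_q @ X                    # (A, T): a query vector for every position
print(W_q.shape, Q.shape)      # torch.Size([32, 64]) torch.Size([32, 128])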

  • @CharlesGreenberg000
    @CharlesGreenberg000 3 months ago

    Great work but please don’t draw with blue on dark grey!

  • @gptLearningHub
    @gptLearningHub 3 months ago

    Hey everyone! I've re-made this video in a much more concise form, while keeping all the same concepts. Check it out here: ua-cam.com/video/sjrvs7dJOvU/v-deo.html

  • @gptLearningHub
    @gptLearningHub 3 months ago

    The first AI/ML algorithm to learn: ua-cam.com/video/bbYdqd6wemI/v-deo.html