ChrisMcCormickAI
Coding a Paper - Ep. 7: Putting it all together
We’re finally ready to put all the pieces of our model together and run it!
We’ll need to implement the connections between our different components, and combine them into a transformer layer, which we’ll define a class for.
We’ll test everything as we go, and Nick will take us through the bugs and issues he encountered with his original implementation (and how to resolve them, of course!).
With the model complete, we’ll initialize a version to train with:
- an embedding size of 128
- 10 layers
- 8 attention heads
The context window is a modest 512 tokens, but the past information carried forward by XL recurrence and our retrieval of relevant memories with kNN allow us to approximate a much larger context window!
We’ll move the model onto the GPU, run some training steps on a dataset of suitably-long research papers, and watch our training loss go down successfully.
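If you want a feel for what that looks like in code, here's a minimal sketch. The class name MemorizingTransformer, its constructor arguments, and the tokenizer/train_loader objects are placeholders for whatever you named things in your own notebook, not the exact code from the video:

    import torch

    # Hypothetical model class standing in for the one built in the series.
    model = MemorizingTransformer(
        vocab_size=tokenizer.vocab_size,  # tokenizer from the data-pipeline episode
        embed_dim=128,                    # embedding size
        num_layers=10,                    # transformer layers
        num_heads=8,                      # attention heads
        seq_len=512,                      # context window
    )

    # Move the model onto the GPU if one is available.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = model.to(device)

    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

    model.train()
    for batch in train_loader:             # batches of 512-token segments
        input_ids = batch.to(device)
        loss = model(input_ids)            # assume the model returns the LM loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        print(f"loss: {loss.item():.3f}")  # watch the training loss go down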
For those of you who have been following along, congrats on making it through to the end!
How does it feel to have built your own LLM from the ground up?!
Links:
- Link to Colab Notebook: colab.research.google.com/drive/1XZz1sjNt1MKRG6ul_hOGSJFQLS4lRtmJ?usp=sharing
- You can follow me on Twitter: nickcdryan
- Check out the membership site for a full course version of the series (coming soon) and lots of other NLP content and code! www.chrismccormick.ai/membership
Chapters:
00:00 introduction
00:36 adjust relative position bias
02:48 fix our attention class
05:20 why normalize keys and queries?
06:40 add relative position bias to attention
08:00 how to put it all together
09:40 pseudocode outline
12:02 tips for putting it all together
15:25 build a layer block
19:47 build the model class and put everything together
30:23 fixing bugs and some rewrites
37:10 running our model!
38:12 moving our model onto a GPU
41:47 next steps to test and optimize
43:30 conclusion
Views: 898

Videos

Coding a Paper - Ep. 6: Adding XL Recurrence to Transformers
547 views • 5 months ago
In this episode we’re adding recurrence into Memorizing Transformers. Specifically, we’re implementing a type of recurrence as outlined in the Transformer-XL architecture. Transformer-XL was one of the earliest attempts at solving the “long range dependencies” problem with the transformers architecture (the same problem we’re trying to address in Memorizing Transformers!), and they did this by ...
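The core trick, roughly: cache the keys and values from the previous segment and let the current segment attend over them, without backpropagating across the segment boundary. A simplified, self-contained sketch (single head, no masking; names and details differ from the actual notebook):

    import torch

    def attend_with_xl_memory(q, k, v, xl_memory=None):
        # q, k, v: (batch, seq, dim); xl_memory: cached (k, v) from the previous segment.
        if xl_memory is not None:
            mem_k, mem_v = xl_memory
            k = torch.cat([mem_k, k], dim=1)   # prepend cached keys
            v = torch.cat([mem_v, v], dim=1)   # prepend cached values
        scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
        out = scores.softmax(dim=-1) @ v
        # Detach before caching so gradients don't flow across segments.
        new_memory = (k[:, -q.shape[1]:].detach(), v[:, -q.shape[1]:].detach())
        return out, new_memory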
Coding a Paper - Ep. 5: Adding KNN memory to transformers
770 views • 5 months ago
In this episode it’s time to finally add memory to Memorizing Transformers! This is the crux of the paper, so there’s lots to do but we will take it step by step. The Memorizing Transformers model was created to address the problem of long documents that don’t fit within a single context window. It addresses this by storing information (specifically, the key and value projections from attention) i...
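At a high level, each memory-equipped layer writes its keys and values into an external store and, for every new query, retrieves the top-k most similar stored pairs to attend over. A toy illustration using exact dot-product search (the paper uses approximate kNN; the class and method names here are made up):

    import torch

    class SimpleKNNMemory:
        def __init__(self, dim):
            self.keys = torch.empty(0, dim)
            self.values = torch.empty(0, dim)

        def add(self, k, v):
            # k, v: (num_tokens, dim) key/value projections from the current segment
            self.keys = torch.cat([self.keys, k.detach().cpu()])
            self.values = torch.cat([self.values, v.detach().cpu()])

        def search(self, q, top_k=32):
            # q: (num_queries, dim); similarity via dot product with every stored key
            sims = q.cpu() @ self.keys.T                            # (queries, stored)
            idx = sims.topk(min(top_k, self.keys.shape[0]), dim=-1).indices
            return self.keys[idx], self.values[idx]                 # (queries, top_k, dim)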
Coding a Paper - Ep. 4: Adding in Position Embeddings
640 views • 6 months ago
In the last episode we built self-attention but left out a key ingredient: position embeddings. On its own, self-attention doesn’t provide the model with any information about where in a sentence different words are in relation to one another. “John loves Mary,” “Mary loves John,” and “loves John Mary” all look the same to self-attention! That’s why we add in explicit information about the sequ...
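The simplest version of the fix is to add a learned embedding for each position to the token embeddings before attention sees them (the series later moves to a relative position bias instead, but the idea is the same). A minimal sketch:

    import torch
    import torch.nn as nn

    class TokenAndPositionEmbedding(nn.Module):
        def __init__(self, vocab_size, max_len, embed_dim):
            super().__init__()
            self.tok = nn.Embedding(vocab_size, embed_dim)
            self.pos = nn.Embedding(max_len, embed_dim)

        def forward(self, input_ids):                        # (batch, seq)
            positions = torch.arange(input_ids.shape[1], device=input_ids.device)
            # The same position vectors are added to every sequence in the batch (broadcast).
            return self.tok(input_ids) + self.pos(positions)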
Coding a Paper - Ep. 3: Let’s build GPT in an hour
2K views • 6 months ago
In this video we’re going to keep it simple: let’s build GPT in under an hour. We’re going to go line by line and show what each step of multihead attention does, then we’re going to create a tiny GPT model using multihead attention and feed our data pipeline from Episode 2 through it. This is a necessary step for coding our paper because Memorizing Transformers is built on variations of the “s...
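For orientation, the heart of that hour is a block like the following: project to queries, keys, and values, split into heads, apply a causal mask, and project back. This is a bare-bones sketch, not the exact code from the notebook:

    import torch
    import torch.nn as nn

    class CausalSelfAttention(nn.Module):
        def __init__(self, embed_dim, num_heads):
            super().__init__()
            self.num_heads = num_heads
            self.qkv = nn.Linear(embed_dim, 3 * embed_dim)
            self.proj = nn.Linear(embed_dim, embed_dim)

        def forward(self, x):                                 # x: (batch, seq, dim)
            b, t, d = x.shape
            q, k, v = self.qkv(x).chunk(3, dim=-1)
            # Split into heads: (batch, heads, seq, head_dim)
            q, k, v = (z.view(b, t, self.num_heads, -1).transpose(1, 2) for z in (q, k, v))
            scores = q @ k.transpose(-2, -1) / (d // self.num_heads) ** 0.5
            # Causal mask: each position may only attend to itself and earlier tokens.
            mask = torch.tril(torch.ones(t, t, device=x.device, dtype=torch.bool))
            scores = scores.masked_fill(~mask, float("-inf"))
            out = (scores.softmax(dim=-1) @ v).transpose(1, 2).reshape(b, t, d)
            return self.proj(out)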
Coding a Paper - Ep. 2: Processing data to keep GPUs busy
1K views • 6 months ago
It’s time to start coding our paper! Luckily we’ll tackle the most exciting part first, which is…processing our dataset? Although it’s not the most compelling part of building a model, selecting a dataset and setting up a data processing pipeline are necessary steps that enable us to quickly run and test a model. Why build a pipeline? While pre-processing an entire dataset can eliminate some bo...
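The essence of the pipeline is turning long tokenized documents into fixed-size segments the model can consume. A simplified sketch of that chunking step (function and variable names are illustrative):

    import numpy as np

    def chunk_documents(token_docs, chunk_size=512):
        # token_docs: list of 1-D arrays of token ids, one per document
        chunked = []
        for doc in token_docs:
            doc = np.asarray(doc)
            usable = (len(doc) // chunk_size) * chunk_size   # clip to a multiple of chunk_size
            if usable == 0:
                continue                                     # skip docs shorter than one chunk
            chunked.append(doc[:usable].reshape(-1, chunk_size))
        return chunked                                       # list of (num_chunks, chunk_size) arrays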
Let's build GPT with memory: learn to code a custom LLM (Coding a Paper - Ep. 1)
11K views • 7 months ago
You've used an LLM before, and you might've even fine-tuned one, but...have you ever built one yourself? How do you start from scratch and turn a new research idea into a working model? This is the skill used by industry and academic researchers to turn cutting edge research ideas into production quality code. That's what we'll do in this series: we're going to implement a Google research paper...
Creating Art with AI - Ep. 2.4 - Samplers
368 views • 1 year ago
The list of samplers for Stable Diffusion is overwhelming, but I think you can safely narrow it down to two! (Euler and DPM 2M Karras) I’ll justify that choice, show some comparisons, and offer a little insight into what samplers are doing and why there are so many. I’ll also touch on “ancestral” samplers as well as the “Karras” versions. I think that understanding these points will help you av...
Creating Art with AI - Ep. 2.3 - CFG Scale
942 views • 1 year ago
The CFG Scale is generally documented as controlling how much influence your text prompt has over the image generation. That may be true, but don’t expect too much from it! In practice, I think it’s most useful as another way to create different variations of a seed that you like. “CFG” stands for “Classifier-Free Guidance”. That name really only has significance to researchers in this field (a...
Creating Art with AI - Ep. 2.2 - Seeds
267 views • 1 year ago
Stable Diffusion relies on randomness in order to generate different results. The results are going to be different every time you run it! But it can be very useful to re-create a specific result that we liked, so that we can try tweaking our settings for it to make more subtle changes. To do this, we can exploit a peculiar property of how random number generation works in a computer… A typical...
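If you're reproducing this in code rather than a web UI, the same idea with Hugging Face's diffusers library looks roughly like this (the checkpoint name is just an example):

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Fixing the generator's seed makes the "random" starting noise reproducible,
    # so the same prompt + seed + settings re-creates the same image.
    generator = torch.Generator("cuda").manual_seed(1234)
    image = pipe("a cat wearing a space helmet", generator=generator).images[0]
    image.save("cat_1234.png")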
Creating Art with AI - Ep. 2.1 - Steps
223 views • 1 year ago
Stable Diffusion was actually trained to remove noise from very noisy images, using a description of the original image to help it out. To generate art with it, we give it an image that’s actually just pure random noise, and tell it that it’s actually a cat wearing a space helmet. Then SD does its best to “recover” the original artwork that it assumes is buried in there somewhere. It does this ...
Creating Art with AI - Ep. 1.6 - Managing Your Expectations
269 views • 1 year ago
What can you reasonably expect from Stable Diffusion? When I first started generating images, I was completely amazed with the creativity and artistic skill of the model. It’s incredibly inspiring! As I moved past just “messing around” with the model, and started actually trying to create specific pieces of artwork (usually something I wanted to make for a friend), I ran into problems. I made a...
Creating Art with AI - Ep. 1.5 - Copying Existing Prompts
220 views • 1 year ago
Another effective way to control the style of your image is to use a site like lexica.art to find other images of similar subjects, pick one you like, and re-use their list of artist names and modifiers. Note: This technique seems to work best when the subjects are similar, such as “the inside of a workshop” / factory / shipyard. Links 1. Lexica.art: lexica.art/
Creating Art with AI - Ep. 1.4 - Intro to Modifiers
294 views • 1 year ago
Most example prompts you see will contain “modifier” keywords added to the end, such as “highly detailed”, “concept art”, “symmetrical”, … To familiarize yourself with the options, there is a tool called promptoMANIA that has an organized catalog of common modifiers, with an example image for each. Links 1. Promptomania: promptomania.com/stable-diffusion-prompt-builder/
Creating Art with AI - Ep. 1.3 - Leveraging Artists
275 views • 1 year ago
Probably *the most* effective way to achieve a particular “look” to your art, and to get higher-quality generations in general, is to reference the name of an existing artist… but what if you aren’t actually familiar with any artists and their styles? (I’m certainly not!) In this video I’ll show you a couple of my favorite tools for browsing popular artists that work well with Stable Diffusion....
Creating Art with AI - Ep. 1.2 - Your First Generation
326 views • 1 year ago
Creating Art with AI - Ep. 1.1 - Introduction
577 views • 1 year ago
Self-Attention Equations - Math + Illustrations
4.5K views • 1 year ago
Notebook Walkthrough - Question Answering with RAG on a Custom Dataset
11K views • 2 years ago
RAG Deep Dive - Ep. 2 - Preparing Reference Text
803 views • 2 years ago
RAG Deep Dive - Ep. 1 - Investigating the Code
2.4K views • 2 years ago
Chatbots & Conversational AI - Ep. 3 - Blenderbot 2.0
3.8K views • 2 years ago
Chatbots & Conversational AI - Ep. 2 - Blenderbot 1.0
2.4K views • 2 years ago
Mixing BERT with Categorical and Numerical Features
14K views • 3 years ago
BERT + Categorical Features - Ep. 3 - AirBnb Pricing Prediction
2.1K views • 3 years ago
Question Answering Research - Ep. 4 - Retrieval Augmented Generation (RAG)
2.7K views • 3 years ago
BERT + Categorical Features - Ep. 2 - Results & Discussion with Author Ken Gu
1.8K views • 3 years ago
Channel Introduction, June 2021
4.1K views • 3 years ago
Question Answering Research - Ep. 3 - Reader Options
861 views • 3 years ago
BERT + Categorical Features - Ep. 1 - Everything-to-Text
4.6K views • 3 years ago

COMMENTS

  • @adammonroe378
    @adammonroe378 2 months ago

    One of the greatest explanations for QA I have ever seen! There are lots of garbage and confusing explanations on Hugging Face. But you are the hero!

  • @intelligenttechnologies2476
    @intelligenttechnologies2476 2 months ago

    Wow, what a high value channel. Thank you for the effort you put into these videos, they are brilliant.

  • @850mph
    @850mph 2 months ago

    BEST explanation of ATTENTION I have seen on the Net.

  • @swethapaddu6610
    @swethapaddu6610 3 months ago

    Sir, for multilingual extraction from clinical notes I need this dataset. Where can I download it?

  • @putrikamaluddin6595
    @putrikamaluddin6595 3 months ago

    Hi Chris, given that I have a news dataset, are there any parameters in the model that I can fine-tune, so I can train and test it?

  • @royalstingray822
    @royalstingray822 3 months ago

    I think one of the best ways to think about steps is like solving a jigsaw puzzle. The AI starts with a big jumble of pieces with seemingly no coherence and then moves a number of those pieces at a time to try and create the puzzle image. If you give the AI say 5 steps, but it's a 512 piece puzzle (a 512 x 512 image would have 512 pixels of width and 512 pixels of length) it has to move the pieces in big handfuls. It's only got 5 moves to solve the puzzle and 512 pieces to move, so it moves 100 at a time. It might be able to group colours and rough textures and such together to make more educated guesses at what should go where, but it's generally a pretty crude approach. Results often sort of resemble the completed puzzle image, but are usually a way off the mark.
    By contrast, if you give the AI 10,000 steps, well now it has to cut the pieces up into tiny smaller pieces so it has enough to move every step. This means the AI can get carried away creating details which shouldn't exist, since it's too focused on tiny pieces to see the big picture. Somewhere in there will be the perfect step count, where the AI creates the puzzle like a human would, moving each piece in turn until it's perfectly arranged.
    Now obviously this isn't quite how software like Stable Diffusion works, but it is a simple way to understand how step count can influence generation. Low step counts force the AI to make bolder decisions about what it puts where. Tiny variance in the initial 'noise' can lead to huge variance in the final picture, as the AI has to make very significant decisions at each step even though it doesn't really have much information to go on. On the other hand, high step counts force the AI to make very small changes - the problem here really is luck. A string of 'bad' decisions is much harder for the AI to undo before it finishes.
    Take the 'wizard portrait' example in this video - at 50 steps the hair is defined but clearly hair. This is because it has jumped to the most logical conclusion at what should be there and then not been able to keep guessing. By 100 steps it has guessed itself into the wrong answer, by continuing to chase its own 'refinement' of individual strands of hair, which has led to these more tendril-like spiky things. At 150 steps I wouldn't be surprised if the hair becomes a helmet of sorts.

  • @owaizdero1520
    @owaizdero1520 3 months ago

    Can we use BERT as the retriever and BLOOM as the generator in a RAG model on custom data?

  • @terrortalkhorror
    @terrortalkhorror 3 months ago

    I am a master's student at Columbia. Your explanation is so much better than the explanation that I had from my faculty. Keep doing what you are doing.

  • @user-dq4rc1gv9u
    @user-dq4rc1gv9u 4 months ago

    It is a great tutorial!!! However, the dataset loading part is broken... getting a "ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions" error unfortunately... the shapes are not matching properly. It is a great tutorial though!

  • @doctorshadow2482
    @doctorshadow2482 4 months ago

    Thank you. Good explanation. Some questions:
    1. At 2:40. Why are we interested in getting the sum as 1, which softmax provides? What's wrong with using the existing output values? We already have the weight for 8 higher than the others, so we have the answer. Why do we need the extra work at all?
    2. At 9:49. What is this "word vector"? Is it still a one-hot vector for the word from the dictionary, or something else? How is this vector represented in this case?
    3. At 15:00. That's fine, but if we trained for "cpupacabra", what would happen to the weights when we train for the other words? Wouldn't it just blend or "blur" the coefficients, making them closer to "white noise"?

  • @hari8568
    @hari8568 4 months ago

    Why are we using learning rate schedulers when we have the Adam optimizer? I thought that should handle it internally.

  • @CJ-ur3fx
    @CJ-ur3fx 4 months ago

    04:00 Super useful! I didn't know how to generate those grids!

  • @mm100latests5
    @mm100latests5 5 months ago

    🔥 thank you for this top tier series!

  • @mm100latests5
    @mm100latests5 5 months ago

    In the Colab I'm seeing an error at:
        chunked = np.array([doc.reshape(-1, chunk_size) for doc in clipped])
    ValueError Traceback (most recent call last)
    <ipython-input-64-7f02340de32c> in <cell line: 1>()
    ----> 1 chunked = np.array([doc.reshape(-1, chunk_size) for doc in clipped])
    ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (3401,) + inhomogeneous part.

    • @CluntPeetus
      @CluntPeetus 4 months ago

      Same here. I fixed it at one point, and everything was working well, but I can't for the life of me remember what I did. I assume numpy's to blame. Let me know if you have any luck!

    • @CluntPeetus
      @CluntPeetus 4 months ago

      I figured it out! If you're still interested, look up an article titled: NEP 34 - Disallow inferring dtype=object from sequences. Really simple fix!
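      For anyone else hitting this: the change described in NEP 34 means NumPy no longer guesses dtype=object for ragged inputs, so passing it explicitly is the likely fix (variable names taken from the snippet above):

          import numpy as np

          # Each document produces a different number of chunks, so the outer array
          # must be an object array rather than a rectangular numeric array.
          chunked = np.array([doc.reshape(-1, chunk_size) for doc in clipped], dtype=object)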

  • @corgirun7892
    @corgirun7892 5 months ago

    What an impressive video! It seems this is precisely why certain Large Language Models (LLMs) can handle context lengths of up to 200k.

  • @Tripp111
    @Tripp111 5 months ago

    Hallelujur!

  • @saumyasharma8511
    @saumyasharma8511 6 months ago

    So 'start' must be a vector then, of the same dimension as the final output from the encoder?

  • @imotvoksim
    @imotvoksim 6 months ago

    At the end, when you bring the heads dimension out of the resulting relative_position_values matrix, shouldn't the operation be relative_position_values.transpose(1, -1).transpose(0, 1).unsqueeze(0), so we end up with (batch, heads, sequence, context) instead of (batch, heads, context, sequence)?

    • @ChrisMcCormickAI
      @ChrisMcCormickAI 6 months ago

      Good catch! Yes, the context and sequence are in the wrong order (and I've ignored the batch) - your solution puts things in the correct order. We switch to einops later as we put everything together so this will be corrected in later videos. Glad you're enjoying the series :)
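      For reference, the einops version of that reordering — assuming the bias tensor has shape (sequence, context, heads), as the transposes above imply — would look something like:

          from einops import rearrange

          # (sequence, context, heads) -> (1, heads, sequence, context), i.e. (batch, heads, seq, ctx)
          bias = rearrange(relative_position_values, "seq ctx heads -> 1 heads seq ctx")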

  • @imotvoksim
    @imotvoksim 6 months ago

    Just commenting to say that this series is appreciated and I took the weekend to follow along! Time well spent. Hopefully continue next weekend

  • @imotvoksim
    @imotvoksim 6 months ago

    Thank you very much! A lot of knowledge very concisely (and enjoyably) delivered. Following along and coding my own version, learned a lot already! I knew all of this at a theoretical level but this brings me up to a new dimension (pun intended, haha). Also, for broadcasting, I suppose .masked_fill() also broadcasts, because at first I thought I'd have to create a 4D mask.

  • @mm100latests5
    @mm100latests5 6 months ago

    awesome!

  • @Tripp111
    @Tripp111 6 months ago

    Thank you.

  • @AnkitMishra-hd5uu
    @AnkitMishra-hd5uu 6 months ago

    I'm unable to get the data, can anyone share a link to the SQuAD dataset?

  • @RemessOfficial
    @RemessOfficial 6 months ago

    Your videos are sooo good. Please keep making more of these!

  • @SanthoshKumar-dk8vs
    @SanthoshKumar-dk8vs 6 months ago

    Optimizer.step() needs to be outside the segment loop right?

    • @ChrisMcCormickAI
      @ChrisMcCormickAI 6 months ago

      The outer loop here is just unpacking a group of data segments, the inner segment loop does all the "normal" training loop operations over those segments.

  • @Breaking_Bold
    @Breaking_Bold 6 months ago

    Excellent, very good... Chris, I wish I had found this video channel sooner.

  • @mm100latests5
    @mm100latests5 6 months ago

    great videos, hate when they end!

    • @typon1
      @typon1 6 months ago

      bro left it on a cliff hanger

  • @mohammadrezarezaei2230
    @mohammadrezarezaei2230 6 months ago

    Finally, somebody who knows what he's talking about... Just to mention, we appreciate your efforts ❤

  • @typon1
    @typon1 6 months ago

    excellent video brother

  • @Tripp111
    @Tripp111 6 months ago

    Thank you.

  • @alexeponon3250
    @alexeponon3250 6 months ago

    Man go till the end don’t stop this series. I was waiting for that for so long

  • @crtp47
    @crtp47 6 months ago

    Long live the Algorithm for recommending this to me.

  • @mm100latests5
    @mm100latests5 6 months ago

    thanks, really good videos

  • @kunalsuri8316
    @kunalsuri8316 6 months ago

    Thanks for these videos. What will the course cover that these videos won't? Will you be guiding us in implementing multiple papers or only one?

    • @ChrisMcCormickAI
      @ChrisMcCormickAI 6 months ago

      Glad you like it! At this point we're only implementing one paper, although there's plenty of practice because Memorizing Transformers incorporates ideas from lots of different papers. We're not sure about the exact content of the course but are thinking it would include quizzes, assignments, and exercises - it also depends on feedback and what viewers think is most useful :) Towards the end of the series on YouTube we'll provide an update on the status of the course. Thanks!

  • @vipinkataria2209
    @vipinkataria2209 6 months ago

    Best Instructor!!!

  • @abeersalam1623
    @abeersalam1623 6 months ago

    Sir, I'm new to this field. My research topic is about automatically evaluating essay answers using BERT. What should I learn in advance so that I pick up only the main points related to my research and am not distracted by too much information? Also, could you please give me your email? I want to consult you. Thank you.

  • @Tripp111
    @Tripp111 7 months ago

    Epic! Ready for the next video!

  • @KristijanKL
    @KristijanKL 7 months ago

    This is the first thing I did when I got GPT-3 access - so great to see it automated online and now running locally.

  • @hey-its-me239
    @hey-its-me239 7 months ago

    Thank you so much for making this playlist! It isn't just BEST OF ALL for bert, but for transformers intuition as well. Great way to go!!!

  • @fire17102
    @fire17102 7 months ago

    Hi Chris, hope you are well. Would love to see an update on RAG, best practices or frameworks, especially on live, changing data. For example, in this video you use GoT discussions, which don't change. But let's say we do advanced RAG on data that is updated constantly, like items and costs in a warehouse. Availability and costs change, so we need to make sure to update/delete previous chunks which now have the wrong details. Thanks a lot & all the best

  • @devinhoover1129
    @devinhoover1129 7 months ago

    Excellent work.

  • @brandonheaton6197
    @brandonheaton6197 7 months ago

    There is a tight correlation between the quality of sources cited and the quality of material produced. The quality is high all around here

  • @TreeLuvBurdpu
    @TreeLuvBurdpu 7 months ago

    "The Talent Code" is one of my favorite books. So much gold in the domain of self-pedegogy and instruction.

  • @truehighs7845
    @truehighs7845 7 months ago

    Looking forward to part 2, but I suspect this will be much longer, thanks a lot for the trouble of explaining all this to noobs!

  • @toxicbisht4344
    @toxicbisht4344 7 months ago

    Waiting for the next part

  • @sid-prod
    @sid-prod 7 months ago

    this is so insightful, really glad i found this channel

  • @suki0venkat
    @suki0venkat 7 months ago

    Thanks!

  • @alexeponon3250
    @alexeponon3250 7 months ago

    Hopefully the next episode will come soon

  • @kawsershovon3005
    @kawsershovon3005 7 months ago

    How many episodes in total?