How Large Language Models Work

  • Published 18 Nov 2024

COMMENTS • 232

  • @mindofpaul9543
    @mindofpaul9543 7 months ago +479

    I don't know what is more impressive, LLMs or this guy's ability to write backwards perfectly.

    • @patmil8314
      @patmil8314 7 months ago +71

      The whole thing is flipped, I guess. He's "writing left-handed" and we all know that's impossible.

    • @djham2916
      @djham2916 7 months ago +15

      It's mirrors and a screen

    • @catherinel7718
      @catherinel7718 7 months ago +13

      I have a teacher who can write backwards perfectly. It's creepy lol

    • @chrismartin9769
      @chrismartin9769 7 months ago +4

      There are videos that show you how people do this: it's a visual trick, not a dexterity masterclass ;)

    • @gatsby66
      @gatsby66 7 months ago

      @djham2916 And smoke!

  • @dennisash7221
    @dennisash7221 1 year ago +45

    Very nice explanation: short and to the point, without getting bogged down in detail that is often misunderstood. I will share this with others.

  • @surfercouple
    @surfercouple 9 months ago +13

    Nicely done! You explain everything very clearly. This video is concise and informative. I will share with others as an excellent foundational resource for understanding LLMs.

  • @saikatnextd
    @saikatnextd 9 months ago +6

    Martin Keen, awesome as usual... so natural. I love his talks, and somehow I owe him my understanding of complicated subjects in AI. Thanks...

  • @DilshanBoange
    @DilshanBoange 1 year ago +27

    Great video presentation! Martin Keen delivers a superbly layman-friendly elucidation of what is otherwise very 'high-tech talk' to people like me who do not come from a tech-based professional background. This type of content is highly appreciated, and in fact motivates further learning on these subjects. Thank you IBM, Mr. Keen & team. Cheers to you all from Sri Lanka.

    • @pineapple4199
      @pineapple4199 2 months ago

      Hi, I'm an English learner. Thanks for your comment, which expresses my thoughts accurately; your comment is long and very nice for me to learn English grammar from. All the best.

  • @KageManTV
    @KageManTV 9 months ago +1

    Really really enjoyed this primer. Thank you and great voice and enthusiasm!

  • @vexy1987
    @vexy1987 3 months ago +1

    Seeing Martin here was a pleasant surprise. 🍻

  • @sivasakthisivagnanam886
    @sivasakthisivagnanam886 1 month ago

    I was looking for an intro and for what fine-tuning is. You are a good presenter, and I love the presentation. On point :)

  • @MrSouks
    @MrSouks 3 months ago +1

    Excellent. That did the job for me. Thanks Martin.

  • @CyberEnlightener
    @CyberEnlightener 1 year ago +8

    The term "large" does not refer to large data; to be precise, it is the number of parameters that is large. So, a slight correction.

    • @dennisash7221
      @dennisash7221 1 year ago +1

      I do believe that "large" in LLM refers both to the large amount of data and to the large number of parameters, so both are correct; but there is a prerequisite that the data be large, not only the parameters.

    • @TacoMaster07
      @TacoMaster07 1 year ago +2

      There's a lot of params because of the huge dataset

  • @kevnar
    @kevnar 11 months ago +5

    Imagine a world where wikipedia no longer needs human contributors. You just upload the source material, and an algorithm writes the articles and all sub-pages, listing everything it knows about a certain fictional character because it read the entire book series in half a second. Imagine having a conversation with the world's most eminent Star Wars expert.

  • @Pontie66
    @Pontie66 10 months ago +1

    Hey, nice job!!! Yeah, I'd like to see more of these kinds of subjects in the present and the future as well!!!

  • @hatersgonnalovethis
    @hatersgonnalovethis 8 months ago +11

    Wait a minute. Did he really write in mirror handwriting?

    • @michaelcharlesthearchangel
      @michaelcharlesthearchangel 7 months ago +1

      AI was used to make it appear that he can write on your screen.

    • @penguinofsky
      @penguinofsky 6 months ago +8

      He writes it normally, but the video is flipped horizontally.

    • @thegamernoobOG
      @thegamernoobOG 3 months ago

      If so, he is really good at it.

    • @Nishantmakadiya
      @Nishantmakadiya 16 days ago

      @penguinofsky These guys are on every video. Do they really not have common sense?

  • @jonniuss
    @jonniuss 6 months ago

    Tbh, I just love his voice and am ready to listen to all his videos 🤗

  • @dmitriyartemyev3329
    @dmitriyartemyev3329 6 months ago +2

    IBM, big thanks to you for all these videos! These videos are really helpful.

  • @decryptifi2265
    @decryptifi2265 5 months ago

    Very nice and crisp explanation. Love it. Thanks!

  • @makaveli087
    @makaveli087 1 month ago +2

    I'm way more impressed with the Digital "Dry-Erase Board" than all the useless AI crap. That's really nice.

  • @Private-qg5il
    @Private-qg5il 1 year ago +29

    In this presentation, there was not enough detail on Foundation Models as a baseline to then explain what LLMs are.

    • @Gordin508
      @Gordin508 1 year ago +8

      The foundation model is trained on a gigantic amount of general text data on a very general task (such as language modeling, which is next-word prediction). The LLM is then created by fine-tuning a foundation model (a specific case of a "pretrained model") on a more specific dataset (e.g. source code), sometimes also for a more specific task.
      The foundation model is basically a stem cell for LLMs. It does not yet fulfill a specific purpose, but since it saw tons of data, it can be adapted to (pretty much) anything. Training the foundation model is extremely expensive, but it makes the downstream LLMs much cheaper, as they do not need to be trained from scratch.
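
      That pretrain-then-fine-tune idea can be sketched in a few lines of Python. This is only a toy illustration (a counts-based next-word predictor, not a real transformer); the corpora and weights are made up. "Pretraining" builds general word-pair statistics, and "fine-tuning" continues training on a small domain corpus, shifting the model's predictions without starting from scratch.

```python
from collections import defaultdict, Counter

class BigramLM:
    """Toy next-word predictor: counts word pairs, predicts the most frequent follower."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def train(self, text, weight=1):
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.counts[prev][nxt] += weight

    def predict(self, word):
        followers = self.counts[word.lower()]
        return followers.most_common(1)[0][0] if followers else None

# "Pretrain" on general text (the foundation-model stage).
lm = BigramLM()
lm.train("the sky is blue and the grass is green and the sky is clear")

# "Fine-tune" by continuing training on a small domain corpus
# (the extra weight mimics focused training on specific data).
lm.train("in the sonnet the sky is poetry", weight=5)

print(lm.predict("is"))  # → poetry (the fine-tuning data now dominates)
```

      The point of the sketch is the cost asymmetry the comment describes: the general statistics are gathered once, and each downstream "model" only needs a small additional pass.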

  • @mandyjacotin8321
    @mandyjacotin8321 8 months ago

    That's amazing! Our company has a great project that can benefit from this and then use the proceeds to benefit mankind. How can we speak more about this? I am very intrigued.

  • @rappresent
    @rappresent 10 months ago +1

    Great presentation, feels like a personal assistant. Great!

  • @dsharma6694
    @dsharma6694 6 months ago

    Perfect for learning LLMs.

  • @peterprogress
    @peterprogress 9 months ago

    I've liked and subscribed and done it again a thousand times in my mind

  • @NicholasDWilson
    @NicholasDWilson 6 months ago

    Lol. I only knew Martin Keen from Brulosophy. This is sort of mindblowing.

  • @SuperRider-RS
    @SuperRider-RS 6 months ago

    Very elaborate explanation. Thank you

  • @ArgumentumAdHominem
    @ArgumentumAdHominem 9 months ago

    Nice explanation! But I am still missing the most important point: how does one control the relevance of the produced results? E.g. ChatGPT can answer questions. So far, what you explained is a model that can predict, i.e. generate, the next word in a document, given what has already been written. However, given a set of existing sentences, there is a multitude of ways to produce a next sentence that would be somewhat consistent with the rest of the document. How does one go from plausible text generators to desired text generators?

    • @Leonhart_93
      @Leonhart_93 8 months ago

      Statistical likelihood based on the training data. And then there is a random seed, so that there is a little variation between outputs and the answer isn't always exactly the same for the same prompt.
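
      The "statistical likelihood plus a random seed" idea can be shown concretely. A minimal sketch, with made-up token scores (the `logits` dict is invented for illustration, not taken from any real model): the model assigns scores to candidate next tokens, a softmax turns them into probabilities, a temperature sharpens or flattens that distribution, and a seeded random draw picks one token, so the same seed reproduces the same "random" choice.

```python
import math, random

# Hypothetical next-token scores (logits) a model might assign after "The sky is".
logits = {"blue": 5.0, "clear": 3.5, "falling": 1.0}

def sample_next(logits, temperature=1.0, seed=None):
    """Softmax over temperature-scaled logits, then one seeded random draw."""
    rng = random.Random(seed)
    scaled = {t: v / temperature for t, v in logits.items()}
    m = max(scaled.values())                          # subtract max for stability
    weights = {t: math.exp(v - m) for t, v in scaled.items()}
    total = sum(weights.values())
    r = rng.random() * total
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token

# Same seed -> same draw; omitting the seed lets answers vary run to run.
print(sample_next(logits, temperature=0.8, seed=42))
```

      Lower temperatures make the top-scoring token win almost always; higher ones let the unlikely tokens through more often, which is one knob behind the variation the reply describes.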

  • @sheikhobama3759
    @sheikhobama3759 9 months ago

    1 PB = 1024 TB
    1 TB = 1024 GB
    1 GB = 1024 MB
    1 MB = 1024 KB
    1 KB = 1024 B
    1 B = 8 bits
    So 1 PB = 1024 * 1024 * 1024 * 1024 * 1024 bytes.
    Multiply it again by 8 to get the number of bits.
    Guys, do correct me if I'm wrong!!
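
    The arithmetic above checks out under the binary (1024-based) convention; here is a quick sketch verifying it. (Note the decimal SI convention instead defines 1 PB as 10^15 bytes; the 1024-based unit is strictly a pebibyte.)

```python
# Binary units, as in the comment above: each step is a factor of 1024.
KB = 1024
MB = 1024 * KB
GB = 1024 * MB
TB = 1024 * GB
PB = 1024 * TB

print(PB)      # 1125899906842624 bytes  (= 2**50)
print(PB * 8)  # 9007199254740992 bits   (= 2**53)
</imports>```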

  • @vicweast
    @vicweast 9 months ago

    Very nicely done.

  • @mauricehunter7803
    @mauricehunter7803 7 months ago

    Other than the physical limitation of space that any other computer has, it seems to me that technology like this should be applicable to robotics and allow for the creation of much smarter and more adaptive robotics projects and creations.

  • @cushconsultinggroup
    @cushconsultinggroup 1 year ago

    Intro to LLMs. Thanks

  • @eddisonlewis8099
    @eddisonlewis8099 9 months ago

    Interesting explanation

  • @edchelstephens
    @edchelstephens 2 months ago

    Thank you sir!!

  • @SatishDevana
    @SatishDevana 10 months ago +1

    Thank you for posting this video. What are the other architectures available apart from Transformer?

  • @vainukulkarni1936
    @vainukulkarni1936 8 months ago

    Very nice explanation. Are these foundation models proprietary? How many foundation models exist?

  • @narayanamurthy5397
    @narayanamurthy5397 8 months ago

    Good to know about how LLMs work, Mr. Martin Keen. Can you focus more on LLM modelling and exactly what related stuff (programming skills) is required? Thank you so much; it was a pleasant video and I appreciated it.

  • @amparoconsuelo9451
    @amparoconsuelo9451 1 year ago +3

    Can subsequent SFT and RLHF with different, additional, or less content change the character of, improve, or degrade a GPT model?

  • @GuyHindle
    @GuyHindle 1 year ago +2

    What is meant by "understanding" when referring to "sequences of words"? I mean, what does "understanding" mean in that context?

  • @poornimak.r.a7345
    @poornimak.r.a7345 2 months ago

    I'd like to learn LLMs from scratch. Is there any roadmap for how to learn LLMs thoroughly?

  • @chetanrawatji
    @chetanrawatji 9 months ago

    Thank You Sir ❤

  • @Chestnut_Ella_Eq
    @Chestnut_Ella_Eq 3 months ago

    Very good!

  • @EmpoweredWithZarathos2314
    @EmpoweredWithZarathos2314 1 year ago

    such a great video

  •  1 year ago

    Great explanation ❤

  • @AIpowerment
    @AIpowerment 1 year ago +3

    Did you just mirror the screen, so it looks like you can write right-to-left? Wow!

  • @kuyajon
    @kuyajon 11 days ago

    No matter what progress is made in this space, an LLM won't help me win an argument with my wife.

  • @Pontie66
    @Pontie66 10 months ago

    Hi Martin, are you around? Could you please talk about the "Emerging LLM App Stack"? Thanks in advance!

  • @fawadsheikh4294
    @fawadsheikh4294 1 month ago

    Thanks for the great video. I'd like to know more about how we can build business applications using LLMs. Like you said, we can train an LLM on some specific task. Will it be done in the cloud, where the LLM is hosted and the training data is uploaded, and then we can get useful output from the LLM? I hope you get the idea of what I, and maybe others, are looking for.

    • @ДмитроПрищепа-д3я
      @ДмитроПрищепа-д3я 14 days ago

      Yep, you can do something like this if you really want to (and if you have a ton of usable data to make a dataset out of). You spin up a rather expensive cloud server, then you download the pure, non-quantized weights of the model you like onto that server together with your data and set up a fine-tune of that model on your dataset. The result will ideally be a model that scores higher on your dataset than the base model.
      Although, if you want a secret: most, if not every, company that uses these chatbots doesn't ever bother with something like this; they just use the plain ChatGPT API from OpenAI with their own custom system prompt, because it's cheaper in the short to mid term. The downside is the hilarity that can happen with a chatbot like that, because, again, it's just the base model that has been told how to act by the system prompt, not fundamentally changed in its weights to act that way. I've seen a few funny examples of this, where a candidate for a job was "interviewed" by a chatbot and, upon realizing that, he casually asked the "HR rep" to write a Python script that inverts a binary tree, which the chatbot swiftly did with zero questions and in less than half a minute.

  • @jjcsantanna
    @jjcsantanna 2 months ago

    Thanks for the EXCELLENT content and the great education work that you do.
    I have a question: how do current LLMs (e.g. GPT o1, LLaMA 3.1, Gemma) "decide" what to do? For example, when I ask one to self-judge its output ("accuracy_level"), it is very precise. Another example: when I ask for a text on some topic and my next prompt asks it to export the text to an MS Word file, it will write Python code to generate the MS Word file (but I didn't explicitly tell the LLM to solve my issue by writing code). How did it decide to do that?

  • @ApPillon
    @ApPillon 8 months ago

    Thanks dude

  • @SuccessMindset2180
    @SuccessMindset2180 2 months ago

    That’s a very handy way to find limits of AI

  • @She_cooks2023
    @She_cooks2023 8 months ago

    Amazing!

  • @ritambharachauhan59
    @ritambharachauhan59 5 months ago

    Can you guys create some example of using/creating an LLM?

  • @echtlahm
    @echtlahm 4 months ago +1

    Unbelievable how he writes mirrored words so quickly.

  • @BAHAGEELAHMED
    @BAHAGEELAHMED 4 months ago

    So LLMs are just for text? We can't use them for automation stuff?

  • @jeu198
    @jeu198 4 months ago +1

    How does ChatGPT make graphs? That's not even language. I got it to make a graph plotting the entropy change of the universe between the Big Bang and entropy heat death. It chose appropriate units, labeled the graph with notable events, and even put the legend at the bottom right, like I asked.

  • @brookster23701
    @brookster23701 2 months ago

    Any suggestions on implementing LLMs for RPG AS400 coding?

  • @blessingukachukwu
    @blessingukachukwu 5 months ago

    Very nicely

  • @korgond
    @korgond 5 months ago +2

    I got a remote job offer. The duty is AI training for an LLM.
    Shall I go for it? What do you think?

    • @hoti47
      @hoti47 4 months ago

      Go for it!

    • @1yndonn3u
      @1yndonn3u 1 month ago

      So, you said this about 4 months ago; what are you doing today? AI training for an LLM?

  • @hi5wifi-s567
    @hi5wifi-s567 3 months ago

    What about customer service with movie searching?

  • @DjVortex-w
    @DjVortex-w 1 year ago +3

    How does ChatGPT know about itself and its own behavior? If you ask questions about those topics, it will answer intelligently and accurately about itself and its own behavior. It will not just spout random patterns from the internet. How does it know this?

    • @dennisash7221
      @dennisash7221 1 year ago +13

      To start with, ChatGPT does not "know itself"; it is not self-aware. What you are seeing when GPT answers the question "Who are you?" is a pre-programmed response that has been put there by the trainers of the model, something like a toy with prerecorded messages that you can hear when pressing a button or pulling a string.
      ChatGPT does not "know" anything; it simply responds to your prompts, or as you see them, your questions, with the appropriate answers.

    • @Joyboy_1044_
      @Joyboy_1044_ 1 year ago +5

      GPT doesn't possess genuine awareness, but it can certainly mimic it to some extent

  • @lexiZero-w7n
    @lexiZero-w7n 1 year ago +38

    Why does a gigabyte have more words than a petabyte? I am lost already!!! 1 gig = 178 million words, 1 petabyte is 1.8x10^14 words, and there are only 750,000 words in the dictionary?

    • @turna23
      @turna23 1 year ago +4

      I got this far, stopped the video and searched for a comment like this. Why isn't this the top comment?

    • @abdulmueed2844
      @abdulmueed2844 1 year ago +12

      It's not total unique words… basically it's text from different websites, different sentences… So let's say you want an LLM to answer you about coding: you train it on all the data on Stack Overflow, LeetCode, etc., every available resource… so it knows that when users asked how to run a loop in Java, the replies were x, y, z…
      It's more of a glorified, better Google search that feels like intelligence…

    • @dasikakn
      @dasikakn 11 months ago +24

      He said 178m words in a 1 GB sized file. And a petabyte sized file has 1 million _gigabytes_ in it. So, loosely speaking, you multiply 178m by 1 million to get the number of words in an LLM's training data. But… it's not being fed unique words. It's getting word patterns. Think about how we speak… our sentences are word patterns that we use in mostly predictable structures, and then we fill in the blanks with richer words as we get older to convey what we want to say, with synonyms etc.
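
      The multiplication in this reply resolves the confusion above; a two-line sketch makes it explicit (using the 178-million-words-per-GB figure quoted from the video and the decimal convention of 10^6 GB per PB):

```python
words_per_gb = 178_000_000      # figure quoted from the video for a 1 GB file
gb_per_pb = 1_000_000           # decimal convention: 1 PB = 10^6 GB
words_per_pb = words_per_gb * gb_per_pb
print(f"{words_per_pb:.2e}")    # 1.78e+14, i.e. the ~1.8 x 10^14 mentioned above
```

      So the petabyte figure is the larger one; the numbers count word occurrences (patterns), not unique dictionary entries.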

    • @intptointp
      @intptointp 9 months ago +8

      What makes knowledge so complex is not the words, but the way the words are used.
      Choose any word and you will see that it is linked with hundreds of topics and contexts.
      If I say draw, I could be talking about
      drawing water
      drawing class
      drawing during class
      drawing my friend
      drawing a dog
      drawing a long time
      drawing that sold for a lot of money
      I like drawing
      And so on. These all code for a different idea. And it is these "ideas," or relationships, that foundation models encode.
      With these relationships, you now have the probabilistic weights that allow you to construct realistic and correct-sounding sentences that are also likely accurate, because of the enormous dataset the model was trained on.
      Another context idea: you want to connect "fish" to "swim." This link is highly weighted in the LLM.

    • @ernststravoblofeld
      @ernststravoblofeld 9 months ago

      Typo

  • @nuwayir
    @nuwayir 1 year ago +1

    So are transformers only for language and text-related things?

    • @freelancerZohaib
      @freelancerZohaib 11 months ago

      No, for image processing too.

    • @freelancerZohaib
      @freelancerZohaib 11 months ago

      Transformer models, originally developed for natural language processing tasks, have been extended to computer vision tasks as well. Vision Transformer (ViT) is an example of a transformer model adapted for image processing. Instead of using convolutional layers, ViT uses self-attention mechanisms to capture relationships between different parts of an image.
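
      The ViT front end described here is easy to sketch: the image is cut into non-overlapping patches, and each flattened patch is then treated like a token in a sentence. A toy, dependency-free version (the 4x4 "image" and 2x2 patch size are made up; real ViTs use e.g. 224x224 images with 16x16 patches, plus a learned linear projection that is omitted here):

```python
def patchify(image, patch):
    """image: H x W nested lists; returns a list of flattened patch vectors."""
    h, w = len(image), len(image[0])
    patches = []
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            block = [image[i + di][j + dj] for di in range(patch) for dj in range(patch)]
            patches.append(block)
    return patches

image = [[r * 4 + c for c in range(4)] for r in range(4)]  # toy 4x4 "image"
tokens = patchify(image, 2)
print(len(tokens), len(tokens[0]))  # 4 patches, each a 4-value "token"
```

      Once the image is a sequence of patch tokens, the same self-attention machinery used for words applies unchanged.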

  • @kiyonmcdowell5603
    @kiyonmcdowell5603 9 months ago

    What's the difference between large language models and text-to-speech?

  • @Trey_v3.3
    @Trey_v3.3 5 months ago

    Knowing how these work only makes the idea that companies have started using LLMs to make decisions seem even more stupid than I already thought it was.

  • @kylebrault4414
    @kylebrault4414 6 days ago

    Yea, but did the LLM use a 30 minute or a 60 minute boil?

  • @schonsospaet22
    @schonsospaet22 7 months ago

    Thank you for explaining! 🪲 Min. 3:37 is the major "bug" 🐞 in the learning system: *it does not start off with a related guess, it's random.* 🌬
    I can't wait until the *brain slice chips* can last longer and get trained like a real human brain, one that actually learns by feelings and repetition instead of random guessing and then correcting itself until the answer is appropriate. They could soon replace A.I. technology completely, so maybe we shouldn't hype it too much.
    After all the effort, energy, and money we put into A.I. and new technology, there's no doubt that *we could have educated our children better* instead of creating a fake new world based on pseudo-knowledge extracted from the web. 👨‍👩‍👧‍👦👨‍👩‍👧‍👧 Nobody wants to be replaced without having the benefit of the machine. General taxes on machines and automated digital services could fund better education for humans.
    Dear A.I.: You know what is real fun? Planting a tree in real life! 🍒

  • @yaguabina
    @yaguabina 10 months ago +1

    Does anyone know what program he uses to sketch on screen like that?

    • @sebbejohansson
      @sebbejohansson 10 months ago +4

      It's a glass window; he is physically writing on it. For it to show the correct way (and so he doesn't have to write backwards), they just flip the image!

    • @o__bean__o
      @o__bean__o 3 months ago

      @sebbejohansson How is it glowing?

  • @VRchitecture
    @VRchitecture 1 year ago

    Something tells me “The sky is the limit” here 👀

  • @tekad_
    @tekad_ 9 months ago

    How did you learn to write backwards?

  • @rangarajannarasimhan341
    @rangarajannarasimhan341 9 months ago

    Lucid, thanks

  • @murunbuch
    @murunbuch 1 month ago

    For me, the backwards writing detracts from the presentation.

  • @RC19786
    @RC19786 1 year ago

    Could have been better; most of it was speculative when it came to application building, not to mention the laws governing it.

  • @niket1231
    @niket1231 8 months ago

    Need one use case

  • @ClifCollins-k8d
    @ClifCollins-k8d 4 months ago

    I am still in the dark as to the purpose of LLMs. I can see no practical purpose. Just as in the 70's we had parallel processing (Cray 1) that went nowhere except in a very few uses (GPU). "You need a dictionary": sure, you could scab Webster's or Oxford's source code, kind of illegal. The other issue is that languages are very dynamic, just as our political boundaries move constantly. The reality is that most companies (IBM, GM, Amazon, USPO, ...) could work internally with maybe 500 words and terms. The rest are simply a "list of" which is specific to a given term (boy names, car parts, products, ...). The issue is then who maintains the list. Whether LLM, manual syntax scripts, buttons, or popup forms, the result is the same: "do this action with these qualifiers". An LLM is still just another special application on top of conventional applications. We still cannot add two numbers (we add a range of numbers). We still program in 1 dimension, in black and white, in computer languages that we cannot read or understand. ("A = 1") I do not know what "A" is, I do not know what "1" is, I do not know anything about the why, when, validity, usefulness, or purpose.
    Static technologies we do not need. Alternative ways of saying the same thing we do not need. Knowledge is knowledge (1 foot equals 12 inches); much knowledge can never be derived. The iPhone cannot be answered with one hand ("Slide to Answer"). I cannot set some of my clocks without documentation, and why do I have to set them? Fix the simple. Research is great, fine, but do not propagate sales hype over progress. With 49 years of no progress in software technology, I get pissed that we have done nothing. I see LLMs as just another application layer; if it helps, great.
    The real answer is to have user-definable context. Absolute access by users to their own information. User access to all source code. User-controlled security. Users' absolute access to the information, communication, and hardware that they own. Not another application that we have little or no control over.

  • @shravanardhapure4961
    @shravanardhapure4961 1 year ago

    What is a quantized version of a model, and how is it created?

    • @tonyhawk123
      @tonyhawk123 1 year ago

      A model consists of lots of numbers. Those numbers would be made smaller: fewer bits per number.
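
      A minimal sketch of one common scheme, symmetric int8 quantization (the weight values are made up, and real quantizers work per-tensor or per-channel with calibration, which is omitted here): each float weight is mapped to an integer in [-127, 127] plus a single shared scale factor, trading a little precision for a 4x size reduction versus 32-bit floats.

```python
def quantize(weights):
    """Map floats to int8-range integers plus one scale factor."""
    scale = max(abs(w) for w in weights) / 127
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from the integers."""
    return [x * scale for x in q]

weights = [0.42, -1.27, 0.05, 0.91]      # made-up "model weights"
q, scale = quantize(weights)
approx = dequantize(q, scale)
print(q)  # [42, -127, 5, 91] — small integers instead of 32-bit floats
print(max(abs(a - b) for a, b in zip(weights, approx)))  # tiny rounding error
```

      The model file then stores the integers and the scale; at inference time the numbers are dequantized (or the arithmetic is done directly in int8), which is why quantized models are smaller and often faster at a small accuracy cost.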

  • @pdjhh
    @pdjhh 11 months ago

    So LLM-based AI is just language, not 'intelligence'? Based on what it's read, it knows or guesses what usually comes next? So zero intelligence?

    • @mauricehunter7803
      @mauricehunter7803 7 months ago

      From what I can tell of the subject matter, it's more of a mimicked intelligence. That's why the analogy of a parrot was used: this technology can learn, repeat things back, and guess, in a limited way, what's coming next. But there's a certain level of depth and nuance that a human possesses that parrots and ChatGPT-style tech do not.

  • @devperatetechno8151
    @devperatetechno8151 1 year ago

    But how is it possible for an LLM to innovate when it's trained within the boundaries of human knowledge?

    • @mauricehunter7803
      @mauricehunter7803 7 months ago

      I'm far from an expert on the matter, but the simple answer to your question is that it's programmed to be able to learn and adjust according to many various inputs. Arguably, that's where robot technology should be headed next: having an ability to learn and react to that learning.

  • @lmarcelino555
    @lmarcelino555 9 months ago

    I don’t even know where to begin. 😵‍💫

  • @NihalNelsonD
    @NihalNelsonD 21 days ago

    LLM = Large Language Model 😲

  • @boriscrisp518
    @boriscrisp518 1 year ago

    Ugh corporate videos..... the horror

  • @krishnakishorenamburi9761
    @krishnakishorenamburi9761 8 months ago

    @2:15 a different sequence. This is just for fun.

  • @saadanees7989
    @saadanees7989 9 months ago

    Is this video mirrored?

  • @TheDunningKrugerEffectisReal
    @TheDunningKrugerEffectisReal 28 days ago

    2:43 squeaky sounds

  • @shshe6515
    @shshe6515 8 months ago

    Still don't get it.

  • @dirkbruenner
    @dirkbruenner 10 months ago

    How does this presentation work? You are not mirror-writing behind a glass pane, are you?

    • @sebbejohansson
      @sebbejohansson 10 months ago

      Yea, it's a glass window! He is physically writing on it. For it to show the correct way (and so he doesn't have to write backwards), they just flip the image!

  • @Balthazar2242
    @Balthazar2242 1 year ago +5

    How is he writing backwards?

    • @IBMTechnology
      @IBMTechnology 1 year ago +1

      See ibm.biz/write-backwards

    • @karolinasobczyk-kozowska3717
      @karolinasobczyk-kozowska3717 11 months ago

      Wow! It's a clever idea 😊

    • @cvspvr
      @cvspvr 5 months ago

      @IBMTechnology Oh yeah? Then how come your tattoo is the right way round?

    • @micc1211
      @micc1211 4 months ago

      Write normally, then mirror the video. Should work. Notice how he is writing with his left hand, yet most people are right-handed.

    • @ryanmacalandag5279
      @ryanmacalandag5279 3 months ago

      All of you are WRONG. All of it was written before they started. As they filmed, he was actually ERASING the text as he went along. He had to learn how to speak backwards, though, which I think is impressive.

  • @sankarnatarajan8109
    @sankarnatarajan8109 3 months ago

    Eventually LLMs will develop LLMs, so no humans will be needed. This is not far away, I guess, given the rapid speed of this technology. It's really scary for future generations. What types of employment will still exist? Any guess?

  • @eregoldamite8739
    @eregoldamite8739 7 months ago

    How are you able to write that way?

    • @Saturn_Enslaved
      @Saturn_Enslaved 6 months ago

      My chemistry professor does videos with one and explains it in a video: Chemistry with Dr. Steph (that's her channel); it's the featured video on her page.

  • @sankarnatarajan8109
    @sankarnatarajan8109 3 months ago

    So the only job left is knowing how to ask the right question. So you call them a prompt engineer 🙂

  • @shawweeks1242
    @shawweeks1242 1 month ago

    How is "the sky is bug" not a thing
