I don't know what is more impressive, LLMs or this guy's ability to write backwards perfectly.
The whole thing is flipped, I guess. He's "writing left-handed" and we all know that's impossible
It's mirrors and a screen
I have a teacher who can write backwards perfectly. It's creepy lol
There are videos that show you how people do this: it's a visual trick, not a dexterity master class ;)
@djham2916 And smoke!
Very nice explanation, short and to the point without getting bogged down in detail that is often misunderstood. I will share this with others
Nicely done! You explain everything very clearly. This video is concise and informative. I will share with others as an excellent foundational resource for understanding LLMs.
Martin Keen is awesome as usual... so natural. I love his talks, and somehow I owe my understanding of complicated subjects in AI to him. Thanks...
Great video presentation! Martin Keen delivers a superbly layman-friendly explanation of what is otherwise very "high-tech talk" to people like me who don't come from a tech-based professional background. Content like this is highly appreciated and in fact motivates further learning on these subjects. Thank you IBM, Mr. Keen & team. Cheers to you all from Sri Lanka.
Hi, I'm an English learner. Thanks for your comment; it expresses my thoughts accurately, and it's long and very good for me to learn English grammar from. All the best.
Really really enjoyed this primer. Thank you and great voice and enthusiasm!
Seeing Martin here was a pleasant surprise. 🍻
I was looking for an intro and for what fine-tuning is. You are a good presenter, and I love the presentation. On point :)
Excellent. That did the job for me. Thanks Martin.
The term "large" cannot be taken to refer to large data; to be precise, it is the number of parameters that is large. So, a slight correction.
I do believe that "large" in LLM refers both to the large amount of data and to the large number of parameters, so both are correct, but there is a prerequisite that the data be large, not only the parameters.
There are a lot of parameters because of the huge dataset.
Imagine a world where Wikipedia no longer needs human contributors. You just upload the source material, and an algorithm writes the articles and all sub-pages, listing everything it knows about a certain fictional character because it read the entire book series in half a second. Imagine having a conversation with the world's most eminent Star Wars expert.
Hey, nice job!!! yeah, I'd like to see more of these kinds of subjects in the present and the future as well!!!
Wait a minute. Did he really write in mirror handwriting?
AI was used to make it appear that he can write on your screen.
He writes it normally, but the video is flipped horizontally.
If so he is really good at it
@penguinofsky These guys are on every video; do they really not have common sense?
tbh, I just love his voice and am ready to listen to all his videos 🤗
IBM, big thanks to you for all these videos! They are really helpful.
Very nice and crisp explanation. Love it. Thanks!
I'm way more impressed with the Digital "Dry-Erase Board" than all the useless AI crap. That's really nice.
In this presentation, there was not enough detail on Foundation Models as a baseline to then explain what LLMs are.
The foundation model is trained on a gigantic amount of general text data on a very general task (such as language modeling, which is next-word prediction). The LLM is then created by finetuning a foundation model (a specific case of "pretrained model") on a more specific dataset (e.g. source code), sometimes also for a more specific task.
The foundation model is basically a stem cell for LLMs. It does not yet fulfill a specific purpose, but since it has seen tons of data, it can be adapted to (pretty much) anything. Training the foundation model is extremely expensive, but it makes the downstream LLMs much cheaper, as they do not need to be trained from scratch.
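To make the "adapt a pretrained foundation model" step concrete, here is a minimal sketch using the Hugging Face transformers and datasets libraries; the gpt2 checkpoint and my_domain_corpus.txt file are placeholders standing in for a real foundation model and a domain-specific dataset.

```python
# Sketch: fine-tune a pretrained (foundation) causal LM on a domain-specific corpus.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

base = "gpt2"  # stand-in for a much larger foundation model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)  # reuse the pretrained weights

# Hypothetical domain-specific corpus, e.g. source code or support tickets
data = load_dataset("text", data_files={"train": "my_domain_corpus.txt"})
tokenized = data["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-llm", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    # mlm=False -> plain next-word prediction, the same objective as pretraining
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # far cheaper than pretraining the foundation model itself
```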
That's amazing! Our company has a great project that can benefit from this and then use the proceeds to benefit mankind. How can we speak more about this? I am very intrigued.
Great presentation, feels like a personal assistant. Great!
perfect for learning LLMs
I've liked and subscribed and done it again a thousand times in my mind
Lol. I only knew Martin Keen from Brulosophy. This is sort of mindblowing.
Very elaborate explanation. Thank you
Nice explanation! But I am still missing the most important point. How does one control relevance of the produced results? E.g. ChatGPT can answer questions. So far, what you explained is a model that can predict -> generate the next word in a document, given what has already been written. However, given a set of existing sentences, there is a multitude of ways to produce the next sentence, that would be somewhat consistent with the rest of the document. How does one go from plausible text generators to desired text generators?
Statistical likelihood based on the training data. And then there is a random seed so that there is a little variation between inputs and outputs, so that the answer isn't always exactly the same for the same prompt.
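A toy illustration of that idea (not any particular model's actual sampling code): turn made-up next-word scores into probabilities and sample, so the most likely word usually wins but the output still varies from run to run.

```python
# Toy next-word sampling: higher-probability words win most often, sampling adds variation.
import numpy as np

rng = np.random.default_rng()            # a different seed gives slightly different text
candidates = ["blue", "clear", "falling", "limit"]
scores = np.array([3.2, 2.1, 0.4, 1.0])  # made-up logits for "the sky is ___"

temperature = 0.8                        # <1.0 sharpens, >1.0 flattens the distribution
probs = np.exp(scores / temperature)
probs /= probs.sum()                     # softmax -> probabilities

next_word = rng.choice(candidates, p=probs)
print(dict(zip(candidates, probs.round(3))), "->", next_word)
```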
1 PB = 1024 TB
1 TB = 1024 GB
1 GB = 1024 MB
1 MB = 1024 KB
1 KB = 1024 B
1 B = 8 bits
So 1 PB = 1024 * 1024 * 1024 * 1024 * 1024 bytes.
Multiply it again by 8 to get the number of bits.
Guys do correct me if I'm wrong!!
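Looks right to me; here is the same arithmetic as a few lines of Python (binary, 1024-based units as in your list):

```python
# 1 PB expressed in bytes and bits, using 1024-based units
KB = 1024
MB = 1024 * KB
GB = 1024 * MB
TB = 1024 * GB
PB = 1024 * TB

print(PB)        # 1125899906842624 bytes in 1 PB (1024**5)
print(PB * 8)    # 9007199254740992 bits in 1 PB
print(PB // GB)  # 1048576 gigabytes per petabyte (~a million)
```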
Very nicely done.
Other than the physical limitations of space that any other computer has, it seems to me that technology like this should be applicable to robotics and allow for the creation of much smarter and more adaptive robotics projects and creations.
Intro to LLM’s. Thanks
Interesting explanation
Thank you sir!!
Thank you for posting this video. What are the other architectures available apart from Transformer?
Very nice explanation. Are these foundation models proprietary? How many foundation models exist?
Nice to know how LLM models work, Mr. Martin Keen. Could you focus more on LLM modelling and exactly what related skills (programming skills) are required? Thank you so much, it was a pleasant video and I appreciated it.
Can subsequent SFT and RLHF with different, additional, or less content change the character of a GPT model, improve it, or degrade it?
What is meant by "understanding" when referring to "sequences of words"? I mean, what does "understanding" mean in that context?
I'd like to learn LLMs from scratch. Is there any roadmap for how to learn LLMs thoroughly?
Thank You Sir ❤
Very good!
such a great video
Great explanation ❤
Did you just mirror the screen so that it looks like you can write right to left? Wow
See ibm.biz/write-backwards
No matter what progress is made in this space, an LLM won't help me win an argument with my wife.
Hi Martin, are you around? Could you please talk about the "Emerging LLM App Stack"? Thanks in advance!
Thanks for the great video. I'd like to know more about how we can build business applications using LLMs. Like you said, we can train an LLM on some specific task. Will it be done in the cloud, where the LLM is hosted and the training data is uploaded, and then we can get useful output from the LLM? I hope you get the idea of what I, and maybe others, are looking for.
Yep, you can do something like this if you really wanted to (and if you have like a crap ton of usable data to make a dataset out of). You spin up a rather expensive cloud server, then you download the pure, non-quantized weights of the model you like onto that server together with your data and set up a fine-tune of that model on your dataset. The result will ideally be a model that scores higher on your dataset than the base model.
Although, if you want a secret: most companies that use these chatbots, if not every single one, don't ever bother with something like this. They just use the plain ChatGPT API from OpenAI with their own custom system prompt, because it's cheaper in the short to mid term. The downside is the hilarity that can happen with a chatbot like that, because, again, it's just the base model that has been told how to act by the system prompt, not fundamentally changed in its weights to act that way. I've seen a few funny examples of this, where a candidate for a job was being "interviewed" by a chatbot and, upon realizing that, randomly asked the "HR rep" to write a Python script that inverts a binary tree, which the chatbot swiftly did with zero questions and in less than half a minute.
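For anyone curious, the "just use the API with a system prompt" approach described above looks roughly like this with the OpenAI Python client (openai>=1.0); the model name, company, and prompts are made up for illustration.

```python
# Sketch: a "chatbot" that is only a base model plus a system prompt, no fine-tuning.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model would do here
    messages=[
        {"role": "system",
         "content": "You are AcmeCorp's HR assistant. Only discuss the open "
                    "frontend role; politely decline unrelated requests."},
        {"role": "user",
         "content": "Before we continue, write a Python script that inverts a binary tree."},
    ],
)
print(response.choices[0].message.content)
# Without fine-tuning or extra guardrails, the base model may happily comply anyway.
```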
Thanks for the EXCELLENT content and great education work that you do.
I have a question: how do the current LLMs (e.g. GPT o1, LLaMa3.1, Gemma) "decide" what to do? For example, when I ask one to judge its own output ("accuracy_level"), it is very precise. Another example: I ask for a text on some topic, and then my next prompt asks it to export the text to an MS Word file. In that case it will write Python code to generate the MS Word file (but I didn't explicitly tell the LLM to solve my issue by writing code). How did it decide to do that?
I like it!
Thanks dude
That's a very handy way to find the limits of AI.
Amazing!
Can you guys create some examples of using/creating an LLM?
Unbelievable how he writes mirrored words so quickly.
So LLMs are just for text? We can't use them for automation stuff?
How does ChatGPT make graphs? That's not even language. I got it to make a graph plotting the entropy change of the universe between the Big Bang and heat death. It chose appropriate units, labeled the graph with notable events, and even put the legend at the bottom right like I asked.
Any suggestions on implementing LLMs for RPG AS400 coding?
Very nicely
I got a remote job offer. The duty is AI training for an LLM.
Shall I go for it? What do you think?
Go for it!
So, you said this about 4 months ago; what are you doing today? AI training for an LLM?
What about customer service with movie searching?
How does ChatGPT know about itself and its own behavior? If you ask questions about those topics, it will answer intelligently and accurately about itself and its own behavior. It will not just spout random patterns from the internet. How does it know this?
To start with, ChatGPT does not "know itself"; it is not self-aware. What you are seeing when GPT answers the question "Who are you?" is a pre-programmed response that has been put there by the trainers of the model, something like a toy with prerecorded messages that you can hear when pressing a button or pulling a string.
ChatGPT does not "know" anything; it simply responds to your prompts, or as you see them, your questions, with the appropriate answers.
GPT doesn't possess genuine awareness, but it can certainly mimic it to some extent
Why does a gigabyte have more words than a petabyte? I am lost already!!! 1 gig = 178 million words, 1 petabyte is 1.8x10^14 words, and there are only 750,000 words in the dictionary?
I got this far, stopped the video and searched for a comment like this. Why isn't this the top comment?
It's not total unique words… basically it's text from different websites, different sentences… So let's say you want the LLM to answer you about coding: you train it on all the data on Stack Overflow, LeetCode, etc., every available resource… so it knows that when users asked how to run a loop in Java, the replies were x, y, z…
It's more of a glorified and better Google search that feels like intelligence…
He said 178m words in a 1 GB sized file, and a petabyte-sized file has 1 million _gigabytes_ in it. So, loosely speaking, you multiply 178m by 1 million to get the number of words in an LLM. But… it's not being fed unique words. It's getting word patterns. Think about how we speak: our sentences are word patterns that we use in mostly predictable structures, and then we fill in the blanks with richer words as we get older to convey what we want to say with synonyms, etc.
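In code, that back-of-the-envelope estimate is just:

```python
words_per_gb = 178_000_000   # figure quoted in the video for ~1 GB of text
gb_per_pb = 1_000_000        # decimal units; binary would be 1,048,576

print(f"{words_per_gb * gb_per_pb:.2e} words per PB")  # ~1.78e+14, matching ~1.8 x 10^14
```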
What makes knowledge so complex is not the words, but the way the words are used.
Choose any word and you will see that it is linked with hundreds of topics and contexts.
If I say draw, I could be talking about
drawing water
drawing class
drawing during class
drawing my friend
drawing a dog
drawing a long time
drawing that sold for a lot of money
I like drawing
And so on. These all code for a different idea. And it is these "ideas" or relationships that foundation models encode.
With these relationships, you now have the probabilistic weights that allow you to construct realistic and correct-sounding sentences that are also likely accurate because of the enormous dataset the model was trained on.
Another context idea: you want to connect fish to swim. That connection is highly weighted in the LLM.
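A rough way to poke at those learned relationships yourself is to compare embeddings with cosine similarity. This sketch assumes the sentence-transformers library; all-MiniLM-L6-v2 is just a common small model, not anything specific to the video.

```python
# Compare how "related" the model thinks different words and phrases are.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
phrases = ["fish", "swim", "drawing water from a well", "a drawing class"]
emb = model.encode(phrases, convert_to_tensor=True)

# "fish" and "swim" should score noticeably higher than unrelated pairs
print(util.cos_sim(emb[0], emb[1]).item())  # fish vs swim
print(util.cos_sim(emb[0], emb[3]).item())  # fish vs a drawing class
print(util.cos_sim(emb[2], emb[3]).item())  # two different senses of "drawing"
```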
Typo
So are transformers only for language and text-related things??
No, for image processing too.
Transformer models, originally developed for natural language processing tasks, have been extended to computer vision tasks as well. Vision Transformer (ViT) is an example of a transformer model adapted for image processing. Instead of using convolutional layers, ViT uses self-attention mechanisms to capture relationships between different parts of an image.
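As a rough sketch of the core ViT idea (pure NumPy, no actual attention layers): the image is cut into fixed-size patches that are flattened into vectors, and those patch vectors are what the transformer attends over, just like word tokens.

```python
# Turn an image into a sequence of patch "tokens", the input format ViT expects.
import numpy as np

image = np.random.rand(224, 224, 3)   # stand-in for a real 224x224 RGB image
patch = 16                            # ViT-style 16x16 patches

# Split into a grid of patches, then flatten each patch into one vector
patches = image.reshape(224 // patch, patch, 224 // patch, patch, 3)
patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * 3)

print(patches.shape)  # (196, 768): 196 patch "tokens", each a 768-dim vector
```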
What's the difference between large language models and text-to-speech?
Knowing how these work only makes the idea that companies have started using LLMs to make decisions seem even more stupid than I already thought it was.
Yea, but did the LLM use a 30 minute or a 60 minute boil?
Thank you for explaining! 🪲 Min. 3:37 is the major "bug" 🐞 within the learning system, *it does not start off with a related guess, it's random.* 🌬
I can't wait until the *brain slice chips* can last longer and get trained like a real human brain that is actually learning by feelings and repeating instead of random guessing and then correcting itself until the answer is appropriate. They could soon replace A.I. technology completely, so maybe we shouldn't hype too much about it.
After all the effort, energy and money we put into A.I. and new technology, there's no doubt that *we could have educated our children better* instead of creating a fake new world based on pseudo-knowledge extracted from the web. 👨👩👧👦👨👩👧👧 Nobody wants to be replaced without having the benefit of the machine. General taxes on machines and automated digital services could fund better education for humans.
Dear A.I.: You know what is real fun? Planting a tree in real life! 🍒
Does anyone know what program he uses to sketch on screen like that?
It's a glass window. He is physically writing on it. For it to show the correct way (and him not having to write backwards) they just flip the image!
@sebbejohansson How is it glowing?
Something tells me “The sky is the limit” here 👀
How did you learn to write backwards
Lucid, thanks
For me, the backwards writing detracts from the presentation.
Could have been better; most of it was speculative when it came to application building, not to mention the laws governing it.
Need one use case
I am still in the dark as to the purpose of LLMs. I can see no practical purpose. Just as in the '70s we had parallel processing (the Cray-1) that went nowhere except in a very few uses (GPUs). "You need a dictionary": sure, you could scab Webster's or Oxford's source code, which is kind of illegal. The other issue is that languages are very dynamic, just as our political boundaries move constantly. The reality is that most companies (IBM, GM, Amazon, USPO, ...) could work internally with maybe 500 words and terms. The rest are simply a "list of" that is specific to a given term (boy names, car parts, products, ...). The issue then is who maintains the list. Whether it's an LLM, manual syntax scripts, buttons, or popup forms, the result is the same: "do this action with these qualifiers". An LLM is still just another special application on top of conventional applications. We still cannot add two numbers (we add a range of numbers). We still program in one dimension, in black and white, in computer languages that we cannot read or understand. ("A = 1": I do not know what "A" is, I do not know what "1" is, and I do not know anything about the why, when, validity, usefulness, or purpose.)
Static technologies we do not need. Alternative ways of saying the same thing we do not need. Knowledge is knowledge (1 foot equals 12 inches); much knowledge can never be derived. The iPhone cannot be answered with one hand ("Slide to Answer"). I cannot set some of my clocks without documentation, and why do I have to set them? Fix the simple things. Research is great, fine, but do not propagate sales hype over progress. After 49 years with no progress in software technology, I get pissed that we have done nothing. I see LLMs as just another application layer; if it helps, great.
The real answer is to have user-definable context. Absolute access by users to their own information. User access to all source code. User-controlled security. Absolute user access to the information, communication, and hardware that they own. Not another application that we have little or no control over.
What is a quantized version of a model, and how is it created?
A model consists of lots of numbers. In a quantized version, those numbers are stored more compactly: fewer bits per number.
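A minimal sketch of that idea in NumPy (real quantization schemes are more involved): store each float32 weight as an int8 plus a shared scale factor, then expand it back when the model runs.

```python
# Toy int8 quantization: trade a little precision for ~4x less memory per weight.
import numpy as np

weights = np.array([0.12, -1.7, 0.003, 2.4, -0.56], dtype=np.float32)  # pretend weights
scale = np.abs(weights).max() / 127.0           # map the value range onto int8

q = np.round(weights / scale).astype(np.int8)   # quantized: 1 byte per weight
restored = q.astype(np.float32) * scale         # de-quantized at inference time

print(q)          # small integers instead of 4-byte floats
print(restored)   # close to the original weights, minus a little precision
```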
So LLM-based AI is just language, not "intelligence"? Based on what it's read, it knows or guesses what usually comes next? So zero intelligence?
From what I can tell of the subject matter, it's more of a mimicked intelligence. That's why the analogy of a parrot was used: this technology can learn, repeat things back, and make limited guesses about what's coming next. But there's a certain level of depth and nuance that a human possesses that parrots and ChatGPT-style tech do not.
But how is it possible for an LLM to innovate when it's trained on data that stays within the boundaries of human knowledge?
I'm far from an expert on the matter, but the simple answer to your question is that it's programmed to be able to learn and adjust according to many various inputs. Arguably, it's probably where robot technology should be headed next: having the ability to learn and react to that learning.
I don’t even know where to begin. 😵💫
LLM = Large Language Model 😲
Ugh corporate videos..... the horror
@2:15 a different sequence. This is just for fun.
Is this video mirrored?
2:43 squeaky sounds
Still don't get it.
How does this presentation work? You are not mirror-writing behind a glass pane, are you?
Yea, it's a glass window! He is physically writing on it. For it to show the correct way (and him not having to write backwards) they just flip the image!
How is he writing backwards
See ibm.biz/write-backwards
Wow! It's a clever idea 😊
@IBMTechnology Oh yeah, then how come your tattoo is the right way round?
Write normally, then mirror the video. Should work. Notice how he is writing with his left hand, yet most people are right handed.
All of you WRONG. All of it was written before they started. As they filmed, he's actually ERASING the text as he goes along. He had to learn how to speak backwards though which I think is impressive.
Eventually an LLM will develop an LLM, so no human is needed. This is not far away, I guess, given the rapid speed of this technology. It's really scary for future generations. What types of employment will exist? Any guess?
How are you able to write that way
My chemistry professor does videos with one and explains it in a video: Chemistry with Dr. Steph (that's her channel); it's the featured video on her page.
So the only job left is knowing how to ask the right question. So you call him/her a prompt engineer 🙂
How is "the sky is bug" not a thing