Subscribe and stay tuned for the next episode! → goo.gle/developers
❤❤ Dear Developers,
It's Okay ❤❤
@@AMFOREVIEW Deep Learning and GPT-4 are fake intelligence. For example, it struggles with fingers and with drinking beer. LLMs are a dead end for AGI because they do not *understand* the implications of their outputs! Also, GPT-4 is designed by the wealthy to serve their needs!
------
Douglas Lenat wrestled with creating true AI. AI has lost a giant.
He was not correct about some things, but he opened my mind to a great many more things about AI.
RIP Douglas Lenat, 1950-2023
Thanks for diving in! What a great explanation.
The first transformer model, known simply as "the Transformer," was introduced by researchers at Google in the paper "Attention Is All You Need" (Vaswani et al., 2017).
🎯 Key Takeaways for quick navigation:
00:00 🤖 Introduction to Large Language Models (LLMs)
01:26 🧠 How LLMs Work
03:19 📝 Prompt Design for LLMs
04:43 🤯 Challenges in Writing Model Prompts
05:13 🌟 Conclusion and Applications
Made with HARPA AI
In brief (GPT4):
In this video, the speakers discuss the power of Large Language Models (LLMs) in understanding and generating human language. LLMs are based on the transformer architecture invented by Google and are trained on massive text datasets. Their ability to be used for a variety of tasks, such as chat, copywriting, translation, summarization, and code generation, makes them incredibly powerful and efficient.
LLMs can be utilized without being a machine learning expert, as they function like sophisticated autocomplete systems. Users can input text and receive output text based on the patterns and language learned by the LLM. The input text is called a prompt, and prompt design is crucial for getting the desired output from the LLM.
There are two main approaches for prompt design: zero-shot learning, which involves using a single command, and few-shot learning, which includes providing examples. However, there's no optimal way to write model prompts since the results are highly dependent on the underlying model, and small changes in wording or word order can have a significant impact.
Users can try out LLMs like Google Bard and experiment with different prompt structures and formats to find what works best for their specific use case.
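The zero-shot vs. few-shot distinction described above can be sketched as plain prompt strings. This is purely illustrative: the translation task and the example pairs are invented here, and the strings are just the input text any LLM interface would accept.

```python
# Zero-shot: a single instruction, no examples.
zero_shot = "Translate English to French: cheese =>"

# Few-shot: the same task, but with a worked example prepended so the
# model can infer the pattern before completing the final line.
few_shot = (
    "Translate English to French.\n"
    "sea otter => loutre de mer\n"
    "cheese =>"
)

print(zero_shot)
print(few_shot)
```

In practice you would send either string to the model and compare outputs; the summary's point is that small changes in wording or example choice can shift the result noticeably.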
You are not using BARD? :P
GPT-4 is fake intelligence. For example, it struggles with fingers and with drinking beer. LLMs are a dead end for AGI because they do not *understand* the implications of their outputs!
Also, GPT-4 is designed by the wealthy to serve their needs!
I got more understanding from this video than from one twelve times longer, because it was presented in such a clear, easy-to-learn visual way. Thank you very much indeed ❤️
What a wonderful video. Simple and concise.
Paying so much attention that I saw the "Email Foramtting".
I use bard everyday and it helped my workflow increase significantly
Your insights are like gems that light up every conversation.
Very good video! These llms are much clearer now!
Wow, amazing video, everything was well explained! I really learn a lot from your videos, thank you so much 🤗🤗
Thanks for this clear explanation
Can't wait to see what Google has up their sleeves with A.I
Amazing video girls! Thank you!
Excellent, detailed information. Highly recommended to everyone
It begins with the possibility of SPARQL data modeling, and now, as some Grant Maclaren breakthrough, it is gaining traction at conferences and Eventbrite/Meetup meetups. They could have done that for a while, but you yourself were busy at RTU VEF. 0:50 Maybe Dovilė Šakalienė would accept this drawing. 2:09 What you're missing here is not your own old video that you wish to crop and wished everybody to do. 2:53 Your exams are just like this; you won't pass if you don't know how your own program does the verification. Trying anything like this would make you build your own servers and server rooms. You may begin with one rack and just build that system to the size of the user's computer - waterfall is not what you want, but it's your best option. 3:15 Why Chesapeake? Do you see the red NATO woman in the crowd at 65? Something like this 4:09 is too hard to define.
Thank you! this has helped me learn more about this
It's our pleasure! Glad you enjoyed the video 😎
Very simple and rich video
Thank you!
When I finally get GenAIs to respond with what I wanted (often this takes multiple prompts and corrections), I ask a simple question:
> "Please give the prompt which would have generated the last output"
Very good video about LLM and prompt engineering
Thanks for this overview!
Thanks for your explanations.
So clear, thank you very much
Hello Google,
I am new to English, but I want to improve my analytical writing for the GRE. How can I learn like an LLM? 😅
Like Google bard ...................
Well explained. Thank you so much
Great explanation
How do LLMs perform on non-structured languages, like Hinglish?
I really have a question.
So, waiting for the next video to learn - what do you think, isn't it a hard job?
I still don't understand how you go from a language model that predicts the next word to a question-answering system.
Same
Because it's impossible to predict undefined input.
They don't, per se. It just happens that in ordinary text an answer frequently follows a question, so the next predicted words happen to be the ones that start the answer. The model never actually leaves next-word prediction.
@@zvxcvxcz I see, but those systems seem to form an answer using related documents. Say the neural nets somehow determine which paragraphs are similar to the question asked. Do neural nets also form the answer text, or is there some other algorithm that forms it?
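The "answers follow questions, so prediction produces answers" point can be made concrete with a toy model. Everything below is a deliberately simplified stand-in for a real LLM: a tiny invented corpus and bigram (next-word) counts instead of a neural network, but the generation loop never does anything except predict the next word.

```python
from collections import defaultdict, Counter

# Tiny invented training text in which answers follow questions.
corpus = (
    "Q: What is the capital of France ? A: Paris . "
    "Q: What is the capital of Japan ? A: Tokyo . "
).split()

# Count which token follows which (a bigram "language model").
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(prompt_tokens, steps=3):
    # Repeatedly append the most likely next token; nothing else happens.
    out = list(prompt_tokens)
    for _ in range(steps):
        candidates = follows[out[-1]].most_common(1)
        if not candidates:
            break
        out.append(candidates[0][0])
    return out

# The model "answers" only because, in its training text, "A: ..." follows "?".
print(generate("Q: What is the capital of France ?".split()))
```

A real LLM replaces the bigram table with a transformer that conditions on the whole context, but the decoding loop has the same shape: next-word prediction all the way down.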
Excellent intuitive video about prompt engineering for starters, thank you.
"and definitely let us know what you are building"
being google, I'm pretty sure you already know that
Pretty sophisticated stuff is going on with AI. How is PaLM different from Bard?
Watson developed by IBM more than 10 years was very good in language understanding. What model was behind Watson?
According to publicly available information, Watson is not built on one particular ML model; a collection of algorithms and technologies is used to implement it. For more reference you can visit the given link.
Bard isn’t currently supported in your country. Stay tuned! - that's all about G approach
Is an LLM part of NLP? Is an LLM always an NLP model, or can an LLM be another kind of model?
Also, BERT is a Transformer but not an LLM, right? And a Transformer can be an LLM or not, right?
Can subsequent SFT and RLHF with different, additional, or less content change the character of a GPT model, improve it, or degrade it?
Very good video!
Nailed the "silence, brand" vibe.
Great video. You two are very personable and explained the concept of an LLM in a simple, understandable way!
Thanks for sharing
France is to Paris as Japan is to ...?
So you mean to say there is a sentence in the word corpus that is very close to the above sentence. My question: I doubt that this exact sentence is in the corpus, so how will it predict Tokyo? Is some other technique also used?
🇨🇴👋🙋♂👍🤝 Greetings from the city of Bogotá D.C.
ChatGPT is the elephant in the room. I didn't even know you could talk to Google Bard at the moment
Really helpful. Thank you for explaining it so well!
How did you know I was living under a rock?
The world officially changed forever, Pandora’s box unlocked 2024. I’m an early adopter and investor so I’m retired and watching this play out now 🍿
Can you open source your models?
lol
Everyone heard about this because of ChatGPT, which they try not to mention because it's a Microsoft thing.
Microsoft has licensed use, but it is not a Microsoft thing, it is from OpenAI.
Entrepreneurship is dirty bro you gotta do anything to just sustain. No other option
Google had created the first AI chatbot 2 years earlier than OpenAI... but didn't launch it because of its harmful effects on people
Google developed most of the base tech that was later used by OpenAI, from word-embedding algorithms to the Transformer architecture, and gave it away for free, but Google is not arrogant enough to use the word "Open" 😅
@@Ani-rw4lnya try to be more like Google
what is a mud kiln?
I still can’t access Bard. Something about permission setting blah blah
I like that the video starts with an insult.
Thanks ❤
The output of an LLM is always probabilistic, never deterministic.
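A sketch of where that randomness comes from: the model outputs scores over possible next tokens, and decoding samples from a softmax over those scores. The scores below are invented for illustration; the `temperature` parameter scales how random the draw is, which is why many LLM APIs expose such a knob.

```python
import math
import random

# Invented next-token scores (logits), purely for illustration.
logits = {"Tokyo": 2.0, "Kyoto": 0.5, "Osaka": 0.1}

def sample(logits, temperature=1.0):
    # Softmax over temperature-scaled scores, then one weighted draw.
    weights = {t: math.exp(s / temperature) for t, s in logits.items()}
    r = random.random() * sum(weights.values())
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # numerical edge case: fall back to the last token

print(sample(logits))         # usually "Tokyo", but can vary between runs
print(sample(logits, 0.05))   # low temperature: all but certainly "Tokyo"
```

Lowering the temperature concentrates probability on the top-scoring token, which is how decoding can be made effectively repeatable even though the model itself outputs a distribution.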
So deep
Looks like a dead heat in a zeppelin race 👀
Would be nice to have Diffusers models that can run on mobile. Diffusers is so powerful
Please stop saying they understand. They don't, and saying it over and over is giving ordinary people a very warped view of what these models are capable of. And yes, the bigger they get and the more data they're fed, the harder it becomes to demonstrate that they don't understand, because the outputs of the trained model and of someone with actual understanding grow closer together; but it really is the case that these models, as they work today, most certainly do not understand anything.
It seems I've been living under a rock hahaha :/
The words chat and conversation imply spoken communication. However, neither Google Bard nor OpenAI's ChatGPT is capable of speaking or understanding spoken words. It's strictly text, which gets tedious pretty quickly. These reviews never seem to reflect that. Plus, my view so far is that it's like the Wizard of Oz: just someone behind the curtain trying madly to keep up the illusion.
Waiting to see Bard Open-sourced😎
Chapter: fuchsia learning
I'm with you
gosh they are beautiful
One of them
Not bad, but it is more like "how do you use or interact with LLMs" and a little less about "how LLMs work".
yeah guys like google bard
I need LLM
Mention ChatGPT without mentioning ChatGPT
Nice explanation
In my experience Bard has been totally unreliable.
Ask Bard about football results and it gets them wrong.
Cool.
This video demonstrates what's wrong with Google lately.
Where is the math? Where is the compilable code? How much does a language model cost? What would be the hardware requirements?
So you guys are pushing prompt engineering as a real job?
There won't be dedicated "prompt engineers"; instead, prompt engineering is simply a skill that developers now need to have.
Advance AI
2.2k+...Thanks
❤❤
Wow. An LLM is a black box; you can't do that yourself. We can train it for all tasks. What a waste! :(
One can get a free pre-trained model, build one's own vector DB with one's own data, and search it.
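The "own vector DB over own data" idea can be sketched end to end. The `embed()` below is a toy bag-of-words stand-in, not a real embedding model (a real pipeline would call a pre-trained model here), but the embed-then-index-then-nearest-neighbor shape is the same.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy embedding: bag-of-words counts. A real system would use a
    # pre-trained embedding model at this step.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "The Transformer architecture was introduced in 2017.",
    "Prompt design matters for zero-shot and few-shot use.",
    "Bard is a conversational LLM from Google.",
]
index = [(doc, embed(doc)) for doc in docs]  # our tiny "vector DB"

def search(query):
    # Embed the query and return the most similar stored document.
    q = embed(query)
    return max(index, key=lambda pair: cosine(q, pair[1]))[0]

print(search("who introduced the transformer?"))
```

In a full setup the retrieved passages would then be pasted into the LLM prompt as context, which is exactly the pre-trained-model-plus-own-data pattern the comment describes.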
I'm simple
Test comment
AI is a sophisticated...parrot!
The perfect girl doesn't exi...
😢I 😢
Awful audio quality