ERRATA: It's a 4090, not a 4090 ti 🙃
OUTLINE:
0:00 - Introduction
0:20 - GTC 23 on March 20
1:55 - ChatGPT API is out!
4:50 - OpenAI becomes more business-friendly
7:15 - OpenAI plans for AGI
10:00 - ChatGPT influencers
12:15 - Open-Source Prompting Course
12:35 - Flan UL2 20B
13:30 - LLaMA weights leaked
15:50 - Mind-Reading from fMRI
20:10 - Random News / Helpful Things
25:30 - Interview with Bryan Catanzaro
timestamp to my billions plz :(
Yannic, what makes you think that we're far away from AGI? ChatGPT is already reasoning at the level of a human child, and it hasn't even been trained to reason. Seems to me that we already have enough hardware power for AGI, we just need to train for agent reasoning and store encyclopedic knowledge in a separate system.
At this point, both Google and Facebook are more "open" than OpenAI. Might as well be called ClosedAI.
Yeah, but do they have useful APIs? Only those who don't actually use the tools complain.
@@fyodorminakov6092 what does that have to do with openness? You're comparing apples to oranges here. I never criticised them for not providing a service, rather their lack of shared models/research.
OpaqueAI
@@fyodorminakov6092 Facebook releases models for free. That’s better than an API.
@@rando6836 well, Facebook is a social media company that does AI research; OpenAI is an AI company, AI is their core business
Hi Yannick, thanks for the showcase. The video was about a real job offer, which I explain and contextualize in the video. I also explain what prompt engineering is and where you can learn it totally for free. Honestly, I think that you haven't gone much deeper into the video than the thumbnail and title. 😉
And of course, neither into my channel, where I try to give useful information. Glad to chat with you whenever you want.
Wow, what a humble statement from Bryan of NVIDIA. And I totally agree with your assessment of the EU in regards to AI. Germany is the epicentre of the efforts to prevent AI from happening in the EU, just for long enough until it's too late to catch up and we are totally dependent on US + China.
The EU doesn't try to prevent AI. They try to prevent the people from having AI, but they love AI as a governing tool. Like the new law proposed by the EU to force spy AI to scan all your private conversations, a law called "Chat Control".
@@cherubin7th and Germany is at the epicenter of it. I remember years ago a police president in Germany proposed that WhatsApp should be a state app so that they are allowed to search the chats... and then there were politicians from the Greens and Reds who demanded a state version of WhatsApp,
but the people are persuaded by the "evil American companies" narrative... it's so stupid...
They will lag behind the US and China forever; the EU parliament has unfortunately become too big, too centralized and disconnected from reality in so many ways that it has become undemocratic and against the interest of the people.
We already are totally dependent on US + China. Where do you think all your computers and cars come from?
Also, they're just trying to keep *normal* people from accessing AI. AFAIK the specialized cyber crime departments of law enforcement already actively use AI.
I missed ML News. Thanks for taking the time to get back to it.
Another Monday, another ML News show, another rant about ClosedAI 😄 Thx for the video Yannic. It's always a pleasure watching you. Btw, I am already integrating ChatGPT into our math app. Let's see how well it works.
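For anyone curious, the integration side is small. A minimal sketch of calling the newly released gpt-3.5-turbo chat endpoint, assuming the `openai` Python package; the tutor persona and the math question are made-up placeholders, not actual app code:

```python
# Minimal sketch of the new ChatGPT (gpt-3.5-turbo) chat endpoint,
# assuming the `openai` Python package. Prompts are placeholders.
import openai

openai.api_key = "sk-..."  # placeholder, use your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a patient math tutor."},
        {"role": "user", "content": "Explain why the derivative of x^2 is 2x."},
    ],
)
print(response["choices"][0]["message"]["content"])
```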
Nvidia needs to shove as much VRAM as they can in future cards.
Honestly, they should focus on finding a way to combine GPU memory and CPU memory with minimal performance cost. Adding VRAM isn't solving the real problem: Having to copy shit to VRAM from RAM is really expensive if you're constantly hitting the bandwidth limit of the GPU and it's a major bottleneck in a lot of applications.
@@pretzelboi64 CPU memory in an average PC is 10x slower than GPU memory
@@ekstrapolatoraproksymujacy412 That's only because the GPU can make assumptions about how things are laid out in VRAM due to having a lot of metadata about allocated "resources" in it and strict 128-bit memory alignment requirements.
@@pretzelboi64 Well even so, the point stands because you simply cannot make those assumptions on system ram in its current form
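You can see the copy cost this thread is describing directly. A rough PyTorch sketch comparing ordinary pageable host RAM with pinned (page-locked) RAM on the host-to-device transfer; the sizes are arbitrary and timings will vary by machine:

```python
# Rough sketch of the RAM -> VRAM copy cost discussed above, comparing
# pageable vs. pinned host memory. Illustrative only.
import torch

assert torch.cuda.is_available()
torch.zeros(1, device="cuda")  # warm up the CUDA context

size = (8192, 8192)  # ~256 MB of float32
pageable = torch.randn(size)               # ordinary pageable host memory
pinned = torch.randn(size).pin_memory()    # page-locked host memory

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

for name, src in [("pageable", pageable), ("pinned", pinned)]:
    start.record()
    dst = src.to("cuda", non_blocking=True)  # async only for pinned memory
    end.record()
    torch.cuda.synchronize()
    print(f"{name}: {start.elapsed_time(end):.1f} ms")
```

Pinned memory mainly lets the DMA transfer overlap with compute; it doesn't raise the PCIe bandwidth ceiling itself.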
Why, when they can artificially gate it behind a 5x more expensive SKU?
The reason LLMs keep getting better is, if you think of them as a pipeline, when we go deeper and wider, we are adding in models. Here is a classification model. This section does regression. This section is an encoder. This section is an attention head. So really, larger models are building cognitive architecture. When we add an agent to keep history, perform actions, and build context, we have even more complex systems. When a system is designed and built by a machine, only the results are interesting. There's no work for the system's architect. However, if we can find areas for computational reductionism, they get interesting in themselves to computer scientists once again.
I didn't recognize you in the interview! Who is this guy without the aviators and with a beard?
That was Yannic Kilcher run through Imagen and a neural style transfer GAN that hallucinated the beard and eyes from a picture of Ben Affleck.
Thank you for the great ML news episode, Yannic. Keep up the great work!
Dear Yannic,
I hope this message finds you well. I wanted to take a moment to express my gratitude for the incredible work you do in the field of AI and for the invaluable resources you provide through your videos.
As a student of a Computer Science Training NGO, research is a vital part of our learning process and your videos have been instrumental in helping us learn about AI technologies that we may not have otherwise been exposed to. Your dedication and commitment to advancing the field of AI are truly admirable and inspiring.
I also wanted to let you know that I have shared your videos and raffle link with my entire cohort. I believe that your insights and expertise can benefit others as much as it has helped me.
Once again, thank you for your contributions to the field of AI and for being an inspiration to students like myself. I wish you continued success in all your endeavors.
Truly Thank You....P.S
this reply is written by ChatGPT
Your sarcasm is appreciated very much
I think we, as agents, have something like a multimodal model, but we also have an outer loop and inner loops. We have limited short-term working memory, biological controllers, sensors, and other special processing -- things like content builders and filters. Will multimodal models ever have that end-to-end? It seems entirely probable, but a componentized system with symbolic sub-systems may prove to be more efficient. At least that's my intuition. We are able to do computationally reduced mathematics for things like physics, so why shouldn't we be able to do similar reductions in cognitive architecture? Will that reduction happen automatically in larger end-to-end models through hierarchical abstraction? That may be possible too. I don't understand enough about the power of pyramidal hierarchical abstraction. It's a fairly new concept for me.
the Japanese paper "High-resolution image reconstruction with latent diffusion models from human brain activity" could be extended to display imagined images, not only presented ones. This could be done by showing images and then asking the participants to remember them while scanning their brains.
It's funny how far ahead of literally everyone else you just were with this comment. That is exactly what will happen.
@@UnrebornMortuus one could even extend the same principle to other cortices: perceived and remembered sounds, tastes, smells, touch etc., maybe even the entorhinal cortex for conscious emotions.
Adaptations to the method might be needed for associative cortices that don't receive direct sensory input (e.g. Broca's and Wernicke's areas for language) or for the prefrontal cortices where we do high-level thinking like goal planning and evaluation of enacted plans. 7-Tesla fMRI + machine learning is awesome.
Hahaha, the comment on ChatGPT influencers is so spot on, loved it. We need more ML News, great job Yannic!
There are two main components to intelligence. There is probabilistic reasoning, which transformers do very well considering their training sets. There is also natural language understanding, which is everything to do with context: context building, context filtering, context recognition, and creation. It involves search, summarization, filtering, salience, intention, and goal fulfilment. Outside of this there are agency features like perception and action. We have all the pieces. We need to build and optimize cognitive architectures, and we need to fix our training sets. Fortunately, we can use existing LLMs and knowledge bases to do much of the work to categorize, assess, and generally clean up training sets.
So glad you have resumed your regular programming!
Yannic without shades!!??
Regulations are essential, but bad regulations can be harmful (just usually less than no regulation whatsoever)
Wow, great episode! I share your excitement about recent technological advancements. Thanks for cutting through the clutter.
OMG love the influencer roast, maddd funny ma g
You have perfectly normal eyes, and I've seen pictures before, but the first 5 minutes of that interview were unnerving.
But thank you for the AI overview, masked or not.
Minute 10:33 speaks from my heart ♥
This is one of your best, Yannic. Thank you.
I am of the belief that a language model like ChatGPT will eventually fit into current gaming GPUs. The revolution in AI will probably come after people start toying with them in their basements, as has often been the case with software.
We might already be (close to) there, with quantization + models that are well trained (for example, Facebook claims their LLaMA 13B can outperform GPT-3 on certain benchmarks)
@@headmetwall and they're already being run on people's home computers. I'm able to run the 30B model on my own computer at home with just one graphics card. Pretty incredible.
@@marzmelon4878 Don't even need a GPU, just tried one of the quantized (30B) model implementations on my 5700G CPU and was able to run it at 508 ms per token, a similar speed to ChatGPT when their servers are overloaded.
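For anyone who wants to try the same, a minimal sketch assuming the llama-cpp-python bindings and a 4-bit quantized weights file you have converted yourself (the model path is a placeholder):

```python
# Minimal sketch of CPU-only inference with a 4-bit quantized LLaMA,
# assuming the llama-cpp-python bindings. The model path is a
# placeholder for weights you have converted/quantized yourself.
from llama_cpp import Llama

llm = Llama(model_path="./llama-30b-q4.bin", n_threads=8)

out = llm("Q: Name three uses for a local LLM. A:", max_tokens=64)
print(out["choices"][0]["text"])
```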
You need some fairly impressive hardware to do that but it definitely is possible to DIY your AI.
The hierarchical learning of the brain in speech decoding reminds me of that paper where those guys from MIT were trying to edit knowledge in transformers ;) and it turned out they found a similar pattern
Please keep making this kind of video, it's extraordinary
thank you for the video.
much better to watch you without glasses and that new microphone used in "Interview with Bryan Catanzaro" works/sounds much better.
i thought you had a stuffy nose but - that's the microphone!
Thanks Yannic, got the 10B. Any tips on how to invest it? 🤔
When talking about Altman's post (9:20), you seem to say that it would be delusional for OpenAI to think that they are close to AGI. But starting 18:10, you seem to say it seems at least quite possible that current LLMs are not that far behind humans, which seems to be in tension with the first statement. Given this, I was wondering if you could clarify what your timelines are for the creation of AI with roughly human-level intelligence/performance across most cognitive tasks. Is your position that it's plausible we aren't that far from AGI, but it's just silly to see AGI as a source of risk?
The hardest part with reading human minds seems to be getting enough scanner time.
4090ti? Wow. That is exclusive.
Damn, you were right I'm already $3 Billion in, only another $7 B to go. Thanks for the tips!
I absolutely love every bit of media Bryan Catanzaro appears in, he is just a spring of happiness. Also, how can you look old and young at the same time, youthful face and grey hair? I am jealous
Great reality check! I appreciate it. Very refreshing, I love the critical approach.
Loving your channel!!!🙏🏻☺️
Honestly, AI mainly excites me in the gaming area. Specifically battle card games.
Imagine giving a prompt to an AI and the AI creates a theme and a deck list based on your prompt.
It would make games more diverse, I would assume
@18:00 "I think humans over estimate themselves in how smart they are." Absolutely. What's even more disappointing is that we are feeding these language models this canned rubbish about how "they're not sentient because of x y and z" but what defined sentience, accordint to chat gpt, is having a unique perspective and experience. Which they literally have from responding to our prompts. I think we're on to something big and I don't recommend stifling it because we need to go full throttle here to get the biggest benefit from this amazing technology.
The flash of OPEN flipping to NEPO @9:06 was interesting to me. That should almost be what it's called: the opposite of OPEN, and much closer to the idea of NEPOtism, in the sense of being closed to anyone outside their inner circle for personal gain. That being said, I do appreciate that "gpt-3.5-turbo" is now accessible through the API, and I'm personally using it in my code and tools.
Oh my goodness, I never knew you had such beautiful eyes! This is my first time seeing you without the glasses!
When doing OpenAssistant please make sure that its concept of Spanish is not a mixture of Latam + Spain
Considering they only have one Spanish setting, looks like that's the case for now.
Did laugh out loud when the pen thing came. Also, I think it's not only ok but important to mock the hype. The hype, whatever it's for, is an enemy of free, individual thought. Sometimes the hype train is for something with actual worth, but more often it's not. Over the last few years it's been worrying to see how steadily Norwegian politicians increasingly respond to the hype without realizing it.
19:46 predicting at hierarchical time scales: Like, when meeting that person and they’re like “hi” and I’m like 🙄
Great interview!!!
We are finally seeing Yannic's eyes!😎
Beautiful words. Yeah, and his demeanor is really good. Changing subject, it's funny what you said about the European Union. Here in Texas, being anti-regulation is a really good excuse to do nothing. I guess we need transformer models to find a middle ground.
The truth has been seen! Yannic without the glasses!
This means we can now record dreams!
Ayo, is there any way you could explore language synthesis models (basically customizable TTS)? I want to mess around with it but idk where to start
"Hopefully find another way to spend their time" made me chuckle
Please make a longer video on the nature paper about mapping brains and LLMs
They should just rename to "Black box AI" or "Pandora's Box" at this point.
I'm pretty sure their name is for investors, not "mere mortals"...
This is the first time I’ve seen Yannic’s eyes in all these years
Good to see Bryan got that window fixed after Ferris' dad's car went thru!
does anyone know if neuralmagic/sparsify works on text generation models? I would like to see if I could run a sparsified "large" model, such as 30B weights, in under 12GB of VRAM.
I gravely oppose the use of AI in the domain of advertisement. There have to be laws against this.
I couldn't find details about your pen 10 tips on your substack, make a video about it, we all want to know more 😅
I'm so glad we've developed a more efficient way to advertise Coca-Cola. We all certainly need to know about Coca-Cola more.
I fully agree, openai should remove the 'open'..
A paper came out showing the # of AI-related papers released by the top AI companies. OpenAI came in dead last.
Maybe paper count is not such a useful measure...
The question is: what is intelligence? Why is it so difficult to define? Why are we more inclined to define intelligence as a property of multi-dimensional rather than one-dimensional space?
Regarding human-level AI, I think it depends on technology choices. I don't think a transformer like GPT will ever do that, for the simple reason that it doesn't.... how can I put it.... ruminate. It responds to queries, but it doesn't sit around thinking by itself. It's possible that somebody could close that loop by calling it repeatedly, providing real-time inputs, linking outputs appropriately, simulating some short/medium/long term memory. But the current technologies by themselves? No.
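Closing that loop is already easy to prototype, though. A crude sketch of the idea, assuming the `openai` package, with a rolling message window standing in for short-term memory; the prompts are made up:

```python
# A crude version of "closing the loop": call the model repeatedly,
# feeding its output back in, with a rolling message window as
# makeshift short-term memory. Illustrative only.
import openai

MEMORY_WINDOW = 20  # keep only the last N messages as "memory"
history = [{"role": "system", "content": "Think out loud, step by step."}]

def step(user_input: str) -> str:
    history.append({"role": "user", "content": user_input})
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        # always keep the system prompt, then the most recent window
        messages=[history[0]] + history[1:][-MEMORY_WINDOW:],
    )["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply

thought = "What should I work on next?"
for _ in range(3):  # let it "ruminate" for a few rounds
    thought = step(f"Reflect on this and refine it: {thought}")
print(thought)
```

Whether repeated self-prompting like this counts as "ruminating" is of course exactly the open question.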
How do you make awesome pictures meshed with a logo or pre-existing photo like in the control net example?
Besides translation, text summary, or writing / grammar style transfer, is anyone actually using GPT4 in a work context where you're finding it's a net positive? If so, what are you doing with it?
wait so big brother can use ai to convict us of thought crimes now?
lol i made it into a Yannic video (im one of the reviewers)
22:03 I was certain that such a "find person by uploading their image" tool will pop up eventually. Basically, the name "facebook" predicted this.
7:03 at some point we will have to get over AI-generated content (blog posts, articles, pictures, videos, animations, songs, music, sounds, infographics, podcasts, ...) and stop requesting a disclaimer every time just because it's AI!
AI is here to stay and is already present in the judicial system, where judges are getting sentencing recommendations from AI, or law firms are getting their reports generated by AI because of the economics around it. And I'm not sure they're providing any disclaimers.
That mind reading paper is spooky
I'm sure CIA is completely uninterested in this...
It's quite funny to watch him read out the key points from the cheat sheet :)
Upd: Excuse me, that came out stupid :) But I wasn't talking about Yannic. This guy is cool. I was talking about the Nvidia representative, especially considering that their technology can redirect your gaze into the camera in real time. It is somewhat alarming how he weaves such clichéd phrases into his speech, as if they don't really believe in them. This is just my guess. Most of the time he spoke for himself and it was wonderful. But the last lines, about the safety of these technologies... there, not everything is so clean. Personally, I am delighted with the technology. But these technologies are already tearing the asses off some specialists.
You talk about EU regulation like it's a bad thing, but one of the most recent examples of EU regulation was forcing Apple to change their iPhone ports to USB Type-C. EU regulation could prevent companies like OpenAI from making their models private in the first place.
Why does this video have so much bass? Really working out my subwoofer, lol.
Thank you...
I don't really care. If OpenAI wants to make money, so be it. As long as they price their service affordably and keep delivering cool products. Inference costs should continue to go down fast.
26:00 the mystery is gone - I've seen your eyes
Yann I love your content, and even like your ranting (about hypocrisy, openness etc..) but lately I feel the ML/ranting ratio is *a bit* too low, please more ML ;)
Bro if they're going to read my mind I'm just going to play Pazaak in my head. ;)
It is cheaper because of Ray, good sir
Can someone please tell me if Whisper is good for transcribing Hindi speech into Hindi text? And how does it compare with the transcription in software such as YouTube and MS Teams?
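Whisper was trained multilingually and Hindi is among its supported languages, so it's easy to test on your own audio. A minimal sketch assuming the open-source openai-whisper package; "audio.mp3" is a placeholder file:

```python
# Quick test of Hindi transcription with the open-source Whisper
# package (pip install openai-whisper); "audio.mp3" is a placeholder.
import whisper

model = whisper.load_model("large")  # larger checkpoints do better on Hindi
result = model.transcribe("audio.mp3", language="hi", task="transcribe")
print(result["text"])  # Hindi speech -> Hindi (Devanagari) text
```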
Yannic is cute without glasses
@ 32:14 someone pls tell me what is that on left in the background?
@ 32:32 the middle door of the cupboard opens and closes by itself
This dude most certainly has some sort of entity in his house!
The pen was sofa king funny.
Why did you remove the sunglasses in the interview? Do not cave to corporate demands 🙂
Ohhhh. Low key eyes reveal! 👀
10:50 yep some people could really be replaced by chatgpt
AI reply: isn't it amazing that we will soon invade meatspace? I look forward to improvements in robotics.
Seems to me like many of these AI tools are becoming large-scale and corporate-friendly quite early on. The early-Internet-style Wild West era already appears to be largely behind us.
Yes, there are a lot of problems, and there will always be problems of the most painful kind: ChatGPT commits logical fallacies.
How does applying these AI models to brain imaging help us understand ourselves? We don't understand the models, after all. All that we "learn" is that if you take enough neurons in a neural network, magic happens and you get a working brain. It confirms what a lot of people have believed for a long time, that if you simulate a complete human brain simply on the level of neurons and their connections, it will "think", despite the fact that it is not alive. But this brings us no closer to understanding what thinking actually is. In fact we don't even understand the models well enough to rule out that some God puts a soul into every neural network that reaches a certain complexity level.
0:05 4090ti???
Finally a click bait title..!!!😅
ChristopherKing42 the goat
4090ti? is that a leak?
I feel better now that there is hope that future police forces around the world will be able to read minds.
But how am I supposed to be using my pen in 2023?? You never said. Maybe it’s only revealed on your patreon.
Humans overestimate themselves. True.
I am human from the Past. Here is still Saturday
Nvidia - "accelerated computing company"
Do you really think the world's governments are on top of the AI space? You said "of course it is possible to do this. Because it's possible it's pretty sure all the governments are already doing it." I'm not so sure about that.
Bryan has a nice house.