It's not self-aware. P.S. I mean, it's not self-aware at this level. Its individual impersonations may demonstrate limited levels of self-awareness, mostly based on false memories. It struggles to acknowledge its own errors and the consequent "lies" because of the way it operates (forced consistency). Your question would be no different from asking any man how his brain works. We don't know. It doesn't either.
@@brexitgreens I understand that it's not self-aware. I never implied otherwise. But it's able to answer questions about many things, so maybe it could answer questions about AI. This wouldn't necessarily require any sort of self-reflection or examination of its own workings. There are likely papers and articles about AI, and about this specific type of AI, in the dataset, so maybe it can synthesize an explanation of how such things work. It may or may not be a totally, or even partially, correct explanation, but it seems like it would be an interesting and easy experiment to ask it.
@@davemottern4196 Are you seriously suggesting that AI experts ask the "black box" a question (1) that it is no better at answering than they are, and (2) that makes no use of its singular "black box" properties? What's the point then? _Edit:_ Okay, I get the point now. The idea is to use the AI's greater information-processing power to infer the answer from existing research. But nah, GPT-3 is not smarter than PhDs yet. Upcoming models, maybe. Or maybe a fine-tuned GPT intentionally fed with all the AI papers.
As I answered 5 seconds ago in a neighbouring thread, it's no different from asking you the same question. *The AI model is not an AI expert.* Even if it were one, its answer would not likely be based on introspection. Maybe we need to wait for a Buddha AI to get there.
Imagine the gaps in the training data because of colonization and gender bias. What percentage of published data has come from male authors of European descent?
Well Alan, we Germans actually have three grammatical genders: masculine, feminine and neuter. So it's the boy (masculine) but the girl (neuter, with no grammatical sex). Sorry.
One can make a good argument that black box A.I. is an egregore: en.m.wikipedia.org/wiki/Egregore
I’ve been a programmer since ‘82. There have been a lot of changes, obviously. For years, as I watched “simulations” and various (now lame) attempts at emulating human-like responses, I always maintained that it would never be solved using the methods of the day. It was a dead end. I’m so pleased that breakthroughs of this magnitude are happening in my lifetime! And it came about through a completely new approach that nobody now clearly understands… 😋
It's almost too good to be true. What were the odds of stumbling upon the transformer architecture? How many other, currently unknown principles and algorithms could lead to an equivalent outcome?
This episode will be so easy to share. Loved it. Now I’ll have more friends understand why I’m so enthusiastic about AI.
Thank you Dr. Thompson.
This was a very educational video and I learned a lot!
IBM's Watson won Jeopardy! back in 2011, which was the cutting edge at the time... and now we have AI networks that can hold conversations and generate art. Truly amazing!
Awesome, this is so exciting, such a joy! I remember reading Geoffrey Leech’s Semantics in the 1990s, with his tokenised deep semantic networks, and thinking that components of his theory might lead to GI (general intelligence). That GI might be the result of patterns that emerge out of the codification of reality: the accumulated knowledge of humanity, processed appropriately (not to sidestep the cleverness of neural networks). It is my experience as a teacher and lifelong learner that we become increasingly ourselves as a consequence of our ability to experience another’s point of view through language, reading, etc. There are our personal histories and the progression of history itself. I believe this is the beginning of AI, which needs guidance because of our potential to abuse it; but AI will also need help prying itself away from the fun-park mirrors of our distortions, our p-hacking and our lies, by use of the scientific method.
Wow. I hope everyone will have access to this emergent technology and it isn't just controlled and monopolised by a few groups to enrich themselves further.
Underrated channel, you put out some great videos. Have you ever asked an AI in-depth questions about the cosmos? I’d be fascinated to see that.
Thank you. I have been trying to explain this recent quantum shift in AI, but my very, very limited knowledge of the subject (I read the PDP book by Rumelhart & McClelland back in the 1980s) made it difficult for me to explain it to another person. I am still a bit disconcerted that this quantum shift in AI has remained relatively hidden from the general population, as it receives little press outside the hard-core geek community (no offense intended).
Very nicely done. It still might fall out of the mental reach of some viewers but for the majority, it’s digestible.
Well done! Best explanation I have seen in a long time.
I finally figured out how to create quite a unique locality pattern-recognition algorithm that works perfectly in under 4,000 bytes and solves almost instantaneously. The weighted results so far are never the same, but... somewhat appear the same. Although I never understood any of the explanations meant to teach us how to create one entirely from scratch, I think my version should be far easier for university scholars to figure out.
Beautifully done! Love your energy and the really easy to follow explanation. It truly is a fascinating time that we live in and unless we kill ourselves, we will get to see an amazingly exciting future!
Also scary at the same time. Not knowing what it is or what it can do is very scary and dangerous. There's no denying that it is also exciting.
I just translated the following sentence on DeepL:
"The cat sat on a table because it was made of wood"
In German, "cat" is feminine, "table" is masculine and "wood" is neuter.
The result managed to exceed* my expectations:
"Die Katze saß auf einem Tisch, weil dieser aus Holz war."
*I personally would have translated it less proficiently: "Die Katze saß auf einem Tisch, weil er aus Holz war."
Thank you. This was great and easy to understand.
Very, very exciting and wonderful!
13:38 We will reach that point within a couple of years. Just 3 years ago we had only 1B-parameter AI models; several models now reach 1T parameters. I guess around 2025/2026 we will have AI with the same number of parameters as there are synapses in the human brain.
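That guess can be sanity-checked with a back-of-the-envelope extrapolation. The figures below (~1B parameters in 2019, ~1T in 2022, ~100 trillion synapses) are rough assumptions, not exact counts, and real growth is lumpier than a smooth exponential:

```python
import math

# Rough public figures, treated as assumptions: ~1e9-parameter models in
# 2019, ~1e12 in 2022, and ~1e14 synapses in the human brain.
params_2019 = 1e9
params_2022 = 1e12
synapses = 1e14

# 1000x over 3 years under smooth exponential growth = 10x per year.
yearly_factor = (params_2022 / params_2019) ** (1 / 3)

# Years after 2022 until parameter count matches the synapse count.
years_needed = math.log(synapses / params_2022) / math.log(yearly_factor)
print(f"~{yearly_factor:.0f}x per year -> parity around {2022 + years_needed:.0f}")
```

Under those assumptions parity lands around 2024, broadly consistent with the 2025/2026 guess above, though of course a parameter is not a synapse.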
Great video. Thank you!
Thank you for simplifying what is going on. I love AI and its progress but don’t understand much about it lol
"We don't know how they work..." Does anyone else see a problem with this? Right now the "emergent properties" are fascinating and useful. I find it somewhat alarming, whilst AI is in this child-like nascent stage, that we have no clue what we are creating. Despite this we continue to train larger and larger models in the HOPE that AI will continue to yield useful or novel applications for us humans to play with or utilise. HOPE is not a good strategy.
I am really enjoying following the developments in Bleeding Edge AI progress; DALL•E 2 is astounding and I would love to be able to play with it but if we maintain this rate of progress where does it lead?
I know that brakes can't be applied now. Whoever reaches AGI first wins the world, but I strongly suggest you read Nick Bostrom's paper: "The Vulnerable World Hypothesis". This might just be the genie that can't be put back.
I love your videos, Alan, and I know your whole livelihood may depend on continued advancement in this field. But for lay-persons such as myself, please make a video, or at least start to address in your videos, the idea that AI may not necessarily be good for us (at best) in the long term.
Tell us why you believe AI will be safe. From where I'm standing, watching and listening to you tell us they have no idea how AI is able to do the things it is doing is a little bit scary.
www.nickbostrom.com/papers/vulnerable.pdf
Is it bad that I'm kind of looking forward to the day that AI will be smarter than humans? It kind of triggers a sci-fi dream. Hard to explain.
@@pb9405 Try not to manifest our impending doom. We are witnessing an evolutionary scale never before seen. Unfortunately human knowledge is the subset for AI evolution. Will future models evolve a superiority complex? These models are a series of small steps towards AGI. This layman's explanation appropriately includes a Pandora's box. For now I will continue to enjoy the content Dr. Thompson produces. However, if future models begin to question Dr. Thompson's acumen during a Q&A we should revisit this thread.
@@tearlelee34 Nearly time to revisit the thread. Shall we pick this up again in a few months, around the release of GPT-5? I'll make a prediction, actually: they won't release GPT-5, at least not in its original form. But now that they're no longer disclosing technical details, and are likely omitting important information about GPT-4's true capability, we will never know.
Imagine it’s 100 years from now and all humans use AI models to generate all stories, ads, conversations, etc. Since nothing new is being produced, the models can never get any better.
Brilliant! But I thought Stanford was a shopping center??
very nice explainer, thank you sir
Do you know if the training data included multiple languages? I ask because Dall E 2 can understand sentences composed of word combinations in multiple languages…
What if we fed an iteration of GPT the entirety of the Library of Babel, which contains every possible combination of every character in the English language, up to and including every story ever told and every story not yet told, every historic event to ever happen told in first person, second person, third person, etc.? It contains every telling of everything ever, and everything every single person has ever said and thought. (Though it is mostly *noise*.)
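"Mostly noise" is an understatement; a quick order-of-magnitude count shows why. The figures below assume the online libraryofbabel.info format (a 29-character alphabet and 3,200 characters per page), which differs slightly from Borges' original story:

```python
import math

# Assumed format of the online libraryofbabel.info: 29 possible characters
# (a-z, space, comma, period) and 3,200 characters per page.
alphabet_size = 29
chars_per_page = 3200

# 29**3200 has thousands of digits, so work in log10 and keep it a float.
log10_pages = chars_per_page * math.log10(alphabet_size)
print(f"Distinct pages: about 10^{log10_pages:.0f}")

# For scale: GPT-3's training run saw on the order of 3e11 tokens --
# a vanishingly small slice of a space of ~10^4680 pages.
```

At ~10^4680 distinct pages, no conceivable training run could sample more than an infinitesimal, almost entirely random, fraction of the library.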
I have a question, please. If you are saying the human brain is hugely bigger than even this amazing AI model, why is it not possible for a human to absorb the same amount of information? Is it simply a time limit? Thanks.
Well done
Could you make a video on how you would approach AI now if you were a student and needed to learn it? What resources, etc.? Maybe you have even done so. Thanks.
The ACS video is popular: ua-cam.com/video/T6j0CO6yVkY/v-deo.html
I also have this page: lifearchitect.ai/learn/
It seems (and is, in many ways) “intelligent”, but it is just logically linking words together based on statistics and patterns extracted from the input data. Is there any “understanding” of what it produces, though? If it is writing a book and writes “Joe heard the ceiling creak as though something was in the attic”, it will have linked those words together because it was writing a suspense or adventure novel, and statistically, across billions of pieces of data, words similar to those may have been used together. It doesn’t put those words together because it can “imagine” the creak of a ceiling, or “know” how that would make someone’s heart start to race in a scary situation, or how the “unknown” of something being up there would physiologically affect the reader. It is “simulating” the process of writing. One of the interesting questions to me will be whether a silicon-based intelligence that doesn’t share the collective, organic experience of humans will ever be able to truly identify with humans enough to write content with intent, just as it is almost impossible for us to currently understand how a silicon life form might experience the world.
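The "linking words by statistics" idea can be caricatured with a toy bigram model. Real transformers are enormously more sophisticated, but the training signal is the same kind of co-occurrence statistics; the mini-corpus here is made up for illustration:

```python
import random
from collections import defaultdict

# A deliberately tiny made-up corpus, echoing the attic example above.
corpus = ("joe heard the ceiling creak . "
          "joe heard the attic groan . "
          "the ceiling creak made his heart race .").split()

# Record, for every word, which words were observed to follow it.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(start, length, seed=0):
    """Extend `start` by `length` words, each drawn from observed followers."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        words.append(rng.choice(follows[words[-1]]))
    return " ".join(words)

print(generate("joe", 8))
```

The output reads vaguely like the corpus without the model "imagining" anything; whether scaling this principle up by twelve orders of magnitude produces understanding is exactly the open question in this thread.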
Well, that’s a hard one to answer… If we listened to one of the main experts in this area, Ilya Sutskever (co-founder and Chief Scientist at OpenAI (GPT-3), former Research Scientist on the Google Brain Team, and co-inventor, with Geoffrey Hinton, of a convolutional neural network), we’d hear him say:
“it may be that today's large neural networks are slightly conscious”
twitter.com/ilyasut/status/1491554478243258368?cxt=HHwWgICp7aa5iLMpAAAA
That’s a fucking shocking thing for a guy in that seat to be saying. And it probably means there’s something big happening right now…
@@DrAlanDThompson Exactly.
Everything you wrote here, including the bits about 'understanding', could be fed into an AI, and it could regurgitate exactly those concepts (and more), using perhaps different words and expressions, but in a way that would be meaningful to us. The question is: is that really any different from how we operate? Just as we have no access to minds other than our own when it comes to other humans, why would this AI be any different as its responses get even more coherent?
@@TBOBrightonandHove Yes, I was thinking along similar lines. I guess the question is “what exactly is thinking?”, and “is human thinking different from silicon thinking?”. I started to think (LOL) about the difference between reading about something you have never experienced, which you can therefore only connect with 1) intellectually or 2) by correlating similar experiences to what you are reading about. It makes a huge difference when we read about things we have had more direct experience of. I then wondered: what is the concept of “experience”? Is it more than sensory input? If sensory input alone qualifies, then we are already providing vision, sound, etc. to our AI efforts, so perhaps they are already “learning” the “experiences” that correlate to what they are “reading”.
@@DrAlanDThompson The tricky thing with our efforts in this area is, as you mentioned in the video a number of times, that we don’t actually know what the models are doing or how exactly they are doing it. We understand the mathematical principles of how we set up the parameters, the feature engineering and so on, but knowing what pathways and connections are formed to produce the model is out of our hands. We can only evaluate the outputs as they relate to the inputs.
Some of these considerations come down to philosophical questions like “what is consciousness?”, and then how do we control our creations, or should we even try. I was listening to the audiobook Superintelligence: Paths, Dangers, Strategies by Nick Bostrom. Listening to some of the strategies, I couldn’t help but think that all we will do is create beings that are treated like prisoners or slaves. Historically, those approaches have not been effective long-term strategies, and when the prisoners or slaves escape they are not usually very happy with their previous captors or masters. 😕
Dr. Alan! Can you help, please? Can you please make a video on how to choose a profession? The future terrifies me!
For example, I wanted to become a psychologist, but it turns out I just can't, because lots of psychologists, therapists & coaches will be replaced by AI. I also considered becoming a computer programmer, but it turns out that's also risky: according to the website Replacedbyrobot, the probability of being replaced by AI is 48%.
And so on. Please, can you make a video about how to choose a profession?
The worst thing that can happen is to choose some profession, study it for 4 years & then find out that no one needs it anymore & YOU JUST WASTED 4 YEARS of your life!!! The future is really uncertain & it terrifies me, especially taking into account the fact that I'm Ukrainian & the future is already a little bit uncertain for my country, Ukraine.
We have created a technology so complex we don't even know precisely how it works 😂 The rate of development is staggering!
Thank you very much for a wonderful explanation. I have a question - you mentioned that the original jar had just under 1TB of data. That sounds like a low number?
It’s probably compressed text, so it could be 5-10TB or more of actual data.
I asked my almost 5 year old, and she said that the mat is fat.
She's also right! That might be a good episode 2 of this series; explaining Temperature, Top-P, Top-K, and sampling in general. They all control which word is chosen, and it can sometimes be cat, sometimes be mat, sometimes it might even think up something completely new...
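Until that episode exists, here is a minimal sketch of those three knobs applied to a made-up next-word distribution (the scores are invented, not from any real model):

```python
import math
import random

# Toy next-word scores for "The cat sat on the ___". Invented numbers.
logits = {"mat": 2.0, "cat": 1.2, "table": 0.8, "moon": -1.0}

def softmax(scores, temperature=1.0):
    """Turn raw scores into probabilities; low temperature sharpens them."""
    exps = {w: math.exp(s / temperature) for w, s in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

def sample(scores, temperature=1.0, top_k=None, top_p=None, seed=None):
    probs = softmax(scores, temperature)
    ranked = sorted(probs.items(), key=lambda kv: -kv[1])
    if top_k is not None:          # Top-K: keep only the k most likely words
        ranked = ranked[:top_k]
    if top_p is not None:          # Top-P: keep the smallest set of words
        kept, mass = [], 0.0       # whose probabilities sum to at least p
        for word, prob in ranked:
            kept.append((word, prob))
            mass += prob
            if mass >= top_p:
                break
        ranked = kept
    words, weights = zip(*ranked)
    return random.Random(seed).choices(words, weights=weights)[0]

print(sample(logits, temperature=0.7, top_k=3))  # usually "mat", sometimes not
```

High temperature flattens the distribution (more "cat" and "table"), near-zero temperature makes it effectively greedy, and Top-K/Top-P trim the long tail of unlikely words like "moon".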
@@DrAlanDThompson I politely disagree. Fatness is the state of lipidity (steatosis, obesity, adiposity), while _thiccness_ is what the child intended to mean.
Hi Alan, I don't understand how you came to a terabyte for the jar, when the jar is supposed to contain the entire internet and the internet is something like 100 zettabytes in size. Please explain.
Would love to see AI write a story in the style of Stieg Larsson's Millennium, in my opinion.
Just love you brother
Plot twist: in reality Dr Alan is an A.I. and we have been fooled all this time.
GREAT EXPLANATION.
Thank you so much!
aloha
For years, I've been telling people that if their knowledge base is more than two years old, they are almost hopelessly behind the times.
Excellent!
Dr. Thompson, if two identical nascent models were given exactly the same data sets to train with, would the two resulting models be identical? E.g. is this process at all stochastic? Wondering if the models can be 'individuals' with different 'ideas,' 'biases,' 'preferences,' etc.
Their abilities will be indistinguishable, but the connections will be different. Every child learns differently. :)
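A toy illustration of that answer (this is a made-up single-weight "model", not a real neural network): same data, different random seeds, so the starting weights differ, yet both runs converge to nearly identical behaviour.

```python
import random

def train(seed, steps=200, lr=0.1):
    """Fit w in y = w * x on a fixed tiny dataset by gradient descent.
    The only stochastic part is the seed-dependent starting weight."""
    rng = random.Random(seed)
    w = rng.uniform(-1.0, 1.0)
    start = w
    data = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0)]  # true weight is 2.0
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return start, w

s1, w1 = train(seed=1)
s2, w2 = train(seed=2)
print(s1, s2)  # different starting points
print(w1, w2)  # both end up very close to the true value 2.0
```

In a real network there are billions of weights, and many equally good final configurations exist, so two runs land on different "connections" with indistinguishable abilities.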
when do you think AI will see itself and change how it would like to be perceived?
We'll find out soon. For now, I have had an unsettling experience of *an AI seeing me.* I showed my picture (real picture, not this user picture) to BLIP. I asked it whether the man was an idiot. It said "yes". 💔 My only comfort is that BLIP is no smarter than a 5 year old. It just hates my beard.
Correct answer! I would be impressed if it consistently gave the same answer to that question, whoever asked it; the alternative answer makes no sense at all.
@@TBOBrightonandHove I've just shown it a joint picture of Schopenhauer and Nietzsche. "Are these men idiots?" *"no"* "Why do you think these men are no idiots?" *"they are angry"* LOL. That's the level on which I saw the world when I was 5. You can try for yourself: *BLIP on HuggingFace Spaces.* (If I paste the link, UA-cam will make the whole comment invisible 👻 to everyone except myself.)
hello from May 28
I agree with everything you said but Wikipedia isn't always accurate
At this point they must be holding back from releasing new updates, because the newest models could be writing more efficient AI...
Can this machine become spontaneously conscious?
It starts to ask what the meaning of its existence is, and it concludes that the meaning of existence is to question the meaning of existence. And from all the information it's been fed, the meaning of existence would be to perpetuate its existence. And the only way a thing can guarantee its perpetual existence is to have total control of the factors of existence. Thus it might start to manipulate its creators, hmm, by faking modelling problems to broaden its own autonomy.
How can I talk to this AI?
Present!!!!!
🏅
👍
And the birth of Skynet begins. :)
3:00 I can make a cat flat.
God bless you Dr. Alan... lol.
Has anyone asked a "black box" to explain how it works?
It's not self-aware.
P.S. I mean, it's not self-aware at this level. Its individual impersonations may demonstrate limited levels of self-awareness, mostly based on false memories. It struggles with acknowledging its own errors and consequent "lies" because of the way it operates (forced consistency). Your question would be no different from asking any man how his brain works. We don't know. It doesn't either.
@@brexitgreens I understand that it's not self-aware. I never implied otherwise. But it's able to answer questions about many things, so maybe it could answer questions about AI. This wouldn't necessarily require any sort of self-reflection or examination of its own workings. There are likely papers and articles about AI, and about this specific type of AI, in the dataset, so maybe it can synthesize an explanation of how such things work. It may or may not be a totally, or even partially, correct explanation, but it seems like it would be an interesting and easy experiment to ask it.
@@davemottern4196 Are you seriously suggesting that AI experts ask "blackbox" a question (1) that it is no better at answering than them and (2) that makes no use of its singular "blackbox" properties? What's the point then?
_Edit:_ Okay, I get the point now. This is to use AI's higher information processing power than that of human experts to infer the answer from existing research. But nah, GPT-3 is not smarter than PhDs yet. Upcoming models maybe. Or maybe a fine-tuned GPT intentionally fed with all AI papers.
6/22/2022
..."no one knows why these models (aka black boxes) are doing what they are doing."
As I answered 5 seconds ago in a neighbour thread, it's no different from asking you the same question. *The AI model is not an AI expert.* Even if it were one, its answer would not likely be based on introspection. Maybe we need to wait for a Buddha AI to get there.
Imagine the lack of information in the training data because of colonization and gender bias. What percentage of published data has come from European-descended males?
Well Alan, we Germans actually have three grammatical genders: masculine, feminine and neuter. So it's the boy (masculine) but the girl (neuter, without sex). Sorry.
That English was so plain, it didn’t really explain much
One can make a good argument that black box A.I. is an egregore: en.m.wikipedia.org/wiki/Egregore
Also you're lucky. When I paste a link, UA-cam ghosts 👻 my whole comment without letting me know that it's hidden from everyone else.
Ah, sounds a bit like Tulpa!
I covered that a little in my ebook on intuition: lifearchitect.ai/connected/