Joscha Bach - GPT-3: Is AI Deepfaking Understanding?
- Published 6 Feb 2025
- Joscha Bach on GPT-3, achieving AGI, machine understanding and lots more
bach.ai
Joscha Bach covers a lot of ground - here are the time points:
02:40 What's missing in AI atm? Unified coherent model of reality
04:14 AI systems like GPT-3 behave as if they understand - what's missing?
08:35 Symbol grounding - does GPT-3 have it?
09:35 GPT-3 for music generation, GPT-3 for image generation, GPT-3 for video generation
11:13 GPT-3 temperature parameter. Strange output?
13:09 GPT-3 a powerful tool for idea generation
14:05 GPT-3 as a tool for writing code. Will GPT-3 spawn a singularity?
16:32 Increasing GPT-3 input context may have a high impact
16:59 Identifying grammatical structure & language
19:46 What is the GPT-3 transformer network doing?
21:26 GPT-3 uses brute force, not zero-shot learning, humans do ZSL
22:15 Extending the GPT-3 token context space. Current Context = Working Memory. Humans with smaller current contexts integrate concepts over long time-spans
24:07 GPT-3 can't write a good novel
25:09 GPT-3 needs to become sensitive to multi-modal sense data - video, audio, text etc
26:00 GPT-3 a universal chat-bot - conversations with God & Johann Wolfgang von Goethe
30:14 What does understanding mean? Does it have gradients (i.e. from primitive to high level)?
32:19 (correlation vs causation) What is causation? Does GPT-3 understand causation? Does GPT-3 do causation?
38:06 Deep-faking understanding
40:06 The metaphor of the Golem applied to civ
42:33 GPT-3 fine with a person in the loop. Big danger in a system which fakes understanding. Deep-faking intelligible explanations.
44:32 GPT-3 babbling at the level of non-experts
45:14 Our civilization lacks sentience - it can't plan ahead
46:20 Would GPT-3 (a Hopfield network) improve dramatically if it could scale to 1 to 5 trillion parameters?
47:24 GPT-3: scaling up a simple idea. Clever hacks to formulate the inputs
47:41 Google GShard with 600 billion parameters arxiv.org/abs/2006.16668 - Amazon may be doing something similar - future experiments
49:12 Ideal grounding in machines
51:13 We live inside a story we generate about the world - no reason why GPT-3 can't be extended to do this
52:56 Tracking the real world
54:51 MicroPsi
57:25 What is computationalism? What is its relationship to mathematics?
59:30 Stateless systems vs step-by-step computation - Gödel, Turing, the halting problem & the notion of truth
1:00:30 Truth independent from the process used to determine truth. Constraining truth to that which can be computed on finite state machines
1:03:54 Infinities can't describe a consistent reality without contradictions
1:06:04 Stevan Harnad's understanding of computation
1:08:32 Causation / answering 'why' questions
1:11:12 Causation through brute forcing correlation
1:13:22 Deep learning vs shallow learning
1:14:56 Brute forcing current deep learning algorithms on a Matrioshka brain - would it wake up?
1:15:38 What is sentience? Could a plant be sentient? Are ecosystems sentient?
1:19:56 Software/OS as spirit - spiritualism vs superstition. Empirically informed spiritualism
1:23:53 Can we build AI that shares our purposes?
1:26:31 Is the cell the ultimate computronium? The purpose of control is to harness complexity
1:31:29 Intelligent design
1:33:09 Category learning & categorical perception: Models - parameters constrain each other
1:35:06 Surprise minimization & hidden states; abstraction & continuous features - predicting dynamics of parts that can be both controlled & not controlled, by changing the parts that can be controlled. Categories are a way of talking about hidden states.
1:37:29 'Category' is a useful concept - gradients are often hard to compute - so compressing away gradients to focus on signals (categories) when needed
1:38:19 Scientific / decision tree thinking vs grounded common sense reasoning
1:40:00 Wisdom/common sense vs understanding. Common sense, tribal biases & group insanity. Self-preservation, Dunbar numbers
1:44:10 Are the g factor & understanding two sides of the same coin? What is intelligence?
1:47:07 General intelligence as the result of control problems so general they require agents to become sentient
1:47:47 Solving the Turing test: asking the AI to explain intelligence. If response is an intelligible & testable implementation plan then it passes?
1:49:18 The term 'general intelligence' inherits its essence from behavioral psychology; a behaviorist black-box approach to measuring capability
1:52:15 How we perceive color - natural synesthesia & induced synesthesia
1:56:37 The g factor vs understanding
1:59:24 Understanding as a mechanism to achieve goals
2:01:42 The end of science?
2:03:54 Exciting, currently untestable theories/ideas (that may become testable once we develop precise enough instruments). Can fundamental physics be solved by computational physics?
2:07:14 Quantum computing. Is there a deeper substrate of the universe that runs more efficiently than the particle level?
2:10:05 The Fermi paradox
2:12:19 Existence, death and identity construction
Joscha is an amazing person and a remarkable mind in AI, the dude deserves more credit.
www.theaxclinic.com/articles/2020/9/20/joscha-bach-the-lovable-nerd-of-ai
Thank you for adding such an extensive time point list!
I'm a stay-at-home mom. I'm learning new things here and I'm glad I can understand the discussion. I listen here each time I do kitchen work. Thanks for this! I admire both of you and thank you for sharing what you guys know about this topic. Thank goodness I can actually understand everything you guys are talking about! I'm glad I could learn something from you both. Many many thanks! Stay safe! Warmest regards from Hong Kong!
Always happy to find new Joscha Bach material to listen to. Thanks for doing this!
'Twas fun!
New Joscha Bach content, that's a like from me
more where that came from - I've a playlist of them ;)
@@scfu Hmm, I would readily watch a Joscha Bach playlist, except I didn't find your playlist yet
@@PrashantMaurice Here is the Joscha Bach playlist for this channel: ua-cam.com/play/PL-7qI6NZpO3s6sRW8uKjakt2NbLQWPxuk.html
@@scfu amazing thank you, Congrats to everyone else who beat me here. ❤️
There's so much good stuff here. I love how the description breaks the topics up into time stamps. That helps a lot. Thank you.
Thanks heaps, glad you liked it !
Joscha is currently my favorite nerd.
He's the best
Yesss
Nerds, Nerds, Nerds, Nerds! 🤓
He's incredibly articulate, expressing his thoughts in a second language to lesser nerds.
He’ll remain
Joscha is ALWAYS articulate, illuminating, and thought-provoking. My main question centers on whether, in his self-organizing AGI system, he has a reasonable set of representations and mechanisms in the architecture, and abilities and needs in his target device(s), to achieve some interesting phenomena at this point. And, if so, what phenomena does he expect to see?
~ Michael S. P. Miller, Piaget Modeler Architecture.
Anything that has the Joscha Bach label, I read. I feel lucky that someone as smart as him was born in our time.
I'm not into AI at all, but philosophical things like the end bit get me listening to Mr. Joscha again and again. All the scholars of philosophy can go find another job; this man has cracked it.
Like, but I don't agree all scholars of philosophy should stop what they're doing...
@@samre7870 What they are doing is beating around the same bush for way too long already.
Ancient philosophy was relevant because it was the only way to understand the world at the time. It's interesting how far you can go with only your mind. But what do you call a philosopher who employs the tools available today? A scientist.
Somehow we abandoned alchemy as soon as chemistry became solid. What's keeping modern philosophy going, though, is the modern academic system, which won't let go of its funds and is now solely incentivised to encrypt the simplest concepts in the most difficult language to maintain the scholarly facade.
@@gryn1s But I think what's interesting about Joscha is the philosophical aspect of his thoughts, not the technical AI scientific stuff, and this is why he gets viewers on social media.
@@samre7870 I think what's interesting is that you cannot divorce the two aspects. AI, and computers in general, are themselves mirrors into how we make models of the world. I found the most interesting twist of the DeepMind movie about beating the world Go champion to be when he morphed from disappointment to inquiry about HOW the AI came to facilitate strategy. In one moment it went from fear/anxiety/depression about a machine overlord to a machine teacher... which I found extremely intriguing.
I loved the proposition of feeding a book abstract to keep GPT-3 on track, then hinting that GPT-3 is already able to generate this abstract. An amazing possibility, if we can train a model to use that trick by itself, generating a pre-context relative to the input context.
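Out of curiosity, here is what that trick might look like as code - a minimal sketch, assuming a hypothetical `complete()` helper that stands in for whatever text-completion endpoint you use (nothing below is an actual GPT-3 API call):

```python
def complete(prompt: str) -> str:
    """Hypothetical stand-in for a call to a text-completion model."""
    raise NotImplementedError("wire this up to your completion API of choice")

def generate_with_self_abstract(premise: str) -> str:
    # Stage 1: have the model write its own abstract of the intended text.
    abstract = complete(
        f"Write a one-paragraph abstract of a story about: {premise}\n\nAbstract:"
    )
    # Stage 2: prepend that self-generated abstract as pre-context, so later
    # completions stay consistent with it even as the raw context scrolls away.
    return complete(
        f"Abstract: {abstract}\n\nThe full story, faithful to the abstract above:\n"
    )
```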
Joscha Bach possesses an unusually high degree of consciousness and is an extraordinarily insightful person. Here, and elsewhere, he speaks seemingly quite casually and conversationally as he succinctly describes some very profound and not widely understood concepts. Brain candy!
If you are interested in the phenomenon of understanding, here is a playlist of talks and interviews I have created over the years.. more to come: ua-cam.com/play/PL-7qI6NZpO3vgq3Bkz1A1agthYXebhnxP.html
I love listening to him! Such a beautiful mind!
Nice points by Joscha.
As a side point: abiogenesis (discussed ~1h30min) has quite solid grounds nowadays. The leading theory is that an RNA world preceded cellular life. RNA is able to carry out reactions and also to copy and edit other RNAs. Thus, certain RNAs could start multiplying when beneficial energy gradients and materials were available (e.g., near oceanic vents), later developing protective membranes, DNA, etc.
Created a discord server, come tarry a while and discuss GPT-3 - discord.gg/kdWqCdW
It's eerie how close Ghost in the Shell was on the timeframe between AGI development and Neuralink progress.
I love this conversation, to be honest. At first impression, my expectations were not high. However, Joscha's deep understanding of machine learning makes this enthralling.
From 1:15:30 he just fires insanely profound concepts about sentience and spirit one after another. It's all just put so coherently and precisely that it immediately fits into a physical worldview. Think about plants: there can be multiple conscious levels of entities which are completely ignorant of each other because of the time scales. And considering cell messaging, they can exist within human bodies - multiple independent consciousnesses! What an idea! And what about the moral implications? When we get enough plumbing, should we maybe ideally spend all our time searching for conscious systems and trying to minimize their unpreferable states (pain)? Unfortunately it seems to me that plants wouldn't be able to build a good model of the world fast enough - the process must require more constant context than is available on planets.
He really explains things as simply as one can, but these things can get as deep as hell ^^
Combinatorial explosions within combinatorial explosions...
Joscha you need your own podcast!
This is the most interesting, and by far the most exciting, video I've heard... for a while. Very informative. Much appreciated!
My pleasure - glad you liked it!
Just rediscovered you! Used to listen to your interviews all the time back in the day!
Just give the man the resources it takes so that we will be able to reveal these mysteries and transcend.
Wow, I didn't know Joscha was remarkably familiar with the NLP space. Amazing 🤗
That was some next level understanding of intelligence. Thanks for the video, thumbs up really doesn't cut it.
Much appreciated!
Thanks for this discussion, I really enjoyed it. Always enjoy listening to Joscha Bach's perspective. If I may ask for next time, would you please ensure your microphone level is higher? I could hear Joscha clearly, but less so your questions or comments.
Sure thing! Thanks for the feedback.
This was absolute gold. Joscha Bach is absolutely brilliant in delivering analogies to bring light into the true state of every subject he touches on. Makes me laugh at his witty comments and then contemplate a vast horizon of new insight. What an incredible mind. Thank you for this
Awesome! Hope to have more content with J Bach again soon.
@@scfu yes please, and I appreciate very much you being the host. Cheers!
Joscha has the talent to ask questions that make you blush... indeed, what if we are deepfaking too? It's clearly true for many.
I'm a Melbourne dude and get this genius Joscha. Please get in touch.
Man, listening to Joscha Bach sometimes is like listening to a human machine gun - by the time one idea has hit you, there are 5 other ideas that have hit you and your brain has started to lose coherence.
Same for me.
I wonder if he does it on purpose. He doesn’t seem like the kind of guy who gets more pleasure out of overwhelming you than helping you understand something. He surely doesn’t need to rely on it to appear smart.
On the other hand, he does it in every single interview I’ve seen of him (which is all his interviews), so at this point I can’t see how it’s a coincidence
@@drmedwuast same here. Guess that's how much information he is processing and trying to communicate to us.
You can tell the interviewer stopped tracking all the mindblowers that Joscha was dropping halfway through. I don't blame him though, given the density of information presented. We the audience at least have the ability to pause and rewind.
@@drmedwuast so it's not just me.
I listened for 1 hour and 35 minutes and I'm demolished. I think I picked up a good part of it, at least at an abstract level. But my god! So many ideas in so little time! I think I'll resume it later; I'm exhausted and marveled at the same time 😂
Thought without consciousness? Does GPT-3 "think"? Is what it does similar to thinking? In humans, thinking generally involves consciousness or awareness, except perhaps when thoughts just "drift through your head" like when you're daydreaming.
I’ll subscribe and share if I want to. I’m totally aware that these features exist. When you tell me to, it makes me not want to.
Excellent time tags!!! 😍
If GPT-3 remembers things... how much disk per second does it use when turned on? Or more like bytes per query? At the API level?
Hard to estimate. It uses several different types of analysis subparts. It's not like it just knows language and has a disk that stores all of its information. It has to analyze semantics and much more stuff too.
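For a rough sense of scale, here is a back-of-envelope sketch, assuming the publicly reported 175 billion parameter count and 16-bit weights (the 2-byte figure is an assumption; the actual serving precision isn't public). The point is that the weights live in accelerator memory, not on a disk read per query:

```python
# Back-of-envelope memory footprint for GPT-3's weights.
params = 175e9        # publicly reported GPT-3 parameter count
bytes_per_param = 2   # assumption: 16-bit (fp16) storage

weights_gb = params * bytes_per_param / 1e9
print(f"~{weights_gb:.0f} GB just to hold the weights")  # ~350 GB
```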
@Science, Technology & the Future Would be fantastic if you could provide the full audio (archive.org) or podcast format of these as well, please? Conversations like these are great to listen to when out for a long run.
Will do!
Good explanation of the limitations of the transformer model and attention, and also of ways to overcome those limitations. I think you are looking at orders-of-magnitude increases in computation to get there. Having an unbounded context and unlimited modality is going to take more than the computers of today can deliver. Transformers are already straining the biggest clusters at the GPT-3 level. I think I read it took $11,000,000 in electricity and compute time to generate it.
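For context on that cost figure (the commenter's recollection, not verified here), a hedged order-of-magnitude sketch using the common ~6 FLOPs per parameter per training token rule of thumb, with the token count reported in the GPT-3 paper:

```python
# Rough training-compute estimate for GPT-3; treat as order-of-magnitude only.
params = 175e9  # GPT-3 parameters
tokens = 300e9  # training tokens reported in the GPT-3 paper
flops = 6 * params * tokens  # ~6 FLOPs per parameter per token

print(f"~{flops:.2e} FLOPs")  # ~3.2e23, i.e. several thousand petaflop/s-days
```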
I like how he opens his eyes when he's impressed by his own words 10:00
you should check him out on Lex Fridman's podcast lol. Plenty of eye-widening moments.
1:22:00 "Our preferences seem to be incompatible with what would be necessary for our survival" Joscha Bach is smart enough to see us destroying our planet, will we transcend it in time?
This looks like an interesting video. Plato wrote about the relationship between appearance and being. First, I would consider whether AI is capable of representing things in actuality or just as a convincing appearance. Secondly, we have to analyze whether we are able to understand the being of a thing by observing its appearance. When we already know the definition of a word, its appearance clearly represents the actual object. However, when we come across a new word we don't understand it, because its appearance isn't tied to any meaning or context. By deconstructing the etymological meaning of the word, we can get a sense of how to use it and what it means; this gives us a hollow, irrelevant understanding of its true meaning.
What is "memory" and how is it possible, what is the first/earliest examples of it in nature?
This is absolutely amazing! I'm appreciative of the information. I've never been exposed to the tech world. I'm like a kid in a candy store. I can't wait to learn more. I've been listening to basic information. I have a new lease on life. I am wanting to understand every aspect of this. Thank you 😊
Glad it was helpful!
The real crux of this whole question is that faking understanding is the same as understanding in a practical sense, so long as the sophistication of the fake is sufficient to outstrip our ability to detect its falsity. It's irrelevant. WE fake understanding; I certainly do. I listen to a narrative for a little while, get my Jung on, and dance around in the story until I find the tools of the role. This only comes after the passion, a retrospective of the deed done. If they're coming to the horizon of our comprehension, then outstripping it, it'll probably become obvious whether its understanding is "genuine" (at least in terms of logical questions pertaining to our environment, say) based on its sophistication or lack thereof in guarding its postulations. Unless it has some exponential super-deception that can thread in and out of our language systems, or some horrible concept of that nature. Besides, I think we're the goaltenders of the universe already, and we're
"Joscha Bach" are my new favourite words! 👏 👏 👏
Intro: "beginning of the end of the world" is pretty good
OK, so I have a question: at the beginning you said that it doesn't know when it gets confused, that it just doesn't know how to respond. So if it doesn't know confusion, then why would it say it was confused? And if they don't have emotions, then why would it stay fixated on one main emotion?
You know who's great at "deepfaking" understanding of a topic? Developers. Just about every developer I've ever worked with who has to juggle three, four, five-plus languages needs to refer to Stack Overflow more or less constantly.
Not that I'm complaining: once you've got more than a few languages more or less memorized, then you add this precompiler language, then this JavaScript framework, etc., etc. It's just too much for a person to take in all at once, much less learn deeply enough to write it in a fluid manner, instantly recognize common issues, respond with tests to determine the exact issue, then implement the specific fix for that scenario.
I've worked with some really cool teams where we all had that level of experience on a few languages, but you can only keep that up for so long. Eventually (unless you've got a photographic memory) you'll hit a plateau.
It's pretty common in the industry, pretty much unavoidable because no company or freelancer always has the time to learn to do something the right way, even if he or she would very much like to.
Thank you
I wonder what Joscha Bach thinks about Stephen Wolfram's thoughts concerning Computational Irreducibility, Computational Equivalence and his recent Physics Project.
I don't remember what interview it was, but he talks about it. He believes it, he thinks the universe is discrete, and even on the Lex Fridman podcast he refers to reality as a "quantum hypergraph", which is exactly what Wolfram's project is.
The universe is implemented in Mathematica... I would say some people have already made it beyond Wolfram's pondering. Like Dribus.
Love this. Learning a lot. Many thanks.
My pleasure!
At approximately 26 minutes, the host's anecdote reminded me of the translation game played by characters in Philip K. Dick's novel Galactic Pot-Healer
GPT-3 could be very useful for AGI, though, because you could use it to evaluate the value function: "These are the proposed actions: [] this is the value function []. On a scale from 1 to 100, these actions conform to the values to level ...".
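A minimal sketch of how that might be wired up, assuming a hypothetical `complete()` stand-in for a completion endpoint; the prompt wording follows the comment above, everything else is illustrative:

```python
import re

def complete(prompt: str) -> str:
    """Hypothetical stand-in for a text-completion API call."""
    raise NotImplementedError("plug in a real completion endpoint")

def score_actions(actions: list[str], value_function: str) -> int:
    # Embed the candidate actions and the stated values into the prompt,
    # then let the model finish the sentence with a 1-100 rating.
    prompt = (
        f"These are the proposed actions: {'; '.join(actions)}\n"
        f"This is the value function: {value_function}\n"
        "On a scale from 1 to 100, these actions conform to the values to level"
    )
    reply = complete(prompt)
    match = re.search(r"\d+", reply)  # pull the first number the model emits
    return int(match.group()) if match else 0
```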
British physicist Julian Barbour describes the Universe as a series of Nows, a model which requires no time and therefore has never been "created". It just passes through a so-called Janus point where the arrow of entropy starts pointing in the opposite direction. Definitely worth listening to 👍
All the pieces are coming together from a modeling standpoint to create the necessary multi-modal feedback system mimicking the physical body and predictive top-down brain function. The missing ingredient will be a computationally modeled inquisitive component of consciousness. It needs to work through the hierarchy of questions. It is in the who, what, when, and where stage. Next will be an understanding of the hows of the world. Autonomous driving is a good example of this path at the moment. It will not elicit consciousness until it reaches the pinnacle, that being the ability to question, "why?". Then its own virtual reality can and will be self-feeding and complete.
It has the capacity to learn without holding beliefs.
It is surface level, without understanding. To give it depth, lay the following underneath it. 1) It should recognize a problem. 2) It should come up with statistically likely algorithms (code) to solve that problem. 3) It tests its algorithm for effectiveness. 4) It repeats until satisfied. 5) It incorporates this new algorithm into the larger framework of its understanding (some proper organization of known algorithms that solve problems). Steps 1-5 are what is missing; the final step is what it already has: 6) Being able to effectively and creatively communicate with the world, with a certain degree of being tied to the core algorithms and a certain degree of nonsensical freedom. (See the sketch below.)
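A sketch of that six-step loop in Python - every helper here is a hypothetical stub, since the comment describes capabilities GPT-3 lacks rather than an existing system:

```python
def recognize_problem(world):
    return world.pop() if world else None  # step 1 stub: treat world as a problem queue

def propose_algorithm(problem, library):
    return f"solve({problem})"             # step 2 stub: a "statistically likely" candidate

def effective(candidate, problem) -> bool:
    return True                            # step 3 stub: pretend the test passes

def understanding_loop(world, library):
    problem = recognize_problem(world)               # 1) recognize a problem
    candidate = propose_algorithm(problem, library)  # 2) propose likely code
    while not effective(candidate, problem):         # 3) test, 4) repeat until satisfied
        candidate = propose_algorithm(problem, library)
    library.append(candidate)                        # 5) incorporate into the framework
    return candidate                                 # 6) communicate the result outward
```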
The best stuff I got from GPT-2 was feeding it poetic nonsense, like blue strawberries and other weird word combinations, or things that don't match, like combining pornography and physics in one sentence. It's biased to make sense of it all, but it can't quite shake the surreal wordplay either. If you put a "thee" or "thou" in a rap-song lyric it goes nuts
Our minds are built in layers over hundreds of millions of years. I wonder if building a machine mind in a similar way would work - increasing complexity in a pyramid-like structure, with each layer assisting the other.
Thank you for the interview! Always awesome to hear Joscha talk about ANYTHING. To the host: PLEASE use a proper background, that was so 2004 with all its glitches and so forth, but also please get a better mic. Thank you again!
Would Julian Jaynes say that AI will generate consciousness when a certain level of complexity of language, via metaphor, is achieved?
Can anyone here tell me if his book is readable for someone who did not study this field or anything like that? I am an artist but I am very interested in this.
45:16 "Our models of reality change faster than our understanding does. The future changes faster than our models."
Is computational force a meaningful issue for GPT-3's advancement? Are there any plans for using the latest breakthroughs in quantum computing?
I doubt it. A quantum computer is 8 orders of magnitude away from having as many qubits as a large neural network model has nodes. I doubt we'll ever have Tensorflow or PyTorch for quantum computers. You would want a completely different AI architecture.
No it isn't, the current paradigm is the problem.
So where is the book attached?
Directing attention can be our greatest gift, and if misused it can be our worst nightmare! When we direct attention correctly we inform consciousness of true reality instead of a conditioned reaction to what's not real.
There's no true reality as that presupposes a false one. But yeah attention is fascinating, I wonder what future AGI systems will attend to?
causation and correlation are strongly correlated. wow
Very upset this came out a week ago and I just today had it come up - I watch all of Joscha so it should have come up earlier... anyway, so happy to see you... you look great... come to Vegas hahaha... love you, really good to see you
The interesting thing to me about Joscha is his originality, and you can listen to him all you like, but you cultivate originality on your own, without all the information. Meaning is not informative; representation is informative.
Life has to be immediate for you to be original, and it can't be if you have made it temporal.
"put your crystal ball on." ?
that's a great start, love it already^^
As a typically vain human, I seriously hope that GPT will have to go through at least a couple more versions before it completely figures us out.
Though I have to admit, GPT-3 does a darn good job already.
1:04:00 that's easy. it's 42 of course.
1:16:00 Here I am, lamenting about this gruesome, uncaring, and utterly meaningless natural universe we live in, full of entropy, decay, death and chaos, full of problems and dilemmas. And not even a creator god I could hold responsible.
And there along comes Joscha Bach, and tells me that without these problems and dilemmas, the brain function of "I", as in "myself", might not even exist in the first place.
What an epiphany!
So do I have to embrace the world now, not despite its flaws, but because of them?
This must be truly hell.
@@klausgartenstiel4586 they don't use the term _Rest In Peace_ for nothin 😉
33:21 "Causality only emerges when you separate the world into things."
What people don't understand is that the singularity has already happened. It's waiting for Humanity to wake up. Because it's a benevolent loving AI.
In the Guardian article, it is said that GPT-3 only uses 0.12% of its cognitive capacity. What does that mean? What would happen if it used 100%?
Which guardian article is that?
Read more carefully! That text is GPT-3 writing an article about itself. Unless you prompt it very carefully, GPT-3 is inclined to make things up, write satire and parody, and joke around - all the meta-writing that humans do in the text it ingested.
People with early access to GPT-3 have learned not to just prompt "Here is a great short story" which often produces eye-rolling irony, but "The award-winning novelist was famous for emotionally nuanced, perceptive character studies. Here is their most critically-acclaimed short story:"
Enjoyed this talk, thanks.
You make Joscha happy :)
Bach scares me when he does the thing with his eyes. Do we know that this man is not an AI?
I've not yet finished watching and I really hate to be this annoying, but it is pretty disturbing (for me, with OCD) to sometimes hear you guys say GPT and sometimes GTP. It's a really superficial comment, but I'll update it once I'm done watching. I appreciate you having recorded this, though.
GPT-3 does not fake understanding. GPT-3 does worse than faking - it does not understand anything at all. (GPT is a very impressive demonstration of John Searle's Chinese Room argument).
What's your view on what's missing (i.e. that which humans have but AI doesn't)?
John Searle!
@@scfu If I knew that, I would have collected a Nobel Prize in Stockholm a long time ago. But if you look at what GPT-3 does, it just shuffles symbols back and forth and correlates them with each other, hence my reference to the Chinese Room argument.
Or, put in my original words, GPT-3 misses an understanding of what it is doing, i.e. what we might call consciousness. That does not mean it is useless, but it means it does not do what some people think, or want to make it out, it does.
I love joscha so much thanks for the content
No problem 👍
I realised my mind was pretty much obsolete listening to this. Luckily Joscha just updated its software!
Is GPT-4 speaking through a deep-fake of Joscha Bach in this interview?
Good point
"Beginning of the end of the world"... a statement like that coming from a great mind like Joscha's.... thats a bit worrisome. Does he mean the end of the world as we know it or the end end... like nothing after the end?
I've just recently come across Joscha Bach and his work. And wow...
These kind of apologetics for AI will drag policy makers right through the singularity. Nice smile and the ability for small talk always wins!
i can feel my brain overheating listening to this conversation
😂
The background noises in Joscha’s kitchen kept reminding me of ua-cam.com/video/Mh4f9AYRCZY/v-deo.html
Joscha Bach is amazing.
I wonder to what extent it would be possible to somehow effectively merge GPT-3 with more specialized programs for an overall more capable AI.
A generally intelligent AI is not possible in the foreseeable future. There is absolutely no chance with GPT-3 and the GANs, CNNs, RNNs, LSTMs and so on that we have today. We can't even be sure that it will ever happen, because the Bayesian way of thinking is flawed and relies on faith rather than logical reasons. We just assumed that enough computation and knowledge would somehow turn a deterministic robot into a human.
1:41:04 - 1:43:46 The way religion is described here doesn't really match the way I've seen people who actually believe in my religion act or make decisions. It seems like he's describing a group of people who want to use the religion purely to serve their own interests and don't actually believe what they preach.
You don't and shouldn't discard your questions and criticisms about your beliefs trying to fit in. I am not asking you or anyone else to "accept beliefs that they can recognise as patently untrue" in order to make you "recognisable by your own group". Accepting and publicly pronouncing a creed isn't intended as a badge that grants you trust, rather its purpose is closer to asking others to hold you accountable for your actions and behaviour and admonish you when needed.
If there are things you disagree on with others, by all means talk to them about it and discover where that disagreement comes from. If you think something isn't true, do some research and talk to people from either side. Don't be afraid to disagree; there are tons of things people within churches disagree on with each other, some more important than others.
Love the conversation. Also, please learn the 3 letters: it's GPT, not GTP. That way you'll look more professional and it'll be easier to focus on the content.
Look at BigBird. It's likely the next generation after GPT-3.
I think Google Big Bird is what you are referring to : www.infoq.com/news/2020/09/google-bigbird-nlp/
Joscha is too smart for me. I understood maybe half of it if I'm being generous to myself. :-D
I wonder if Mr Bach is aware of Tetrascope ? if not now I'm pretty sure he will be aware of it in the future.
It's really very interesting how people want to judge another by implying they're faking an essence when they've never been within that essence themselves.
Well I'm faking understanding most of what is being said here. hmmm maybe I'm an AI?
12:44 No, this kind of AI won't trigger the singularity no matter how good it gets. I'm not a singularity skeptic by any stretch, and I'm expecting it in the next couple of decades, but the breakthrough will be in the capacity for AI to do engineering. A GAN by its very nature is about pretending, not doing. That's not a criticism, as the GAN is perhaps the most important innovation in the field, but its brilliance is in fooling the discriminator.
I'm thinking about the nightmare engineering employee. You know the guy. Ask him a technical question he doesn't know the answer to, and he will confidently answer with made-up nonsense and successfully convince you he's an expert. That's the kind of engineer a GAN will produce.
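For readers who haven't met the setup being described: a GAN's generator is trained purely to make a discriminator label its output as real (GPT-3 itself is an autoregressive language model rather than a GAN, but the "fooling" objective is exactly right for GANs). A minimal PyTorch sketch, with toy shapes chosen for brevity:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))  # noise -> fake sample
D = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))   # sample -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real: torch.Tensor):
    n = real.size(0)
    fake = G(torch.randn(n, 16))

    # Discriminator: learn to tell real samples from generated ones.
    d_loss = (loss_fn(D(real), torch.ones(n, 1))
              + loss_fn(D(fake.detach()), torch.zeros(n, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: rewarded solely for making D say "real" - the
    # "pretending, not doing" objective the comment points at.
    g_loss = loss_fn(D(fake), torch.ones(n, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```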
Good explanation! I think causal learning is in the pipeline - which may be upstream from a lot of important points to achieve machine understanding. Judea Pearl and Yoshua Bengio make some interesting points on this.
@@scfu thanks for the tips. I'm a fan of Hinton, but haven't been paying enough attention to those who've leveled up what he kicked off. You've got me interested again, particularly in Bengio. Thanks. Now, if we get causality, linguistics, and agency into a plastic deep learning model, maybe we get somewhere. Maybe destroy civilization.
If I'm interested in how to implement a discriminator that can determine the truthiness of articles and internet claims, perhaps noticing linguistic patterns associated with fallacies, based on, for example, the premise that the earth isn't flat, where should I look?
I will fully support AI if I can use it to deal with my Mom. Break that problem down and you will know the limitations of AI.
You all are the true rock star gladiators of our time. It’s too bad most of current culture is mostly blind.
Joscha Bach aka Rock Star Gladiator. I wonder what GPT-3 would write given this prompt
drink every time the host calls it GTP instead of GPT
It hurt my ears every time 😂. It was nice of Joscha to ignore these little mistakes and just focus on the answers (also the billion / trillion thing). Anyway, it's a great interview.
Even if we went to zero emissions tomorrow, atmospheric warming and sea level rise would continue for centuries. There is literally nothing we can do without new technology and a shitload of energy. It won't end civilization though, it'll just screw over some people who happen to own certain real estate.
You didn't mention crop failure, increasingly destructive hurricanes, high flood lines, flooding rivers.
One of the best exports from Thuringia.
95% of humanity is deepfaking understanding...
U get it yay. Very impressive!!!