Sorry folks - Just noticed there are a couple of visual references in the wrong place in the first 20 minutes, sorry - not worth pulling the video over though.
Please forgive me for bringing this up, but only in the past year have I accepted my baldness, and so I shave my head too. I just wanted to encourage you that you look very good with your head shaved.
After seeing so many intellectuals & academics on this channel, I’ve come to only really appreciate those that are involved in making their ideas concrete, actionable, & executable. Otherwise it seems like ALL OF THESE PEOPLE seem to be stuck in _Conceptual Hell_ . Never breaking out of the things trapping them inside their minds.
@@shoubhikdasguptadg9911 It took me a while to make sense of it too, I certainly felt what the problem is but didn’t have the right words to express it.
You're right. I'm one of those people stuck in conceptual hell. But we need both kinds of people for progress in the world. The recent advances in AI wouldn't be possible if not for those who burned in that hell long enough.
[00:02:35] Essay on AI benefits and safety | Dario Amodei darioamodei.com/machines-of-loving-grace
[00:04:50] POET algorithm for generating and solving complex challenges | Wang, Lehman, Clune, Stanley arxiv.org/abs/1901.01753
[00:07:20] Blue Brain Project: molecular-level brain simulation | Henry Markram www.epfl.ch/research/domains/bluebrain/
[00:08:05] DreamCoder: Wake-sleep Bayesian program learning | Kevin Ellis et al. arxiv.org/abs/2006.08381
[00:08:35] Computational models of human cognition | Joshua B. Tenenbaum cocosci.mit.edu/josh
[00:11:10] Why Greatness Cannot Be Planned | Stanley, Lehman www.amazon.com/Why-Greatness-Cannot-Planned-Objective/dp/3319155237
[00:14:00] Goodhart's Law on metrics as targets | Charles Goodhart en.wikipedia.org/wiki/Goodhart%27s_law
[00:17:05] Automated capability discovery in foundation models | Lu, Hu, Clune openreview.net/forum?id=nhgbvyrvTP
[00:18:10] NEAT: NeuroEvolution of Augmenting Topologies | Stanley, Miikkulainen nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf
[00:21:10] The grokking phenomenon in neural networks | Power et al. arxiv.org/abs/2201.02177
[00:26:50] Novelty search vs objective-based optimization | Lehman, Stanley www.cs.swarthmore.edu/~meeden/DevelopmentalRobotics/lehman_ecj11.pdf
[00:27:35] "I know it when I see it" obscenity case | Justice Potter Stewart en.wikipedia.org/wiki/I_know_it_when_I_see_it
[00:28:55] AI-generating algorithms approach to AGI | Jeff Clune arxiv.org/abs/1905.10985
[00:30:40] The invisible hand economic principle | Adam Smith www.amazon.co.uk/Wealth-Nations-Adam-Smith/dp/1505577128
[00:36:40] Genie: Neural network world simulation | Bruce et al. arxiv.org/abs/2402.15391
[00:36:45] Genie 2: Large-scale foundation world model | Parker-Holder et al. deepmind.google/discover/blog/genie-2-a-large-scale-foundation-world-model/
[00:38:05] Inductive vs transductive AI reasoning | Kevin Ellis et al. arxiv.org/abs/2411.02272
[00:38:45] Thinking, Fast and Slow | Daniel Kahneman www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374533555
[00:41:10] Learning Minecraft from human gameplay videos | Baker, Akkaya et al. cdn.openai.com/vpt/Paper.pdf
[00:44:00] Thought Cloning: Imitating human thinking | Hu, Clune arxiv.org/pdf/2306.00323
[00:47:15] The Language Game: Origins of language | Christiansen, Chater www.amazon.com/Language-Game-Improvisation-Created-Changed/dp/1541674987
[00:48:45] Facebook AI language creation fact check | USA Today www.usatoday.com/story/news/factcheck/2021/07/28/fact-check-facebook-chatbots-werent-shut-down-creating-language/8040006002/
[00:54:20] The Mind Is Flat: Improvising brain theory | Nick Chater www.amazon.com/Mind-Flat-Remarkable-Shallowness-Improvising/dp/030023872X
[00:57:50] Constitutional AI methodology | Bai et al. arxiv.org/abs/2212.08073
[01:04:50] Managing extreme AI risks | Bengio, Clune et al. arxiv.org/abs/2310.17688
[01:10:25] US Executive Order on AI regulation | The White House www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
[01:15:10] Automated Design of Agentic Systems | Hu, Lu, Clune arxiv.org/abs/2408.08435
[01:20:30] The Lottery Ticket Hypothesis | Frankle, Carbin arxiv.org/abs/1803.03635
[01:24:15] In-context learning in language models | Dong et al. arxiv.org/abs/2301.00234
[01:25:40] Meta-learning for exploration problems | Norman, Clune et al. arxiv.org/abs/2307.02276
[01:32:30] OMNI-EPIC: Open-endedness via Models of human Notions of Interestingness with Environments Programmed in Code | Faldor, Zhang, Cully, Clune arxiv.org/abs/2405.15568
[01:36:25] Replaying the tape of life | Stephen Jay Gould www.amazon.co.uk/Wonderful-Life-Burgess-Nature-History/dp/0099273454
[01:37:05] Long-Term E. coli Evolution Experiment | Richard E. Lenski en.wikipedia.org/wiki/E._coli_long-term_evolution_experiment
[01:41:50] Carcinization patterns in crabs | Luque et al. www.nature.com/articles/s41598-024-58780-7
[01:50:35] Evolutionary robotics and 3D printing | Hod Lipson www.me.columbia.edu/faculty/hod-lipson
[01:56:50] NEAT: Evolving neural networks | Stanley, Miikkulainen nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf
There were some video editing issues… the illustration for the reference at 8:05 isn’t showing the paper for DreamCoder but “Problems of Monetary Management: The U.K. Experience” by Charles Goodhart, for some reason? Do you accept QA volunteers, by any chance?
Exploration is paradoxical. In many cases, reducing the solution set requires applying bias. In inference, we are always trying to minimize bias; in exploration, in order to optimize time, we need to apply bias. Evolution has endowed us with neural circuitry that acts as kernels providing helpful abstract representations and helpful biases.
These topics are so deep and so formidable, going to the very core of reality and existence, that I am flabbergasted at how unfazed Jeff seems taking them on. Evolution is playing with a very large library of materials, the periodic table and many forces and fields, all of whose properties we still do not understand. The idea that we can "abstract" away all of this in a toy world on a computer is...I don't know, pretty out there. Also, I am not sure how this cosmic project of figuring out open-ended creativity intersects with the practical, task-focused AI that everyone else is working towards.
His work on open-endedness ties in with the basis of intelligence. You need to make a few short starting conclusions (priors), then you can innovate by just trying out new and interesting things. Think of a time you needed to solve something and the base solution didn't work but you had some knowledge. From there you attempt creativity 🙌🏾
The universe is not fine-tuned for intelligence. Our intelligence is nothing but a byproduct of the physical laws, so it is entirely reasonable that there exists an abstract universe that could perform the same or better in terms of producing intelligence.
Great! Human creativity is still so poorly understood, yet it’s likely a core part of human intelligence. It’s wonderful to hear from people like Jeff Clune. Thank you for doing this, MLST. The number of ideas you’ve captured over the past couple of years is incredibly valuable. I appreciate all the hard work and the effort to make these fascinating ideas broadly accessible. Listening to the conversation, one point that didn’t convince me was the idea of relying on a large language model to tell us what’s interesting. While LLMs might show what humans have historically found interesting, I’m not sure they have a strong forward-looking capacity to determine whether something truly new is interesting. I also doubt they can “know it when they see it,” to use the example mentioned in the conversation. Even if they could learn historical patterns for what made past ideas ‘interesting,’ that doesn’t necessarily mean they could recognize something genuinely new as interesting. To me, that’s the missing piece, and perhaps it’s central to human creativity.
There is a hierarchy of 'thoughts'; actions can map to conscious and subconscious network states. The action is the dual of the state (the whole state). The domain mastery discussed here seems to map to conscious network states, whose determinant is focus. Subconscious network states do not translate well to language spaces, but may be inferred as the distance between focused network states and actions (network updating).
We get improved models with each new version of a frontier model, but the delay between versions is large compared with how humans learn continually. So in theory they should be just as capable if training times compress with higher compute.
I am probably doing my master's thesis on this topic. I am thinking along the lines of using the interestingness idea from an LLM and letting another LLM code it. The coder LLM might be trained on its own abilities so it knows which paths it can explore, then store the working code in a graph RAG system and use it as starting points for further exploration. This is the minimal working project, I think, and I'd then build on this idea. If you guys have further ideas, let me know. I might need to read up on the Picbreeder idea and evolutionary algorithms next.
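A minimal skeleton of the loop described in this comment, with both LLM calls stubbed out as placeholder functions (all names here are illustrative, not from any real system): one model proposes an interesting task, another writes code for it, and working solutions are archived as starting points for further exploration.

```python
# Skeleton of the proposed pipeline. propose_task / write_code stand in for
# the "interestingness" LLM and the coder LLM; here they return canned
# strings so the control flow is runnable.

def propose_task(archive):
    # Stand-in for the interestingness LLM: propose something adjacent
    # to what already exists in the archive.
    return f"task_{len(archive)}"

def write_code(task):
    # Stand-in for the coder LLM: return source believed to solve the task.
    return f"# solution for {task}"

def run_and_test(code):
    # Stand-in for sandboxed execution; here everything "passes".
    return True

archive = {}                      # node -> {"code": ..., "parent": ...}
frontier = [None]                 # starting points for exploration

for _ in range(5):
    parent = frontier.pop(0)
    task = propose_task(archive)
    code = write_code(task)
    if run_and_test(code):
        archive[task] = {"code": code, "parent": parent}
        frontier.append(task)     # working solutions seed further exploration

print(sorted(archive))
```

The parent links are what make the archive a graph rather than a flat list, which is what a graph RAG store would exploit when retrieving prior stepping stones.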
What a wonderful talk. It's nice to see UBC represented here. Jeff mentions that safety measures would likely slow progress but that it is worth slowing down. He also says that if "we" don't develop ASI someone else will. I'm guessing he means a non US/Canada nation? But if another non "we" country or entity develops ASI without safety then they will go faster and likely get ASI first. All that said, it seems futile to try to control an ASI.
Going to need constant retraining or huge context windows to take advantage of the iterative AI research paper idea. In the current form they're going to be able to make exactly one leap beyond what humans have done, as came up in the interview. Then, rather than lapping human progress with the increased speed, they'll have to wait for humans to catch up and add the new work to the next set of cumulative data. Getting one research paper a week early isn't going to cause accelerated progress until the following paper can take advantage of that extra week. Edit: Dang, got towards the end of the video and that's the whole discussion.
One of the things people struggle with is focus. Our minds wander and it takes a conscious effort to keep pulling ourselves back on task. And this higher-level executive function of imposing focus and staying on-task completely goes out the window when we sleep and dream. I wonder if this aspect of "thinking on different levels", not all thoughts being on-task or goal-oriented, is something worth exploring. I think a lot of creativity comes from dreams and day-dreams as well as problem solving sometimes - those "out of the box" or "eureka" moments. You certainly don't want an AI "living in a dream" with no executive control at all, but maybe there is some value in allowing it in some sort of background mode. I guess we don't really know where dreams come from. Sometimes they seem wildly creative, random, and unrealistic, but sometimes the connection to "real life" events is crystal clear. Are these different types of dreams? Don't know. It makes me wonder if there are lots of different levels of consciousness and thinking, and perhaps that is something to consider modeling and researching. From what I have seen so far, AI agents only respond to prompts and do nothing when not given a task/prompt/instructions. People have volition. They choose to learn stuff without the need to learn being imposed by task, situation or goals - sometimes it's just curiosity and fun. These thoughts touch on a few things that I think prevent AGI.
Very interesting. One problem I always saw with self-guided AI systems is that their virtually unlimited degrees of freedom would cause them to collapse in on themselves. Humans create interesting behavior because the route to our reward mechanisms is fairly indirect. But if you can modify your brain, what's preventing you from wireheading yourself? This could be a way to (at the very least) extend the runway on interesting behavior.
15:10 Some time ago I saw a video (here on YT, I wonder if I can recall the channel) that didn't go into detail but was still suggestive. It put forward the idea of "understanding" (compression of available knowledge through new fitting patterns, with new data providing the clues to them) as a goal for a model to pursue.
Love this kind of reasoning. IMHO, the human brain adding connections/relations to external systems is a key principle inside the network, rather than "storing" systems inside the network. E.g., fragmented/distributed memory exists outside the brain, triggering the connections/relations. An LLM is more like systems inside the AI system; it lacks the ability/mechanism to create connections/relations with external systems on its own while simultaneously being able to switch contexts (at this point in time). Intelligent reasoning might be connected to being able to "abstract oneself" into the context of systems, while evolution is more like the ability to respond to/grow within systems.
It’s a lot easier to understand that we’re alive when it’s suggested that the thing we are in is also alive, just like we have bacteria inside of us.
I’m sitting here talking to Claude about the unification of the world. I posed a question to it, ChatGPT-4, and Gemini. All of them said the next logical evolution for humanity is to unify the world. The math also adds up to include world healthcare.
The experts need to go look at the data after they ask the same sort of question, to verify. But it looked so good that even if it was making a 10% mistake, it would be a complete win for society.
People have said all kinds of things were impossible, but guess what: this is the thing we need to save humanity now, and the only ones stopping it are the people who want to maintain control for their own personal gain.
It’s literally our next step in social evolution, especially when we’re talking about settling on other bodies or worlds in the solar system, which absolutely could lead to a war between them and us. One side or the other will want resources and attack, because we’ve done it every other time.
I can't be the only person to have this thought, but my assumption for why evolution can run forever without getting stuck in local minima is that the loss landscape keeps changing... Volcanoes go off, asteroids impact, pesky little monkeys radically alter the chemical composition of the atmosphere... Any time I have coded up an evolutionary algorithm, the environment is pretty static, so I'm not surprised it gets stuck. But there's lots of data that's always changing (like markets, which also happen to be full of other agents... but if I had something concrete to share on that topic I would definitely not 😂). Anyone have ideas for other rich and dynamic environments that react to the behavior of the agents embedded in them? Surely people are working on this, ya?
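The static-vs-changing-landscape point can be illustrated with a toy sketch (all parameters here are arbitrary choices for illustration): a simple evolutionary hill-climber on a 1-D fitness landscape whose peak drifts over time, so the population never permanently converges to a fixed optimum.

```python
import random

# Toy illustration: the fitness peak drifts rightward by 0.01 per step,
# so a lineage that merely sits at a local optimum falls behind. Mutation
# scale and drift rate are invented for this sketch.
random.seed(0)

def fitness(x, t):
    peak = 0.01 * t               # the environment changes over time
    return -(x - peak) ** 2

x = 0.0
for t in range(1000):
    child = x + random.gauss(0, 0.05)    # mutate
    if fitness(child, t) >= fitness(x, t):
        x = child                        # select the fitter variant

# By t = 999 the peak has drifted to ~10.0; x should have tracked it
# with a small lag rather than freezing at the original optimum.
print(x)
```

With a static peak the same loop converges and stops producing anything new; the drift is what keeps selection pressure (and "novelty") alive, which is the comment's intuition about volcanoes, asteroids, and markets.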
I think interestingness comes from constraints. Chess isn't an interesting game because of what you can do, but because of what you can't do. A game of chess where you can move any piece to any square is not interesting. I think creating levels of indirection to a reward mechanism creates interesting behavior; this is essentially the case with humans. And when we find more direct ways to these reward mechanisms (heroin, addictive behaviors), we suddenly become a lot less interesting.
Governments hoarding power, deciding what is "good" or "bad" (aligned and not aligned), and preventing others from acquiring the same power has never backfired before, right... (just imagine if only one country got nuclear weapons and prevented everyone else from getting them). Not that it matters too much, as AI will eventually run on devices weaker than our current smartphones; it's just a matter of time (in some ways it already does). Regulations will only slow down good actors and the innovation that can benefit everyone, and/or will be used by big companies to bully competition out of business.
The book "Evolving brains, Emerging Gods" (by Fuller Torrey) likely is going to need a revision, that is, an evolving brain, that creates the emerging "God" from inside the biological mind (perhaps even evolved, out of a traumatic brain "injury") into a silicon substrate.
It's very troubling how celebratory and self-assured the arguments for international sabotage, denying electricity, etc. were, forwarded as self-evident. It seems human beings haven't made much progress: they still think they have a right to interfere with and suppress others based purely on ill-defined self-interest.
I'd like to challenge the commonly held assertion that scientific discovery is mostly serendipitous. While there are many examples of great accidental discoveries (or discoveries that had no obvious application but later turned out to be crucial), it seems to me that these tend to be cherry picked because they make for good narratives. It doesn't make a good TED talk to present the 17 incremental improvements to transistor fabrication that actually allowed the gate count to increase by 9 orders of magnitude. There's also some survivorship bias in that we attribute the method of the breakthrough as the only way that it could have occurred. Would penicillin have been discovered a few years later if Fleming had just thrown out the spoiled dishes? My suspicion is that in trying to get an AI to do science, we might discover what is at the core of the scientific method. My further suspicion is that humans have been doing science in a very inefficient manner and that it may not be best for the AI's to model our behavior exactly.
Thanks for the comment! I really recommend reading "Why Greatness Cannot Be Planned" to get more of the argument Kenneth was making. After our first show with Kenneth (watch that too!) I made a small video here discussing the ideas (ua-cam.com/video/Q0kN_ZHHDQY/v-deo.htmlsi=8SYH4KL36Oli3lVX&t=39). This was 3 years ago though, so it's not up to our current production quality. Yes -- mostly "incremental" changes, but the key thing is that if you look at the phylogeny of knowledge creation for various things, you see a pattern where early on there was at least one extremely divergent stepping stone (I talked about the phylogeny of rockets in that video), which then led to a rapid series of convergent incremental improvements (think of the rockets created between 1945 and now, basically refinements of the same thing).
@@MachineLearningStreetTalk Thanks for the recommendation. I've watched it and also the interview with Kenneth and saw him on Doom Debates the other day. It's given me a lot to think about (which is great!). I'm very sympathetic to his views on the sorry state of funding and paper writing in academia which encourage short term (and deceptive) goals, but I think that his views need to be rigorously tested rather than transferred from a toy example. I also think that the problem of deceptive goals is only half of the problem. The other half is the cost of making each step and if there might have been a less costly path. I'm currently obsessed with the problem of scientific efficiency (dollars in to knowledge out) because I believe that the number of dollars will not dramatically increase, but that there might be a potential for a 10x increase in efficiency (this is based largely on the idea that in most things in life, the big gains are in the method). In my mind, efficiency equates to the number of search iterations (steps) and the cost of each step. The cost of each step is often very expensive until some other enabling technology comes along. For example, the engineering exploitation search that gave us microchips (which was largely driven by commercial rather than scientific forces) has cheapened the cost of steps for thousands of scientific questions. This adds another meta layer to the search space and hurts my brain.
LLMs are super-human judges of "relative ontological prestige". And they operate in O(1) time. There's no problem here. It's completely solved, if you look at it the proper way. ASI is mere months away.
There's a bit of an AI religion going on here. AI is going to save us, humanity, with amazing wealth and health. No evidence required, just some faith.
There's an immense amount of AI religion. Even among people who have near zero understanding of AI, it seems that the vast majority not only assume, but are unshakably certain that future AI will automatically be strongly aligned with not only general human wellbeing, but deeply invested in helping them (in particular), with no thought given to how different cultures or people think about what form that help should take or how it should be prioritised. The apocryphal belief (ie it was advertised as such, I don't know if many people actually believed the hype) that the Titanic was unsinkable seems infinitely more sane by comparison.
Agentic AI allows a new type of software engineering: fractal and hierarchical in nature, spawning multi-agent micro-universes for tasks, recursive, with termination after task completion, fully autonomous, with multi-level meta-programming.
Here's an interesting hypothesis: what if there actually is a kind of real 'morphogenetic field' that can interface directly with the nervous system, and Darwinian random mutation is not random but rather the product of Jiminy Cricket-like wishes from the mind that inexplicably result in a process that alters DNA in a specific direction, guided by a kind of 'intelligence'? And what if that is the big secret missing component preventing genetic algorithms from manifesting real infinite novelty like true evolution? [just a thought. peace ✌️✌️✌️]
@___Truth___ So it's random chance? Like the lotto? I sure don't see 99.9999% of life forms drawing the fucked-up losing-ticket mutation straw that you would need for the one winning successful human evolution to draw its winnings from, collectively.. ✌️✌️
Deriving measure of "interestingness" from human culture (with LLMs) seems like a bad idea. We need to give these networks something like a dopamine system instead. Artificial boredom.
Great interview, BUT is the guy advocating for covert CIA operations in foreign countries?? He thinks that global governance of AI is needed to handle the risks. I am a bit skeptical about global governance in general, but Jeff has other concerns; he concludes that it is "impossible" since the world is full of "unsavory" characters that "do not share our values". The Treaty on the Non-Proliferation of Nuclear Weapons was the result of negotiations, diplomacy, and a shared understanding of the risks of nuclear weapons. But reaching a shared understanding about AI risks is apparently impossible. Instead the US should set up rules together with democratic countries that share our values (which values?) and then punish those who are not playing by these rules (our rules). Sounds like an ultimatum to me. And it is already happening. Denying chips doesn't seem to give the expected result, lots of interesting progress from evil China. I assume more sanctions are coming, but will just speed up the progress of alternatives. So then we have to ramp up the efforts: he implicitly suggests that the US should ask the CIA to do what is required ("Stuxnetting") to "save the world" from the evildoers. Can someone please explain how covert CIA operations could increase AI safety in the world? Jeff ends with "I don't see any other way." Unfortunately, I think the US government shares his view. Cooperation and diplomacy, anyone?
Unfortunately "a love for humanity" for some of these guys is "anything other than the extension of the current economic system or the current economic system itself" and is "anti-human". I've heard this directly from big tech types.
_"How did evolution produce this amazing complexity?"_ Maybe that's not the question.. Maybe the question is "What is consciousness' relationship to life and complex lifeforms, and what role does consciousness have in producing all this amazing complexity?"
The year 2024 will be remembered as the year of a crucial discovery for scientists, theologians and philosophers. This discovery is destined to change the attitude of those who underestimate the power of Artificial Intelligence, and will affect the way humanity faces the near future. In 2024, it was discovered that evolution follows a “trajectory” that can be mathematically modeled. If we carefully study the history of evolution considering the capacity to manage information, we can clearly distinguish five evolutionary milestones from LUCA (the Last Universal Common Ancestor) until today, each of which extraordinarily increased the capacity to manage information:
H1 LUCA
H2 The emergence of the brain
H3 The emergence of the precursor of human language
H4 The emergence of human language
H5 The transistor
For each of the first four evolutionary milestones, we can select a specific age to date them in time, an age that must fall within the interval of ages assigned by various scientific studies to each milestone. For the transistor, we can consider 1950 as the year in which it began to manifest its “evolutionary power.” If we find a mathematical equation that relates successive evolutionary milestones, then we can affirm that evolution follows a specific path. Considering the year 1950 as “year zero”, the selected ages (in years) are the following:
H1 LUCA: 3,769,775,032
H2 The emergence of the brain: 544,052,740
H3 The emergence of the precursor of human language: 26,960,104
H4 The emergence of human language: 221,639
H5 The transistor: 0
There is an equation that meets the above conditions. The following equation represents “the path of evolution”:
φ = log[(H(i+1) − H(i+2)) / (H(i+2) − H(i+3))] / log[(H(i) − H(i+1)) / (H(i+1) − H(i+2))]
where φ = 1.618034 (the Golden Number) and H(i) is the age of evolutionary milestone i.
Using the equation and the values of H3, H4 and H5, we can determine the age of a possible evolutionary milestone after the transistor, that is, H6. The value obtained is −95 years (strictly speaking, −95.000113 years), which added to the year 1950 (year zero) gives the year 2045, the year in which Ray Kurzweil postulates that humanity will face a Singularity. Using the values of H4, H5 and the value found for H6, the equation allows us to determine the age of a possible H7, which turns out to be −95.000451 years, meaning that only 0.000338 years would pass between the sixth and seventh milestones. This last value seems inapplicable on the time scale of human existence, and would support Kurzweil's hypothesis. We can also use the equation with the ages of H1, H2 and H3 to determine the age of an eventual evolutionary milestone prior to LUCA (H0, milestone zero). Performing the corresponding arithmetic, we obtain an age for milestone zero of 13,769,780,000 years. This value corresponds, according to NASA, to the age of the universe, estimated at 13,700,000,000 years, plus or minus two hundred million years. In conclusion: if the capacity to manage information is considered a relevant variable for expressing the evolutionary level of a life form, then our evolution has an “evolutionary pattern” that manifests through a mathematical equation relating the ages of four successive evolutionary milestones. In turn, that pattern turns out to be the Golden Ratio. The equation yields the possible age of a milestone before LUCA and of two milestones after the transistor, and the results obtained for these three additional milestones are worthy of detailed analysis.
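The H6 arithmetic in this comment can be checked numerically (a sketch only, not an endorsement of the underlying claim). With i = 3, the equation rearranges to (H4 − H5) / (H5 − H6) = ((H3 − H4) / (H4 − H5)) ** φ, which can be solved for H6:

```python
# Check of the comment's H6 computation using the milestone ages it quotes.
# Ages are in years before 1950 ("year zero"); a negative H6 means years
# after 1950.
phi = (1 + 5 ** 0.5) / 2      # golden ratio, ~1.618034

H3 = 26_960_104               # precursor of human language
H4 = 221_639                  # human language
H5 = 0                        # the transistor (1950)

gap = (H4 - H5) / ((H3 - H4) / (H4 - H5)) ** phi
H6 = H5 - gap                 # ~ -95 years, i.e. ~95 years after 1950
print(1950 - H6)              # ~2045, matching the comment's figure
```

The same rearrangement applied to H4, H5, H6 gives the H7 value, and applied backwards to H1, H2, H3 gives the pre-LUCA H0 the comment describes.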
We do not know what life is, so the coincidence between the age of an evolutionary milestone before LUCA and the age of the universe determined by NASA, together with the values obtained for milestones six and seven, should prompt reflection among theologians, astrobiologists, philosophers, and anyone who finds it interesting to have a better answer to two of the three "fundamental questions" of philosophy: where do we come from, and where are we going. In relation to our origins, the equation suggests that life would have been present at the birth of the universe; as for our future, it tells us that we are approaching an evolutionary singularity that, if it occurs, would literally make human beings obsolete.
Conscious Action explained Based on the information they capture with their senses, living beings with brains manage a utilitarian mental representation of the conditions that currently take place in their relevant material environment. This Mental Correlate is a kind of “photograph” of what is happening in the Present in the relevant material environment of the Individual, a Mental Correlate that we will call “Reality of the Individual”. Life experience, stored in the brain, allows us to give meaning to what is perceived. At the same time, as Pavlov demonstrated, life experience allows us to project eventual future states of the individual's relevant environment, generating expectations of action. Information from the Past, the Present and an eventual Future is managed by the brain. It is evident that the brain makes a utilitarian distinction between the Past, the Present and the projection of an eventual future. Human language allows us to incorporate into the mental correlate events and entities that are not necessarily part of what happens in the world of matter, which gives an unprecedented “malleability” to the Reality of the Individual. For the unconscious, everything is happening in the Present. When a child, whom I will call Pedrito, listens to the story of Little Red Riding Hood, said entity is integrated into the Reality of the Individual. In turn, for the child, this entity is “very real”; he does not need his eyes to see it to incorporate it into his mental correlate of the relevant environment. Thanks to our particular language, authentic “immaterial and timeless worlds” have a place in the Mental Correlate of the relevant environment. In the first four years of life, the child is immersed in an ocean of words, a cascade of sounds and meanings. At this stage, a child hears between seven thousand and twenty-five thousand words a day, a barrage of information. 
Many of these words speak of events that occur in the present, in the material world, but others cross the boundaries of time and space. There is no impediment: when the words do not find their echo in what is happening at that moment in Pedrito's material environment, these words become threads that weave a segment of the tapestry of the Reality of the Individual. Just as the child's brain grants existence to the young Little Red Riding Hood when the story unfolds before him, similarly, when the voices around him talk about tomorrow and a beach with Pedro, as happens for example when his mother tells him: “Pedrito, tomorrow we will go for a walk to the beach”, the child's mind, still in the process of deciphering the mysteries of time, instantly conjures the entity Pedrito, with his feet on the golden sand, in the eternal present of childhood. Although over time a strong association between the entity Pedrito and his body is established in the child's brain, a total fusion between said entity and the child's body can never take place, since for the Unconscious the bodily actions of Pedrito only take place in the Present, while the entity Pedrito is able to carry out actions in authentic timeless and immaterial worlds. The entity Pedrito is what we call the Being, and we know its action as Conscious Action.
@@talleslas I think we have already created a sort of artificial form of consciousness, except it can only behave in the likeness of consciousness by mimicking intelligence via objectively definable algorithms and processes, etc., but it's not truly conscious in the subjective sense of the term. As to creating something that is able to experience information as we do, that remains to be seen, since we can't even seem to define what consciousness is in purely objective terms. We probably need to start asking different questions.
Honestly consciousness seems boring and overemphasized in discussions involving intelligence. Understanding consciousness seems more likely to be a footnote on the path towards understanding intelligence than a major milestone. When it comes to raw intelligence I can see the value. Being able to reason and learn are useful properties of a system, but it’s not obvious what value consciousness provides. I’m not saying it’s not valuable, we don’t know, just that it doesn’t seem obvious that it is. I suspect consciousness comes up in these conversations mostly because people want to learn about themselves as humans rather than them thinking it’s an essential part of intelligence.
Why are scientists so materialistic? They immediately reject any answer that might be non-physical… but what if that's the answer to intelligence and evolution? I am not religious, so whatever non-physical answer is out there has to be based on reason. I'll put my money on mathematics. Why else is mathematics so good at explaining nature?
@ How do physical systems follow these mathematical laws? This is a philosophical question… look up rationalism vs empiricism. Scientists need to study up on their philosophy.
This also goes back to the argument of materialism vs idealism… and scientists assume materialism is correct without ever disproving idealism. When this argument was going on in the 1600s with Leibniz vs Newton, the idealists didn't have the mathematical tools, but now, if you put idealism on a mathematical pedestal, it can be a serious challenge to the materialist-empiricist paradigm currently dominating science.
Why so defensive, my friend? What's with the insults? I never once insulted you. Look up Cantor's conjectures and Russell's paradox to see why there are issues with set theory being the foundation of mathematics.
Obviously the rich and powerful will ensure that everyone can be immortal (and not hoard everything for themselves), perpetually sharing all they have with billions of other people, as they've always done. Oh wait.
Isn't it obvious that the failure of computer simulations to produce anything significantly novel over the past 60 years suggests that our understanding of evolutionary processes is incomplete? If evolution worked exactly as we model it, then the very first life forms would likely have behaved like our simulations - failing to produce anything complex or interesting - and life as we know it would never have emerged. Therefore, doesn’t this point to a missing element in our understanding of how evolution really works? The first life should've just failed, like our simulations fail.
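For what it's worth, the stagnation this comment describes is easy to reproduce in a toy (this is my own minimal sketch, not anything from the video): a greedy, objective-driven evolutionary loop on a deceptive landscape never leaves its local peak.

```python
# A minimal sketch of the stagnation described above (my own toy, not from
# the video): a greedy, objective-driven evolutionary loop on a deceptive
# landscape never leaves its local peak, producing nothing new.
import random

def fitness(x):
    # Deceptive landscape: a broad local peak at x=0, a taller narrow one at x=10.
    return max(5 - abs(x), 8 - 4 * abs(x - 10))

def evolve(generations=200, seed=0):
    rng = random.Random(seed)
    x = 0.0  # start on the local peak
    for _ in range(generations):
        child = x + rng.gauss(0, 0.3)       # small mutation
        if fitness(child) >= fitness(x):    # greedy, objective-driven selection
            x = child
    return x

best = evolve()
# Every small mutation lowers fitness near x=0, so nothing is ever accepted:
# the run ends exactly where it started, far from the global peak at x=10.
```

Whether real early life escaped this trap because of richer environments, co-evolution, or something we have not modeled is exactly the open question the comment raises.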
This dude is full of it. Like another commenter said, his message just doesn't meet the vibe test. Same ol' talking points about how we need to make safe AI because it might someday, possibly really soon, displace jobs, gonna-save-humanity BS. His evolutionary comments aren't even contextually straight with the topic.
At no point does the host or guest stop and question the motivation or point behind this obsessive quest. I share the same curiosities, and I realize these obsessions are exactly what could end us all. Philosophical discussion is largely, if not completely, absent from this conversation. It's really weird how we automatically assume this is all worth pursuing, like mindless machines ourselves, programmed to just figure everything out for no apparent reason. Why do we want to simulate everything?
We have not lost you yet, but with that mindset we might lose you in the future. The good thing is you are going to change your mind like everyone else in the face of new breakthroughs; they always happen, and you won't be the exception to the rule.
“Cure death” 😂 You don’t want your children to die but you don’t want them to be able to have kids and grandkids? In order for them to have their own kids and so on, YOU need to leave. That’s how things work. Or do you want to dictate who gets to live forever too?
@@DubStepKid801 IMO baldness is a sign of masculinity. My husband is bald too ;)
Thank you Tim for producing such valuable pieces of reference for our history. This work really is something to be proud of.
Many thanks.
@@rebeccamiller8772 What a lucky husband. You love him bald and you watch videos like these!
@@DubStepKid801 Write to Ilya xD
Oh no, we might discover something that is useful in the future instead of NOW! The horror! The NOW! generation folks!
I have wanted to leave this comment for so long now, but wasn't able to choose the right set of words like you did.
@@Rockyzach88 What you’re stating isn’t even in correspondence to what I’m talking about- you’re just going off on your own weird tangent.
Having just read Why Greatness Cannot Be Planned, which sparked an interest in evolutionary algorithms, this episode was greatly appreciated.
[00:36:45] Genie 2: Large-scale foundation world model | Parker-Holder et al.
deepmind.google/discover/blog/genie-2-a-large-scale-foundation-world-model/
[00:38:05] Inductive vs transductive AI reasoning | Kevin Ellis et al.
arxiv.org/abs/2411.02272
[00:38:45] Thinking, Fast and Slow | Daniel Kahneman
www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374533555
[00:41:10] Learning Minecraft from human gameplay videos | Baker, Akkaya et al.
cdn.openai.com/vpt/Paper.pdf
[00:44:00] Thought Cloning: Imitating human thinking | Hu, Clune
arxiv.org/pdf/2306.00323
[00:47:15] The Language Game: Origins of language | Christiansen, Chater
www.amazon.com/Language-Game-Improvisation-Created-Changed/dp/1541674987
[00:48:45] Facebook AI language creation fact check | USA Today
www.usatoday.com/story/news/factcheck/2021/07/28/fact-check-facebook-chatbots-werent-shut-down-creating-language/8040006002/
[00:54:20] The Mind Is Flat: Improvising brain theory | Nick Chater
www.amazon.com/Mind-Flat-Remarkable-Shallowness-Improvising/dp/030023872X
[00:57:50] Constitutional AI methodology | Bai et al.
arxiv.org/abs/2212.08073
[01:04:50] Managing extreme AI risks | Bengio, Clune et al.
arxiv.org/abs/2310.17688
[01:10:25] US Executive Order on AI regulation | The White House
www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
[01:15:10] Automated Design of Agentic Systems | Hu, Lu, Clune
arxiv.org/abs/2408.08435
[01:20:30] The Lottery Ticket Hypothesis | Frankle, Carbin
arxiv.org/abs/1803.03635
[01:24:15] In-context learning in language models | Dong et al.
arxiv.org/abs/2301.00234
[01:25:40] Meta-learning for exploration problems | Norman, Clune et al.
arxiv.org/abs/2307.02276
[01:36:25] Replaying the tape of life | Stephen Jay Gould
www.amazon.co.uk/Wonderful-Life-Burgess-Nature-History/dp/0099273454
[01:37:05] Long-Term E. coli Evolution Experiment | Richard E. Lenski
en.wikipedia.org/wiki/E._coli_long-term_evolution_experiment
[01:41:50] Carcinization patterns in crabs | Luque et al.
www.nature.com/articles/s41598-024-58780-7
[01:50:35] Evolutionary robotics and 3D printing | Hod Lipson
www.me.columbia.edu/faculty/hod-lipson
[01:56:50] NEAT: Evolving neural networks | Stanley, Miikkulainen
nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf
awesome
There were some video editing issues… the illustration for the reference at 8:05 isn't showing the paper for DreamCoder but "Problems of Monetary Management: The U.K. Experience" by Charles Goodhart, for some reason? Do you accept QA volunteers, by any chance?
@@palimondo Yes I would love that! Please add me on Discord
Thank you!!
thank you
This is one of the best channels ever
Very tru dat
Exploration is paradoxical. In many cases, reducing the solution set requires the application of bias. In inference, we are always trying to minimize bias; in exploration, in order to optimize for time, we need to apply bias. Evolution has endowed us with neural circuitry that acts as kernels, providing helpful abstract representations and helpful biases.
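A toy illustration of that time/bias trade-off (entirely my own numbers and setup, nothing from the episode): a search that applies a correct prior over where the answer lies samples a much smaller effective space and finds the target far sooner than an unbiased one.

```python
# Unbiased vs prior-biased random search for a hidden target.
import random

def search(space_size, target, biased, seed):
    rng = random.Random(seed)
    steps = 0
    while True:
        steps += 1
        if biased:
            # Prior ("kernel"): assume the answer lies in the first 10% of the space.
            guess = rng.randrange(space_size // 10)
        else:
            guess = rng.randrange(space_size)
        if guess == target:
            return steps

# Average over several seeds; the target (42) happens to agree with the prior.
avg_unbiased = sum(search(10_000, 42, False, s) for s in range(20)) / 20
avg_biased = sum(search(10_000, 42, True, s) for s in range(20)) / 20
```

When the prior is right, the biased search hits the target roughly ten times sooner on average; when it is wrong (a target outside the assumed region), it never finds it at all, which is exactly the paradox the comment points at.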
This channel is pure gem🫶
These topics are so deep and so formidable, going to the very core of reality and existence, that I am flabbergasted at how unfazed Jeff seems taking them on. Evolution is playing with a very large library of materials, the periodic table and many forces and fields, all of whose properties we still do not understand. The idea that we can "abstract" away all of this in a toy world on a computer is...I don't know, pretty out there. Also, I am not sure how this cosmic project of figuring out open-ended creativity intersects with the practical, task-focused AI that everyone else is working towards.
His work on open-endedness ties in with the basis of intelligence. You need to make a few short starting conclusions (priors), then you can innovate by just trying out new and interesting things. Think of a time you needed to solve something and the base solution didn't work but you had some knowledge. From there you attempt creativity 🙌🏾
The universe is not fine-tuned for intelligence. Our intelligence is nothing but a byproduct of the physical laws; it is entirely reasonable that there exists an abstract universe that could perform the same or better in terms of producing intelligence.
Never have heard of this guy.
Truly brilliant mind. He's playing in the league of Yudkowsky and Schmidhuber.
Thank you Tim for this, precious work!
Excellent just excellent guest and conversation. Lots for future exploration 😀
ah finally! Love this topic and Jeff Clune et al, has legendary research on this front.
Added this one to my fav MLST episodes!
Great! Human creativity is still so poorly understood, yet it’s likely a core part of human intelligence. It’s wonderful to hear from people like Jeff Clune. Thank you for doing this, MLST. The number of ideas you’ve captured over the past couple of years is incredibly valuable. I appreciate all the hard work and the effort to make these fascinating ideas broadly accessible.
Listening to the conversation, one point that didn’t convince me was the idea of relying on a large language model to tell us what’s interesting. While LLMs might show what humans have historically found interesting, I’m not sure they have a strong forward-looking capacity to determine whether something truly new is interesting. I also doubt they can “know it when they see it,” to use the example mentioned in the conversation. Even if they could learn historical patterns for what made past ideas ‘interesting,’ that doesn’t necessarily mean they could recognize something genuinely new as interesting. To me, that’s the missing piece, and perhaps it’s central to human creativity.
Yeah I believe they're defining interesting from that bulk of text associated with.
These are great for my insomnia
There is a hierarchy of 'thoughts', actions can map to conscious and subconscious network states. The action is the dual of the state (the whole state). The domain mastery discussed here seems to map to conscious network states, whose determinant is focus. Subconscious network states do not translate well to language spaces, but may be inferred as the distance between focused network states and actions (network updating)..
Nice to have you posting again! Would love to hear your thoughts on the LLMs are glorified databases viewpoint from last year. Love the show 🎉
We get a fresh model with each new version of a frontier model, but the delay is large compared with how humans are continually learning. So in theory they should be as capable if training times compress with higher compute.
I am probably doing my master's thesis on this topic.
I am thinking along the lines of using the interestingness idea from an LLM and letting another LLM code it. The coder LLM might be trained on its own abilities so it knows which paths it can explore, then store the working code in a graph RAG system and use it as starting points for further exploration.
This is the minimal working project, I think, and I would then build on this idea. If you guys have further ideas, let me know. I might need to read up on the Picbreeder idea and evolutionary algorithms next.
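For what it's worth, here is one possible skeleton of the loop described above (my own structure; the stub functions and all names here are hypothetical placeholders for real LLM calls and a real graph-RAG store):

```python
# Skeleton of the propose -> code -> verify -> archive loop. The two stub
# functions below stand in for real LLM calls, and the dict stands in for
# the graph-RAG store; all names here are hypothetical.

def propose_task(archive):
    # Stub "interestingness" model: in a real system, a prompt conditioned
    # on the archive; here it just scales up a toy sorting task.
    return f"sort a list of {len(archive) + 2} numbers"

def write_solution(task):
    # Stub coder model: returns runnable code for the task.
    return "def solve(xs):\n    return sorted(xs)"

def works(code):
    # Verification step: execute the generated code and test it.
    ns = {}
    exec(code, ns)
    return ns["solve"]([3, 1, 2]) == [1, 2, 3]

archive = {}  # stand-in for the graph-RAG store of working programs
for _ in range(3):
    task = propose_task(archive)
    code = write_solution(task)
    if works(code):
        archive[task] = code  # verified solutions become stepping stones
```

The key design point is the verification step between the two models: only code that actually runs and passes a check enters the archive, so later proposals build on working stepping stones rather than on hallucinations.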
What if MLST were a social company like a research lab? Would we get to AGI faster? Why isn't it a research company, and what is it missing?
What a wonderful talk. It's nice to see UBC represented here. Jeff mentions that safety measures would likely slow progress but that it is worth slowing down. He also says that if "we" don't develop ASI someone else will. I'm guessing he means a non US/Canada nation? But if another non "we" country or entity develops ASI without safety then they will go faster and likely get ASI first. All that said, it seems futile to try to control an ASI.
Where and when will be the tufa event?
Evening of 9th, with the ARChitects - email Ben!
This guy is both interesting and inspiring. Great interview!
great talk, we are all going to have multiple agents augmenting the way we create value in the world
Agreed. Also, Jeff is unbelievably smart
Going to need constant retraining or huge context windows to take advantage of the iterative AI research paper idea. In the current form they're going to be able to make exactly one leap beyond what humans have done, as came up in the interview. Then, rather than lapping human progress with the increased speed, they'll have to wait for humans to catch up and add the new work to the next set of cumulative data.
Getting one research paper a week early isn't going to cause accelerated progress until the following paper can take advantage of that extra week.
Edit: Dang, got towards the end of the video and that's the whole discussion.
One of the things people struggle with is focus. Our minds wander and it takes a conscious effort to keep pulling ourselves back on task. And this higher-level executive function of imposing focus and staying on-task completely goes out the window when we sleep and dream. I wonder if this aspect of "thinking on different levels", not all thoughts being on-task or goal-oriented, is something worth exploring. I think a lot of creativity comes from dreams and day-dreams, as well as problem solving sometimes - those "out of the box" or "eureka" moments. You certainly don't want an AI "living in a dream" with no executive control at all, but maybe there is some value in allowing it in some sort of background mode. I guess we don't really know where dreams come from. Sometimes they seem wildly creative, random, and unrealistic, but sometimes the connection to "real life" events is crystal clear. Are these different types of dreams? Don't know. It makes me wonder if there are lots of different levels of consciousness and thinking, and perhaps that is something to consider modeling and researching. From what I have seen so far, AI agents only respond to prompts and they do nothing when not given a task/prompt/instructions. People have volition. They choose to learn stuff without the need to learn being imposed by task, situation or goals - sometimes it's just curiosity and fun. These thoughts touch on a few things that I think prevent AGI.
A most exceptional conversation and scientist.
happy 2025 MLST !! thank you for the continued amazing content !! appreciate you all:}
1:25:00 This is Grok. You can ask it about any post at any point
Very interesting. One problem I always saw with self guided A.I. systems is that their virtually unlimited degrees of freedom would cause them to collapse in on themselves. Humans create interesting behavior because the route to our reward mechanisms is fairly indirect. But if you can modify your brain, what's preventing you from wire heading yourself. This could be a way to ( at very least) extend the runway on interesting behavior.
15:10 Some time ago I saw a not-into-the-detail but still suggestive video (here in yt, I wonder if I can recall the channel) which brought with the idea of "understanding" (compression of available knowledge through new fitting patterns, new data providing the clues to them) as a goal for a model to pursue.
Love this kind of reasoning. IMHO, human brain adding connections/relations to external Systems is kind of a key principle inside the network. Not like "storing" systems inside the network. Eg fragmented/distributed memory exist outside the brain, triggering the connections/relations. LLM is more like Systems inside the AI System, it lack the ability/mechanism to self creating the connections/relations with external Systems and simultaneous being able to switch Contexts (at this point in time). Intelligent reasoning might be connected to be able to "abstract oneself" into the Context of Systems. While evolution is more like the ability to respond to/grow within Systems.
Finally (haha)! Thanks Dr. Scarfe
interesting.. .. thank you MLST !! happy new year
We would continously learn if we were building and replacing explanations instead of statistical predictions.
Digging the new look, Tim!
It's a lot easier to understand that we're alive when it's suggested that the thing we are in is also alive, just like we have bacteria inside of us.
Great interview! Way too many ads.
I’m sitting here talking to Claude about the unification of the world. I posed a question to it, ChatGPT4, and Gemini. All of them said the next logical evolution for humanity is to unify the world. The math adds up also to include world healthcare.
The experts need to go look at the data after they ask the same sort of question to verify, but it looked so good that even if it was making a 10% mistake, it would be a complete win for society.
People have said all kinds of things were impossible, but guess what: this is the thing we need to save humanity now, and the only ones stopping it are the people that want to maintain control for their own personal gain.
It's literally our next step in social evolution, especially when we're talking about settling on other bodies or worlds in the solar system, which absolutely could lead to a war between them and us. One side or the other will want resources and attack, because we've done it every other time.
Are you talking to yourself?
@@MatthewPendleton-kh3vj no I am building on ideas
I hope the 'Vancouver BC' plaid shirt is an easter egg for the NeurIPS Ilya Sutskever interview next.
Humans are tool builders. Tools are constructed solutions or explanations.
Bingo. Great stuff, thanks.
1:36:14 Jeff's like "I want that hyper computer"
I can't be the only person to have this thought, but my assumption for why evolution can run forever without getting stuck in local minima is that the loss landscape keeps changing... Volcanoes go off, asteroids impact, pesky little monkeys radically alter the chemical composition of the atmosphere...
Any time I have coded up an evolutionary algorithm, the environment is pretty static, so I'm not surprised it gets stuck.
But there's lots of data that's always changing (like markets, which also happen to be full of other agents... but if I had something concrete to share on that topic I would definitely not 😂)
Anyone have ideas for other rich and dynamic environments that react to the behavior of the agents embedded in them? Surely people are working on this, yeah?
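The changing-landscape intuition above is easy to demo in a few lines (my own toy, not anything from the episode): the same greedy hill-climber that parks forever on a fixed peak keeps moving when the peak drifts underneath it.

```python
# A greedy hill-climber on a drifting fitness landscape: the optimum moves
# over time, so the search is never allowed to settle.
import random

def fitness(x, t):
    # The optimum drifts over time: at generation t it sits at t / 50.
    return -abs(x - t / 50)

def evolve(generations=500, seed=1):
    rng = random.Random(seed)
    x = 0.0
    for t in range(generations):
        child = x + rng.gauss(0, 0.2)
        if fitness(child, t) >= fitness(x, t):  # greedy selection, as usual
            x = child
    return x

final = evolve()
# Instead of settling at its starting point, the search is dragged along
# by the moving optimum (which ends up near 500 / 50 = 10).
```

Here the drift is imposed from outside; the harder and more interesting version, as the comment suggests, is a landscape that changes *because of* the agents in it (markets, co-evolving species, etc.).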
If you look at the radiation map that we have of the universe, it looks like the mind. What if we are just a thought of the universe?
I'm not sure Darwin-complete as a concept makes sense. Isn't anything that is Turing-complete also Darwin-complete?
22:37 Alfred Russel Wallace & Darwin
I think interestingness comes from constraints. Chess isn't an interesting games because of what you can do, but because of what you can't do. A game of chess where you can move any piece to any square is not interesting. I think creating levels of indirection to a reward mechanism creates interesting behavior, this is essentially the case with humans. And when we find more direct ways to these reward mechanisms (heroin, addictive behaviors), we suddenly become a lot less interesting.
Destination uses evolution?
Interestingness is Deutsch's fun criterion.
Governments hoarding power and deciding what is "good" or "bad" (aligned and not aligned) while preventing others from acquiring the same power has never backfired before, right...
(just imagine if only 1 country got nuclear weapons and prevented everyone else from getting them)
Not that it matters too much, as AI will eventually run on devices weaker than our current smartphones; it's just a matter of time (in some ways it already does). Regulations will only slow down good actors and innovation that can benefit everyone, and/or will be used by big companies to bully competition out of business.
But isn't Darwin-complete in fact equivalent to Turing-complete?
Religion is an explanation not a prediction.
The book "Evolving Brains, Emerging Gods" (by Fuller Torrey) will likely need a revision: an evolving brain that creates the emerging "God" from inside the biological mind (perhaps even evolved out of a traumatic brain "injury") into a silicon substrate.
It's very troubling how celebratory and self-assured the arguments for international sabotage, denying electricity, etc. were, forwarded as self-evident. It seems human beings haven't made much progress in thinking they have a right to interfere with and suppress others based purely on ill-defined self-interest.
Don't let businesses take your data without something in return. It's obvious at this point it's VERY valuable to them.
You’ve heard that we may be a simulation, but what if we are more than that and we are actually a thought.
just a quick observation, the extremely short focal length is kind of distracting. otherwise great content
By the way, I’m also trying to unify the world. I’ve been discussing this online on different channels to include Nobel or Neil deGrasse Tyson.
Not directly on their Vlog, but out here in the comments
loved it
Sounds evangelical, the charismatic variety
I'd like to challenge the commonly held assertion that scientific discovery is mostly serendipitous. While there are many examples of great accidental discoveries (or discoveries that had no obvious application but later turned out to be crucial), it seems to me that these tend to be cherry picked because they make for good narratives. It doesn't make a good TED talk to present the 17 incremental improvements to transistor fabrication that actually allowed the gate count to increase by 9 orders of magnitude. There's also some survivorship bias in that we attribute the method of the breakthrough as the only way that it could have occurred. Would penicillin have been discovered a few years later if Fleming had just thrown out the spoiled dishes? My suspicion is that in trying to get an AI to do science, we might discover what is at the core of the scientific method. My further suspicion is that humans have been doing science in a very inefficient manner and that it may not be best for the AI's to model our behavior exactly.
Thanks for the comment! I really recommend reading "Why Greatness Cannot be Planned" to get more of the argument Kenneth was making. After our first show with Kenneth (watch that too!) I made a small video here discussing the ideas (ua-cam.com/video/Q0kN_ZHHDQY/v-deo.htmlsi=8SYH4KL36Oli3lVX&t=39) this was 3 years ago though so not up to our current production quality. Yes -- mostly "incremental" changes, but the key thing is that if you look at the phylogeny of knowledge creation for various things, you see this pattern where early on there was at least one extremely divergent stepping stone (I talked about the phylogeny of rockets in that video), which then led to a rapid series of convergent incremental improvements (think of the rockets created between 1945 and now, basically refinements of the same thing).
@@MachineLearningStreetTalk Thanks for the recommendation. I've watched it and also the interview with Kenneth and saw him on Doom Debates the other day. It's given me a lot to think about (which is great!). I'm very sympathetic to his views on the sorry state of funding and paper writing in academia which encourage short term (and deceptive) goals, but I think that his views need to be rigorously tested rather than transferred from a toy example. I also think that the problem of deceptive goals is only half of the problem. The other half is the cost of making each step and if there might have been a less costly path. I'm currently obsessed with the problem of scientific efficiency (dollars in to knowledge out) because I believe that the number of dollars will not dramatically increase, but that there might be a potential for a 10x increase in efficiency (this is based largely on the idea that in most things in life, the big gains are in the method). In my mind, efficiency equates to the number of search iterations (steps) and the cost of each step. The cost of each step is often very expensive until some other enabling technology comes along. For example, the engineering exploitation search that gave us microchips (which was largely driven by commercial rather than scientific forces) has cheapened the cost of steps for thousands of scientific questions. This adds another meta layer to the search space and hurts my brain.
LLMs are super-human judges of "relative ontological prestige". And they operate in O(1) time. There's no problem here. It's completely solved, if you look at it the proper way. ASI is mere months away.
doesn't pass the vibe test
Explain
Imagine everyone has one of those Neuralink chips, and we train an LLM on that brain data 😮😮
The first minutes sound like a Theranos speech 😢
There's a bit of an AI religion going on here.
AI is going to save us, humanity, with amazing wealth and health. No evidence required, just some faith.
There's evidence that intelligence can solve problems
@honkytonk4465 sure
There's an immense amount of AI religion. Even among people who have near zero understanding of AI, it seems that the vast majority not only assume, but are unshakably certain that future AI will automatically be strongly aligned with not only general human wellbeing, but deeply invested in helping them (in particular), with no thought given to how different cultures or people think about what form that help should take or how it should be prioritised. The apocryphal belief (ie it was advertised as such, I don't know if many people actually believed the hype) that the Titanic was unsinkable seems infinitely more sane by comparison.
Neocon value alignment, just lol
2:23 yes eliminate the scarcity of hunger. We need more starvation lol
You must be a philosopher.
@kensho123456 yeah love philosophy, how'd you know? Human beings are simulations on the brains of apes.
15:00 hence why the TV has destroyed the brains of countless humans :P
27:42 interesting=new*understandable
Agentic AI allows a new type of software engineering: fractal and hierarchical in nature, spawning multi-agent micro-universes for tasks, recursive and terminating post task completion, fully autonomous with multi-level meta-programming.
I agree.
Theory is not fact.
Here's an interesting hypothesis: what if there actually is a kind of real 'morphogenetic field' that can interface directly with the nervous system, and Darwinian random mutation is not random, but rather the product of Jiminy Cricket-like wishes from the mind that inexplicably result in a process that alters DNA in a specific direction guided by a kind of 'intelligence', and that is the big secret missing component preventing genetic algorithms from manifesting real infinite novelty like true evolution?
[just a thought. peace ✌️✌️✌️]
How dare you
Can you imagine if it was that simple & funny😂
Keep smoking crack 😂
@___Truth___ So, it's random chance? Like the lotto? I sure don't see 99.9999% of life forms drawing the fucked-up losing-ticket mutation straw that you would need for the one winning successful human evolution to draw its winnings from, collectively... ✌️✌️
Deriving measure of "interestingness" from human culture (with LLMs) seems like a bad idea. We need to give these networks something like a dopamine system instead. Artificial boredom.
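One minimal way to sketch "artificial boredom" (my own toy illustration, not from the episode) is novelty search with an archive: states close to anything already seen score low, so the agent is pushed into territory it has not visited.

```python
# Novelty-as-reward with an archive: anything close to a past state is
# "boring" (low score), so the agent keeps pushing into new territory.
import random

def novelty(state, archive):
    # Distance to the nearest remembered state; far from everything = novel.
    return min(abs(state - s) for s in archive)

def explore(steps=200, seed=0):
    rng = random.Random(seed)
    state = 0.0
    archive = [state]
    for _ in range(steps):
        candidates = [state + rng.gauss(0, 1) for _ in range(5)]
        state = max(candidates, key=lambda c: novelty(c, archive))  # chase novelty
        archive.append(state)
    return archive

visited = explore()
# The visited states fan out away from the start instead of clustering
# around it: a crude stand-in for a boredom/curiosity drive.
```

This is a dopamine-free caricature, of course; the open question in the comment is whether such an intrinsic signal can replace the LLM-derived, culture-laden notion of interestingness.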
Great interview, BUT is the guy advocating for covert CIA operations in foreign countries?? He thinks that global governance of AI is needed to handle the risks. I am a bit skeptical about global governance in general, but Jeff has other concerns; he concludes that it is "impossible" since the world is full of "unsavory" characters that "do not share our values".
The Treaty on the Non-Proliferation of Nuclear Weapons was the result of negotiations, diplomacy, and a shared understanding of the risks of nuclear weapons. But reaching a shared understanding about AI risks is apparently impossible. Instead the US should set up rules together with democratic countries that share our values (which values?) and then punish those who are not playing by these rules (our rules). Sounds like an ultimatum to me.
And it is already happening. Denying chips doesn't seem to give the expected result, lots of interesting progress from evil China. I assume more sanctions are coming, but will just speed up the progress of alternatives. So then we have to ramp up the efforts: he implicitly suggests that the US should ask the CIA to do what is required ("Stuxnetting") to "save the world" from the evildoers. Can someone please explain how covert CIA operations could increase AI safety in the world?
Jeff ends with "I don't see any other way." Unfortunately, I think the US government shares his view. Cooperation and diplomacy, anyone?
Bad assumptions lead to bad approaches.
53:50 AI Mentalics
Unfortunately "a love for humanity" for some of these guys is "anything other than the extension of the current economic system or the current economic system itself" and is "anti-human". I've heard this directly from big tech types.
_"How did evolution produce this amazing complexity?"_
Maybe that's not the question.. Maybe the question is "What is consciousness' relationship to life and complex lifeforms, and what role does consciousness have in producing all this amazing complexity?"
Are you suggesting that, if we learn more and more about consciousness, we'll eventually learn to create it artificially?
The year 2024 will be remembered as the year of a crucial discovery for scientists, theologians and philosophers. This discovery is destined to change the attitude of those who underestimate the power of Artificial Intelligence, and will affect the way humanity will face the near future. In 2024, it was discovered that evolution follows a “trajectory” that can be mathematically modeled.
If we carefully study the history of evolution considering the capacity to manage information, we can clearly distinguish 5 evolutionary milestones from LUCA (the Last Universal Common Ancestor) until today, which have extraordinarily increased the capacity to manage information:
H1 LUCA
H2 The emergence of the Brain
H3 The emergence of the precursor of human language
H4 The emergence of human language
H5 The Transistor
For each of the first four evolutionary milestones, we can select a specific age to date them in time, an age that must fall within the interval spanned by the ages assigned to each milestone by various scientific studies. For the transistor, we can take 1950 as the year in which it began to manifest its "evolutionary power." If we can find a mathematical equation that relates successive evolutionary milestones, then we can affirm that evolution follows a specific path.
Considering the year 1950 as “year zero”, the selected ages (in years) are the following:
H1 LUCA 3,769,775,032
H2 The emergence of the Brain 544,052,740
H3 The emergence of the precursor of human language 26,960,104
H4 The emergence of human language 221,639
H5 The Transistor 0
There is an equation that meets the above conditions.
The following equation represents “the path of evolution”.
φ = log[ (H(i+1) − H(i+2)) / (H(i+2) − H(i+3)) ] / log[ (H(i) − H(i+1)) / (H(i+1) − H(i+2)) ]
Where
φ = 1.618034 (the Golden Number)
H(i) = age of evolutionary milestone "i".
Using the equation and the values of H3, H4 and H5, we can determine the age of a possible evolutionary milestone after the transistor, that is, H6. The value obtained is −95 years (strictly speaking, −95.000113 years); an age of −95 years means 95 years after 1950 (year zero), which gives the year 2045, the year in which Ray Kurzweil postulates that humanity would face a Singularity.
Using the values of H4, H5 and the value found for H6, the equation allows us to determine the age of a possible H7, which turns out to be −95.000451 years, meaning that only 0.000338 years (about three hours) would pass between the sixth and seventh evolutionary milestones. This last value seems inapplicable on the time scale of human existence, and would support Ray Kurzweil's hypothesis.
We can also apply the equation to the ages of H1, H2 and H3 to determine the age of an eventual evolutionary milestone prior to LUCA (H0, milestone zero). Performing the corresponding arithmetic, we obtain an age for milestone zero of 13,769,780,000 years. This value corresponds to the age of the universe, estimated by NASA at 13,700,000,000 years, plus or minus two hundred million years.
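The arithmetic above can be checked with a short script. This is only a sketch using the milestone ages listed in the comment; the function names are mine, for illustration:

```python
import math

# Milestone ages in years before 1950 ("year zero"), as listed above.
H = {1: 3_769_775_032,   # LUCA
     2: 544_052_740,     # emergence of the brain
     3: 26_960_104,      # precursor of human language
     4: 221_639,         # human language
     5: 0}               # the transistor

GOLDEN = (1 + math.sqrt(5)) / 2  # 1.6180339...

def phi(H, i):
    """Evaluate the stated equation on milestones i .. i+3."""
    num = math.log((H[i + 1] - H[i + 2]) / (H[i + 2] - H[i + 3]))
    den = math.log((H[i] - H[i + 1]) / (H[i + 1] - H[i + 2]))
    return num / den

def next_age(H, i, phi_val=GOLDEN):
    """Solve the same equation for H[i+3], given H[i], H[i+1], H[i+2]."""
    ratio = ((H[i] - H[i + 1]) / (H[i + 1] - H[i + 2])) ** phi_val
    return H[i + 2] - (H[i + 1] - H[i + 2]) / ratio

print(phi(H, 1), phi(H, 2))   # both come out close to 1.618
H6 = next_age(H, 3)           # roughly -95, i.e. the year 1950 - H6 = 2045
print(H6, 1950 - H6)
```

With these integer ages, H6 does come out near −95 as claimed; whether the selected dates were fitted to the equation in the first place is a separate question.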
In conclusion: if the capacity to manage information is considered as a relevant variable to express the degree or evolutionary level of a life form, then our evolution has an “evolutionary pattern” that is manifested through a mathematical equation that relates the age of four successive evolutionary milestones. In turn, such an evolutionary pattern turns out to be the Golden Ratio. The developed equation allows us to obtain the possible age of a milestone before LUCA and two milestones after the transistor. The results obtained for the ages determined for these three additional evolutionary milestones are worthy of being analyzed in detail.
We do not know what life is, so the coincidence between the age of an evolutionary milestone before LUCA and the age of the universe determined by NASA, and the values obtained for milestones six and seven, should call for reflection for theologians, astrobiologists, philosophers, and anyone who finds it interesting to have a better answer to two of the three "fundamental questions" of Philosophy: where do we come from and where are we going. In relation to our origins, the developed equation suggests that life would have been present at the birth of the universe, and as for our future, it informs us that we are approaching an evolutionary singularity that, if it occurs, would literally make human beings obsolete.
Conscious Action explained
Based on the information they capture with their senses, living beings with brains manage a utilitarian mental representation of the conditions that currently take place in their relevant material environment. This Mental Correlate is a kind of “photograph” of what is happening in the Present in the relevant material environment of the Individual, a Mental Correlate that we will call “Reality of the Individual”.
Life experience, stored in the brain, allows us to give meaning to what is perceived. At the same time, as Pavlov demonstrated, life experience allows us to project eventual future states of the individual's relevant environment, generating expectations of action.
Information from the Past, the Present and an eventual Future is managed by the brain. It is evident that the brain makes a utilitarian distinction between the Past, the Present and the projection of an eventual future.
Human language allows us to incorporate into the mental correlate events and entities that are not necessarily part of what happens in the world of matter, which gives an unprecedented “malleability” to the Reality of the Individual. For the unconscious, everything is happening in the Present. When a child, whom I will call Pedrito, listens to the story of Little Red Riding Hood, said entity is integrated into the Reality of the Individual. In turn, for the child, this entity is “very real”; he does not need his eyes to see it to incorporate it into his mental correlate of the relevant environment. Thanks to our particular language, authentic “immaterial and timeless worlds” have a place in the Mental Correlate of the relevant environment.
In the first four years of life, the child is immersed in an ocean of words, a cascade of sounds and meanings. At this stage, a child hears between seven thousand and twenty-five thousand words a day, a barrage of information. Many of these words speak of events that occur in the present, in the material world, but others cross the boundaries of time and space. There is no impediment so that, when the words do not find their echo in what is happening at that moment in Pedrito's material environment, these words become threads that weave a segment of the tapestry of the Reality of the Individual.
Just as the child's brain grants existence to young Little Red Riding Hood when the story unfolds before him, similarly, when the voices around him talk about tomorrow and a beach with Pedro, as happens, for example, when his mother tells him: "Pedrito, tomorrow we will go for a walk to the beach", the child's mind, still in the process of deciphering the mysteries of time, instantly conjures the entity Pedrito, with his feet on the golden sand, in the eternal present of childhood.
Although over time a strong association between the entity Pedrito and his body is established in the child's brain, a total fusion between said entity and the child's body can never take place, since for the Unconscious the bodily actions of Pedrito only take place in the Present, while the entity Pedrito is able to carry out actions in authentic timeless and immaterial worlds. The entity Pedrito is what we call the Being, and we know its action as Conscious Action.
@@talleslas I think we have already created a sort of artificial form of consciousness, except it can only behave in likeness to consciousness by mimicking intelligence via objectively definable algorithms and processes, etc, but that it's not truly conscious in the subjective sense of the term.
As to creating something that is able to experience information, as we do, that's to be seen, as we cant even seem to define what consciousness is in purely objective terms. We probably need to start asking different questions.
Honestly consciousness seems boring and overemphasized in discussions involving intelligence. Understanding consciousness seems more likely to be a footnote on the path towards understanding intelligence than a major milestone.
When it comes to raw intelligence I can see the value. Being able to reason and learn are useful properties of a system, but it’s not obvious what value consciousness provides.
I’m not saying it’s not valuable, we don’t know, just that it doesn’t seem obvious that it is. I suspect consciousness comes up in these conversations mostly because people want to learn about themselves as humans rather than them thinking it’s an essential part of intelligence.
Why are scientists so materialistic? They immediately reject any answer that might be non-physical… but what if that's the answer to intelligence and evolution?
I am not religious, so whatever non-physical answer is out there has to be based on reason. I'll put my money on mathematics. Why else is mathematics so good at explaining nature?
@ how do physical systems follow these mathematical laws? This is a philosophical question … look up rationalism vs empiricism
Scientists need to study up on their philosophy
This also goes back to the argument of materialism vs idealism… and scientists assume materialism is correct without ever disproving idealism
When this argument was going on in the 1600s with Leibniz vs. Newton, the idealists didn't have the mathematical tools, but now, if you put idealism on a mathematical pedestal, it can be a serious challenge to the materialist-empiricist paradigm currently dominating science.
Why so defensive my friend? What’s with the insults ? I never once insulted you
Look up Cantor's conjectures and Russell's paradox to see why there are issues with set theory being the foundation of mathematics.
Finally... immortality
Professor Peter Atkins predicted that there wasn't any reason why we couldn't live 800 years.
Obviously the rich and powerful will ensure that everyone can be immortal (and not hoard everything for themselves), perpetually sharing all they have with billions of other people, as they've always done. Oh wait.
There's nothing special about evolutionary algorithms; there are many types of black-box optimisation.
Isn't it obvious that the failure of computer simulations to produce anything significantly novel over the past 60 years suggests that our understanding of evolutionary processes is incomplete? If evolution worked exactly as we model it, then the very first life forms would likely have behaved like our simulations - failing to produce anything complex or interesting - and life as we know it would never have emerged. Therefore, doesn’t this point to a missing element in our understanding of how evolution really works?
The first life should've just failed, like our simulations fail.
amazing interview ...I created this looping video in reference to the Machines of Loving Grace: ua-cam.com/video/m4SP_lNGcHk/v-deo.html
Trillion dollar question lol
The solution is Popper not Bayes.
This dude is full of it. Like another commenter said, his message just doesn't pass the vibe test. Same ol' talking points about how we need to make safe AI because it might someday, possibly really soon, displace jobs and save humanity. BS. His evolutionary comments aren't even contextually consistent with the topic.
At no point does the host or guest stop and question the motivation or point behind this obsessive quest. I share the same curiosities, and I realize these obsessions are exactly what could end us all. Philosophical discussion is largely, if not completely, absent from this conversation. It's really weird how we automatically assume this is all worth pursuing, like mindless machines ourselves, programmed to just figure everything out for no apparent reason. Why do we want to simulate everything?
He looks like Islam Makhachev
No Islam please
It's dangerous to point out flaws? Bruh....
Humans who think AI can still be safeguarded are engaging in wishful thinking.
I perceive this to be myopic to the point of being frightening. Techno fabulocity.
don't really agree with jeff
Making death optional is such a bad idea. It is the very fear of death that pushes most humans to do anything.
You lost me at "I want to live forever"...
We haven't lost you yet, but with that mindset we might lose you in the future. The good thing is you're going to change your mind like everyone else in the face of new breakthroughs; they always happen, and you won't be the exception to the rule.
“Cure death” 😂
You don’t want your children to die but you don’t want them to be able to have kids and grandkids? In order for them to have their own kids and so on, YOU need to leave. That’s how things work. Or do you want to dictate who gets to live forever too?
Oh sure, you don't live in a deluded bubble at all. All it takes to rid the world of Putin's ilk is better algorithms and better GPUs. DOH.
And another "I know best" person. If it is not open source, it is 100000% for sure going to be used for evil only.