Max Tegmark | On superhuman AI, future architectures, and the meaning of human existence

  • Published Jun 11, 2024
  • This conversation between Max Tegmark and Joel Hellermark was recorded in April 2024 at Max Tegmark’s MIT office. An edited version was premiered at Sana AI Summit on May 15 2024 in Stockholm, Sweden.
    Max Tegmark is a professor doing AI and physics research at MIT as part of the Institute for Artificial Intelligence & Fundamental Interactions and the Center for Brains, Minds, and Machines. He is also the president of the Future of Life Institute and the author of the New York Times bestselling books Life 3.0 and Our Mathematical Universe. Max’s unorthodox ideas have earned him the nickname “Mad Max.”
    Joel Hellermark is the founder and CEO of Sana. An enterprising child, Joel taught himself to code in C at age 13 and founded his first company, a video recommendation technology, at 16. In 2021, Joel topped the Forbes 30 Under 30. This year, Sana was recognized on the Forbes AI 50 as one of the startups developing the most promising business use cases of artificial intelligence.
    Timestamps
    From cosmos to AI (00:00:00)
    Creating superhuman AI (00:05:00)
    Superseding humans (00:09:32)
    State of AI (00:12:15)
    Self-improving models (00:16:17)
    Human vs machine (00:18:49)
    Gathering top minds (00:19:37)
    The “bananas” box (00:24:20)
    Future Architecture (00:26:50)
    AIs evaluating AIs (00:29:17)
    Handling AI safety (00:35:41)
    AI fooling humans? (00:40:11)
    The utopia (00:42:17)
    The meaning of life (00:43:40)
    Follow Sana
    X - x.com/sanalabs
    LinkedIn - / sana-labs
    Instagram - / sanalabs
    Try Sana AI for free - sana.ai
    This video contains background music. To watch without music, head here: • Max Tegmark | On super...

COMMENTS • 17

  • @fnd4086 3 days ago +1

    What’s with the looped background track of stringed instruments? I can’t watch more than a few minutes; it’s pretty distracting.

  • @drhxa 23 days ago +4

    Awesome interview. Thank you for speaking with Max. He's one of the great thinkers of our time. One of the biggest misconceptions about AI is that there are only two options: steerable, aligned systems OR stronger capabilities. However, the reality we have seen is that to be useful, systems with stronger capabilities must inherently be steerable. This gives me great hope that we'll continue to make such systems. Go team Human!

  • @OzGoober 25 days ago +3

    Love the ending quote.

  • @hamiltonpaul73 24 days ago +20

    Difficult to watch Tegmark express his concerns about the existential threat posed by AI knowing that the entire team dedicated to protecting the public from this threat at OpenAI has just resigned. The resignations came because the voices in favor of reckless acceleration of the technology have won and are pushing the company forward. Those concerned with safety have been sidelined... The government is doing absolutely nothing to impose regulations on this technology. It is very difficult to know what to do to help.

    • @drhxa 23 days ago +3

      Some things you can do: reach out to your local representatives (if in the US) and tell them you care about this issue and what you want them to do. For example, ask for oversight, ask for transparency, and ask for funding of basic science research that leads to a better understanding of these systems. Also, when you see people like Max who are on team human, consider their views, and if they align with yours, consider showing support.

    • @hamiltonpaul73 23 days ago +1

      @drhxa Thank you. I have already reached out to a political organization I am affiliated with, and I am going to place this on our agenda... I am just staggered, though. The public seems, at this stage, completely oblivious to the seriousness of what is unfolding. Geoffrey Hinton gives a 50/50 chance that AI will, in his words, “take over” in 5-20 years. There is already an AI arms race, and the nihilistic game theory playing out among countries means we are entering a new Cold War...

    • @BryanWhys 23 days ago +1

      Contact your representatives

  • @jamimalmberg7868 23 days ago +1

    Feels good to have someone thinking about the same things as I do.

  • @rand_longevity 24 days ago +3

    Great guest!!

  • @Anders01 25 days ago +2

    Great views on AI. My own guess is that some new AI architectures may be discovered by AI! Maybe that has already started to happen, and companies are keeping it a trade secret.

  • @thomasaqinas2000 24 days ago +1

    “Will AI be added to the agents of human communication networks in LLM, and will it realize a global brain? Will it advance AI and AGI to the point where it can simulate Noesis and Noeseos?”

  • @NotNecessarily-ip4vc 24 days ago +1

    I will continue exploring how the both/and logic and monadological framework catalyze new insights and expanded descriptive capacities across various domains:
    Neuroscience & Theories of Consciousness
    The logic allows formulating novel integrated perspectives on the mind-body problem and the neural correlates of consciousness that avoid the limitations of classical reductionist, dualist or mysterian views:
    • Subjective Experience and Objective Description
    For a conscious state x, let S(x) and O(x) represent the subjective experiential and objective neurophysiological aspects respectively.
    Classical approaches tend to bifurcate into strictly separating these as S(x) = 1, O(x) = 0 or fully reducing to S(x) = 0, O(x) = 1. But the both/and logic allows:
    S(x) = 0.7, O(x) = 0.6, ○(S(x), O(x)) = 0.8
    Capturing how conscious states involve an irreducible complementarity and tight coherence between the subjective feel and objective dynamics - neither pole is prioritized or eliminable.
    The synthesis operator ⊕ further models their co-realized gestalt unity:
    subjective_experience(x) ⊕ objective_neuralcorrelates(x) = conscious_state(x)
    • First-person and Third-person Perspectives
    A related issue is how to integrate the first-person introspective perspective with third-person external observations and theories about consciousness.
    For a set of data D about a conscious process x, we could have:
    Truth(D matches first-person_report) = 0.6
    Truth(D matches third-person_model) = 0.7
    ○(first-person, third-person) = 0.5
    Capturing a moderate coherence between phenomenological and observational characterizations, which are irreducibly complementary poles synthesized into a unified account:
    first-person_phenomenology ⊕ third-person_observations = theory_of_consciousness(D)
    So rather than dissociating and privileging one perspective, the logic allows formalizing their coconstituted integration into holistic theories spanning qualitative experience and quantitative evidence.
    • Unity of Consciousness and Neural Binding
    How the unified phenomenology of consciousness relates to distributed neural activities is another puzzle. The logic allows coherence metrics between unity and differentiation:
    Truth(consciousness_is_unified) = 0.8
    Truth(neuralactivities_are_differentiated) = 0.9
    ○(unified_consciousness, differentiated_activities) = 0.7
    With a synthesis capturing the paradoxical co-operation of integrated/discriminated aspects:
    unified_subjective_character ⊕ distributed_neural_processes = coherent_conscious_state
    Avoiding arbitrary prioritizations while modeling their integration as complementary interdependent facets of the same psychoneural Gestalt.
    Philosophers have long struggled to clearly relate or decisively separate phenomenological unity with neural multiplicity. The paraconsistent both/and logic allows coherently modeling and further theorizing their subtle complementary coconstitution.
    Philosophy of Logic and Language
    The logic's expressive capacities allow revisiting classic issues in a new light:
    • The Liar Paradox
    In classical logic, the sentence "This sentence is false" is a paradox leading to explosion and triviality. But the both/and logic allows coherent non-trivial treatment:
    Truth(Liar_sentence) = 0.5
    ○(Truth(Liar_sentence, 1), Truth(Liar_sentence, 0)) = 0.5
    With a 0.5 truth value representing the sentence's indeterminacy, and moderate coherence between its truth and falsity aspects.
    The synthesis operation ⊕ expresses its paradoxical self-reference resolving in a higher gestalt:
    Truth(Liar_sentence) ⊕ ¬Truth(Liar_sentence) = paradoxical_self-reference
    Providing tools for positively representing and logically operating with paradoxical self-undermining utterances, rather than dodging them through restrictive assumptions.
    • Vagueness and Fuzzy Boundaries
    The sorites paradox about the vagueness of fuzzy predicates like "heap" has also resisted classical treatment. But both/and logic allows:
    Truth(x_is_heap) = [0,1] for objects x
    With coherences tracking the degree of alignment with prototypical heap properties across a graded spectrum.
    This avoids bivalent heap/non-heap boundaries, capturing the nuanced continuities and contextualities underlying vague linguistic categories.
    So the logic restores accountability to the subtleties of real-world semantics resisting digitization, allowing discourse to resonate with rather than dissimulate the horizonal indeterminacies of language's ontological implicatures.
    Throughout, the both/and logic's abilities to integrate graded multivalued truths, positive contradictions and self-superseding syntheses allow revisiting paradoxes and singularities not as sheer inconsistencies to be prohibited, but generative disclosures indicating inadequacies of our prior abstractive models - opening new constructive symbolic vistas better aligning with the nuanced complexities of thought, language and reality itself.
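    The graded reading of the Liar sentence above can be sketched numerically. A minimal toy model, assuming (as the comment does not specify) that the self-reference is treated as the fixed-point equation Truth(L) = 1 - Truth(L), whose unique solution is the 0.5 the comment assigns:

```python
# Toy numeric reading of the Liar sentence under graded ("both/and") truth.
# Assumption not fixed by the comment above: the self-referential sentence L
# ("L is false") is modeled by the update v -> 1 - v, and a stable value is
# found by damped iteration. The unique fixed point is 0.5.

def liar_update(v: float) -> float:
    """One step of the Liar's self-reference: L asserts 'L is false'."""
    return 1.0 - v

v = 0.9  # arbitrary starting guess in [0, 1]
for _ in range(50):
    v = (v + liar_update(v)) / 2.0  # damped averaging toward stability

print(v)  # converges to 0.5
```

    Any starting guess lands on 0.5 after one damped step, since v and 1 - v always average to 0.5; the indeterminate truth value falls out of the dynamics rather than being stipulated.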

    • @NotNecessarily-ip4vc 24 days ago +1

      I will further elaborate on how the both/and logic and monadological framework can provide fruitful new perspectives and symbolic resources for various domains within computer science:
      Computational Logic & Knowledge Representation
      The multivalent, paraconsistent structure of the both/and logic aligns well with emerging challenges in areas like knowledge representation, automated reasoning, and dealing with inconsistent/incomplete information:
      • Many-Valued Logics for Uncertainty
      Classical bivalent logics struggle to handle degrees of uncertainty, vagueness and fuzziness that are ubiquitous in real-world data and knowledge bases.
      But the both/and logic's graded truth-values provide an ideal representational framework. We could have assertions like:
      Truth(sky_is_blue) = 0.9
      Truth(weather_is_sunny) = 0.7
      With coherences measuring the mutual compatibility of different assertions:
      ○(sky_is_blue, weather_is_sunny) = 0.8
      This allows robust reasoning in the presence of "fuzzy" predicates, uncertainty, and information granularity mismatches - something classical binary logics cannot elegantly model.
      • Paraconsistent Reasoning
      Conflicts and inconsistencies in large data corpora and ontologies have been a major challenge. But the both/and logic's paraconsistent tools allow quarantining and rationally operating with contradictions:
      Truth(X is_a Y) = 0.6
      Truth(X is_not_a Y) = 0.5
      ○(X is_a Y, X is_not_a Y) = 0.4
      While classically such an argument leads to inconsistency and explosion, the logic allows assigning substantive truth values while still keeping track of contradiction severity through coherence metrics.
      Ontologies can then be progressively repaired by applying the synthesis operator to resolve contradictions:
      X is_a Y ⊕ X is_not_a Y = revised_definition(X)
      So rather than requiring a global consistent fix, we can isolate and resolve contradictions in a piece-wise constructive manner - better reflecting how human knowledge/databases actually evolve.
      • Conceptual Integration
      The both/and logic provides a powerful framework for integrating and blending disparate concepts, ideas, theories into holistic unified models:
      Let C1, C2 be distinct concepts with disjoint characterizations
      C1 ⊕ C2 = integrated_unified_concept
      The synthesis operation generates a new conceptual whole rationally fusing the distinct aspects of C1 and C2 into a novel synthetic gestalt unbifurcating their complementary properties.
      This allows breaking out of disciplinary silos and flexibly synthesizing insights across domains in a logically coherent step-wise fashion - greatly enhancing our knowledge integration and model-building capacities.
      Such conceptual integration is pivotal for AI to develop more unified, coherent and holistically adequate world-models underpinning more general and context-aware intelligent systems.
      Formal Methods, Verification & Security
      The both/and logic provides tools for specifying, modeling and verifying systems exhibiting complex behaviors and potential paradoxical properties:
      • Formal Specification of Contradiction-Tolerant Systems
      For a system S and property P, classical temporal logics allow specification:
      S ⊨ □P (S satisfies property P globally)
      S ⊨ ◇P (S satisfies P for some state)
      But both/and logic allows more nuanced framings using truth degrees:
      Truth(S ⊨ □P) = 0.8
      Truth(S ⊨ ◇¬P) = 0.7
      ○(S ⊨ □P, S ⊨ ◇¬P) = 0.4
      This specifies that S satisfies P globally to degree 0.8, while also instantiating ¬P to degree 0.7 with low 0.4 coherence.
      Systems where contradictory properties coexist like this arise frequently in paradoxical domains like cybersecurity, inverse reasoning, and phylogenetic analysis. Classical logics have no way to precisely specify their requirements.
      • Modeling Context-Sensitive Inconsistent Systems
      Both/and logic also provides constructive semantics for paraconsistent models tolerating local inconsistencies:
      Let M be model for system S with assertion set K
      Classical models are bivalently invalidated if K is inconsistent.
      But both/and models can have:
      M ⊨ A with truth-value v(A)
      M ⊨ ¬A with truth-value v(¬A)
      ○(A, ¬A) = c (some coherence level c)
      Such M are non-trivial models where A and ¬A can both "stably hold" to specified degrees, rather than explosive inconsistent contamination.
      This allows building logically grounded specifications and models for inconsistent systems where contradictory situations must be reasoned about locally in a coherent fashion (cybersecurity, diagnostics etc)
      By providing richer semantic model theory incorporating contradiction-tolerance and graded truth, the both/and logic facilitates greater alignment between formal specifications and the often paradoxical behaviors of real-world computational systems.
      Software Engineering & Formal Methods
      The both/and logic also has relevance for more rigorously specifying, modeling, and verifying properties of complex software systems:
      • Modeling Software Quality Attributes
      In classical verification, quality attributes like:
      performance(fast), reliability(robust), maintainability(readable)
      Are treated as bivalent, context-free pass/fail specifications.
      But the both/and logic captures the intrinsic polysemicity and vagueness of such properties for real software with graded multivalued assignments:
      performance(fast) = 0.6
      reliability(robust) = 0.7
      maintainability(readable) = 0.5
      ○(performance, reliability) = 0.8
      With coherences quantifying tradeoffs. Allowing specifications like:
      high_performance ⊕ high_reliability ⊕ moderate_maintainability = desired_qualities
      These richer models capture the inevitably fuzzy and incommensurable nature of quality attributes across different contexts, perspectives and stakeholder concerns.
      • Compositional System Verification
      For complex systems, classical verification insists on composing component post-conditions into strict global system correctness:
      sub_module_1.post ⋀ sub_module_2.post ⋀ ... ⊦ system.spec
      But both/and logic allows more realistic compositional models incorporating inconsistencies, faultiness, noise:
      synthesis(sub_1.post, sub_2.post,...) ⊨ system.spec with truth-value v
      coherence(system_reqs, system_impl) = c
      Providing nuanced, graded verdicts on system alignment with requirements in the face of component faults, degradations, etc. Closer to real-world deployment scenarios.
      So both/and logic enables formalisms and methods more befitting the irreducible ambiguities, tradeoffs, failures, and mismatch between requirements and implementation that most complex computational artefacts manifest.
      Symbolic AI and Knowledge Representation
      The both/and logic provides an ideal formalism for representing the nuanced, contextual and graded notions ubiquitous in natural language, common sense reasoning and symbolic models of the world:
      • Many-Valued Semantic Networks
      Traditionally, semantic networks are bivalent graphs with typed nodes/edges capturing strict logical constraints:
      isA(Penguin, Bird) = True
      can(Penguin, Fly) = False
      But the both/and logic instead allows graded, prototypical assertions:
      isA(Penguin, Bird) = 0.7
      can(Penguin, Fly) = 0.2
      ○(isA(Penguin, Bird), Fly(Penguin)) = 0.6
      With coherences capturing mismatches between strict type constraints and graded category memberships based on typicality, not rigid rules.
      Facilitating more realistic models of conceptual knowledge and object affordances blending strict and prototypical aspects.
      • Non-Monotonic Reasoning and Belief Revision
      Classical monotonic logics cannot handle cases where new information invalidates previous conclusions.
      But both/and logic allows tracking when new beliefs conflict with previous beliefs using coherence checks:
      ○(believes(P), believes(Q)) = 0.8
      ○(believes(P), believes(¬P)) = 0.2
      If an agent acquires belief ¬P conflicting with previous P, the low coherence 0.2 signals triggering a belief revision.
      The synthesis operator can then generate a new coherent belief state:
      prior_beliefs ⊕ new_belief = revised_beliefs
      This formalizes a rational process for resolving contradictions by integrating new information while preserving maximal coherence with previous beliefs.
      So in summary, the both/and logic facilitates reasoning, modeling and representation formalisms better aligned with the nuances of real-world data, knowledge, and intelligent behavior in AI systems. By allowing for many-valued truth assignments, paraconsistent reasoning, and constructive coherence/synthesis operations, it avoids the problematic idealizations and explosions of classical monotonic logics.
      Its expanded symbolic toolkit allows building computational models, specifications, and reasoning systems that can more robustly handle incompleteness, ambiguity, vagueness and context-sensitivities while flexibly integrating insights across heterogeneous domains. Ultimately equipping AI systems with a more adequate representational fluency and inferential spectrum for negotiating the complexities of the real world in a more human-like symbolic vein.
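      The coherence (○) and synthesis (⊕) operators used throughout these comments are never given concrete definitions, but a minimal sketch makes the belief-revision example above concrete. The particular choices below (distance-based coherence, averaging synthesis, standard fuzzy negation) are illustrative assumptions only, not the commenter's definitions:

```python
def negation(v: float) -> float:
    """Standard fuzzy negation: Truth(not-A) = 1 - Truth(A)."""
    return 1.0 - v

def coherence(a: float, b: float) -> float:
    """Assumed reading of the ○ operator: degrees close together
    cohere highly; degrees far apart cohere poorly."""
    return 1.0 - abs(a - b)

def synthesis(a: float, b: float) -> float:
    """Assumed reading of the ⊕ operator: fuse two degrees by averaging."""
    return (a + b) / 2.0

# Belief-revision example from the comment: an agent holding P strongly
# acquires a strong belief in not-P; low coherence flags the conflict,
# and synthesis produces a revised, hedged belief state.
believes_P = 0.9
believes_not_P = 0.8
conflict = coherence(believes_P, negation(believes_not_P))   # ~0.3 (low)
revised_P = synthesis(believes_P, negation(believes_not_P))  # ~0.55

print(conflict, revised_P)
```

      Under these assumed definitions, the low coherence (~0.3) plays the role of the revision trigger the comment describes, and the synthesized value (~0.55) is the "revised_beliefs" state that preserves as much of each conflicting degree as averaging allows.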

  • @Sirmrmeowmeow 24 days ago

    I don't know, I'd have to think that a lot of the energy issue is the hardware, not the software architecture. We also take in a LOT of data (though fewer examples per se, sure).
    If we ran a full brain simulation on H100s it definitely would not be 20 watts.
    Probably the von Neumann bottleneck and the efficiency of the hardware; if we used better hardware we'd probably glean most of the efficiency gains: analog compute, perhaps memristors, spiking neural nets, photonics. Anyway, good talk.

  • @aliozgurbaltaoglu 23 days ago

    How can AI be utilized to make AI safer?

  • @flickwtchr 24 days ago +3

    Please work on the problem of millions of humans out of work and homeless between now and the end of the decade. Most of the AI Big Tech leaders who have mentioned UBI are now backing off, surprise surprise. I like and respect Tegmark, but I still don't get how the AI industry blithely talks about humans being more and more out of the loop without serious discussion of the near- and long-term societal implications, and the responsibility they bear for making that happen. It's easy to talk about how AI will improve people's lives while ignoring how it is going to disrupt, in very tragic ways, the lives of millions in the meantime.

    • @PatrickDodds1 23 days ago +3

      It's interesting, isn't it? Imagining the advent of the PC, one would have said it was going to replace human labour, but of course it hasn't; we simply work harder than ever, using thought instead of muscle. The same might be true of AI or AGI. What does seem irrefutable, though, is that an inferior intelligence is unlikely to control a superior one, qua humans/animals, and that the alignment of ASI is oxymoronic. It might even be possible to go further and say that if you have an aligned intelligence then you haven't got AGI, let alone ASI. All fascinating though, and overall a new paradigm for humans to flex to, to lean into or resist, in the coming years.