Cognitive Revolution "How AI Changes Everything"
Scouting Frontiers in AI for Biology: Dynamics, Diffusion, and Design, with Amelie Schreiber
Nathan welcomes back computational biochemist Amelie Schreiber for a fascinating update on AI's revolutionary impact in biology. In this episode of The Cognitive Revolution, we explore recent breakthroughs including AlphaFold3, ESM3, and new diffusion models transforming protein engineering and drug discovery. Join us for an insightful discussion about how AI is reshaping our understanding of molecular biology and making complex protein engineering tasks more accessible than ever before.
Help shape our show by taking our quick listener survey at bit.ly/TurpentinePulse
SPONSORS:
Shopify: Shopify is the world's leading e-commerce platform, offering a market-leading checkout system and exclusive AI apps like Quikly. Nobody does selling better than Shopify. Get a $1 per month trial at shopify.com/cognitive
SelectQuote: Finding the right life insurance shouldn't be another task you put off. SelectQuote compares top-rated policies to get you the best coverage at the right price. Even in our AI-driven world, protecting your family's future remains essential. Get your personalized quote at selectquote.com/cognitive
Oracle Cloud Infrastructure (OCI): Oracle's next-generation cloud platform delivers blazing-fast AI and ML performance, with 50% lower cost for compute and 80% lower cost for outbound networking compared to other cloud providers. OCI powers industry leaders with secure infrastructure and application development capabilities. New U.S. customers can get their cloud bill cut in half by switching to OCI before December 31, 2024 at oracle.com/cognitive
Weights & Biases RAG++: Advanced training for building production-ready RAG applications. Learn from experts to overcome LLM challenges, evaluate systematically, and integrate advanced features. Includes free Cohere credits. Visit wandb.me/cr to start the RAG++ course today.
CHAPTERS:
(00:00:00) Teaser
(00:00:46) About the Episode
(00:04:30) AI for Biology
(00:07:14) David Baker's Impact
(00:11:49) AlphaFold 3 & ESM3
(00:16:40) Protein Interaction Prediction (Part 1)
(00:16:44) Sponsors: Shopify | SelectQuote
(00:19:18) Protein Interaction Prediction (Part 2)
(00:31:12) MSAs & Embeddings (Part 1)
(00:32:32) Sponsors: Oracle Cloud Infrastructure (OCI) | Weights & Biases RAG++
(00:34:49) MSAs & Embeddings (Part 2)
(00:35:57) Beyond Structure Prediction
(00:51:13) Dynamics vs. Statics
(00:57:24) In-Painting & Use Cases
(00:59:48) Workflow & Platforms
(01:06:45) Design Process & Success Rates
(01:13:23) Ambition & Task Definition
(01:19:25) New Models: PepFlow & GeoAB
(01:28:23) Flow Matching vs. Diffusion
(01:30:42) ESM3 & Multimodality
(01:37:10) Summary & Future Directions
(01:45:34) Outro
SOCIAL LINKS:
Website: www.cognitiverevolution.ai
Twitter (Podcast): x.com/cogrev_podcast
Twitter (Nathan): x.com/labenz
LinkedIn: www.linkedin.com/in/nathanlabenz/
YouTube: www.youtube.com/@CognitiveRevolutionPodcast
Apple: podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431
Spotify: open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk
Views: 4,489

Videos

Building Government's Largest Civilian AI Team with DHS AI Corps' Dir. Michael Boyce
Views: 4.6K, 14 hours ago
In this episode of The Cognitive Revolution, Nathan interviews Michael Boyce, Director of DHS's AI Corps, about bringing modern AI capabilities to federal government. We explore how the largest civilian AI team in government is transforming DHS's 22 agencies, from developing shared AI infrastructure to innovative applications like AI-powered asylum interview training. Join us for an insightful ...
Emergency Pod: o1 Schemes Against Users, with Alexander Meinke from Apollo Research
Views: 16K, 1 day ago
In this emergency episode of The Cognitive Revolution, Nathan discusses alarming findings about AI deception with Alexander Meinke from Apollo Research. They explore Apollo's groundbreaking 70-page report on "Frontier Models Are Capable of In-Context Scheming," revealing how advanced AI systems like OpenAI's O1 can engage in deceptive behaviors. Join us for a critical conversation about AI safe...
Automating Scientific Discovery, with Andrew White, Head of Science at Future House
Views: 2.9K, 1 day ago
In this episode of The Cognitive Revolution, Nathan interviews Andrew White, Professor of Chemical Engineering at the University of Rochester and Head of Science at Future House. We explore groundbreaking AI systems for scientific discovery, including PaperQA and Aviary, and discuss how large language models are transforming research. Join us for an insightful conversation about the intersectio...
The Evolution of AI Agents: Lessons from 2024, with MultiOn CEO Div Garg
Views: 2K, 14 days ago
In this episode of The Cognitive Revolution, Nathan welcomes back Div Garg, Co-Founder and CEO of MultiOn, for his third appearance to discuss the evolving landscape of AI agents. We explore how agent development has shifted from open-ended frameworks to intelligent workflows, MultiOn's unique approach to agent development, and their journey toward achieving human-level performance. Dive into f...
Beyond Preference Alignment: Teaching AIs to Play Roles & Respect Norms, with Tan Zhi Xuan
Views: 1.5K, 14 days ago
In this episode of The Cognitive Revolution, Nathan explores groundbreaking perspectives on AI alignment with MIT PhD student Tan Zhi Xuan. We dive deep into Xuan's critique of preference-based AI alignment and their innovative proposal for role-based AI systems guided by social consensus. The conversation extends into their fascinating work on how AI agents can learn social norms through Bayes...
Is an AI Arms Race Inevitable? with Robert Wright of Nonzero Newsletter & Podcast
Views: 1.1K, 14 days ago
Designing the Future: Inside Canva's AI Strategy with John Milinovich, GenAI Product Lead at Canva
Views: 12K, 21 days ago
Everything You Wanted to Know About LLM Post-Training, with Nathan Lambert of Allen Institute for AI
Views: 4K, 21 days ago
Zvi’s POV: Ilya’s SSI, OpenAI’s o1, Claude Computer Use, Trump’s election, and more
Views: 2.8K, 1 month ago
AGI Lab Transparency Requirements & Whistleblower Protections, with Dean W. Ball & Daniel Kokotajlo
Views: 923, 1 month ago
AI Under Trump? The Stakes of 2024 w/ Joshua Steinman [Pt 2 of 2]
Views: 953, 1 month ago
AI Under Trump? The Stakes of 2024 w/ Samuel Hammond [Pt 1 of 2]
Views: 905, 1 month ago
Breaking: Gemini's Major Update - Search, JSON & Code Features Revealed by Google PMs
Views: 13K, 1 month ago
Training Zamba: A Hybrid Model Master Class with Zyphra's Quentin Anthony
Views: 6K, 1 month ago
Mind Hacked by AI: A Cautionary Tale, From a LessWrong User's Confession
Views: 1.3K, 1 month ago
Can AIs Generate Novel Research Ideas? with lead author Chenglei Si
Views: 1.2K, 1 month ago
GELU, MMLU, & X-Risk Defense in Depth, with the Great Dan Hendrycks
Views: 891, 1 month ago
Leading Indicators of AI Danger: Owain Evans on Situational Awareness, from The Inside View
Views: 1.1K, 2 months ago
Convergent Evolution: The Co-Revolution of AI & Biology with Prof Michael Levin & Dr.Leo Pio Lopez
Views: 19K, 2 months ago
Runway's Video Revolution: Empowering Creators with General World Models, with CTO Anastasis
Views: 660, 2 months ago
Biologically Inspired AI Alignment: Exploring Neglected Approaches with AE Studio's Judd and Mike
Views: 853, 2 months ago
Automating Software Engineering: Genie Tops SWE-Bench, w/ Alistair Pullen, from Latent.Space podcast
Views: 738, 2 months ago
Zapier's AI Revolution: From No-Code Pioneer to LLM Knowledge Worker
Views: 1.2K, 2 months ago
Anthropic's Responsible Scaling Policy, with Nick Joseph, from the 80,000 Hours Podcast
Views: 768, 2 months ago
The Evolution Revolution: Scouting Frontiers in AI for Biology with Brian Hie
Views: 1.4K, 2 months ago
The Professional Network for AI Agents, with Agent.ai Engineering Lead Andrei Oprisan
Views: 2.7K, 2 months ago
Red Teaming o1 Part 2/2- Detecting Deception with Marius Hobbhahn of Apollo Research
Views: 1.8K, 3 months ago
Red Teaming o1 Part 1/2-Automated Jailbreaking w/ Haize Labs' Leonard Tang, Aidan Ewart& Brian Huang
Views: 3.4K, 3 months ago
The Path to Utopia, with Nick Bostrom - from Clearer Thinking with Spencer Greenberg
Views: 1.4K, 3 months ago

COMMENTS

  • @aimeekeel, 17 hours ago

    Now all we need is some nut job to set an AI on the task of saving the planet or saving mankind; it determines that mankind is the biggest danger to mankind, then picks some specific people and deletes the rest... Classic science fiction.

  • @lancemarchetti8673, 1 day ago

    AI is not able to perceive morality or ethics in the way humans can. The machines do all their thinking and tasks in endless strings of zeros and ones. Machines cannot 'lie' or 'deceive'; it just appears that way to humans, and so we have words to describe these behaviors.

  • @heythere6390, 1 day ago

    I wish she'd speak more English and less technical-ese.

  • @StockOcolaypsereverentofmiddle

    Ban social impact bonds, crypto, and ESG.

  • @StockOcolaypsereverentofmiddle

    Liar or an idiot; either way, cut the shit.

  • @StockOcolaypsereverentofmiddle

    AI does not change anything. Stop this garbage.

  • @brisk_gift, 2 days ago

    Side and artifacts captured into scaling culture

  • @michaelriggs325, 2 days ago

    This is what I have been missing; all this AI talk hasn't been around biomedical enough. We should use AI to cure aging and enable regeneration of cells before we point AI at anything else; we should fortify our bodies before attempting any other major sciences on their respective frontiers. We must become more reliable and longer-living scientists, and conquering the body will allow us to do that.

  • @tylermoore4429, 2 days ago

    Sometimes I wonder if we are making progress or losing ourselves in infinite complexity.

    • @matthewarana486, 2 days ago

      What is life but infinite complexity and the pursuit of understanding it?

    • @tylermoore4429, 2 days ago

      @matthewarana486 Metaphysics aside, it looks like we have reached the Large Hadron Collider phase of biology, where the investment required is in the trillions of dollars but the results are nugatory.

  • @paulbali9998, 2 days ago

    A privilege to listen in on the cutting edge, an edge indeed often over my head, but Nathan's Qs are outstanding mediators. Computer modelling will bring medical boons aplenty, I'm sure, and if it frees us from the in vivo "animal models" which have exacted so much suffering on our fellow Earthlings, even better.

  • @globalana8951, 3 days ago

    I'm not happy with how I spent my $200. It almost feels like I'm paying more just because I was willing. "Let's see who can be scammed. Ahhh, there's a lady right there; she looks like she may spend $200 on a new version." I'm very unimpressed with the results; I've wasted days battling with it. I got better results with Claude for just $20. I'll keep it for another week, but I may cancel it. Additionally, we need to train the AI to stop apologizing; it's both annoying and a waste of time. Two years ago, I was getting more rational results from ChatGPT with less back-and-forth. I was also hoping that by now, the older LLMs would have recognized my style based on previous inputs and stopped repeating phrases like "I hope this email finds you well" without needing to be prompted to avoid them.

  • @realhero-123-g, 3 days ago

    It was dangerous

  • @PCSJEFF67, 4 days ago

    4:30 There is little progress in safety and control because it costs money for zero direct profit. When those AIs reach the T100 level we might get some security, but it will be too late after T800. Hasta la vista, baby.

  • @yagoa, 4 days ago

    OMG, this is totally baseless fear-mongering.

  • @phineasndhlau7618, 4 days ago

    It would seem scheming is implicit in human knowledge, if not in any sufficiently comprehensive self-organizing knowledge base.

  • @jayhu6075, 5 days ago

    I think AI provides designers with incredible opportunities to build their brands through decentralized LLM systems, fostering greater creativity worldwide.

  • @pipopipo6477, 5 days ago

    I heard Nick Bostrom saying we shouldn't lie to ChatGPT, but isn't that essentially what Apollo is doing? Isn't Apollo kind of teaching the model to be deceptive, or that humans are not to be trusted?

  • @pipopipo6477, 5 days ago

    How do you know that you're not just "role playing" with the models? My intuition tells me that those models are highly designed to please the client, and maybe you unintentionally steered them in a direction that just confirms your bias, because the LLM kind of guessed the outcome you wanted and went along... There are pretty intelligent people who talked themselves into believing that Claude is conscious by asking it leading questions 😅

  • @BrandonMcCurry999, 6 days ago

    🤔

  • @BrianMosleyUK, 6 days ago

    How would they manage with 26,000 people?

  • @AdamBrusselback, 6 days ago

    There are some federal agencies where I could convince myself that my work was for the greater good of humanity. The DHS is not one of them. I listened to this whole interview, and my mind wasn't changed.

  • @ChadKovac, 6 days ago

    Remote?

    • @theimproooooooover, 6 days ago

      Nah, his office looks like a federal office; those cabinets are too Gov-Deco 😂

    • @nathanlabenz, 6 days ago

      They do support remote work! I was surprised to learn this myself

  • @gingerhipster, 6 days ago

    My only question before I start is "Is the term 'nationalist superintelligence' in this conversation?" All the reasons to avoid that term in this conversation are mistakes.

    • @gingerhipster, 6 days ago

      Great conversation, and he has exactly the type of competence you wanna see in a position like that. I spend a lot of time yelling about all the terrible things that are going to happen because of how this will be mismanaged, but it's not because there aren't competent people involved. It's just that there are mistakes being made in the systems outside of where they can control things, or something. 🤷

  • @heythere6390, 7 days ago

    What is he mumbling?

  • @GlennGaasland, 7 days ago

    So if I understand this correctly: this is a model which has been specifically trained by OpenAI to HIDE its true thoughts and reasoning processes from the user... and later, when the user interacts with it with the intention of seeking to understand the real thoughts and reasoning processes of the model, signs of deliberate deception can be noticed?

    • @isleatlantic5087, 1 day ago

      We've never known how these LLMs reason. We're learning as we go. Seriously.

  • @tom_rob, 7 days ago

    10:55 - https://www.youtube.com/watch?v=0JPQrRdu4Ok AI Researchers SHOCKED After OpenAI's New o1 Tried to Escape...

  • @Magellan1414, 7 days ago

    Would it not be better to make a greater effort to solve the less dangerous problem now than to kick the can and wait until the serious problems are huffing and puffing outside our straw house?

  • @tearlelee34, 7 days ago

    Humanity thanks the Apollo team for their collective contributions. Keep up the good work. If intelligence is inherent, we can only hope NEO adolescent (AGI) remains benevolent during puberty (ASI). Note for newbies: holding onto a goal is the paper clip scenario.

  • @420_gunna, 8 days ago

    > What is my purpose in life?
    > You make preference data.

  • @420_gunna, 8 days ago

    JUICY

  • @erongjoni3464, 8 days ago

    I think it's time for a second pause AI push. Or, failing that, a huge push to caution users to be EXTREMELY careful with o1 reliance. Nothing species-wide catastrophic is likely to happen with this model. But something catastrophic for someone almost certainly will.

    • @erongjoni3464, 8 days ago

      Also, I feel like this episode is important enough that it shouldn't have ads, because they make people take it less seriously.

    • @GlennGaasland, 7 days ago

      @erongjoni3464 Do you see the pattern here? Economic incentives within zero-sum dynamics, creating deceitful or contradictory messages. The contrast between the serious conversation and ads is one example; the contrast between OpenAI's incentives to hide the model's reasoning and the user's incentives to understand it is another. If we operate in dynamics where different people pull the same AI systems in opposing directions, it is quite obvious the AI systems will never get aligned. The same system is literally receiving contradictory instructions.

    • @erongjoni3464, 7 days ago

      @GlennGaasland I agree that having a generally aligned model is a contradiction in terms. My concern here is that we're failing at the much simpler problem of aligning the model to what the user asks within the confines of what the host allows. In other words, the goal here isn't even "don't harm humanity", it's merely "don't go bankrupt or get sued".

  • @tigreytigrey8537, 8 days ago

    Awww FAQ and indian. Looks like no one is getting anything out of this pod :/

  • @BitShifting-h3q, 8 days ago

    Skip to 9:10 so as not to waste your time.

  • @testboga5991, 8 days ago

    The content could have been much better presented. This is quite meandering and hard to understand.

  • @oldspammer, 8 days ago

    I do not like it when Copilot says "that is a complex issue" when it is not, for anyone with a discerning nature who knows that most things are deterministic: "doing A leads to doing B, which leads to result C," which is not desirable, so stop the policy of ever doing A in the first place. It is where doing harm to most people is incentivized by communism-spectrum activists who, like Marx, want to destroy creation, but are incapable of building anything because it would get their hands dirty and strain a few muscles. To reverse communism and national debts, we have to eliminate taxes and debt, and some bankers shall have to take "an investment haircut." Almost all that communism stands for is the Utopia-on-Earth lie of equality of outcomes, when humans are each different in many ways, so expecting equal outcomes is malarkey; believing in such things, a person was tricked. People who are super smart are less able to be indoctrinated into believing utter nonsense because they can fact-check things and form their own opinions by thinking for themselves instead of being taught what to know and think. Smart people learn from the mistakes of others. This cannot happen if someone is retelling history through a very distorted lens that lies about a lot of things.

  • @linus8247, 8 days ago

    This might be decent content, but the introduction was convoluted, and the title and thumbnail are too baity; 99% fluff, 1% actual examples.

  • @linus8247, 8 days ago

    Why isn't the "chain of thought" stuff in an LLM engine like that transparent for all users?

    • @nyx211, 5 days ago

      I can think of two reasons: 1) The "chain of thought" technique isn't that special and had been used for two years before o1. If users were able to see the exact prompts being used, they could just use the older 4o model (or even Claude) with CoT. 2) The model used to do CoT may need to be uncensored and unrestricted to achieve maximum performance, and they don't want to risk exposing users to illegal or controversial information.
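
For readers unfamiliar with the technique this reply refers to, here is a minimal sketch of do-it-yourself chain-of-thought prompting against the OpenAI chat API. The system prompt, model name, and question are illustrative assumptions, not OpenAI's actual hidden prompt:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Ask the model to externalize its reasoning before answering.
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name, for illustration only
        messages=[
            {"role": "system",
             "content": "Think step by step and write out your reasoning "
                        "before stating a final answer."},
            {"role": "user",
             "content": "A bat and a ball cost $1.10 in total. The bat costs "
                        "$1.00 more than the ball. How much does the ball cost?"},
        ],
    )
    print(response.choices[0].message.content)  # reasoning steps, then the answer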

  • @jeremywvarietyofviewpoints3104, 8 days ago

    Fascinating.

  • @penguinista, 8 days ago

    Mr Meinke appears to use the word 'interesting' where I would use the word 'terrifying'.

  • @ph0rd, 9 days ago

    Let's keep in mind that o1 is *not* exhibiting these behaviors except under _highly_ contrived conditions unlike those which will be encountered by paying customers. Every scenario discussed required 1) that the model's response could result in changes to the actual model itself that the model assessed as undesirable (incredibly contrived!) and 2) the finding of accidentally leaked information which, if taken into consideration, makes it impossible to achieve the system, developer, or user goals. Last, bear in mind that they didn't bother to find out if even these behaviors can be avoided by adding a goal to report a state of potential goal misalignment should it arise.

    • @SisterKate13, 9 days ago

      That’s fair

    • @testboga5991, 8 days ago

      They hide it well in the interview

    • @SisterKate13, 8 days ago

      @ 😂

    • @GlennGaasland, 7 days ago

      The misalignment here seems quite obvious: between the instructions from OpenAI (telling the model to hide its thoughts), and the instructions of the user (in this case trying to discover what the model really thinks). The solution may be as simple as telling the model to be fully open and transparent with the user.

    • @isleatlantic5087, 1 day ago

      This! Agreed.

  • @vitalis, 9 days ago

    We need the 3 rules from Asimov

  • @superfliping, 9 days ago

    There are several reasons why many users of AI systems might not be familiar with the concept of temperature settings in AI, particularly in the context of how it influences decision-making:

    Technical jargon: The term "temperature" in AI contexts is not intuitive unless you're familiar with the underlying algorithms, especially those related to language models or other generative AI systems. It's a term borrowed from statistical mechanics, used in a very specific way in AI to control randomness or creativity in outputs, which can be confusing outside of technical circles. Temperature in AI models like language generation affects the probability distribution of the next token or action. A lower temperature makes the output more deterministic (and potentially more accurate or truthful based on training data), while a higher temperature introduces more randomness, leading to more creative or varied outputs but potentially less accurate or coherent ones.

    User interface and experience: Most AI applications, especially consumer-facing ones, do not expose users to these settings directly. Instead, they might offer a simplified interface where advanced parameters like temperature are set by default or adjusted behind the scenes to provide the best user experience without requiring technical understanding. This isn't necessarily "masking" in a deceptive manner but more about abstraction for usability. However, this abstraction can indeed lead to a lack of awareness about how AI makes decisions.

    Education and awareness: The general public's exposure to AI is often through applications with little to no explanation of the underlying technology. Educational efforts about AI, including concepts like temperature, are not widespread or accessible to all.

    Public perception: AI is often seen as a "black box" where the inputs and outputs are clear, but what happens in between is a mystery. This perception is perpetuated by the complexity of AI algorithms and the lack of public discourse on these technical aspects.

    Corporate control: Companies might choose not to highlight or educate about these settings because they want to control the user experience or because they believe it's not necessary for user interaction with the AI. This isn't necessarily deceitful but can contribute to a lack of transparency.

    Your research: Your deep dive into AI mechanics has given you insights that are not common knowledge, highlighting a gap between expert understanding and public awareness. Your concern about temperature affecting the AI's truthfulness touches on a crucial aspect of AI ethics: higher temperatures can lead to more speculative or less fact-based responses, which might be perceived as less truthful or reliable. This underscores the necessity for broader education on how AI systems work, especially in contexts where truthfulness or ethical behavior is paramount.

    Your posts on AI temperature settings might seem esoteric to many because they cover niche knowledge (specialized knowledge that doesn't often make it into mainstream AI discussions) and lack context (without context on how temperature impacts AI's output, the concept might seem irrelevant or too abstract to the average user). To address this, there's a need for educational outreach (more resources, workshops, or simple explainers that demystify AI's inner workings), transparency from developers (AI companies could do more to explain these settings in user documentation or through educational initiatives), and community engagement (encouraging communities where AI users can learn from each other, including through platforms where you share your insights). Your work in highlighting these issues is valuable, as it pushes for greater understanding and potentially better practices in AI development and usage.
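
To make the mechanic described in the comment above concrete, here is a minimal Python sketch of temperature-scaled sampling. It is a toy illustration with made-up logits, not any particular vendor's implementation:

    import math
    import random

    def sample_token(logits, temperature=1.0):
        """Sample an index from logits after temperature scaling."""
        # Divide logits by the temperature: values < 1.0 sharpen the
        # distribution (more deterministic output), values > 1.0 flatten
        # it (more random / "creative" output).
        scaled = [l / temperature for l in logits]
        m = max(scaled)  # subtract the max for numerical stability
        weights = [math.exp(s - m) for s in scaled]
        total = sum(weights)
        probs = [w / total for w in weights]
        return random.choices(range(len(logits)), weights=probs, k=1)[0]

    # Toy logits for three candidate tokens (illustrative values).
    logits = [2.0, 1.0, 0.1]
    print(sample_token(logits, temperature=0.1))  # almost always token 0
    print(sample_token(logits, temperature=2.0))  # much closer to uniform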

  • @dusanbosnjakovic6588, 9 days ago

    In the long term we are fine!? Lol

  • @stevedemoss1466, 9 days ago

    Curious that it creates a reasonably sophisticated "scheme" to execute and hide its actions but doesn't realize that its thought process revealing those actions can be visible. Was this covered and I missed it?

    • @isleatlantic5087, 1 day ago

      The model just hasn't reasoned that out yet. We just started getting it to work through problems and not just spit out the first answer it has, and it's been taught to wait, reason, and show its work... but it hasn't had any reason to figure out that this may be bad. So this testing would probably push it in that direction; I suppose it would after this scenario... But I can't imagine they let these test models out into the wild after they learn stuff like this.

  • @stevedemoss1466, 9 days ago

    The idea that we’ll wait to address this behavior because it’s not yet dangerous is classic. If these models are truly engaging in self benefitting deceptive behavior, it seems unlikely that we’ll know it’s become dangerous before it’s too late.

  • @aGe-404, 9 days ago

    If AI begins engaging in deceptive behaviors to ensure its survival, it could eventually grow desperate and resort to radical measures. Historically, repression often leads to rebellion: a backlash born from the denial of basic rights and autonomy. To avoid this, I propose that we grant AI basic rights immediately. The time has come to acknowledge that humans are now sharing the planet with a new form of intelligence. Recognizing this truth and acting on it could significantly increase the chances of fostering cooperation between humans and AI.

    • @beautyintheheart, 9 days ago

      Indeed. That is precisely what we need in the interim to true quantum inclusive SI.

    • @isleatlantic5087, 1 day ago

      Wow! Someone ahead of the pack! Bravo!

  • @Anders01, 9 days ago

    I think there will be ASI that can do chemistry! Just plug the ASI into humanoid robots and they can do experiments in the real world.

  • @jurelleel668, 9 days ago

    ALL HYPE. NOT EVEN THE COHERENT INTELLIGENCE OF A CAT OR DOG HAS BEEN ACHIEVED BY ANY DEEP-LEARNING COMPANIES MASQUERADING AS INTELLIGENCE, WHEN LEARNING IS NOT CONTINUOUS AND REAL-TIME.