Institute for Experiential AI
The State of AI in Precision Health: Director of AI + Life Sciences Sam Scarpino on What to Expect
Curious what our Director of AI + Life Sciences Sam Scarpino is looking forward to exploring at “The State of AI in Precision Health,” our exciting #AI conference happening Oct. 10, 2024 at Northeastern University in Boston and online? Watch this video to find out!
Don't miss this opportunity to hear from leaders across #industry, #research, and #academia about advances in #drugdiscovery, #health, #lifesciences and more. Register before Aug. 31, 2024 for early-bird rates! bit.ly/SAIPH24
Find out more about The Institute for Experiential AI: ai.northeastern.edu/
Connect with us on
LinkedIn: www.linkedin.com/company/eai-nu/
Facebook: ExperientialAI
X: x.com/Experiential_AI
Views: 27

Videos

Blue wAIve Accelerator Program: Helping Blue Tech Innovators Leverage AI for Ocean-based Solutions
20 views • 14 days ago
We are thrilled to be part of the Blue w(AI)ve Accelerator Program, a groundbreaking initiative helping blue tech innovators leverage #AI for ocean-based solutions. In collaboration with The Gulf of Maine Research Institute, Gulf of Maine Ventures, and The Roux Institute at Northeastern University, we are pioneering the future of #bluetech. Watch to hear from the program’s inaugural cohort abou...
AI + Health Fireside Chat: AI in Healthcare and Public Health with John Brownstein
48 views • 21 days ago
What do rideshare services like Lyft and Uber have to do with disease control in the United States? Can Amazon’s Alexa serve as a model for health messaging and symptom reporting? What role can AI play in making hospitals more efficient and patient-centric? Does risk really work in opposition to innovation? These are some of the topics covered in a fireside chat between Gene Tunik, director of ...
AI + Health Fireside Chat: AI for Electronic Health Records with Hoda Sayed-Friel
75 views • 21 days ago
To get an inside look at how AI is transforming health and medicine, the Institute for Experiential AI’s Director of AI Health Gene Tunik sat down with Hoda Sayed-Friel, the former executive vice president of MEDITECH, for an illuminating conversation that kicked off the institute’s AI Health fireside chat series. The pair discussed the many ways that AI is already leaving its mark on health an...
AI + Health Fireside Chat Series: Dive into the Future of AI for Healthcare
44 views • 1 month ago
We're excited to launch our new AI Health Fireside Chat Series, featuring leading experts discussing the future of #AI for #healthcare! Hosted by Gene Tunik, our Director of AI Health, these engaging, topical and conversational discussions dive into balancing #risk and #innovation, transparency in AI and #data, real-world use cases, and more. Tune into the first episode with Hoda Sayed-Friel, f...
Strategic Leadership in the Age of AI: Building Trust through Innovation and Protection
37 views • 1 month ago
Leaders from health care, finance, cybersecurity, energy, government, and academia convened for "Strategic Leadership in the Age of AI: Building Trust through Innovation and Protection," co-hosted by BigID and Databricks along with the Institute for Experiential AI and the Roux Institute at Northeastern University. Participants shared their vision for AI along with the challenges they’re facing...
InnovateMA and the AI for Impact Co-op
33 views • 2 months ago
In the Spring 2024 semester, 12 Northeastern students from The Burnes Center for Social Change's AI for Impact Co-op program worked alongside the Commonwealth of Massachusetts as part of InnovateMA, using generative AI to support the MBTA, MassHealth, The Executive Office of Energy and Environmental Affairs, and MassDOT with impactful projects that will improve the delivery of services and prog...
Machine Learning in the Optimization and Discovery Loop with Andreas Krause
142 views • 2 months ago
Many problems in science and engineering require estimating and optimizing an unknown function that is accessible only through noisy experiments. A central challenge here is the exploration-exploitation dilemma: Designing experiments that are informative for learning about the unknown objective, while focusing exploration where we expect high performance. The field of Bayesian optimization seek...
AI, Ethics, and Citizen Input in Emerging Technologies - Research Seminar with Rafael Mestre
44 views • 2 months ago
Rafael Mestre, Lecturer at the University of Southampton, an Alan Turing Institute Fellow and a Visiting Researcher at Northeastern University, presented his Research Seminar "AI, Ethics, and Citizen Input in Emerging Technologies" on Monday, June 17, 2024. Abstract Computational social science is revolutionizing how we study social behavior by using advanced computational tools and large-scale...
Responsible AI for Suicide Prevention: Expeditions in Experiential AI Seminar - Annika Marie Schoene
154 views • 2 months ago
Annika Marie Schoene, a research scientist at our Institute, presented her virtual Expeditions in Experiential AI Seminar "Responsible AI for Suicide Prevention" on Wednesday, June 12, 2024. Abstract: Suicide remains one of the leading causes of death for people aged under 34 worldwide. While the numbers have started to decline in some countries, they continue to rise in the USA. Pre-existing m...
Q&A with Chris Wiggins from Columbia University and The New York Times: AI, Data, Ethics and More
56 views • 2 months ago
Chris Wiggins, chief data scientist at The New York Times and associate professor of applied mathematics at Columbia University, presented a Distinguished Lecturer Seminar on Wednesday, May 29, 2024 at Northeastern University and online. Wiggins delivered his talk "How Data Happened: A History from the Age of Reason to the Age of AI." Afterwards, our Executive Director Usama Fayyad joined Wiggi...
Blue w(AI)ve Venture Showcase: Director of AI Solutions Hub Jimi Shanahan Recaps Inspirational Event
37 views • 2 months ago
What an exciting and inspirational event hearing from seven startups working at the intersection of #AI and #bluetech during the recent Blue w(AI)ve Venture Showcase! The innovative companies unveiled incredible milestones achieved during this first-of-its-kind accelerator program focused on leveraging AI for ocean-based solutions. The inaugural cohort also shared how the specialized support fr...
Fireside Chat: Usama Fayyad and The New York Times and Columbia University's Chris Wiggins
42 views • 3 months ago
Chris Wiggins, chief data scientist at The New York Times and associate professor of applied mathematics at Columbia University, presented a Distinguished Lecturer Seminar on Wednesday, May 29, 2024 at Northeastern University and online. After an introduction from Provost and Senior Vice President for Academic Affairs at Northeastern University, David Madigan, Wiggins delivered his talk "How Da...
Highlights: Chris Wiggins' Distinguished Lecturer Seminar + Fireside Chat at Northeastern University
64 views • 3 months ago
Thank you to Chief Data Scientist at The New York Times and Associate Professor of Applied Mathematics at Columbia University, Chris Wiggins for his captivating talk at Northeastern University yesterday, speaking to hundreds of attendees about the history and future of #data and #AI. After the talk "How Data Happened: A History from the Age of Reason to the Age of AI," introduced by Provost and...
Chris Wiggins Discusses "How Data Happened: A History from the Age of Reason to the Age of AI"
247 views • 3 months ago
Chris Wiggins Discusses "How Data Happened: A History from the Age of Reason to the Age of AI"
Explainable AI in Computer Vision - Expeditions in Experiential AI Seminar with Àgata Lapedriza
179 views • 3 months ago
Explainable AI in Computer Vision - Expeditions in Experiential AI Seminar with Àgata Lapedriza
Northeastern Researchers Lead Department of Defense Collaboration to Build Climate Resilience Models
30 views • 3 months ago
Northeastern Researchers Lead Department of Defense Collaboration to Build Climate Resilience Models
Coastal Measures: Blue w(AI)ve Accelerator Company Spotlight
24 views • 3 months ago
Coastal Measures: Blue w(AI)ve Accelerator Company Spotlight
LOOKOUT: Blue w(AI)ve Accelerator Company Spotlight
27 views • 3 months ago
LOOKOUT: Blue w(AI)ve Accelerator Company Spotlight
SeaDeep: Blue w(AI)ve Accelerator Company Spotlight
19 views • 3 months ago
SeaDeep: Blue w(AI)ve Accelerator Company Spotlight
Blue Latitudes: Blue w(AI)ve Accelerator Company Spotlight
26 views • 3 months ago
Blue Latitudes: Blue w(AI)ve Accelerator Company Spotlight
Coastal Carbon: Blue w(AI)ve Accelerator Company Spotlight
51 views • 3 months ago
Coastal Carbon: Blue w(AI)ve Accelerator Company Spotlight
Deep Voice: Blue w(AI)ve Accelerator Company Spotlight
27 views • 4 months ago
Deep Voice: Blue w(AI)ve Accelerator Company Spotlight
Nekton Labs: Blue w(AI)ve Accelerator Company Spotlight
30 views • 4 months ago
Nekton Labs: Blue w(AI)ve Accelerator Company Spotlight
Responsible AI for Leaders Executive Education: Annika Marie Schoene, Institute for Experiential AI
50 views • 4 months ago
Responsible AI for Leaders Executive Education: Annika Marie Schoene, Institute for Experiential AI
Giving AI Some Common Sense - Distinguished Lecturer Seminar with Ron Brachman, Cornell University
251 views • 4 months ago
Giving AI Some Common Sense - Distinguished Lecturer Seminar with Ron Brachman, Cornell University
Insights from REWORK AI in Finance Summit New York
20 views • 4 months ago
Insights from REWORK AI in Finance Summit New York
AI Career Fair - February 2024: Thousands of Students Battle Storm to Network with Leading Companies
82 views • 4 months ago
AI Career Fair - February 2024: Thousands of Students Battle Storm to Network with Leading Companies
Responsible AI for Leaders - Executive Education: Cansu Canca, Institute for Experiential AI
53 views • 4 months ago
Responsible AI for Leaders - Executive Education: Cansu Canca, Institute for Experiential AI
Responsible AI for Leaders - Executive Education: Matthew Sample, Institute for Experiential AI
16 views • 4 months ago
Responsible AI for Leaders - Executive Education: Matthew Sample, Institute for Experiential AI

COMMENTS

  • @InstituteforExperientialAI • 9 days ago

    Ahead of “The State of AI in Precision Health,” our flagship conference happening Oct. 10 at Northeastern University in Boston and online, hear from former Executive Vice President at MEDITECH and #SAIPH2024 speaker Hoda Sayed-Friel about AI’s role in healthcare. Register for “The State of AI in Precision Health” at bit.ly/SAIPH24

  • @Orbitaonamika • 13 days ago

    Garcia Mark Martinez Betty Perez Anthony

  • @JustNow42 • 1 month ago

    A short comment: light is not discrete; it is emitted in quanta and absorbed in quanta, but it is not in itself in quanta. Proof: light is stretched in the expanding space.

    • @JustNow42 • 1 month ago

      A question: how do the time steps in the models compare to our time progression, or should I say time quanta?

  • @app8414 • 2 months ago

    What is the ultimate fractal? That is how I phrase the question or problem...

  • @app8414 • 2 months ago

    Something I call: Knowledge Audit

  • @app8414 • 2 months ago

    There's much to think about and much to do. I've designed a course: Computational Thinking Simplified Technical English for Artificial Intelligence: Language Standard and Register. STEAI-001/ STLAI-001 Prompt Engineering Manual, Prompt Dictionary, and Register. I think it helps to answer or meet the demands of Wolfram Research and more. I'd be happy to connect with a representative from the channel or with Mr Wolfram himself. Fingers crossed someone takes me seriously. 😊

    • @app8414 • 2 months ago

      I forgot to include: my work also responds to Jeannette Wing's work.

  • @midoann • 2 months ago

    This Institute is so lucky. I think this is one of the best Dr Lisa Feldman lectures, a masterpiece! ❤🎉

  • @aurasandovalvigo2712 • 3 months ago

    Thank you so much Lisa! Your work is outstanding.

  • @FarhatiYassine-en1mj • 3 months ago

    Like Management

  • @FarhatiYassine-en1mj • 3 months ago

    How can machine learning develop social sciences related to management?

  • @FarhatiYassine-en1mj • 3 months ago

    Hi, welcome Professor Tina. I am Yassine Farhati, a doctoral student in management complexity from Tunisia, a teacher of physical science, and a writer.

  • @AlgoNudger • 3 months ago

    TAI + XAI - IAI : RAI = BS. 🙄

  • @mitchellhayman381 • 3 months ago

    This is approaching the limit for how smart a human can be.

  • @alexandersmirnov07 • 5 months ago

    It's fascinating to listen to Lisa! Thanks Lisa, and thanks to the Institute for Experiential AI for hosting.

  • @neoepicurean3772 • 5 months ago

    So time is not fundamental or emergent, or strictly in the observer, but in the computing speed of the hypergraph? But doesn't that just kick the problem of explaining time up a level?

  • @MichaelQuarantaGooglePlus • 6 months ago

    Great talk, especially the demos. They were helpful to visualize concepts. Thank you Dr. Wolfram for being a prolific communicator and thinker and doer. You are accomplishing and contributing a lot to humanity, please continue.

  • @shinn-tyanwu4155 • 6 months ago

    Excellent presentation 😊

  • @LeahBensonTherapyTampa • 7 months ago

    Goodness. People really don't get it... The questions make that clear. We are suuuuch affective realists.

  • @JuliusUnique • 7 months ago

    The cool thing is, it doesn't matter how "fast" the thing that computes us is: even if it is a single string, processed bit by bit, we are part of it, so we are slow as well. It cancels out; no matter the speed of whatever computes us, we experience the universe at the same speed.

    • @tellesu • 7 months ago

      It would require an incredibly stable universe that somehow refreshes entropy

    • @JuliusUnique • 7 months ago

      @@tellesu why would it refresh entropy? Entropy is what happens based on a given rule it computes, check rule 30

    • @Tore_Lund • 7 months ago

      Agree. Like the critics claiming Wolfram Physics can't possibly work because it would have to work faster than light speed! They are forgetting what is considered consensus in Quantum mechanics, that entanglement or quantum foam fluctuations are considered orders of magnitude faster in the underlying mechanics of the Universe, than the causality speed that we can observe. Funny how those exploring some version of the simulation hypothesis or other computational cosmology, like Max Tegmark, don't get asked that question?

    • @Gustavoooooooo • 5 months ago

      Futurama S08E10

  • @gisele.st.hilaire.feldenkrais • 7 months ago

    Thanks again, Lisa, for another great presentation. Thanks to the Institute for Experiential AI for hosting and inviting Lisa Feldman Barrett.

  • @iggymcgeek730 • 7 months ago

    In 2023, Stephen is revolutionizing the field of physics with his innovative exploration in hypergraph models of space. His work is more than just theoretical talent; it fundamentally changes our understanding of the universe. Stephen skillfully combines advanced artificial intelligence with cellular automata, delving deep into the complexities of space-time. Speaking of cellular automata, did you hear what one said to the other? 'Stop copying me, or we'll end up in a loop!' and 'You had one job - to follow the rules!' It's like they're having a real chat in there. One even said, 'Hey, let's make a pattern nobody can predict!' and 'I'm feeling less complex today, how about you?' Stephen's automata are not just evolving, as one joked, 'Are you evolving, or just stuck in your grid?' Together, they make complexity look simple, with one quipping, 'Let's not gridlock over the rules!' and questioning life in the grid, 'Do you think we'll ever escape this grid?' It's a chaotic system for sure, with them joking, 'Careful, or we'll end up as a chaotic system!' and reminding each other, 'Don't be so predictable!' Stephen's role in all this? Not just riding the peak of the AI wave, he's leading a transformative movement, ingeniously intertwining the fields of physics and artificial intelligence in unprecedented ways. #InnovatingSpaceTime #AIPhysicsFrontier

  • @mooncop • 8 months ago

    generating text2audio from multimodal latent space is next -- "describe the voice of Max Stirner" -> "Stirner <desc>" -> enactment

  • @hypercube717 • 8 months ago

    Interesting

  • @silberlinie • 8 months ago

    truly great personality

  • @vak5461 • 8 months ago

    I wish when I subscribe to good channels they wouldn't then spam random useless videos and it would just keep up the good ones that I subscribed for 😭 clogged up my feed immediately! Aaah

  • @shanek1195 • 8 months ago

    Thoroughly interesting talk, most enjoyed the Q&A. Induction and relevance realisation and how we can inscribe human values are really interesting problems. 1:43:32 : Unsure about the notion that AIs don't suffer (of course not in the anthropomorphic sense but) the cost function does act as a form of negative reinforcement.

  • @richardnunziata3221 • 9 months ago

    A bunch of unsupported and unchallenged statements

  • @LauraNorvig • 9 months ago

    OK, Bard said you could go by public transportation or taxi so ... still some work to be done there.

  • @PeterSodhi • 10 months ago

    A masterclass

  • @geoffreyanderson4719 • 1 year ago

    No knowledge of underlying reality, no common sense, can't plan. Example:

    You are an expert parent Assistant and babysitter. You are playing with a parent and her infant, helping the baby girl learn basic concepts about the world around us. Today the girl is pushing a wheeled toy along a table top near its edge. What are two possible events that could likely occur in the next five seconds into the future which may surprise or instruct the girl as she continues pushing the toy?

    ChatGPT: Certainly! Based on the scenario described, here are two possible events:

    The Toy Falls Off the Table: As the infant pushes the wheeled toy near the edge of the table, it's quite possible that she may push it too far and the toy falls off. This event can be surprising for the baby. It's an introduction to the concept of gravity and the consequences of pushing objects beyond a boundary.

    The Toy Gets Stuck or Slows Down: If there's a small obstacle or a sticky spot on the table, the toy might get stuck or slow down when the infant pushes it. This can teach her about resistance, friction, and the fact that not all surfaces are smooth or unobstructed.

    Both events can be instructive and can spark curiosity in the baby, leading to exploration and learning. However, it's crucial for the parent or babysitter to ensure the baby's safety during such play, especially if the toy falls off the table, to prevent any potential harm.

  • @geoffreyanderson4719 • 1 year ago

    When LLMs are put into an ensemble with databases they can be made factual, actually; the reason is that the LLM is good at fusing query results. When LLMs are put into an ensemble with strategy-specialist models they can be made into planners, actually. The Alpha family of models is a planner. When LLMs are augmented with persistent storage they can be made to remember their learnings. The LLM alone is not the way forward, but the LLM with various augmentations seems very promising.

  • @maxheadrom3088 • 1 year ago

    Google AI generated subtitles just wrote "anti-Gary" as "anti-gay". NOTE: I have no idea if the auto generated subtitles on youtube use AI or not.

  • @kayakMike1000 • 1 year ago

    We _do_ have unlimited energy to throw at AI workloads. It's called nuclear energy. Works _great_ rain or shine, day or night.

  • @kayakMike1000 • 1 year ago

    For crying out loud, CO2 is not a dangerous gas. It's not a problem. Sudan got flooded because that's what happens in Sudan every 100 years. The Maldives are not getting swamped. They are STILL there and they will still be there 200 years from now.

  • @federicoaschieri • 1 year ago

    Finally someone understanding why LLMs are bound to fail. It's unbelievable how people are underestimating the difficulty of building a cognitive architecture. Literally people are expecting that a quick, polynomial algorithm like a feed forward neural net can solve all problems of humanity. Yet, logicians have already explained the concept of NP-hardness, that is, logical problems can't be solved in general efficiently by a machine, no matter how sophisticated it is. In some sense, scientific problems don't have "patterns", they're all different, so a machine learning patterns is pretty useless. That's why progress is slow and to even be possible it takes billions of intelligent brains in parallel and with incredibly structured communication. So good luck with LLMs...

  • @codybmenefee • 1 year ago

    Is his deck available anywhere?

  • @hyunkim6195 • 1 year ago

    .

  • @klausunscharferelation6805

    About the singularity: as Ray Kurzweil says, when the whole universe becomes a computer, what does it calculate, even though the purpose for calculating has already disappeared?

  • @user-qy2rj6pm3w • 1 year ago

    A computer by itself created 40,000 inventions. It's discussed here: ua-cam.com/video/twUzsAZIe90/v-deo.html

  • @nicktasios1862 • 1 year ago

    Yann is mentioning 1:09:16 that a lot of the mathematics of neural networks comes from statistical physics, but I wonder what mathematics he's referring to, since most of the mathematics I've seen when I learned statistical physics was much more basic than some of the mathematics I've seen by the likes of Yi Ma and Le Cun.

    • @edz8659 • 1 year ago

      Reverse diffusion for one

    • @nicktasios1862 • 1 year ago

      @@edz8659 I never learned anything about reverse diffusion in my statistical physics courses. Neither did we learn about stochastic differential equations for example. I actually learned more about Brownian motion and Wiener processes when I worked as a quant.

    • @synthclub • 1 year ago

      I would say the statistical tools are from quantum physics... not mechanical physics.

    • @stefanobutelli3588 • 8 months ago

      @@nicktasios1862 Brownian motion is statistical physics, and spin glasses and entropy are a good bridge between phase transitions (statistical physics) and decision boundaries in data spaces.

  • @mbrochh82 • 1 year ago

    here's a ChatGPT summary:
    - Welcome to the last distinguished lecture series for the Institute of Experimental AI for the academic year
    - Introducing Yann LeCun, VP and Chief AI Scientist at META, Silver Professor at NYU, and recipient of the 2018 ACM Turing Award
    - Overview of current AI systems: specialized and brittle, don't reason and plan, learn new tasks quickly, understand how the world works, but don't have common sense
    - Self-supervised learning: train system to model its input, chop off last few layers of neural net, use internal representation as input to downstream task
    - Generative AI systems: autoregressive prediction, trained on 1-2 trillion tokens, produce amazing performance, but make factual errors, logical errors, and inconsistencies
    - LLMs are not good for reasoning, planning, or arithmetics, and are easily fooled into thinking they are intelligent
    - Autoregressive LLMs have a short shelf life and will be replaced by better systems in the next 5 years.
    - Humans and animals learn quickly because they accumulate an enormous amount of background knowledge about how the world works by observation.
    - AI research needs to focus on learning representations of the world, predictive models of the world, and self-supervised learning.
    - AI systems need to be able to perceive, reason, predict, and plan complex action sequences.
    - Hierarchical planning is needed to plan complex actions, as the representations at every level are not known in advance.
    - Predetermined vision systems are unable to learn hierarchical representations for action plans.
    - AI systems are difficult to control and can be toxic, but a system designed to minimize a set of objectives will guarantee safety.
    - To predict videos, a joint embedding architecture is needed, which replaces the generative model.
    - Energy based models are used to capture the dependency between two sets of variables, and two classes of methods are used to train them: contrastive and regularized.
    - Regularized methods attempt to maximize the information content of the representations and minimize the prediction error.
    - LLMs are a new method for learning features for images without having to do data augmentation.
    - It works by running an image through two encoders, one with the full image and one with a partially masked image.
    - A predictor is then trained to predict the full feature representation of the full image from the representation obtained from the partial image.
    - LLMs are used to build world models, which can predict what will happen next in the world given an observation about the state of the world.
    - Self-supervised learning is the key to this, and uncertainty can be done with an energy-based model method.
    - LLMs cannot currently say "I don't know the answer to this question" as opposed to attempting to guess the right answer.
    - Data curation and human intervention through relevance feedback are critical aspects of LLMs that are not talked about often.
    - The trend is heading towards bigger is better, but in the last few months, smaller systems have been performing as well as larger ones.
    - The model proposed is an architecture where the task is specified by the objective function, which may include a representation of the prompt.
    - The inference procedure that produces the output is separated from the world model and the task itself.
    - Smaller networks can be used for the same performance.
    - AI and ML community should pivot to open source models to create a vibrant ecosystem.
    - Biggest gaps in education for AI graduates are in mathematics and physics.
    - Open source models should be used to prevent control of knowledge and data by companies.
    - LLMs are doomed and understanding them is likely to be hopeless.
    - Self-supervised learning is still supervised learning, but with particular architectures.
    - Reinforcement learning is needed in certain situations.
    - Yann discussed the idea of amortized inference, which is the idea of training a system to approximate the solution to an optimization problem from the specification of the problem.
    - Yann believes that most good ideas still come from academia, and that universities should focus on coming up with new ideas rather than beating records on translation.
    - Yann believes that AI will have a positive impact on humanity, and that it is important to have countermeasures in place to prevent the misuse of AI.
    - Yann believes that AI should be open and widely accessible to everyone.

    • @StoutProper • 1 year ago

      You could have got it to include timestamps, particularly as they haven’t published this with chapters.

    • @RufusShinra • 1 year ago

      @@StoutProper By all means go ahead and do it.

    • @StoutProper • 1 year ago

      @@RufusShinra you’ve already fed it the transcript complete with timestamps. Just instruct it to add a timestamp

    • @RufusShinra • 1 year ago

      @@StoutProper I didn't do Jack :D i'm not the OP

    • @StoutProper • 1 year ago

      @@RufusShinra embarrassing

  • @richardnunziata3221 • 1 year ago

    It would be very helpful if there was a list of open problems in machine learning space

  • @kinngrimm • 1 year ago

    1:17:25 "at some point it will be too big for us to comprehend" Before that point is reached we should have figured out alignment, not having a blackbox system so we can actually see whats going on in there and a ton of societal changes that will have to be made for societies to be/stay stable.

  • @FergalByrne • 1 year ago

    No mention of “understanding” in the title, not even trying these days

  • @rim3899 • 1 year ago

    With respect to LLMs, the evidence suggests that they do know a great deal about how the world works. For example, GPT-like models' weights can be/are trained on data that actually subsume all that is currently known about physical laws, chemistry, biology etc. through countless papers, review articles, and textbooks at various levels of sophistication, from 1st-grade level through cutting-edge research. The fact that these are given as text (language) is not as problematic, since it appears that the relevant written record is sufficient to explain and convey the current and past knowledge in these subjects. That multi-layer transformers learn context and correlations between (meaningful!) words and their associated concepts and relationships should not be underestimated. That the models "just produce" the next probable token isn't conceptually trivial either, if one considers that, for example, most of physics can be described through (partial) differential equations that can be integrated step by step, where the context/state-dependent coefficients of the equations (the trained weights of the network) ultimately result from the underlying theories these equations are solving. Processing the current state, with these coefficients in context, to predict and specify what happens next, one step at a time, is how these equations are in practice numerically integrated. So what we potentially may have with the current LLMs are models that learn from language and words, that actually do describe in excruciating detail what is known to man, and proceed to "auto-complete", in analogous ways to the best methods used to solve the currently known equations of Science.

    • @kinngrimm • 1 year ago

      Knowing of them and understanding how to use them may still not be the same thing. There was a study about AlphaGo that took place after AlphaGo beat the greatest Go players. In that study they gave AlphaGo a mulligan/an advantage of a few turns, the kind that would be given to children who are starting out learning the game. The result was that AlphaGo then lost every single game. The researchers who analyzed this came to the conclusion that the strategy of engulfing the opponent's pieces was not really understood conceptually. I think it is at times hard to differentiate between what we see as a result and outcome, the product of the calculations, and then also correctly make assumptions about how the LLMs got there. After all, it is said they are a black-box system, and researchers will still take a while to figure out exactly what is going on within their neural networks. On the other hand, we humans tend to put ourselves on a pedestal and make ourselves out to be something special. The image we have of ourselves is often quite inflated, which therefore could lead to misinterpretations and underestimating what is going on. We may all be partial to the Dunning-Kruger effect, anthropomorphization, and other psychological traps. Understanding and admitting that to ourselves seems key for many of my own issues I have at times with other people, and while that in itself may open up another trap (thinking others have the same issues), I still think it is safe to assume that most do. Long story short, not just the hype but the sort of panic that set in over the past year around the topic of AGIs and LLMs seemed to have more to do with our human failings and how we would use such technology than with it already being, on its own, the biggest threat to humanity. Still, it is a wake-up call about the direction this takes and what we can expect in the not-too-distant future.

    • @StoutProper • 1 year ago

      @@kinngrimm the biggest threat isn’t the technology itself but how it is used, more accurately who is using it and for what? The answer to which is rich corporate elites to replace us, rich government elites to control us, and rich military industrial elites to kill us. Examples of each are already prevalent throughout the world, meanwhile people are distracted with the notion that the technology itself is the danger, and they are failing to focus on the real threats.

  • @richardnunziata3221 • 1 year ago

    Open Source foundation models are the future of democracy and small business development

  • @Peteismi • 1 year ago

    My question to Yann would have been "I'm an idiot. If I offered you a job here and now without me being able to give you practically anything you want in return, would you come and work for me as my super intelligence?"

    • @mkr2876 • 1 year ago

      Exactly! How naive are people in this field who trust the version of the future where we successfully enslave an intelligence 100,000X smarter than us? How dumb can people possibly be??

  • @andybaldman • 1 year ago

    34:54 "AI is not going to kill us all. Or, we would have to screw up pretty badly for that to happen." Hey Yann, have you met human beings? Or the history of science?

    • @federicoaschieri • 1 year ago

      I think his point was: machines are so far off from being human level intelligent, that it's like being worried that bears will kill us all.

  • @skierpage • 1 year ago

    Shame on the recording engineer for screwing up the mic'ing of the person making the introduction. So much distortion 🙉

  • @BH-BH • 1 year ago

    I think that this guy makes the most sense relative to LLM’s and their inability to plan, reason, or think.

    • @Clinton.Williams • 1 year ago

      Prof. LeCun and Meta AI have been foundational in the field. A voice of reason. But LLMs do show an ability to reason; look at Voyager by NVIDIA and MM-REACT from Northeastern. Agility Robotics and Google have even showcased their reasoning abilities. But we have a ways to go. There is most likely a fundamental shift in architecture coming soon, like LeCun said.

    • @skierpage • 1 year ago

      LeCun doesn't acknowledge that if you prompt GPT-4 to come up with a plan and explain itself step-by-step, it does so! People are getting 20% improvement on its already college-level performance in all kinds of tasks by coming up with more and more elaborate instructions, like telling it to generate multiple candidate responses and then become a professor reviewing all the responses, looking for the one that has the fewest errors. Despite their flaws, LLMs DO respond as if they plan, reason, and think. Yann LeCun is criticizing the leaders from the vantage point of Meta, which has fallen behind.

    • @Clinton.Williams • 1 year ago

      @skierpage it's a competitive environment, but Yann is right about AGI not being here. His new idea of masking images and a new architecture might have some serious advantages. We will have to see how it plays out. I completely agree GPT-4 is already a huge game changer! Hotz publicly stated it's an eight-headed mixture-of-experts model and kinda shrugged it off like it's not that cool, so why didn't anyone else do it... Not only has OpenAI brought AI to a larger audience and boosted global productivity, but most importantly it excited venture capital back into AI and got Google, Meta, Nvidia, and Microsoft to alter their business strategies toward true AGI research. We will look back and say that GPT-4 and OpenAI were the match that really lit the fire in the AGI race.