The Real World Cost of AI

  • Published 27 Sep 2024
  • It seems like the loudest voices in AI often fall into one of two groups. There are the boomers - the techno-optimists - who think that AI is going to bring us into an era of untold prosperity. And then there are the doomers, who think there’s a good chance AI is going to lead to the end of humanity as we know it.
    While these two camps are, in many ways, completely at odds with one another, they do share one thing in common: they both buy into the hype of artificial intelligence.
    But when you dig deeper into these systems, it becomes apparent that both of these visions - the utopian one and the doomy one - are based on some pretty tenuous assumptions.
    Kate Crawford has been trying to understand how AI systems are built for more than a decade. She’s the co-founder of the AI Now Institute, a leading AI researcher at Microsoft, and the author of Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence.
    Crawford was studying AI long before this most recent hype cycle. So I wanted to have her on the show to explain how AI really works. Because even though it can seem like magic, AI actually requires huge amounts of data, cheap labour and energy in order to function. So even if AI doesn’t lead to utopia, or take over the world, it is transforming the planet - by depleting its natural resources, exploiting workers, and sucking up our personal data. And that’s something we need to be paying attention to.
    Mentioned:
    “ELIZA-A Computer Program for the Study of Natural Language Communication Between Man and Machine” (web.stanford.e...) by Joseph Weizenbaum
    “Microsoft, OpenAI plan $100 billion data-center project, media report says” (www.reuters.co...), Reuters
    “Meta ‘discussed buying publisher Simon & Schuster to train AI’” (www.theguardia...) by Ella Creamer
    “Google pauses Gemini AI image generation of people after racial ‘inaccuracies’” (globalnews.ca/...) by Kelvin Chan and Matt O’Brien
    “OpenAI and Apple announce partnership” (openai.com/ind...), OpenAI
    Fairwork (fair.work/en/f...)
    “New Oxford Report Sheds Light on Labour Malpractices in the Remote Work and AI Booms” (www.oii.ox.ac....) by Fairwork
    “The Work of Copyright Law in the Age of Generative AI” (direct.mit.edu...) by Kate Crawford and Jason Schultz
    “Generative AI’s environmental costs are soaring - and mostly secret” (www.nature.com...) by Kate Crawford
    “Artificial intelligence guzzles billions of liters of water” (english.elpais...) by Manuel G. Pascual
    “S.3732 - Artificial Intelligence Environmental Impacts Act of 2024” (www.congress.g...)
    “Assessment of lithium criticality in the global energy transition and addressing policy gaps in transportation” (pubmed.ncbi.nl...) by Peter Greim, A. A. Solomon, and Christian Breyer
    “Calculating Empires” (knowingmachine...) by Kate Crawford and Vladan Joler
    Further Reading:
    “Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence” (yalebooks.yale...) by Kate Crawford
    “Excavating AI” (excavating.ai/) by Kate Crawford and Trevor Paglen
    “Understanding the work of dataset creators” (knowingmachine...) from Knowing Machines
    “Should We Treat Data as Labor? Moving beyond ‘Free’” (www.aeaweb.org...) by I. Arrieta-Ibarra et al.

COMMENTS • 25

  • @ViktorGrandgeorg (3 months ago)

    Good talk. Apparently OpenAI, for example, has decided not only to make deals with Microsoft and Apple, but also to pull the curtain down further on public transparency by appointing Paul Nakasone to its board. I’m not sure I want to live in a world where a gray-haired former NSA director decides what goes in and what comes out, with all the consequences that has for the public.

  • @marshallmcluhan33 (3 months ago, +4)

    Anyone else running a model locally?

    • @terjeoseberg990 (3 months ago, +1)

      Yes.

    • @marshallmcluhan33 (3 months ago)

      @terjeoseberg990 May I ask which model?

    • @terjeoseberg990 (3 months ago, +1)

      @marshallmcluhan33, I’m trying to train my own diffusion model for image generation.

    • @marshallmcluhan33 (3 months ago)

      @terjeoseberg990 Cool, are you using 1.5 as the base?

    • @terjeoseberg990 (3 months ago)

      @marshallmcluhan33, This…
      Coding Stable Diffusion from scratch in PyTorch.
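
      For anyone wondering what “training your own diffusion model” involves in practice, here is a minimal sketch of a single DDPM-style denoising training step in PyTorch. Everything in it is an illustrative placeholder (the tiny convolutional denoiser, the linear noise schedule, the 3x32x32 image size, the hyperparameters); it is not the commenter’s actual setup or the code from the referenced video.

      # Minimal, illustrative DDPM-style training step in PyTorch.
      # Model, schedule, and data shapes below are placeholder assumptions.
      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      T = 1000                                        # number of diffusion timesteps
      betas = torch.linspace(1e-4, 0.02, T)           # linear noise schedule (assumed)
      alphas_cumprod = torch.cumprod(1.0 - betas, 0)  # cumulative product of (1 - beta_t)

      class TinyDenoiser(nn.Module):
          """Stand-in for a UNet: predicts the noise added to a 3x32x32 image."""
          def __init__(self):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(64, 3, 3, padding=1),
              )

          def forward(self, x, t):
              # A real model would condition on t (e.g. sinusoidal embeddings); omitted here.
              return self.net(x)

      model = TinyDenoiser()
      opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

      def train_step(x0):
          """One training step. x0: clean images in [-1, 1], shape (B, 3, 32, 32)."""
          B = x0.shape[0]
          t = torch.randint(0, T, (B,))                   # random timestep per sample
          noise = torch.randn_like(x0)
          a = alphas_cumprod[t].view(B, 1, 1, 1)
          x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise    # forward (noising) process
          pred = model(x_t, t)
          loss = F.mse_loss(pred, noise)                  # train the model to predict the added noise
          opt.zero_grad()
          loss.backward()
          opt.step()
          return loss.item()

      # Example: one step on random tensors standing in for a real image batch.
      print(train_step(torch.randn(8, 3, 32, 32)))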

  • @AbdurRahman-12 (3 months ago)

    You know, Temur, your videos are really good, but they get very few views because of the lack of proper SEO, tags, and ranking keywords. If you want, we can discuss it in detail. Thanks

  • @Darhan62 (3 months ago)

    She raised good points about energy costs, the exploitation of hidden labor, etc., but she lost me when she mentioned Scarlett Johansson, since the GPT-4o Sky voice didn’t even sound like Johansson’s, and OpenAI says it is the voice of a different actor (much less well known, obviously) who, last I heard, was still anonymous. Elon Musk can be an anti-woke jerk at times, but if you get the facts wrong on something like Johansson’s spurious complaint, you sound like you’ve been brainwashed by woke creatives, which is really no better than him. Balance requires respect for facts, regardless of which narrative you align with.
    As for solving the issues around energy, water, etc., AI systems will empower our scientists and engineers to do just that, in one way or another. This has been the story of invention and innovation since we began our technological kick a few hundred years ago. Technology solves old problems and creates new ones. When the new problems get bad enough, the next new technology comes along and solves them, then creates new problems, and so on... Nick Bostrom talks about "technological maturity," as if human civilization could reach some steady state where it has maxed out on all the technologies the laws of physics allow. I doubt such a thing will ever happen, but if it does someday, it's still far off in the future. For now we have to expand human intelligence by externalizing mental functions to machines that can perform much faster than human minds can. That will help us perfect cooling systems, make GPUs smaller and more energy efficient, and develop commercially viable fusion and space propulsion systems that will let us mine asteroids for lithium (or whatever the new lithium is in the next technological cycle).
    Why is it up to the creators of AI technology to decide what gets built? Because they’re the ones building it. Duh... I mean, you’re not Ilya Sutskever... stay in your lane. But if you do want to build something else, then build it! Go to school, get the necessary degrees, get hired at a tech company or start your own, and do the work.
    Or get involved in the regulatory process, or in giving constructive feedback to the tech companies. Everyone, or at least everyone's descendants, will live in the world shaped by the technologies emerging now. We all have a role in determining how this goes, but the most direct role is in actually building and implementing the technology. Don't tell the tech companies "You can't build *that*!" Don't try to kill the goose that laid the golden egg.
    You can help guide the process even if you're not an engineer at a tech company. That's what OpenAI's iterative deployment strategy is about. And I'm not saying iterative deployment is the best solution, but it's one strategy among many that might help us collectively guide things to a place that works for everyone.
    Just beware of Luddism, of techno-skepticism. The Luddites have always been wrong about the limits of what's possible, or at least have been selfishly unconcerned with creating a better world for future generations. To achieve our true potential we need to make what seems impossible into a reality. Today we can communicate instantaneously between continents, travel around the world in a matter of hours, perform complex calculations on a small device that fits in the palm of your hand, etc... This is because we've never said "That's impossible, therefore we won't try." We can't go back to thinking like medieval peasants, focused only on the tasks and limitations they knew. The times we are living in are an *intermediate* stage in the development of technology, knowledge, civilization, and culture. We need to realize that much more is possible, and we need to reach for all that might be within our grasp. That is our moral imperative if we care about solving the problems of poverty, war, environmental degradation, etc... Thank goodness for capitalist race conditions, as they are probably the only driving force powerful enough to get humankind over the hump of its self-doubt and level up civilization.

  • @homayounshirazi9550 (3 months ago)

    When the US developed its first atomic bomb and exploded it in the desert of the Southwest, then chose Hiroshima and Nagasaki, people reacted to its destructive power with fear and trepidation. The military came up with the feel-good slogan of "duck and cover" in schools as an assurance for the masses. Most of us who are old enough to remember know that governments are inclined to appease us because we "can't handle the truth." I am afraid that, with AI in the hands of governments, we will in all likelihood soon hear another version of "duck and cover" whenever public assurance is needed. The likes of Trump are commonplace in this world, and even you can't deny it.

  • @Westernaut (3 months ago)

    Interesting.