AI and war: Governments must widen safety dialogue to include military use | GZERO AI

  • Published May 5, 2024
  • On a regular basis, governments around the world are intensifying discussions and efforts on making artificial intelligence safe to use. GZERO AI host Marietje Schaake says there's an urgent need to widen the discussion and include its use in military operations, an area where lives are at stake.
    Subscribe to GZERO on YouTube and turn on notifications (🔔): / @gzeromedia
    Sign up for GZERO Daily (free newsletter on global politics): rebrand.ly/gzeronewsletter
    Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, says governments must prioritize establishing guardrails for the deployment of artificial intelligence in military operations. There are already ongoing efforts to ensure that AI is safe to use, but Schaake insists there's an urgent need to widen the discussion to include its use in warfare, an area where lives are at stake. GZERO AI is our weekly video series intended to help you keep up with and make sense of the latest news on the AI revolution.
    There's not a week without an announcement of a new AI office, AI safety institute, or AI advisory body initiated by a government, usually the democratic governments of this world. They're all wrestling with how to regulate AI, and they seem to settle, without much variation, on a focus on safety.
    Last week we saw the Department of Homeland Security in the US join this line of efforts with its own advisory body, made up largely of industry representatives, with some from academia and civil society, to look at the safety of AI in its own context. And what's remarkable amidst all this focus on safety is how little emphasis, or even attention, there is on restricting or putting guardrails around the use of AI in the context of militaries.
    And that is remarkable, because we can already see the harms of overreliance on AI, even as industry is really pushing this as its latest opportunity. Just look at the venture capital poured into defense tech, or "DefTech" as it's popularly called. And so I think we should push for a widening of the lens when we talk about AI safety to include binding rules on military uses of AI. The harms are real. It's about life-and-death situations. Just imagine somebody being misidentified as a legitimate target for a drone strike, or the kinds of uses we see in Ukraine, where facial recognition tools and other data-crunching AI applications are used on the battlefield without many rules around them, because the fog of war also makes it possible for companies to jump into the void.
    So it is important that AI safety at least includes a focus and discussion on what is proper use of AI in the context of war, combat, and conflict, of which we see too much in today's world. There should be rules in place, initiated by democratic countries, to make sure that the rules-based order, international law, and human rights and humanitarian law are upheld even in the context of the latest technologies like AI.
    Want to know more about global news and why it matters? Follow us on:
    Instagram: / gzeromedia
    Twitter: / gzeromedia
    TikTok: / gzeromedia
    Facebook: / gzeromedia
    LinkedIn: / gzeromedia
    Threads: threads.net/@gzeromedia
    Subscribe to the GZERO podcast: podcasts.apple.com/us/podcast...
    GZERO Media is a multimedia publisher providing news, insights and commentary on the events shaping our world. Our properties include GZERO World with Ian Bremmer, our newsletter GZERO Daily, Puppet Regime, the GZERO World Podcast, In 60 Seconds and GZEROMedia.com
    #GZEROAI #DefTech #AI

COMMENTS • 4

  • @oldsteamguy 29 days ago +1

    Serious stuff.

  • @bennpierce2990 1 month ago +2

    The industry leaders' concerns are only for profits. Safety will get nothing but lip service, no matter what laws are passed.
    But sure, go ahead and hold another summit on the topic. Meanwhile, the tech bros will be busy disrupting the paradigm. IMHO

  • @Peace2051 29 days ago +1

    The AI avalanche has its own momentum and will run its own course. There are too many people and institutions that are stakeholders in investments that are doubling every 10 months or so. This does not bode well for thoughtful constraints.

  • @WalterBurton 1 month ago

    👍👍👍