Artificial Creativity or Ethical Concern? Understanding How Generative AI Works

  • Published Sep 27, 2024
  • AI-generated art is becoming a massive tool for creatives and innovators alike. But ethically speaking, the way data is sourced by AI models is a big concern.
    CK and Michelle discuss the different ways generative AI models source their data, and how some models get around barriers put in place to protect artists who haven't consented to their art being used for training.
    ----
    #Technology #BusinessStrategy #Workshops #UXDesign
    Innovation doesn’t happen on its own. We can help you discover, understand, and execute on your business’ greatest opportunities. Learn more 👉 www.crema.us/
    Find us almost anywhere:
    🤝 LinkedIn → bit.ly/3LebFbb
    🤳 Instagram → bit.ly/3LjWY6B
    🐦 Twitter → bit.ly/44cZfc3
    🎧 Podcast → bit.ly/3HpKbhz
    🚀 Crema is a design & technology consultancy that partners with global brands like Adidas, H&R Block, Miro, and Callaway. Headquartered in Kansas City.

COMMENTS • 4

  • @pokepress • 1 year ago • +1

    I’m not a great fan of how the training data was compiled for these models, but I’ve generally found the proposed solutions (opt-in, royalty systems, etc.) to be even less appealing from a practicality and free speech perspective (I expect AI art to become a major means of expression for a significant portion of the population going forward). Personally, I’ve been using Stable Diffusion to generate quiz material and frame discussions around possible future Pokemon movies in the style of Detective Pikachu.

  • @mariasterminal • 1 year ago • +2

    Great talk on a nuanced topic :)

    • @Cremalab • 1 year ago

      Thank you for watching!

  • @pokepress • 1 year ago

    Another important aspect of the likely fair use defense art generators will employ is that the models don't really store the image data itself, but rather an aggregation of certain qualities of the object, style, etc. This is why (outside of some overfitting scenarios) you can't replicate the training images from the model: what the model stores is more of an analysis of the training data than the works themselves. This is a criticism of one of the lawsuits, which refers to the model as compression; the process is actually more analogous to abstraction.