Indirect Prompt Injections and Threat Modeling of LLM Applications | The MLSecOps Podcast

Поділитися
Вставка
  • Опубліковано 4 жов 2023
  • The MLSecOps Podcast | Season 1 Episode 10
    With Guest Kai Greshake
    This episode makes it increasingly clear. The time for machine learning security operations - MLSecOps- is now. In this episode we dive deep into the world of large language models (LLM) attacks and security. Our conversation with esteemed cyber security engineer and researcher, Kai Greshake, centers around the concept of indirect prompt injections, a novel adversarial attack and vulnerability in LLM-integrated applications, which Kai has explored extensively.
    Thanks for listening! Find more episodes and read the transcript at: bit.ly/MLSecOpsPodcast.
    Additional MLSecOps and AI Security tools and resources to check out:
    Protect AI Radar (bit.ly/ProtectAIRadar)
    ModelScan (bit.ly/ModelScan)
    Protect AI’s ML Security-Focused Open Source Tools (bit.ly/ProtectAIGitHub)
    Huntr - The World's First AI/Machine Learning Bug Bounty Platform (bit.ly/aimlhuntr)
  • Наука та технологія

КОМЕНТАРІ •