Practical LLM Security: Takeaways From a Year in the Trenches

  • Published Oct 8, 2024
  • As LLMs are integrated into more and more applications, security standards for these integrations have lagged behind. Most security research either 1) focuses on social harms, biases exhibited by LLMs, and other content-moderation concerns, or 2) zooms in on the LLM itself while ignoring the application built around it. Investigating traditional security properties such as confidentiality, integrity, and availability for the entire integrated application has received less attention, yet in practice we find that this is where the majority of non-transferable risk in LLM applications lies.
    NVIDIA has implemented dozens of LLM-powered applications, and the NVIDIA AI Red Team has helped secure all of them. We will present our practical findings on LLM security: which kinds of attacks are most common and most impactful, how to assess LLM integrations most effectively from a security perspective, and how we both think about mitigation and design integrations to be more secure from first principles.
    By:
    Richard Harang | Principal Security Architect (AI/ML), NVIDIA
    Full Abstract & Presentation Materials:
    www.blackhat.c...
