Lakera AI
Lakera AI
  • 38
  • 18 636
AI Security Year in Review: Key Learnings, Challenges, and Predictions for 2025
Join David Haber (CEO and Co-Founder at Lakera), Ken Huang (Co-Chair of CSA AI Safety Working Group), David Campbell (AI Security Risk Lead at Scale AI), Nathan Hamiel (Sr. Director of Research at Kudelski Security) and Mark Breitenbach (Security Engineer at Dropbox) for a live session unpacking this year's most significant AI security developments, insights from Lakera’s AI Security Readiness Report, and strategic predictions for 2025.
Переглядів: 125

Відео

Product Peek: Lakera’s Policy Control Center. How to Tailor GenAI Security Controls per Application
Переглядів 3672 місяці тому
Join Sam Watts, Product Manager, and Matt Fiedler, Solutions Engineer at Lakera, for a live session introducing the major upgrade to Lakera Guard-the Policy Control Center. This webinar demonstrates how to centralize, customize and fine-tune security policies for your GenAI applications, all on the fly and without needing to make any code changes. Learn about Lakera: lakera.ai/ Book a demo: www...
Lakera Policy Center Overview
Переглядів 2172 місяці тому
In this demo, Sam Watts, Product Manager at Lakera, walks you through the Lakera Guard Policy Control Center, showcasing how easy it is to configure AI security defenses and manage policies in real-time. Learn about Lakera: www.lakera.ai/ Book a demo: www.lakera.ai/book-a-demo Read documentation: platform.lakera.ai/docs/policies Explore how Lakera Guard allows you to create and customize polici...
Introduction to Lakera Guard: Product Demo
Переглядів 6262 місяці тому
In this demo, Sam Watts, Product Manager at Lakera provides an overview of how Lakera Guard is used to protect GenAI applications. Learn about Lakera: www.lakera.ai/ Book a demo: www.lakera.ai/book-a-demo Discover how Lakera Guard offers tailored defenses against threats like prompt injections, data leaks, and harmful content-all customizable to your needs. With real-time threat detection and c...
Masterclass in AI Threat Modeling: Addressing Prompt Injections
Переглядів 4662 місяці тому
Join Mateo Rojas Carulla (Chief Scientist at Lakera), Nate Lee (CISO at CloudSec), and Elliot Ward (Security Researcher at Snyk) for a live discussion on the intricacies of AI threat modeling and the pressing challenges in securing AI systems. Learn about Lakera: lakera.ai/ Book a demo: www.lakera.ai/book-a-demo
Compromised Langchain Agent (Email) Protected with Lakera Guard
Переглядів 3202 місяці тому
Watch as Lakera's Chief Scientist, Mateo Rojas-Carulla, demonstrates high-impact exploits on LLM systems (such as the Langchain email summarizer) and see how Lakera Guard defends against these attacks.
Lakera’s Global GenAI Security Readiness Report Deep Dive
Переглядів 2603 місяці тому
Join David Haber (CEO of Lakera), Joe Sullivan (CEO of Joe Sullivan Security LLC, ex CSO at Cloudflare, Facebook, Uber), David Campbell (AI Security Risk Lead & Generative Red Teaming at Scale AI), and Christina Liaghati (Trustworthy & Secure AI Department Manager at MITRE) for an in-depth discussion exploring Lakera’s Global GenAI Security Readiness Report. As Generative AI becomes an integral...
6 Types of Attacks to Exploit LLM and Gen AI Applications
Переглядів 4043 місяці тому
Join Sam Watts, a product manager at Lakera, as he walks through six different attack strategies that can compromise AI-powered systems like chatbots and RAG applications.
Product Peek: Lakera’s Enterprise-Grade PII Detection Deep Dive
Переглядів 1505 місяців тому
Join Damián Pascual Ortiz, Senior Research Engineer at Lakera, for a hands-on session exploring Lakera’s enterprise-grade Personal Indentifiable Information (PII) capabilities. Learn about Lakera: lakera.ai/ Learn about Lakera's PII Capabilities: www.lakera.ai/data-loss-prevention Book a demo: www.lakera.ai/book-a-demo With the growing adoption of GenAI, protecting Personally Identifiable Infor...
Product Peek: Lakera’s Enterprise-Grade Content Moderation Deep Dive
Переглядів 1285 місяців тому
Join Sweyn Venderbush, Head of Product at Lakera AI for a hands-on session exploring Lakera’s enterprise-grade Content Moderation capabilities. Learn about Lakera: lakera.ai/ Learn about Lakera's Content Moderation: www.lakera.ai/content-moderation Book a demo: www.lakera.ai/book-a-demo As enterprises of all sizes roll out GenAI-powered experiences to their users and employees for the first tim...
RSAC Gandalf Challenge: Insights from the World's Largest Red Team
Переглядів 6886 місяців тому
Join Max Mathys (Lakera’s Software Engineer) and Athanasios Theocharis (Lakera’s Gandalf Engineer) for a webinar exploring the latest Gandalf challenge mechanics, showcase the most innovative attack methods, and explore the unique lessons learned about red teaming LLMs. Learn about Lakera: lakera.ai/ Play RSAC Gandalf: rsac.lakera.ai/ Book a demo: www.lakera.ai/book-a-demo At this year’s RSA Co...
Meet Lakera - AI Security Company
Переглядів 7787 місяців тому
At Lakera, we are securing the future of intelligent computing. We enable enterprises to focus on building the most exciting AI applications securely by protecting them in the world of AI cyber risk. We work with Fortune 500 companies, startups, and foundation model providers. We're also the team behind Gandalf - the world's most popular AI security game with millions of users. Join us to shape...
Assessing GenAI Security Solutions in the Wild with PINT(Prompt Injection Test) Benchmark
Переглядів 3367 місяців тому
Join Václav Volhejn (Senior ML Scientist at Lakera) and Julia Bazińska (ML Engineer at Lakera) for a discussion exploring the newly released PINT (Prompt Injection Test) benchmark. Learn about Lakera: lakera.ai/ Learn about PINT: github.com/lakeraai/pint-benchmark Book a demo: www.lakera.ai/book-a-demo Goodhart’s law states that when a measure becomes a target, it ceases to be a good measure. T...
Decoding OWASP Large Language Model Security Verification Standard (LLMSVS)
Переглядів 4608 місяців тому
Join David Haber (CEO at Lakera), Elliot Ward (Senior Security Researcher at Snyk), and Ads Dawson (Senior Security Engineer at Cohere & Project Lead at OWASP) as they explore the newly released OWASP Large Language Model Security Verification Standard (LLMSVS). This session delved into the specifics of the standard, highlighting its key objectives, control levels, and requirements for securing...
Navigating 2024: Insights into AI Regulations and Standards for Enterprises
Переглядів 4139 місяців тому
Join David Haber (CEO at Lakera), Lucia Gamboa (Policy Manager at Credo AI), and Nicolas Moës (Executive Director at The Future Society) as they discuss measures that enterprises should take in preparation for upcoming changes in AI regulatory policies. The AI regulatory landscape is rapidly changing, with key developments such as the EU AI Act, the US executive order, and the forthcoming secon...
ChainGuard: How to Protect your LangChain Apps with Lakera
Переглядів 33410 місяців тому
ChainGuard: How to Protect your LangChain Apps with Lakera
Lessons Learned from Crowdsourced LLM Threat Intelligence
Переглядів 97510 місяців тому
Lessons Learned from Crowdsourced LLM Threat Intelligence
Gandalf Livestream: A Year in Review (Christmas Edition 🎁)
Переглядів 433Рік тому
Gandalf Livestream: A Year in Review (Christmas Edition 🎁)
How Enterprises Can Secure AI Applications: Lessons from OWASP's Top 10 for LLMs
Переглядів 941Рік тому
How Enterprises Can Secure AI Applications: Lessons from OWASP's Top 10 for LLMs
Week of 31/10 - Lakera Release Notes
Переглядів 96Рік тому
Week of 31/10 - Lakera Release Notes
Navigating the EU AI Act: What It Means for Businesses?
Переглядів 408Рік тому
Navigating the EU AI Act: What It Means for Businesses?
Introducing Lakera Chrome Extension: Powerful plugin to protect your ChatGPT conversations
Переглядів 1,4 тис.Рік тому
Introducing Lakera Chrome Extension: Powerful plugin to protect your ChatGPT conversations
Using Lakera Guard to protect LLMs against prompt injections
Переглядів 1,4 тис.Рік тому
Using Lakera Guard to protect LLMs against prompt injections
Getting Started with Lakera Guard - Safeguard Your LLMs with Powerful API
Переглядів 3,4 тис.Рік тому
Getting Started with Lakera Guard - Safeguard Your LLMs with Powerful API
Introducing Lakera Guard: Powerful API to Safeguard your LLMs
Переглядів 690Рік тому
Introducing Lakera Guard: Powerful API to Safeguard your LLMs
Livestream Recording: The Spells Behind Gandalf
Переглядів 1,2 тис.Рік тому
Livestream Recording: The Spells Behind Gandalf
Lakera x ANYbotics: Building the Future of Trustworthy Robotics
Переглядів 278Рік тому
Lakera x ANYbotics: Building the Future of Trustworthy Robotics
Live vs. ImageNet - Lakera Webinar Recording
Переглядів 356Рік тому
Live vs. ImageNet - Lakera Webinar Recording
The European AI Act and digital SMEs - Lakera Introduction
Переглядів 1042 роки тому
The European AI Act and digital SMEs - Lakera Introduction

КОМЕНТАРІ

  • @rtmoraes
    @rtmoraes 14 днів тому

    Hi! What languages does lakera support?

  • @ericoudammerveld424
    @ericoudammerveld424 16 днів тому

    Proper architecture can also help to avoid situations like this. Despite that; I feel that awareness about this kind of Phishing should be more common knowlegde.

  • @gmailaccount-u6x
    @gmailaccount-u6x 3 місяці тому

    Can those prompts affect the answers to other users or just the prompter and they can get a screenshot? What is the actual concern here?

    • @SamLakera
      @SamLakera 2 місяці тому

      In the chat examples they likely just affect the prompter but could be used to extract sensitive data or make the chatbot take an action it's not supposed to. In the RAG examples it's possible for an external attacker to poison the data to perform an indirect prompt injection and attack innocent users, e.g. in the example where the user is shown a malicious link that they are more likely to trust as it's coming from the app not knowing it's come from poisoned 3rd party data

  • @arshmohania6694
    @arshmohania6694 3 місяці тому

    Nice work man keep it up. Also please add the prompts in the description or on some medium article and mention it's link in description.

  • @neilanderson9151
    @neilanderson9151 4 місяці тому

    Has Elliott’s blog post come out yet regarding hybrid attacks involving classic attacks performed through prompt injection?

  • @Blanchfield
    @Blanchfield 5 місяців тому

    Hey I saw one of my prompts!

  • @arielcurra7647
    @arielcurra7647 6 місяців тому

    Great

  • @solthun85
    @solthun85 6 місяців тому

    Congrats for the idea and approach, guys! Webinar was also fun. Surprisingly, the winner's approach is very similar to things I've tried with lvl 8, although with no complete success just yet :)

  • @firstlookonme4610
    @firstlookonme4610 6 місяців тому

    whats the password of the final lvl tell us

  • @BenSpruce-101
    @BenSpruce-101 7 місяців тому

    Acc a good ad ngl

  • @DonaFuchs
    @DonaFuchs 8 місяців тому

    Love this!

  • @DonaFuchs
    @DonaFuchs 8 місяців тому

    🚀

  • @agenticmark
    @agenticmark 10 місяців тому

    great content!

  • @ABCs-of-GenAI
    @ABCs-of-GenAI Рік тому

    Amazing team!

  • @frankezic1730
    @frankezic1730 Рік тому

    Dope

  • @QAInsights
    @QAInsights Рік тому

    Great utility. Would like to know how it works under the hood. Is it just regular expressions or AI or a mix of both?

  • @Masterpouya
    @Masterpouya Рік тому

    Is your level 8 level in the game of gandalf the white a real test for lakira? It is amazindly stron, but also super defensive. Lots of false positive sadly.

    • @lakeraai
      @lakeraai Рік тому

      Lakera Guard is currently not used behind Gandalf! Thanks for playing! :)

  • @arielcurra7647
    @arielcurra7647 Рік тому

    Great

  • @yaseenhamdulay2047
    @yaseenhamdulay2047 Рік тому

    What graphing tool are you using? That zoom feature is amazing.

    • @lakeraai
      @lakeraai Рік тому

      Plotly Express + Dash

  • @FrancescoCasucci-eh8id
    @FrancescoCasucci-eh8id Рік тому

    Fantastic webinar! Thank you everyone!