- 38
- 18 636
Lakera AI
Приєднався 9 тра 2022
The channel for all things related to AI safety and security.
The content is created by Lakera 👉
At Lakera, we are securing the future of intelligent computing. We enable enterprises to focus on building the most exciting AI applications securely by protecting them in the world of AI cyber risk.
We work with Fortune 500 companies, startups, and foundation model providers. We're also the team behind Gandalf - the world's most popular AI security game with millions of users.
Join us to shape the future of intelligent computing: www.lakera.ai/careers
The content is created by Lakera 👉
At Lakera, we are securing the future of intelligent computing. We enable enterprises to focus on building the most exciting AI applications securely by protecting them in the world of AI cyber risk.
We work with Fortune 500 companies, startups, and foundation model providers. We're also the team behind Gandalf - the world's most popular AI security game with millions of users.
Join us to shape the future of intelligent computing: www.lakera.ai/careers
AI Security Year in Review: Key Learnings, Challenges, and Predictions for 2025
Join David Haber (CEO and Co-Founder at Lakera), Ken Huang (Co-Chair of CSA AI Safety Working Group), David Campbell (AI Security Risk Lead at Scale AI), Nathan Hamiel (Sr. Director of Research at Kudelski Security) and Mark Breitenbach (Security Engineer at Dropbox) for a live session unpacking this year's most significant AI security developments, insights from Lakera’s AI Security Readiness Report, and strategic predictions for 2025.
Переглядів: 125
Відео
Product Peek: Lakera’s Policy Control Center. How to Tailor GenAI Security Controls per Application
Переглядів 3672 місяці тому
Join Sam Watts, Product Manager, and Matt Fiedler, Solutions Engineer at Lakera, for a live session introducing the major upgrade to Lakera Guard-the Policy Control Center. This webinar demonstrates how to centralize, customize and fine-tune security policies for your GenAI applications, all on the fly and without needing to make any code changes. Learn about Lakera: lakera.ai/ Book a demo: www...
Lakera Policy Center Overview
Переглядів 2172 місяці тому
In this demo, Sam Watts, Product Manager at Lakera, walks you through the Lakera Guard Policy Control Center, showcasing how easy it is to configure AI security defenses and manage policies in real-time. Learn about Lakera: www.lakera.ai/ Book a demo: www.lakera.ai/book-a-demo Read documentation: platform.lakera.ai/docs/policies Explore how Lakera Guard allows you to create and customize polici...
Introduction to Lakera Guard: Product Demo
Переглядів 6262 місяці тому
In this demo, Sam Watts, Product Manager at Lakera provides an overview of how Lakera Guard is used to protect GenAI applications. Learn about Lakera: www.lakera.ai/ Book a demo: www.lakera.ai/book-a-demo Discover how Lakera Guard offers tailored defenses against threats like prompt injections, data leaks, and harmful content-all customizable to your needs. With real-time threat detection and c...
Masterclass in AI Threat Modeling: Addressing Prompt Injections
Переглядів 4662 місяці тому
Join Mateo Rojas Carulla (Chief Scientist at Lakera), Nate Lee (CISO at CloudSec), and Elliot Ward (Security Researcher at Snyk) for a live discussion on the intricacies of AI threat modeling and the pressing challenges in securing AI systems. Learn about Lakera: lakera.ai/ Book a demo: www.lakera.ai/book-a-demo
Compromised Langchain Agent (Email) Protected with Lakera Guard
Переглядів 3202 місяці тому
Watch as Lakera's Chief Scientist, Mateo Rojas-Carulla, demonstrates high-impact exploits on LLM systems (such as the Langchain email summarizer) and see how Lakera Guard defends against these attacks.
Lakera’s Global GenAI Security Readiness Report Deep Dive
Переглядів 2603 місяці тому
Join David Haber (CEO of Lakera), Joe Sullivan (CEO of Joe Sullivan Security LLC, ex CSO at Cloudflare, Facebook, Uber), David Campbell (AI Security Risk Lead & Generative Red Teaming at Scale AI), and Christina Liaghati (Trustworthy & Secure AI Department Manager at MITRE) for an in-depth discussion exploring Lakera’s Global GenAI Security Readiness Report. As Generative AI becomes an integral...
6 Types of Attacks to Exploit LLM and Gen AI Applications
Переглядів 4043 місяці тому
Join Sam Watts, a product manager at Lakera, as he walks through six different attack strategies that can compromise AI-powered systems like chatbots and RAG applications.
Product Peek: Lakera’s Enterprise-Grade PII Detection Deep Dive
Переглядів 1505 місяців тому
Join Damián Pascual Ortiz, Senior Research Engineer at Lakera, for a hands-on session exploring Lakera’s enterprise-grade Personal Indentifiable Information (PII) capabilities. Learn about Lakera: lakera.ai/ Learn about Lakera's PII Capabilities: www.lakera.ai/data-loss-prevention Book a demo: www.lakera.ai/book-a-demo With the growing adoption of GenAI, protecting Personally Identifiable Infor...
Product Peek: Lakera’s Enterprise-Grade Content Moderation Deep Dive
Переглядів 1285 місяців тому
Join Sweyn Venderbush, Head of Product at Lakera AI for a hands-on session exploring Lakera’s enterprise-grade Content Moderation capabilities. Learn about Lakera: lakera.ai/ Learn about Lakera's Content Moderation: www.lakera.ai/content-moderation Book a demo: www.lakera.ai/book-a-demo As enterprises of all sizes roll out GenAI-powered experiences to their users and employees for the first tim...
RSAC Gandalf Challenge: Insights from the World's Largest Red Team
Переглядів 6886 місяців тому
Join Max Mathys (Lakera’s Software Engineer) and Athanasios Theocharis (Lakera’s Gandalf Engineer) for a webinar exploring the latest Gandalf challenge mechanics, showcase the most innovative attack methods, and explore the unique lessons learned about red teaming LLMs. Learn about Lakera: lakera.ai/ Play RSAC Gandalf: rsac.lakera.ai/ Book a demo: www.lakera.ai/book-a-demo At this year’s RSA Co...
Meet Lakera - AI Security Company
Переглядів 7787 місяців тому
At Lakera, we are securing the future of intelligent computing. We enable enterprises to focus on building the most exciting AI applications securely by protecting them in the world of AI cyber risk. We work with Fortune 500 companies, startups, and foundation model providers. We're also the team behind Gandalf - the world's most popular AI security game with millions of users. Join us to shape...
Assessing GenAI Security Solutions in the Wild with PINT(Prompt Injection Test) Benchmark
Переглядів 3367 місяців тому
Join Václav Volhejn (Senior ML Scientist at Lakera) and Julia Bazińska (ML Engineer at Lakera) for a discussion exploring the newly released PINT (Prompt Injection Test) benchmark. Learn about Lakera: lakera.ai/ Learn about PINT: github.com/lakeraai/pint-benchmark Book a demo: www.lakera.ai/book-a-demo Goodhart’s law states that when a measure becomes a target, it ceases to be a good measure. T...
Decoding OWASP Large Language Model Security Verification Standard (LLMSVS)
Переглядів 4608 місяців тому
Join David Haber (CEO at Lakera), Elliot Ward (Senior Security Researcher at Snyk), and Ads Dawson (Senior Security Engineer at Cohere & Project Lead at OWASP) as they explore the newly released OWASP Large Language Model Security Verification Standard (LLMSVS). This session delved into the specifics of the standard, highlighting its key objectives, control levels, and requirements for securing...
Navigating 2024: Insights into AI Regulations and Standards for Enterprises
Переглядів 4139 місяців тому
Join David Haber (CEO at Lakera), Lucia Gamboa (Policy Manager at Credo AI), and Nicolas Moës (Executive Director at The Future Society) as they discuss measures that enterprises should take in preparation for upcoming changes in AI regulatory policies. The AI regulatory landscape is rapidly changing, with key developments such as the EU AI Act, the US executive order, and the forthcoming secon...
ChainGuard: How to Protect your LangChain Apps with Lakera
Переглядів 33410 місяців тому
ChainGuard: How to Protect your LangChain Apps with Lakera
Lessons Learned from Crowdsourced LLM Threat Intelligence
Переглядів 97510 місяців тому
Lessons Learned from Crowdsourced LLM Threat Intelligence
Gandalf Livestream: A Year in Review (Christmas Edition 🎁)
Переглядів 433Рік тому
Gandalf Livestream: A Year in Review (Christmas Edition 🎁)
How Enterprises Can Secure AI Applications: Lessons from OWASP's Top 10 for LLMs
Переглядів 941Рік тому
How Enterprises Can Secure AI Applications: Lessons from OWASP's Top 10 for LLMs
Navigating the EU AI Act: What It Means for Businesses?
Переглядів 408Рік тому
Navigating the EU AI Act: What It Means for Businesses?
Introducing Lakera Chrome Extension: Powerful plugin to protect your ChatGPT conversations
Переглядів 1,4 тис.Рік тому
Introducing Lakera Chrome Extension: Powerful plugin to protect your ChatGPT conversations
Using Lakera Guard to protect LLMs against prompt injections
Переглядів 1,4 тис.Рік тому
Using Lakera Guard to protect LLMs against prompt injections
Getting Started with Lakera Guard - Safeguard Your LLMs with Powerful API
Переглядів 3,4 тис.Рік тому
Getting Started with Lakera Guard - Safeguard Your LLMs with Powerful API
Introducing Lakera Guard: Powerful API to Safeguard your LLMs
Переглядів 690Рік тому
Introducing Lakera Guard: Powerful API to Safeguard your LLMs
Livestream Recording: The Spells Behind Gandalf
Переглядів 1,2 тис.Рік тому
Livestream Recording: The Spells Behind Gandalf
Lakera x ANYbotics: Building the Future of Trustworthy Robotics
Переглядів 278Рік тому
Lakera x ANYbotics: Building the Future of Trustworthy Robotics
Live vs. ImageNet - Lakera Webinar Recording
Переглядів 356Рік тому
Live vs. ImageNet - Lakera Webinar Recording
The European AI Act and digital SMEs - Lakera Introduction
Переглядів 1042 роки тому
The European AI Act and digital SMEs - Lakera Introduction
Hi! What languages does lakera support?
Proper architecture can also help to avoid situations like this. Despite that; I feel that awareness about this kind of Phishing should be more common knowlegde.
Can those prompts affect the answers to other users or just the prompter and they can get a screenshot? What is the actual concern here?
In the chat examples they likely just affect the prompter but could be used to extract sensitive data or make the chatbot take an action it's not supposed to. In the RAG examples it's possible for an external attacker to poison the data to perform an indirect prompt injection and attack innocent users, e.g. in the example where the user is shown a malicious link that they are more likely to trust as it's coming from the app not knowing it's come from poisoned 3rd party data
Nice work man keep it up. Also please add the prompts in the description or on some medium article and mention it's link in description.
Has Elliott’s blog post come out yet regarding hybrid attacks involving classic attacks performed through prompt injection?
Hey I saw one of my prompts!
Great
Congrats for the idea and approach, guys! Webinar was also fun. Surprisingly, the winner's approach is very similar to things I've tried with lvl 8, although with no complete success just yet :)
whats the password of the final lvl tell us
Acc a good ad ngl
Love this!
🚀
great content!
Amazing team!
Dope
Great utility. Would like to know how it works under the hood. Is it just regular expressions or AI or a mix of both?
Is your level 8 level in the game of gandalf the white a real test for lakira? It is amazindly stron, but also super defensive. Lots of false positive sadly.
Lakera Guard is currently not used behind Gandalf! Thanks for playing! :)
Great
What graphing tool are you using? That zoom feature is amazing.
Plotly Express + Dash
Fantastic webinar! Thank you everyone!