GPT-4o, Gemini, and Llama 3 HACKED! Cybersecurity Vulnerabilities Found in Most LLMs.
- Published 19 Sep 2024
- Hackers around the world are jailbreaking powerful AI models to expose their weaknesses. Vulnerabilities have been found in the most common large language models, including Google's Gemini, OpenAI's ChatGPT, and even Meta's Llama 3.
Welcome to my channel, where I talk about artificial intelligence. I bring you AI news as it happens and dive deep into the research world to show you the next big thing in AI before it arrives. Consider subscribing, liking the video, and sharing your thoughts in the comments section.
So do I have to pay a cybersecurity company just to use a language model without worrying about security and privacy?
Yes
I think the major responsibility lies with the tech companies themselves. They have to make sure their products are safe for users.