Dark AI Agents: The Most Dangerous AI Today?
- Published Jul 7, 2024
- Dark AI Agents - The Most Dangerous AI? (For Now)
👊 Become a member and get access to GitHub and Code:
/ allaboutai
🤖 Great AI Engineer Course:
scrimba.com/learn/aiengineer?...
🔥 Open GitHub Repos:
github.com/AllAboutAI-YT/easy...
📧 Join the newsletter:
www.allabtai.com/newsletter/
🌐 My website:
www.allabtai.com
Today, I built a Dark AI Agent to explore how these could cause havoc on the web in the short term by creating confusion and misinformation on popular social media platforms like Reddit, X and others. These AI agents can have a more advanced brain than traditional bot networks, and I think it's important to be aware that these exist and will probably become more prevalent in the near future.
00:00 Dark AI Agents Intro
02:04 Reddit AI Agent Flowchart
03:29 Dark AI Agent Python Code
10:38 First Test - Comment on Post
11:27 Second Test - New Post
12:59 Third Test - Respond to Comments
14:59 Key Takeaways
- Science & Technology
Interestingly, this was vaguely similar to the original plot for Fallout that was never made canon, according to the original creator. Big corps were experimenting on humans in the Vaults because they were planning to abandon the planet in ships and wanted to learn how small groups of humans in enclosed spaces would react to different extreme situations.
I think the biggest issue with this style of prompting is repetition out of context. Make 1000 posts with this engine and patterns will emerge that are easy for readers to identify.
Now, if you keep track of what's been said previously and keep your ICL text fresh...
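The comment above suggests tracking prior outputs to avoid recognizable repetition. A minimal sketch of that idea, using only the standard library; the similarity metric and the 0.6 threshold are illustrative assumptions, not anything from the video's code:

```python
import difflib

def is_too_repetitive(candidate: str, history: list[str], threshold: float = 0.6) -> bool:
    """Return True if the candidate text closely resembles any prior output.

    Uses difflib's character-level similarity ratio as a cheap stand-in for a
    real semantic-similarity check; the threshold is an arbitrary assumption.
    """
    for previous in history:
        ratio = difflib.SequenceMatcher(None, candidate.lower(), previous.lower()).ratio()
        if ratio >= threshold:
            return True
    return False

history = ["Totally agree, this changes everything!"]
print(is_too_repetitive("Totally agree - this changes everything", history))
print(is_too_repetitive("The methodology here seems questionable to me.", history))
```

A real system would likely swap the character-ratio check for embedding similarity, but even this crude filter would catch the near-verbatim repeats readers spot first.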
Yeah, I think this started almost a year ago, but it has been accelerating. We will need some kind of protection at the platform/protocol level; otherwise, within a year we will be flooded with comments and real humans' comments will be a drop in the bucket.
You can’t differentiate; it’s the same as if he had posted it himself.
@@alexanderrosulek159 Then you just don't trust the internet anymore.
@@helix8847 And I was referring to the services. YouTube can’t tell if it’s a bot, nor can any other site if you do it right. They can only sort of check whether you’re on the same device/IP, but from the actual text they have no idea whether a robot wrote it.
People? are responding? I kind of suspect everything nowadays.
What would be interesting is the opposite: a sort of AI validation of content likelihood using several smaller models working together. Prompt injection would be an issue, but hopefully a mixture of models would mitigate the risk. Package that as a JavaScript browser plugin connected to Ollama. It might be a neat project if it were connected to a few online sources of truth via RAG.
Thank you for your interesting idea; I'll probably try to implement it.
@@JohnDoe-zx8bu Good luck, I hope you succeed. I'm coding something else right now, but eventually I'm planning to get to that idea myself. I want to start open-sourcing some code to give back to the community. :)
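The mixture-of-models validation idea in this thread could be sketched as a simple majority vote. Everything below is hypothetical: the toy lambda "judges" stand in for small local models (e.g. served via Ollama), and the heuristics they use are placeholders, not real detectors:

```python
from collections import Counter
from typing import Callable

# A judge takes a piece of text and returns "human" or "ai".
Judge = Callable[[str], str]

def ensemble_verdict(text: str, judges: list[Judge]) -> str:
    """Majority vote across several independent judges; mixing diverse models
    makes a single prompt-injection payload less likely to fool all of them."""
    votes = Counter(judge(text) for judge in judges)
    verdict, _ = votes.most_common(1)[0]
    return verdict

# Toy heuristic judges for illustration only; real ones would query local models.
judges = [
    lambda t: "ai" if "as an ai language model" in t.lower() else "human",
    lambda t: "ai" if len(set(t.lower().split())) < 0.6 * len(t.split()) else "human",
    lambda t: "ai" if t.count("!") >= 3 else "human",
]

print(ensemble_verdict("wow amazing amazing amazing wow wow!!!", judges))
print(ensemble_verdict("The methodology in this post seems questionable to me.", judges))
```

The design point is that the judges are independent, so an injection crafted against one model's prompt doesn't automatically flip the overall verdict.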
Very crazy!
The price to go to Mars is 100 trillion USDT or USD/gold... they will eventually get there...
So they are not gray boxes.
Atlas Shrugged (Ayn Rand ) == Project Elysium ? Who is John Galt? ;-)
I'm more afraid of the 52 Cult of Elon followers who hammered your post than I am of the "dark AI".
Do you think AI post could persuade or convince some people?
Most commenters on the post call BS and seem quite critical of the viewpoint in the post.
Absolutely, if the narrative were slightly more nuanced and subtle, and if you optimized the system prompts and instructions to better reflect the science of influence. You can find these techniques in books on sales, political campaigning, etc.: how to overcome objections, what kind of emotional triggers to push, and so on.
I kind of agree with the previous comment that says "Don't share the code"; what Kris created here is a basic framework that could easily be improved upon by someone with the influence know-how.
Let's not make it too easy :)
@@AntonBj3 I think you might be spot on there. I also fear it's too late, share the code or not. Someone would have figured this out…
Yes, they are probably testing out these systems as we speak.
However, I don't worry about the future, because I believe there are diminishing returns. A new system and method of manipulation will work very well for a short time; people will catch on (on average) and it will stop working (on average).
Manipulation will become increasingly difficult, and after a certain number of iterations: humanity wins.
In the short term, I expect Reddit and other fully text based platforms to become completely unusable.
@@AntonBj3 I hope you're right.
As for redditors, they're the easiest to manipulate. The ones on Reddit all day are very mentally limited people.
nice:)
It's no surprise redditors engage with AI content, they're not exactly the brightest bunch.
I've canceled my membership since this video dropped. Given the size of the channel, I thought this was a harmless, serious channel posting cool, innovative AI community projects. This video doesn't show how to protect yourself from these threats; it's just a dummy-proof step-by-step instruction for edgy script kiddies on how to spread misinformation and false narratives online, which they'll pay for with their mom's card, get quick access to through membership, and use to mess with innocent, less technically literate people online. Sorry Kris.
Exactly
Actually, to me, knowing this is possible means we need to educate people that whatever content you see on Reddit, Facebook, or any other forum may be controlled by AI to spread fake news. It's the same as the ethical hacking videos shared on other YouTube channels.
Love the content thanks for the video!
Although I don't know much technically, it seems interesting. Sharing with friends.
This is blackhat stuff, I'd refrain from spreading it around.
You're buying into the hype by saying this. The internet has already been ruined. Also, you know AI stands for automated intelligence, not artificial, right? It's not real intelligence; it doesn't even surpass the intelligence of a fish.
@@kevinsedwards Wtf, no, that's not what it stands for. And no, I'm not buying into any hype; this is objectively bad and makes the internet worse regardless of how you want to characterize AI.
Don't share the code.
Bro, you think people with bad intentions don't know how to code?
@@Graverman Yes and yes. The ones who know how to code are already paid by adversary governments. The people who would potentially use it with good intentions are security specialists; they know how to code it themselves and don't need to search YouTube for this step-by-step video for dummies on how to make a malicious bot to harass innocent people online and push false narratives.
@@Graverman Of course they do. Making it easier and more available still increases the number of capable bad actors.
@@technolus5742 If someone can't code this, they'd probably also get caught thinking VPNs are anonymous or something.
@@technolus5742 I think different platforms and browsers will start to integrate protection faster if there is too much fake content on the internet, so it's a good thing.
Thanks Kris. This was very enlightening and helpful. I doubt you have opened Pandora's box with this code, as some might think; there are plenty of accessible sources with instructions to do far worse. This was helpful because it showed how easily this could be done and that we do need to be on guard. Please continue to point things like this out.
I was thinking, this could be used as a counter agent, someone unleashes a dark agent on my Reddit page and I unleash my "white hat" agent to battle theirs. 😀
Hasbara is already using it on social media platforms
You mean pro-Palestinian liars using Hamas made up numbers.
Screw Germany, Germany is broke
It is time to configure it for Trump(or Biden) narratives and sell to both parties. You will become a millionaire :)