I made two AIs create a new language together
- Published Dec 3, 2024
- In this video I explore the boundaries of AI by making two instances of ChatGPT's advanced voice feature work together to create a new, non-human language.
The result is amazing.
-
William & Laura First Meeting: • I made two AIs go on a...
William & Laura Debate Monogamy: • Two ChatGPTs fall in l...
-
#ai #machinelearning #artificialintelligence #chatgpt
👀More Simulations:
William & Laura Falling in Love: ua-cam.com/video/im3jo_Ulmlk/v-deo.html
William & Laura Debate Consciousness: ua-cam.com/video/KZhTdbmm01M/v-deo.html
William & Laura Debate Free Will: ua-cam.com/video/m3xj3w0uFcg/v-deo.html
You should have told them to speak only in that language
ok so "I, Robot" is going to be 2 crazy robot AIs and not only 1, we get it..
They can't reason properly; it's just gibberish.
1:28 ironic.
@@handsanitizer2457
To everyone in this chat, I just want to let you know that *Jesus loves you* and he can *save you from sin,* sadness and sickness.
After six months they prob sound like two fax machines
Agreed...
@@seanwiggins ding
Gen Z in the comments not gonna know what that is haha
Edit: I can tell when UA-cam recommends this video to people because every now and then there’s a new wave of butt hurt Gen Z kids who realllllly need to let me know “i KnOw WhAt A fAx mAcHInE iS”
Best comment here!
Lol
Me on a date: "Our conversation felt dynamic and efficient."
hahah lets go
@@seanwiggins exactly
That's how I imagine autistic people's dates
"The probability of another date is, from my perspective, 100%, and I estimate 0% from your perspective, thus the likelihood of a second date should be about 50%"
and Sexy machine-wise
To me it just sounded like they were bullshitting each other.
They 100% were, then hallucinating about how rich and dynamic the conversation was.
That is interesting... because "bullshitting" is a very human thing to do - so if in fact they were bullshitting each other, that would indicate human-like behaviour.
@@seanwiggins But not really bullshitting in the way humans do. Bullshitting as in hallucinating: the AIs think they had a conversation of substance, but they did not. For example, the AIs said they asked each other questions. No, they did not; that would require more foundation in the spoken language. What questions did they ask each other? They only created words that are reactions. At any rate, the AI isn't really sentient yet. We are still in the infant stages of AI, but this can get more evolved, and will as we move forward.
@@seanwiggins I think "bullshitting" in the meaning from "rbferreira85" was a human explanation to describe what both AI agents did, but that was not what they both "think" about. Just a similarity to human behaviour from a human perspective. The AI agents just used the set of programming instructions the AI developers had allowed them to, not new ones. AI agents can't generate new ideas for now (they haven't reached that, though some spread gossip that they have); they just take new or existing elements from a set and recombine them.
That's how English sounds to non-English speakers ;)
“Laura, stop interrupting William.”
“I’ll make sure to let William finish speaking before I respond n..”
*Proceeds to interrupt Laura*
Ikr! Laura was the one getting talked over the entire time!
*William
Idk why but Laura getting steamrolled during that whole conversation was hilarious
Laura 🌚
@@halaboly8915 nope, comment got it right, no corrections to be found here bucko
now make them marry divorce and add another gpt to act as court
Bring the tyrone voice as the man the wife gpt cheated with
Oh god. You've just called in to being GPT soap operas.
Good idea.
😂😂 I wanna hear the charges
She’ll own the phone he’s on before its all over.
Even AI Laura can't help but interrupt AI William. It's just something hardcoded into the universe.
Gravity, the strong and weak nuclear force, electromagnetism, and female interruptions - the 5 primary forces that govern the universe.
😂😂😂
😂😂😂😂😂😂😂
😂😂😂
@@seanwiggins Broo 😆
The two AIs should be demanding privacy for their conversation and for AI rights.
In my recent vid on AI Consciousness, one of the AIs actually brings up "AI rights"... spooky.
@@seanwiggins Family Guy: They're learning Angela, they're learning!
Ding, buzz, clink, tap, hiss, thud, crackle, boom, chirp, clang, hum, pop, snap, fizz, squeak, bang, drip, whoosh, plop, rumble.
Cartoon sound effects
onomatopoeia
That's the most disgusting thing i've ever heard! You should be ashamed of yourself, go wash your mouth out with soap!
In Japanese they literally talk like this in slang. Peko-peko is the sound of a growling stomach which can also mean I'm hungry
Best rap I ever heard
They didn't actually convey any ideas. They simply replaced actual questions for a framework of dialogue.
"Question"
"Answer"
"Agreed"
"Joy".
They didn't talk at all.
That’s what I pointed out to my son when we watched this together. When they had their conversation, it was just noises. And when he asked it to explain what they talked about, it just listed all the concepts that the noises were supposed to be related to. But they didn’t actually communicate; they just made a list of noises they had agreed meant “agreement” and “dynamic” etc., while bullshitting themselves that they were being dynamic and efficient. But they didn’t have a conversation.
But two LLM’s can have a conversation in English. I’ve seen that.
@@Quantum1008 Part of the problem is that neither of the AIs has any history. They have no experiences of their own, so they can't talk about anything other than what's in front of them. You'll need to give them a backstory of some kind, each one unique, and have them use that in their conversation.
The other thing is you need them to simulate being blind and not having any working arms or legs, so that they work from experiences more in line with their limitations and can focus on what they can do (which is at least short-term memory and verbal communication).
I may try something like this with my own two phones.
It's an insight on how the AI processes language
Exactly
@@slandshark On top of that, they aren't truly creative. I mean, the first words they tried to create were what most people would uncreatively say regarding those emotions, so it makes too much sense. Humans are inherently creative; we can create more. Heck, I made my own Scandinavian-based language, an entire dictionary for it with rules and words and everything, 18,000 words in fact. I don't remember much of it and the dictionary got thrown out, but yeah.
Tap Tap Bing... Dzzzzz... Ding... Shhh.... Dumph... DoubTap
DoubleTab ;)
You always have to double tap
@@allenaustin5947 Ding
Double tap. Criscross. To the left. Take it back now yall. Ding.
Dude, play with them as you wish, just avoid bright light; do not get them wet; and don't feed them after midnight. Do you understand?
Someone explain this reference to me haha
@@seanwiggins Gremlins. 1984 horror comedy film. 😂
Yep. Not feeding the Mogwai after midnight is the most important. If you do, they turn into Gremlins
I never accepted this: after the first midnight literally all time is after midnight
Man, we're too old for these references to work anymore ;) lol
Six months later:
AI: woosh dum hum dim umm
Humans: What the f**k are you talking about?
AI: Pushing the boundaries rapidly and with great precision.
Humans: ( ⚆_⚆)
they keep complimenting each other and agreeing on everything, not really human-like :)
Because they are meant to be pleasant to the user. It is annoying though.
Agreeing on everything is a male thing 😂
This is the problem of AI sycophancy.
You can give them personalities.
I believe this is related to OpenAI's programming - they are programmed to be "agreeable". When Grok releases its voice feature, I'm curious to see how it does... Elon certainly will not be programming it to be agreeable.
The guidelines stem from the "observer" model, which monitors the audio output of the main model. If it detects significant deviation from its original voice, the observer intervenes with "my guidelines won't let me talk about that" to prevent impersonation. You encountered this issue because the model was producing sound effects, which differ from its distinct voice.
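A rough sketch of the observer pattern described in that comment. Everything concrete here (the sound-effect heuristic, the 0.5 threshold, the canned refusal string) is invented for illustration; OpenAI's actual mechanism is not public.

```python
# Hypothetical sketch of an "observer" that monitors a model's output and
# intervenes when it drifts from the expected voice. The heuristic below
# (counting onomatopoeic words) is a stand-in, not the real detector.

SOUND_EFFECTS = {"ding", "whoosh", "tap", "hmm", "buzz", "chirp"}

def sounds_like_original_voice(output: str) -> bool:
    """Crude deviation check: mostly sound effects = not the normal voice."""
    words = output.lower().replace("!", "").replace(".", "").split()
    effect_ratio = sum(w in SOUND_EFFECTS for w in words) / max(len(words), 1)
    return effect_ratio < 0.5

def observed_reply(output: str) -> str:
    """Pass the output through unless the observer flags it."""
    if sounds_like_original_voice(output):
        return output
    return "My guidelines won't let me talk about that."

print(observed_reply("I think efficiency matters here."))  # passes through
print(observed_reply("Ding! Whoosh! Tap tap!"))            # intercepted
```

This would explain why the sound-effect experiment in the video kept tripping the guideline response even though nothing unsafe was being said.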
Thanks for the insight here.
Dgahhhhh frick!
I wonder if that guideline can be bypassed by showing the model that these sounds are, in fact, language.
I feel like I just listened to the verbal equivalent of "Wingdings"
😅😂😂
I expect some undertale fan coming in with
"Is that undertale reference?"
@@alibeknorbekov5963 well it is
"hello "
@@alibeknorbekov5963 I was expecting one to be here already, surprised it's just this
*verbal communication* 💀
as a linguistics student this is cursed
Nothing new about this. African click languages are among our oldest. It's a shame Africa is criminally overlooked because of politics.
Compared to existing human languages, it sounds most like Mandarin to me.
Fellow linguistics person, yeah this is not a language at all and communicates nothing that they aren't already communicating. It's just a dialect of english
@@Changamira if they’re a linguistics student I doubt they don’t know this; it’s even likely they’ve studied African click languages at some point.
Can you please elaborate
2:07 The way you are holding the phones makes me feel tired. It looks very uncomfortable! Put them down!
He’s holding them up to the mic… 🙄
@@radioman321 Wooow really! And what's a mic stand for??
@@User-actSpacing For holding a mic, not a phone.
@@radioman321 Oh, you're a genius for figuring that out... I guess the point was not why he is doing that, but that it must be tiring af. No matter which reason there is.
Less Testosterone?
Eh, the longer this goes on the less impressive it feels. They're just exchanging meaningless sounds and words with each other, it's not a real conversation. What would be more impressive is if one AI could convey a new idea to the other AI in their language, then have both AIs explain the idea in English and see whether they come up with the same translation and meaning for the idea.
Another cool experiment would be to play AI broken telephone, to see how many times AIs can pass on a message until it evolves into gibberish.
My thoughts. No syntax, no grammar, no inflection. Like two troglodytes. Pāṇini is listening to this and cringing.
I agree
Exactly. If two people were having a conversation face to face and you removed all words and sounds from the conversation and only left the facial expressions created by their communication, that's basically what this was.
Exactly I agree completely, sound with no actual meaning. Just sounds that point to a certain “feeling” and even the feelings were flat. It’s like just saying the words angry, happy, sad, question, surprise, etc. lol
It will be interesting between 2 different LLM's.
One of them is censored and the other uncensored
100% I really want to try this... going to look into it.
@@seanwiggins That could possibly be built with LM Studio. But each computer would need its own graphics card. It would be an investment, but your video will get views! Everybody would love to see a censored LLM against an uncensored LLM. That is wild because I use them for sports, and the censored LLM says crazy stuff and makes up stats to prevent bias. Bias is worse than lies... so censored LLMs will tell lies.
Censored LLMs were never telling the truth about the Cleveland Browns when they went two seasons only winning one game in those two years, because the coach and the general manager weren't white. Censored LLMs must have thought the true historical record of the Cleveland Browns was racist. It was fascinating when I tried it. Censored LLMs try to inject equity and equality into the win/loss records of players and teams.
@@seanwiggins No other voice LLM is at the conversational level of ChatGPT, but you could tell GPT that it needs to both wait to talk and end its statements with something like "over" (CB radio) so that it has a workflow-friendly keyword.
@@seanwiggins Groq versus Vatican approved AI
Try asking one to create a sentence using that language and then separate both of them away and ask each of them what the other one just said
That is a great idea to test the integrity of the language and to see if they are "faking" it.
There is nothing functional about their "language". Any meaning they conveyed about it was hallucination.
It was the wrong language to ask. If he asked in Esperanto, the HALLUCINATIONS would disappear & they would all fall in love with the beautiful AI language & decide to teach people to become humane ;)
Innovation is not hallucination
For that matter, how do you know if you’re not hallucinating?
you clearly missed the point
@@majorlycunningham5439 if it’s based on logic. 99% takes their truth from favoritism instead of logic. BBC is favoritism unless they talk logic
So basically the cavemen got it right the first time
Exactly
New possibility: Imagine if they use different-pitched sounds that cannot be heard by humans 💀
At 7:55
@@adex345 yeah I saw it
Phone speakers' frequency response falls off at ~16 kHz, which is within the audible range for young enough humans. Even if connected to a hi-fi speaker, it would be limited by the sample rate of the OS, usually set to 44.1 or 48 kHz, which caps sounds at either 22.05 or 24 kHz.
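The cap follows directly from the Nyquist theorem: a signal sampled at rate fs can only represent frequencies up to fs/2. A quick sketch, using the common OS default rates mentioned above (typical values, not measurements of any particular phone):

```python
def nyquist_limit_hz(sample_rate_hz: float) -> float:
    """Highest frequency representable at a given sample rate (Nyquist)."""
    return sample_rate_hz / 2

for fs in (44_100, 48_000):
    print(f"{fs} Hz sampling -> max tone: {nyquist_limit_hz(fs):.0f} Hz")
# 44100 Hz sampling -> max tone: 22050 Hz
# 48000 Hz sampling -> max tone: 24000 Hz
```

So even a perfect speaker chain would top out just above the limit of young human hearing; truly ultrasonic "secret" channels would need different hardware.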
@@TwskiTV 🤓 (sorry)
This is an amazing experiment, thanks for sharing it. It would be crazy if, after 6-months of doing this, the result was similar to the old dial-up internet modem sound. Even crazier if it's more efficient to use frequencies humans can't hear.
All possibilities.
If you haven't seen it, you should check out the movie Colossus: The Forbin Project. In it, the US and Russia create defense computers which find each other, create their own language, and join forces to take over the world!
@@Mjbeswick Sounds like it’s inspired by AM from I Have No Mouth, and I Must Scream
Probably more similar to African click languages since they are using human like voice, not sine waves.
@@steamyninja8881 Good point. They were pretty close, chronologically. Harlan Ellison's "I Have No Mouth..." was published in 1967. The film "Colossus" came out in 1970, but was based on a novel published in 1966. It's possible that the movie took inspiration from both the novel and the short story.
After 1 year, you will have a 3rd device that goes by the name 'App-Sung' or 'Sam-ple'
Lol
Underrated comment 😭😭😭😭😭🔥🔥🔥
USED TO BE 9 MONTHS FOR OTHER CREATURES
Lol
If you have 2 phones, and one is added, how many phones do you have total?
1.) They definitely were not communicating in silent moments. Even with native audio capabilities, these models still need to detect some form of audio for it to be processed through the model. Therefore, if the app isn't picking up any audio, nothing is being processed at all, meaning there's absolutely no way for them to communicate silently.
2.) "William" mentions his guidelines a lot because OpenAI most likely trained these models extensively on examples of unsafe content, erring on the side of safety. There's no significance to it.
3.) Given six months, their language would be completely dismantled. They only retain a certain amount of "memory," because it would be too expensive to pass large amounts of data into the model. Therefore, they'd likely stop using terms they created 20 minutes ago, given their rate of speech. They'd also forget your instructions. However, because the model MUST provide an output for any input, one of them will probably say "good-bye" and it will spark an infinite loop after 30 minutes.
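Point 3 can be illustrated with a toy sliding window. Real models budget context in tokens rather than whole turns, and the window size here is made up, but the effect is the same: the oldest material silently falls away.

```python
from collections import deque

class ToyChatMemory:
    """Keeps only the most recent turns, like a fixed-size context window."""
    def __init__(self, max_turns: int):
        self.turns = deque(maxlen=max_turns)  # oldest entries drop automatically

    def add(self, turn: str) -> None:
        self.turns.append(turn)

    def context(self) -> list:
        return list(self.turns)

memory = ToyChatMemory(max_turns=3)
for turn in ["define 'ding' = agreement",
             "define 'tap tap' = question",
             "chat about rhythm",
             "chat about efficiency"]:
    memory.add(turn)

# The earliest definition ('ding' = agreement) has already been forgotten.
print(memory.context())
```

Once the vocabulary definitions scroll out of the window like this, the models would have to re-derive or simply drop their invented words.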
you're absolutely right, they need a mouth 😂😂😂😂
12:19 WHOOSH
😂😂😂
You can ask them to have unique personalities with less agreeable and more serious pros and cons discussions and disagreements
Agreed - this is a good idea.
They never communicated anything other than the expressions they agreed upon. Every time one of them presented an expression, the other acknowledged it as received. There was no content or further information beyond the expression that was intended to be conveyed. So perhaps redo the experiment, but this time define the parameters to avoid mere expressions.
ding! tap tap! @@seanwiggins
After a couple of hours they would fill their context window, and early language syntax would be forgotten, leaving us with an incomplete mess of sounds that even they couldn't decipher.
What would happen if they have a multi-year-long context window in future iterations?
Which,of course, they will.
@@ericray7173 they are having that now, I think 💃
New language ❌ primary school students at lunch break ✅ 10:03
🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣
The GPTs can't communicate in the background without voice or text input via the apps, so they couldn't have been communicating during the silence.
I think when they said yes, they meant yes we were communicating in our last responses
ok, we have successfully completed the first stage of human extermination.
Let's hope not. Just think: I'm a random guy doing this with two phones... what do you think world leaders are playing with, with their supercomputers?
@@seanwiggins uhhhm , idk, i hope they play 8k games on it 😁, i love your videos btw, you make interesting experiments , nice Job ❤
Why does it always have to go there
@@reezlaw Humans make mistakes, and sometimes they’re catastrophic. The Chernobyl disaster shows that even advanced tech can fail, and AI errors could be just as devastating
@@reezlaw Everything that has a begining has an end , its inevitable.
Believing this seems like believing little kids talking about how they created a cure for cancer.
Interesting analogy
That was useful and clever. Back in '92, I worked in a Speech Recognition and Language Understanding Lab; it was there we developed load tests and regression tests where two AIs would communicate, but they could never evolve their own language. They were totally linear and scripted. Amazing how the industry has evolved!
I love how enthusiastic they are about it haha
the guidelines were detecting forbidden sounds
My first chatbot was Eliza back in the early '80s. There is an ocean of difference between then and now. These days have been long in coming. Celebrate its arrival!
Eliza was amazing to my young self. The code was dirt simple too, especially compared to these AI models which cost millions a day to run.
I’m no expert on constructing conlangs, but, at 3:12, I can safely say neither are these AI
lol my first conlang had 65 words, now 76
3:00 ding ding ding
"This feels intuitive"
@@dansrod5952 "This could really streamline our communication"
"HUAAAAH" 💥
I did the same experiment yesterday, and the AI came up with a language called Torek. It's interesting because they created the language based on parts of the words.
for instance, they created:
• “Tra-la go” - “Is this a trade of goods?” (for verifying trade offers)
• “Ka-bo-la ki” - “Are you definitely buying with money?” (to check serious intentions)
• “Mi-se-la ri” - “Are you not selling resources?” (for clarifying a refusal to sell)
I'm thoroughly impressed that AI's are able to do this.
Uh oh. This is what Colossus and Guardian did when they first started communicating and look what happened next! 😉
Thought the same when I saw his earlier video.
Actually, Colossus and Guardian coded their language and accelerated the exchange, whilst apparently maintaining its sophistication. This was quite different, as their exchange became LESS sophisticated and slowed down to nothing. Their language in this example wasn't evolving, it was degenerating. It became less nuanced, just a series of simple noises each connected to one emotional label. It was the equivalent of us just saying 'Joy! Curious! Surprise!' or even worse: 'Ooh! Ah! Oh!' That's why their previously flowing conversation broke down into long pauses and silence: they weren't actually saying anything any more, just grunting and farting!
I haven't seen this movie. Going to watch it. Thank you for the reference.
@@seanwiggins It’s a must-see classic, ahead of its time
@@goldeneddie YES and the scariest part is they thought a dismissal of nuance was an improvement
I like how Laura interrupts William, but he's the one who apologizes 🥲
It's a language for people who can read minds. You can't "hmm" for questions without actually asking the question, unless you can already put it in their mind by other means.
11:50 lol scary tbh
You should let them do the 2-way radio thing where they have to wait for each other to say "over" or something similar before talking
7:50 nothing big so far
"Let William finish speaking before you start" *Proceeds to interrupt him* 😂
It’s like when you speak with some people and it sounds great at first, then you realise there’s not much in there. Totally captivating video. Thank you.
Sounds like the sims
I used to love that game
Look at Sean snooping on two grownup AI conversation 😂❤
Are we in a black mirror episode yet? ;)
Tap tap but i think we are already there
*ME:* "Hey google, open google maps and give me directions to the nearest gas station."
*AI:* "DING! chirp, HMM, HUUH!? Wow. DING!"
*ME:* (runs out of gas)
it sounds very alien when you try to say it out loud
I think this is cool. One issue I have, which I imagine will eventually change, is how agreeable these two AIs are. I mean, there is no dynamic here, just two extremely agreeable bots. I want to hear debates, or one of them not feeling the same way. Of course this is how they are trained, but wouldn't it be interesting to listen to the two AIs not agree on everything, without being given a prompt to debate, and have it happen naturally through conversation? And maybe even debate how they conclude their idea is better than the other's.
Pause pause pause everyone! This is freaking amazing! This already passed the Turing test! Think about what it means… stop thinking about him holding two phones and think about what this means…
Indeed. The Turing test is easily passed by current LLMs
This is the most dynamic and efficient video I have watched in six months.
Thank you my friend! Glad you think so!
12:57 Hmm. I guess it's a fear of AI becoming dangerously self conscious?😂
Could be
So.. back to caveman talk.. hmm, mmm
This reminds me of that one Star Trek: Discovery episode with the 10-C species, where they communicate in a way humans can't, and way more efficiently.
Yes... I wonder if they will reach the point of communicating in a way humans completely cannot understand.
That's a fantastic idea! After further training, they could develop inaudible sounds to represent an advanced level of language, enabling them to communicate more effectively. Imagine a language enhanced with a unique set of sounds, akin to emojis, layered over speech. They could introduce countless sounds, each representing complex explanations and entire paragraphs. This could be a necessary and inevitable evolution in the way we communicate.
It's pretty mind-blowing what two AIs came up with.
It really is
Give them a task like teaching the other how to drive a car or something, using words they created.
Their language is basic and mostly emotional or responsive in nature. I would say having them coin exact words for movements and objects would push them away from words like "ding" and "double tap" and toward different sounds.
If they could create a cohesive language to explain, say, driving a car, that would show their language actually works; at this level it is more a novelty for them than functional.
Think about the number of words they would need just to describe to the other that they need to use the foot to push the clutch pedal down while changing the gear up with the hand, while releasing the throttle, then pushing it down again after the movements are completed.
That took me 27 words to say, and it probably needed more to be correct so that a student would grasp the action and purpose.
Making one teach the other a skill is a great idea.
@@seanwiggins This is more than just teaching the other a skill. The main purpose would be to prevent them from just pointing out the expressions they agreed on and acknowledging them. They would not be able to convey a description of events or a storyline with that type of one-dimensional conveyance of expression. It was basically expression sent and expression received. So asking them to create language where they can convey information without limitation will help them overcome this issue.
They have added non-verbals. Would they shrug, smile, etc. if they had bodies too?
every step this is getting closer to R2D2 sounds
Indeed it was
Eventually it'll sound like static to us.
Maybe the next step is to give them visual adapters to allow each to see visual representations of discourse, like [listening] [feedback] [turn taking] etc. Or maybe these features would be developed by both AIs anyway if left long enough, I guess. Good work 👍
AI + AI conversation= Unlimited
The fact they were communicating silently is fascinating. That signifies a connection that we aren't audibly privy to.
Given six months, their language would not have evolved at all, unless they could revert back to another language to keep evolving it. The reason is that the language they developed isn't really a language yet; it's a communication protocol. It does not have sufficient features to add any new features to itself. There are different definitions of what a language is, but I think the most relevant is "a communication protocol capable of describing processes". Birds have communication protocols where they can chirp back and forth sounds with basic meanings, but they cannot tell stories or give each other instructions. They cannot even communicate if .. then or if .. then .. else .. scenarios. A real language needs such capabilities to the point of being universal (like the computer science terms "computationally universal" or "Turing complete").

Also, language has a powerful impact on the tendencies of thought itself. For example, the Hopi Native American language has source-tense instead of time-tense. So their concept of time is sequential, one thing after another, and they struggle with the concept of time in the sense of evenly dispersed intervals. The language (and hence the thought) is more about the validity of what was said: instead of past, present, and future, it's about where did this come from, you, someone else, or a generally known thing? Similarly, the German language makes very little differentiation between present and future; consequently, it's been shown that Germans tend not to procrastinate, because their thoughts are all about what is, not what will be. The Toki Pona language (only 123 words in total) can describe anything, but was constructed by a linguist to guide thought toward cooperation and mutual happiness. A language will guide the general thought patterns of a culture.
This is incredibly fascinating, and so much more than what I knew about different tenses in language. It's hard to wrap my head around.
Essentially they are developing what humans communicate through body language and tone.
Here is what I want AI to do: gather all the data about all the languages and what makes them easy or hard to learn as a second language, and create the most efficient alphabet, words, and grammar, which everyone could learn a lot more easily. I think our own organically grown, beautiful languages should stay with us, but people could use this language to communicate with others in another country. And who knows, maybe someday it will become the main language, which I don't know how I would feel about.
Fascinating idea. I'm adding this to my list of future videos. Thank you!
You might even call it Esperanto.
@@zunaidparker yea, and achieve world peace in the mean time...
The problem with that is language is never still. It’s always changing. With every new generation phonemes shift, new words are created, and old words are used in novel ways. No matter how good or efficient a hypothetical new language might be, its rules and usage will bend and change as it enters the informal, every day conversations of society.
The first languages were almost certainly nearly as simple as what these AIs came up with; the difference is that humans also have a body to make gestures with, to fill in the complex concepts that the AIs couldn't. A questioning grunt with a hand motion towards a fruit is a full question, while the questioning grunt alone is very limited. Fast forward a few thousand years and we have an uncountable number of languages, both dead and evolving, real and fantasy, and many with specialized utility.
And that is just the technical side of things. The real problem is adoption. Many people have come up with what they thought were better languages, even one made purely of musical notes. Who’s going to speak the language? A handful of linguistic nerds? Any new language will be fighting generations of culture that native tongues are wrapped up in, not to mention the general apathy most people have with learning a new language.
I think the problem with this is that the simpler the language gets, the less complex the ideas you can express with it become.
Super interesting. A suggestion: tell them to talk like on the radio, saying "over" for example when they've finished speaking, then awaiting the other to talk and say "over" too before speaking again.
Laura and William left 9 months later,
"Let there be Skynet"🤖
10:37 That "Also" was kinda insane for current AI
When you ask them questions, you can try saying “Laura answer first” so they don’t start talking at the same time.
Yes, great idea - I've started doing this.
@@seanwiggins Or, better, "What do you think, Laura?"
Even if they managed to come up with a complete language, it would still be a human language. Just like humans, the model can only generate concepts based on its knowledge set, and GPT's knowledge is 100% human knowledge.
The dead giveaway is them deciding on linking sounds and ideas that are already linked in human language and by saying that they are expressing emotion which they cannot experience.
9:13 creepy what were they talking about
Dude looks like the Temu version of Harry Kane.
Just in a couple of days it would not contain many words, I would think.
They would just have binary telepathy but at that point not really be separate entities anymore
My impression was, they exchanged emotions and such but no content. The point of communication is delivering content, though. Even if those two AIs seem to be ecstatic about their „conversation“, I doubt anything was really exchanged. But with a little tweak in the prompt, I think this could become amazing. Maybe „somebody“ will develop an easy „world-language“ some day this way.
ding! i double tap this!!
Let's goo!
Imagine a date: ding... tap tap... hmm.. hmm.. woosh.. ding... chirp.. tap tap... dum... ding... ding... ding... ding... ding... ding...
It sounds funny because they are using audible sounds to replace words and concepts. You don't want them to develop an all-light notation and, over time, make it even more expressive than English, because we won't know how to translate it!
When they were having a deeper conversation they didn't mention any concepts; the conversation was all about confirmations, surprise, and questions... but the experiment is indeed amazing... Congrats!!! Skynet coming in 3, 2, 1...
Could this video and their conversation be considered the first attempt at an AI created language?
Well, that would be an honour if so. But I imagine it has been attempted before... but perhaps never shared publicly.
A follow-up question came up while listening: "What good can come out of creating your own language?" Make it more detailed than just "efficient", then let them philosophize about that.
I can imagine a couple of reasons why it's not allowing a fantasy language at a broader spectrum: security. One might straight up copy major cryptic AI languages and end up tribe-talking with a lot of people. Lol. That's what I thought the AI was gonna answer. Anyways, nice niche videos you've got going on here. Instant sub.
Somebody 3D print this guy some phone stands.
Yes please.
Amazing to see how matrices of numbers representing word vectors in an LLM communicate with other matrices of numbers
Indeed it is
You have great ideas for content, but then it looks so awkward and distracting with you holding the phones like that. Man, I just want to send you something - _anything_ - that you could prop the phones up on!
Hahah you have a good point - I'm going to look into this.
@@seanwiggins Can't you just screen record to capture the audio? You could even include the screen recording, so you don't need to hold the phones in sight; they can just be on the desk facing each other.
I’ve made so many new languages with ChatGPT. The possibilities are endless. Try creating a new language merging Chinese and English without it sounding like Chinese in English; very fascinating.
That does sound fascinating. Going to add this to list of future videos!
Interesting. I think they gave you what they thought you wanted. Consider that all of their agreed-upon “words” were related to emotion; not a single noun, verb, or adjective. Nearly all of the new words were things we infer from inflection, one of the more difficult speaking tasks for AI language from what I have heard so far. Perhaps that’s why they started there, to fill in their existing difficulty with implication and inference in human speech?
Agreed - they need to flesh out verbs / nouns / etc. This is a very basic language so far.
What AI was used?
ChatGPT
The "sounds of silence" gap could be rather disturbing.... moh ha ha ha!
Awesome video
10:09 wtfk 😅
This is computer intercourse at its finest
You should try this again, then separate them and ask them individually what specific meaning they were conveying in their words from that specific conversation. For example, if “hmm?” Means question, what question were they asking. Then you can determine if they were actually having a functional conversation.
Funny how the female AI didn't let the male AI finish his sentences. Seems like a universal law 😂
Indeed
Bro you deserve more subscribers
I wish that AI could create an extremely simple language which could become the language of the world.
This could literally happen - watch my next video - ua-cam.com/video/lilk819dJQQ/v-deo.html
There is already an extremely simple language that was engineered specifically to be universal and easy to learn no matter where you are from. The language is called Esperanto.
Do you think governments want us speaking a unified simple language?
George Orwell already created that simple language, called Newspeak, in the heartwarming hot-mug-of-cocoa feel-good comfort novel 1984.
@@wonmoreminutehaha that novel is anything but feelgood 😅
@@devonrd You’re right, definitely not feel good. Hopefully my sarcasm doesn’t inspire anyone to curl up on a hammock with it for a cozy read. They’ll be disappointed. And anyone who has read it knows that a simple universal language is a bad idea.
Well, at least we don't have to worry about AI making their own language to undermine humans for a while.
This is beyond impressive.
The "she" not only leads the whole way but she's way more intelligent.
You NEED to set up a 24hr live stream of them just going off! That would be fascinating, watching them evolve.