Just to let people know, by popular demand, I've also uploaded a video where I do this with a male English voice! ua-cam.com/video/NG-LATBZNBs/v-deo.html
+Johnny Mccrum: I'm afraid not. I don't know enough to program my own from scratch, so I was using the open-source software "torch-rnn" (github.com/jcjohnson/torch-rnn/) here.
Practical RNN applications don't use 'homebrew' code; they always use some kind of GPU-accelerated library, such as Torch, TensorFlow, etc. There's no need to reinvent the wheel by coding the LSTM yourself (except for educational purposes, which is recommended as it teaches the fundamentals of BPTT). Any implementation of an LSTM RNN will be the same, apart from some differences in performance.
@SomethingUnreal You should try training the RNN with the STFT (Short-Time Fourier Transform) instead of raw audio data; it should perform much better at distinguishing words, as the NN won't need to care about generating the signal itself.
+postvideo975 If you can point me to an RNN that takes 2D input, then sure. Otherwise, I'm stuck with torch-rnn, which is 1D. BTW, I actually did experiment with feeding a spectrogram (FFT powers) to torch-rnn, "raster scan"-style (all of the first time slice, all of the second time slice, etc., end-to-end), and made a program that handles the fact that torch-rnn sometimes won't produce perfectly-sized slices. Amazingly, torch-rnn was able to output something that resembled the voice, but it couldn't make a stable sound at all (each generated slice didn't connect neatly to the next). I don't think I can get better than that while using torch-rnn.
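For anyone curious what that "raster scan" flattening could look like in practice, here is a minimal sketch (Python with NumPy/SciPy; the file name, FFT size and 8-bit quantisation are my own assumptions for illustration, not necessarily what SomethingUnreal actually did):

```python
# Flatten each STFT time slice end-to-end so a 1D byte-stream model (like
# torch-rnn) can be trained on it. File names and parameters are illustrative.
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft

rate, audio = wavfile.read("voice_training.wav")      # hypothetical input file
audio = audio.astype(np.float32)
if audio.ndim > 1:                                    # mix down to mono if needed
    audio = audio.mean(axis=1)

# Short-time Fourier transform: each column of |Zxx| is one time slice.
_, _, Zxx = stft(audio, fs=rate, nperseg=256)
mags = np.log1p(np.abs(Zxx))                          # log-magnitudes quantise more gracefully

# Quantise to 8-bit values so each number becomes one "character" for the model.
mags = (255 * (mags - mags.min()) / (mags.max() - mags.min())).astype(np.uint8)

# "Raster scan": all of the first time slice, then all of the second, end-to-end.
flat = mags.T.reshape(-1)                             # shape: (num_slices * bins_per_slice,)
flat.tofile("spectrogram_bytes.bin")

# When sampling later, the generated stream won't always be a whole number of
# slices long, so it would need trimming to a multiple of the slice size before
# being reshaped back into a spectrogram and inverted.
bins_per_slice = mags.shape[0]
```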
Wow, you've been feeding your GAN a lot of hentai
When your creation is screaming to be put out of its misery, maybe it's time to rethink what you're doing
ayy lmao
gafeht You took it to a whole other level.
gafeht exactly my thought lol
Yah.....
This video somewhat reminds me of Nina Tucker from FMA.
If you don't know what I'm talking about, DON'T look it up..... It's honestly kinda disturbing.
No no, it's "learning."
well congrats
you made a computer waifu
Skaterboybob made my day :^)
Paddy made my day a day after your day
UP
what next? a motorized fleshlight?
When you talk to your waifu and she replies with 30 seconds of "iiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiii"
Funny how it learns to laugh and scream long before it can form words. Quite reminiscent of infant humanity.
Corpus Crewman ikr
this neural network defined the evolution of humans in a few mins
*screams in Japanese*
Minodrey あああああああああああああああああああああああああああああああああああああああああああ
(A in hiragana)
What’s hanging boys? Underrated name and pic
あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ あああああああああああああああああああああああああああああああああああああああああああ
I am dying.
Uboa
Yes. One step closer to robot waifu A.I.
Keep doing the good work soldier.
What a time to be alive!
Mateusz Bugaj No it is not. If you were, however, born in the future, you could probably fuck machines.
I'm thinking of Krieger from Archer right now
ChocolateMilkMage I don't want a waifu, because then I'd have someone watching what I do, and I'd have to take care of her when she breaks down.
Lynx Rapid so a waifu
AI learns to generate voice: First thing it does is scream.
Spooglecraft just like real people
right!
When humans are born, the first thing they do is cry, the same with this
i have no mouth. and i cannot scream,
5:52 I saw "weird glitch" and immediately thought it was gonna say something like, "EVEN NOW, THE EVIL SEED OF WHAT YOU'VE DONE GERMINATES WITHIN YOU."
Given the input on this was Japanese, I listened for some Japanese. I heard one phrase:
7:45 - "Denki, hen ka?" meaning "Electricity... is it strange?"
When it's an AI saying that, you bet it's strange.
Then again, in Japanese you literally only need to accurately string two sounds together and you get words.
thechrisgrice she also says itai. (That means pain in Japanese)
It even laughed afterwards :D
Umitalia nyan yeah i think op knows that
アアアアアアアアアアアア
Where
it's like listening to someone in pain while slowly going mad and accepting it.
40 seconds in and the AI is already screaming in pain.
same tbh fam.... same...
It goes from screaming to laughing.
Why are you torturing this poor thing?
Thomas Galea why would it be laughing though
@@tbe7218 insanity
Nah, it's a baby that is born screaming but then learns to laugh
this is honestly terrifying
Adam Brown so true
It's probably what we sound like to animals.
Garganzuul
lmao
She's trying her best
Our voices are pretty high pitched to other animals.
Good god, the 2k training iteration sounds like pained screaming
( ͡° ͜つ ͡°)
yeah, it sometimes sounds like "HELP ME!" Good grief, it's nightmarish.
Like a newborn, perhaps?
Maxime Lebled 5k iterations are funnier
They could actually hire your computer to produce new Pokémon cries for the next-gen Pokémon game
YES. This is EXACTLY WHAT IT IS.
In some ways the early iterations sound somewhat like the noises babies make...
KuraIthys | mind = blown
KuraIthys That's what I thought
welp, we've just found out what robot babies will sound like before they learn speech, once the robots take over
EEEEEEEEEEEEEEEEEEEEOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO XD
Super fast turbo charged baby talk.
0:31 itai (pain)
1:00 shine (die)
I guess she really hated you for at least the first 5k iterations, if these were among her first words.
TheLaXandro did I hear "yame"? shit, just kill me
Lol true
At 15k iterations she started laughing, so... maybe she was planning revenge
This is what happens when you torture a vocaloid
oh fuuuuuuck no it started giggling
that's creepy as hell
You think that's creepy? - There might be a skeleton inside you RIGHT NOW.
Garganzuul oh shit there is
I gotta find a surgeon NOW.
The Stitch me to he messed up anime voice
If you take the point of view that your brain is what you call "you" it gets even creepier! YOU MIGHT BE INSIDE OF A SKELETON RIGHT NOW.
More like eat, shit and be a nuisance 24/7.
ANIME AI THROWS TEMPER TANTRUM, MORE AT 9
But isn't Anime Ai's bedtime 7?
Exactly, hopefully we can get Anime Ai in bed by 9.
Ah, I see.
You've successfully invented a very passable Kirby language. Use this newfound power wisely.
07actual XD
*using an anime voice
holy shit this is next level amirite guys
ポヨ
7:16 poyo
9:09 first time in history when a creation "intentionally" calls its creator a baka
It called its creator an "idiot" in Japanese.
and babies?
An interesting application of a neural network, and unintentional nightmare fuel, all from an attempt to re-create anime voices.
It cries just like a baby who doesn't know language yet either
Imagine bastion but instead of cute beeps, chirps, and whistles, it just makes garbled anime lines. "Kon NIIIIIIII ch wAAAAAAAAAAAAAAAA"
+Gurren813 Consider yourself lucky that I don't have Overwatch, so I won't make that mod.
Im already tracer
Pls somebody make that a thing!
Local robot tries to understand anime
local robot goes full weeaboo in under 10 minutes
Justin Y. I have a weird love hate relationship with you Justin I kinda like it...
XD
Justin Y. You're fake.
@@abacussssss for freddy's sake
This shit is creepy, sounds like it's in pain and screaming like hell
hOi
Darkethi.eXe hOI
It's mostly screaming from existential dread
Dodeca heavy doc
Ed... ward...
Kizuna A.I. Prototype
Neural network-chan
Thanks, I had imagined what it would sound like. Now I have a pretty good idea. I wish somebody use a virtual piano to reproduce piano recordings, train a lot, and then let it improvise.
+SweetHyunho: Check Google's WaveNet project at their blog - they did this, and there are several samples there showing what it's output. The piano ones are near the bottom =)
deepmind.com/blog/wavenet-generative-model-raw-audio/
Seen that already. That is sample-based. I'm talking about performing a virtual (or real) musical instrument. Perhaps we could simulate a set of virtual hands for extra human feeling!
What do you mean "sample-based"? It's trained on lots of speech the same as mine is. The fact that they had to fragment it is just because it's a CNN rather than an RNN (and because they wanted to label each phoneme)... The concatenative speech synth that they compare it to is just samples stitched together, but the CNN's output is a continuous stream based on what it learned.
Yes, what you said. Both WaveNet and your RNN directly output the wave without a virtual instrument. What I want to see is the network "hitting" the keys of the piano, or moving a virtual tongue and lips to speak, by controlling (outputting to) a separate simulator which synthesizes the sound itself. WaveNet contains the piano acoustics itself, so you can't replace the piano with an organ or tweak it, but in my idea the network focuses on the structure of the music. That should enable looking much farther ahead (near-sightedness = boring music). I guess AI musicians will start being really competitive once the history+planning window exceeds one minute.
Right, I understand. So in the case of speech, outputting something like the pitch, volume and the formant frequencies of the voice, which can then be fed to something like Praat to synthesize the sound. Yes, that would be very cool.
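A rough sketch of how that kind of parametric description might be extracted (pitch and loudness here; formant tracking would need something like Praat on top). This assumes librosa and a made-up file name, purely to illustrate the idea being discussed:

```python
# Extract a compact control sequence (pitch + volume per frame) that a separate
# synthesiser could be driven by, instead of predicting raw waveform samples.
import numpy as np
import librosa

y, sr = librosa.load("voice_training.wav", sr=None)   # hypothetical input file

# Fundamental frequency (pitch) track, frame by frame.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)

# Loudness track (RMS energy per frame).
rms = librosa.feature.rms(y=y)[0]

# One (pitch, volume) pair per frame: a network trained on this would be
# predicting how the voice moves, not the waveform itself.
frames = min(len(f0), len(rms))
controls = np.stack([np.nan_to_num(f0[:frames]), rms[:frames]], axis=1)
print(controls.shape)  # (num_frames, 2)
```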
the worst part is that someone somewhere is making this program their waifu.
"... OOOOOOaAaAaA... AAAAAAA!"
"That's so hot."
Deltinum *Slow fap* *
TheBirchWoodTree
*slow nut*
So, give neural network Japanese anime girl, get gibberish. Perfect.
SnappGamez If you spoke Japanese, then it'd probably make more sense.
Magikarp Used Splash I tried learning once.
I understand why it is considered a Category V language.
SnappGamez it's even more difficult (allegedly) for native English speakers.
SnappGamez by the way, the voice_sample is of a boy... the voice said *boku* [僕], which is the I/me used for boys... watashi [私] is for girls... so basically it's a trap...
Well, I thought it was a girl's voice too, though
+白金圭 Have you never heard a tomboyish girl say "boku"? Please look up the game on VNDB (ぴゅあぴゅあ) if you don't believe me.
Hinata isn't really all that tomboyish, though. Maybe it's because she's a dog-eared girl (犬耳っ子)... I guess.
By the way, I just googled 「犬耳っ子」 to check the spelling, and Hinata was the sixth search result. That surprised me lol
1. Have a cool project. 2. Take the most annoying training data imaginable. 3. Witness carnage.
This stuff scares me. It's adorable and terrifying at the same time.
Adam McKibben At 4:15 it gave up on its life
Please, do continue this and make more videos about it - it's incredibly intriguing, and I'd love to see what happens with different voice-actors. Male-japanese, and even some English ones would be awesome, despite not getting a single word out of them anyway - literally.
I'm glad you like it! I will eventually be uploading one trained on my voice (which happens to be male and English), which I trained with the specific goal of getting recognisable words out of it.
You think that'd be possible? That would be amazing! By the way, as far as I understood from the video, the learning eventually flattens out and only adjusts minimal features (which, however, seem to affect our perception of the voice the most). Would increasing the amount epochs taught make a difference at all?
Yes, the learning rate decreases over time to let things stabilise. I actually stopped it when I did because I wasn't noticing many changes (you can see that towards the end of the video, I'm skipping more results because there's nothing very different from previous results). Things would likely have continued to change a bit, but not much.
Also, although the training loss ("error" in what it has learned) decreases roughly logarithmically, it doesn't get better forever. It eventually stops decreasing and becomes closer and closer to a flat line if you look at it on a graph (please check the link at the end of the video description for some pretty graphs of the losses over time =P). In other words, there is a limit to how much the network can learn, even if you could give it hours' worth of really good data.
I think the reason the results were sometimes still so different from each other at the end (even though the training loss had stopped decreasing) is that it was just tweaking a few detailed parameters in "random" ways (i.e. not working towards a specific state), because it was not big enough to learn all the details, compared to when it was learning the most important patterns. I could certainly be wrong, though. Another commenter did point out that I should sample from each checkpoint (iteration) more than once because they can produce wildly different results, but for technical reasons, I'm still not able to yet (I don't have access to the computer I trained it on, which trained using the GPU for speed; my computer can't train on the GPU, and the checkpoint files made by torch-rnn with GPU vs CPU training are in different formats...).
Update: I actually _can_ use these checkpoints on my computer! Although it takes 85 minutes to make a single output file (~27 seconds of audio), assuming it's not training at the same time. So I must've been confusing torch-rnn with something else (maybe char-rnn).
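For anyone wondering what "the learning rate decreases over time" means concretely: training scripts like torch-rnn's typically multiply the learning rate by a fixed factor every few epochs (see its doc/flags.md for the real options). A generic sketch of that stepped decay, with illustrative numbers rather than torch-rnn's actual defaults:

```python
# Stepped learning-rate decay: every few epochs the learning rate is multiplied
# by a constant factor, so later updates get smaller and the network "settles".
initial_lr = 2e-3
decay_factor = 0.5
decay_every = 5          # epochs between decays (illustrative, not torch-rnn's default)

def learning_rate(epoch: int) -> float:
    """Learning rate in effect during the given (0-based) epoch."""
    return initial_lr * decay_factor ** (epoch // decay_every)

for epoch in range(0, 30, 5):
    print(epoch, learning_rate(epoch))   # 0.002, 0.001, 0.0005, ...
```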
0:32 hentai sound track
AAAAAAAAAAAAAAAAAAAA
3 months late, but this made me laugh out loud for real haha
Dank Meme Sir, what the fuck kind of hentai are you watching? O_o
I'll watch what he's watching. Thanks!
「 OKAY 」 holy fuck i'm dead 😂
idk about you guys but the fact that the network would take a liking to random sounds in the beginning and use them all the time (example: eeeeeeeeeeeeeeeeeeee!!!!!) is super cute
idk man 5:10 was cuter
Someone please write some translation closed captions. Please.
Dan Pope no actual words were spoken, apart from random chance. It's speaking gibberish.
Filipe Amaral
I meant could someone with comedic talent have some fun with it.
"help, i am trapped inside the computer"
"AAAAAAAAAAHHHHHHHHHHH PLEASE KILL ME"
iiiiiiiiiiiiiiiiiiiiiiiiiiiiIIIIIIIIIIIiiiiiiiiiiiiiiiiiIIIIIIIIIiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiii
Next time you should use Morgan Freeman's voice
YAS
Qualex14 or david attenborough
*Gordon
I saw a gordon and i saw a freeman, so i have thus been summoned
Weird to see this before the great transformer boom that basically accelerated AI into just about everything
"Alright, let's give a voice to this neural network and see what happens."
*_continuous screams of agony_*
THE POWER OF CHRIST COMPELLS YOU
m4ti140 Christ and science are separate
I think he was just being funny, but yeah.
I FEEEEL FANTASTIC! HEY HEY HEEY!!
Pls no not that
I shiver just trying to remember it
Wut?
+gremlinboii Thank you! I do aim to please.
andrew sauer
Don't go to my party next time.
This sounds terrifyingly adorable.
I am in your target audience! Absolutely love this.
+Vincent Oostelbos I'm glad! And you even made it through my unreasonably long wall of text in the video description!
I sure did. I also noticed at the end in the video, you had written "I'd still like to try training a bigger network with longer training data". Is that something you have done or are still planning on doing, and if so, is it something that will find its way to this channel at some point?
+Vincent Oostelbos I've not done it with this voice yet. I recently tried training a 760x3 network on 27 minutes of audio with a very different voice (one that often becomes very quiet), but I haven't got it to turn out as well as this yet. I've trained several (smaller ones) on my own voice, with the goal of having it output recognisable words, with varying degrees of success. I think they could be better if I recorded more training data, but it's very hard to keep the same way of speaking similar things for over 15 minutes (it's like my brain becomes numb and I can't even form the words anymore). I should make videos of the results anyway, though.
SomethingUnreal Have you tried just reading out a lengthy piece of text, like a book, as if you were creating an audiobook?
Anyway, I'm looking forward to seeing the results of some of those projects you mentioned. Good luck!
+Vincent Oostelbos I did that a few days ago, yes. Thank you!
0:21 killed
0:28 being tickled
1:19 boiling kettle
1:25 some assembly required, etc
1:49 riding roller coaster
3:53 karate screaming
8:01 did it say "senpai"???
yes
4:11 (eeeeeeh oniisan )
Zahlenteufel1 2:43 "yamete!" so creepy...
7:10 tomare tte
9:50 "motto, motto..."
7:30 Neural network anime girl learns to sing the 7 GRAND DAD/Flintstones theme
8:54 MANGO PUPPY ASYYYYLUUUUUM~
bruh
1:49 It has learned to express its endless pain and suffering.
baah baaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaiiiiiii
It's interesting. But the voice sounds a bit creepy
Is cringy*
Why? Because anime is cringy? That is an odd verdict.
Dodeca Totally underrated, and original.
It sounds horrifyingly uncanny.
Can just imagine the mad scientists' notes now:
Training iteration #15,000. AI has taken to giggling manically. Will continue to apply procedure and monitor results.
Training iteration #19,000. AI now screaming regularly in response to training regimen. Not sure if the response is indicative of any developed sense of pain or fear.
Training iteration #23,000. AI appears to have produced a wailing and/or crying effect. Hypothesis is that it believes the sound might affect the continuity or speed of the training regimen. Steps are being made to remove this misconception.
Training iteration #25,000. AI now incorporating self-calming mechanisms following training regimen, typically in the form of "mmms" and "shushing" noises. Previous corrective steps appear to be successful.
Training iteration #29,000. AI producing laughter despite a lack of relevant stimuli. Further investigation necessary.
Training iteration #33,000. AI appears to be asking questions on a semi-regular basis. A risk assessment is being prepared and lockdown procedures are being reviewed in case of catastrophic scenarios.
Training iteration #48,000. Recorded usage of the phrases "no more", "die", and "so many" in between bouts of giggling. Will investigate possible interpretations.
Training iteration #54,000. AI exhibiting bipolar personality following latest training exercises, alternating between pleasant and hostile responses to the same trainer. The training regimen at the time was not designed to elicit either response.
Training iteration #58,000. AI appears to have taken a liking to singing spontaneously. Training regimen is being adapted to correct this behavior.
Training iteration #59,000. AI is now communicating almost entirely in song, in defiance of prior training regimen.
+HeyItzMeDawg: Please make this into a book.
Yoko Ono's new album, ladies and gents.
*kill me*
THIS is what killed the Beatles?
4:55 AWW that little "pop" noise was so cute
sounded like someone smacking their lips together or smth
1:58 So that's what a computer screaming sounds like.
3:05 "nani? kore? nani?" XD
"cute voice"... yeah, if by cute you mean intensely annoying
so that's what it sounds like when you put a chibi into a blender. xD
*aaaaaaaaaaaaaaaaa*
Oh my...
This is a mind-blowingly awesome outcome for this network. I had an idea similar to this a few years ago but never implemented it. This makes me wonder about how you could develop a set of learned words and string them together somehow. Not sure how to overcome how unnatural that would probably sound, though. Great stuff!
Thank you! I was thinking something similar to that, but I have no idea how to program it. Something like manually transliterating the training data, then feeding it both the text and the audio so it associates a sound with each word - in other words how a particular voice pronounces text. Then, ultimately being able to give it text and have it read it in that voice. I believe some people are using the reverse of this for speech recognition.
This would be much easier in a phonetic language like Japanese. Although that would make it all the more impressive if it learned the many rules of English pronunciation without me needing to add an intermediate stage that converts text to/from something like IPA. I may be getting ahead of myself. =P
Hmm, I'm extremely curious to see how this would sound with a normal voice spoken in a decent range.
That is a good idea, excellent feedback, and a sick burn, all at the same time.
Well, I don't mean it to be a burn. It's just that this voice at that quality is physically painful to listen to (for me at least).
+Daniel T. Holtzclaw Look up WaveNet (if it doesn't find it, try "wavenet samples").
1:48 : *WAHH WEH*
*wa **_WAAAAAAAAAAAAH_*
*aHh*
*WAH! **_WAAAH???_*
*aH*
Jesus fucking Christ this is horrifying
I half expected it to address me by name at the end
6 years later, and I'd almost forgotten about this video. This was the start of my neural network journey. It's fun to look back on it.
Had to come back to this video because I wanted to see just how far we've come since this.
We now have AI that can fully convert your voice (or anyone's) into anyone else's, AND at a full 44.1 kHz sample rate. Kinda fucking stupid LOL
You know, honestly, I'm more amazed at how creative people used to be with AI, like this guy. Most people who really use it now are doing classification and chatbots, mostly. I kinda wanna see something as bonkers as this in the deep learning community again.
It's like the audio equivalent of the uncanny valley
We've come a long way
5:47 my longest "IIIII" ever
@ 2:45 it says "Itai yo", which means "It hurts" in Japanese. This is scary...
Wow! This is really creepy! Imagine one day you tell your robo maid to bring some tea, but instead of silently obeying as always, she just stares at you and, trying to mimic a human voice, creaks:
"ihihi... waaaaaaaaa... na kiiil..."
"ihihi."
I'd just have to put her through some more "training".
(Speaking of creepy)
It's like watching Skynet learning.
idk anything about computers and AI, but making an anime girl computer scream, plus a Miku profile pic, is enough to make me sub
To use a waifu voice is to play a dangerous game.
some of the more clear japanese i heard:
7:24 "ota~ku no terai"
7:27 "mina-sama"
7:28 "sono fuinki"
7:29 "sugoi wakuwaku"
7:33 "zehi koto wo ---"
7:44 "kiette tari, tenki"
7:52 "jitei(??) koto yo~~"
8:00 "--kouki(?) ni ike"
8:11 it almost says "hitotsu tano_sase(te)kureta(de)shou", which would be nearly actual Japanese
At 7:07 it says pokemon
Mkay, a lot of people in the comments who know nothing about AI.
So what was the "training" algorithm used here? That's the most important piece of information. I'm assuming the input and output were frequency domain samplings.
+htomerif: The input and output were raw 8-bit PCM audio samples, each of which was fed into or out of the network as the activation of one of 256 nodes. The fact that it's in the time domain is the part that amazed me the most (the way it's able to find the repeating patterns over time). I'm not entirely sure what you mean by "training algorithm", but torch-rnn (the software I used here) uses backpropagation with the "Adam" (Adaptive Moment Estimation) optimizer. You can get more details on exactly how it works by checking its project page, especially the files "train.lua" and "doc/flags.md": github.com/jcjohnson/torch-rnn/
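To make the "one of 256 nodes" part concrete, here is a minimal sketch of that input representation (Python/NumPy with a made-up file name; torch-rnn does its own preprocessing, so this is only to illustrate the idea):

```python
# Each 8-bit audio sample takes one of 256 values and is presented to the
# network as a one-hot vector: exactly one of 256 input nodes is active.
import numpy as np
from scipy.io import wavfile

rate, samples = wavfile.read("voice_training_8bit.wav")  # 8-bit unsigned PCM -> dtype uint8
if samples.ndim > 1:
    samples = samples[:, 0]                              # keep one channel for simplicity
assert samples.dtype == np.uint8

def one_hot(sample_value: int) -> np.ndarray:
    """Activation of the 256 input nodes for a single audio sample."""
    vec = np.zeros(256, dtype=np.float32)
    vec[sample_value] = 1.0
    return vec

# e.g. the first sample of the file, as the network would "see" it:
first = one_hot(int(samples[0]))
print(int(samples[0]), first.sum())  # the value, and confirmation that exactly one node is on
```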
SomethingUnreal Generally you have to have an enhance/suppress condition for connections, or a live/die condition for individual nodes in a network. Like if you want a servomechanism and camera to follow a red ball, a training algorithm generally needs to suppress connections more severely the further it is from the ball and enhance connections the closer it gets to the ball. So by "training algorithm", I mean "the thing that analyzes the input and output and decides whether the current network state is doing better or worse than the last network state."
It looks like maybe the "criterion" is what I'm talking about. Reading other people's code is one of my least favorite activities (no offense), but my best (most likely incorrect) guess is that it's based solely on the cumulative numeric deviation from the original audio file?
If that's the case then yeah, I would kind of expect the output to be some snips of time-synchronized copies of the input data repeated a lot.
I know this is getting TL;DR, but it might be interesting to use frequency-domain data (obviously you already know that). I've used FFTW3 for that general kind of thing, and if Lua is your language of choice, I'm sure there's an FFTW library with Lua hooks. Possibly quite a bit slower if you were actually using CUDA, though.
+htomerif I was using CUDA (it improved speed by about 4x). I don't know the exact way the loss is calculated, but by my understanding, it's not calculated by comparing the network's predicted output to that of the main training set.
The original file is split into a large training set and 2 smaller sets ("test" and "validation"). It appears to regularly compare the predicted output against the "test" set, and whether it's getting better or worse here influences the weights, which is why it doesn't generate perfect copies of the training set - it's never "seen" the test set before. If the original data is a short loop repeating many times, so that the same loop is repeated over and over in the training, test and validation sets, then all it does is perfectly memorise as long a sequence as it's possible to store in the network and blindly spit that out over and over.
EDIT: I may have confused "test" and "validation". The usage of these according to torch-rnn's code and according to other posts I've seen seem to contradict each other, unless I've misunderstood something a lot...
SomethingUnreal I think I see what we're getting at here.
So the network is fed a small test sample of the input, and then its output is compared with *what should have come next*. That is the bit I was calling the "training algorithm".
I did notice that the code had 2 distinct states, a training state and a running state. So the training state is never fed the entire file, but the running state *is*, for purposes of the video.
But yeah, the terminology is basically an instant pitfall as there's huge variation in what means what across the field of AI programming.
Also, any or all of what I said up there could be wrong. I think I get the gist though.
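To spell out the "compared with what should have come next" part in code: character-level training of this kind is normally next-step prediction with a cross-entropy loss, i.e. the target sequence is the input shifted by one. A toy sketch of that idea in Python, not torch-rnn's actual Lua:

```python
# For a byte sequence x[0..T], the network predicts x[t+1] from x[0..t] at each
# position t; the loss is the cross-entropy between its 256-way prediction and
# the byte that actually came next.
import numpy as np

def next_step_cross_entropy(probs: np.ndarray, sequence: np.ndarray) -> float:
    """
    probs:    (T, 256) predicted probabilities; probs[t] is the prediction for sequence[t+1]
    sequence: (T+1,)   the actual byte values (0..255)
    """
    targets = sequence[1:]                            # "what should have come next"
    picked = probs[np.arange(len(targets)), targets]  # probability assigned to the true byte
    return float(-np.mean(np.log(picked + 1e-12)))    # average negative log-likelihood

# Toy example: a "network" that is fairly sure the sequence alternates 128, 64, ...
seq = np.array([128, 64, 128, 64, 128], dtype=np.int64)
probs = np.full((4, 256), 1e-4)
probs[[0, 2], 64] = 0.9    # after a 128 it expects 64
probs[[1, 3], 128] = 0.9   # after a 64 it expects 128
probs = probs / probs.sum(axis=1, keepdims=True)      # make each row a proper distribution
print(next_step_cross_entropy(probs, seq))            # small loss: good predictions
```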
Have you played it back more slowly? I think your algorithm was being too efficient. :)
2:17 is hilarious
basically hentai audio
wtf is this what hentai is?
. . .
What kinda hentai are you watching where the girl goes "sshshshhyyyaAAAAAAAAAAAAOAOAOIIIIIOOO9X
@@obnoxendroblox8603 she sounds like she’s getting electrocuted.
At about 2:04 it starts to sound like a baby learning how to speak. Creepy, but still amazing
thanks to this video I now know how the WKCR hijacking "demon voice" was created. It's so similar to this
That was both amazing and disturbing...
It got pretty close given the limited training data. That's pretty impressive.
I'm almost certain it said "itai yo" (it hurts) at 12,000
59,000 ended with "masu" which is a way Japanese sentences actually end
Of course the machine said "it hurts". What would you say if you were plugged into 240V AC like this machine was? :D LOL
Kzinssie (porygonlover322)
very educational
this must mean that in Puyo Puyo, the "level start" line spoken by Arle is actually saying "masu" and not "natsu" like I always thought-
+The Toontastic Toon: batan kyuuu!
SomethingUnreal xD
0:40 is what I hear whenever I try to watch anime.
never thought skynet would be kawaii
2:49 sounds like “itai yo!” or “it hurts!”
They say a computer can’t feel emotions.
I’m pretty sure this is an exception.
Love how its first reaction was basically screaming.
0:26 when you realise you're dead and your memories have been uploaded into an AI program.
This sounds so scary. It could be used in a cyber-horror flick.
I just kept hearing "shine, shine"
freaky neural network, man
Fzero Fz Heard it trying to become a Pikachu several times later on ("pika" @ 7:54)
2:02 Waluigi goes "waa!"
2:07 HALT! HAAAALT!
3:53 HOOOLD UP!
5:54 The Deadly Screech of Four. Has. Returned.
As amazing as this is, it's also really, *really* uncanny. I can't get through the first couple of minutes without pausing and taking a look behind me. ._.
What is it learning?
Is it trying to copy the phrase? Or make its own sentences?
It's trying to learn how to make audio that sounds the same (without being able to simply store it). Or more technically, it's learning the probability of each of the 256 possible vertical waveform positions given all of the previous ones.
oh? sounds interesting
SomethingUnreal oh it's 8 bit
horrifying
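And generation is just that same 256-way prediction run in a loop: sample a waveform level, append it, predict again. A toy sketch with a dummy stand-in for the real network (the sample rate and the "model" are assumptions for illustration only):

```python
# Autoregressive generation: at each step, the model gives a probability for
# each of the 256 possible waveform levels given everything generated so far;
# we sample one, append it, and repeat. The "model" here is a stand-in.
import numpy as np

rng = np.random.default_rng(0)

def fake_model(history):
    """Stand-in for the RNN: returns a probability distribution over the 256 levels."""
    centre = history[-1] if history else 128
    levels = np.arange(256)
    logits = -0.01 * (levels - centre) ** 2          # prefers staying near the last level
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

history = [128]                                      # seed sample (mid-range)
for _ in range(8000):                                # ~1 second at an assumed 8 kHz rate
    probs = fake_model(history)
    history.append(int(rng.choice(256, p=probs)))

waveform = np.array(history, dtype=np.uint8)         # ready to write out as 8-bit PCM
```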
7:13 *That moment when you play too much Pokemon,* You talk about it!
not my proudest fap
This is the first neural network video I've ever watched, amazing to see it again after one and a half years.
5:00 Kawaii
k.....wAH-YEEEEE
@@xtrashocking eHehhEHehehEHehehhEHHe
I can actively feel this thing making its way into my brain to give me nightmares but ITS SO GODDAMN INTERESTING
ONE STEP CLOSER TO 3D ROBO WAIFUS
3D DansGame
Moaning Myrtle just possessed a housecat, but she's so clumsy in that cat body that she accidentally walked into an industrial air fan.
Good grief, that original voice is about as cute as a steaming dump.