If you use ALPHONSE in your covers, reply to this comment with a link to your cover so I can watch it too!!
Also, I know I've said this countless times, but for new viewers of my video: I am not using Synth V in this cover. I used kits.ai, screen-recorded the SVP project in Synth V, and timed it to the vocals to give the impression that it was being played in Synth V (just like those show-tuning covers in UTAU and VOCALOID, if you know what I mean by that).
Omg, I love (No default voice)!! Thank you for using them 😄 /j
@@Anemonie LOL XD
wtf sounds sick, I didn't expect that from a 320 views video
This was just to test my own AI voicebank a little bit and I am glad you liked how they both sound 😭
never thought i’d see a kpop crossover but absolutely pop off 🔥🔥
@@love.hw4ng thank youuu
I can still hear Solaria-
fine ill go play overwatch again
LMAO
Who is this DIVA 💜
@@The_Oyasumi_girl2 It’s… ALPHONSE 🤩🤩🤩
Please make it a full cover 😭🙏
Unfortunately, the full SVP I downloaded was cut off (I don't know why, but I think it's because I'm just using the Basic version of Synth V). So, I can't make a full cover…
That's okay, we appreciate it! Thanks
Justin Bieber when he was younger omg
basically this voicebank is a justin bieber dupe 😭
Is it possible to make your own voicebank in Synth V?
No.
Dreamtonics has never supported the creation of custom voicebanks for Synth V in the first place…
No, and if you ever see a "Synth V port", it's fake.
@@SucoTeiKun_pohtath exactly!
And I'm sure they just took voice samples from a UTAU or VOCALOID voicebank, which is not a good thing to do.
Let me, once again, take the honor of being the first commenter on this video (Don't mind this HAHAHAHAHA)
How'd you get your voice high pitched?
I just used samples of my voice from when I was 11 years old 😆
Along with that, I recorded an older AI voicebank that uses my current, deeper voice!
Hmmmm.....
what do you think :3?
@@UtaKure the English is good
Though I feel like something like an Arpasing UTAU would have given better quality.
Don't get me wrong, this is perfectly fine
And I can't really speak for you since maybe it is easy
But it does look a bit hard to use
@@ascensiongacha1308 It's kinda easy for me, since I just morphed the reference vocal into the AI model (just like Diff-SVC, so-vits-svc, and RVC). It didn't require the much more complex training process of DiffSinger and/or NNSVS.
You probably didn't ask this, but for the video display, I just screen-recorded a Synth V project while it was playing.
How do you do Synth V voicebanks?
@@dtn44k This is not a Synth V bank; this is an AI voicebank that was trained on kits.ai. After all, custom voicebanks are not supported by Dreamtonics. I just used Synth V to show tuning and synced it to ALPHONSE's singing voice.
@@UtaKure how do you make voicebanks like that in kits?
@@STARR503 Training your custom model on kits.ai used to be completely free. Unfortunately, it has been locked behind a paywall for several months now. From memory, though, I just submitted voice recordings, making sure the dataset didn't exceed 10 minutes (60 minutes of data was reserved for a paid plan back then); then kits.ai does its thing and you just wait until it finishes.
I can still hear Solaria-
That's because I actually used SOLARIA as the vocal reference for morphing into this model ;)