Hey glad you explained this at 7:22. Neat idea.
This thing is capable of so much. Clever way to incorporate AI. Incredible tool to add textures and shape timbres. Good job!
I did not know Warp even existed, so without this new firmware I would not have found it in my YouTube feed. Thanks for putting yourself high on my wishlist ;-)
Fantastic addition to Warp, Thomas, well done. Loving the new AI-enabled wavetables 🎉
Just purchased, looking forward to playing this!!
@mrparksy Thanks! Have fun 🤩
Impressive! I imagine the hours and the amount of work that lie behind such a creation... Hats off!!!
Thanks 🙏
Downloading the new firmware and locking myself up in the mancave for the next few days 🤩
Nice work Thomas, Warp was my favorite module out of 14U 104HP worth of racks even before your latest firmware. I appreciate your hard work and dedication, the improvements are fantastic! I especially noticed the stereo imaging improvement, Warp sounds even better. Chris Stell Valencia, CA
I wondered what A.I. could bring, and I've got to say it looks very appealing. Congratulations on this impeccable instrument!
I had this idea of making an AI to predict rhythmic sequences based on your current clock, or maybe even an audio input. Cool to see you actually doing synthesis with it.
It always comes down to the question: what would your training set look like? Can you provide a huge number of pairs of a given input and its "correct" or "good" output, and of course, is there some kind of pattern that is learnable at all?
Well this is a very nice surprise. Thanks for your ongoing work!
Woah! The whole idea of "training" it is so fascinating! Would be so curious to hear variations based on differences in training. This just makes me curious, curious, curious!
I love when people develop AI that isn't trying to be a magic one-button-press solution, but something actually useful. It's so sad to see the underlying tech being wasted on shallow content machines by Silicon Valley. Unlike theirs, which only exist to drive speculative investment, things like this help people be more creative and are what the future of music technology will be like. Sorry for my rambling, I just wanted to say that I'm so happy whenever I see a genuinely good use of this tech made by people who actually care about music. It makes me feel like there is still hope for humanity. Keep up the good work! ❤
Thanks! I totally feel the same. Indeed, here too it was a long search for a problem to solve while the solution was already there 😅
Updating my firmware tonight! So excited!!
Wow this is just fucking awesome! Glad I bought it last week!
Very interesting module, I didn't have it on my radar at all. Sound-wise it's exactly my taste.
Exciting updates! Good to see regular improvements.
Still praying for the day that each additive osc has a minimum of 512 partials, for detail in the uppermost harmonics when using low pitches for sound design!
Innovation, awesome!
1. You could train the AI on voice and violin etc. spectra, right? You could have a morph mode.
2. It'd be fun to use your finger on the screen, and even use pressure to traverse a layer-cake 3D wavetable galaxy. Or better yet, with your toe/foot as you play a MIDI piano. And have a path memory so Warp can evolve on its own. Or hand/head tracking in space.
2) that's a super nice interface idea.... :-D
1) With only wavetables it's not really possible to make it sound like a voice, violin etc., as you need noise partials, and harmonics that are not exact integer multiples of the fundamental. In other words, organic sounds are way more complex than a wavetable ;-) Softsynths that use a phase vocoder go in that direction, like the NSynth project by Google Magenta.
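A minimal NumPy sketch (an illustration, not code from Warp) of why a single-cycle wavetable is locked to exact integer harmonics: resynthesizing one cycle additively can only place energy at integer multiples of the fundamental, so the noisy and inharmonic components of a real voice or violin have nowhere to go.

```python
import numpy as np

# One wavetable cycle resynthesized additively from 128 partial amplitudes.
# Every partial sits at an exact integer multiple of the fundamental, which
# is why noise and inharmonic content cannot live inside a wavetable.
TABLE_SIZE = 2048
N_PARTIALS = 128

amps = 1.0 / np.arange(1, N_PARTIALS + 1)         # e.g. a saw-like 1/n spectrum
phase = np.arange(TABLE_SIZE) / TABLE_SIZE        # one cycle, 0..1

cycle = np.zeros(TABLE_SIZE)
for n, a in enumerate(amps, start=1):
    cycle += a * np.sin(2.0 * np.pi * n * phase)  # harmonic n: integer multiple only

cycle /= np.max(np.abs(cycle))                    # normalize to +/-1
```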
@neuzeitrecordings Vital can create voice wavetables lmao
Yes, that's right, but with only a wavetable you will get some kind of "robot voice", and I think the question was more related to the reproduction of realistic sounds. BTW, Warp also ships with some Vox wavetables on board.
On all creative synths I really would love to see a snapshot file system, meaning you have one main preset, but then - like in a subfolder - you can store various snapshots and jump through them (did somebody just call for snapshot morphing?)
This looks great!
One question: is there an option to crossfade between presets like the Erica LXR drum does? With the LXR, you can create completely different kit presets, save them, and then apply a slewed crossfader to morph between them. Does the Neuzeit offer such an option, or could it in a firmware update?
Thanks!
That would be an awesome feature, but honestly I don't see that coming, as currently we are already squeezing every bit of CPU power and hardware acceleration out of that thing. And it is a very capable CPU (ARM core at 600 MHz). Handling a second preset at the same time would probably be too much 😉
@neuzeitrecordings Thanks for replying! No problem, there is always another way to do things in modular. I'll look a bit more deeply into this.
The Hartmann Neuron used a neural network. Very cool and strange synth.
Many people think the Hartmann Neuron was an unsuccessful synthesizer, but little do they know it actually became sentient and is now known as Elon Musk.
Never heard of that one, but it looks like a really unique piece of gear!
@neuzeitrecordings You can get the VST and use a screen mapper to map the orbs.
Which module are you using for the intro music?
@ES60Hz The Warp, of course!
Let's be real: it doesn't matter whether you're the first guy to put AI into a Eurorack module. What matters is whether it's a good module, and that is not measured by whatever "dazzling" technology is inside it, but by things like the quality of sound, its feature set, its user-friendliness, the workflows it supports, perhaps its versatility, etc etc etc. I don't know your module and I can't speak to how it performs, I'm just making a general point.
Very nice work! But you're not the first to use AI/neural nets in a module; the Siren drone oscillator by the anonymous developer uses neural network-based oscillators.
Interesting. I can't wait to try these AI features!
A question: it would be great if we could put in our own data for the learning process. Even more: some sound files. (Maybe too complicated to do on the module?)
As much as I love DIY and openness towards third-party developers, implementing an API for user-trained NNs was kind of overkill here ;-)
A few more thoughts on the technical side:
Offering users the option to train with their own data makes total sense when a NN is supposed to produce a specific kind of output, e.g. AI software that imitates voices, instruments, etc. Here, however, if you trained with WTs that only consist of sparse sine waves, the NN would only be able to produce that kind of WT, no matter what you put in and how you dial the rings. That would be pretty boring. So I found that it is best when the NN can produce as much variety as possible, and therefore balanced the training set and the size of the NN for that purpose.
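To make the input/output shape concrete, here is a purely illustrative NumPy sketch of a tiny decoder-style network mapping a handful of control values (a hypothetical stand-in for the ring settings) to a 128-partial overtone spectrum. The real Warp network, its size, and its training are not described in this thread; the point is only that such a network can never emit spectra outside the family its training targets cover.

```python
import numpy as np

# Toy decoder: a few control values in, 128 partial amplitudes out.
# Purely illustrative -- the actual Warp network is not published here.
rng = np.random.default_rng(0)

N_CONTROLS = 4    # hypothetical stand-in for the ring settings
HIDDEN = 64
N_PARTIALS = 128

W1 = rng.normal(scale=0.5, size=(N_CONTROLS, HIDDEN))
W2 = rng.normal(scale=0.5, size=(HIDDEN, N_PARTIALS))

def decode(controls: np.ndarray) -> np.ndarray:
    """Map control values to a non-negative, normalized 128-partial spectrum."""
    h = np.tanh(controls @ W1)
    spectrum = np.maximum(h @ W2, 0.0)        # partial amplitudes can't be negative
    return spectrum / (spectrum.max() + 1e-9)

spectrum = decode(rng.uniform(-1.0, 1.0, N_CONTROLS))  # shape (128,)
```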
@neuzeitrecordings Yes, I understand.
It would be nice to send a long sample, for example, and have something (on the computer, not on the module?) deduce waveforms, which are then used for training.
But yes, it can give disappointing results if the input sound is not rich enough in variation, or too short, for example.
It's also that I don't really like it when the tool decides for me when it comes to creation (presets, automatic chord progressions, etc.).
Hi, unrelated question, but I have two Quasars and the OLED screen is sort of burning out. Is there some sort of screensaver mode that disables the OLED screen, or some way I can eventually get this fixed? Because it's going to wreck itself.
Hi, please drop me an email with a photo to get in touch about this: contact[at]neuzeit-instruments.com
Just released a new firmware for Quasar that has a screensaver on board! The main feature however is a stereo delay. The FW is available on our website.
@neuzeitrecordings Will do, thanks!
Oh wow!
🖤🖤🖤
Interesting. It seems an ANN makes sense here. I will try it at Superbooth if possible.
Won't be at SB in person this year, but Vermona have a Warp at their booth in the Bungalow Dorf area. It does have the latest firmware; they pair it with their Melodicer sequencer.
Neuzeit 9 positions of the tongue?
Neat.
Where did you obtain your training data? That is an important aspect that gets overlooked. Is it data you've built yourself? Commercially available music which has not been licensed? "Public domain" sounds?
Is this a plagiarism machine in eurorack format?
Haha, no, it's not a plagiarism machine, although this would probably pick up the "Zeitgeist" pretty well. The training set consists of overtone spectra, so each set is basically just 128 float values that represent the amplitudes of sine waves. It was built from tons of GalaXYs that I built on Warp itself with the included GalaXY editors, then mangled with a Python script for further processing. I also used some GalaXYs that the beta testers and preset builders had already made. It was also mixed with some mathematically generated overtone spectra that seemed to be important.
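For anyone curious what "mathematically generated overtone spectra" might look like in that 128-float format, here is a hedged Python sketch of a few textbook amplitude series (saw, square, triangle). The actual script and spectra used for Warp's training set are not published, so treat these formulas as generic examples, not the real data.

```python
import numpy as np

N_PARTIALS = 128
n = np.arange(1, N_PARTIALS + 1, dtype=float)   # harmonic numbers 1..128

def normalize(a):
    """Scale a spectrum so its loudest partial is 1.0."""
    return a / a.max()

# Textbook overtone spectra expressed as 128 partial amplitudes
# (illustrative only -- not the actual Warp training data).
saw      = normalize(1.0 / n)                               # all harmonics, 1/n
square   = normalize(np.where(n % 2 == 1, 1.0 / n, 0.0))    # odd harmonics, 1/n
triangle = normalize(np.where(n % 2 == 1, 1.0 / n**2, 0.0)) # odd harmonics, 1/n^2

training_rows = np.stack([saw, square, triangle])           # shape (3, 128)
```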
@neuzeitrecordings I feel like "Zeitgeist" could be a name for a future module. ;-)
The last AI I came in contact with tried to kill me...
That AI did kill you, but it replaced your brain with a simulacrum that doesn't notice that it is now an AI brain.
😂 I am pretty sure that Warp is not smart enough for that. It uses around 100k neurons; a bee has ~1 million…
I just fell in love ... ❤... again 🫣