Dave Smith’s greatest contribution to the world of electronic music-making devices
Indeed. That and the Prophet 5
Far more important than Dolby Atmos! 😎
Whatever noise reduction Nick's team uses is amazing. It was so loud right there, with booths on either side of us, that neither Andrew nor I could really hear Nick at all.
Man, this is killer. I'm on a Mac, but the whole MIDI universe is the most exciting thing ever.
I’m BEYOND excited about MIDI 2.0
So excited for this, and can't overstate the importance of Microsoft being so closely involved. Please also do audio; we want integrated devices like on Mac!
Simply standardizing which CC values correspond to which controls/parameters would be a great time-saver and quality-of-life improvement for musicians.
I'm sure this used to be a thing (CC74 for Cutoff etc).
@@MatusFinchus Yep. That's part of the General MIDI standard so only applies to GM devices, which are rarely used outside of education and some types of games these days. Profiles are more flexible for sure.
It would also be very beneficial if ASIO were incorporated into the OS, making latency a thing of the past. Right now we have to use ASIO4ALL, FL Studio ASIO, etc.
I wonder if MS will bake in a Windows ASIO or something like CoreAudio on Mac. Currently there are plenty of things you gotta do in Windows to make it work, and even then you can get audio crackles and pops, although I gotta admit Windows has gotten pretty good for audio production workloads. Apple has dominated the audio production space for too long, and it's great to see MS paying attention to the segment, but it's still a long road ahead.
Actually, they already have since Win10: WASAPI has been improved to allow for low latency. It needs audio interface drivers and applications to make use of it, but of course ASIO is the current standard for low latency and probably still has the advantage for ultra-low latency (I don't have any comparisons, though). It works really well with good drivers, so it's probably hard to get rid of.
Any problems with crackles and pops have nothing to do with ASIO; they're usually caused by other drivers on the system or energy-saving settings.
WASAPI Exclusive is basically the same. On my Win 11 system, ASIO in fact performs slightly worse with an Apogee interface.
great stuff nick, thanks for keeping us informed!
Dave must be so proud looking from above right now 😢
Pete Brown, what a legend.
Will we be able to combine multiple interfaces for more input and output options? I am referring to the aggregate device feature on Mac computers.
Aggregate audio devices on macOS still have slightly varying sample rates, so resampling has to occur on the secondary devices unless they're synced at the hardware level outside macOS. Relying on drift correction/resampling to keep things aligned on the clock is generally not a good way to expand I/O, imo. You can aggregate on Windows with ASIO too, of course, but you get the same sync issues as on macOS.
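For context on the drift correction mentioned here, it boils down to estimating the ratio between the two device clocks and resampling the secondary stream by that ratio. A minimal sketch of the arithmetic (illustrative numbers only, not any OS's actual resampler):

```typescript
// Sketch: clock-drift estimation between two unsynced audio interfaces.
// Count the frames each device delivers over the same wall-clock window,
// then resample the secondary stream by the ratio so it tracks the primary clock.
function driftRatio(primaryFrames: number, secondaryFrames: number): number {
  return primaryFrames / secondaryFrames;
}

// Example: two nominally 48 kHz devices measured over one second.
const ratio = driftRatio(48000, 47998.8); // ≈ 1.000025, i.e. the clocks are ~25 ppm apart
console.log(ratio);
// Uncorrected, 25 ppm at 48 kHz drifts ~72 samples (~1.5 ms) per minute,
// which is why resampling (and its artifacts) is unavoidable without hardware sync.
```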
I'm really hyped for the whole Windows package. Will it finally be possible to create, (re)name, and destroy a virtual MIDI device on the fly with a script? Like, for example, via a PowerShell execution from a Node.js environment?
A possible goal of mine would be a fancy UI editor for an old dinosaur like a Roland JV-1080 that could be reached within a local network from literally every web browser. I've made great progress on that front, but a major issue is (or hopefully was?) MIDI routing and the blocking of ports (as soon as one process uses a MIDI port) at a basic level in the OS. To this day I use the fully functional but (for deployment purposes) impractical tools from Tobias Erichsen to make things work.
Thought I replied to this.
I hadn't considered this exact scenario, but I don't see why it wouldn't be possible. I had already generated a Node.js projection for some of the preview builds, and once we're in-box, it'll be projected like any other WinRT API. I have an example of it on our GitHub repo. (The current release 4 doesn't have a Node.js projection, but if you look in the get-started area under samples, you can see what the previous one looked like.)
Routing is on the short list, but likely for the next release after v1. Virtual (app to app) MIDI is in there right now. We're working on the plumbing to enable creating these endpoints through the API instead of requiring that the service be restarted.
Multi-client is there. It's our #1 or so request. We need to test with more devices, but so far, we haven't run into any problems.
This is some exciting stuff, but I would also like to see Microsoft redesign the Windows audio driver stack at some point. What I would like to see is ASIO (or a newly developed successor) integrated into the audio stack. ASIO is getting quite old, and it lacks capabilities like using two devices at the same time. Ideally I would like to see a "Core Audio" for Windows, as synths, drum machines, and grooveboxes increasingly feature integrated USB audio interfaces with multitrack capabilities. I often want to record from a drum machine but monitor through a different interface for practical reasons. The current ASIO model doesn't really allow for this unless you go through the somewhat flaky ASIO4ALL solution (which depends on WDM drivers that are limited to only 2 channels). This is not ideal, and macOS has more or less solved this usage scenario through its audio driver stack. Steinberg and Microsoft could jointly develop a similar solution to make aggregate devices possible, like Core Audio. ASIO was a great solution in the late '90s, but it has stagnated here. It needs some modernisation to stay relevant.
My take is this is going to be much more useful for orchestral-type performances. Not that producers of other styles wouldn't benefit one way or another.
MIDI 2.0 should have come out 10 years ago; this is so, so overdue.
Adds more latency though?
Nope. It's quite a bit faster than MIDI 1.0
Can it send a note from an instrument through a computer to a sound module with less than 1ms delay? Or is that in MIDI 3.0?
MIDI 1.0 had a set speed which resulted in just about 1ms for a Note On message. MIDI 2.0 has no set max speed. It will run as fast as the wire speed and processors on each side will allow.
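The shape of the message changes too: a MIDI 2.0 channel-voice Note On is a single 64-bit Universal MIDI Packet (UMP) carrying 16-bit velocity, instead of three 10-bit-framed serial bytes. Here's a rough sketch of the packing per the published UMP field layout (an illustration only, not any particular OS API):

```typescript
// Sketch: packing a MIDI 2.0 Note On as a 64-bit UMP (two 32-bit words).
// Word 1: [message type 0x4][group][opcode 0x9][channel][note][attribute type]
// Word 2: [16-bit velocity][16-bit attribute data]
function midi2NoteOn(group: number, channel: number, note: number, velocity16: number): [number, number] {
  const word1 =
    (0x4 << 28) |             // message type 4 = MIDI 2.0 channel voice
    ((group & 0xf) << 24) |   // UMP group (0-15)
    (0x9 << 20) |             // opcode 9 = Note On
    ((channel & 0xf) << 16) |
    ((note & 0x7f) << 8);     // low byte left as attribute type 0 (none)
  const word2 = ((velocity16 & 0xffff) << 16) >>> 0; // 16-bit velocity, no attribute data
  return [word1 >>> 0, word2];
}

// Example: group 0, channel 0, middle C, maximum 16-bit velocity.
console.log(midi2NoteOn(0, 0, 60, 0xffff).map(w => w.toString(16).padStart(8, "0")));
// -> [ '40903c00', 'ffff0000' ]
```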
@@Psychlist1972 MIDI 1.0 instrument to 1.0 instrument works fine; putting a computer with a crap OS and crap USB interfaces in the chain makes it unplayable. I didn't ask what the clock speed of part of the path was, I asked what the end-to-end delay through a computer was. If this was any good you would be shouting about it, not 16-bit velocity that no one can hear.
@@hintoninstruments2369 MIDI 1.0 instrument to instrument is close to 1ms per note on. For a 5 note chord, you're looking at around 5ms from the first note on to the last, and that's assuming the processor on the device doesn't add additional delay (the old ones always added fairly significant processing time).
" If this was any good you would be shouting about it, not 16 bit velocity that no-one can hear."
No need to be like that. I can't talk about everything in a short video interview, but in my main presentation at the booth and in other talks I've given, speed is one of the top things I cover.
For native MIDI 2.0 on Windows, we're in the microseconds range for the full path. We send and receive as fast as we can, and we use a very fast cross-process mechanism for moving the data between the points -- much faster than what we had in MIDI 1.0.
I thought I had said in the video that it is much faster, but perhaps I forgot to mention that this time. The wire speed plays a huge part in that, as does the USB standard used. MIDI 1.0 was locked into older and slower standards. MIDI 2.0 is not. If you create a USB 3 MIDI interface, we use it at USB 3 speeds, for example.
@@Psychlist1972 You are not addressing the real issues or answering the question. Most of the delays and jitter inside a computer and USB are software scheduling which is much slower than the hardware is capable of. In the 1980s personal computers had UARTs directly on the bus and applications could directly process interrupts. That is why computers like the Atari ST had such good performance for MIDI applications, developers could junk the inadequate OS support and put in their own interfaces and handlers. Then multi-tasking OSes were introduced and this was taken away. Then native serial interfaces were taken away and OSes got even worse. Fortunately computers became capable of recording multitrack audio and it was no longer necessary to orchestrate with MIDI.
You say that MIDI 2.0 is backwards compatible, but it does not look like it offers anything better for users of MIDI 1.0 instruments, of which there are rather a lot in the world, and they won't be going away. It may have less delay when used with new MIDI 2.0 instruments over a different physical transport, but how do you sync with existing MIDI instruments?
Claims based on clock speeds or only one part of a system are deceitful. Let's have some real-use figures based on measurements of real systems. MIDI was developed by instrument designers and adopted by the whole MI industry within a year because it was apt and obviously going to be big, and it is still here 40 years later despite computer manufacturers. You are not giving any convincing reasons why developers should invest their time and resources in this. It sounds like other systems, like mLan, that failed to deliver and died on the vine.
btw, the transmission delay between the first note and last note of a 5 note chord is 2.56ms and the total transmission time is 3.52ms. Jitter introduced by USB and OSes is usually multiples of 2ms on top of that. I would expect any half decent instrument to respond in less than 0.5ms. Only early retrofitted instruments like the Fairlight and PPG had large processing delays.
@@hintoninstruments2369 MIDI uses a baud rate of 31250, i.e. 31250 symbols (bits) per second, or 31.25 symbols per millisecond.
A single serial Note On event consists of three bytes: one status byte (note pressed) followed by two data bytes (which note and how fast). Each byte (8 bits) in the MIDI protocol is preceded by a start bit and terminated by a stop bit. So we are at 30 bits per Note On, or a timeframe of 960 microseconds to be exact. Any faster is simply not possible, or it would break the general baud constraint set by the MIDI protocol. And a 5-note chord does indeed last for an absolute minimum of 4.8ms.
Let there be a simple setup:
MIDI keyboard > PC > Synth
Both arrows ">" represent a physical cable, and a signal in this scenario has to be converted from the relatively slow MIDI baud rate to the much faster processing speeds of a modern CPU and then back to the slow MIDI baud rate. Even without any processing in the PC, these conversions alone would push the transmission up to at least 1.92ms for a single Note On event under ideal conditions.
There is no possibility that MIDI 2.0 can ease these general constraints of MIDI 1.0 in any meaningful way, because MIDI 1.0 is just sooooooo sloooooow compared to any other modern digital transmission of information. MIDI 1.0 is indeed 40 years old and quite limited, as it was never meant for round-tripping and being converted several times along its path.
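For what it's worth, the differing chord figures in this thread (4.8 ms here vs. 3.52 ms above) both check out; they just disagree about running status, where repeated Note Ons on the same channel omit the status byte. A quick sketch of the arithmetic, assuming the standard 31250 baud and 10-bit byte framing:

```typescript
// MIDI 1.0 DIN transport: 31250 baud, each byte framed as 10 bits (start + 8 data + stop).
const MS_PER_BYTE = 10 / 31.25; // = 0.32 ms

// Total wire time for an n-note chord of Note On messages.
function chordMs(notes: number, runningStatus: boolean): number {
  const bytes = runningStatus
    ? 1 + 2 * notes // one shared status byte, then 2 data bytes per note
    : 3 * notes;    // a full 3-byte message per note
  return bytes * MS_PER_BYTE;
}

console.log(chordMs(5, false)); // 4.8  -> the "no running status" figure
console.log(chordMs(5, true));  // 3.52 -> the "running status" figure
// Delay from the end of the first note to the end of the last, with running status:
console.log(chordMs(5, true) - 3 * MS_PER_BYTE); // 2.56
```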
This is the main reason for the hold-up of the free Behringer DAW.
It would be great if the MMA also added a MIDI 2.0 implementation for Raspberry Pi devices. Since they are cheap, they can be used both by hobbyists and by those looking to build commercial hardware products.
Andrew was actually showing MIDI 2.0 on the Raspberry Pi using the Linux ALSA stack.
@@Psychlist1972 exactly, a Raspberry Pi implementation is on Linux, not from the MMA
@@JKC40 I don't quite understand then. The MMA (The MIDI Association) is a standards body. In the API working group we have Apple, Linux, Microsoft, and Google all working together for our own implementations of the standard. The MIDI Association itself doesn't provide any OS stack implementations.
The Linux ALSA implementation is good. Takashi and team did a great job on it.
What is happening with MIDI 2.0, and what updates are going on!?
Watch the video
@@PalmBobo I did, man. This has been promised for years, and implementation takes years as well. Thanks for the very obvious advice. 🤟
MIDI 1.0 supported 14-bit CC, but no one uses it, and the new MIDI 2.0 Keystage has a CC resolution of 128. Why? It would be great if someone explained this.
You're talking about RPNs and NRPNs, I assume. Some devices do use them, but they are more work and actually use more data on the already slow MIDI 1.0 serial transmission. And they span multiple messages, so there could be other message data in between, especially if you are using all 16 channels on a cable, so you potentially get more latency and jitter for a control change.
So, instead, many hardware manufacturers have implemented smoothing algorithms.
But MIDI 2.0 offers a lot more than just increased resolution (and what I showed was velocity which must be tied to the note on/off messages, not an NRPN).
Finally, the Keystage implements MIDI-CI. Despite also being a part of MIDI 2.0, that's a bit different: it's more about discovery and auto-configuration. All of that runs over the existing MIDI 1.0 byte-stream transports; the CI messages are all specialized SysEx. The Keystage doesn't include UMP support, so the new message types are not in it.
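To put some bytes behind the overhead described above: a 14-bit NRPN write in MIDI 1.0 takes four separate Control Change messages (CC 99/98 select the parameter, CC 6/38 carry the data), versus a single 3-byte message for a plain 7-bit CC. A rough sketch, with channel 1 (status 0xB0) assumed for illustration:

```typescript
// Sketch: raw byte sequences on a MIDI 1.0 stream, channel 1 (status 0xB0 = Control Change).

// Plain 7-bit CC: one 3-byte message.
function cc7(controller: number, value: number): number[] {
  return [0xb0, controller & 0x7f, value & 0x7f];
}

// 14-bit NRPN write: four CC messages, 12 bytes without running status.
function nrpn14(param: number, value: number): number[] {
  return [
    0xb0, 99, (param >> 7) & 0x7f, // NRPN parameter number MSB
    0xb0, 98, param & 0x7f,        // NRPN parameter number LSB
    0xb0, 6,  (value >> 7) & 0x7f, // Data Entry MSB
    0xb0, 38, value & 0x7f,        // Data Entry LSB
  ];
}

const MS_PER_BYTE = 10 / 31.25; // 31250 baud, 10 bits per byte
console.log(cc7(74, 100).length * MS_PER_BYTE);      // 0.96 ms on the wire
console.log(nrpn14(260, 9000).length * MS_PER_BYTE); // 3.84 ms, and splittable across messages
```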
Maybe a dumb question, but does MIDI 2.0 give the MIDI-controlled parameters on my older synths more resolution? Say the LFO rate on my Prophet '08 has too low a resolution: the stepping from 0-1 or 2-3 is too big for the vibrato I'm trying to dial in (I want it to wobble more slowly). Can MIDI 2.0 give me that finer resolution if I use a DAW to send, like, a MIDI value of 2.19 or something, even with an old synth that was built with MIDI 1.0? Or does it only work on a new synth that is designed to receive MIDI 2.0 from the beginning?
Nope. But it is backward compatible
Thanks!
Your old MIDI devices will continue to work the same way they always did. Nothing magical in that sense. :)
Crazy. I didn't know how much Windows was lacking compared to Mac. Mac has supported MIDI 2.0 and all the mentioned features for a while. Happy to see Windows users enjoying this soon too.
To be fair, MIDI 2.0-ready hardware/software/DAWs are not that ubiquitous yet, and many that support 2.0 are really still in a prototype/partial-implementation state. But Professor Pete Brown from Microsoft is also Chair of the MIDI Association, and he's really been making some nice updates to Windows for audio recently. He's a proper audio studio geek too, which will surely help drive it on!
True, but at the same time, the MIDI 2.0 UMP implementation in macOS really only hit true usability in Sonoma or so as Andrew mentioned to me. But yes, they are ahead of many others here and got the plumbing in early.
The crazy new level of possible velocities will be a problem for VSTis that already rely on huge numbers of samples, some in the GBs. I wonder what the answer to that will be; we can't have a 100GB library for a piano... So, will MIDI 2.0 push us towards more physical modeling stuff? 🤔
Interesting. The Osmose is maybe already a sign of that.
DAWs and VSTs already work on higher resolutions than MIDI 1.0 supports, in most cases. That's how you get smooth automation inside the DAW.
@@Psychlist1972 Yes, with knobs, faders, and such... but sample-based instruments are based on... well... sampled velocity layers. Different topic.
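A quick illustration of the resolution gap running through this thread and the Prophet '08 question above. The parameter range here is made up for the example; the point is the step size:

```typescript
// Sketch: step size of an LFO-rate control at 7-bit vs. 16-bit resolution,
// assuming a hypothetical linear mapping over 0.02 Hz .. 20 Hz.
const lo = 0.02, hi = 20;
const stepHz = (bits: number) => (hi - lo) / (2 ** bits - 1);

console.log(stepHz(7));  // ~0.157 Hz per step: audible stepping at slow rates
console.log(stepHz(16)); // ~0.0003 Hz per step: effectively continuous
```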
I wonder if the age-old Microsoft GS synth, with the extremely high buffer size and Roland samples, will stay around, or if that will be fixed too?
I had proposed removing it, but it'll stick around for now, because it turns out it's still used extensively for education and training in Japan, and there's also a group of Windows users who still prefer it to other synths for listening to MIDI files. But we're including app-to-app MIDI, so there will be more options available.
@@Psychlist1972 I still use it from time to time, since some apps rely on it, or because I want a somewhat true-to-life Roland sound, since that was the standard for composing MIDI for a while. I do wish it were better for playing parts live when I need it, though. And it doesn't even support the things that would make it technically compatible with the GS extension, since it doesn't support the bank changes that get you access to the extra instruments, which are included in the DLS file it pulls from.
Awesome news!!!
Imagine having a head-shaving and moisturising game as strong as Pete Microsoft's, then completely neglecting the goatee. Like, it just doesn't exist to him.
He is totally beard-blind.
I'm actually lost for words.
LOL. I wasn't expecting beauty tips, but I appreciate it :)
But what about the DAWs? Are they already using MIDI 2.0?
Yes, there are a few DAWs (e.g. Logic, Cubase, Multitrack Studio) that can use the MIDI 2.0 protocol on Mac. There are prototypes for Windows waiting for the driver release.
the music technology of the future! … and always will be 🙃
mac just casually watching over his shoulder
Great!
MIDI making bootay shake since 1983.
I don't know how this could be done, but I've had some conversations about the idea of merging/fusing MIDI and audio somehow. Like, capturing every event in the form of MIDI and capturing the audio produced at the same time? Or an entirely new format that is somehow both? 🤷 idk, not entirely fleshed out yet, but I think there's something interesting there.
I truly hope they'll add it to Windows 10 as well in an update. I won't be moving to 11 until I have to; it's not ready yet...
Current plan is to release for the latest supported Windows 10 version as well as Windows 11.
Is this the end of MIDI errors?
Will MIDI 2.0 include a comb and a small pair of scissors? #Icantlookawayorlistentotheconversation
Finally? Win7 support, yes?
That is very unlikely. Support for Windows 7 ended in January 2020.
@@tiamat8123 Many people still use Win7. It's not a crime per se. Win7 is the most stable of the recent MS OSes IMO, and the one that least needs to be modded ;)
Latest supported Windows 10 and also Windows 11. We had to make USB stack changes to support this, and the API tech used is Windows 10+.
@@Psychlist1972 are you sure it's technically unavailable?
@@KiR_3d Yes. Our service and API rely on things only in Windows 10 SDK 20348 and higher. We're also having to service the latest version of Windows 10 with the USB stack updates to support the new USB driver. Even that was a bit of a stretch (and extra work) because most things like that only go in the latest OS in feature updates, but we're still planning to do it to support the musician community. So technically not possible for Windows 7, and even if technically possible, we wouldn't be allowed to update an older out-of-service OS with anything other than critical security stuff, if even that.
The travesty is the fact that MIDI System Exclusive could always use any bit resolution, like 16-bit velocity, 16-bit bend, etc.; it was just about "actually" implementing it, but very few did, due to the speed and memory limitations of the 8-bit MCUs of the '80s. It's not me saying all this, but Dave Smith in an interview a long time ago.
True, except that SysEx messages in MIDI 1.0 took longer to send because they use so many more bytes. Each set of 3 bytes takes approximately 1ms in MIDI 1.0. There are also RPNs and NRPNs in MIDI 1.0.
@@Psychlist1972 Indeed Pete, but remember the Prophet 2000/2002 had selectable MIDI rates: 1x, 2x, 3x, and 4x. Unfortunately no other manufacturer caught on. Dave should have put that as a requirement into the spec, but perhaps thought it was so obvious it wasn't needed. MIDI's Achilles heel was always the "definition" of the transfer rate and the hardware interface. The first proposal for the standard, I recall, was 9600 baud! Whoopdidoo!
Why are so many parameters 0 to 127 instead of 0 to 100?
MIDI bytes are 8-bit, but data bytes reserve the top bit, leaving 7 bits for the value: 2 to the power of 7 is 128 steps (0-127).
@@Asyouwere Thanks for the reply. Why not stop at 100 though?
Because computers don't count in base 10. 8 bits make a byte, and 8 bits can encode unsigned integers 0-255. In MIDI, though, the first (high) bit of each byte marks whether it's a status byte (message type and channel) or a data byte, so a data byte has only 7 usable bits, and the maximum value it can store is 127.
@@spectre.garden I see! Thank you!
@@Asyouwere Thank you, I appreciate it
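In code terms, the framing described above looks like this (a sketch of the standard MIDI 1.0 byte layout, nothing vendor-specific):

```typescript
// Sketch: the top bit of each MIDI 1.0 byte separates status bytes from data bytes.
const noteOn = (channel: number) => 0x90 | (channel & 0x0f); // status byte: high bit set, type + channel
const data   = (value: number)   => value & 0x7f;            // data byte: high bit clear, so 0..127

// Note On, channel 1, middle C, velocity 127 (the largest value a data byte can hold):
const message = [noteOn(0), data(60), data(127)]; // [0x90, 0x3c, 0x7f]
console.log(message.map(b => b.toString(16)));    // [ '90', '3c', '7f' ]
```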
Glen Darcey is a cool guy.
I refuse to buy more hardware until this is done. I want hi-res, but mostly I want sample-accurate latency compensation.
Crazy that Microsoft is doing pro-consumer stuff.
They ran out of corporations to consume.
There are quite a few pro studios running Windows, especially for the more CPU-demanding multichannel stuff like Ambisonics, AR/VR, gaming, Atmos, and surround, where you can take advantage of faster, more capable GPU and CPU processing that isn't possible on Apple silicon at the moment.
I was aching to trim that beard all the way through
Not gonna happen until the beard trimmer is midi 2 compatible…!! 😂
@@stephencarter8625 😂
My wife agrees with you 100%.
tomorrow.. tomorrow never dies.. 😂
Bluetooth wireless MIDI. The future? No more cables.
No, network MIDI transport over WiFi will be the future.
I like BLE MIDI, but prior to v5 there were real latency challenges due to how the data gets transmitted. There are decent implementations of devices out there, but usually there's a caveat around latency that's also very dependent on the strength of your BLE antenna. It's worth revisiting now.
I'd rather wait for MIDI 2.1 or 2.2.
MIDI 3.11 for workgroups is gonna be the breakthrough =))
Sounds great. I wish I wasn't so sceptical though; I can see where this is going to go.
@@corzakkzakkcor3309 Out of curiosity, where do you see it going?
Who uses Windows for music production except for rank amateurs??
Amateurs are the ones that buy the bulk of instruments and prosumer gear, keeping companies in business.
More than half the world, and not just amateurs. Apple products are great for music production, but they don't have a monopoly.