Hi, I encountered a dependency conflict error when running your Google Colab notebook. The error message indicates that various Google packages (e.g., google-ai-generativelanguage, google-api-core, google-cloud-aiplatform) require an older version of protobuf, which conflicts with the installed version. Also, it appears that only the model 'lakhclean_mmmtrack_4bars_d-2048' is available; is there a process to get access to the other models listed on your Hugging Face repo? Cheers!
thank you so much for this! i tried running this on the Colab link, but it dies on "note_seq.plot_sequence(note_sequence)" with: "AttributeError: unexpected attribute 'plot_width' to figure, similar attributes are outer_width, width or min_width", do you have any idea how to fix this?
Let us talk AI music. What are you most curious about?
Incredible work!! I will definitely use this as well. I am very curious about the possibility of conditioning on my own MIDI examples. Perhaps there is a way to tokenize from MIDI?
I've used Riff Writer by Jonathan Bell to write this song: ua-cam.com/video/FSGhvhywMGc/v-deo.html I would love to have a metal composer tool with a graphical interface.
Amazing work. I would love to use this expertise in the transportation sector.
Amazing, gonna make good use of it 🤘🏼
Glad you like! Do something great and show me :D
I have an error at the first step of the notebook: E: Package 'libfluidsynth2' has no installation candidate
Windows or Unix?
@drtristanbehrens I'm using the colab notebook, lakhclean_gpt2_generation.ipynb. Maybe the library is no longer available?
Very cool! What do you mean by "prompt engineering" in that context? What kinds of prompts could it take? How to feed it chords?
Prompt engineering is about the token sequence you see. In this musical case, it is a cyclic process where the AI predicts the next pieces of the sequence and the human constantly manipulates the full sequence. If you want, for example, to remove a track, you delete the respective tokens from the sequence.
Chord conditioning is similar. You just have special tokens for the chords.
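As a rough illustration of the "delete the respective tokens" idea: the sketch below uses a hypothetical MMM-style token vocabulary (token names like `TRACK_START` and `INST=...` are illustrative, not necessarily the model's exact vocabulary) and removes one track's span from the flat sequence.

```python
# Hypothetical MMM-style token sequence. Token names are illustrative
# assumptions, not the model's actual vocabulary.
sequence = [
    "PIECE_START",
    "TRACK_START", "INST=DRUMS", "BAR_START",
    "NOTE_ON=36", "TIME_DELTA=4", "NOTE_OFF=36",
    "BAR_END", "TRACK_END",
    "TRACK_START", "INST=BASS", "BAR_START",
    "NOTE_ON=40", "TIME_DELTA=4", "NOTE_OFF=40",
    "BAR_END", "TRACK_END",
]

def remove_track(tokens, instrument):
    """Drop the TRACK_START..TRACK_END span whose INST token matches."""
    out, skipping = [], False
    for i, tok in enumerate(tokens):
        if (tok == "TRACK_START" and i + 1 < len(tokens)
                and tokens[i + 1] == f"INST={instrument}"):
            skipping = True
        if not skipping:
            out.append(tok)
        if skipping and tok == "TRACK_END":
            skipping = False
    return out

# Delete the bass track, keep everything else.
edited = remove_track(sequence, "BASS")
```

The edited sequence can then be fed back to the model as a prompt for the next prediction step, which is the cyclic human-in-the-loop process described above.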
Hi, fantastic work! I tried running it on Colab and on my local Jupyter notebook and got the same error "AttributeError: unexpected attribute 'plot_width' to figure, similar attributes are outer_width, width or min_width". Can you please advise? Thx
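A likely cause of this error: bokeh 3.x renamed `plot_width`/`plot_height` to `width`/`height`, while `note_seq`'s plotting code still uses the old names. Assuming the notebook installs the latest bokeh by default, pinning it below 3.0 before importing note_seq is one possible workaround:

```shell
# bokeh 3.x removed figure's plot_width attribute (renamed to width);
# pinning an older bokeh lets note_seq.plot_sequence run unchanged.
pip install "bokeh<3.0"
```

In Colab, run this in a cell as `!pip install "bokeh<3.0"` and restart the runtime before re-running the plotting cell.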
Hi Dr. Tristan! Amazing work! I produced great music today! But for the models other than the default lakhclean_mmmtrack_4bars_d-2048, is use_auth_token needed?
Please show your music! Yes, all other models need a token.
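For context on getting such a token: Hugging Face access tokens are created for free under your account settings (Settings → Access Tokens). Assuming these models are gated the standard way, logging in once caches the credential so `from_pretrained(..., use_auth_token=True)` can find it:

```shell
# Create a free access token at huggingface.co (Settings -> Access Tokens),
# then log in once; the token is cached for later library calls.
huggingface-cli login
```

Alternatively, in a notebook you can pass the token string directly to `from_pretrained` instead of logging in via the CLI.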
@drtristanbehrens great! How do I get the tokens? It's for academic research purposes.
Thanks for this amazing contribution! Are the MIDI files you trained on all quantized or do they contain performative microtimings (like the dataset Magenta's GrooVAE was trained on)?
Very cool. I’m interested in recreating and preserving work songs that we have as text only. This seems very useful. Train text to known midi sequence and then hand the model lyrics…😁
That's a great idea! Go for it!
Sweet thanks for releasing. I’ll try it out
Have fun! And do not hesitate to share your results!
thanks for rigging it up! it's going to be greater in the future and right now it's fun to find little motifs with it
Can't wait! The future of AI music will be golden!
Thanks. Will check it out 😊
Enjoy!
Excited for this.
Thanks!
I need a glossary ❤
Tell me more!
Supercooool
Thanks!