I imagine musicians who accidentally develop it when they're young and never really use it (classical musicians, say) eventually lose it. I'd be very surprised if actual play-by-ear musicians and composers like myself lose it, though. More research needs to be done on how perfect pitch is developed and potentially lost.
Imagine a whole song sped up fast enough to become the pitch of an instrument, and that song itself being made of instruments created by speeding up entire songs into pitches. Fractal music
I believe Adam has a video on this. If I'm not mistaken, he took the whole audio of Giant Steps, repeated it endlessly and sped it up to ridiculous BPM counts to obtain the pitches of the notes in, you guessed it, Giant Steps. He even named the concept "fractal music", exactly as you did.
Imagine sampling such music without knowing it, then slowing it back down and realizing there's a whole piano concerto inside one note lol... Not actually very possible, though.
8:21 _"There's nothing quite like a deadline to get the creative juices flowing"_ - Jim McNeely

_"I love deadlines. I love the whooshing noise they make as they go by."_ - Douglas Adams
For me it's either the first or the second case. The latter especially happens when I realize there's no way to finish the task in the time given, so I just say "fuck it" and hear the deadline wooshin'. I often hate myself afterwards, but that's the life of a master procrastinator.
Precisely what I thought. It’s a mix of both for me.
“Fingeritis” was my biggest problem for literal years as a hobbyist musician and actually made me avoid playing for a while. “The Advancing Guitarist”, “20th Century Harmony” (books from a video on your channel), and learning drums really helped with that and made music a less frustrating love again. For a while I felt super boxed in and habitually uncreative. Those books and this channel gave me so much insight and reminded me to think like a beginner as much as possible. Weird tangent from one word, but thanks for making such wonderful and insightful content.
I came to the comments because I wanted to learn more about "fingeritis", because that's what I feel kind of stuck with right now. Do you have any tips on how to progress past it? Been meaning to check out 20th Century Harmony.
@@SirNoxasKrad I've been going through the same thing for a while now, and a few things have helped:

First, I don't figure things out on the instrument. I listen to music and memorise the phrase or solo I would normally just work out on guitar. Then I imagine myself playing it (I don't always know the exact notes on the guitar); I don't think about any logic, just the feeling of playing that phrase, why it's beautiful, etc. Then I sing it for a while when doing other stuff, and only after a couple of days do I allow myself to play it on the guitar. I first deeply internalise the phrase so it isn't just muscle memory.

Second, I sing everything, literally everything: harmonizing with house appliances or car horns on the street, anything. I sing my favourite songs and accompany myself on guitar, and I actually consider that practicing guitar. I don't sing because I want to be a singer, but because I want to be a better guitar player.

Third, I play vocal melodies on guitar, preferably from memory, or some easier solos if you know them in your head.

The point is, we've practiced our instrument so much it became second nature, but we didn't work enough on our musicality, so our body just takes over. When you sharpen your inner sense of musicality, it fights your body and your habits, and your mind is free. I learned this because I've been playing and practicing guitar for 12 years and realised I wasn't very free. My best friend, on the other hand, never practiced, literally never (he had a few piano lessons as a kid but never stuck with it). He just played melodica or piano when we were drinking and messing around, and he could play the best solos and improvised melodies, so good and fun and free. His only form of practice was listening to music and singing it. It didn't matter that his technique wasn't the best, because his solos were so creative and free.

Remember, there is a difference between being a good musician and a good instrumentalist.
Omg, I just realized that "Stella by Starlight" has the same chord progression Freddie Mercury used in the bridge of "Take My Breath Away" by Queen. About 3 minutes into the song, if anyone is interested; it's in the same key as well. I wonder if that's where he got the inspiration!
Good find! See/hear also: the Chopin Prelude No. 20 that Adam used at 2:10 is also at the start of Barry Manilow's 1973 "Could It Be Magic". Beautiful, at ua-cam.com/video/Vc5XtkZSH-Q/v-deo.html
Excellent observation. I think it’s quite obvious that Freddie had deep musical knowledge; some of his music is profoundly classical, actually. This might be explained by the fact that he was of Eastern European origin, where people have much more culture than in the West. Education is totally free in those parts of the world.
The E turning into A experiment is even more insidious. Because you're switching the E on and off 440 times per second, you're effectively amplitude-modulating your basic E frequency (329.628 Hz or thereabouts) with a 440 Hz modulator. This leads to two effects. First, the E has its own envelope and probably doesn't reach 0 dB peaks, but your 440 Hz switching frequency does, because you're doing hard on-off switching; so the most audible tone becomes 440 Hz and you hear the A. Second, this kind of modulation also produces tones at the sum of the two frequencies, about 769 Hz, and at the difference, about 111 Hz. If you were to look at the spectrum, there should be two smaller spikes at 111 Hz and 769 Hz along with a massive one at 440 Hz. There will probably also be repetitions of that "trident" higher up, because you're modulating by switching the signal sharply on and off (i.e., with a square wave, which has an ugly spectrum). I'd love to actually see a view from your DAW.
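(If anyone wants to sanity-check this at home without a DAW, here's a rough Python sketch of the ideal on/off gate case. The E is rounded to 330 Hz and the sample rate is chosen so every test frequency lands on an exact analysis bin; both are assumptions for clean numbers, not what Adam's session used. With a perfectly symmetric gate, the big components land at E and at 440k ± E, and whether a spike appears exactly at 440 presumably depends on the gate's DC offset and asymmetry in a real render.)

```python
import math

def tone_mag(x, sr, f):
    """Magnitude of the DFT component at frequency f
    (normalized so a unit-amplitude sinusoid reads 0.5)."""
    n = len(x)
    re = sum(x[i] * math.cos(2 * math.pi * f * i / sr) for i in range(n))
    im = sum(x[i] * math.sin(2 * math.pi * f * i / sr) for i in range(n))
    return math.hypot(re, im) / n

sr, n = 4000, 4000             # 1 second; sr chosen so all test tones sit below Nyquist
e, a = 330.0, 440.0            # "E" rounded to 330 Hz for clean bins, and A440
# switch the E on and off with an (idealized) 50% gate running at 440 Hz
gate = [1.0 if math.sin(2 * math.pi * a * i / sr) > 0 else 0.0 for i in range(n)]
x = [gate[i] * math.sin(2 * math.pi * e * i / sr) for i in range(n)]

# strong components at E and the first "trident" sidebands 440 - E and 440 + E
for f in (e, a - e, a + e, a):
    print(f, "Hz ->", round(tone_mag(x, sr, f), 3))
```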
Yes! I was looking for this comment! It sounds pretty much like my square-wave ring-mod guitar pedal. An oscilloscope would have been neat for that segment.
The ring mod and amplitude mod are very similar, it's just that the ring mod inverts the phase during part of the cycle, whereas the amplitude mod preserves the phase. So there will be a difference in tone depending on how it's mixed back with the original. And possibly some uglier overtones because modulation is never "clean".
In other words (if I'm understanding you correctly [please correct any misconceptions]), the way Adam performed the ring modulation actually contributed to the amplitude of the observed frequencies. If Adam applied a filter to the post-ring-modulation audio to emphasize the lower and higher of the three frequencies, we would hear a chord consisting of frequencies 111 Hz-440 Hz-769 Hz (which would sound pretty close to A2-A4-G5 with the G5 being a bluesy/flat seventh a.k.a. the 7th harmonic of A2). Let's say that Adam instead used a starting tone of 110Hz, then the carrier-signal would be 330Hz, the carrier would be 440Hz, and the carrier+signal would be 550Hz. This would be a nice, crisp A major chord over E. if the initial signal had been 88Hz instead, then the triad would be 352-440-528 -- a standard F major triad.
Go signal processing nerds! In my opinion your answer about amplitude modulation is more correct than Adam's; however, that's not what is going on here. Since Adam used a repeating sample rather than a volume control, the E was retriggered each time, so the result is a complex waveform repeating identically 440 times per second. Its spectrum therefore only contains frequencies that are multiples of 440 Hz. If we instead took a single E and faded it in and out without retriggering it, we would indeed get amplitude modulation. But we would not hear an A at all: in fact, we would still mostly hear the E, plus difference tones at all the frequencies present in the original signal, plus or minus multiples of 440 Hz.
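(A quick Python sketch of the retriggering argument: if a grain is restarted from phase zero every 1/440 s, the waveform is exactly periodic at 440 Hz, so only multiples of 440 Hz can appear in its spectrum, and the strongest one turns out to be 440 itself. The 330 Hz sine standing in for the E, and the sample rate, are assumptions chosen so the periods line up exactly; a real piano sample would behave the same way in principle.)

```python
import math

sr = 44000                      # chosen so one 440 Hz period is exactly 100 samples
period = sr // 440              # 100 samples
e = 330.0                       # the "E" being retriggered (rounded for clean numbers)

# one grain: the first 100 samples of a 330 Hz sine, restarted from phase 0
grain = [math.sin(2 * math.pi * e * i / sr) for i in range(period)]

# Retriggering every 1/440 s tiles this grain, making the signal exactly
# periodic with period 100 samples, so only multiples of 440 Hz can appear.
# The DFT of a single grain gives those harmonic amplitudes: bin k <-> 440*k Hz.
def bin_mag(g, k):
    n = len(g)
    re = sum(g[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
    im = sum(g[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
    return math.hypot(re, im) / n

mags = [bin_mag(grain, k) for k in range(period // 2)]
peak = max(range(len(mags)), key=lambda k: mags[k])
print("strongest harmonic:", peak * 440, "Hz")   # 440 Hz dominates, not 330
```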
Isn't the term _feminine cadence_ borrowed from Old French / Middle French poetry, where feminine nouns often ended in an unstressed syllable, while masculine nouns often ended in a stressed one?
Whichever root it has, it's still essentially stereotyping, which the woke would claim borders on sexism/misogyny. That said, I don't advocate changing it.
Probably. But that's not "woke", so we ignore those kinds of facts. Look, I'm trying to "virtue signal" here, so stop bringing nuance, complexity and context into things, yes? It spoils the underlying message that I'm awesome.
@@markstanbrook5578 I'd argue the concept of masculinity and femininity are gendered concepts but have nothing to do with gender and even less to do with sex. After all, the 'woke', as it were, would argue males can be feminine and vice versa.
Why does finding out Adam is a coaster nerd make me smile so much? Just hearing coaster manufacturers named on this channel made me giggle immediately!
I never once thought I would meet another coaster enthusiast in this area of UA-cam, but it turns out there is a surprising amount of overlap. It's friggin' dope as heck.
4:36 Quick note: their question says "fade in and out", not "play". To me, this sounds more like ring modulation, which is where you take some note, e.g. an E, and apply a waveform at some frequency to its amplitude. At low, sub-audible frequencies, we hear a tremolo effect. At higher frequencies, we hear ring modulation: it sounds like two tones moving outward from the original, center tone. So if you take, say, E5 (659.25 Hz) and apply a sine wave at 150 Hz to its amplitude, you hear a sum and a difference tone, meaning you hear 509.25 Hz and 809.25 Hz together as the resultant tones.
Yes. Because he is playing a sampled piano note, the faster it is retriggered, the more we only hear the very initial part of the sample's attack, where the hammer strikes the strings, without ever really getting to the E pitch. Eventually the attack is cut to so short a duration that it is effectively reduced to a click, and a hammer strike played 440 times per second is going to sound like A4. A pure E5 pitch faded in and out 440 times per second would produce some sort of modulated tone instead. ua-cam.com/video/DCPxe6P1KWo/v-deo.html
Otherwise known as Amplitude Modulation where the “carrier” is E and the “modulation” is 440Hz. Note that if the E is below 440, you (I think?) get a wrap around at 0Hz, and a 180 out of phase signal on the low end that is the absolute value of the negative frequency. The other subtlety is that if your modulation is not sinusoidal (ie a triangle wave) its new spectrum should be as if you took the sum and difference with the E to the triangle wave’s original spectrum. If you use impulses like Adam used, you’ll get some kind of square-wavy, odd harmonic mess tho
@@enginerdy This is mostly right, though there's actually a slight difference: ring modulation doesn't preserve the carrier signal. If you multiply the carrier by a bipolar modulator and feed out the direct result (ring modulation), the carrier is absent from the output; amplitude modulation, on the other hand, preserves it. That is a good thing to point out though!
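(A little Python sketch of that difference, under the usual textbook definitions: ring mod as a plain product of two bipolar sines, AM as a unipolar fade that never goes negative. The 330 Hz carrier is an assumed round number standing in for the E, chosen so every frequency lands on an exact analysis bin.)

```python
import math

def tone_mag(x, sr, f):
    """Magnitude of the DFT component at frequency f
    (normalized so a unit-amplitude sinusoid reads 0.5)."""
    n = len(x)
    re = sum(x[i] * math.cos(2 * math.pi * f * i / sr) for i in range(n))
    im = sum(x[i] * math.sin(2 * math.pi * f * i / sr) for i in range(n))
    return math.hypot(re, im) / n

sr, n = 4000, 4000     # 1 second of audio
c, m = 330.0, 440.0    # carrier ("E", rounded) and 440 Hz modulator
carrier = [math.sin(2 * math.pi * c * i / sr) for i in range(n)]
mod     = [math.sin(2 * math.pi * m * i / sr) for i in range(n)]

ring = [carrier[i] * mod[i] for i in range(n)]                 # 4-quadrant: carrier suppressed
am   = [carrier[i] * (0.5 + 0.5 * mod[i]) for i in range(n)]   # 2-quadrant fade: carrier kept

print("ring @ 330 Hz:", round(tone_mag(ring, sr, c), 3))       # carrier gone
print("am   @ 330 Hz:", round(tone_mag(am, sr, c), 3))         # carrier survives
print("ring @ 110 Hz:", round(tone_mag(ring, sr, m - c), 3))   # difference sideband
```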
Spatial Audio is also an important part of classical music, especially with extended ensembles. One cool example is “The Unanswered Question” by Charles Ives, where the strings are meant to be back stage and the brass and woodwind are meant to be in the rafters of the concert hall. This gives the piece a really ethereal sound. It’s a super cool topic, I bet Adam could do a fab video on it, similar to his “UA-camrs react to Experimental Music” video.
I think Adam has actually mentioned that particular piece before (can't remember the exact video sorry) so it would be awesome if he did a whole video essay on it!
There's a whole field of psycho-acoustics around placement of sound in space. This dude in San Francisco does "sound sculpture" in an environment of 176 audio speakers en.wikipedia.org/wiki/Audium_(theater)
I really like your videos. I know some of the musical things you talk about, but not what causes them. My favorite is the "Misty chord". I got that message at work, and my workmate made me play the video. We have a piano at work and she is always playing the first 2 bars of "Misty" with that chord; she can't get enough of it. Thanks for this and all your other videos.
In synthesis there is a feature called oscillator sync, in which a tonal oscillator has its waveform position reset by a second oscillator. So, for example, one oscillator could be playing an E but be reset 440 times per second. The result varies depending on the frequencies involved: it can sound like a new timbre, and/or a blending of notes.
This is exactly what I was thinking. If you take a 440 Hz tone and retrigger it faster than the sample's length allows, but below 440 times per second, you aren't getting a pure tone; you're getting a mixture of the two.
@@euronomus The perceived tone would definitely still be whatever the oscillation frequency is at. A mixture of pure tone just determines timbre, but when you use it the tones become indistinguishable
I think a cleaner way to do what the question was asking would just be amplitude modulation, with a carrier (of whatever waveform makes you happy) at the E's frequency and a 440 Hz modulating signal.
11:22 Colours of chords. This has always fascinated me. I do, though, have to be careful about whom I bring the subject up with; some people don't get it and just think you're weird. Anyway, with me it all started with seeing A major as RED. D is yellow, D# is brown, E is dark brown/black, C yellow-white. Complex chords are tartan! Diminished are 3D pink.
That original Stella bridge is bonkers; I always learned to play the last two chords as bVII7 to Imaj7 (over its root), but that ivmiMa7 to I/iii is just unreal. So, so beautiful. I'm never playing the Real Book version of this again lol
11:25 As a child I always connected certain chords and songs with certain colors. When I tried to discuss it, people thought I was strange. It feels good to know I'm not the only one.
I really like seeing something that's very intuitive to me from my experience in modular synthesis explained by Adam. Fading the note E in and out 440 times per second is, in synthesis, simply called amplitude modulation, and it can easily be done on a modular with two oscillators (one at E and one at A) and a VCA. It produces great timbres, with frequencies at both the sum and the difference of the two input frequencies :)
Our pinnae (the whole outside part of the ear) filter a sound differently depending on its direction, and our brain then uses those timbre differences to help locate sounds. I read something about it a long time ago; something about comb filtering in the brain. That can be used in reverse to encode location in a track for playback on a stereo (two front speakers) system. The limitation is that the listener must sit in a specific spot and keep their head facing forward. One listener. Only. No headphones lol
1:04 is the coolest transition I have ever seen in my many years on the internet, even months later. Flawless transition; kudos to you for such an amazing bit of editing, and thank you for the hard work it must have taken to create it for us all to enjoy.
My wife is a B3 organ jazzer. She's had perfect pitch since childhood; at 63 she still has it. She also has chromesthesia, seeing sounds as colors in the mind's eye. And she can still tell when my guitar's B string is slipping flat.
@@ilyanoeteuscher6870 so if you did fade E in and out at 440hz you WOULD hear the interval... BUT (and here’s where it gets funky) You would ALSO get........ a G? I really don’t understand the interval relationships, but it goes into the undertone series or something idk
Fading E in/out at 440 Hz gives a different result than the experiment you did. If it was E5=659.25Hz, then you should be getting 659.25+440 Hz and 659.25-440 Hz. It should sound like what ring modulator effect does. (except that one is four-quadrant, while fade in/out is 2-quadrant... goes only to 0 volume and never to negative)
On "flow", the psychological concept Mihály Csíkszentmihályi used to quantify an aspect of joy: you mentioned a deadline, i.e., a challenge. Part of flow is having a challenge that matches the reward; in your case, the intrinsic accomplishment of producing creative work. Watching TV tends to be inherently boring, as the reward lacks challenge; video games tend to provide more flow, as there's involvement and clear challenge. That is, you know what to do and how to do it, there's a challenge/skill balance, and no distractions (focus). A chart on Wikipedia suggests relaxation/control and arousal/being energized lead to it. So: plan what and how, then execute with skill and focus.
Do people with audio-visual synesthesia actually *see* colours when hearing sounds? Like when you mentioned yellow and blue, do you get the 'feeling' of those colours? Or do you actually like... see them somehow?
I think Adam was more so talking about the actual letters like C and B, so a letter-color synesthesia. Idk if audio-visual is a thing but it probably is
I'd imagine it's like thinking about (or imagining) a colour. You don't see it overlaid on your visual field, but you can still kind of see it in your mind. Or like when I imagine my house: I don't see it the same way I see what's actually in front of me, but I can see it in my mind and visually focus in on details, etc. I'd say it's analogous to imagining a song in your head. In fact, smells and tastes often have sound to me, and it's very much the same sort of thing as how I would usually imagine sounds.
There are some songs that use binaural audio quite brilliantly in their conception, like Chrome Sparks' "Marijuana" by having the intro fade in and swirl around, gradually speeding up. The effect is brilliantly psychedelic.
I’m first and foremost a musician, but my hobby is coasters. So awesome that you share that too!
Your synesthesia clashes so hard with mine: yellow = E; blue = C; Bb/D is like mint ice cream on a bed of lettuce. We should do something like "5 synesthetes compose over the same painting (or any colorful visual thing)".
@@hugobouma I don’t like calling it jazz theory tbh because it’s just music theory. Classical musicians use the same stuff, like Ravel and Debussy. Ravel being the master of the 9ths and augmented chords. And Debussy using extended harmony which is a fundamental concept of jazz. Despite that people like to distinguish the two theories even though they are one. Jazz ain’t nothin special to be categorized into its own genre.
If you modulate the PITCH of a note, such as an E, at the frequency of another note, such as an A, you get Frequency Modulation (FM) synthesis which is exactly what the Yamaha DX7 introduced to the world. One "operator" is simply a sine wave generator that generates sine waves at audio frequencies and another operator is a sine wave generator that is used to modulate the pitch of the first generator at audio frequencies. When you do that, you can get crazy complex harmonic series depending on the harmonic relationships between the operators. Very cool.
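(Here's a minimal Python sketch of two-operator FM along those lines, with assumed example values: a 440 Hz carrier, a 110 Hz modulator and modulation index 1, not any particular DX7 patch. The sidebands appear at carrier ± k·modulator, with amplitudes following Bessel functions of the index, which is where those crazy harmonic series come from.)

```python
import math

def tone_mag(x, sr, f):
    """Magnitude of the DFT component at frequency f
    (normalized so a unit-amplitude sinusoid reads 0.5)."""
    n = len(x)
    re = sum(x[i] * math.cos(2 * math.pi * f * i / sr) for i in range(n))
    im = sum(x[i] * math.sin(2 * math.pi * f * i / sr) for i in range(n))
    return math.hypot(re, im) / n

sr, n = 8000, 8000                  # 1 second of audio
fc, fm, index = 440.0, 110.0, 1.0   # carrier, modulator, modulation index
# two-operator FM: the modulator wiggles the carrier's phase at audio rate
x = [math.sin(2 * math.pi * fc * i / sr
              + index * math.sin(2 * math.pi * fm * i / sr))
     for i in range(n)]

# sidebands appear at fc +/- k*fm, weighted by Bessel functions J_k(index);
# 495 Hz is not of that form, so nothing should show up there
for f in (fc, fc + fm, fc + 2 * fm, 495.0):
    print(f, "Hz ->", round(tone_mag(x, sr, f), 3))
```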
I think that the more you practice and the more you work on your craft, the quicker and more easily you can enter a flow state. I once wrote a very long novel. I worked on it every day. In the beginning, I would only enter this flow state occasionally. But after several months, it happened just a few minutes after I sat down. And it happened almost every day.
Feminine and masculine cadences seems to just follow the terminology in poetry (especially French poetry), where "feminine" endings (ones that ended with a silent off-beat schwa vowel) alternate with "masculine" endings (ones that end without silent syllables). Within that context, there's nothing pejorative about it outside of the low hanging fruit of "feminine = weak". (And if we follow that train of thought, we'd probably need to unpack why being on the beat is "strong" while being off is "weak".) Feminine endings are as important to French poetry as masculine endings are.
Weak/strong is based on physical strength; it was never meant to call an entire gender weak overall. I wouldn't even say feminine cadences are named that because they are "weak"; they just remind people of feminine beauty. There's no problem with applying gender to things as long as it doesn't come from a place of misogyny/misandry.
@@DoubleOhSilver Frailty (or perceived frailty) is an integral part of traditional feminine (and also female) beauty across pretty much every human culture. So the idea of a trochee being female and an iamb being male is certainly "problematic" insofar that negative associations and connotations are problematic. But yeah, I agree with the fact that the terminology is innocuous. I still teach it to my students, since I find it a bit more imaginative and effective than the alternatives. Another context where masculine / feminine appears is as labels for sonata themes, where "masculine" = extroverted and proactive while "feminine" = introverted and reactive. My students are mostly aware enough to know that gendering nonsexual things is just a matter of analogy. For the younger ones (and also the more impressionable ones among the girls in my studio), I offer a caveat as needed.
A little on 'flow state': I had spent several years working with Kenny Werner's book 'Effortless Mastery' when I was invited to participate in a flow-state study at Goldsmiths University in London. They hooked me up to an EEG and asked me to play unaccompanied improvisations on tunes I was familiar with. I used steps developed with Effortless Mastery, played, and the scans showed little to no activity in my prefrontal cortex. I discussed it further with a neuropsychologist and came to the conclusion that the flow state is, or can be, entered by shutting down brain activity in the prefrontal cortex -- home of 'complex behaviours including planning', self-judgement and analysis -- and connecting directly with the wider brain architecture. I'm still using these hypotheses to structure my practice and playing routines, though I haven't had the opportunity to explore them in more detail, yet...
Adam, I think the question at 4:33 is actually describing ring modulation. It would be a sustained note, but "faded in and out", or the way I interpret it: turned up and turned down. Your point still stands; the result would heavily emphasize the 440 Hz it's "fading" at.
Alan Parsons did a quadraphonic mix of Pink Floyd's Dark Side of the Moon in 1973, which could technically be a very early example of 8d music. The sound effects and instruments all have really cool placements.
What the muscle-memory question reminded me of was Daniel Thrasher's "When you learn a riff and put it in everything" video. Obviously you should not be guided only by muscle memory, but just like "the Lick", these are common phrases. It's like speaking: you don't always have to invent a new word to describe something, though repeating other people's ideas can get pretty boring pretty quickly. So I'll be really helpful with this totally exact answer: just find the balance between old and new.
As a language educator and music enthusiast, I find it uncanny how similar learning a language and learning music are. Common misconceptions exist in both fields, for example that native speakers or virtuoso players consciously choose and control every motion they make, whereas in reality both are calling on "chunks" of drilled, deeply-rooted patterns that they string together in coherent ways. As with any musical instrument, the path to fluency has less to do with grammar exercises and aimless production activities, and more with repetition and drilling of useful patterns and constructions.
Adam, I wanted to study music, but my parents talked me into something more "practical": electrical engineering. :-( Now that I'm retired, I can study music like I always wanted to. Thanks for the inspiration.
Sorry Adam, but I think there's more to the E and A question. They said "fade in and out", not "play it 440 times per second", and the big difference is that the sound wave wouldn't start over each time; it would be at a different phase every time the amplitude ("volume") goes up. So it would sound like a different thing, maybe still an A, I'm not sure about that. I'll try to get into it!
So yeah, nobody asked, but I did some experiments anyway, and the results are quite interesting! Here, have a listen:

I started with this simple sine wave tuned to an E4: drive.google.com/file/d/1HBT08fdN0cghWxiJeompqX0frswF6wKM/view?usp=sharing

On a spectrum analyzer it looks like this simple peak at 329 Hz: drive.google.com/file/d/1KM4U4xx-M9lNYBE2Ga1nKI6UydD6MN7A/view?usp=sharing

But THEN... I applied a sine-shaped amplitude modulation at 440 Hz and this happened: drive.google.com/file/d/12CTepzSOUGcuDjnLh7CXkrHx3gdM8QxZ/view?usp=sharing

There are two extra tones now! The whole thing sounds like a strange A; in fact, the two extra tones emerge at approximately an A2 and a C#5 (not exact because of our tuning system's quirks), and look like this: drive.google.com/file/d/1iv5QianoaDHuZEn6YwDfSgQmrQfBcU0e/view?usp=sharing

Now you could say "yeah sure, that sounds cool and all, but real instruments don't sound like that; what if you did it on, say, a piano?", and in the extremely remote case you were actually thinking that, I've got you covered! Here's a clean piano sound (using the 4Front Piano VST): drive.google.com/file/d/1eJDHXBwjjAWAIK93sFJQYhxq40sGjHtF/view?usp=sharing

It looks like this: drive.google.com/file/d/1KIjZkMSQj7Gyw60liEmlqEt0fcysMQuM/view?usp=sharing

And then the modulated version: drive.google.com/file/d/1tpy0cG2YyvP3wGXoEDK6ZbQi9Eb7HfYO/view?usp=sharing

Which now looks like this: drive.google.com/file/d/1q0a3C3EeareeBd_kwlnGcQ5TI3e7HMWU/view?usp=sharing

...then I searched "amplitude modulation" on UA-cam and it immediately dawned on me that this is actually how AM radio works. It's in the damn name! But oh well, staying up until 4:30 AM listening to sine waves was kinda fun anyway lol. Thank you very much for reading this far! I really appreciate it :)
Re binaural audio, here's a bit of a summary about how stuff works: There are four basic ways we work out where sounds come from. The first two are called the interaural time difference (ITD) and the interaural level difference (ILD). If a sound is on your right, it has to travel further to reach your left ear, so it takes longer. Also, your head casts a shadow so it's quieter in your left ear. These two things alone are pretty good for working out where sounds are. The ITDs work better at lower pitches and the ILDs work better at higher pitches, because physics, but there's a bit more nuance to it than that. The problem with these interaural cues is that there's a lot of symmetry. There's no way to tell front vs back vs up vs down. This leads to what we call the Cone of Confusion, an infinite cone of points that all have the same binaural cues. The next thing that helps us resolve this is spectral cues. Those flappy things on the side of your head act as dishes that filter sounds differently depending on the direction they arrive from. For example, a sound in front will reach your eardrum with more high frequencies than a sound from behind. Over time, your brain learns the relationship between different sound spectra and different locations. If you swapped your ears for someone else's, you'd be confused about where stuff was. But after about 2 weeks, you'd be pretty good again. The spectral cues are good, but they tend to work better for things with a spread of frequencies, particularly higher ones. The final way we work out where sounds are in the real world is motion. Say you hear a sound that's either directly in front of or behind you. It reaches both ears at the same time and level. Now say you turn to the right. If the sound was in front of you, it'd now be to your left. If it was behind, it'd be to your right. By seeing how sounds move relative to your head as you move your head, you can pinpoint the correct location. 
Virtual audio stuff in headphones can replicate the ITDs and ILDs. You can even use some generic ears to replicate the spectral cues to an extent, but it would be so much better with your own ears. However, headphones can't generally replicate motion cues (unless the sounds are being generated in some complicated virtual acoustic environment).
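(For the curious, the ITD part can be put on the back of an envelope with Woodworth's classic spherical-head approximation, ITD ≈ (r/c)(θ + sin θ). The head radius and speed of sound below are assumed typical values, not measurements; the point is just that the maximum delay comes out around two thirds of a millisecond.)

```python
import math

r = 0.0875   # head radius in metres (~8.75 cm, an assumed average)
c = 343.0    # speed of sound in air, m/s (assumed room temperature)

def itd(theta_deg):
    """Woodworth's spherical-head estimate of the interaural time
    difference, in seconds, for a source at azimuth theta_deg
    (0 = straight ahead, 90 = directly to one side)."""
    th = math.radians(theta_deg)
    return (r / c) * (th + math.sin(th))

for az in (0, 30, 60, 90):
    print(f"{az:>2} deg -> {itd(az) * 1000:.2f} ms")
```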
@@andymcl92 Makes sense. I have just enough of a tangential understanding of signal processing to read between some of the lines you provide; it's neat to think about, for sure.
Yeah, since the sample's E is lower-pitched than the 440 Hz retrigger rate, the E itself never actually gets played. In the examples where the retrigger rate is below the pitch of the E (presumably ~330 Hz), it should sound less sawtooth-like, though there are still sudden discontinuities, so maybe it still sounds somewhat sawtooth-like anyway.
@@saqlainsiddiqui1744 I think the specific reason is that Rachmaninoff spent the last part of his life in Beverly Hills, and I think he was friends with a few Hollywood composers, but I'm not sure.
@@s1nd3rr0z3 Yeah I think you’re right, his emigration to the US definitely had an impact on his contemporaries. I think it also went the other way too - the works he wrote in the US are sometimes really different from his ‘usual’ style - the often-forgotten 4th piano concerto is a good example of that I think
I think your Ableton demonstration is wrong, because it stacks samples instead of manipulating sine waves. Here's why. If there's one thing I learned from my course in signal processing (honestly I hated that course, so there's not much more), it's this: when you add two sine waves with different frequencies, the result is equivalent to a sine wave at the average frequency whose amplitude is modulated by a sine at half the difference. To illustrate: if you add a 439 Hz signal and a 441 Hz signal, the resulting signal is equal to a 440 Hz signal that fades in and out at 2 Hz. Turning that around, if you take an E, let's say E5 at 659 Hz, and fade it in and out at 440 Hz, that should theoretically be the same as adding two notes at 659 Hz - 440 Hz = 219 Hz (a hair below A3) and 659 Hz + 440 Hz = 1099 Hz (a slightly flat Db6). After much fuss I tested this in Audacity, and indeed, when I multiply a 659 Hz signal by a 440 Hz signal, I get the same result as when adding the 219 Hz and the 1099 Hz: a high beep and a soft low hum (though the softness could also be due to my laptop speakers being sh*BASS*)
I think as well, the fact it's a piano sample means you have the attack of the hammer hitting the string, before you really hear the note it produces. Repeat it fast enough and that initial percussive click is all you get to hear, and you're just getting a... saw pattern I guess? The stuff that sounds like an E isn't part of the waveform anymore The stuff where it was slow enough to hear the E and the effect of the repeats was cool though! I wonder if that came from what you're talking about, where there's one wave (the near-constant E) and another (the repeating attack)
What you say about adding sine waves is wrong. If you add two sine waves at different frequencies on top of each other, the result is not a sine wave. The resulting wave consists of sine waves (as any periodic wave does; it can be broken down by Fourier analysis), but it is not itself a sine wave. What happens when you add two sine waves depends on frequency, phase and amplitude. (Two sines at the same frequency, for what it's worth, do always sum to another sine at that frequency, with a phase-dependent amplitude; fully out of phase, they can even cancel each other out.)
@@Felitsius Maybe what I said wasn't completely correct, or maybe worded badly. What I meant is: adding two sine waves with frequencies a and b produces the same waveform as multiplying the sine wave at the average frequency by the cosine wave at half the difference, i.e. sin(ax) + sin(bx) = 2*sin((a+b)x/2)*cos((a-b)x/2). It's not a sine wave by itself, that's true. You can see for yourself if you use Audacity to create two tracks, generate a tone of 100Hz in one track and a tone of 101Hz in the other, then select both tracks and mix them down to one track to get the addition; what you'll see when you zoom in is a sine-like wave whose amplitude envelope is itself a slow (co)sine wave.
@@daanwilmer Thank you for your answer! I'm sorry, but I think you missed an important detail. I'm coming from a passionate-hobby sound-design background, so let me explain where I see the mistake in your assumption. What you say about adding the two sine waves is true, it just doesn't represent what was asked in the question. What you did is basically just playing two different notes with the same amplitude at the same time, which is not what the question asked.

The question asks what happens if you fade an E (659Hz) in and out at 440Hz. This concept is called AM synthesis (amplitude modulation). The important detail here is that the modulating oscillator (sine wave) only affects the amplitude of the modulated oscillator. At lower modulating frequencies this is perceived as a tremolo effect; above around 30Hz we start perceiving the modulation as pitch. Another important factor in the equation is the difference in amplitude between the modulating oscillator and the modulated oscillator. If you amplitude-modulate 659Hz with 440Hz at a relatively low amplitude, the affected oscillator will start to develop one lower harmonic at 110Hz and some higher harmonics at multiples of A and E (I think maybe also B and F; it was a bit hard to distinguish in the spectrum analyzer, as it is logarithmic and at higher frequencies the peaks are super close to each other). Since the fundamental is now at 110Hz, technically it would be an A, but as it is much less present (as are the other harmonics) it will still be perceived as an E, just with a different timbre. If you keep increasing the modulating amplitude, at some point the 110Hz becomes present enough that the sound is perceived as an A.

So yes, I think Adam's demonstration was wrong and his answer incomplete. But I think yours is wrong as well :) I hope my explanation made sense.
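The identity this sub-thread keeps circling can be checked numerically. A minimal pure-Python sketch (the 100 Hz and 101 Hz tones mirror the Audacity experiment described above; the sample rate is an arbitrary choice):

```python
import math

# Verify the sum-to-product identity behind "beating":
# sin(2*pi*f1*t) + sin(2*pi*f2*t) == 2*sin(2*pi*(f1+f2)/2*t) * cos(2*pi*(f1-f2)/2*t)
fs = 8000                  # sample rate in Hz (arbitrary)
f1, f2 = 100.0, 101.0      # the two tones from the Audacity test above

max_err = 0.0
for n in range(fs):        # one second of samples
    t = n / fs
    lhs = math.sin(2*math.pi*f1*t) + math.sin(2*math.pi*f2*t)
    rhs = 2 * math.sin(2*math.pi*(f1+f2)/2*t) * math.cos(2*math.pi*(f1-f2)/2*t)
    max_err = max(max_err, abs(lhs - rhs))

print(max_err)             # differs only by floating-point rounding
```

The two sides agree sample-for-sample, which is why the mixed-down track looks like a 100.5 Hz wave inside a slow envelope: you hear beats at the 1 Hz difference frequency.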
Wow. Was browsing youtube because it's too effing hot to do anything else, and the first listen of that bridge: instant goosebumps. Thank you, Mister Neely
I think your “E at 440hz” experiment wasn’t quite optimal, because of that piano sample. It has a little percussive sound at the beginning which produces much of the 440hz tone. Could it be retried with a sine wave?
@@kruksog probably, yeah. There would probably be a “click” when the sine wave resets. The question mentions fading it, I wonder if it’d work that way.
@@Wyattporter You could avoid that by modulating the amplitude of a constantly playing sine wave rather than starting and stopping a sample over and over. This technique is called amplitude modulation, and you've probably heard of it in the form of AM radio, which uses a carrier frequency in the radio band and modulates its amplitude (raises and lowers its volume) according to the signal that is to be broadcast. You could accomplish this very easily with a basic modular setup by running a 440Hz sine wave from an oscillator through a voltage-controlled amplifier and then using whatever E signal you wanted as a control voltage for the VCA.
I like when we get cool snippets of you playing something more technically complex. I'm very used to watching videos where you describe theory and play in a demonstrative way to reinforce learning concepts, I almost forget that you're a very talented performer!
4:33 If you're looking for straight-up "fading a thing in and out at 440Hz", the thing you're probably looking for is amplitude modulation. It actually does produce some kinda cool results with "sideband frequencies"; look into ring modulation too, it's nearly the same thing. More synthesis territory than music theory, but of course synthesis theory is a form of music theory, innit. You don't get the same effect by just playing a piano sample back 440 times per second, because you're just repeating the exact same waveform: your "E" sound is being re-started 440 times per second, whereas if it were just going on at its 329.63Hz or whatever and being brought in and out 440 times per second, each time it's brought in it would be starting at a different point in its cycle. Also, I think Adam has a video that talks about sideband frequencies?
Does your a synesthesia change over time, or in context? Like “blue” is C now, it was more purple before? Or is the color consistent? If C is blue, is it always blue regardless of the mood of the piece (like sad, or thumping) or might it change for you living inside a piece of music vs in isolation? That’s a lot. I’ll take your answer off the air. I really like your channel.
As a synesthete, my experience is that, generally speaking, notes remain the colours that they've always been. But personally, I started on an instrument where you play in a lot of sharp keys, and when I began to play jazz piano (where you run into a lot more pieces in flat keys), I found that reading (for example) a Db instead of a C# would be a different colour. My rundown is: C = white/cerise (this seems to be the only key which changes, and it is dependent on the mood of the piece); D = mid blue; E = yellow; F = blue-green; G = mid green; A = red; B = pale blue
I don't have music/colour synaesthesia, but I thought you might be interested in this anyway! Mine is colour/grapheme, so every letter and number and some other things have colours for me. The colours have been completely consistent throughout my life, and a select few do change slightly in certain circumstances. 2, for example, is red for me, but when paired with yellow numbers (like 4), it looks pink. Font, emotional connection to the words, etc, don't affect it at all. If anyone is reading this and has music/colour, I'd love to know what it's like to hear microtones. When I see non-letter/number symbols in text, like &, %, ~ etc, they have no associated colour. @ takes on the colour of "a", though. I'd also love to know if music/colour synaesthesia works effectively like perfect pitch. Either way, I've always been super jealous of this type, it seems utterly magical :)
Someone with perfect pitch: "Please comfort me about losing perfect pitch."
Adam Neely: "Don't worry, you're going to die anyway."
Hahahahahaha
So true
E.H. agreed.
Welp... He is not wrong tbh...
Incidentally, we call Chopin "Choppers" up here. He suggests you master your instrument. He says zzzzz
I imagine musicians who accidentally develop it when they're young and never really use it eventually lose it (classical musicians). I'd be very surprised if actual play-by-ear musicians and composers like myself actually lose it. More study needs to be done on how perfect pitch is developed and potentially lost, though.
Band instructor: "Everyone play a C"
Me: Frantically increasing tempo in Ableton on my laptop
an
@@Kayuk no
yes!
en
If you can't train yourself to press the piano key several hundreds of times per second to produce the note, you're not a real musician
Century old composers be like: "Fellas, is it gay to resolve a chord progression on a weak beat?🤔🤔🤔"
pretty sure gay would mean happiness back then, which is funny because the question will get kinda the same answer anyway
Hell yeah this is the comment
@@nickkellam9155 stfu
"if you have your wig on, then's not gay"
"Ooh, it's totally gay, darling. You go, girl."
**wink**
Adam, I appreciate you. Thanks. 😁
Hello nahre my name's tom and I appreciate Adam
♥️♥️🌹💜
Omg my favorite pianist!! Hi!!!!!!
I appreciate both Adam and Nahre. You inspire me. Thanks
Nahre you are great, i appreciate you and your work. Thanks
The song Yellow, by Coldplay had the following chord progression:
C,C,C,C,C,C,C,C,C,C,C,C,C,C,C
it's all yellow
It's actually in B
@@umaroxp5207 yeah, but why let a fact ruin a good joke, eh?
@@umaroxp5207 It's in C if you play it fast enough
@@violetcitizen Excellent.
@@umaroxp5207 ok, nerd
The “STELLA” scream on the b13 is absolute perfection.
It's pretty much bang on.
I tend to think the "big 7" or "flat 7" ("dominant"?) is a strong musical analog to a yell of some urgency, but now I might have to rethink that a bit.
*STELLAAAAAAAA!!!!!!!!!!!!*
Imagine a whole song being played fast enough to be an instrument, and that song is also made up of instruments made by speeding up entire songs into pitches. Fractal music
This is all music was and ever will be
It's the Mendelssohn set.
I believe Adam has a video on this. If I'm not mistaken, he took the whole audio of Giant Steps, repeated endlessly and sped up to ridiculous BPM counts, to obtain the pitches of the notes in the song, which is, you guessed it, Giant Steps. He even named the concept exactly as you did: fractal music.
Imagine sampling such music not knowing it, and then slowing it back down and realizing there's a whole piano concerto inside one note lol... Not actually very possible though
@@AntonMochalin it's possible with a high enough sample rate
8:21 - _"There's nothing quite like a deadline to get the creative juices flowing"_ - Jim McNeely
_"I love deadlines. I love the whooshing noise they make as they go by."_ - Douglas Adams
For me it's either the first or the second case. The latter one especially happens when I realize that there is no way to make it in the time given to finish a task, so I just say "fuck it" and hear the deadline wooshin'. Often I am hating myself afterwards, but that's the life of a master procrastinator.
Precisely what I thought. It’s a mix of both for me.
@@thedoublek4816 rushing until you finally say “fuck it” is relatable af
“Fingeritis” was my biggest problem for literal years as a hobbyist musician and actually made me avoid playing for a while. “The Advancing Guitarist” and “20th century harmony” (books from a video on your channel), plus learning drums, really helped with that and brought music back to being a less frustrating love again. For a while I felt super boxed in and habitually uncreative. Those books and this channel gave me so much insight and reminded me to think like a beginner as much as possible. Weird tangent from one word, but thanks for making such wonderful and insightful content.
I came to the comments because I wanted to learn more about the "fingeritis" cuz thats what I feel kind of stuck with right now. Do you have any tips on how to progress past that? Been meaning to check out 20th century harmony
@@SirNoxasKrad picking up another (different) instrument is a good idea, like the piano (Kiko Loureiro talked about that in his latest video)
Also, try practicing using only new scales or modes. You'll have to think more
@@SirNoxasKrad I've been going thru the same thing for a while now, and a few things that helped were:
I don't play it on the instrument. I listen to music and memorise the phrase or solo I would normally just figure out on guitar. Then I just imagine myself playing it (I don't always know the exact notes on the guitar). I just imagine the feeling of playing it; I don't think about any logic or anything, just imagining the feeling of that phrase, why it's beautiful, etc. Then I sing it for a while when doing other stuff, and only after a couple of days do I allow myself to play it on a guitar. I first deeply internalise the phrase so it isn't just muscle memory.
Second thing is I sing everything, literally everything. Harmonizing with house appliances or car horns on the street or anything.
I sing my favourite songs and accompany myself on guitar, I actually consider that practicing guitar. I don't sing because I want to be a singer, but because I want to be better guitar player.
Third thing is playing vocal melodies on guitar, preferably from memory. Or some easier solos, if you know them in your head.
The point is, we've been practicing our instrument so much it became our second nature, but we didn't work enough on our musicality so our body just takes over. But when you sharpen your inner sense for musicality it fights your body and your habits. Your mind is free.
I learned this because I've been playing and practicing guitar for 12 years and I realised I wasn't very free. On the other hand, my best friend never practiced, literally never (he had a few piano lessons as a kid but never stuck with it). He just played melodica or piano when we were drinking and messing around, and he could play the best solos and improvised melodies, so good and fun and free. His only form of practice was listening to music and singing it. It didn't matter that his technique wasn't the best, because his solos were so creative and free.
Remember, there is a difference between being a good musician and a good instrumentalist.
100% thought that said "fingertits"
Omg I just realized that "Stella by starlight" has the same chord progression Freddie Mercury used in the bridge of "Take my breath away" by Queen. About 3 minutes into the song if anyone is interested, it's the same key as well. I wonder if that was where he got the inspiration from!
Good find! ... See/hear also: the Chopin Prelude No. 20 Adam used at 2:10 is also at the start of Barry Manilow's 1973 "Could It Be Magic". Beautiful, at ua-cam.com/video/Vc5XtkZSH-Q/v-deo.html
How observant 👏🏻👏🏻👏🏻
Excellent observation. I think it's quite obvious that Freddie had deep musical knowledge; actually, some of his music is profoundly classical. This might be explained by the fact that he was of Eastern European origin, where people have much more culture than in the West. Education is totally free in those parts of the world.
Totally fits the mood. The song is like an old movie soundtrack.
@@musamor75 what
The E turning into A experiment is even more insidious. Because you're switching the E on and off at 440 times per second, you're effectively doing amplitude modulation on your basic frequency of the E (329.628 Hz or thereabouts) with 440 Hz being the carrier.
This leads to two effects. First off, the E has its own envelope and probably doesn't go to 0 dB peaks - but your 440 Hz switching frequency does because you're doing on-off switching, so the most audible tone becomes 440 Hz and you hear the A. The second effect is, ring modulation also produces tones at (carrier + signal), so about 769 Hz and at (carrier-signal), so about 111 Hz.
If you were to take a look at the spectrum, there should be two smaller spikes at 111 Hz and 769 Hz along with a massive one at 440 Hz. There will probably also be repetitions of that "trident" higher up because of the fact you're modulating it by turning the signal sharply on and off (so you're modulating with a square wave, which has an ugly spectrum). I'd love to actually see a view from your DAW.
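The sideband prediction here is easy to test without a DAW. A rough pure-Python sketch, with 330 Hz standing in for the E (329.63 Hz) so every component lands on an exact 1-second analysis bin, and a square-wave gate as my stand-in for the on/off switching (these substitutions are assumptions, not what Adam's session actually did):

```python
import math

fs = 44100                 # one second of audio at 44.1 kHz
E, gate = 330.0, 440.0     # 330 Hz stands in for E4 (329.63 Hz) so bins are exact integers

def mag(x, f):
    """Magnitude of the signal's component at frequency f (single-bin DFT)."""
    re = sum(s * math.cos(2*math.pi*f*n/fs) for n, s in enumerate(x))
    im = sum(s * math.sin(2*math.pi*f*n/fs) for n, s in enumerate(x))
    return math.hypot(re, im)

# Switch a 330 Hz sine on and off 440 times per second (50% duty square-wave gate).
x = [math.sin(2*math.pi*E*n/fs) * (1.0 if (n*gate/fs) % 1.0 < 0.5 else 0.0)
     for n in range(fs)]

lower = mag(x, gate - E)   # 110 Hz sideband (carrier minus signal)
upper = mag(x, gate + E)   # 770 Hz sideband (carrier plus signal)
off   = mag(x, 1013.0)     # a frequency that should not be present at all
```

With a pure sine as the source there is no energy at 440 Hz itself (the big 440 spike in the video plausibly comes from the sample's percussive attack), but the carrier-plus-signal and carrier-minus-signal sidebands show up exactly as predicted.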
I knew I wasn't the only nerd who enjoys both signal processing and music theory! Thank you, sir.
Yes! I was looking for this comment ! It sounds pretty much like my square wave ring mod guitar pedal.
An oscilloscope would have been neat for that segment.
The ring mod and amplitude mod are very similar, it's just that the ring mod inverts the phase during part of the cycle, whereas the amplitude mod preserves the phase. So there will be a difference in tone depending on how it's mixed back with the original. And possibly some uglier overtones because modulation is never "clean".
In other words (if I'm understanding you correctly [please correct any misconceptions]), the way Adam performed the ring modulation actually contributed to the amplitude of the observed frequencies. If Adam applied a filter to the post-ring-modulation audio to emphasize the lower and higher of the three frequencies, we would hear a chord consisting of frequencies 111 Hz-440 Hz-769 Hz (which would sound pretty close to A2-A4-G5 with the G5 being a bluesy/flat seventh a.k.a. the 7th harmonic of A2).
Let's say that Adam instead used a starting tone of 110Hz, then the carrier-signal would be 330Hz, the carrier would be 440Hz, and the carrier+signal would be 550Hz. This would be a nice, crisp A major chord over E. if the initial signal had been 88Hz instead, then the triad would be 352-440-528 -- a standard F major triad.
Go signal processing nerds! In my opinion your answer about amplitude modulation is more correct than Adam's, however this is not what is going on here. Since Adam used a repeating sample rather than a volume control, the E was retriggered each time, and so the result is a complex waveform repeating identically 440 times per second. So its spectrum only contains frequencies multiple of 440Hz.
If using a single E and fading it in and out without retriggering it, on the other hand, we would indeed obtain amplitude modulation. But we would not hear an A at all: in fact, we would still mostly hear the E, but also the differential tones at all the frequencies present in the original signal, plus or minus multiples of 440 Hz.
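The "only multiples of 440 Hz" point follows from periodicity alone, and it's checkable. A sketch assuming a synthetic decaying sine as the "piano sample" (my invention for the demo), with a 44 kHz rate chosen so one repeat is exactly 100 samples:

```python
import math

fs = 44000                 # chosen so one repeat at 440 Hz is exactly fs/440 = 100 samples
f_E = 330.0                # 330 Hz stands in for the E of the video

def mag(x, f):
    """Magnitude of the signal's component at frequency f (single-bin DFT)."""
    re = sum(s * math.cos(2*math.pi*f*n/fs) for n, s in enumerate(x))
    im = sum(s * math.sin(2*math.pi*f*n/fs) for n, s in enumerate(x))
    return math.hypot(re, im)

# Fake "piano sample": 100 samples of a decaying 330 Hz sine,
# retriggered 440 times per second for one second.
chunk = [math.sin(2*math.pi*f_E*n/fs) * math.exp(-n/50) for n in range(100)]
x = chunk * 440

at_440 = mag(x, 440.0)     # the repetition rate: a strong A
at_E   = mag(x, f_E)       # the sample's own pitch: not a multiple of 440, so it vanishes
```

Any sequence that repeats exactly every 1/440 s can only contain harmonics of 440 Hz, no matter what is inside one repeat; that is why the retriggered E reads as an A.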
I love how at 5:45, when he plays the E at 64hz, you can hear the major third with the low C
Isn't the term _feminine cadence_ borrowed from Old French / Middle French poetry, where feminine nouns often ended in an unstressed syllable, while masculine nouns often ended in a stressed one?
I have no idea but that seems like a good theory
Whichever root it has, it's still essentially stereotyping, which the woke would claim is bordering on sexism/misogyny. That said, I don't advocate for changing it.
I'm pretty sure you are right.
Probably.
But that's not "woke", so we ignore those kinds of facts.
Look, I'm trying to "virtue signal" here, so stop bringing nuance, complexity and context into things, yes? It spoils the underlying message that I'm awesome.
@@markstanbrook5578 I'd argue the concept of masculinity and femininity are gendered concepts but have nothing to do with gender and even less to do with sex. After all, the 'woke', as it were, would argue males can be feminine and vice versa.
that chord progression is so beautiful man
im literally tearing up
Why is Adam so beautiful? Repetition legitimizes
M a d e m y d a y!
Models aren't hot at all, it's just that they upload daily
Repetition legitimizes
Repetition legitimizes
Bass man look gud:)
The way you describe how music works is so incredible. You are in the flow when you share.
Why does finding out Adam is a coaster nerd make me smile so much? Just hearing coaster manufacturers named on this channel made me giggle immediately!
Why did I first think this comment was about coasters for drinks?
@@int0x80 same here
coaster nerd LMFAO... iykyk
I never once thought I would meet another coaster enthusiast in this area of UA-cam, but it turns out there is a surprising amount of overlap. It's friggin' dope as heck.
@@MinkyBoodle44 lmao same
And in a stroke of musical genius, the explanation of how you change an E note to an A note begins at…4:40.
Bravo…
Adam : "Why are these chords SO beautiful?"
Me : "I don't know man, you're the music theory guy here you tell me"
Tell me the funny words magic man!
They are beautiful because of the 6-5 (Ab-G), 4-3 (Bb-Ab) and 7-6 (G-F and F-Eb) suspensions in the melody.
Adam: "I'm not a UA-camr, music is a lot more fun"
Also Adam: Has some of the most impressive video editing skills of all content creators on UA-cam.
4:36 Quick note; their question says "fade in and out", not "play". To me, this seems more like ring modulation, which is where you take the amplitude of some note, e.g. the note E, and then have some waveform applied to the amplitude of that note, with the wave at some frequency. At low, sub-audible Hz values, we hear a "tremolo" effect. At higher Hz values, we hear ring modulation.
The effect sounds like two tones going out from the original, center tone. So, if you have, say, E5 (659.25 Hz), and apply a sine wave to its amplitude with a frequency of 150hz, you hear a sum and difference tone, meaning you hear 509.25 Hz and 809.25 Hz together as the resultant tones.
You're right probably because I didn't read this
Yes, that was, what I was talking about, thank you so much for the explanation!
Yes. Because he is playing a sampled piano note, the faster it is played, the more we only get to hear the very initial part of the "attack" of the sample, where the hammer strikes the strings, without ever really getting to play the E pitch. Eventually the attack is cut to so short a duration that it is effectively reduced to a click. A hammer strike played 440 times per second is going to sound like A4. A pure E5 pitch faded in and out 440 times per second is going to produce some sort of modulated tone instead. ua-cam.com/video/DCPxe6P1KWo/v-deo.html
Otherwise known as Amplitude Modulation where the “carrier” is E and the “modulation” is 440Hz. Note that if the E is below 440, you (I think?) get a wrap around at 0Hz, and a 180 out of phase signal on the low end that is the absolute value of the negative frequency.
The other subtlety is that if your modulation is not sinusoidal (ie a triangle wave) its new spectrum should be as if you took the sum and difference with the E to the triangle wave’s original spectrum.
If you use impulses like Adam used, you’ll get some kind of square-wavy, odd harmonic mess tho
@@enginerdy This is mostly right, though there's actually a slight difference! Ring modulation doesn't preserve the carrier signal! If you simply take a signal and modulate its amplitude, and then feed out the direct result, the carrier signal is absent in the end result. Amplitude Modulation preserves it. That is a good thing to point out though!
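The carrier-preservation difference discussed here can be made concrete. A hedged sketch (330 Hz signal, A440 carrier, one second at 44.1 kHz; "AM" here means the classic (1 + m*signal)*carrier form, and ring mod is the plain product):

```python
import math

fs = 44100
f_sig, f_car = 330.0, 440.0    # 330 Hz stands in for E4; carrier at A440

def mag(x, f):
    """Magnitude of the signal's component at frequency f (single-bin DFT)."""
    re = sum(s * math.cos(2*math.pi*f*n/fs) for n, s in enumerate(x))
    im = sum(s * math.sin(2*math.pi*f*n/fs) for n, s in enumerate(x))
    return math.hypot(re, im)

t = [n / fs for n in range(fs)]                       # one second
sig = [math.sin(2*math.pi*f_sig*v) for v in t]
car = [math.sin(2*math.pi*f_car*v) for v in t]

ring = [s * c for s, c in zip(sig, car)]              # ring mod: plain product
am   = [(1.0 + 0.5*s) * c for s, c in zip(sig, car)]  # AM: carrier plus modulation

ring_car = mag(ring, f_car)          # carrier is absent after ring modulation
am_car   = mag(am, f_car)            # carrier survives AM
ring_lo  = mag(ring, f_car - f_sig)  # both techniques produce the 110 Hz sideband
ring_hi  = mag(ring, f_car + f_sig)  # ...and the 770 Hz sideband
```

Both outputs contain the sum and difference sidebands; only the AM output still contains the 440 Hz carrier itself, which is the distinction made in this reply.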
Your example of how an E becomes an A if you play it enough times a second was very cool
Spatial Audio is also an important part of classical music, especially with extended ensembles. One cool example is “The Unanswered Question” by Charles Ives, where the strings are meant to be back stage and the brass and woodwind are meant to be in the rafters of the concert hall. This gives the piece a really ethereal sound. It’s a super cool topic, I bet Adam could do a fab video on it, similar to his “UA-camrs react to Experimental Music” video.
I think Adam has actually mentioned that particular piece before (can't remember the exact video sorry) so it would be awesome if he did a whole video essay on it!
Henry Brant wrote a lot of spatial music, for example: ua-cam.com/video/0bWeDkfBp8U/v-deo.html score: issuu.com/theodorepresser/docs/wwcf_score
Nice! I’ll give it a listen.
There's a whole field of psycho-acoustics around placement of sound in space. This dude in San Francisco does "sound sculpture" in an environment of 176 audio speakers en.wikipedia.org/wiki/Audium_(theater)
but it's not really a huge part of classical music...
I really like your UA-cams. I know some of the music things you talk about but don't know what causes them. My favorite is the "Misty Chord". I got that message at work and my workmate made me play the UA-cam. We have a piano at work and she is always playing the first 2 bars of "Misty" with that chord. She can't get enough of it. Thanks for this and all your other UA-cams.
I appreciate your enthusiasm, makes everything more fun.
That experiment on frequency was fascinating! I guess I knew that the E would turn into an A but I've never seen that demonstrated.
in synthesis, there is a feature called oscillator sync, in which a tonal oscillator can have its waveform position reset according to a second oscillator. so for example, one oscillator could be playing E but be reset 440 times per second. the result varies depending on the frequencies involved: it could sound like new timbres, and/or a blending of notes.
This is exactly what I was thinking. If you take a 440 tone and repeat it at a rate faster than the sample's length but below 440, you aren't getting a pure tone; you're getting a mixture of the two.
@@euronomus The perceived tone would definitely still be whatever the oscillation frequency is at. A mixture of pure tone just determines timbre, but when you use it the tones become indistinguishable
@@btat16 you hear both notes in oscillator sync/reset
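Hard sync is easy to sketch, and it shows why this sub-thread can disagree: the reset makes the waveform exactly periodic at the master rate, so the spectrum sits only on multiples of the master frequency, with energy clustered around the slave's pitch (which is why it can read as "both notes"). A rough pure-Python sketch, with 659 Hz standing in for E5 (659.25 Hz) and a sample rate chosen so the master period is a whole number of samples:

```python
import math

fs = 44000                         # chosen so the 440 Hz master period is exactly 100 samples
f_master, f_slave = 440.0, 659.0   # 659 Hz stands in for E5 (659.25 Hz)

def mag(x, f):
    """Magnitude of the signal's component at frequency f (single-bin DFT)."""
    re = sum(s * math.cos(2*math.pi*f*n/fs) for n, s in enumerate(x))
    im = sum(s * math.sin(2*math.pi*f*n/fs) for n, s in enumerate(x))
    return math.hypot(re, im)

# Hard sync: a sine "slave" oscillator whose phase is reset to zero
# every time the 440 Hz master completes a cycle (every 100 samples here).
x = []
phase = 0.0
for n in range(fs):                # one second
    if n % (fs // 440) == 0:
        phase = 0.0                # master reset
    x.append(math.sin(phase))
    phase += 2*math.pi*f_slave/fs

at_master  = mag(x, 440.0)         # repetition-rate fundamental is present
near_slave = mag(x, 880.0)         # harmonic of 440 nearest the slave pitch is strong
at_slave   = mag(x, f_slave)       # the slave's own 659 Hz is not in the spectrum
```

So the output pitch quantizes onto the master's harmonic series while the timbre tracks the slave, which is a reasonable reading of both sides of the disagreement above.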
i think a cleaner way to do what the question was asking for would just be amplitude modulation with a carrier (of whatever waveform made you happy) at E Hz and a 440 Hz signal
11:22 Colours of chords. This has always fascinated me. I do, though, have to be careful about the person I bring up the subject with; some people don't get it and just think you're weird. Anyway, with me it all started with seeing A major as RED. D is yellow, D# is brown, E is dark brown/black, C yellow-white. Complex chords are tartan! Diminished chords are 3D pink.
I miss the old theme "Question and answer time with Adam Neeeelyyyyy"
Yah!
ya!
bass
You blew my mind with the 440Hz demo
Are you really a musician first, or do you just want to avoid boxing matches against the Paul brothers?
This comment is great!
don't we all?
Let's be honest: all sane humans want to avoid those mooks as much as possible, at all times.
@@fhqwhgads1670 I’d fight them for millions tho. Pretty good shot at not having long-term damage for 10 minutes of “work”
@@johnfrenette I'd fight them for $20
Your knowledge is simply so vast and your video production going into the 4th wall and coming out again is simple yet ingenious... Great job Adam!
That original Stella bridge is bonkers; I've always learned to play the last two chords as bVII7 to Imaj7 (over its root), but that ivmiMaj7 to I/iii is just unreal. So so beautiful. I'm never playing the Real Book version of this again lol
I am a musician and have been doing it for a while but the beauty of it all is the ability to always learn something new!! Thank you Adam!
11:25 as a child I always connected certain chords and songs with certain colors. When I tried to discuss it, people thought I was strange. Its feels good to know I'm not the only one
I really like seeing something that’s very intuitive to me because of my experience in modular synthesis explained by Adam. Playing the note E at 440 Hz is in synthesis simply called amplitude modulation and can be easily done on a modular using two oscillators (one at E and one at A) and a VCA. Produces great timbres and produces frequencies of both the sum and difference of the two frequencies :)
Was going to comment this! ^
Pretty sure the horn counter melody in Stella by Starlight wasn't meant to be triumphant but to evoke an undercurrent of dread. It's a creepy movie.
“music and pitch is just how fast sth is happening” ily
The outer part of our ears (the pinna, the whole outside part of the ear) filters a sound differently depending on its direction. Our brain then uses those timbre differences to help locate sounds. I read something about it a long time ago, anyway; something about comb filtering in the brain. So that can be used in reverse to encode location in a track for playback on a stereo (two front speakers) system. The limitation is that the listener must sit in a specific spot and keep their head facing forward. One listener. Only. No headphones lol
Playing bass with/for good people (in a room) is one of the best things! All the instruments are beautiful and wonderful but bass stole my heart
Damn, I knew you loved music, but a whole video on the physical attraction of this one chord? I am shocked
Oh look a checkmark
This feels like a fantano comment
Yo Brad Taste, didn’t expect you here love your videos man.
wait you watch Adam?
What's up checkmark
10:50 *use both tensions in succession instead of choosing just one,
but expect that the 9 to b9 tends to sound more “correct” than the b9 to 9
I'd love to see your analysis of Carnatic or Hindustani music, or just generally non-western music, Thanks for the consistently awesome content!
1:04 is the coolest transition I have ever seen in my many years on the internet, even months later. Flawless. Kudos to you for such an amazing bit of editing, and thank you for the hard work it must have taken to make it for us all to enjoy.
(256/60) • 103 = 439.46 Yep! checks out :)
My wife is a B3 organ jazzer. She’s had perfect pitch since childhood. At 63 she still has it. Chromesthesia is what she has: seeing sounds as colors in the mind’s eye. And she can still tell when my guitar’s B string is slipping flat.
Great video as usual 👍
Damn u can watch fast
Naw he's a prophet. He knows the video will be good even during the first minute
@@kirjian you know the truth
When they ask me why I procrastinate, but they don’t realize I’m slowly entering my flow state
E 440 times per second sounds like A. That was insane. Living
You delight and inspire me even though I don't understand half of what you're saying.
Such is the power of music.
You made an A of Es.. that was cool to see. That's neat that you can play any note as another note lol
Playing music with only one note went to another level
This is how FM (Frequency Modulation) synthesis happens - the engine behind the old Yamaha DX7.
I think by “fade in and out” they’re talking about ring modulation
Or I guess amplitude modulation but like... same thing
It would still predominantly sound like an A, so the principle "pitch is how many times something happens a second" holds.
I was, but I don't think the results would've varied much
@@ilyanoeteuscher6870 it actually sounds pretty different. It’s used in a lot of synths because it makes weird timbres, I wish he kinda went into it
@@ilyanoeteuscher6870 so if you did fade E in and out at 440hz you WOULD hear the interval...
BUT (and here’s where it gets funky)
You would ALSO get........ a G?
I really don’t understand the interval relationships, but it goes into the undertone series or something idk
question: help! which chord should i use?
adam: whatever makes you happy.
i'm living by this from now on
Fading E in/out at 440 Hz gives a different result than the experiment you did. If it was E5 = 659.25Hz, then you should be getting 659.25 + 440 Hz and 659.25 - 440 Hz. It should sound like what a ring modulator effect does (except that one is four-quadrant, while fade in/out is two-quadrant: it goes only to zero volume and never negative)
Love the camera effects where you are on screen within screen and then back out again!
"There is nothing quite like a deadline to get the creative juices flowing"
- Me, working on my thesis
On "Flow", the psychological concept Mihály Csíkszentmihályi use to quantify an aspect of joy, you mentioned a deadline, IE, a challenge. Part of it is having a challenge that matches the reward. In your case, the intrinsic accomplishment of producing creative work. Watching TV tends to be inherently boring as the reward lacks challenge. Video games tend to provide more flow as there's involvement and clear challenge. IE, one knows what to do, how to do it, there's challenge/skill involvement, no distractions (focus). A chart on Wikipedia suggests relaxation/control and arousal/being energized lead to it. So plan what/how, execute with skill, focus.
Do people with audio-visual synesthesia actually *see* colours when hearing sounds? Like when you mentioned yellow and blue, do you get the 'feeling' of those colours? Or do you actually like... see them somehow?
I think Adam was more so talking about the actual letters like C and B, so a letter-color synesthesia. Idk if audio-visual is a thing but it probably is
I'd imagine it's like thinking about (or imagining) a colour. You don't see it overlayed on your visual field. But you still can kind of see it in your mind.
Or like if I imagine my house. I don't see it in the same way that I see what is actually in front of me but I can see it in my mind and visually focus in on details etc.
I'd say it's analagous to imagining a song in your head. And in fact smells and tastes often have sound to me and it's very much the same sort of thing as how I would usually imagine sounds.
Adam's way of sharing his passion for music is SO beautiful
There are some songs that use binaural audio quite brilliantly in their conception, like Chrome Sparks' "Marijuana" by having the intro fade in and swirl around, gradually speeding up. The effect is brilliantly psychedelic.
I’m first and foremost a musician, but my hobby is coasters. So awesome that you share that too!
Your synesthesia crashes so hard with mine: Yellow = E ; Blue = C ; Bb/D is like mint ice cream on a bed of lettuce. We should do something like "5 synesthetes compose over the same painting (or any colorful visual thing)".
To me yellow = 5th degree (shines like gold aka dominates), blue = 2nd degree
Tonic is white obviously
D is yellow for me lol (the note and the scale)
@@ZachMcCordProg E is the yellowest yellow for me
YOURS MATCHES MINE. We're besties now
@@shadowbunny7892 oh yeah!
I’m happy he got my request on this series
Musician + Coaster Enthusiast = God tier person
Absolutely
You are one of the only channels that entertains me and educates me
0:58: But is a jazz-based analysis really appropriate? It's clearly inspired by late Romantic symphonic music. Sounds like a Rachmaninoff.
The guy is a jazz musician... He's going to compare it to jazz theory, even if it isn't appropriate.
@@wolfgangamadeusmozart6457 and why wouldn't it be appropriate? The tune ended up in the Real Book, after all.
@@hugobouma I don’t like calling it jazz theory tbh because it’s just music theory. Classical musicians use the same stuff, like Ravel and Debussy. Ravel being the master of the 9ths and augmented chords. And Debussy using extended harmony which is a fundamental concept of jazz. Despite that people like to distinguish the two theories even though they are one. Jazz ain’t nothin special to be categorized into its own genre.
If you modulate the PITCH of a note, such as an E, at the frequency of another note, such as an A, you get Frequency Modulation (FM) synthesis which is exactly what the Yamaha DX7 introduced to the world. One "operator" is simply a sine wave generator that generates sine waves at audio frequencies and another operator is a sine wave generator that is used to modulate the pitch of the first generator at audio frequencies. When you do that, you can get crazy complex harmonic series depending on the harmonic relationships between the operators. Very cool.
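A minimal sketch of that two-operator idea (the carrier/modulator frequencies and modulation index are just example values; real DX7 operators also have envelopes and ratio controls, and technically use phase modulation):

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs
fc, fm, index = 440.0, 220.0, 2.0   # carrier, modulator, modulation index

# One "operator" (the modulator) drives the phase of the other (the carrier):
modulator = np.sin(2 * np.pi * fm * t)
output = np.sin(2 * np.pi * fc * t + index * modulator)

# Sidebands appear at fc +/- k*fm with Bessel-function amplitudes, so this
# 2:1 carrier/modulator ratio yields a harmonic spectrum on multiples of 220 Hz.
```

Change the fc:fm ratio to something irrational-ish (like 1:1.41) and the sidebands stop lining up harmonically, which is where the classic metallic DX7 bell tones come from.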
Relieved that E didn't become "All Star" by Smash Mouth ("somebody once told me") haha. When you start playing with Ableton I get fractal flashbacks
I think that the more you practice and the more you work on your craft, the quicker and more easily you can enter a flow state. I once wrote a very long novel. I worked on it every day. In the beginning, I would only enter this flow state occasionally. But after several months, it happened just a few minutes after I sat down. And it happened almost every day.
Feminine and masculine cadences seem to just follow the terminology in poetry (especially French poetry), where "feminine" endings (ones that end with a silent off-beat schwa vowel) alternate with "masculine" endings (ones that end without silent syllables). Within that context, there's nothing pejorative about it outside of the low-hanging fruit of "feminine = weak". (And if we follow that train of thought, we'd probably need to unpack why being on the beat is "strong" while being off is "weak".) Feminine endings are as important to French poetry as masculine endings are.
Weak/strong is based off physical strength, it was never meant to call an entire gender weak overall. I wouldn't even say feminine cadences are named so because they are "weak", they just remind people of feminine beauty.
There's no problem with applying gender to things as long as it doesn't come from a place of misogyny/misandry.
@@DoubleOhSilver Frailty (or perceived frailty) is an integral part of traditional feminine (and also female) beauty across pretty much every human culture. So the idea of a trochee being female and an iamb being male is certainly "problematic" insofar as negative associations and connotations are problematic.
But yeah, I agree with the fact that the terminology is innocuous. I still teach it to my students, since I find it a bit more imaginative and effective than the alternatives. Another context where masculine / feminine appears is as labels for sonata themes, where "masculine" = extroverted and proactive while "feminine" = introverted and reactive. My students are mostly aware enough to know that gendering nonsexual things is just a matter of analogy. For the younger ones (and also the more impressionable ones among the girls in my studio), I offer a caveat as needed.
A little on 'Flow State' - I had spent several years working with Kenny Werner's book 'Effortless Mastery' when I was invited to participate in a flow state study at Goldsmith's University in London.
They hooked me up to an EEG and asked me to play unaccompanied improvisation on tunes I was familiar with. I used steps developed with effortless mastery and played, and the scans showed little/no activity in my pre-frontal cortex.
I discussed it further with a neuropsychologist, and came to the conclusion that flow state is / can be entered by shutting down brain activity in the pre-frontal cortex -- for example, 'complex behaviours including planning', self-judgement and analysis -- and connecting directly with the wider brain architecture.
I'm still using these hypotheses to structure my practice and playing routines, though I haven't had the opportunity to explore in more detail, yet....
I can handle dying but the thought of becoming anymore tone-deaf than I already am is terrifying
Adam, I think the question at 4:33 is actually describing ring modulation. It would be a sustained note but "faded in and out", or the way I interpret it: turned up and turned down. Your point still stands. The overtones would heavily emphasize the 440 Hz that it's "fading" at.
Alan Parsons did a quadraphonic mix of Pink Floyd's Dark Side of the Moon in 1973, which could technically be a very early example of 8d music. The sound effects and instruments all have really cool placements.
I played horn for 15+ years, and therefore of course I always can appreciate good horn parts. Thanks Adam!
What the muscle memory question reminded me of was the "When you learn a riff and put it in everything" Daniel Thrasher video. Obviously you should not be guided only by muscle memory, but just like "the Lick", these are common phrases. It's like speaking: you don't always have to invent a new word to describe something, though repeating other people's ideas can get pretty boring pretty quickly. So I'll be really helpful with this totally exact answer: just find the balance between old and new.
As a language educator and music enthusiast, it's uncanny how similar the art of learning both languages and music are. Common misconceptions exist in both fields, for example that native speakers or virtuoso players are consciously choosing and controlling every motion they make, whereas in reality, both are calling on "chunks" of drilled and deeply-rooted patterns that we string together in coherent ways. Like with any musical instrument, the path to fluency is less to do with grammar exercises and aimless production activities, and more with repetition and drilling of useful patterns and constructions.
Probably the best music channel on YouTube! Keep it up!
Me watching you warp an E to 440Hz like it's a new thing
".... so the new guys haven't watched the all star video huh"
Adam, I wanted to study music but my parents talked me into something more "practical", Electrical engineering. :-(
Now that I'm retired, I can study music like I always wanted to. Thanks for the inspiration.
Bassist AND roller coaster enthusiast? Am I Adam Neely?
Sorry Adam, but I think there's more to the E and A question. He said "fade in and out", not "play it at 440 beats per second", and the big difference is that the soundwave wouldn't start over again; it'd be in a different position every time the amplitude ("volume") went up. So it would sound like a different thing, maybe still an A, I'm not sure about that. I'll try to get into it!
So yeah, nobody asked but I did some experiments anyway, and the results are quite interesting! Here, have a listen:
I started with this simple sinewave tuned at an E4: drive.google.com/file/d/1HBT08fdN0cghWxiJeompqX0frswF6wKM/view?usp=sharing
which on a spectrum analyzer looks like this simple peak at 329Hz: drive.google.com/file/d/1KM4U4xx-M9lNYBE2Ga1nKI6UydD6MN7A/view?usp=sharing
But THEN...
I applied a sine-shaped amplitude modulation at 440Hz and this happened: drive.google.com/file/d/12CTepzSOUGcuDjnLh7CXkrHx3gdM8QxZ/view?usp=sharing
There's two extra tones now! The whole thing now sounds like a strange A, in fact, the two extra tones emerge at approximately an A2 and a C#5 (it's not exact because of our tuning system quirks), and look like this: drive.google.com/file/d/1iv5QianoaDHuZEn6YwDfSgQmrQfBcU0e/view?usp=sharing
Now you could say "yeah sure, that sounds cool and all but real instruments don't sound like that, what if you did it on, say, a piano?", and in the extremely remote case you were actually thinking that, I've got you covered!
Here's a clean piano sound (using 4Front Piano VST): drive.google.com/file/d/1eJDHXBwjjAWAIK93sFJQYhxq40sGjHtF/view?usp=sharing
It looks like this: drive.google.com/file/d/1KIjZkMSQj7Gyw60liEmlqEt0fcysMQuM/view?usp=sharing
And then, the modulated version: drive.google.com/file/d/1tpy0cG2YyvP3wGXoEDK6ZbQi9Eb7HfYO/view?usp=sharing
Now it looks like this: drive.google.com/file/d/1q0a3C3EeareeBd_kwlnGcQ5TI3e7HMWU/view?usp=sharing
... then I just searched "amplitude modulation" on YouTube and it immediately dawned on me that this is actually how AM radio works. It's in the damn name! But oh well, staying up until 4:30 AM listening to sine waves was kinda fun anyway lol
Thank you very much for reading this far! I really appreciate it :)
Re binaural audio, here's a bit of a summary about how stuff works:
There are four basic ways we work out where sounds come from. The first two are called the interaural time difference (ITD) and the interaural level difference (ILD). If a sound is on your right, it has to travel further to reach your left ear, so it takes longer. Also, your head casts a shadow so it's quieter in your left ear. These two things alone are pretty good for working out where sounds are. The ITDs work better at lower pitches and the ILDs work better at higher pitches, because physics, but there's a bit more nuance to it than that.
The problem with these interaural cues is that there's a lot of symmetry. There's no way to tell front vs back vs up vs down. This leads to what we call the Cone of Confusion, an infinite cone of points that all have the same binaural cues.
The next thing that helps us resolve this is spectral cues. Those flappy things on the side of your head act as dishes that filter sounds differently depending on the direction they arrive from. For example, a sound in front will reach your eardrum with more high frequencies than a sound from behind. Over time, your brain learns the relationship between different sound spectra and different locations. If you swapped your ears for someone else's, you'd be confused about where stuff was. But after about 2 weeks, you'd be pretty good again. The spectral cues are good, but they tend to work better for things with a spread of frequencies, particularly higher ones.
The final way we work out where sounds are in the real world is motion. Say you hear a sound that's either directly in front of or behind you. It reaches both ears at the same time and level. Now say you turn to the right. If the sound was in front of you, it'd now be to your left. If it was behind, it'd be to your right. By seeing how sounds move relative to your head as you move your head, you can pinpoint the correct location.
Virtual audio stuff in headphones can replicate the ITDs and ILDs. You can even use some generic ears to replicate the spectral cues to an extent, but it would be so much better with your own ears. However, headphones can't generally replicate motion cues (unless the sounds are being generated in some complicated virtual acoustic environment).
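For the curious, the ITD part can be sketched with the simple far-field approximation (not the fuller Woodworth head model, and the 18 cm ear spacing is just a ballpark figure I'm assuming):

```python
import math

def itd_seconds(azimuth_deg, ear_distance_m=0.18, speed_of_sound=343.0):
    """Approximate interaural time difference for a distant source:
    ITD = (d / c) * sin(azimuth). Zero straight ahead or behind,
    maximal when the source is hard left or right."""
    return (ear_distance_m / speed_of_sound) * math.sin(math.radians(azimuth_deg))

print(round(itd_seconds(90) * 1e6))  # hard right: ~525 microseconds
print(round(itd_seconds(0) * 1e6))   # dead ahead (or behind): 0
```

That half-millisecond maximum is why ITDs only disambiguate lower frequencies: above roughly 1.5 kHz the wavelength gets shorter than the path difference and the phase comparison becomes ambiguous, and note the 0 for both front and back is exactly the cone-of-confusion symmetry described above.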
Thanks for this explanation! Do you study this kinda stuff? I just ask because it reads like someone with a deep understanding of the field.
@@anachronismic You're welcome, and thanks :)
Yeah, I did a PhD in sound localisation, and am a hearing researcher, though in a different area
@@andymcl92 Makes sense. Have just enough of a tangential understanding in signal processing to read between some of the lines you provide, it's neat to think about for sure.
I always wondered why Chopin's Prelude no. 20 was my favorite prelude, and now I know!
Notably the waveform of the E at 440 Hz will look like a chopped up wave with a jump every 1/440 seconds. Hence the sawtoothy sound?
Yeah, since the E of the sample sounds lower-pitched than 440 Hz, it's not actually being played at all. When the rate at which the samples are retriggered is less than the pitch of the E (presumably ~330 Hz), it should sound less sawtoothy, though there would still be sudden discontinuities, so maybe it would still sound somewhat sawtoothy.
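A little numpy sketch of that retriggering idea (the E4 frequency and the hard-restart model are my assumptions): restarting the wave every 1/440 s forces a 440 Hz period on the signal, jumps and all, so the E's own pitch disappears from the spectrum.

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs
# Restart the "sample" clock every 1/440 s, like retriggering in a sampler:
phase = t % (1 / 440)
retriggered = np.sin(2 * np.pi * 329.63 * phase)   # an E4 that keeps restarting

spectrum = np.abs(np.fft.rfft(retriggered))
freqs = np.fft.rfftfreq(len(retriggered), 1 / fs)
loudest = float(freqs[np.argmax(spectrum)])
print(loudest)  # 440.0 -- all partials sit on multiples of 440, the E is gone
```

The jump at each restart (329.63/440 is about 0.75 of a cycle, so the wave never lands back at zero) is exactly the kind of discontinuity that fills the spectrum with upper harmonics, hence the sawtoothy sound.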
Great teacher of all things musical. Subscribed! Loved the E 440 times demo to become A440! And great use of music graphics. Thanks for your work!
It's hilarious how much all those early film scores sound like Rachmaninoff
ikr, that rich Romanticism carried itself so far into the 20th century damn
@@saqlainsiddiqui1744 I think the specific reason is that Rachmaninoff spent the last part of his life in Beverly Hills and I think he was friends with a few Hollywood composers but I'm not sure.
@@s1nd3rr0z3 Yeah I think you’re right, his emigration to the US definitely had an impact on his contemporaries.
I think it also went the other way too - the works he wrote in the US are sometimes really different from his ‘usual’ style - the often-forgotten 4th piano concerto is a good example of that I think
I have to say, this was so entertaining, enlightening, educational, but just really thoughtfully compiled. thanks.
I think your Ableton demonstration is wrong, because it stacks samples instead of manipulating sine waves. Here's why:
If there's one thing I learned from my course in signal processing (honestly I hated that course, so there's not much more), it's this:
When you add two sine waves with different frequencies, the result looks like a sine wave at the average frequency whose amplitude is modulated by the difference in frequencies. To illustrate: if you add a 439 Hz signal and a 441 Hz signal, the resulting signal is equal to a 440 Hz signal that fades in and out at 2 Hz.
Turning that around, if you take an E, let's say E5 at 659 Hz, and fade that in and out at 440 Hz, that should theoretically be the same as the addition of two notes at 659 Hz - 440 Hz = 219 Hz, or a slightly flat A3, and 659 Hz + 440 Hz = 1099 Hz, or a slightly flat Db6.
After much fuss I tested this in Audacity, and indeed, when I multiply a 659Hz signal by a 440Hz signal, I get the same result as when adding the 219Hz and the 1099Hz: a high beep and a soft low hum (though the softness could also be due to my laptop speakers being sh*BASS*)
I think as well, the fact it's a piano sample means you have the attack of the hammer hitting the string, before you really hear the note it produces. Repeat it fast enough and that initial percussive click is all you get to hear, and you're just getting a... saw pattern I guess? The stuff that sounds like an E isn't part of the waveform anymore
The stuff where it was slow enough to hear the E and the effect of the repeats was cool though! I wonder if that came from what you're talking about, where there's one wave (the near-constant E) and another (the repeating attack)
What you say about adding sine waves is wrong. If you add two sine waves at different frequencies on top of each other, they do not result in a sine wave.
The resulting wave consists (as any periodic wave, which can be broken down by Fourier transform) of sine waves, but it is not itself a sine wave.
Technically, two sine waves at the same frequency do sum to another sine wave at that frequency, just with a different amplitude and phase; in exact antiphase they cancel each other out completely.
What happens when you add two sine waves depends on frequency, phase and amplitude, but the only case where the result is still a sine wave is when both are oscillating at the same frequency.
A difference in amplitude only has an effect on the perceived loudness.
@@Felitsius Maybe what I said wasn't completely correct, or maybe worded badly. What I meant is that adding two sine waves with frequencies a and b produces the same waveform as multiplying a sine wave at the average of a and b by a cosine at half their difference (times two). It's not a sine wave by itself, that's true; instead it's f(x) = 2*sin((a+b)x/2)*cos((a-b)x/2). You can see for yourself if you use Audacity to create two tracks, generate a tone of 100 Hz in one track and a tone of 101 Hz in another track, then select both tracks and mix them down to one track to get the addition, and what you'll see when you zoom in is a sine-like wave whose amplitude envelope is another (co)sine wave.
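Here's a quick numpy check of that sum-to-product identity, using the same 100 Hz / 101 Hz example (sample rate chosen arbitrarily):

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs
# sin(A) + sin(B) == 2 * sin((A+B)/2) * cos((A-B)/2):
added = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 101 * t)
product = 2 * np.sin(2 * np.pi * 100.5 * t) * np.cos(2 * np.pi * 0.5 * t)

print(np.allclose(added, product))  # True
```

So the sum really is a 100.5 Hz tone whose amplitude swells with a 0.5 Hz cosine, which the ear hears as one beat per second (the envelope peaks twice per cosine cycle).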
@@daanwilmer Thank you for your answer! I'm sorry but I think you missed an important detail.
I'm coming from a passionate-hobby-sound-design background, let me explain where I see the mistake in your assumption.
What you say about adding the two sine waves is true, it just doesn't represent what was asked in the question.
What you did is basically just playing two different notes with the same amplitude at the same time, which is not what the question asked.
The question asks what happens if you fade an E (659hz) in and out at 440hz.
This concept is called "AM synthesis" (amplitude modulation).
The important detail here is that the modulating oscillator (sine wave) only affects the amplitude of the modulated oscillator. At lower modulating frequencies this will be perceived as a tremolo effect.
Above around 30 Hz we start perceiving the modulation as pitch.
Another important factor in the equation is the difference in amplitude of the modulating oscillator and the modulated oscillator.
If you amplitude-modulate 659 Hz with 440 Hz at a relatively low amplitude, the affected oscillator will start to develop one lower harmonic at 110 Hz and some higher harmonics at multiples of A and E (I think maybe also B and F; it was a bit hard to distinguish in the spectrum analyzer as it is logarithmic and at higher frequencies the peaks are super close to each other)
Since the fundamental is now at 110 Hz, technically it would be an A, but as it is a lot less prominent (as are the other harmonics) it will still be perceived as an E, just with a different timbre.
If you start increasing the modulating amplitude, at some point the 110 Hz will become prominent enough that the sound will now be perceived as an A.
So yes, I think Adam's demonstration was wrong and his answer incomplete.
But I think yours is wrong as well :)
I hope my explanation made sense.
Wow. Was browsing youtube because it's too effing hot to do anything else, and the first listen of that bridge: instant goosebumps. Thank you, Mister Neely
Adam Neely was at Cedar Point!? Darn, that's less than an hour from me. Coulda had a celebrity sighting 😆
Adam. Thanks. You're awesome and you share with others. Thoughtful person. Best wishes. Dan.
I think your “E at 440hz” experiment wasn’t quite optimal, because of that piano sample. It has a little percussive sound at the beginning which produces much of the 440hz tone. Could it be retried with a sine wave?
It would still sound like A440 in the end.
@@kruksog probably, yeah. There would probably be a “click” when the sine wave resets. The question mentions fading it, I wonder if it’d work that way.
@@Wyattporter You could avoid that by modulating the amplitude of a constantly playing sine wave rather than starting and stopping a sample over and over. This technique is called amplitude modulation, and you've probably heard of it in the form of AM radio, which uses a carrier frequency in the radio band and modulates its amplitude (raises and lowers its volume) according to the signal to be broadcast. You could accomplish this very easily with a basic modular setup by running a 440 Hz sine wave from an oscillator through a voltage-controlled amplifier and then using whatever E signal you wanted as a control voltage for the VCA.
@@woodlaminate your AM radio comparison explained it to me, I think. Modulating an E like that basically makes it broadcast an A.
I like when we get cool snippets of you playing something more technically complex. I'm very used to watching videos where you describe theory and play in a demonstrative way to reinforce learning concepts, I almost forget that you're a very talented performer!
"As we age we lose our faculties and our hearing goes..." WHAT! WHAT WAS THAT, SON? SPEAK UP!
4:33 if you're looking for straight up "fading a thing in and out at 440Hz" the thing you're probably looking for is amplitude modulation. it actually does produce some kinda cool results with "sideband frequencies", look into ring modulation too it's nearly the same thing. more synthesis territory than music theory but of course synthesis theory is a form of music theory innit
You don't get the same effect by just playing a piano sample back 440 times per second, because you're repeating the exact same waveform: your "E" sound is restarted 440 times per second, whereas if it just keeps ringing at its 329.63 Hz or whatever and is brought in and out 440 times per second, each time it's brought in it starts from a different point in its cycle.
also i think adam has a video that talks about sideband frequencies?
"if you're listening on headphones"
But what if my phone has Dolby Atmos?
That experiment with the frequency of the E note was enlightening. Thanks for that!
Does your synesthesia change over time, or in context? Like “blue” is C now, it was more purple before? Or is the color consistent?
If C is blue, is it always blue regardless of the mood of the piece (like sad, or thumping) or might it change for you living inside a piece of music vs in isolation? That’s a lot. I’ll take your answer off the air. I really like your channel.
As a synesthete, my experience is that, generally speaking, notes remain the colours they've always been. Personally, I started on an instrument where you play in a lot of sharp keys, and when I began to play jazz piano (where you run into a lot more pieces in flat keys), I found that reading (for example) a Db instead of a C# would be a different colour. My rundown is: C = white/cerise (this seems to be the only one that changes, and it is dependent on the mood of the piece), D = mid blue, E = yellow, F = blue-green, G = mid green, A = red, B = pale blue
I don't have music/colour synaesthesia, but I thought you might be interested in this anyway! Mine is colour/grapheme, so every letter and number and some other things have colours for me. The colours have been completely consistent throughout my life, and a select few do change slightly in circumstances. 2 for example is red for me, but when paired with yellow numbers (like 4), it looks pink. Font, emotional connection to the words, etc, don't affect it at all.
If anyone is reading this and has music/colour, I'd love to know what it sounds like to hear microtones. When I see non-letter/numbers in text, like &, %, ~ etc, they have no associated colour. @ takes on the colour of "a", though.
I'd also love to know if music/colour synaesthesia works effectively like perfect pitch. Either way, always been super jealous of this type, it seems utterly magical :)
That opening cadence to Prelude no. 20 is one of my favourite things. Those old critics really didn't know anything.
"Like that, right there, that's C major as f***" -Adam Neely, 2021
Isn't that too obvious to C?