Check out a free preview of the full Web Audio Synthesis & Visualization course

The "Advanced Frequency and Frequency Q&A" Lesson is part of the full, Web Audio Synthesis & Visualization course featured in this preview video. Here's what you'd learn in this lesson:

Matt discusses the advanced frequency demo, which builds on the previous frequency demo by mapping frequency data to a generative artwork. Student questions regarding how to adjust the fixed parameters for a live audio source and how to treat files at different bit depths for dBFS are also covered in this section.


Transcript from the "Advanced Frequency and Frequency Q&A" Lesson

[00:00:00]
>> There's something I'm doing in this more advanced frequency demo, which I'm just gonna pop open here in the browser. I'll just play it to show you.

[00:00:16]
[MUSIC]

[00:00:32]
So one thing I am doing is just splitting up the frequency into these several different bands, each band representing the size of these different boxes. And so certain notes are making certain boxes get bigger and smaller. But another thing I'm doing is I'm cycling the hue of the entire graphic based on the peak frequency.
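A minimal sketch of that band-splitting idea, assuming a Web Audio AnalyserNode and byte frequency data; the function name and the normalization shown here are illustrative, not the demo's actual code:

```ts
// Sketch: split an AnalyserNode's frequency data into a few bands and average
// each band, which could then drive the size of each box.
function getBandLevels(analyser: AnalyserNode, numBands: number): number[] {
  const data = new Uint8Array(analyser.frequencyBinCount);
  analyser.getByteFrequencyData(data);

  const bandSize = Math.floor(data.length / numBands);
  const levels: number[] = [];
  for (let band = 0; band < numBands; band++) {
    let sum = 0;
    for (let i = band * bandSize; i < (band + 1) * bandSize; i++) {
      sum += data[i];
    }
    // Normalize each band's average to 0..1 for sizing the boxes.
    levels.push(sum / bandSize / 255);
  }
  return levels;
}
```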

[00:00:53]
So in this track it works really well, because whoever is playing the piano is going up and down the piano, so the notes are going higher and then lower. And you can sort of see how that's working here in the spectrum. The peak may be over here at first, but it starts to move further and further up.

[00:01:15]
And that's what's happening in this demo as well: it's capturing which frequency band is currently the highest. So right now it might be here, now it's here, and now it's over here. And that value is changing depending on the notes that are being played.

[00:01:34]
And so that function, I'll show you how that looks. This is a bit of an unusual function, I haven't really used it too much, but it works really well with this kind of audio track, where you just go through the frequency data and you find out what the max signal is and what the index of that max signal is.

[00:02:05]
And then at the end of that, you convert that index back to a frequency. From there, you're gonna get back a value that shifts depending on the sound that's playing, somewhere between 0 hertz and 22 kilohertz. And from that, you'll see what I'm doing with the hue.
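A minimal sketch of that peak-finding step, assuming an AnalyserNode; the function name and the exact bin-to-frequency conversion are assumptions rather than the demo's code:

```ts
// Sketch: find the strongest frequency bin and convert its index back to hertz.
function getPeakFrequency(analyser: AnalyserNode, context: AudioContext): number {
  const data = new Uint8Array(analyser.frequencyBinCount);
  analyser.getByteFrequencyData(data);

  // Index of the max signal across all bins.
  let maxIndex = 0;
  for (let i = 1; i < data.length; i++) {
    if (data[i] > data[maxIndex]) maxIndex = i;
  }

  // Each bin covers an equal slice of 0 Hz up to the Nyquist frequency
  // (half the sample rate, roughly 22 kHz at a 44.1 kHz sample rate).
  const nyquist = context.sampleRate / 2;
  return (maxIndex / data.length) * nyquist;
}
```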

[00:02:26]
I'm mapping the hue from 0 hertz to 500 hertz, which is what I've decided is a good range for where the piano's moving from left to right in terms of frequency. And then I'm mapping that to a hue between 0 degrees and 360 degrees, and that's what's changing this color.
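That mapping could look something like this sketch; the 0-500 hertz range comes from the lesson, while the clamping and the function name are assumptions:

```ts
// Sketch: map the peak frequency (clamped to 0-500 Hz) onto a hue of 0-360 degrees.
function frequencyToHue(peakHz: number): number {
  const t = Math.min(Math.max(peakHz / 500, 0), 1); // normalize and clamp to 0..1
  return t * 360;
}
```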

[00:02:45]
And it's kind of an interesting effect. I'm doing it in a way that is smoothly interpolated from frame to frame. So that's using this damp function, which is damped interpolation, I guess you could say, using linear interpolation. I tend to just copy and paste this function all over the place now, because it's quite helpful to smoothly interpolate things from one frame to the next.

[00:03:10]
You just pass in the current value, what you want the value to be over time, and then some sort of smoothing constant, which could be a small number. The smaller the number, the slower it's gonna change; the higher the number, the faster it'll get to that value. And then you pass in the delta time, that is, the time that's elapsed since the last frame.
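A common way such a damp helper is written is frame-rate-independent smoothing via an exponential term; this is a sketch in that spirit, not necessarily the exact implementation used in the demo:

```ts
// Sketch of a frame-rate-independent "damp": linear interpolation whose amount
// depends on a smoothing constant (lambda) and the time elapsed since last frame.
function lerp(a: number, b: number, t: number): number {
  return a + (b - a) * t;
}

function damp(current: number, target: number, lambda: number, deltaTime: number): number {
  // Larger lambda reaches the target faster; smaller lambda changes more slowly.
  return lerp(current, target, 1 - Math.exp(-lambda * deltaTime));
}
```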

[00:03:31]
And from that you get this smooth change in the hue, which looks quite nice on a graphic like this.

[00:03:39]
[MUSIC]

[00:03:43]
>> How do you adjust the fixed parameters for a live audio source like mic input?
>> Yeah, so I guess maybe one way of doing that would be to use something like what I'm doing here in terms of smoothing. Where, let's say, you're finding the peak signals over time, over the course of a few seconds, or maybe 10 or 20 seconds.

[00:04:08]
And you're adjusting your parameters basically based on where the signals are. So if you're playing a track that's just constantly kick drums, then the parameters might notice that there's tons of activity happening in the low frequencies, and so the window of where you're looking might slowly drift over to there.

[00:04:31]
And then the track will keep playing and you'll have this nice visualization based on that. And then the next track might kick in, and suddenly maybe it's something piano-related. And so your running averages will have to slowly drift over to find the new window of which frequencies to look at, based on the fact that there's not a lot of activity happening in the low end, and all the activity is beginning to happen in the upper end.
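One speculative way to express that drifting-average idea; the names and the smoothing constant here are assumptions, not code from the course:

```ts
// Sketch: keep a running average of the peak frequency so the "window" of interest
// slowly drifts toward wherever the activity currently is.
let averagePeakHz = 0;

function updateAveragePeak(currentPeakHz: number, deltaTime: number): number {
  const lambda = 0.5; // small value => the window drifts slowly toward new activity
  averagePeakHz += (currentPeakHz - averagePeakHz) * (1 - Math.exp(-lambda * deltaTime));
  return averagePeakHz;
}
```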

[00:04:57]
And that's an interesting problem; I haven't had to work at that level. I think that's something where, if you're building a really, really professional-grade visualization, something that has to work across any potential input, that's maybe a little bit outside the scope of what I even know how to do with this.

[00:05:21]
So far, what I've been doing has been more tuned based on specific inputs. So that's kind of the nice thing: if you're creating an installation, you're creating something that maybe has, let's say, some specific tracks already assigned to it. Then you can create parameters for each of those tracks.

[00:05:38]
And you know it's kind of predictable in that way. But yeah, that's an interesting problem and one that would be interesting to try and solve.
>> I guess you're talking about dBFS? How can you treat files at different bit depths for dBFS? Does the API internally convert audio to a predefined bit depth?

[00:06:00]
>> Yeah, so from what I understand, when you're working with Web Audio, they give you back the data in a standard bit depth. So you don't have to worry about that when you're just using the analyser node.
Now, if you're synthesizing audio, there might be situations where you do have to worry about that.

[00:06:22]
Or if you're doing manual decoding of the audio data into raw PCM, and then going over that and maybe doing some manual FFT work on top of it, instead of using the Web Audio FFT, you might have to start to concern yourself with it.

[00:06:40]
But I think, from what I understand, it's all fairly automatically converted for most of these user-friendly API endpoints.
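To illustrate that point: when you decode a file through Web Audio, you get floating-point samples back regardless of the file's original bit depth, so dBFS math can treat every input the same way. A minimal sketch, with an illustrative function name:

```ts
// Sketch: decode an audio file into normalized Float32 samples (roughly -1..1),
// independent of whether the source file was 16-bit, 24-bit, etc.
async function loadSamples(url: string, context: AudioContext): Promise<Float32Array> {
  const response = await fetch(url);
  const arrayBuffer = await response.arrayBuffer();
  const audioBuffer = await context.decodeAudioData(arrayBuffer);
  return audioBuffer.getChannelData(0); // channel 0 as floats
}
```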
>> Someone suggested, as a solution to the other question, that that's when you have to step in as a live VJ and tweak those parameters.
>> Yeah, definitely.

[00:07:03]
I mean, the cool thing is you can hook all of these parameters up to MIDI because there's actually a Web MIDI API. And so yeah, you can really start to do some live VJ stuff, which is pretty crazy.
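A minimal sketch of listening for controller input via the Web MIDI API; how the incoming values get wired to visualization parameters is up to you, and the normalization shown is only an assumption:

```ts
// Sketch: read control-change values from a MIDI controller and normalize them,
// so they could be mapped onto visualization parameters.
navigator.requestMIDIAccess().then((midiAccess) => {
  for (const input of midiAccess.inputs.values()) {
    input.onmidimessage = (event) => {
      if (!event.data) return;
      const [status, controller, value] = event.data;
      // MIDI values are 0-127; normalize to 0..1 before mapping to a parameter.
      console.log(status, controller, value / 127);
    };
  }
});
```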
