Hardware with Arduino & JavaScript

Generating Audio with Face Detection

Steve Kinney

Temporal


The "Generating Audio with Face Detection" Lesson is part of the full, Hardware with Arduino & JavaScript course featured in this preview video. Here's what you'd learn in this lesson:

Steve uses a face detection library to track the position of the user's head. A frequency is generated based on the X position of the face, so the user can play higher and lower frequencies by moving their head.


Transcript from the "Generating Audio with Face Detection" Lesson

[00:00:00]
>> That's cool, I guess. I think it would be cooler if there were some way we could control those frequencies from the browser. Could I put a range input on a page and slide it up and down over and over? Would that be fun? Yeah, I like that sound, that whoops sound; it reminds me of a theremin.

[00:00:33]
I'm thinking we go into this directory that I have, which is called face-theremin, and we see what's happening in there. I don't know what could be in there. We've got to shrink down a little bit. We've got a little bit of code to write here, but we need to start out by getting some data.

[00:00:59]
At this point, I could do a range slider; a range slider makes sense. But let's actually see what I've got going on in this directory and see what we can use instead. So I'm just gonna run node index.js, and let's take a look at the client-side code that we have.

[00:01:21]
We'll look at the DOM, then we'll look at the client-side code. I will grab that, I'm already there. Let's see, make that normal-sized. I hit this play button over here, and it looks like I've got some face-tracking software. Looks like it's keeping track of the X and Y of where my face is.

[00:01:53]
For other fun things, I also have it tracking your facial expressions. So a fun project you can do, and I'm not gonna say we're not gonna do it, is that you could theoretically have a bunch of LEDs that change based on your facial expression, and you could control the Arduino board with your face.

[00:02:13]
I might make the LED light up if I'm smiling or angry; we'll see how I feel. But first, I'd like to take the X and the Y of my face here, and I think we have a frequency, and I heard some whooshing sounds. What if we could tie all this together?

[00:02:29]
What if, as I move my face, we sent some instructions to the Arduino board and could theoretically change the tone of the piezo? Let's try it out. Pause that for a second. All right, so we've got some client-side code. We've got some server-side code. Right now, the main thing we need is to get those movement events from the browser.

[00:02:59]
And we need to send them back to the board, right? So let's go ahead and go into the client-side code here. Now there's a lot of code in here, but that's mostly because I'm pulling in the face-api.js library that's doing the face tracking. So don't concern yourself too much with that. It is waiting for a socket connection, which we need to wire up on the server side.

[00:03:25]
But if we scroll down a little bit, really what we want to do is, as I move around, you can see I'm changing the DOM with the X and Y of where my face is, right? I also want to send it back to Node: the face has moved, and here is where the face is.

[00:03:47]
So we'll go ahead and, well, don't ruin it for me, autocomplete, we're just gonna say socket.emit('face', facePosition), sure. And so now, as the face moves, we send that over the WebSocket back to our Node server, where we will be ready and waiting for it.
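
A minimal sketch of what that client-side wiring could look like, assuming face-api.js and the socket.io client are already loaded on the page (the 'face' event name follows the transcript; the polling interval and element lookup are assumptions):

    // Client-side sketch: emit the face position over the socket as it moves.
    // Assumes the tiny face detector model has already been loaded, e.g. via
    // faceapi.nets.tinyFaceDetector.loadFromUri(...).
    const socket = io();
    const video = document.querySelector('video');

    setInterval(async () => {
      const detection = await faceapi.detectSingleFace(
        video,
        new faceapi.TinyFaceDetectorOptions()
      );
      if (!detection) return; // no face in frame this tick

      const facePosition = { x: detection.box.x, y: detection.box.y };
      socket.emit('face', facePosition); // send the coordinates to the server
    }, 100);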

[00:04:16]
And so we'll go back over into the server-side code, and we're just gonna say, we've got to grab the piezo too, I guess, right? We've got the board, so we'll say const piezo = new five.Piezo(6), and when we get a new connection, when we hear from that socket, we're gonna say we want face.
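
Roughly, the server side might look like this; the piezo on pin 6 and the 'face' event come from the transcript, while the HTTP and socket.io setup is an assumption about how the index.js is structured:

    // Server-side sketch: a johnny-five board plus a socket.io server.
    const five = require('johnny-five');
    const server = require('http').createServer().listen(3000);
    const io = require('socket.io')(server);

    const board = new five.Board();

    board.on('ready', () => {
      const piezo = new five.Piezo(6); // piezo on pin 6, as in the transcript

      io.on('connection', (socket) => {
        socket.on('face', (facePosition) => {
          // React to the face position here (see the handler below).
        });
      });
    });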

[00:04:54]
So, socket.on('face'), and then really, we've got the data. You can care about the X or the Y; it's up to you. It's your face theremin, you do what you want with it. It's your face, right? I'm not gonna tell you what to do with your face. I'm gonna care about the X in this case.

[00:05:16]
Well, what did Copilot think I wanted? I might explore that in a second. So, we've got the x in there, and we say if x is greater than 0, we'll go ahead and play a frequency based on it. You might have to adjust this to tones that you prefer. We'll do it for just 100 milliseconds.
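
Filled in, that handler might look something like the sketch below; the multiplier mapping x to a frequency is an assumption you would tune to the tones you prefer:

    // Inside io.on('connection', ...): map the face's X to a frequency.
    socket.on('face', ({ x }) => {
      if (x > 0) {
        // The multiplier of 2 is a guess; adjust it to tones you prefer.
        piezo.frequency(x * 2, 100); // play for 100 milliseconds
      }
    });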

[00:05:44]
But it's gonna keep getting data, so it's totally fine. Yeah, so let's try that out. And so now, as the face comes in, the server should be receiving that, and it should be talking to that piezo that it knows about. So let's spin that up. We're gonna hit play.

[00:06:11]
[SOUND]. Dustin, I don't even know how you're gonna handle this, the recording. [SOUND]
