
The "Sound Model Demos" Lesson is part of the full Machine Learning in JavaScript with TensorFlow.js course featured in this preview video. Here's what you'd learn in this lesson:

Charlie shares a few applications she created that use Machine Learning and audio models.


Transcript from the "Sound Model Demos" Lesson

>> With that, there's a couple of things that I built. I don't know if you remember, but in early 2020 there was an Apple conference, WWDC 2020, and on the Apple Watch they released a feature that detects when you're washing your hands. So I was watching the conference and I was like, I think I can do that.

And I spent just a couple of hours rebuilding it in JavaScript, because I knew that this audio model was available. So there's no sound, hopefully, but you can see here, there's water, and then you have a countdown of 20 seconds. I wasn't sure how Apple did it, and I did some research, but I don't think they really explained how they built it. I just thought, well, the sound of running water is quite recognizable.

So you could train a model. I don't know how much data Apple actually used to train theirs, but I only recorded my own tap for, I don't know, maybe 20 samples. Obviously, it's a prototype, and it runs in the browser. But it means that, hey, I didn't wanna have to buy an Apple Watch just to have this feature.

So now, because it's a website, it also runs on a phone, so I could just have my phone next to me for the 20 seconds. And I just thought it was nice that, if you know a bit of machine learning and JavaScript, you can rebuild things like this with no team and no money, just me on my couch.

So I just thought it was interesting to take that kind of idea and be able to bring it to life. And I think some people did the same with coughing, detecting that maybe someone is sick. And you can, I don't know, have an alert that maybe someone has COVID or something.

So you can do the same, because you can train a model on coughing. And another thing that I did is, I was reading a paper about what's called acoustic activity recognition, and that was super cool. Again, the paper was not implemented in JavaScript, but the point of it is to build potentially better IoT home systems by detecting the activities that you're doing around your house.

So for example, if you are watching a recipe on the iPad while you're cooking, you don't want to keep going back and forth to pause it and resume it. So instead, you could use the sound of the environment as an input. So if I'm chopping something, you can imagine the sound of a knife on a cutting board, like [SOUND].

So that is kind of a distinctive sound, or the sound of a blender, or the sound of opening your fridge door, or the microwave when it's done, like [SOUND]. You can recognize these sounds, and you can train a machine learning model on them to end up creating smarter devices.

So instead of buying a smart fridge and a smart coffee machine and a smart dishwasher or whatever, you could have almost like a single system that listens. And it would be able to alert you about things without you having to spend a lot of money on a ton of different IoT devices that all work differently and then sell your data to a lot of people.
