
The "Vertex Shaders" Lesson is part of the full, Advanced Creative Coding with WebGL & Shaders course featured in this preview video. Here's what you'd learn in this lesson:

Matt explains how vertex shaders determine the screen-space position of specific vertices of a mesh, creating a 3D effect within a 2D screen-space.


Transcript from the "Vertex Shaders" Lesson

>> So I've talked a little bit about Fragment Shaders, which is a specific type of shader used for rendering pixels and for coming up with the color of a pixel. That's really what a Fragment Shader is, just to decide on the color of a single pixel and then apply that throughout the entire surface, whether it's a rectangle or whether it's a sphere.

A Vertex Shader is a similar concept, but instead of operating on pixels, we're operating on vertices. So in this mesh, we have lots of little vertices where these lines are meeting. Remember, with a triangle we have three vertices, with a quad we might have four vertices, while here we have maybe hundreds of vertices.

So here's one at the very top where all these lines meet. And this shader here is a Vertex Shader that gets applied to every single one of these vertices. And it's also parallel. So instead of going from top to bottom or something like that, instead of saying okay, let's start at this vertex, compute it, then go to the next vertex and compute that, it's not doing that.

It's just saying, here's the shader code, run it on all of the vertices at the same time, all in parallel. So it's the same problem as before. We can't say, well, I want this top vertex to be higher up or something like that, depending on the position of this bottom vertex.
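As a sketch of that mental model, here's a minimal three.js-style vertex shader (in three.js's ShaderMaterial, `position`, `projectionMatrix`, and `modelViewMatrix` are injected for you; in raw WebGL you'd declare them yourself). It runs once for every vertex, all in parallel, and only ever sees that one vertex's own attributes:

```glsl
// Runs once per vertex, every vertex in parallel.
// `position` is this vertex's own attribute; there is no way
// to read any other vertex's position from in here.
void main () {
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
```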

We can't do those kinds of conditional statements in a shader, because each vertex doesn't know anything about the vertices around it. And so, where the point of a Fragment Shader was to color a pixel, the point of a Vertex Shader is to take the vertex attributes, things like the position in world space, and turn that into a position that can actually be shown on a 2D screen.

This is called projecting a world-space position into screen space. And remember I was talking about units, and how in our earth demo we could use miles instead of just a random arbitrary unit like positioning at x = 3 or something. We could say the radius of the sphere is 3,000 miles and the moon is several thousand miles along the x axis.

Well, how do we convert those miles into something that we can actually see on a 2D screen, which has pixels, with 0, 0 in the top left and the width and height of the screen in the bottom right? That's what a Vertex Shader is doing. It's saying, give me those units, whichever units they are, whether it's miles or whatever.

And transform them in such a way that they end up being 2D coordinates that I can then draw on a screen. Part of this is taking the vertices in an object like a sphere, and transforming those based on the rotation, the position, and the scale of the object.

But it's also transforming it based on where the camera is looking at this object. And then it's also transforming it based on the type of camera, because right now we're just dealing with perspective cameras, which is what this looks like. But let's say you wanted to draw something different; a good example would be Monument Valley.

So Monument Valley is a game that is what's called isometric or orthographic. And so, the camera that would be used to render this scene would be an orthographic camera. The reason you can tell is because there's no sense of depth. It's all flat, all the lines stay perfectly parallel with each other no matter how far away a cube is in this virtual world.
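The three transforms just described (the object's placement, the camera, and the type of camera) are conventionally applied as three matrix multiplies. A sketch in raw-WebGL-style GLSL, with the common three.js uniform names assumed:

```glsl
attribute vec3 position;        // vertex position in model units (e.g. miles)

uniform mat4 modelMatrix;       // the object's position, rotation, and scale
uniform mat4 viewMatrix;        // where the camera sits and looks
uniform mat4 projectionMatrix;  // the lens: perspective or orthographic

void main () {
  // model units -> world space -> camera space -> clip space.
  // Swapping only the projection matrix is what switches the render
  // between perspective and orthographic (isometric) looks.
  gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(position, 1.0);
}
```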

So if you search isometric versus perspective, you will definitely see some examples of that. And the Vertex Shader also decides on this sort of thing, how we go from perspective to isometric, and that depends on this final line here. So here we have an example where the Vertex Shader is accepting this stretch variable, which is a float.

And so far we've been defining things with these uniform things, which I'll have to explain, but just for now, let's just focus on this float. It's a float coming in from this slider somehow and we're using this stretch variable in our code. And the way we're doing it is by taking the original position of the geometry, and then transforming it by something.

So like messing up those positions somehow. And then we have this third line here, which does all the standard transformations. And we can't even see it, because it's a very long line, but there we go. So just to show an example, and I'll show these examples but then we'll walk through it interactively and start to build up these things ourselves.
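The shader on screen isn't reproduced in this transcript, but a sketch of what's being described, in three.js-style GLSL with an assumed displacement along y, might look like:

```glsl
uniform float stretch;  // the float coming in from the slider

void main () {
  // take the original position of the geometry and mess it up somehow,
  // e.g. scale each vertex along the y axis by the slider value
  vec3 transformed = position;
  transformed.y *= stretch;

  // the "very long line": the standard projection to screen-space coordinates
  gl_Position = projectionMatrix * modelViewMatrix * vec4(transformed, 1.0);
}
```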

But just to show an example, here I've commented this out, and all of a sudden we just have a regular cylinder geometry; this could be a sphere, a box, or whatever. By transforming these positions in this in-between state, where we have the position, we transform it, and then finally we project it onto screen-space coordinates.

Here we can mess up this geometry and start to create animations. Maybe start to do some interesting things with the triangles, with the vertices, just by modifying the position. So this is a Vertex Shader.
