The "Buffer Geometry" Lesson is part of the full, Advanced Creative Coding with WebGL & Shaders course featured in this preview video. Here's what you'd learn in this lesson:

Matt explains how to manually define a quad, or two triangles, within buffers using BufferGeometry. BufferGeometry is a class that stores vertex positions, indices, normals, colors, UVs, and custom attributes to apply to a mesh.
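For reference, a minimal sketch of defining a quad as two indexed triangles with BufferGeometry (not the course's exact demo code; it assumes a recent three.js where attributes are set with setAttribute, which older releases call addAttribute):

```js
import * as THREE from 'three';

// Four corner vertices, flattened as [x, y, z, x, y, z, ...]
const positions = new Float32Array([
  -1, -1, 0, // bottom-left
   1, -1, 0, // bottom-right
   1,  1, 0, // top-right
  -1,  1, 0  // top-left
]);

// Two triangles described as indices into the vertex list
const indices = [0, 1, 2, 2, 3, 0];

const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
geometry.setIndex(indices);

const quad = new THREE.Mesh(
  geometry,
  new THREE.MeshBasicMaterial({ side: THREE.DoubleSide })
);
```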


Transcript from the "Buffer Geometry" Lesson

[00:00:00]
>> The other demo here is buffer geometry. I'm not gonna try and code this from scratch, because it's a little bit more complex even than what I just did. But it's also good to be aware of this. Basically we have Geometry, which is sort of the simpler version, which is what we were looking at before, and Geometry allows us to specify vertices as Vector3s.

[00:00:28] And it also allows us to specify the indexes into this vertex array. So it allows us to specify the face and also specify other things to do with the face, like the color of the face. Or maybe the normals of the face or some other attribute of the face.

[00:00:42] But when we deal with more custom geometries, let's say we wanted to start to load a model from a file, and we wanted to maybe load a model that's not an OBJ file and not a glTF. Or it's not some file that we're getting from Cinema 4D, but maybe it's like a data set of millions of points or millions of stars and the positions of the stars, or something like that in the sky.

[00:01:07] And so that might be just given to us in a CSV or in a JSON file that has to do with the positions of all these different points in space. And so we could use buffer geometries to just take that data and turn it into a flat Float32Array of vertices.

[00:01:26] And so the structure is a little bit different, because we're still able to use this Vector3 array just like we were before. But the difference is, before we pass this to the BufferGeometry, which is this more advanced class, we have to flatten it into a flat structure, where the first vertex comes first, then the second vertex, then the third vertex, and so on and so forth.

[00:01:58] So x1, y1, z1, then x2, y2, z2. And this is just a Float32Array, which is a typed array, which is just a way of saying it's an array, but it's a fancy array, and it has numbers in it, and each of those numbers is going to correspond with a vertex component.
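As a rough sketch of that flattening step (illustrative names, not the course demo's, assuming a recent three.js):

```js
import * as THREE from 'three';

// Start from Vector3s, then pack them into one flat Float32Array
// in the [x1, y1, z1, x2, y2, z2, ...] layout described above.
const points = [
  new THREE.Vector3(0, 0, 0),
  new THREE.Vector3(1, 0, 0),
  new THREE.Vector3(0, 1, 0)
];

const positions = new Float32Array(points.length * 3);
points.forEach((p, i) => p.toArray(positions, i * 3));

const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
```

Newer versions of three.js also offer BufferGeometry's setFromPoints method, which does this flattening for you.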

[00:02:14] And so it's a little bit more complicated to set up. It's definitely not something that is super easy to do off the top of your head. But, if you wanted to look into this, we can start to dive into this more and more in the afternoon with some of the more complex examples.

[00:02:30] Cuz this will tie in really well with shaders as well. When we start to work with shaders, we're gonna sometimes want to pass in very specific information into the vertices and into the objects. And we can do that with buffer geometries. Because if we wanted to pass custom attributes and custom properties with just a regular geometry class, it wouldn't really be possible.
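Here's a hedged sketch of what a custom per-vertex attribute might look like (the attribute name aRandom is made up for illustration; it assumes a recent three.js where PlaneGeometry is a BufferGeometry and attributes are set with setAttribute):

```js
import * as THREE from 'three';

// A custom per-vertex attribute that a ShaderMaterial can read.
const geometry = new THREE.PlaneGeometry(1, 1, 10, 10);
const count = geometry.attributes.position.count;

const randoms = new Float32Array(count);
for (let i = 0; i < count; i++) randoms[i] = Math.random();
geometry.setAttribute('aRandom', new THREE.BufferAttribute(randoms, 1));

const material = new THREE.ShaderMaterial({
  vertexShader: `
    attribute float aRandom;
    void main() {
      // Push each vertex out along its normal by its per-vertex random amount
      vec3 displaced = position + normal * aRandom * 0.1;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(displaced, 1.0);
    }
  `,
  fragmentShader: `
    void main() { gl_FragColor = vec4(1.0); }
  `
});

const mesh = new THREE.Mesh(geometry, material);
```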

[00:02:51] And so that's where we have to use this buffer geometry. So yeah, that's kind of the idea of buffer geometry here. A really good example of it is this more advanced example, because it shows how we can create hundreds or potentially thousands of particles. And they're all sort of squashed into the same batch, the same geometry.

[00:03:15] And so when we actually go to render this shape on the GPU, the shape will say okay, we've got a buffer geometry. And the buffer geometry might have, here it has 250 particles, but we can make this really big, 2500, or smaller. And all of these are squashed into the same Float32Array that I was just talking about.

[00:03:40] And that's what's gonna give us the ability to draw thousands of things in a single draw call. And so it'll be really, really fast, really efficient, and really great for things like particle systems or for data visualization, and so on and so forth, where we have a lot of points.
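A minimal sketch of that particle idea (not the course demo itself): thousands of positions packed into one Float32Array and rendered as THREE.Points, which draws them all in a single draw call:

```js
import * as THREE from 'three';

// 2500 random positions, flattened into one typed array
const count = 2500;
const positions = new Float32Array(count * 3);
for (let i = 0; i < positions.length; i++) {
  positions[i] = (Math.random() - 0.5) * 2; // random point inside a 2x2x2 cube
}

const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));

const particles = new THREE.Points(
  geometry,
  new THREE.PointsMaterial({ size: 0.02, sizeAttenuation: true })
);
```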

[00:03:57] And we wanna render them efficiently. We're gonna talk about how do we have these points but then manipulate them, because it's just static right now, it's just a static geometry. But we might want to move these vertices, we might want to shade them in different ways, and so on and so forth.

[00:04:12] So yeah, this one, this sort of buffer geometry stuff is a little bit more nebulous. It's definitely something that takes a little bit more time to wrap your head around, even just, as you saw when I was coding the vertices, I was sort of struggling through the coordinates, and trying to wrap my head around how those line up.

[00:04:31] And that's because there's just a lot of complexity in trying to define geometry manually. But quite often, what I end up doing is I just take an existing geometry, like a plane or a cube or a sphere, and I manipulate the vertices, by just going through the vertices and maybe randomizing some of the XYZ components, or maybe using noise on some of those components.

[00:04:54] So it's not always manually typing out each triangle or anything like that.
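For example, a hedged sketch of that approach with a built-in sphere (the course demo may use the legacy Geometry.vertices array instead; this is the BufferGeometry-style equivalent, assuming a recent three.js):

```js
import * as THREE from 'three';

// Start from an existing sphere and nudge each vertex by a small random offset
const geometry = new THREE.SphereGeometry(1, 64, 64);
const position = geometry.attributes.position;

for (let i = 0; i < position.count; i++) {
  position.setXYZ(
    i,
    position.getX(i) + (Math.random() - 0.5) * 0.05,
    position.getY(i) + (Math.random() - 0.5) * 0.05,
    position.getZ(i) + (Math.random() - 0.5) * 0.05
  );
}
position.needsUpdate = true;      // tell three.js the buffer changed
geometry.computeVertexNormals();  // recompute normals after the displacement
```

Swapping Math.random() for a noise function gives the smoother, more organic displacement mentioned above.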
>> Which of your examples were using the buffer geometry?
>> So if we look in the three-demos folder, there's two of them that are using buffer geometries. One of them is the one that just says buffer geometry, and the other one says buffer geometry advanced.

[00:05:16] Those two are using buffer geometries. This one is actually also using a buffer geometry, but again, like I was saying, I like to manipulate spheres and existing meshes rather than having to manually define all the triangle positions for the sphere, which would just be hell.