Lights and Shadows in Graphics - Computerphile

The Art and Science of 3D Shading: Understanding the Z-Buffer

In the world of computer graphics, 3D shading is a crucial aspect of creating realistic images. It involves determining the color of each pixel based on its position in 3D space and the properties of the surface it's on, as well as the direction of the light source. However, when dealing with complex scenes, it can be difficult to determine whether a particular pixel is in shadow or not. This is where the Z-buffer comes into play.
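To make the per-pixel lighting calculation concrete, here is a minimal Lambertian (diffuse) shading sketch. The function names and the single-light setup are illustrative assumptions, not a specific graphics API:

```python
# Minimal diffuse shading: color = albedo * light_color * max(0, N.L).
# All vector helpers are hand-rolled to keep the sketch self-contained.

def normalize(v):
    length = sum(x * x for x in v) ** 0.5
    return tuple(x / length for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade_diffuse(normal, light_dir, light_color, albedo):
    """Shade one pixel given its surface normal, the direction toward
    the light, the light's color, and the surface's base color."""
    n = normalize(normal)
    l = normalize(light_dir)       # direction from surface toward the light
    n_dot_l = max(0.0, dot(n, l))  # facing away from the light -> 0 (shade)
    return tuple(a * c * n_dot_l for a, c in zip(albedo, light_color))
```

A surface facing the light directly gets the full lit color; one facing away gets black, which is exactly the "well lit versus in shade" distinction described above.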

To understand how this works, let's first consider what we mean by "light" and "shade." A pixel is illuminated when light from the source reaches it directly; it is in shadow when something blocks that light. The difficulty is that standard per-pixel lighting is local: when we shade a pixel we only have information about the surface we're currently drawing, not about other objects elsewhere in the scene that might be occluding the light. To decide whether a pixel is in shadow, we need global information, and this is where the Z-buffer helps.

The technique that solves this is shadow mapping, which reuses the Z-buffer (the depth buffer) in a second role. We render the scene from the point of view of the light source rather than the camera - that is, as if we were looking at the scene from the light's position rather than our own. This gives us exactly the information we need about which points the light can and cannot see.

When rendering the scene from the point of view of the light, we record depth information - exactly how far away each pixel is from the light source - into a buffer. Then, when we come to render the scene from the camera's perspective, we take each point's position in world space and project it into light space. This tells us how far the point we're shading is from the light, and we can compare that distance with the value stored in the light's depth buffer.
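The first pass - building the depth buffer from the light's point of view - can be sketched as follows. The scene representation and the `rasterize` helper are illustrative assumptions; a real renderer would do this on the GPU:

```python
# Build a depth buffer (the "shadow map") from the light's viewpoint,
# keeping only the depth of the surface nearest the light at each pixel.

def render_light_depth(triangles, rasterize, width, height):
    """`rasterize(tri)` is assumed to yield (x, y, depth_from_light)
    samples for a triangle already transformed into light space."""
    INF = float("inf")
    depth = [[INF] * width for _ in range(height)]
    for tri in triangles:
        for x, y, z in rasterize(tri):
            if z < depth[y][x]:   # standard depth test: keep the nearest hit
                depth[y][x] = z
    return depth
```

No color is written anywhere; the only output of this pass is the per-pixel distance from the light, saved for use in the camera pass.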

If the point's depth from the light is greater than the depth stored in the buffer - the depth of the nearest occluder, such as the front face of an object - then that point is in shadow: something sits between it and the light. In that case we can suppress that light's contribution, or more generally fold the result of the test into our lighting equations. This is shadow mapping, and it lets us create far more convincing images than a purely local lighting model can.
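The per-pixel test in the camera pass then reduces to a single comparison. The lookup interface and the small bias value are illustrative assumptions; real implementations tune the bias to avoid self-shadowing artifacts ("shadow acne"):

```python
# Per-pixel shadow test against the light's depth buffer (the shadow map).

def in_shadow(shadow_map, light_xy, depth_from_light, bias=1e-3):
    """True if the shaded point is farther from the light than the
    nearest occluder recorded at the same light-space pixel."""
    x, y = light_xy
    nearest = shadow_map[y][x]
    return depth_from_light > nearest + bias
```

If the test returns true, something closer to the light covers this point, so the light's contribution is skipped; otherwise the point is lit normally.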

In modern games and applications, multiple lights are often necessary to achieve the desired level of realism. In principle we can have as many shadow-casting lights as we like, as long as we're prepared to render the scene again from the point of view of each one. But each extra pass means re-rendering the scene's triangles - often hundreds of thousands per frame - which is expensive, so developers may limit shadow-casting lights in order to protect frame rate.

For example, games from ten or fifteen years ago often had only a single shadow-casting light, or no shadows at all. While this costs graphical fidelity, it's often a trade-off worth making to maintain smooth frame rates. It's also a law of diminishing returns: a single shadow looks much better than none, while a second or third shadow adds only marginal improvement over the first.
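The cost structure behind this trade-off can be sketched as a frame loop: one full depth pass per shadow-casting light, followed by a single camera pass that consults all the resulting shadow maps. The pass functions here are illustrative placeholders:

```python
# One scene-wide depth pass per light, then one camera pass - so N lights
# mean N extra traversals of the scene's geometry every frame.

def render_frame(scene, lights, render_depth_pass, render_camera_pass):
    shadow_maps = []
    for light in lights:                         # one full pass per light
        shadow_maps.append(render_depth_pass(scene, light))
    # The camera pass shades each pixel, testing it against every map.
    return render_camera_pass(scene, lights, shadow_maps)
```

Dropping a light from the list removes an entire scene traversal, which is why capping the number of shadow-casting lights is such an effective performance lever.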

However, as scenes grow more complex, so does the need for accurate lighting. With multiple lights and complex geometry, the Z-buffer is just one part of the solution: techniques such as physically-based rendering (PBR) and global illumination (GI) are often used alongside shadow mapping to create truly realistic images.

Ultimately, the art of 3D shading is all about finding the right balance between complexity and performance. By understanding the Z-buffer technique and its applications, developers can take their graphics capabilities to new heights - even if that means making some sacrifices along the way.

Audible.com: A Partner in Learning

As we continue our journey into the world of 3D shading, it's worth taking a moment to appreciate the resources available to us. One such partner is Audible.com, a leading provider of audiobooks and educational content.

For those interested in learning more about science and technology, Audible.com offers a wide range of titles that are both informative and engaging. One particularly recommended title is "Bad Science" by Ben Goldacre. This book exposes common misconceptions and misuses of science in the media, providing readers with a critical perspective on how science should be reported and applied.

Whether you're an experienced developer or just starting out, there's never been a better time to learn more about the art and science of 3D shading. With resources like Audible.com available to us, we can take our skills to new heights - and create truly realistic images that capture the imagination of audiences around the world.

