12 The Depth Buffer
12.010 How do I make depth buffering work?
Your application needs to do at least the
following to get depth buffering to work:
- Ask for a depth buffer when you create
your window.
- Place a call to glEnable(GL_DEPTH_TEST) in
your program's initialization routine, after a
context is created and made current.
- Ensure that your zNear and zFar
clipping planes are set correctly and in a way
that provides adequate depth buffer precision.
- Pass GL_DEPTH_BUFFER_BIT as a parameter to
glClear, typically bitwise OR'd with other values
such as GL_COLOR_BUFFER_BIT.
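A minimal sketch of these steps, using GLUT to create the window and context (any toolkit that can request a depth buffer works equally well; the projection values are only illustrative):

    /* Minimal GLUT-based sketch of the steps above. */
    #include <GL/glut.h>

    static void display(void)
    {
        /* Clear the depth buffer along with the color buffer. */
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        /* ... draw the scene here ... */

        glutSwapBuffers();
    }

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);

        /* Ask for a depth buffer when creating the window. */
        glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
        glutCreateWindow("depth buffering");

        /* Enable the depth test once a context is current. */
        glEnable(GL_DEPTH_TEST);

        /* Choose zNear/zFar values that preserve depth precision. */
        glMatrixMode(GL_PROJECTION);
        gluPerspective(45.0, 1.0, 1.0, 100.0);
        glMatrixMode(GL_MODELVIEW);

        glutDisplayFunc(display);
        glutMainLoop();
        return 0;
    }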
There are a number of OpenGL example programs available on the Web that use depth buffering. If
you're having trouble getting depth buffering to work
correctly, you might benefit from looking at an example
program to see what is done differently. This FAQ contains links to
several web sites that have example OpenGL code.
12.020 Depth buffering doesn't work in my
perspective rendering. What's going on?
Make sure the zNear and zFar clipping
planes are specified correctly in your calls to glFrustum()
or gluPerspective().
A common mistake is to specify a zNear clipping plane value of 0.0 or a negative value, neither of which is allowed. Both the zNear and zFar clipping planes must be positive (not zero or negative) values that represent distances in front of the eye.
Specifying a zNear clipping plane
value of 0.0 to gluPerspective() won't generate an OpenGL
error, but it might cause depth buffering to act as if it's
disabled. A negative zNear or zFar value passed to gluPerspective() produces undesirable results. A zero or negative zNear or zFar value passed to glFrustum(), however, generates a GL_INVALID_VALUE error that you can retrieve by calling glGetError(), and the call then acts as a no-op.
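As a sketch, a valid projection and the glFrustum() error case look like this (the specific values are only illustrative):

    #include <GL/gl.h>
    #include <GL/glu.h>

    /* A valid projection: both planes are positive and zNear is well
       away from 0.0. */
    static void set_projection(double aspect)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(60.0,    /* vertical field of view, in degrees    */
                       aspect,  /* window width / height                 */
                       0.1,     /* zNear: positive, never 0.0            */
                       100.0);  /* zFar: positive and greater than zNear */
        glMatrixMode(GL_MODELVIEW);
    }

    /* glFrustum() with a zero or negative zNear or zFar is an error,
       and the call is ignored. */
    static void broken_projection(void)
    {
        glFrustum(-1.0, 1.0, -1.0, 1.0, 0.0, 100.0);   /* invalid zNear */
        if (glGetError() == GL_INVALID_VALUE) {
            /* The call above was a no-op; use a positive zNear instead. */
        }
    }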
12.030 How do I write a previously stored depth
image to the depth buffer?
Use the glDrawPixels() command, with the
format parameter set to GL_DEPTH_COMPONENT. You may want to
mask off the color buffer when you do this, with a call to
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE).
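A sketch of this, assuming depthData holds width x height floating-point depth values captured earlier (for example with glReadPixels() using GL_DEPTH_COMPONENT and GL_FLOAT); the function name is hypothetical:

    #include <GL/gl.h>

    /* Write a previously saved depth image back into the depth buffer. */
    static void restore_depth(GLsizei width, GLsizei height,
                              const GLfloat *depthData)
    {
        /* Don't disturb the color buffer while writing depth. */
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);

        /* Depth writes must be enabled; if the depth test would reject
           the incoming values, disable it or use glDepthFunc(GL_ALWAYS). */
        glDepthMask(GL_TRUE);

        /* Position the image (assumes the current matrices map (0, 0)
           to the desired window location). */
        glRasterPos2i(0, 0);
        glDrawPixels(width, height, GL_DEPTH_COMPONENT, GL_FLOAT, depthData);

        /* Restore color writes for subsequent rendering. */
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    }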
12.040 Depth buffering seems to work, but
polygons seem to bleed through polygons that are in front of them.
What's going on?
You may have configured your zNear and
zFar clipping planes in a way that severely limits
your depth buffer precision. Generally, this is caused by a zNear
clipping plane value that's too close to 0.0. As the zNear
clipping plane is set increasingly closer to 0.0, the
effective precision of the depth buffer decreases
dramatically. Moving the zFar clipping plane farther from the eye also reduces depth buffer precision, but the effect is far less dramatic than moving the zNear clipping plane closer to 0.0.
The OpenGL
Reference Manual description
for glFrustum() relates depth precision to the zNear and
zFar clipping planes by saying that roughly log2(zFar/zNear)
bits of precision are lost. Clearly, as zNear approaches zero, the number of bits lost approaches infinity.
While the Blue Book description is good at pointing out the relationship, it's somewhat inaccurate. As the ratio (zFar/zNear) increases, less precision is available near the back of the depth buffer, and more precision is available close to the front of the depth buffer. So primitives are more likely to interact in Z if they are farther from the viewer.
It's possible that you simply don't have
enough precision in your depth buffer to render your scene.
See the last question
in this section for more info.
It's also possible that you are drawing
coplanar primitives. Round-off errors or differences in
rasterization typically create "Z fighting" for
coplanar primitives. One common option is polygon offset (glPolygonOffset), which nudges the depth values of one set of primitives just enough to keep them from fighting; a sketch follows.
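A minimal sketch of drawing a decal over a coplanar base polygon with polygon offset (draw_base_polygon() and draw_decal_polygon() stand in for your own drawing code, and the offset values are common starting points rather than magic numbers):

    #include <GL/gl.h>

    extern void draw_base_polygon(void);   /* hypothetical application code */
    extern void draw_decal_polygon(void);  /* hypothetical application code */

    static void draw_coplanar_pair(void)
    {
        draw_base_polygon();

        /* Pull the decal's depth values slightly toward the eye so it
           reliably wins the depth test against the base polygon. */
        glEnable(GL_POLYGON_OFFSET_FILL);
        glPolygonOffset(-1.0f, -1.0f);
        draw_decal_polygon();
        glDisable(GL_POLYGON_OFFSET_FILL);
    }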
12.050 Why is my depth buffer precision so poor?
The depth buffer precision in eye
coordinates is strongly affected by the ratio of zFar to
zNear, the zFar clipping plane, and how far
an object is from the zNear clipping plane.
You need to do whatever you can to push the
zNear clipping plane out and pull the zFar plane
in as much as possible.
To be more specific, consider the transformation of depth from eye coordinates (x_e, y_e, z_e, w_e) to window coordinates (x_w, y_w, z_w) with a perspective projection matrix specified by glFrustum(l, r, b, t, n, f), and assume the default viewport transform.
The clip coordinates z_c and w_c are

    z_c = -z_e * (f+n)/(f-n) - w_e * 2*f*n/(f-n)
    w_c = -z_e
Why the negations? OpenGL wants to present to the programmer a right-handed coordinate system before projection and a left-handed coordinate system after projection.
and the NDC coordinate:

    z_ndc = z_c / w_c
          = [ -z_e * (f+n)/(f-n) - w_e * 2*f*n/(f-n) ] / -z_e
          = (f+n)/(f-n) + (w_e / z_e) * 2*f*n/(f-n)
The viewport transformation scales and offsets by the depth range (assume it to be [0, 1]) and then scales by s = 2^b - 1, where b is the bit depth of the depth buffer:

    z_w = s * [ (w_e / z_e) * f*n/(f-n) + 0.5 * (f+n)/(f-n) + 0.5 ]
Let's rearrange this equation to express z_e / w_e as a function of z_w:

    z_e / w_e = f*n/(f-n) / ( (z_w / s) - 0.5*(f+n)/(f-n) - 0.5 )
              = f*n / ( (z_w / s)*(f-n) - 0.5*(f+n) - 0.5*(f-n) )
              = f*n / ( (z_w / s)*(f-n) - f )                       [*]
Now let's look at two points, the zNear clipping plane and the zFar clipping plane:

    z_w = 0  =>  z_e / w_e = f*n / (-f) = -n
    z_w = s  =>  z_e / w_e = f*n / ((f-n) - f) = -f
In a fixed-point depth buffer, z_w is quantized to integers. The next representable depth buffer values in from the clip planes are 1 and s-1:

    z_w = 1    =>  z_e / w_e = f*n / ( (1/s)*(f-n) - f )
    z_w = s-1  =>  z_e / w_e = f*n / ( ((s-1)/s)*(f-n) - f )
Now let's plug in some numbers: for example, n = 0.01, f = 1000, and s = 65535 (i.e., a 16-bit depth buffer):

    z_w = 1    =>  z_e / w_e = -0.01000015
    z_w = s-1  =>  z_e / w_e = -395.90054
Think about this last line. Everything at
eye coordinate depths from -395.9 to -1000 has to map into
either 65534 or 65535 in the z buffer. Almost two thirds of
the distance between the zNear and zFar clipping
planes will have one of two z-buffer values!
To further analyze the z-buffer resolution, let's take the derivative of [*] with respect to z_w:

    d (z_e / w_e) / d z_w = -f*n*(f-n)*(1/s) / ( (z_w / s)*(f-n) - f )^2

Now evaluate it at z_w = s:

    d (z_e / w_e) / d z_w = -f*(f-n)*(1/s) / n
                          = -f*(f/n - 1) / s                        [**]
If you want your depth buffer to be useful near the zFar clipping plane, you need to keep the magnitude of this value smaller than the size of your objects in eye space (for most practical uses, world space).
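The arithmetic above is easy to reproduce. This small standalone program (a sketch that simply evaluates [*] and [**] with the example values n = 0.01, f = 1000, and a 16-bit depth buffer) prints the eye-space depths for the depth buffer values discussed and the resolution at the zFar clipping plane:

    #include <stdio.h>

    /* Eye-space depth (z_e / w_e) for a depth buffer value z_w, from [*]. */
    static double eye_depth(double zw, double n, double f, double s)
    {
        return f * n / ((zw / s) * (f - n) - f);
    }

    int main(void)
    {
        const double n = 0.01, f = 1000.0;
        const double s = 65535.0;   /* 16-bit depth buffer: 2^16 - 1 */

        printf("z_w = 1     ->  z_e / w_e = %.8f\n", eye_depth(1.0, n, f, s));
        printf("z_w = s - 1 ->  z_e / w_e = %.5f\n", eye_depth(s - 1.0, n, f, s));

        /* Depth resolution at the zFar clipping plane, from [**]. */
        printf("d(z_e/w_e)/dz_w at zFar = %.2f eye units per depth step\n",
               -f * (f / n - 1.0) / s);
        return 0;
    }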
12.060 How do I turn off the zNear
clipping plane?
See this
question in the Clipping section.
12.070 Why is there more precision at the front
of the depth buffer?
After the projection matrix transforms eye coordinates into clip coordinates, the X, Y, and Z vertex values are divided by the clip coordinate W value, which results in normalized device coordinates. This step is known as the perspective divide. The clip coordinate W value represents the distance from the eye. As the distance from the eye increases, 1/W approaches 0. Therefore, X/W and Y/W also approach zero, causing the rendered primitives to occupy less screen space and appear smaller. This is how computers simulate a perspective view.
As in reality, motion toward or away from
the eye has a less profound effect for objects that are
already in the distance. For example, if you move six inches
closer to the computer screen in front of your face, its apparent size should increase quite dramatically. On the
other hand, if the computer screen were already 20 feet away
from you, moving six inches closer would have little
noticeable impact on its apparent size. The perspective
divide takes this into account.
As part of the perspective divide, Z is
also divided by W with the same results. For objects that are
already close to the back of the view volume, a change in
distance of one coordinate unit has less impact on Z/W than
if the object is near the front of the view volume. To put it
another way, an object coordinate Z unit occupies a larger
slice of NDC-depth space close to the front of the view
volume than it does near the back of the view volume.
In summary, the perspective divide, by its
nature, causes more Z precision close to the front of the
view volume than near the back.
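To see the effect numerically, the short program below (a sketch using the z_ndc expression derived in question 12.050, with illustrative clipping planes n = 1 and f = 100) compares how much NDC depth a one-unit step in eye space covers near the front versus near the back of the view volume:

    #include <stdio.h>

    /* z_ndc as a function of eye-space depth z_e (with w_e = 1). */
    static double z_ndc(double ze, double n, double f)
    {
        return (f + n) / (f - n) + (1.0 / ze) * 2.0 * f * n / (f - n);
    }

    int main(void)
    {
        const double n = 1.0, f = 100.0;

        /* A one-unit step near the front of the view volume... */
        printf("front: %f\n", z_ndc(-3.0, n, f) - z_ndc(-2.0, n, f));

        /* ...covers far more NDC depth than the same step near the back. */
        printf("back:  %f\n", z_ndc(-99.0, n, f) - z_ndc(-98.0, n, f));
        return 0;
    }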
A previous question in this
section contains related information.
12.080 There is no way that a standard-sized
depth buffer will have enough precision for my astronomically
large scene. What are my options?
The typical approach is to use a multipass
technique. The application might divide the geometry database
into regions that don't interfere with each other in Z. The
geometry in each region is then rendered, starting with the farthest region, with a clear of the depth buffer before each region is rendered. This way, the full precision of the depth buffer is made available to each region.
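A sketch of that approach (the Region type, the region list sorted farthest to nearest, and draw_region() are hypothetical application code):

    #include <GL/gl.h>
    #include <GL/glu.h>

    /* Hypothetical application data: regions sorted farthest to nearest,
       each with eye-space near/far bounds that enclose its geometry. */
    typedef struct { double zNear, zFar; } Region;

    extern void draw_region(int i);   /* hypothetical: draws region i */

    static void draw_scene(const Region *regions, int count,
                           double fovy, double aspect)
    {
        int i;

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        for (i = 0; i < count; i++) {
            /* Give each region the full precision of the depth buffer. */
            if (i > 0)
                glClear(GL_DEPTH_BUFFER_BIT);

            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            gluPerspective(fovy, aspect, regions[i].zNear, regions[i].zFar);
            glMatrixMode(GL_MODELVIEW);

            draw_region(i);
        }
    }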