Friday, December 18, 2009

Fall 2009 CS 513 Final Project

The goal of this project was to take a photograph and place a model into that photo and match the lighting.

For this project, two cheap light probes were built for less than $10. Shiny and diffuse Christmas balls were used to probe the reflective and diffuse components of the scene. They were attached to dowel rods using duct tape and placed into the tree.

A consumer grade 10-megapixel ultracompact camera was placed on a tripod and used to take pictures of the spheres:

This resulted in the following sphere maps:

These sphere maps were mapped onto the model to produce an initial image:

Later, an image of a fingerprint was added to decrease the specularity of the model where the fingerprint left oil residue and caused dirt to accumulate.

Antialiasing was performed using the accumulation buffer. Slightly different view / projection matrices were used for each pass and the results averaged.
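The averaging step can be sketched like this. This is just an illustration of the accumulation-buffer idea, not the project's actual OpenGL code: `render` stands in for a full draw pass with a jittered projection matrix.

```python
import random

def jittered_average(render, n_samples=9, pixel_size=1.0):
    """Average n_samples renders, each drawn with a sub-pixel jitter
    offset applied to the projection; this is what glAccum does for
    real images. `render` maps a (dx, dy) offset to an image (here a
    flat list of floats for illustration)."""
    accum = None
    for _ in range(n_samples):
        dx = (random.random() - 0.5) * pixel_size
        dy = (random.random() - 0.5) * pixel_size
        img = render(dx, dy)
        if accum is None:
            accum = [0.0] * len(img)
        for i, v in enumerate(img):
            accum[i] += v
    return [v / n_samples for v in accum]
```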

Ambient occlusion was approximated using only 26 lights. Shadow maps were computed for each pass and baked into a texture. Ideally at least 128 lights would be used.
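The post doesn't say how the 26 lights were placed, but one natural arrangement that yields exactly 26 directions is every nonzero combination of {-1, 0, 1} on the three axes (27 minus the origin), normalized onto the unit sphere. A sketch, assuming that layout:

```python
import itertools
import math

def ao_light_directions():
    """26 light directions: every nonzero offset in {-1,0,1}^3,
    normalized to the unit sphere (axes, edges, and corners of a
    cube around the model)."""
    dirs = []
    for d in itertools.product((-1, 0, 1), repeat=3):
        if d == (0, 0, 0):
            continue  # skip the origin; 3^3 - 1 = 26 directions
        length = math.sqrt(sum(c * c for c in d))
        dirs.append(tuple(c / length for c in d))
    return dirs
```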

Seams are always a problem with texture baking. Earlier in the semester, I wrote some software to find the seams in an OBJ model and transport texture data across seams to minimize their impact. The software isn't perfect, but it helps greatly.

The ambient occlusion is used to diminish the amount of light that is reflected by the model by decreasing the amount of available light in the shader. The end result has a pewter look to it:
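The modulation amounts to scaling the lit color by the baked occlusion term. A minimal sketch of the idea (not the actual Cg shader):

```python
def shade(ao, diffuse, specular):
    """Scale the light available to the surface by the baked
    ambient-occlusion term (0 = fully occluded, 1 = fully open)
    before summing the lighting components."""
    return tuple(ao * (d + s) for d, s in zip(diffuse, specular))
```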

This project was done on a 17" MacBook Pro running Snow Leopard. To download and build:

tar jxf cs513-final-jbowles.tar.bz2
cd cs513-final-jbowles/final

Coming back to ambient occlusion

My advisor (that's Dr. Joe Kniss) won't let me give up :)

He was right to give me crap; a trained monkey could have found the bug. To blend the shadow maps together, you need the correct texture coordinate to index into the previous shadow map, and I had the wrong one. Specifically, I was using the texture coordinate meant for indexing into the depth map. That was just stupid. Here are two blended shadow maps from lights at (1,0,0) and (-1,0,0):
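The blending itself is just a per-texel average across the passes, where every map is indexed by the model's bake texture coordinate so that texel i in each map refers to the same surface point. A sketch (the real version runs in a shader against textures, not lists):

```python
def blend_shadow_maps(shadow_maps):
    """Average per-texel visibility over several shadow passes.
    All maps must be addressed by the same (bake) texture coordinate;
    the bug described above was addressing the previous map with the
    depth-map coordinate instead."""
    n = len(shadow_maps)
    size = len(shadow_maps[0])
    return [sum(m[i] for m in shadow_maps) / n for i in range(size)]
```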

And with 6 lights, one on each axis:

With 26 lights:

I need more lights. Here's the map on the model:

Using the ambient occlusion to change how much light ends up on Frank:

Adding some rust / blemishes

It's hard to see in the previews, but some rust has been added to the model. The rust is a bit blurry, so it doesn't look very rusty. The rust diminishes the specularity at those points and adds some rust colored spots:

Frank with Fingerprints

I have to give up on ambient occlusion for now. I'm sure that I'm doing something stupid, but I'm running out of time to finish this project and can't seem to figure out the problem. So I moved on to adding fingerprints to Frank. The fingerprints change the amount of specular reflection.

With a rust texture blended in:

Thursday, December 17, 2009

Ambient Occlusion test

Here's my first try at texture baking some ambient occlusion:

This isn't what I'd expected. In the torso portion of the map (to the left and under the feet) I'd expect some darker areas. I'm building this by taking a lot of depth images / shadow mapped images from all around the model and blending them together. Maybe my sphere of lights isn't where I think it is.

Monday, December 14, 2009

Blurry Frank

In an attempt to get some depth of field into the image, I ended up with the model being out of focus:

I'll need to play with that some more. This was done in a similar fashion to the antialiasing by jittering the projection matrix.
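The usual way to get depth of field from jittering is to offset the eye within an aperture while keeping the look-at point fixed, so geometry at the focal distance stays sharp across the samples and everything else smears. A sketch of that camera jitter (the names and camera model here are illustrative, not the project's actual code):

```python
def dof_jitter(eye, focus, aperture, dx, dy):
    """Offset the eye within the aperture while keeping the look-at
    (focal) point fixed. Averaging renders over many (dx, dy) samples
    blurs everything except the focal plane."""
    jittered_eye = (eye[0] + dx * aperture,
                    eye[1] + dy * aperture,
                    eye[2])
    return jittered_eye, focus  # the look-at target is unchanged
```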

Antialiasing with Accumulation Buffer

I've flipped the sphere map texture coordinates which makes the scene a little more believable and added antialiasing via the accumulation buffer:

9 samples were taken, which means each frame was drawn 9 times. This really slows things down.

Saturday, December 12, 2009

Initial Light Probe Test

Here's my very first attempt at setting up light probes and using them to light a model. I did this as fast as I possibly could, which means that I didn't do a very good job. I just wanted to get the images into my code and see what happened.

The light probes were built with about $5 worth of Christmas ornaments from Hobby Lobby, a dowel rod and duct tape. The camera is a fairly cheap, consumer grade, 10 megapixel, ultracompact digital camera mounted on a tripod. Notice the tape on the floor so that I can replicate the positioning of the camera:

Here's a picture of the Christmas tree. This will be used as the background:

The reflection map:

The diffuse map. I messed up and got some branches in the way. This shows up later in the render and looks bad.

The scene using the reflection map and the background:

The scene using the diffuse map and the background.

The reflection and diffuse map mixed together 50/50.

The reflection and diffuse maps and background image were taken in a room lit only by the Christmas tree with a 15 second exposure. The sphere mapping code is all done in a Cg vertex shader. I chose not to use the fixed function OpenGL sphere mapping texture coordinate generation because I'm going to want both sphere and model texture coordinates when I add a grunge map later. It isn't clear how to get that with the fixed function pipeline but it's fairly easy with shaders.
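The Cg shader itself isn't shown here, but for reference, the standard sphere-map coordinate generation it reproduces (the same math the fixed-function GL_SPHERE_MAP mode uses) looks like this, sketched in Python:

```python
import math

def sphere_map_uv(reflection):
    """Sphere-map texture coordinates for an eye-space reflection
    vector r = (rx, ry, rz):
        m = 2 * sqrt(rx^2 + ry^2 + (rz + 1)^2)
        (u, v) = (rx/m + 0.5, ry/m + 0.5)
    """
    rx, ry, rz = reflection
    m = 2.0 * math.sqrt(rx * rx + ry * ry + (rz + 1.0) ** 2)
    return (rx / m + 0.5, ry / m + 0.5)
```

Doing this in a vertex shader rather than via glTexGen leaves the model's own UV channel free for the grunge map later.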

The most noticeable issue is that the background image doesn't match up to the sphere map and the diffuse map. The background is from a different part of the tree.

In the next test, I'll be a lot more careful about how I take the pictures and make sure that I crop the exact same areas and match them up.

Tuesday, December 8, 2009

Assignment 5.3

Here's assignment 5.3. I've incorporated translucency, softer shadows, light that enters the model and returns to the surface (diffuse), and reflection (specular).

The results of this assignment don't look much different than my result from my last post almost 5 days ago. What the heck was I doing?

Well, I wasn't slacking :)

The last post was using a very hacked up light model that provided similar results but did not allow lights to be repositioned. It was a great way to start understanding the problem but ultimately held me back on understanding how to calculate how much light was transmitted through the model.

The biggest learning experience of this assignment was understanding the difference between light that is transmitted through the model and to the eye and light that was transmitted through the material and away from the eye.

Depending on the situation, the light may or may not have contributed to the final color. This was very hard to wrap my head around but ultimately it was just a test of the sign of the dot product between the normal and the light direction. If negative, no contribution. If positive, add it in.
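That sign test is tiny once you see it. A minimal sketch:

```python
def transmitted_contribution(normal, light_dir, amount):
    """Add the transmitted light only when it leaves the surface on
    the eye's side: test the sign of dot(n, l). Negative means the
    light exits away from the eye and contributes nothing."""
    d = sum(n * l for n, l in zip(normal, light_dir))
    return amount if d > 0.0 else 0.0
```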

Other additions were conservation of energy, high dynamic range and tone mapping. The returned and transmitted light are run through a 3x3 Gaussian blur before being composited into the final image.
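The 3x3 Gaussian blur is the usual 1-2-1 separable kernel. A sketch over a 2-D array of floats, clamping at the edges (the baked textures are RGB, but one channel shows the idea):

```python
def gaussian_blur_3x3(img):
    """3x3 Gaussian blur with the standard 1-2-1 kernel
    (weights sum to 16); edge texels are clamped."""
    k = [(1, 2, 1), (2, 4, 2), (1, 2, 1)]
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    sy = min(max(y + dy, 0), h - 1)
                    sx = min(max(x + dx, 0), w - 1)
                    total += k[dy + 1][dx + 1] * img[sy][sx]
            out[y][x] = total / 16.0
    return out
```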

Here is an example of no blur of returned and transmitted light:

With blur of returned and transmitted light:

The difference is very subtle.

To compile and build:

tar jxf assign5.3.tar.bz2
cd assign5.3/assign5.3

Usage: click and drag the mouse to rotate the model.
I : reset rotation
B : toggle between baked textures and showing the final model

Show baked textures, with blur:
0 : returned
1 : transmitted

No blur:
2 : returned
3 : transmitted
4 : reflected
5 : positions

Images of the baked textures:

Returned with blur:

Returned without blur:

Transmitted with blur:

Transmitted without blur:



Thursday, December 3, 2009

Improved returned light

I've improved the return light, but I don't like it.

Composite image:

Returned light component:

In the composite image, I'm conserving energy by subtracting out the reflected light before determining the amount of the front light that gets through the model, and then I'm subtracting out both before determining the amount of returned light. I'm also using chromatic scattering / absorption coefficients so the light shifts from the watery blue to more green.
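The energy budget described above can be sketched as a running remainder: take the reflected light first, transmit a fraction of what's left, then return a fraction of what's left after that, so the three components can never sum to more than what arrived. (Coefficient names here are illustrative, not the shader's actual uniforms.)

```python
def split_energy(incoming, k_reflect, k_transmit, k_return):
    """Split incoming light into reflected, transmitted, and
    returned components, subtracting each from the remainder
    before computing the next so energy is conserved."""
    reflected = k_reflect * incoming
    remaining = incoming - reflected
    transmitted = k_transmit * remaining
    remaining -= transmitted
    returned = k_return * remaining
    return reflected, transmitted, returned
```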

I don't know why, but I don't like what's happening with the returned light.

Warm and Cool Lights

I decided that I needed to step back and review how my code was organized and make a bit of a clean start. Things were getting pretty messy and for reasons I don't understand, when I'm writing shaders, I don't feel like I can use subroutines.

Which is totally stupid.

So, I went through my notes and implemented a bunch of the equations that Joe discussed in class for light return and light attenuation through a material and reflection (the cosine term). I ended up with the following image:

There are two lights in the scene: a warm light directly behind the model and a cool light in front of the model.

The final color has three components: the reflected light, light that shines through the model, and light that penetrates into the model and is returned.

All of the light is added together and tone mapped, so the lighting is not constrained to the (0,1) range until the final tone mapping operation.
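The post doesn't name the tone mapping operator used; a common, simple choice for compressing unbounded HDR values into [0,1) is the Reinhard-style x / (1 + x):

```python
def tone_map(hdr_color):
    """Reinhard-style operator: maps each unbounded HDR channel
    into [0, 1). Just one common choice, shown for illustration."""
    return tuple(c / (1.0 + c) for c in hdr_color)
```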

Let's look at the three components separately:




Shinethru and reflected look ok, but returned looks terrible. Maybe I've picked bad coefficients of scattering, poor optical depth, etc. Regardless, it isn't what I expected. Plus, energy is not being conserved, so the composite image definitely isn't correct, but I think that this is a start to some nice shading with translucency.

Monday, November 30, 2009

Seams might be handled!

Without seams handled:
With seams handled:
Now with a 3x3 blur, not position dependent:

Dilated seams

Seams are starting to look better:
I've switched back to GL_NEAREST for the texture filtering and implemented my own dilating filter that uses the max value from the texels in a 3x3 area as the fragment color. I'm not sure if this is correct, but I'll move ahead as if it is.

Seams and roundoff error?

I decided to check some of the texture coordinates for the opposite seam directly to see if they mapped to Frank. Some did, some did not. Then I moved the tex coords that mapped to black by 1 pixel and found that they started mapping to Frank.

This led to the realization that I was using GL_NEAREST for the texture filtering and not GL_LINEAR. Switching to GL_LINEAR gives the following:

This still isn't what I'd expect to see, but at least I know that my seam finding algorithm is probably working correctly.