Tuesday, September 29, 2009

Displaying a point sprite in OpenGL

After looking at a bunch of examples of how to use point sprites in OpenGL, and seeing some with blatant errors that leave out key steps like setting the point size, I've decided to cut and paste my point sprite code snippet from PointSprite::draw():


// Get the max point size
glGetFloatv(GL_POINT_SIZE_MAX_ARB, &_maxSize);

// Get the minimum point size. Not necessary for this example, but interesting to know
glGetFloatv(GL_POINT_SIZE_MIN_ARB, &_minSize);

// Enable point sprites
glEnable(GL_POINT_SPRITE);

// I've also seen the following used to enable point sprites:
// glEnable(GL_POINT_SPRITE_ARB);
// but I only needed the first one. Maybe it's because I'm using GLEW

// Replace the texture coordinates across the point sprite
glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);

// Set the point size.
glPointSize(_maxSize);

// I saw some examples doing the following:
// glPointParameteri(GL_POINT_SIZE, _maxSize);
// But this didn't do anything for me. glPointSize was
// the only thing that actually set the point size

// Set a color. You'll want to use a nice texture map that you blend
// using a shader, but this is a simple example to get you started
glColor3f(1.0, 1.0, 1.0);

// Begin points. In real life, you'd use a vertex array.
glBegin(GL_POINTS);

// A point
glVertex3f(0.0, 0.0, -1.0);

// Done.
glEnd();

Monday, September 28, 2009

Assignment #3

Phong shading:


Bump mapped:


The normal map generated from the height map shown in object space:


The position map shown in object space:



Reflection. The reflected color is lerped with the bone_map texture using the reflectivity as the weight:


Refraction. The refracted color is lerped with the bone_map texture using the transmittance as the weight:


The tangent frame was computed using the baked position map. For a given texture coordinate, the positions just ahead of and just behind the current position were sampled and their difference computed.

From this, the tangent vector was computed as (1, 0, dx/ds), where dx/ds is the difference in the x position in the s direction over a small delta.

Environment mapping was done using a cube map loaded from a DirectDraw Surface (.dds) file. The reflection and refraction vectors were calculated and used to index into the cube map with the texCUBE function.
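The vector math behind those lookups is simple enough to check on the CPU. Here's a minimal sketch of the reflection and refraction directions that would be fed to texCUBE; the struct and function names are hypothetical, and the real work of course happens in the Cg shader:

```cpp
#include <cassert>
#include <cmath>

struct V3 { float x, y, z; };

static float dot3(const V3& a, const V3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Reflection of incident direction i about unit normal n:
// r = i - 2 (n . i) n
static V3 reflectDir(const V3& i, const V3& n) {
    float d = dot3(n, i);
    return V3{ i.x - 2*d*n.x, i.y - 2*d*n.y, i.z - 2*d*n.z };
}

// Refraction with relative index eta = n1/n2 (vector form of Snell's law).
// This sketch assumes no total internal reflection (sin2t <= 1).
static V3 refractDir(const V3& i, const V3& n, float eta) {
    float cosi  = -dot3(n, i);
    float sin2t = eta * eta * (1.0f - cosi * cosi);
    float cost  = std::sqrt(1.0f - sin2t);
    float k     = eta * cosi - cost;
    return V3{ eta*i.x + k*n.x, eta*i.y + k*n.y, eta*i.z + k*n.z };
}
```

With eta = 1 the refracted direction equals the incident direction, which makes a handy sanity check before wiring the result into the cube-map fetch.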

Seams in the map were handled by checking for a zero-magnitude vector coming back from the sampler. When that happened, the derivative was taken using either a forward- or backward-looking sample instead of a centered sample. Admittedly, I didn't notice any change in image quality when I did this.
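The finite-difference scheme above, including the seam fallback, can be sketched on the CPU. Everything here is hypothetical stand-in code: samplePos() imitates the baked position-map lookup (returning a zero vector outside the map, like the sampler at a seam), and the surface is a simple analytic ramp so the derivative is known:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static bool isZero(const Vec3& v) {
    return v.x == 0.0f && v.y == 0.0f && v.z == 0.0f;
}

// Hypothetical position-map sample: a ramp whose x position changes by
// 0.5 per unit of s. Outside [0, 1] the sampler yields zero (a seam).
static Vec3 samplePos(float s, float t) {
    if (s < 0.0f || s > 1.0f) return Vec3{0, 0, 0};
    return Vec3{ s, t, 0.5f * s };
}

// Tangent in the s direction, (1, 0, dx/ds): centered difference in the
// interior, falling back to a forward or backward difference at a seam.
static Vec3 tangentS(float s, float t, float ds) {
    Vec3 fwd  = samplePos(s + ds, t);
    Vec3 back = samplePos(s - ds, t);
    float dxds;
    if (isZero(back))        // seam behind: forward difference
        dxds = (fwd.z - samplePos(s, t).z) / ds;
    else if (isZero(fwd))    // seam ahead: backward difference
        dxds = (samplePos(s, t).z - back.z) / ds;
    else                     // interior: centered difference
        dxds = (fwd.z - back.z) / (2.0f * ds);
    return Vec3{ 1.0f, 0.0f, dxds };
}
```

For the ramp, both the interior and the seam branches recover dx/ds = 0.5, which matches the observation that the fallback doesn't change the result much on smooth data.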

Frame rates were usually between 90-110 frames per second.

Thursday, September 24, 2009

Bump mapping




This is the first step in completing assignment #3. The height map was created using difference clouds in Photoshop. A fragment shader computes the normal map, which is rendered to a texture using a frame buffer object; that texture is then fed into another shader that lights the models using the normals from the normal map.

In the upper right corner of the scene, the normal map is displayed on a textured quad.
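The shader itself isn't reproduced here, but the normal-from-height construction it performs can be sketched on the CPU. The height() function below is a hypothetical stand-in for the height-map texture fetch, and the (-dh/dx, -dh/dy, 1) convention is one common choice, not necessarily the exact one the assignment used:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical height-map lookup: a simple ramp, slope 0.25 in x.
static float height(float x, float y) { return 0.25f * x; }

struct N3 { float x, y, z; };

// Normal from central differences of the height field, using the
// common convention n = normalize(-dh/dx, -dh/dy, 1).
static N3 normalFromHeight(float x, float y, float d) {
    float dhdx = (height(x + d, y) - height(x - d, y)) / (2.0f * d);
    float dhdy = (height(x, y + d) - height(x, y - d)) / (2.0f * d);
    float nx = -dhdx, ny = -dhdy, nz = 1.0f;
    float len = std::sqrt(nx*nx + ny*ny + nz*nz);
    return N3{ nx / len, ny / len, nz / len };
}
```

In the fragment shader the same differences come from four neighboring texel fetches, and the result is packed into [0, 1] before being written to the normal-map render target.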

Wednesday, September 16, 2009

Storing shader parameters in a std::map

Setting a shader parameter in Cg is pretty simple (note: this code hasn't been compiled):


// Initialize CG context
cgContext = cgCreateContext();
checkForError("creating Cg context");

cgSetParameterSettingMode(cgContext, CG_DEFERRED_PARAMETER_SETTING);
checkForError("setting up deferred parameter setting");

cgGLSetDebugMode(CG_FALSE);
checkForError("setting debug mode to false");

// Set up profile
_cgProfile = cgGLGetLatestProfile(_profileType);
checkForError("cgGLGetLatestProfile");

cgGLSetOptimalOptions(_cgProfile);
checkForError("cgGLSetOptimalOptions");

// Load program
_cgProgram = cgCreateProgramFromFile(cgContext,
                                     CG_SOURCE,
                                     _filename.c_str(),
                                     _cgProfile,
                                     _progname.c_str(),
                                     NULL);

// Get named parameters
_lightPosition = cgGetNamedParameter(_cgProgram, "lightPosition");

// Set parameter
float _lightPos[4] = {10, 10, 10, 1};
cgSetParameter4fv(_lightPosition, _lightPos);



I like to wrap everything up in an object, so I end up with a shader object that exposes methods to set the parameters; then I call shader->update() to load the uniform data into the shader.

This eliminates writing the setup code each time, but adds a lot of overhead: creating a bunch of private variables for the CGparameters, setting up a method that gets all of the named parameters, and then writing methods to set the parameters. It's nice once it's done, because you just instantiate the shader, call load() and start setting parameters.

The process becomes tedious when parameters are changed or added, which is a normal part of development. You have to create or remove private variables and methods, and since I'm not perfect, I sometimes make mistakes and the frustration mounts.

What if I wanted to create a more dynamic environment where the shaders can be changed at run time and the list of uniform variables shows up in a dialog box for easy editing? That's not possible with my current setup.

So, I changed my base shader class to read all of the parameters in a shader program and build a map from parameter names to CGparameters. Then I added some overloaded functions that take a parameter name and a value, like a Matrix, Vector, Point or Color, and call the appropriate function to set the variable.

Loading a shader and setting variables is as easy as:


CGShader* frag = new CGFragmentShader("frag.cg", "shadow");
frag->load();
frag->set("shadowMap", _fbo->depthTex());


No more building custom classes to wrap everything up; it's all done for me. Structs are not supported and no type checking is done, but this gets me going and speeds up development. I'll add support for structs and type checking as needed.
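The shape of that base class can be sketched without the Cg runtime. In this hypothetical, Cg-free version, ParamSlot stands in for a CGparameter and the set() overloads record values instead of calling cgSetParameter*; the real class would make those calls and the map would be filled by walking cgGetFirstParameter/cgGetNextParameter at load time:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Stand-in for a CGparameter plus its pending value. In the real class,
// set() would call cgSetParameter1f / cgSetParameter4fv /
// cgGLSetTextureParameter on the stored handle instead.
struct ParamSlot {
    std::vector<float> floats;
    unsigned textureId = 0;
};

class ShaderParams {
public:
    // Overloads pick the right storage from the value's type,
    // mirroring the overloaded set() functions described above.
    void set(const std::string& name, float v)         { _params[name].floats = {v}; }
    void set(const std::string& name, const float* v4) { _params[name].floats.assign(v4, v4 + 4); }
    void set(const std::string& name, unsigned texId)  { _params[name].textureId = texId; }

    const ParamSlot& get(const std::string& name) const { return _params.at(name); }

private:
    std::map<std::string, ParamSlot> _params;  // name -> parameter
};
```

Usage looks just like the frag->set("shadowMap", ...) calls above: one map lookup per set(), no per-shader boilerplate, and no type checking beyond overload resolution.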

Tuesday, September 15, 2009

Row major vs column major performance

I wrote my own 4x4 matrix class at the start of CS513 and stored it in column-major order, since this is how OpenGL expects matrices. However, Cg expects them in row-major order, so before I can pass a matrix to Cg, I have to transpose it. Expecting this to be a performance hit, I set up my matrix class and demo app to accept a -DCOLUMN_MAJOR flag at compile time so that I could compare performance.

No performance change. Well, it looks like column major may give me 110 frames per second and row major gives me 109 frames per second.
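For reference, the transpose in question is just an index swap on the flat 16-float array: element (row r, column c) sits at index c*4 + r in column-major order and at r*4 + c in row-major order. A minimal sketch:

```cpp
#include <array>
#include <cassert>

// Transpose a 4x4 matrix stored as 16 floats. Column-major (OpenGL)
// puts element (r, c) at index c*4 + r; row-major (Cg) puts it at
// r*4 + c, so converting between them is a pure index swap.
static std::array<float, 16> transpose4x4(const std::array<float, 16>& m) {
    std::array<float, 16> t{};
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            t[r * 4 + c] = m[c * 4 + r];
    return t;
}
```

Sixteen loads and stores per matrix per frame is negligible next to the draw calls themselves, which lines up with the measurement above.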

I've also added a frames per second counter to my app and display it in the upper right side of the window so that I can see when frame rate changes.

Monday, September 14, 2009

Attaching multiple renderbuffers to FBO

I've successfully attached multiple render buffers and their corresponding textures to a frame buffer object. In my test, I attached a depth buffer and a full color buffer. Then I attached the resulting textures to two squares that I render at the same time as the rest of the scene. I didn't notice any performance impact.

Here's my code. Nothing fancy; I haven't wrapped textures, render buffers and frame buffer objects inside actual C++ objects yet:

FBO::FBO(int width, int height)
: _width(width),
_height(height)
{
// create a depth texture object
glGenTextures(1, &_depthTex);
glBindTexture(GL_TEXTURE_2D, _depthTex);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, _width, _height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);

// create a color texture object
glGenTextures(1, &_colorTex);
glBindTexture(GL_TEXTURE_2D, _colorTex);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glBindTexture(GL_TEXTURE_2D, 0);

// create a framebuffer object
glGenFramebuffersEXT(1, &_fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, _fbo);

// create a renderbuffer object to store depth info
glGenRenderbuffersEXT(1, &_depthBuffer);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, _depthBuffer);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT, _width, _height);

// create a renderbuffer object to store color info
glGenRenderbuffersEXT(1, &_colorBuffer);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, _colorBuffer);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_RGBA8, _width, _height);

glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, 0);

// attach a texture to FBO depth attachment point
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, _depthTex, 0);

// attach a texture to FBO color attachment point
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, _colorTex, 0);

glReadBuffer(GL_NONE);

// check FBO status
checkFramebufferStatus();

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
}

Shadow mapping deformation

I was noticing that the shadow was deformed. It appears that the order in which the vertices are sent is important. After paying very careful attention to vertex order and switching from GL_QUADS to GL_TRIANGLE_STRIP, the deformation has mostly disappeared.
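The order difference is easy to get wrong: a quad given counter-clockwise as (bottom-left, bottom-right, top-right, top-left) becomes a two-triangle strip by swapping the last two vertices into the zig-zag order (bl, br, tl, tr). A tiny sketch of that reorder, with hypothetical labels standing in for glVertex calls:

```cpp
#include <array>
#include <cassert>
#include <string>

// Reorder counter-clockwise quad vertices (bl, br, tr, tl) into the
// zig-zag order GL_TRIANGLE_STRIP expects: (bl, br, tl, tr). Sending
// the strip in quad order instead produces a bow-tie, which is exactly
// the kind of deformation described above.
static std::array<std::string, 4> quadToStrip(const std::array<std::string, 4>& q) {
    return { q[0], q[1], q[3], q[2] };
}
```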

Something that is confusing me is when I send over a rectangular plane and not a square plane, the shadow seems deformed.

I'm not sure why this is happening. Here are the screen shots:

Square plane, shadow looks correct to me:

Rectangular plane, deformed shadow:

Friday, September 11, 2009

Assignment #2a

Assignment #2a:

The problem in the previous post was that I was forgetting to perform the perspective divide on the vertices. I was already doing the divide on the texture coordinates.

Updated source
View rendered from eye space:

View rendered from the light's point of view:

Notice how the shadow isn't visible from the light's point of view?

Thursday, September 10, 2009

Assignment #2

CS513, Assignment #2

Source is here

Images can be clicked on for a larger view.

Scene viewed normally from the camera:



Scene viewed from the light source:



How to build and run:

tar jxf jbowles-assign2.tar.bz2
cd jbowles-assign2/shadowmap
make
./shadowmap

Press 'c' to toggle light/eye camera
Press space to pause/unpause teapot rotation
Press 'r' to reload shaders

Problems and fixes:

The shadow is not in the correct position. I suspect that this is due to the problem that Joe described with the light projection and view matrices not being set up right. When the view is from the light position, everything looks correct to me, but there seems to be some subtle issue. You can see this in the light view screen cap. The shadow should not be visible.

I'm not sure what the fix is in this case. I'll keep looking into this.

Transforming the model coordinate into light texture space was problematic. Initially I was just multiplying the untransformed coordinate in the vertex shader by the light model view projection matrix.

Solution: perform the perspective divide and pre-multiply the MVP with a clip-to-texture matrix. The clip-to-texture matrix remaps the light's canonical view volume ([-1, 1] after the divide) into texture space ([0, 1]).
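Those two steps can be checked in isolation. This sketch applies the perspective divide and then the scale-by-0.5, translate-by-0.5 remap that the clip-to-texture (bias) matrix performs; the function name is hypothetical:

```cpp
#include <cassert>

struct Tex2 { float s, t; };

// Take a clip-space coordinate (x, y, w) into shadow-map texture space:
// dividing by w lands in NDC [-1, 1]; scaling by 0.5 and adding 0.5
// (the effect of pre-multiplying the MVP by the bias matrix) lands in
// [0, 1], ready to index the shadow map.
static Tex2 clipToTexture(float x, float y, float w) {
    float ndcX = x / w;  // perspective divide
    float ndcY = y / w;
    return Tex2{ ndcX * 0.5f + 0.5f,
                 ndcY * 0.5f + 0.5f };
}
```

Skipping the divide, as in the original bug, means the [0, 1] remap is applied to unbounded clip coordinates, which is why the shadow ended up in the wrong place.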

Row-major vs column-major storage of matrices was also a problem. I incorrectly assumed that the way OpenGL wants its matrices would be the same as the GPU. There is no reason why this assumption should be true, and it's clear from the Cg examples that it isn't. While I was getting a lot of the math correct on the CPU side, I was sending the GPU matrices that needed to be transposed.

I plan on changing my matrix class to store everything row major. If I need to load one of my matrices into the OpenGL state, then I'll transpose it. Right now I transpose all matrices before sending them to the GPU because I store them in column major order.