- Part 1: Reading Meshes, Setting up the Scene
- Part 2: Setting up the Matrices
- Part 3: Vertex Processing
- Part 4: Rasterization
- Part 5: Fragment Processing
- Part 6: Execution and Testing
- Part 7: Written Questions
In this assignment, you will implement a software-based rasterization pipeline, following the major steps outlined in the textbook. While notably slower than modern hardware pipelines, it can achieve reasonable rendering speeds for basic scenes, even on modest machines. Our rasterizer will be designed to render triangle meshes only, which can be read in from Wavefront
.obj files. From these inputs, we will implement the basic steps of an object-order rendering pipeline. We will experiment with three different shading models.
This assignment is designed to teach students the central concepts of rasterization of 3D primitives to draw a 2D scene. Specifically, students will:
- Implement a simple graphics pipeline from start to finish
- Understand the important concepts around perspective projection to map 3D objects to 2D scenes
- Develop basic rasterization routines to draw primitives to an image.
- Implement a z-buffer to determine which fragments are drawn to the screen.
- Utilize a fragment shader to color individual pixels based on three shading models: flat, Gouraud, and Phong.
Part 1: Reading Meshes, Setting up the Scene
We will use a modified scene format from A03 to read in the information for the camera and image specification. The prefixes from A03 (e.g.,
i) should be interpreted the same as in previous assignments. This is sufficient for describing the scene, except that we also need to explicitly specify values for the near and far planes. These will be specified by the character
d (for “depth”) followed by two numbers: the near plane depth and the far plane depth.
This scene file will also contain information for the lights, encoded by the prefix
L. We will use positional lights only, just as with our ray tracer. You should interpret the 3 reals that follow the
L as the position of the light, and the next 3 reals in the scene file as the color of the light.
As for surfaces, your program should be able to read and parse a triangle mesh stored in Wavefront
.obj files, as discussed in class and A04. We will specify which meshes to load in the scene file using the prefix
M followed by a filename specified relative to the path of the scene file. The meshes that you will support reading do not have any colors associated with them inside the file. Thus, we will also specify, as we have done for other surfaces, 10 reals to control the diffuse, ambient, and specular color information.
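As an illustration, the new scene-file lines might look like the following. Note that the specific values, and especially the ordering of the 10 material reals (assumed here to be diffuse, ambient, and specular colors plus a Phong exponent), are hypothetical; follow your course's exact specification.

```
d -0.1 -100.0
L 0.0 5.0 0.0   1.0 1.0 1.0
M bunny.obj  0.7 0.2 0.2  0.1 0.1 0.1  0.5 0.5 0.5  32
```

Here `d` gives the near and far plane depths (negative, per the book's convention), `L` gives a light's position and then its color, and `M` names a mesh file followed by the 10 material reals.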
You can assume that all meshes are stored in absolute, world coordinates. After reading in the mesh, store it in your data structure from A04. Next, you should compute a normal for each triangular face. You can then average the face normals for triangles adjacent to a vertex to compute the vertex normals.
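The normal computation above can be sketched as follows. This is a minimal sketch; the `Vec3` type and helper names are our own, standing in for your A04 classes, and counter-clockwise winding is assumed.

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Normal of triangle (a, b, c), assuming counter-clockwise winding.
Vec3 faceNormal(Vec3 a, Vec3 b, Vec3 c) {
    return normalize(cross(sub(b, a), sub(c, a)));
}

// Vertex normals: sum the normals of all faces adjacent to each vertex,
// then normalize (normalizing the sum is equivalent to averaging).
std::vector<Vec3> vertexNormals(const std::vector<Vec3>& verts,
                                const std::vector<std::array<int, 3>>& tris) {
    std::vector<Vec3> normals(verts.size(), {0, 0, 0});
    for (const auto& t : tris) {
        Vec3 fn = faceNormal(verts[t[0]], verts[t[1]], verts[t[2]]);
        for (int i : t) {
            normals[i].x += fn.x;
            normals[i].y += fn.y;
            normals[i].z += fn.z;
        }
    }
    for (auto& n : normals) n = normalize(n);
    return normals;
}
```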
Part 2: Setting up the Matrices
A key component of the rasterizer will be setting up the appropriate transformation matrices to map the 3D elements to a 2D scene. Towards this, you will take advantage of your code from A04 to work with four-dimensional homogeneous coordinates. All positional data should be initialized with a w value of 1, while any vector information (such as the direction of a light or a normal vector) should be initialized with a w value of 0.
The first main task of your rasterizer is to use the scene information to set up the important matrices for the view. Using the scene specification, you will need to compute a variety of terms, including the camera coordinate vectors u, v, and w, based on the eye, lookat point (which can be used to compute the gaze direction), and up vector in the scene file (see Sections 2.4.7 and 7.1.3). You will also need to compute the six parameters that define the view volume: l, r, b, t, n, f. The values l, r, b, t should be specified based on the dimensions of the near plane, similar to how we computed these parameters in A03. The values for n and f should be read directly from the scene file and do not need to be computed. Note that n and f should both be negative and that n > f, as per the convention of the book.
Using these terms, you should be able to compute M_vp, M_per, and M_cam as in the book, and multiply these together to form the matrix M = M_vp M_per M_cam.
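As a sketch of the first of these steps, the camera basis and M_cam might be computed as below. The `cameraMatrix` name and the minimal `Vec3`/`Mat4` types are our own, standing in for your A04 classes; the basis follows the textbook's convention (w opposite the gaze, u = up × w, v = w × u).

```cpp
#include <array>
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
using Mat4 = std::array<std::array<float, 4>, 4>;

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// Build M_cam from the eye, lookat, and up vectors in the scene file.
Mat4 cameraMatrix(Vec3 eye, Vec3 lookat, Vec3 up) {
    Vec3 gaze = sub(lookat, eye);
    Vec3 w = normalize({-gaze.x, -gaze.y, -gaze.z});  // opposite the gaze
    Vec3 u = normalize(cross(up, w));
    Vec3 v = cross(w, u);
    // Rotate world axes onto (u, v, w) and translate the eye to the origin.
    return {{{u.x, u.y, u.z, -dot(u, eye)},
             {v.x, v.y, v.z, -dot(v, eye)},
             {w.x, w.y, w.z, -dot(w, eye)},
             {0.0f, 0.0f, 0.0f, 1.0f}}};
}
```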
Part 3: Vertex Processing
Vertex processing should perform two steps. First, you will shade all vertices. If one is doing Gouraud shading, you will use the color properties of the shape and perform a single shading calculation based on the vertex’s position and normal. This information will be accumulated per light to form a final vertex color. To compute the vertex color, you should implement the standard approximation to the Blinn-Phong shading model, accumulating the result of ambient, diffuse, and specular components.
Instead, if one is doing flat shading, you will compute a single shaded color for each triangle, which we will use downstream. This color is computed the same way as for the vertices, except using the triangle’s normal. To define a position for the triangle, necessary for lighting, you can use the barycenter (easily computed by averaging all three vertices together).
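The per-light Blinn-Phong calculation used for both cases might be sketched as below. This is one common formulation; how your course weights the ambient term (and whether it is modulated by the light color, as assumed here) may differ, and the `Vec3` helpers stand in for your A04 classes.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, Vec3 b) { return {a.x * b.x, a.y * b.y, a.z * b.z}; }
static Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 v) { return scale(v, 1.0f / std::sqrt(dot(v, v))); }

// Blinn-Phong contribution of a single positional light at a shading
// point p with normal n. Accumulate this over all lights in the scene.
Vec3 blinnPhong(Vec3 p, Vec3 n, Vec3 eye,
                Vec3 lightPos, Vec3 lightColor,
                Vec3 ka, Vec3 kd, Vec3 ks, float phongExp) {
    Vec3 l = normalize(sub(lightPos, p));  // direction to the light
    Vec3 v = normalize(sub(eye, p));       // direction to the viewer
    Vec3 h = normalize(add(l, v));         // half vector
    float diff = std::max(0.0f, dot(n, l));
    float spec = std::pow(std::max(0.0f, dot(n, h)), phongExp);
    // ambient + diffuse + specular, modulated by the light color
    Vec3 c = add(ka, add(scale(kd, diff), scale(ks, spec)));
    return mul(c, lightColor);
}
```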
Note that for either shading type, we need not invert the transformation matrix to update the normals. Our scenes have fixed models, and only the camera position and light directions can change; thus, the normal information need not be updated as a result of the transformation.
Finally, you will apply all transformations to vertex positions to compute the screen space coordinates for each vertex. To keep things in line with how we’ve discussed them in the book, do not perform the homogeneous divide yet.
Part 4: Rasterization
For each primitive, you will compute the set of fragments. To start with, you will perform the homogeneous divide for the vertices. Next, you will need to compute the bounding box of each triangle and then enumerate all pixels within it to check if their barycentric coordinates are within the primitive. Some care will be required to prevent any cracks, as described in the textbook in section 8.1.2.
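The bounding-box enumeration and barycentric inside test can be sketched as below. This is a minimal version: it samples at pixel centers and accepts all-nonnegative barycentric coordinates, and a robust rasterizer would additionally apply a tie-breaking rule (e.g., top-left) on shared edges, per Section 8.1.2. The names are our own.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

struct Vec2 { float x, y; };

// Signed-area-style edge function used to form barycentric coordinates.
static float edgeFn(Vec2 a, Vec2 b, Vec2 p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// Enumerate pixels in the triangle's bounding box, keeping those whose
// centers lie inside the triangle (all barycentric coordinates >= 0).
std::vector<std::pair<int, int>> coveredPixels(Vec2 a, Vec2 b, Vec2 c) {
    std::vector<std::pair<int, int>> out;
    float area = edgeFn(a, b, c);
    if (area == 0.0f) return out;  // degenerate triangle
    int x0 = (int)std::floor(std::min({a.x, b.x, c.x}));
    int x1 = (int)std::ceil(std::max({a.x, b.x, c.x}));
    int y0 = (int)std::floor(std::min({a.y, b.y, c.y}));
    int y1 = (int)std::ceil(std::max({a.y, b.y, c.y}));
    for (int y = y0; y < y1; ++y) {
        for (int x = x0; x < x1; ++x) {
            Vec2 p = {x + 0.5f, y + 0.5f};  // sample at the pixel center
            float alpha = edgeFn(b, c, p) / area;
            float beta  = edgeFn(c, a, p) / area;
            float gamma = edgeFn(a, b, p) / area;
            if (alpha >= 0.0f && beta >= 0.0f && gamma >= 0.0f)
                out.emplace_back(x, y);
        }
    }
    return out;
}
```

The barycentric values computed here are exactly what later stages need for interpolating colors, normals, and depth, so it is worth storing them with each fragment.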
For any pixel that is contained within the triangle, you should produce a fragment associated with the pixel. Retain only the fragment with the nearest z-value, computed by interpolating the z-values of the vertices. A handy way to track this is to keep a z-buffer, in the form of an array of width × height floats, that tracks the closest z-value for each pixel. Any time you encounter a fragment with a closer value, replace the stored fragment and update the value in the z-buffer for that pixel.
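A minimal z-buffered framebuffer might look like the sketch below. One assumption to flag: following the book's convention that n and f are negative with n > f, a larger z here means closer to the camera, so the buffer starts at negative infinity and a fragment wins when its z is greater; flip the comparison if you store depth the other way.

```cpp
#include <cassert>
#include <limits>
#include <vector>

struct Color { float r, g, b; };

struct Framebuffer {
    int width, height;
    std::vector<float> zbuf;   // closest z seen so far, per pixel
    std::vector<Color> color;  // color of the winning fragment, per pixel

    Framebuffer(int w, int h)
        : width(w), height(h),
          zbuf(w * h, -std::numeric_limits<float>::infinity()),
          color(w * h, {0, 0, 0}) {}

    // Returns true if the fragment passed the depth test and was written.
    bool writeFragment(int x, int y, float z, Color c) {
        int i = y * width + x;
        if (z <= zbuf[i]) return false;  // an equal-or-closer fragment exists
        zbuf[i] = z;
        color[i] = c;
        return true;
    }
};
```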
If one is doing Gouraud shading, at this stage you could also interpolate the fragment’s color based on the barycentric coordinates and the vertex colors. Alternatively, if one is doing flat shading, you can simply set the fragment’s color to the triangle color you computed in Part 3. More generally, for fragment processing you could simply store the barycentric coordinates of the fragment and a reference to the primitive it comes from.
Part 5: Fragment Processing
At this stage, if enabled, you can instead perform the Phong shading approximation to produce a color for each fragment. To do this, you will need to interpolate both the normal vector and the vertex position, so that you can appropriately compute the lighting vectors. These interpolations will require you to coordinate with the vertex processing stage to pass information along, as well as with the rasterization stage to keep the barycentric coordinates for each fragment. Note that interpolated positions should work fine as-is, but interpolated normals will need to be renormalized.
After computing this information, you should be able to evaluate the Blinn-Phong lighting model for each fragment (using the ambient, diffuse, and specular components for the surface).
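The interpolation step can be sketched as follows (names are our own). The key point is the renormalization of the interpolated normal.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Barycentric interpolation of a per-vertex attribute (position, normal, ...).
Vec3 baryLerp(Vec3 a, Vec3 b, Vec3 c, float alpha, float beta, float gamma) {
    return {alpha * a.x + beta * b.x + gamma * c.x,
            alpha * a.y + beta * b.y + gamma * c.y,
            alpha * a.z + beta * b.z + gamma * c.z};
}

// Interpolated positions can be used directly, but an interpolated normal
// is generally no longer unit length and must be renormalized.
Vec3 fragmentNormal(Vec3 n0, Vec3 n1, Vec3 n2,
                    float alpha, float beta, float gamma) {
    return normalize(baryLerp(n0, n1, n2, alpha, beta, gamma));
}
```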
Finally, the fragment shader should write all fragments to pixels in the image, retaining only the fragment that is closest to the camera.
Part 6: Execution and Testing
Your program should be executed from the command line as
prog05 followed by two parameters: (1) the input scene file and (2) the output image filename. This execution should open an SDL window to display the final scene. You should allow the user to switch between flat, Gouraud, and Phong shading by pressing the
p key during runtime. And, like the ray tracer, you should be able to write the final image to the output image filename.
Using the tools provided, you should also create your own scene using a combination of objects and a combination of lights. Be creative! Please be sure to add your scene file as
myscene.txt as well as any
.obj files that you used and a rendered image
myscene.ppm. To position objects in the scene, you may want to utilize your tool from A04.
Part 7: Written Questions
Please answer the following written questions. You are not required to typeset these questions in any particular format, but you may want to take the opportunity to include images (either photographed hand-drawings or produced using an image editing tool).
Most questions should be answerable in 100 words or less of text.
Please create a separate directory in your repo called
written and commit all files (text answers and images) to this directory.
Exercise 7.5 on pg. 157 of the textbook.
Homogeneous coordinates model affine transformations, thus allowing for translation using a clever bookkeeping scheme. Additionally, our view transformation generalizes them for perspective projection. Briefly explain the key property of perspective, in terms of how it affects the size of objects, and how this maps to the new factors we include in a homography matrix.
The painter’s algorithm offers a simple way to decide which objects are drawn on the screen, but it has several problems. Explain one of them.
Explain the difference between how the color of a fragment is computed with Gouraud shading vs. Phong shading.
Gouraud shading is known to be a poor approximation for lighting when the size of primitives is large. Explain why.
- You should use git to submit all source code files and a working CMakeLists.txt. Do not submit any build files, binary files, or other generated artifacts. The expectation is that your code will be graded by cloning your repo and then building it from a clean checkout.
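A typical out-of-source CMake workflow the graders might follow is sketched below; the exact commands were not specified in the handout, and the repo URL and scene/output filenames are placeholders.

```shell
git clone <your-repo-url> prog05 && cd prog05
mkdir build && cd build
cmake ..
make
./prog05 ../scenes/example.txt output.ppm
```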
Your code must compile on the CS lab machines; in particular, we will test on
cambridge. Code that does not compile in this way will be given an automatic zero. You will be given one “warning” for the first instance during the semester that your code does not compile, but after that, a zero will occur. If you are working in a different environment, please log into
cambridge and test your code before submitting.
Make sure that this build process produces an executable named
prog05; you will need to edit your CMakeLists.txt accordingly.
Please provide a
README.md file that provides a text description of how to run your program and any command line parameters that you used. Also document any idiosyncrasies, behaviors, or bugs of note that you want us to be aware of.
To summarize, my expectation is that your repo will contain:
- Answers to the written questions in a separate directory named
written
- All source and
.h files that you authored and that are necessary to compile your code
- Your scene file
myscene.txt and its rendered output
myscene.ppm
- (Optionally) any other test scenes/images/objects
|Program does not compile. (First instance across all assignments will receive a warning with a chance to resubmit, but subsequent non-compiling assignments will receive the full penalty)|-100|
|Program crashes due to bugs|-10 per bug, at grader's discretion to fix|
Point Breakdown of Features
|Consistent, modular coding style|10|
|External documentation (README.md) and a working CMakeLists.txt|5|
|Class documentation and internal documentation (block and inline), wherever applicable / for all files|15|
|Expected output / behavior based on the assignment specification, including|
A variety of extra credit opportunities are described below, but of course you are welcome to experiment with any technique you’ve discovered in reading the book or other papers. Please document any that you choose to implement in your README, but what follows are a few example ideas:
In between vertex processing and rasterization, perform clipping (this requires setting up a scene to test where the geometry necessitates clipping). To do so, vertex processing should apply all matrices except M_vp, and it should not perform the homogeneous divide at this stage. Next, you will clip all points against the view volume planes, which at this point should be expressed in homogeneous clip coordinates. Finally, apply M_vp and perform the homogeneous divide.
Cel shading, also known as “toon” shading, is a cool effect that can be done during the fragment processing stage. To do so, instead of computing the Phong shading model you will analyze the dot product of the normal and the light vector, and partition the color space to vary the intensity in chunks.
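The partitioning step might be sketched as below; the `celIntensity` name and the `bands` knob are our own (3-5 bands usually gives the classic toon look).

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Quantize the diffuse term max(0, n . l) into a small number of
// discrete bands, producing the chunky "toon" intensity falloff.
float celIntensity(float nDotL, int bands) {
    float t = std::max(0.0f, std::min(1.0f, nDotL));
    int band = std::min(bands - 1, (int)std::floor(t * (float)bands));
    return (float)band / (float)(bands - 1);
}
```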
Improved Scene File / Instancing
Section 12.2 of the textbook describes the concept of a scene graph, which can be used to store and manage scenes with a variety of objects. Section 13.2 of the textbook also describes instancing, which can be translated to the rasterization setting by applying a single, unique transformation to each mesh that we load.
Both of these concepts make it possible to specify a single shape in its own coordinates. One could then modify the scene file format to include a transformation that is applied at load time to the mesh. Adding features that support this prevents one from having to edit each mesh file in order to position it in the scene.
Because of its efficiency, it is possible to add interaction to our rasterizer to allow the user to navigate the scene. One simple way to do this is to use the mouse and keyboard in SDL to update the eye, lookat, and/or up vectors in the scene.
A more complex version of this is to use the mouse/keyboard in SDL to specify a new transformation matrix that is applied to all geometric primitives before the camera transformation. This more complex solution allows a richer way to navigate the scene and, for example, move models around to light and look at regions that were not previously seen. In doing this update, one will necessarily need to update the normal vectors as well, which requires inverting (and transposing) the transformation and updating the normal vectors as described in class.
Implement any one of the effects described in Gooch et al., A Non-Photorealistic Lighting Model for Automatic Technical Illustration. These are particularly interesting for producing cool-to-warm color transitions as well as surfaces that look metallic.
By using a multi-pass algorithm, one can implement shadows using a technique called shadow maps. The basic idea is that you render the scene from the light position and, instead of recording the color, record the nearest depth (z) coordinate. It is described in more detail in the paper by Williams, Casting curved shadows on curved surfaces.
Next, when rendering, transform the point you are shading using the light's transformation and compare its z-value to the value in the shadow map. If they are the same, the surface can see the light; if the value in the shadow buffer is closer, then the surface is in shadow and the light should be ignored. You’ll need a small bias to prevent self-shadowing artifacts, similar to how a shadow ray needs to be offset in ray tracing. This works best with the Phong shading model in the fragment shader.
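The lookup step might be sketched as below. An assumption to note: the map here stores, per texel, the depth of the nearest surface as seen from the light, with smaller values meaning closer to the light; flip the comparison if you reuse the negative-z convention from the camera pass.

```cpp
#include <cassert>
#include <vector>

// True if the fragment is farther from the light than the nearest surface
// recorded in the shadow map (i.e., something occludes it). The bias
// nudges the comparison to avoid "shadow acne" self-shadowing, analogous
// to offsetting a shadow ray's origin in ray tracing.
bool inShadow(const std::vector<float>& shadowMap, int mapWidth,
              int texelX, int texelY, float fragDepthFromLight, float bias) {
    float nearest = shadowMap[texelY * mapWidth + texelX];
    return fragDepthFromLight > nearest + bias;
}
```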
As described in the book,
.obj files support loading textures and texture coordinates with a few additional commands. Adding texture support can allow your rasterizer to draw more sophisticated scenes, but requires one to (1) parse
.obj files to support texture coordinates, (2) update the fragment shader to interpolate texture coordinates, and (3) use the texture coordinates to look up colors in a texture image that you load. One could support a basic version of this by converting
.objs to use a
.ppm file as the texture image. I found a variety of free-to-use, textured
.obj files on https://free3d.com/3d-models/obj-textures, but you’ll still need to use an image editor to convert the texture files to .ppm.