All students will implement a software-based rasterization pipeline, following the major steps outlined in the textbook. While notably slower than hardware-based pipelines, a software rasterizer can achieve reasonable rendering speeds for basic scenes, even on modest machines. Our rasterizer will be designed to render only triangle meshes, which can be read in from Wavefront .obj files. From these inputs, we will implement the basic steps of an object-order rendering pipeline. We will experiment with three different shading models.


This assignment is designed to teach students the central concepts of rasterization of 3D primitives to draw a 2D scene. Specifically, students will:

  • Implement a simple graphics pipeline from start to finish
  • Understand the important concepts around perspective projection used to map 3D objects to a 2D scene
  • Develop basic rasterization routines to draw primitives to an image
  • Implement a z-buffer to determine which fragments are drawn to the screen
  • Utilize a fragment shader to color individual pixels based on three shading models: flat, Gouraud, and Phong

Part 1: Reading Meshes, Setting up the Scene

We will use a modified scene format from A03/A04 to read in the information for the camera and image specification. The parameters eye, lookat, up, fov_angle, width, and height should be interpreted the same as in previous assignments. This is sufficient for describing the scene, except that we also need to explicitly specify values for the near and far planes. These will be given by the parameters near and far, each a single value specifying the near-plane depth and far-plane depth, respectively.

This scene file will also contain information for the lights, encoded as a list named lights. We will use positional lights only, just as with our ray tracer. Each light will have a position and an RGB color.

As for surfaces, your program should be able to read and parse a triangle mesh stored in Wavefront .obj files, as discussed in class and used in A05.

To load meshes, we will make use of JavaScript's ability to load multiple files simultaneously. To do so, we will use the HTML file input to select both the .js file for the scene and any .obj files this scene must load. Within the scene file, there will be a list of meshes. For each mesh, the color information is specified as ambient, diffuse, specular, and phong_exponent. Importantly, there is a string, obj, that refers to the name of the file containing the .obj information for the mesh. You can assume that all meshes are stored in absolute, world coordinates.

In the starter repository, I have provided sample JavaScript code that will load any file matching the names of the mesh files specified in the scene. You will have to modify this code to use your .obj parser when loading the mesh. I anticipate that you will modify the function allFilesLoaded() to convert everything from the scene file as well as parse the .obj files, but feel free to deviate from this (of course, at your own risk!).

Additionally, after reading in the mesh and storing it in your data structure from A05, you will need to post-process it. Specifically, you should compute a normal for each triangular face. You can then average the face normals for triangles adjacent to a vertex to compute the vertex normals.
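As a sketch, the normal post-processing above might look like the following. The mesh layout (`positions` as an array of [x, y, z] triples and `faces` as an array of vertex-index triples) is an assumption — adapt it to your A05 data structure.

```javascript
// Assumed mesh layout: positions = [[x, y, z], ...],
// faces = [[i, j, k], ...] (indices into positions).
function sub(a, b) { return [a[0] - b[0], a[1] - b[1], a[2] - b[2]]; }
function cross(a, b) {
  return [a[1] * b[2] - a[2] * b[1],
          a[2] * b[0] - a[0] * b[2],
          a[0] * b[1] - a[1] * b[0]];
}
function normalize(v) {
  const len = Math.hypot(v[0], v[1], v[2]);
  return [v[0] / len, v[1] / len, v[2] / len];
}

function computeNormals(mesh) {
  // One normal per triangular face, via the cross product of two edges.
  mesh.faceNormals = mesh.faces.map(([i, j, k]) => {
    const [p0, p1, p2] = [mesh.positions[i], mesh.positions[j], mesh.positions[k]];
    return normalize(cross(sub(p1, p0), sub(p2, p0)));
  });
  // Vertex normals: sum the normals of all faces adjacent to each vertex,
  // then renormalize (equivalent to averaging the directions).
  const acc = mesh.positions.map(() => [0, 0, 0]);
  mesh.faces.forEach(([i, j, k], f) => {
    for (const v of [i, j, k]) {
      for (let c = 0; c < 3; c++) acc[v][c] += mesh.faceNormals[f][c];
    }
  });
  mesh.vertexNormals = acc.map(normalize);
}
```

Note that this assumes counterclockwise winding for outward-facing normals, which is the .obj convention.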

Part 2: Setting up the Matrices

A key component of the rasterizer will be setting up the appropriate transformation matrices to map the 3D elements to a 2D scene. Towards this, 433 students can take advantage of their code from A05 to work with four-dimensional homogeneous coordinates. 533 students will have to code a four-dimensional matrix class anew. All positional data should be initialized with a homogeneous coordinate of w = 1, while any vector information (such as the direction of a light or a normal vector) should be initialized with w = 0.

The first main task of your rasterizer is to use the scene information to set up the important matrices for the view. Using the scene specification, you will need to compute a variety of terms, including the camera coordinate vectors u, v, and w, based on the eye, lookat point (which can be used to compute the gaze direction), and up vector in the scene file (see Sections 2.4.7 and 7.1.3). You will also need to compute the six parameters that define the view volume: l, r, b, t, n, f. The values l, r, b, t should be specified based on the dimensions of the near plane, similar to how we computed these parameters in A03. Specifically, this means that instead of using the focal length defined by the distance between the eye and the lookat point, you should use the distance to the near plane (taken as the absolute value of near). The values for n and f should be read directly from the scene file (as near and far) and do not need to be computed. Note that n and f should both be negative and that n > f, as per the convention of the book.

Using these terms, you should be able to compute M_vp, M_per, and M_cam as in the book, and multiply these together to form the matrix M = M_vp M_per M_cam.
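A sketch of computing the camera basis and near-plane extents under these conventions follows. The function and parameter names are illustrative; fov_angle is assumed to be the vertical field of view in degrees (as in A03), and near/far are the negative depths from the scene file.

```javascript
function sub(a, b) { return [a[0] - b[0], a[1] - b[1], a[2] - b[2]]; }
function scale(v, s) { return [v[0] * s, v[1] * s, v[2] * s]; }
function cross(a, b) {
  return [a[1] * b[2] - a[2] * b[1],
          a[2] * b[0] - a[0] * b[2],
          a[0] * b[1] - a[1] * b[0]];
}
function normalize(v) {
  const len = Math.hypot(v[0], v[1], v[2]);
  return [v[0] / len, v[1] / len, v[2] / len];
}

function setupView(eye, lookat, up, fovDeg, width, height, near, far) {
  // Camera basis: w opposes the gaze direction, u points right, v points up.
  const w = normalize(scale(sub(lookat, eye), -1));
  const u = normalize(cross(up, w));
  const v = cross(w, u);  // already unit length, since u is perpendicular to w

  // Near-plane extents from the vertical field of view and aspect ratio.
  const t = Math.abs(near) * Math.tan((fovDeg * Math.PI / 180) / 2);
  const b = -t;
  const r = t * (width / height);
  const l = -r;
  return { u, v, w, l, r, b, t, n: near, f: far };
}
```

These terms then feed directly into the M_cam, M_per, and M_vp matrices from the book.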

Part 3: Vertex Processing

Vertex processing should perform two steps. First, you will shade all vertices. If one is doing Gouraud shading, you will use the color properties of the shape and perform a single shading calculation based on the vertex’s position and normal. This information will be accumulated per-light to form a final vertex color. To compute the vertex color, you should implement the standard approximation to the Blinn-Phong shading model, accumulating the result of ambient, diffuse, and specular components.
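A sketch of the per-vertex Blinn-Phong evaluation follows. Helper names are illustrative, and one assumption is flagged in the comments: the ambient term here is taken as the material's ambient color directly — adjust this if your scenes specify a separate ambient light intensity.

```javascript
function sub(a, b) { return [a[0] - b[0], a[1] - b[1], a[2] - b[2]]; }
function dot(a, b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }
function normalize(v) {
  const len = Math.hypot(v[0], v[1], v[2]);
  return [v[0] / len, v[1] / len, v[2] / len];
}

function shadeVertex(p, n, eye, lights, mat) {
  // Ambient term: assumed to be the material's ambient color (see note above).
  const color = [...mat.ambient];
  const view = normalize(sub(eye, p));  // direction toward the camera
  for (const light of lights) {
    const l = normalize(sub(light.position, p));
    const h = normalize([l[0] + view[0], l[1] + view[1], l[2] + view[2]]);
    const diff = Math.max(0, dot(n, l));
    const spec = Math.pow(Math.max(0, dot(n, h)), mat.phong_exponent);
    // Accumulate the diffuse and specular contributions per light.
    for (let c = 0; c < 3; c++) {
      color[c] += light.color[c] * (mat.diffuse[c] * diff + mat.specular[c] * spec);
    }
  }
  return color.map(c => Math.min(1, c));  // clamp to [0, 1]
}
```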

If instead one is doing flat shading, you will compute a single shaded color for each triangle, which we will use downstream. This color is computed the same way as the vertex colors, except using the triangle's normal. To define a position for the triangle (necessary for lighting), you can use the barycenter, easily computed by averaging the three vertices together.

Note that for either shading type, we need not invert the transformation matrix to update the normals. Our scenes have fixed models, and only the camera position and light directions can change, so the normal information need not be updated as a result of the transformation.

Finally, you will apply all transformations to the vertex positions to compute the screen-space coordinates for each vertex. To keep things in line with how the book presents the pipeline, do not perform the homogeneous divide yet.
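This step amounts to a matrix-point multiplication that keeps all four components; a minimal sketch (assuming a row-major 4x4 matrix stored as an array of rows):

```javascript
// Multiply a row-major 4x4 matrix by the homogeneous point (x, y, z, 1).
// The result keeps its w component -- the divide happens later, in Part 4.
function transformPoint(M, p) {
  const q = [p[0], p[1], p[2], 1];
  return M.map(row => row[0] * q[0] + row[1] * q[1] + row[2] * q[2] + row[3] * q[3]);
}
```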

Part 4: Rasterization

For each primitive, you will compute the set of fragments. To start, you will perform the homogeneous divide for the vertices. Next, you will need to compute the bounding box (in screen space) of each triangle and then enumerate all pixels within it to check whether their barycentric coordinates place them inside the primitive. Some care will be required to prevent any cracks, as described in Section 8.1.2 of the textbook.
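The loop above might be sketched as follows, using the book's edge-function formulation of barycentric coordinates. For simplicity, this version samples at integer pixel coordinates and uses an inclusive test on all three coordinates, so it omits the shared-edge tie-breaking rule from Section 8.1.2 — your version will need that rule to avoid double-drawn or cracked edges.

```javascript
// Implicit 2D line through p and q, evaluated at (x, y); zero on the line.
function edge(p, q, x, y) {
  return (p[1] - q[1]) * x + (q[0] - p[0]) * y + p[0] * q[1] - q[0] * p[1];
}

// Barycentric coordinates of (x, y) with respect to triangle a, b, c.
function barycentric(a, b, c, x, y) {
  const alpha = edge(b, c, x, y) / edge(b, c, a[0], a[1]);
  const beta  = edge(c, a, x, y) / edge(c, a, b[0], b[1]);
  const gamma = edge(a, b, x, y) / edge(a, b, c[0], c[1]);
  return [alpha, beta, gamma];
}

// Enumerate candidate pixels in the screen-space bounding box and emit a
// fragment for each pixel whose barycentric coordinates are inside.
function rasterizeTriangle(a, b, c, emitFragment) {
  const xmin = Math.ceil(Math.min(a[0], b[0], c[0]));
  const xmax = Math.floor(Math.max(a[0], b[0], c[0]));
  const ymin = Math.ceil(Math.min(a[1], b[1], c[1]));
  const ymax = Math.floor(Math.max(a[1], b[1], c[1]));
  for (let y = ymin; y <= ymax; y++) {
    for (let x = xmin; x <= xmax; x++) {
      const [al, be, ga] = barycentric(a, b, c, x, y);
      if (al >= 0 && be >= 0 && ga >= 0) emitFragment(x, y, al, be, ga);
    }
  }
}
```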

For any pixel that is contained within the triangle, you should produce a fragment associated with the pixel. Retain only the fragment with the nearest z-value, computed by interpolating the z-values of the vertices. A handy way to track this is to keep a z-buffer, in the form of an array of width × height floats, that tracks the closest z-value for each pixel. Any time you encounter a fragment with a closer value, replace the fragment and update the value in the z-buffer for that pixel.
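A minimal z-buffer sketch follows. It assumes the book's conventions, under which the near plane maps to z = +1 and the far plane to z = -1 after the transformations, so a larger z is closer to the camera; flip the comparison if your convention differs.

```javascript
// z-buffer: one float per pixel, initialized to "infinitely far".
class ZBuffer {
  constructor(width, height) {
    this.width = width;
    this.depth = new Float32Array(width * height).fill(-Infinity);
  }
  // Returns true if (x, y, z) is the closest fragment seen so far,
  // updating the stored depth; otherwise the fragment is discarded.
  testAndSet(x, y, z) {
    const i = y * this.width + x;
    if (z > this.depth[i]) { this.depth[i] = z; return true; }
    return false;
  }
}
```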

If one is doing Gouraud shading, at this stage you could also interpolate the fragment's color based on the barycentric coordinates and the vertex colors. Alternatively, if one is doing flat shading, you can simply set the fragment's color to the triangle color you computed in Part 3. More generally for fragment processing, you could simply store the barycentric coordinates of the fragment and a reference to the primitive it comes from.

Part 5: Fragment Processing

At this stage, if Phong shading is enabled, you will do the shading approximation to produce a color for each fragment. To do this, you will need to interpolate both the normal vector and the vertex position, so that you can appropriately compute the lighting vectors. These interpolations will require you to coordinate with the vertex processing stage to pass information along, as well as with the rasterization stage to keep the barycentric coordinates for the fragment. Note that interpolated positions can be used directly, but interpolated normals will need to be renormalized.
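The interpolation step can be sketched as follows (function names are illustrative): positions interpolate directly with the barycentric weights, while normals need renormalizing because a weighted sum of unit vectors is generally not unit length.

```javascript
// Barycentric interpolation of a per-vertex attribute (position, normal,
// or color) given the fragment's (alpha, beta, gamma) weights.
function lerpAttrib(alpha, beta, gamma, a0, a1, a2) {
  return [0, 1, 2].map(c => alpha * a0[c] + beta * a1[c] + gamma * a2[c]);
}
function normalize(v) {
  const len = Math.hypot(v[0], v[1], v[2]);
  return [v[0] / len, v[1] / len, v[2] / len];
}
// Normals must be renormalized after interpolation; positions need not be.
function fragmentNormal(alpha, beta, gamma, n0, n1, n2) {
  return normalize(lerpAttrib(alpha, beta, gamma, n0, n1, n2));
}
```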

After computing this information, you should be able to evaluate the Blinn-Phong lighting model for each fragment (using the ambient, diffuse, and specular color components for the surface).

Finally, the fragment shader should write all fragments to pixels in the image, retaining only the fragment that is closest to the camera.

Part 6: Execution and Testing

Your program should be able to load all of the scene files I've provided and display the resulting rasterized scene. You should also provide an interface to switch between flat, Gouraud, and Phong shading and then redraw the scene. And, like the ray tracer, your program should be able to write the final output image as a .ppm file.

Additionally, using the tools provided, you should also create your own scene using a combination of objects and a combination of lights. Be creative! Please be sure to add your scene file as myscene.js, as well as any .obj files that you used and a rendered image myscene.ppm. To position objects in the scene, you may want to utilize your tool from A05 and/or MeshLab.

Part 7: Written Questions

Please answer the following written questions. You are not required to typeset these questions in any particular format, but you may want to take the opportunity to include images (either photographed hand-drawings or produced using an image editing tool).

These questions are intended both to give you additional material for considering the conceptual aspects of the course and to provide sample questions in a format similar to those on the midterm and final exams. Most questions should be answerable in 100 words or less of text.

Please create a separate directory in your repo called written and post all files (text answers and written) to this directory. Recall that the written component is due BEFORE the programming component.

  1. A key software challenge in rasterization is maintaining all of the geometric information throughout the pipeline, from world-space mesh, to transformed mesh, to screen-space fragments. Briefly describe what data structures you will use to store this information, being sure to mention specifically which elements (points, normals, indices, etc.) are stored by each one.

  2. Exercise 7.5 on pg. 157 of the textbook.

  3. Homogeneous coordinates model affine transformations, thus allowing for translation using a clever bookkeeping scheme. Additionally, our view transformation generalizes them for perspective projection. Briefly explain the key property of perspective, in terms of how it affects the size of objects, and how this maps to what new factors we include in a homography matrix.

  4. The painter’s algorithm offers a simple way to decide which objects are drawn on the screen, but it has several problems. Explain one of them.

  5. Gouraud shading is known to be a poor approximation for lighting when the size of primitives is large. Explain why.


You should use git to submit all source code files. The expectation is that your code will be graded by cloning your repo and then executing it within a modern browser (Chrome, Firefox, etc.).

Please provide a README file with a text description of how to run your program and any parameters that you used. Also document any idiosyncrasies, behaviors, or bugs of note that you want us to be aware of.

To summarize, my expectation is that your repo will contain:

  1. A README file
  2. Answers to the written questions in a separate directory named written
  3. An index.html file
  4. An a06.js file
  5. Any other .js files that you authored.
  6. Your sample scene file, myscene.js, its associated .obj files, as well as the .ppm file your code generated from it, myscene.ppm.
  7. (Optionally) any other test images that you want.



Reason                         Value
Program crashes due to bugs    -10 per bug (at grader's discretion to fix)

Point Breakdown of Features

Requirement                                                               Value
Consistent modular coding style                                              10
External documentation (README)                                               5
Class documentation, internal documentation (block and inline),
  wherever applicable / for all files                                        15
Expected output / behavior based on the assignment specification, including:
  Parsing the scene and .obj file, allowing the user to save the image        5
  Correctly setting up the transformation matrices for the scene             10
  Transforming vertices to screen coordinates                                 5
  Correctly shading vertices at the vertex processing stage and
    computing Gouraud shading                                                 5
  Enumerating the set of fragments for each primitive, using a
    z-buffer to keep the closest                                             10
  Correctly computing the barycentric coordinates for each fragment
    for world-space interpolation                                            10
  Implementing flat shading                                                   5
  Implementing Phong shading in the fragment processing stage                 5
  Allowing the user to switch between flat, Gouraud, and Phong                5
  Producing a custom scene of your choice                                    10

Total                                                                       100

Extra Credit

While this assignment does not have separate graduate and undergraduate versions, a variety of extra credit opportunities exist. They are described below, but of course you are welcome to experiment with any technique you've discovered in reading the book or other papers. Please document any that you choose to implement in your README. A few example ideas follow:


Clipping

In between vertex processing and rasterization, perform clipping (this requires setting up a test scene where the geometry necessitates clipping). To do so, vertex processing should apply all matrices except M_vp, and it should not perform the homogeneous divide at this stage. Next, you will clip all points against the view volume planes, which at this point are the planes of the canonical view volume, expressed in homogeneous coordinates. Finally, apply M_vp and perform the homogeneous divide.

Cel Shading

Cel shading, also known as “toon” shading, is a cool effect that can be done during the fragment processing stage. To do so, instead of computing the Phong shading model, you will analyze the dot product of the normal and the light vector, and partition its range to vary the intensity in discrete bands.
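A sketch of the banding step, assuming the quantization is applied to the clamped n·l term (the band count here is an arbitrary choice):

```javascript
// Quantize the diffuse term n.l into a small number of intensity bands.
function celIntensity(nDotL, bands = 3) {
  const t = Math.max(0, nDotL);          // back-facing light contributes 0
  return Math.ceil(t * bands) / bands;   // snap up so the top band reaches 1
}
```

The banded intensity then scales the diffuse color in place of the smooth Lambertian term.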

Blending Transparent Objects

A rasterization pipeline makes it fairly straightforward to implement blending. To do so, you’ll need to specify a transparency value for each surface (you could do this with transparency information for each color channel, or just a global alpha for the shape itself). Next, you need to modify your z-buffer to maintain all fragments associated with a given pixel, along with their depths. To blend, you should use the blending operations from image compositing to accumulate the final color for the pixel, processing the fragments in depth order.
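The per-pixel blend might be sketched as an "over" composite applied back-to-front. The fragment record shape is an assumption, as is the convention that a larger z is closer (so ascending z order is back-to-front); adjust the sort if your convention differs.

```javascript
// Back-to-front "over" compositing for one pixel's fragment list.
// Each fragment is assumed to be { color: [r, g, b], alpha, z }.
function blendPixel(fragments, background) {
  const color = [...background];
  const ordered = fragments.slice().sort((p, q) => p.z - q.z);  // far first
  for (const f of ordered) {
    for (let c = 0; c < 3; c++) {
      color[c] = f.alpha * f.color[c] + (1 - f.alpha) * color[c];
    }
  }
  return color;
}
```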

Improved Scene File / Instancing

Section 12.2 of the textbook describes the concept of a scene graph, which can be used to store and manage scenes with a variety of objects. Section 13.2 of the textbook also describes instancing, which can be translated to the setting of rasterization by applying a single, unique transformation to each mesh that we load.

Both of these concepts let one specify a single shape in absolute coordinates and then reuse it. One could modify the scene file format to include a transformation that is applied at load time to the mesh. Adding features that support this avoids having to edit each mesh file in order to position it in the scene.


Interactive Camera

Because of its efficiency, it is possible to enable interaction in our rasterizer, allowing the user to navigate the scene. One simple way to do this is to use the mouse and keyboard to update the eye, lookat, and/or up vectors in the scene. One can do this with callbacks assigned to the HTML canvas object.

A more complex version of this is to use the mouse/keyboard to specify a new model transformation matrix that is applied to all geometric primitives before the camera transformation. This more complex solution allows a richer way to navigate the scene and, for example, move models around to light and look at regions that were not previously seen. In doing this update, one will necessarily need to update the normal vectors as well, which requires inverting the transformation and updating the normals as described in class.


Texture Mapping

As described in the book, .obj files support textures and texture coordinates with a few additional commands. Adding texture support can allow your rasterizer to draw more sophisticated scenes, but requires one to (1) extend the .obj parser to support texture coordinates, (2) update the fragment shader to interpolate texture coordinates, and (3) use the texture coordinates to look up colors in a texture image that you load. One could support a basic version of this by converting .objs to use a .ppm file as the texture image. A variety of free-to-use, textured .obj files are available online, but you’ll still need to use an image editor to convert the texture files to .ppm.

Shadow Maps

By using a multi-pass algorithm, one can implement shadows using a technique called shadow maps. The basic idea is that you render the scene from the light position and, instead of recording the color, record the nearest z-value. The technique is described in more detail in the paper by Williams, Casting Curved Shadows on Curved Surfaces.

Next, when rendering, transform the point you are shading using the light’s transformation and compare its z-value to the value in the shadow map. If they are the same, the surface can see the light; if the value in the shadow buffer is closer, then the surface is in shadow and the light should be ignored. You’ll need a small bias to prevent collisions, similar to how a shadow ray needs to be offset in ray tracing. This works best with the Phong shading model in the fragment shader.
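The comparison might be sketched as follows, assuming the same larger-z-is-closer convention for the light's view and a shadow map stored as a flat Float32Array; the bias value is an arbitrary starting point to tune per scene.

```javascript
// shadowMap[y * width + x] holds the closest z seen from the light.
// A fragment is lit only if nothing nearer to the light covers it; the
// small bias prevents surfaces from shadowing themselves ("shadow acne").
function inShadow(shadowMap, width, x, y, z, bias = 1e-3) {
  return shadowMap[y * width + x] > z + bias;
}
```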

Gooch Shading

Implement any one of the effects described in Gooch et al., A Non-Photorealistic Lighting Model for Automatic Technical Illustration. These are particularly interesting for producing transitions from cool to warm colors, as well as surfaces that look metallic.