Assignment 03: Undergraduate
Ray Tracing
Due: Mar. 11, 2018 11:59:59 PM
Graded: Mar. 18, 2018
Percentage of Grade: 12%
Assignment Description: Finalized
- Objectives
- Part 1: Linear Algebra
- Part 2: Cameras and Rays
- Part 3: Surfaces and Scenes
- Part 4: Lights and Color
- Part 5: Putting it all together
- Part 6: Written Questions
- Grading
Undergraduates will implement a ray tracer that models the illumination of a scene composed of basic primitive shapes and point light sources.
Objectives
This assignment is designed to teach you the main concepts for ray tracing, including:
- The basic linear algebra necessary for modeling light transport, camera coordinates, and shapes.
- Simple data structures for maintaining a scene of surfaces and computing their intersections with light rays.
- Introductory models for illumination, including Lambertian and Blinn-Phong shading and hard shadows.
- Implementing all of these components to put together a basic ray tracer that accumulates the light that illuminates a scene to produce an image.
Software Design for Ray Tracing
Your goal will be to develop a system for inverse modeling of the light transport within a scene.
Note, to keep the language consistent with the textbook, we will call any shape/object in the scene a surface.
We can break this high level task into a number of components:
- The algebraic operations required, including a well-designed module for handling three-dimensional vectors.
- A data structure for generating the appropriate rays from a specification of the camera and image plane.
- A mechanism for storing the surfaces in the scene, so that you can test whether or not view rays intersect them. Surfaces can also be linked to their material (color) properties. You will also need a way to specify the scene itself and load its description.
- A mechanism for storing lights and accumulating illumination to produce a pixel color. You may want to consider how this connects to any color structures that you've used in previous assignments, as you will ultimately have to store colors in pixels.
- Finally, a module that connects all of the above into an actual process that produces the final image (relying on your image class from previous assignments).
You are welcome to deviate from the above decomposition, except where I say you must implement something in a specific way. Nevertheless, you will still be required to implement all of the same functionality no matter how you choose to design your code.
Part 1: Linear Algebra
To handle the geometry of light rays and the surfaces that intersect them, some basic facilities for manipulating three-dimensional vectors will be extremely helpful. Chapter 2 of the textbook has a review of this material.
As discussed in lecture, operations for computing ray-surface intersection generally require computing both dot products and cross products of three-dimensional vectors. In addition, vector addition and subtraction, as well as multiplying a vector by a scalar, are necessary for the ray equations. Finally, computing the length of a vector and normalizing it will be necessary.
The richer you can make this functionality, the better. Future assignments will also benefit. For the purposes of a ray tracer, a three-dimensional matrix class is not necessary, but thinking in terms of encapsulating vectors as a class is extremely useful. For example, when storing a sphere you need a center position (a vector) and a radius (a scalar). If you do not have a vector class this means you need to store the center as 3 separate floats for the x, y, and z coordinates.
You may also want to consider using three-dimensional vectors for encoding color, and refactoring your code to work with that. Alternatively, since certain operations like dot products do not make sense for colors, you may want to consider further developing a separate color class that has the appropriate operators you will use. Ch. 1 of the textbook has a brief but interesting discussion on whether or not this is a good design choice, but I will note that there is redundancy: operations such as accumulating color often look like adding two vectors and multiplying a vector by a scalar.
Part 2: Cameras and Rays
Once you have settled on how you want to encapsulate the mathematics, your next task is to design the infrastructure for specifying and querying the view information for the scene. This includes the description of the viewpoint and camera as well as a description of the image plane.
Your program should eventually be able to read this information from an input scene specification, as described below. As discussed in lecture, there are multiple ways to specify a scene, but we will largely follow the convention of Chapter 4 of the textbook. In particular, your scene will be specified by:
- Vectors for the position of the eye, the position to lookat, and which direction is up.
- An angle describing the vertical field of view.
- The width and height of the image plane, specified as the number of columns and rows.
This information should be sufficient to specify the three-dimensional geometry of the image plane. To do so, you will need to use eye, lookat, and up to define a coordinate frame of the camera, (u, v, w). You should follow the convention of the book that w points in the opposite direction of the view direction. You can assume that lookat is located at the exact center of the image plane, and that u moves horizontally and v moves vertically in the plane.
After setting up how you will store this specification, your program should be able to compute a ray for each pixel in the image plane. This ray will travel from the eye position through each pixel into the scene. Otherwise, since the camera position is fixed, you are not required to implement any specific functionality beyond generating rays.
Part 3: Surfaces and Scenes
Your program must decompose the scene into a collection of surfaces, and be able to support ray tracing scenes with multiple surfaces. Surfaces must be handled via an abstract base class that has a virtual function for computing where and when rays hit the surface.
All surface types that you will implement should extend from this abstract class. Each subclass will implement the intersection function and return all of the information necessary to encapsulate the hit. In particular, you will need to be able to compute the associated position on the surface as well as the surface normal at that position.
Your ray tracer must support two types of shapes:
- Spheres, specified by a three-dimensional vector for the position of its center and a value for its radius.
- Planes, specified by a three-dimensional vector for a point on the plane and a three-dimensional vector for the plane's normal.
Finally, surfaces will need to store all of the appropriate parameters to determine the color of the hit point relative to the surface, as described in the next section.
Your code should be designed so that it is easy for you to then have functionality that can take a description of a ray, a range of possible values for t on the ray, and compute the hit that is nearest to the origin of the ray and within the range.
In future assignments, we will generalize this surface model further, so it is important that you implement surface types in this way. You are free to implement the scene itself and how you encapsulate the set/list of all surfaces however you like.
Part 4: Lights and Color
Your program must implement point light sources and be able to light scenes that include multiple light sources. You do not need to modulate the intensity of the light based on distance, but instead you can use a constant intensity value (as suggested by the book). Point lights should be defined so that they can be of any color (thus, light “intensity” should be an RGB value).
At a given surface hit, you should compute the appropriate color based on the Blinn-Phong lighting model. You will compute Lambertian (diffuse) color as well as ambient and specular color. Each surface should store the necessary coefficients for ambient, diffuse, and specular color, as well as its Phong exponent. These four values determine a very simple material model for the surface.
Thus, when a hit is encountered, you will use the surface that was hit to look up its material information, and then accumulate the amount of illumination from each light in the scene. When testing, I recommend that you first test the Lambertian component. Once you are convinced this is working, you can then debug the additional Blinn-Phong components. As you are accumulating color, be careful about overflow. The final colors you display will need to be clamped.
Finally, during the Blinn-Phong lighting calculation, your code should also be able to compute hard shadows. This means that if a light source is not visible from the hit point, its illumination should not be included in the final color. To test this, you should follow the scheme from the book: after computing the hit point, compute a ray from the hit point to each light source to test its visibility (being careful to offset the start of the ray by a small epsilon so the surface does not shadow itself). If this ray does not intersect any other surfaces in the scene, then you can safely include the contribution from this light source in the final accumulated color.
Part 5: Putting it all together
Your program should connect together all of the functionality above to build a simple ray tracing application. This task will require three high level components.
First, your code should be able to parse a simple input file with the description of the scene: camera parameters, lights, surfaces, and colors. The description of this file format is below.
Next, your ray tracer should utilize all of the above components to generate an image, stored in your image class from previous assignments. The color of every pixel in the image should be computed by tracing one ray into the scene and accumulating the color of every light that illuminates where it hits the surfaces in the scene.
Finally, you should use SDL to display this image. If the scene is complex or the image is large, this might take some time to compute; you are welcome to provide an indication of progress to the user, but this is not required. After the image is computed and displayed, the user should be able to save it to a PPM file, whose filename is specified at the command line.
Thus, your program should expect to be invoked with two file names on the command line: the input scene file and the output PPM file.
You can expect that we will test your program this way. So, please be sure your executable supports input parameters that can specify these two file names from the command line.
Scene files will be formatted as an ASCII file that you will parse. Every entry in the scene file begins with a single letter that describes what follows, and then a sequence of numbers based on the specification. In particular, you must support:
- e, followed by 3 reals for the eye vector
- l, followed by 3 reals for the lookat vector
- u, followed by 3 reals for the up vector
- f, followed by 1 real for the field of view angle in degrees
- i, followed by 2 integers for the width and height of the image
Lights will be specified with uppercase letters. Specifically, we will use:
- L, followed by 3 reals for the position of the light and then 3 reals that are the RGB value (in the range [0, 1]) for the light.
Finally, shapes will also be specified with uppercase characters. Each shape will first have its geometry specified and then following that will be color information:
- S, followed by 3 reals for the center vector and then 1 real for the radius of a sphere
- P, followed by 3 reals for a point vector and 3 reals for a normal vector of a plane
Following this geometry information will be a sequence of 9 reals that are RGB values (in the range [0, 1]) for the ambient, diffuse, and specular color, as well as 1 real for the Phong exponent.
We have included a couple of sample scene files in your repository to test with. You are also required to submit a scene file of your own creation, called myscene.txt, as well as the PPM file your ray tracer created from it, called myscene.ppm.
Part 6: Written Questions
Please answer the following written questions. You are not required to typeset these questions in any particular format, but you may want to take the opportunity to include images (either photographed hand-drawings or images produced using an image editing tool).
These questions are intended both to give you additional material for considering the conceptual aspects of the course and to provide sample questions in a format similar to the midterm and final exam. Most questions should be answerable in 100 words or less of text.
Please create and commit a separate directory in your repo called written, and post all files (text answers and images) to this directory.

1. Given an eye position, a lookat position, and an up vector, compute the u, v, and w vectors associated with the orthonormal basis of this camera. See Sections 2.4.5-2.4.7 of the textbook. Show your work.
2. What is the difference between orthographic (parallel) and perspective projection? Be precise, in particular with how view rays are defined.
3. Explain how the specular term of the Blinn-Phong shading model works and what visual effect it is trying to approximate. In particular, explain the half vector h and the Phong exponent p.
4. When deciding if a light at a given position is visible from a hit position, what ray should one use and what values of t should one check?
5. Exercise 4.1 on pg. 88 of the textbook.
Grading
Deductions
| Reason | Value |
| --- | --- |
| Program does not compile. (First instance across all assignments will receive a warning with a chance to resubmit, but subsequent non-compiling assignments will receive the full penalty) | -120 |
| Program crashes due to bugs | -10 per bug, at the grader's discretion to fix |
Point Breakdown of Features
Note: this assignment is graded out of 12 points instead of 10, and thus we are scoring it out of 120 instead of 100. A grade of 12 requires a score of 120/120.
| Requirement | Value |
| --- | --- |
| Consistent, modular coding style | 10 |
| External documentation (README.md) and a working CMakeLists.txt | 5 |
| Class documentation and internal documentation (block and inline), wherever applicable / for all files | 15 |
| Expected output / behavior based on the assignment specification | 70 |
| Written questions | 20 |
| Total | 120 |