In this assignment, you will get comfortable with the concept of ray tracing by implementing a ray tracer that models the illumination of a scene composed of basic primitive surfaces and point light sources.

Students in both sections are encouraged to check out the C++ tutorials that I am writing up, which describe features of C++ that you may find useful in this assignment.

Please click here to create your repository.

Objectives

This assignment is designed to teach you the main concepts for ray tracing, including:

  • The basic linear algebra necessary for modeling light transport, camera coordinates, and simple surfaces.
  • Data structures for maintaining a scene of surfaces and computing their intersections with light rays.
  • Introductory models for illumination, including Lambertian and Blinn-Phong shading and hard shadows.
  • Implementing all of these components to put together a basic ray tracer that accumulates the light that illuminates a scene to produce an image.

Notes on Software Design for Ray Tracing

Your goal will be to develop a system for inverse modeling of the light transport within a scene.

Note, to keep the language consistent with the textbook, we will call any shape/object in the scene a surface.

We can break this high level task into a number of components:

  1. The algebraic operations required, including a well designed module for handling three-dimensional vectors.
  2. A data structure for generating the appropriate rays from a specification of the camera and image plane.
  3. A mechanism for storing the surfaces in the scene, so that you can test whether or not view rays intersect them. Surfaces can also be linked to their material (color) properties. You will also need a way to specify the scene itself and load its description.
  4. A mechanism for storing lights and accumulating illumination to produce a pixel color. You may want to consider how this connects to any color structures that you’ve used in previous assignments, as you will ultimately have to store colors in pixels.
  5. Finally, a module that connects all of the above into the actual process that produces the final image (relying on your image class from previous assignments).

You are welcome to deviate from the above decomposition, except where I say you must implement something in a specific way. Nevertheless, you will still be required to implement all of the same functionality no matter how you choose to design your code.

Part 1: Linear Algebra

To handle the geometry of light rays and the surfaces that intersect them, some basic facilities for manipulating three-dimensional vectors will be extremely helpful. Chapter 2 of the textbook has a review of this material.

As discussed in lecture, operations for computing ray-surface intersections generally require computing both dot products and cross products of three-dimensional vectors. You will also need vector addition and subtraction, as well as multiplication of a vector by a scalar, for the ray equations. Finally, computing the length of a vector and normalizing it will be necessary.

The richer you can make this functionality, the better. Future assignments will also benefit. For the purposes of a ray tracer, a three-dimensional matrix class is not necessary, but thinking in terms of encapsulating vectors as a class is extremely useful. For example, when storing a sphere you need a center position (a vector) and a radius (a scalar). If you do not have a vector class this means you need to store the center as 3 separate floats for the x, y, and z coordinates.

You may also want to consider using three-dimensional vectors for encoding color, and refactoring your code to work with that. Alternatively, since certain operations – like dot products – do not make sense for colors, you may want to consider further developing a separate color class that has the appropriate operators you will use. Chapter 1 of FOCG has a brief, interesting discussion on whether or not this is a good design choice. While the choice is yours, note that there is redundancy: operations such as accumulating color often look like adding two vectors and multiplying a vector by a scalar.

Regardless of your choice, storing colors as floats in the range [0, 1] will be extremely useful.

Part 2: Cameras and Rays

Once you have settled on how you want to encapsulate the mathematics, your next task is to design the infrastructure for specifying and querying the view information for the scene. This includes the description of the viewpoint and camera as well as a description of the image plane.

Your program should eventually be able to read this information from an input scene specification, as described below. As discussed in lecture, there are multiple ways to specify a scene, but we will largely follow the convention of Chapter 4 of FOCG. In particular, your scene will be specified by:

  • Vectors for the position of the eye, position to lookat, and which direction is up
  • An angle describing the vertical field of view.
  • The width and height of the image plane, specified as the number of columns and rows

This information should be sufficient to specify the three-dimensional geometry of the image plane. To do so, you will need to use eye, lookat, and up to define a coordinate frame of the camera, (u, v, w). You should follow the convention of the book that w points in the opposite direction of the view direction. You can assume that lookat is located at the exact center of the image plane, and that u moves horizontally and v moves vertically in the plane.

After setting up how you will store this specification, your program should be able to compute a ray for each pixel in the image plane. Each ray travels from the eye position through its pixel into the scene. Since the camera position is fixed, you are not required to implement any functionality beyond generating these rays.

Part 3: Surfaces and Scenes

Your program must decompose the scene into a collection of surfaces, and be able to support ray tracing scenes with multiple surfaces. Surfaces must be handled using an abstract base class with a virtual function that computes where and when rays hit the surface.

All surface types that you will implement should extend from this abstract class. Each subclass will implement the intersection function and return all of the information necessary to encapsulate the hit. In particular, you will need to be able to compute the associated position on the surface as well as the surface normal at that position.

Your ray tracer must support two types of surfaces:

  • Spheres, specified by a three-dimensional vector for the position of its center and a value for its radius.
  • Planes, specified by a three-dimensional vector for a point on the plane and a three-dimensional vector for the plane’s normal.

Finally, surfaces will need to store all of the appropriate parameters to determine the color of the hit point relative to the surface, as described in the next section.

Your code should be designed so that it is easy for you to then have functionality that can take a description of a ray, a range of possible values for t on the ray, and compute the hit that is nearest to the origin of the ray and within the range.

In future assignments, we will generalize this surface model further, so it is important that you implement surface types in this way. You are free to implement the scene itself and how you encapsulate the set/list of all surfaces however you like.

Part 4: Lights and Color

Your program must implement point light sources and be able to light scenes that include multiple light sources. You do not need to modulate the intensity of the light based on distance, but instead you can use a constant intensity value (as suggested by the book). Point lights should be defined so that they can be of any color (thus, light “intensity” should be an RGB value).

At a given surface hit, you should compute the appropriate color based on the Blinn-Phong lighting model. You will compute Lambertian (diffuse) color as well as ambient and specular color. Each surface should store the necessary coefficients for ambient, diffuse, and specular color, as well as its Phong exponent. These four values determine a very simple material model for the surface.

Thus, when a hit is encountered, you will use which surface was hit to look up the material information, and then accumulate the amount of illumination for each light in the scene. When testing, I recommend that you first test the Lambertian component. Once you are convinced this is working, you can then debug the additional Blinn-Phong components. As you are accumulating color, be careful about overflow. The final colors you display will need to be clamped.

Finally, during the Blinn-Phong lighting calculation, your code should also be able to compute hard shadows. This means that if a light source is not visible from the hit point, its illumination should not be included in the final color. To test this, you should follow the scheme from the book: after computing the hit point, compute a ray from the hit point to each light source to test its visibility (being careful to offset the ray's origin by a small amount to avoid self-intersection). If this ray does not intersect any other surfaces in the scene, then you can safely include the contribution from this light source in the final accumulated color.

Part 5: Putting it all together

Your program should connect together all of the functionality above to build a simple ray tracing application. This task will require three high level components.

First, your code should be able to parse a simple input file with the description of the scene: camera parameters, lights, surfaces, and colors. The description of this file format is below.

Next, your ray tracer should utilize all of the above components to generate an image, stored in your image class from previous assignments. The color of every pixel in the image should be computed by tracing one ray into the scene and accumulating the color of every light that illuminates where it hits the surfaces in the scene.

Finally, you should use SDL to display this image. If the scene is complex or the image is large, this might take some time to compute – you are welcome to provide an indication of progress to the user, but this is not required. After the image is computed and displayed, the user should be able to save it to a PPM file, whose filename is specified at the command line.

Thus, your program should expect to be run as:

$ ./prog03 scene_file output.ppm

You can expect that we will test your program using the above execution, so please be sure your executable supports command line parameters that specify these two file names.

Scene files will be formatted as an ASCII file that you will parse. Every entry in the scene file begins with a single letter that describes what follows, and then a sequence of numbers based on the specification. In particular, you must support:

  • e, followed by 3 reals for the eye vector
  • l, followed by 3 reals for the lookat vector
  • u, followed by 3 reals for the up vector
  • f, followed by 1 real for the field of view angle in degrees
  • i, followed by 2 integers for the width and height of the image

Lights will be specified with uppercase letters. Specifically, we will use:

  • L, followed by 3 reals for the position of the light and then 3 reals that are the RGB value (in the range [0, 1]) for the light.

Finally, surfaces will also be specified with uppercase characters. Each surface will first have its geometry specified and then following that will be color information:

  • S, followed by 3 reals for the center vector and then 1 real for the radius of a sphere
  • P, followed by 3 reals for a point vector and 3 reals for a normal vector of a plane

Following this geometry information will be a sequence of 9 reals that are RGB values (in the range [0, 1]) for the ambient, diffuse, and specular color, as well as 1 real for the Phong exponent.

We have included a couple of sample scene files in your repository to test with. You are also required to submit a scene file of your own creation, called myscene.txt, as well as the ppm file your ray tracer created from it, called myscene.ppm.

Part 6: Written Questions

Please answer the following written questions. You are not required to typeset these questions in any particular format, but you may want to take the opportunity to include images (either photographed hand-drawings or produced using an image editing tool).

These questions are intended both to give you additional material for considering the conceptual aspects of the course and to provide sample questions in a format similar to those on the midterm and final exam. Most questions should be answerable in 100 words or less of text.

Please create and commit a separate directory in your repo called written and post all files (text answers and images) to this directory.

  1. Given an eye position e, lookat position l, and up vector up, compute the u, v, and w vectors associated with the orthonormal basis of this camera. See Sections 2.4.5-2.4.7 of the textbook. Show your work.

  2. What is the difference between orthographic (parallel) and perspective projection? Be precise, in particular about how view rays are defined.

  3. Explain how the specular term of the Blinn-Phong shading model works and what visual effect it is trying to approximate. In particular, explain the half vector h and the Phong exponent p.

  4. Explain the three possible cases for a ray-sphere intersection.

  5. Exercise 4.1 on pg. 88 of the textbook.

Submission

  • You should use git to submit all source code files and a working CMakeLists.txt. Do not submit any build files, binary files, or otherwise. The expectation is that your code will be graded by cloning your repo and then executing:
$ mkdir build
$ cd build
$ cmake ..
$ make

Your code must compile on the CS lab machines; in particular, we will test on cambridge. Code that does not compile in this way will be given an automatic zero. You will be given one “warning” for the first instance during the semester that it does not compile, but after that a zero will occur. If you are working in a different environment, please log into cambridge and test your code before submitting.

  • Make sure that this build process produces an executable named prog03. You will need to edit CMakeLists.txt accordingly.

  • Please provide a README.md file that provides a text description of how to run your program and any command line parameters that you used. Also document any idiosyncrasies, behaviors, or bugs of note that you want us to be aware of.

  • To summarize, my expectation is that your repo will contain:

    1. A README.md file
    2. Answers to the written questions in a separate directory named written
    3. A CMakeLists.txt file
    4. All .cpp and .h files that you authored and are necessary to compile your code.
    5. A sample scene file, myscene.txt, as well as the ppm file your code generated from it, myscene.ppm.
    6. (Optionally) any other scenes/test data that you would like to include.

Grading

Note that this assignment is graded out of 12 possible points instead of the usual 10, and thus the grading scale is out of 120 to reflect these extra points.

Deductions

Reason (Deduction)
  • Program does not compile: -120. (The first instance across all assignments will receive a warning with a chance to resubmit, but subsequent non-compiling assignments will receive the full penalty.)
  • Program crashes due to bugs: -10 per bug, at the grader's discretion to fix.


Point Breakdown of Features

A grade of 12 requires a score of 120/120.

Requirement (Value)
  • Consistent modular coding style (10)
  • External documentation (README.md) and a working CMakeLists.txt (5)
  • Class documentation and internal documentation (block and inline), wherever applicable / for all files (15)
  • Expected output/behavior based on the assignment specification (70), including:
      • Parsing the scene file (5)
      • Computing view rays correctly from the specification of the camera (15)
      • Computing ray-surface intersections, correctly utilizing an abstract base class for both spheres and planes (15)
      • Computing Blinn-Phong lighting, employing ambient, diffuse, and specular components (15)
      • Computing hard shadows by tracing rays from hit points to lights (7.5)
      • Displaying your ray traced scene using SDL (5)
      • Designing a scene of your choice that shows off your ray tracer's capabilities, submitted as myscene.txt and myscene.ppm (7.5)
  • Written Questions (20)
  • Total (120)


Extra Credit

Implementing features above and beyond the specification may result in extra credit; please be sure to document these in your README. Ray tracers are extremely modular, and a variety of features can often be combined in novel ways to create realistic scenes.

Extra credit features might include, but are not limited to: interactive camera controls; additional surface types; material classes such as metals and dielectrics; more complex light sources such as area lights; and reflection, refraction, or other shading models that involve multiple rays per pixel.