- Part 1: Linear Algebra
- Part 2: Cameras and Rays
- Part 3: Surfaces and Scenes
- Part 4: Putting it all together
- Part 5: Written Questions
In this assignment, both undergraduates and graduates will develop the basic scaffolding code for implementing a ray tracer (to be built upon in the next assignment).
All students will implement the same basic features for this assignment, while the functionality for the ray tracer will be split for each group in the next assignment. Please use the link associated with your section:
- 433 students: Please click here to create your repository
- 533 students: Please click here to create your repository
This assignment is designed to teach you the foundational concepts for ray tracing, including:
- The basic linear algebra necessary for modeling light transport, camera coordinates, and simple surfaces.
- Data structures for maintaining a scene of surfaces and computing their intersections with light rays.
- Implementing all of these components to put together a basic ray tracer that produces a flat scene without the shading calculation.
Notes on Software Design for Ray Tracing
Your goal will be to develop a system for inverse modeling of the light transport within a scene.
Note, to keep the language consistent with the textbook, we will call any shape/object in the scene a surface.
We can break this high level task into a number of components:
- The linear algebra operations required, including a well designed module for handling three-dimensional vectors.
- A data structure for generating the appropriate rays from a specification of the camera and image plane.
- A mechanism for storing the surfaces in the scene, so that you can test whether or not view rays intersect them. Surfaces will also be linked to their material (color) properties. You will also need a way in your code to specify the scene itself and load its description.
- Finally, a module that connects all of the above into a process that produces the final image (relying on your image operations from previous assignments).
You are welcome to deviate from the above decomposition, except where I say you must implement something in a specific way. Nevertheless, you will still be required to implement all of the same functionality no matter how you choose to design your code.
Part 1: Linear Algebra
To handle the geometry of light rays and the surfaces that intersect them, some basic facilities for manipulating three-dimensional vectors will be extremely helpful. Chapter 2 of the textbook has a review of this material.
As discussed in lecture, operations for computing ray-surface intersection generally require computing both dot products and cross products of three-dimensional vectors. In addition, vector addition and subtraction, as well as multiplying a vector by a scalar, are necessary for the ray equations. Finally, computing the length of a vector and normalizing it will be necessary.
The richer you can make this functionality, the better. Future assignments will also benefit. For the purposes of a ray tracer, a three-dimensional matrix class is not necessary, but thinking in terms of encapsulating vectors as a class is extremely useful for writing readable code. For example, when storing a sphere you need a center position (a vector) and a radius (a scalar). If you do not have a vector class, this means you need to store the center as 3 separate floats for the x, y, and z coordinates.
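As one possible starting point, here is a minimal sketch of such a vector class covering the operations listed above (dot, cross, addition, subtraction, scalar multiplication, length, and normalization). The class and method names are illustrative suggestions, not a required interface:

```javascript
// A minimal 3D vector class sketch; names and method set are one possible choice.
class Vec3 {
  constructor(x, y, z) { this.x = x; this.y = y; this.z = z; }
  add(o)   { return new Vec3(this.x + o.x, this.y + o.y, this.z + o.z); }
  sub(o)   { return new Vec3(this.x - o.x, this.y - o.y, this.z - o.z); }
  scale(s) { return new Vec3(this.x * s, this.y * s, this.z * s); }
  dot(o)   { return this.x * o.x + this.y * o.y + this.z * o.z; }
  cross(o) {
    return new Vec3(
      this.y * o.z - this.z * o.y,
      this.z * o.x - this.x * o.z,
      this.x * o.y - this.y * o.x
    );
  }
  length()    { return Math.sqrt(this.dot(this)); }
  normalize() { return this.scale(1 / this.length()); }
}

// Example: a sphere center stored as a single vector instead of three floats.
const center = new Vec3(0, 1, -5);
console.log(center.length()); // ≈ 5.099
```

Returning new `Vec3` objects (rather than mutating in place) keeps expressions like `a.sub(b).cross(c).normalize()` readable, at a small performance cost that is acceptable for this assignment.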
For this assignment, all colors I provide will be specified with RGB floating point values in the range [0, 1] for each channel. We will not yet be doing mathematical operations on these colors, but you may already want to think about how this connects to your storage of pixel colors for images as well as to vectors.
Part 2: Cameras and Rays
Once you have settled on how you want to encapsulate the mathematics, your next task is to design the infrastructure for specifying and querying the view information for the scene. This includes the description of the viewpoint and camera as well as a description of the image plane.
Your program should eventually be able to read this information from an input scene specification, as described below. As discussed in lecture, there are multiple ways to specify a scene, but we will largely follow the convention of Chapter 4 of FOCG. In particular, your scene will be specified by:
- Vectors for the position of the eye, the position to lookat, and which direction is up.
- A scalar fov_angle describing the vertical field of view.
- The width and height of the image plane, specified as the number of pixels in each dimension.
This information should be sufficient to specify the three-dimensional geometry of the image plane. To do so, you will need to use the eye, lookat, and up vectors to define a coordinate frame for the camera, (u, v, w). You should follow the convention of the book that w points in the opposite direction of the view direction. You can assume that lookat is located at the exact center of the image plane, and that u moves horizontally and v moves vertically in the image plane.
After setting up how you will store this specification, your program should be able to compute a ray for each pixel in the image plane. This ray will travel from the eye position through each pixel into the scene. Otherwise, since the camera position is fixed, you are not required to implement any specific functionality beyond generating rays from pixels. (You may optionally, for extra credit, want to consider an interface for a camera that can move – but that is not required).
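The two steps above (building the camera frame, then generating one ray per pixel) can be sketched as follows. This is one way to do it, not a required design: vectors are plain `[x, y, z]` arrays, all helper and parameter names are illustrative, and the choice to place the image plane through the lookat point follows the assumption stated above:

```javascript
// Helper operations on [x, y, z] array vectors (names are illustrative).
const sub   = (a, b) => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
const add   = (a, b) => [a[0] + b[0], a[1] + b[1], a[2] + b[2]];
const scale = (a, s) => [a[0] * s, a[1] * s, a[2] * s];
const cross = (a, b) => [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]];
const len   = (a) => Math.hypot(a[0], a[1], a[2]);
const norm  = (a) => scale(a, 1 / len(a));

function makeRays(eye, lookat, up, fovDeg, width, height) {
  const w = norm(sub(eye, lookat));  // w points opposite the view direction
  const u = norm(cross(up, w));      // u moves horizontally in the image plane
  const v = cross(w, u);             // v moves vertically; already unit length
  const d = len(sub(lookat, eye));   // lookat sits at the image plane's center
  const halfH = d * Math.tan((fovDeg * Math.PI / 180) / 2);
  const halfW = halfH * (width / height);
  const rays = [];
  for (let j = 0; j < height; j++) {
    for (let i = 0; i < width; i++) {
      // Map pixel centers to [-halfW, halfW] x [-halfH, halfH] on the plane.
      const x = -halfW + 2 * halfW * (i + 0.5) / width;
      const y =  halfH - 2 * halfH * (j + 0.5) / height; // row j grows downward
      const p = add(add(lookat, scale(u, x)), scale(v, y));
      rays.push({ origin: eye, dir: norm(sub(p, eye)) });
    }
  }
  return rays; // row-major: rays[j * width + i]
}

// A 1x1 image: the single ray passes through the image plane's center,
// i.e. straight toward the lookat point.
const rays = makeRays([0, 0, 0], [0, 0, -1], [0, 1, 0], 90, 1, 1);
console.log(rays[0].dir); // [0, 0, -1]
```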
Part 3: Surfaces and Scenes
Your program must decompose the scene as a collection of surfaces, and be able to support ray tracing scenes with multiple surfaces. For this assignment, you only need to implement one type of surface: spheres. Spheres are specified by a three-dimensional vector for the position of its center and a scalar value for its radius.
For this assignment, surfaces will also only be specified as having one color, a so-called ambient color. Upon detecting that a ray hits a given surface, you should look up the ambient color of the hit surface and set the pixel to that color. No other shading calculations will be used.
Your code should be designed with functionality so that for a given surface, it is easy for you to take a description of a ray and a range of possible values for t along the ray, and compute the hit that is nearest to the origin of the ray and within the range.
In future assignments, we will generalize this surface model further, so it is important that you implement surface types in this way. You are free to implement the scene itself and how you encapsulate the set/list of all surfaces however you like. My recommendation is to create a Sphere class that includes a hit() function that returns a hit record to be used by the ray tracer.
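A sketch of that recommendation is below. All names are suggestions only; a ray is assumed to be `{ origin, dir }` with `[x, y, z]` array vectors, and `hit()` returns the nearest hit record with t in `[tMin, tMax]`, or `null` on a miss:

```javascript
// Array-vector helpers (illustrative, not a required interface).
const sub = (a, b) => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
const dot = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

class Sphere {
  constructor(center, radius, ambientColor) {
    this.center = center;
    this.radius = radius;
    this.ambientColor = ambientColor;
  }
  // Solve |origin + t*dir - center|^2 = radius^2, a quadratic in t.
  hit(ray, tMin, tMax) {
    const oc = sub(ray.origin, this.center);
    const a = dot(ray.dir, ray.dir);
    const b = 2 * dot(oc, ray.dir);
    const c = dot(oc, oc) - this.radius * this.radius;
    const disc = b * b - 4 * a * c;
    if (disc < 0) return null;           // the ray misses the sphere entirely
    const sq = Math.sqrt(disc);
    // Try the nearer root first, then the farther one (origin inside sphere).
    for (const t of [(-b - sq) / (2 * a), (-b + sq) / (2 * a)]) {
      if (t >= tMin && t <= tMax) return { t, surface: this }; // minimal hit record
    }
    return null;
  }
}

// A ray down the -z axis hits a unit sphere centered at z = -5 at t = 4.
const s = new Sphere([0, 0, -5], 1, [1, 0, 0]);
console.log(s.hit({ origin: [0, 0, 0], dir: [0, 0, -1] }, 0, Infinity).t); // 4
```

The hit record here carries only `t` and the surface; in the next assignment you will likely want to extend it (e.g., with the hit point and surface normal), which is exactly why returning a record rather than a bare number pays off.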
Part 4: Putting it all together
Your program should connect together all of the functionality above to build a simple ray tracing application. This task will require three high-level components.

First, your program should load a scene specification (described below) and construct the camera and surfaces from it.

Next, your ray tracer should utilize all of the above components to generate an image. The color of every pixel in the image should be computed by tracing one ray into the scene and setting the color of the pixel to the ambient color of the surface this ray hits.
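The core of that second component is a loop like the following sketch. It assumes rays are precomputed and that every surface exposes a `hit(ray, tMin, tMax)` returning `{ t, surface }` or `null`; the stub surfaces at the bottom exist only to make the sketch runnable, and all names are illustrative:

```javascript
const background = [0, 0, 0]; // assumed background color for rays that hit nothing

function traceScene(surfaces, rays, width, height) {
  const pixels = [];
  for (const ray of rays) {
    let nearest = null;
    for (const s of surfaces) {
      // Restrict tMax to the nearest hit so far, so the closest surface wins.
      const rec = s.hit(ray, 0, nearest ? nearest.t : Infinity);
      if (rec) nearest = rec;
    }
    pixels.push(nearest ? nearest.surface.ambientColor : background);
  }
  return pixels; // row-major, length = width * height
}

// Tiny stub surfaces for demonstration: hit() just reports a fixed t.
const near = { ambientColor: [1, 0, 0],
  hit: (r, t0, t1) => (3 >= t0 && 3 <= t1) ? { t: 3, surface: near } : null };
const far  = { ambientColor: [0, 0, 1],
  hit: (r, t0, t1) => (7 >= t0 && 7 <= t1) ? { t: 7, surface: far } : null };

const img = traceScene([far, near], [{}], 1, 1);
console.log(img[0]); // [1, 0, 0] — the nearer (red) surface wins
```

Shrinking `tMax` as hits are found is a common trick: it makes "nearest surface" fall out of the same t-range test you already implemented per surface.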
Finally, you should display your ray-traced image. If the scene is complex or the image is large, this might take some time to compute – you are welcome to provide an indication of progress to the user, but this is not required. After the image is computed and displayed, the user should be able to save it to a PPM file using the same mechanism as in the first two assignments.
You are also required to submit a scene file of your own creation, called myscene.js, as well as the PPM file your ray tracer created from it.
In your repository, we have provided a few sample scene files as JSON files. Each scene specifies the camera parameters described above: eye, lookat, up, fov_angle (in degrees), and the image width and height. I have used simple arrays for the vectors; you will need to use these to initialize whatever vector types you use within your code.
Besides the camera parameters, scenes include an array called surfaces that specifies a list of spheres. Spheres are specified by their center, radius, and ambient color.
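A sketch of parsing such a scene is below. The JSON shape follows the description above, but treat the exact field names (especially the sphere's color field) as assumptions until you inspect the actual sample files in your repository:

```javascript
// A hypothetical scene string standing in for a loaded sample file;
// field names here are assumptions based on the description above.
const sceneText = `{
  "eye": [0, 0, 0],
  "lookat": [0, 0, -5],
  "up": [0, 1, 0],
  "fov_angle": 60,
  "width": 256,
  "height": 256,
  "surfaces": [
    { "center": [0, 0, -5], "radius": 1, "color": [1, 0, 0] }
  ]
}`;

const scene = JSON.parse(sceneText);
// The plain arrays from the file can initialize whatever vector type you use.
console.log(scene.surfaces.length); // 1
```

In the browser you would obtain `sceneText` by fetching or embedding the scene file rather than hard-coding it as above.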
We have also included, for each sample scene, the image our ray tracer produces, so you can compare your output.
Part 5: Written Questions
Please answer the following written questions. You are not required to typeset these questions in any particular format, but you may want to take the opportunity to include images (either photographed hand-drawings or produced using an image editing tool).
These questions are intended both to give you additional material for considering the conceptual aspects of the course and to provide sample questions in a format similar to the midterm and final exam. Most questions should be answerable in 100 words or less of text.
Please create a separate directory in your repo called written and post all files (text answers and any images) to this directory. Recall that the written component is due BEFORE the programming component.
Briefly describe your implementation design choices for how you plan to represent 3D vectors. Considerations can include (1) ease of writing mathematical expressions, (2) performance, (3) utility for converting from other types and constructing.
Given an eye position, a lookat position, and an up vector, compute the u, v, and w vectors associated with the orthonormal basis of this camera. See Sections 2.4.5-2.4.7 of the textbook. Show your work.
What is the difference between orthographic (parallel) and perspective projection? Be precise, in particular with how view rays are defined.
Explain the three possible cases for a ray-sphere intersection.
Exercise 4.1 on pg. 88 of the textbook.
You should use git to submit all source code files. The expectation is that your code will be graded by cloning your repo and then executing it within a modern browser (Chrome, Firefox, etc.)
Please provide a README.md file with a text description of how to run your program and any parameters that you used. Also document any idiosyncrasies, behaviors, or bugs of note that you want us to be aware of.
To summarize, my expectation is that your repo will contain:
- Answers to the written questions in a separate directory named written.
- Any other .js files that you authored.
- Your sample scene file, myscene.js, as well as the PPM file your code generated from it.
- (Optionally) any other test images that you want.
| Deduction | Points |
| --- | --- |
| Program crashes due to bugs | -10 each bug, at grader's discretion to fix |

Point Breakdown of Features

| Feature | Points |
| --- | --- |
| Consistent modular coding style | 10 |
| External documentation (README.md) | 5 |
| Class documentation, internal documentation (block and inline), wherever applicable / for all files | 15 |
| Expected output / behavior based on the assignment specification, including | |
Implementing features above and beyond the specification may result in extra credit; please document these in your README.md.
As the next assignment will involve additional lighting and shading, you may want to check with the instructor before implementing an extra feature that would be redundant to work in the next assignment. Only features that are not part of future work will be eligible for extra credit, at the discretion of the instructor.