In this assignment, both undergraduates and graduates will develop the basic scaffolding code for implementing a ray tracer (to be built upon in the next assignment).

All students will implement the same basic features for this assignment, while the ray tracer's functionality will be split by group in the next assignment. Please use the link associated with your section.

Objectives

This assignment is designed to teach you the foundational concepts for ray tracing, including:

  • The basic linear algebra necessary for modeling light transport, camera coordinates, and simple surfaces.
  • Data structures for maintaining a scene of surfaces and computing their intersections with light rays.
  • Implementing all of these components to put together a basic ray tracer that produces a flat scene without shading calculations.

Notes on Software Design for Ray Tracing

Your goal will be to develop a system for inverse modeling of the light transport within a scene.

Note: to keep the terminology consistent with the textbook, we will call any shape/object in the scene a surface.

We can break this high-level task into a number of components:

  1. The linear algebra operations required, including a well-designed module for handling three-dimensional vectors.
  2. A data structure for generating the appropriate rays from a specification of the camera and image plane.
  3. A mechanism for storing the surfaces in the scene, so that you can test whether or not view rays intersect them. Surfaces will also be linked to their material (color) properties. You will also need a way in your code to specify the scene itself and load its description.
  4. Finally, a module that connects all of the above into an actual process that produces the final image (relying on your image operations from previous assignments).

You are welcome to deviate from the above decomposition, except where I say you must implement something in a specific way. Nevertheless, you will still be required to implement all of the same functionality no matter how you choose to design your code.

Part 1: Linear Algebra

To handle the geometry of light rays and the surfaces that intersect them, some basic facilities for manipulating three-dimensional vectors will be extremely helpful. Chapter 2 of the textbook has a review of this material.

As discussed in lecture, operations for computing ray-surface intersections generally require computing both dot products and cross products of three-dimensional vectors. In addition, vector addition and subtraction, as well as multiplying a vector by a scalar, are necessary for the ray equations. Finally, computing the length of a vector and normalizing it will be necessary.

The richer you can make this functionality, the better; future assignments will also benefit. For the purposes of a ray tracer, a three-dimensional matrix class is not necessary, but encapsulating vectors as a class is extremely useful for writing readable code. For example, when storing a sphere you need a center position (a vector) and a radius (a scalar). If you do not have a vector class, this means you need to store the center as three separate floats for the \(x\), \(y\), and \(z\) coordinates.
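
For concreteness, here is a minimal sketch of what such a vector class might look like; the class and method names are illustrative, not a required interface:

```javascript
// A minimal sketch of a 3D vector class. The names and the immutable,
// method-chaining style are one design option, not a requirement.
class Vector3 {
  constructor(x, y, z) {
    this.x = x; this.y = y; this.z = z;
  }
  add(v)      { return new Vector3(this.x + v.x, this.y + v.y, this.z + v.z); }
  subtract(v) { return new Vector3(this.x - v.x, this.y - v.y, this.z - v.z); }
  scale(s)    { return new Vector3(s * this.x, s * this.y, s * this.z); }
  dot(v)      { return this.x * v.x + this.y * v.y + this.z * v.z; }
  cross(v) {
    return new Vector3(
      this.y * v.z - this.z * v.y,
      this.z * v.x - this.x * v.z,
      this.x * v.y - this.y * v.x
    );
  }
  length()    { return Math.sqrt(this.dot(this)); }
  normalize() { return this.scale(1 / this.length()); }
}
```

With a class like this, a sphere can store a single Vector3 for its center alongside a scalar radius.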

For this assignment, all colors I provide will be specified as RGB floating-point values in the range \([0,1]\) for each channel. We will not yet be doing mathematical operations on these colors, but you may already want to think about how this connects to your storage of pixel colors for images, as well as to your vector types.
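
One option (not required) is to reuse the same vector type for RGB triples and only convert to byte values when writing out pixels, as in this sketch:

```javascript
// One possible approach: store colors as Vector3 RGB triples in [0,1]
// and convert to bytes only when writing pixels to an image.
function colorToBytes(c) {
  const clamp01 = (v) => Math.min(1, Math.max(0, v));
  return [
    Math.round(255 * clamp01(c.x)),  // red
    Math.round(255 * clamp01(c.y)),  // green
    Math.round(255 * clamp01(c.z)),  // blue
  ];
}
```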

Part 2: Cameras and Rays

Once you have settled on how you want to encapsulate the mathematics, your next task is to design the infrastructure for specifying and querying the view information for the scene. This includes the description of the viewpoint and camera as well as a description of the image plane.

Your program should eventually be able to read this information from an input scene specification, as described below. As discussed in lecture, there are multiple ways to specify a scene, but we will largely follow the convention of Chapter 4 of FOCG. In particular, your scene will be specified by:

  • Vectors for the position of the eye, the lookat position, and the up direction.
  • A fov_angle describing the vertical field of view (in degrees).
  • The width and height of the image plane, specified as the number of columns and rows.

This information should be sufficient to specify the three-dimensional geometry of the image plane. To do so, you will need to use eye, lookat, and up to define a coordinate frame of the camera, \((u,v,w)\). You should follow the convention of the book that \(w\) points in the opposite direction of the view direction. You can assume that lookat is located at the exact center of the image plane, and that \(u\) moves horizontally and \(v\) moves vertically in the image plane.
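
As a sketch, this frame can be computed with the vector operations from Part 1 (the function name and return shape here are assumptions, not requirements):

```javascript
// Build the camera frame (u, v, w) from eye, lookat, and up, following
// the book's convention that w points opposite the view direction.
function cameraBasis(eye, lookat, up) {
  const w = eye.subtract(lookat).normalize(); // opposite the view direction
  const u = up.cross(w).normalize();          // horizontal axis of image plane
  const v = w.cross(u);                       // vertical axis; already unit length
  return { u: u, v: v, w: w };
}
```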

After setting up how you will store this specification, your program should be able to compute a ray for each pixel in the image plane. This ray will travel from the eye position through the pixel into the scene. Since the camera position is fixed, you are not required to implement any specific functionality beyond generating rays from pixels. (For extra credit, you may want to consider an interface for a camera that can move, but that is not required.)
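
One way to structure ray generation is sketched below; the camera fields and the placement of the image plane through lookat follow the description above, while the pixel-indexing conventions are assumptions you may change:

```javascript
// Sketch: generate the perspective view ray through pixel (i, j),
// where i indexes columns and j indexes rows from the top.
function pixelRay(cam, i, j) {
  const d = cam.eye.subtract(cam.lookat).length();  // eye-to-plane distance
  const planeH = 2 * d * Math.tan((cam.fov_angle * Math.PI / 180) / 2);
  const planeW = planeH * (cam.width / cam.height); // keep pixels square
  // Offsets of the pixel center from the plane's center (at lookat).
  const uOff = planeW * ((i + 0.5) / cam.width - 0.5);
  const vOff = planeH * (0.5 - (j + 0.5) / cam.height);
  const dir = cam.u.scale(uOff)
    .add(cam.v.scale(vOff))
    .subtract(cam.w.scale(d))  // -w is the view direction
    .normalize();
  return { origin: cam.eye, direction: dir };
}
```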

Part 3: Surfaces and Scenes

Your program must decompose the scene as a collection of surfaces and be able to support ray tracing scenes with multiple surfaces. For this assignment, you only need to implement one type of surface: spheres. A sphere is specified by a three-dimensional vector for the position of its center and a scalar value for its radius.

For this assignment, surfaces will also only be specified as having one color, a so-called ambient color. Upon detecting that a ray hits a given surface, you should look up the ambient color of the hit surface and set the pixel to that color. No other shading calculations will be used.

Your code should be designed so that, for a given surface, it is easy to take a description of a ray and a range of possible parameter values \(t \in [t_{\min}, t_{\max}]\) along the ray, and compute the hit that is nearest to the origin of the ray and within that range.

In future assignments, we will generalize this surface model further, so it is important that you implement surface types in this way. You are free to implement the scene itself, and how you encapsulate the set/list of all surfaces, however you like. My recommendation is to create a Sphere class that includes a hit() function that returns a hit record to be used by the ray tracer.
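
A minimal sketch of that recommendation follows; the hit-record fields shown are one possible design, not a requirement:

```javascript
// Sketch of a Sphere with a hit() function returning the nearest
// intersection in [tMin, tMax], or null if the ray misses.
class Sphere {
  constructor(center, radius, color) {
    this.center = center;  // Vector3
    this.radius = radius;  // scalar
    this.color = color;    // ambient color
  }
  hit(ray, tMin, tMax) {
    // Solve |o + t d - c|^2 = r^2, a quadratic in t.
    const oc = ray.origin.subtract(this.center);
    const a = ray.direction.dot(ray.direction);
    const b = 2 * oc.dot(ray.direction);
    const c = oc.dot(oc) - this.radius * this.radius;
    const disc = b * b - 4 * a * c;
    if (disc < 0) return null;  // no real roots: the ray misses
    const sqrtDisc = Math.sqrt(disc);
    // Test the nearer root first, then the farther one.
    for (const t of [(-b - sqrtDisc) / (2 * a), (-b + sqrtDisc) / (2 * a)]) {
      if (t >= tMin && t <= tMax) return { t: t, surface: this };
    }
    return null;  // both roots fall outside the allowed range
  }
}
```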

Part 4: Putting it all together

Your program should connect all of the functionality above to build a simple ray tracing application. This task requires three high-level components.

First, your code should be able to parse a simple JSON input file with the description of the scene: camera parameters, surfaces, and colors. The description of this file format is below; parsing it should be a straightforward task using Javascript's built-in JSON support.
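
For example, in the browser one way (among several) to load and parse a scene is sketched below; the URL is a placeholder, and you should adapt this to however your scaffolding reads files:

```javascript
// Sketch: fetch a scene file and parse it into a plain object.
async function loadScene(url) {
  const response = await fetch(url);
  const text = await response.text();
  return JSON.parse(text);  // object with camera parameters and surfaces
}
```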

Next, your ray tracer should utilize all of the above components to generate an image. The color of every pixel in the image should be computed by tracing one ray into the scene and setting the color of the pixel to the ambient color of the surface this ray hits.
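
A sketch of that loop, reusing the names from the earlier sketches (all of which are assumptions rather than required interfaces), might look like:

```javascript
// Sketch of the core loop: one ray per pixel, colored by the nearest hit.
function render(cam, surfaces, background) {
  const pixels = [];
  for (let j = 0; j < cam.height; j++) {
    for (let i = 0; i < cam.width; i++) {
      const ray = pixelRay(cam, i, j);
      let nearest = null;
      let tMax = Infinity;
      for (const s of surfaces) {
        const rec = s.hit(ray, 0, tMax);
        if (rec !== null) {
          nearest = rec;   // closest hit found so far
          tMax = rec.t;    // shrink the search range for later surfaces
        }
      }
      pixels.push(nearest ? nearest.surface.color : background);
    }
  }
  return pixels;  // row-major colors, ready to display and save as PPM
}
```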

Finally, your program should display your ray traced image. If the scene is complex or the image is large, this might take some time to compute – you are welcome to provide an indication of progress to the user, but this is not required. After the image is computed and displayed, the user should be able to save it to a PPM file using the same mechanism as in the first two assignments.

You are also required to submit a scene file of your own creation, called myscene.js, as well as the PPM file your ray tracer created from it, called myscene.ppm.

Scene Specification

Scene files will be formatted as a JSON file that you will parse. JSON, or Javascript Object Notation, is a simple ASCII format that mirrors the notation for how objects are written in Javascript. Please check out Working with JSON for more info about it.

In your repository, we have provided a few sample scene files as JSON files. By calling JSON.parse(), you should be able to convert such a file directly into an object that you can access from Javascript. If you open a scene file in a text editor, you should be able to read off entries for the camera specification, including the eye, lookat, up, fov_angle (in degrees), width, and height. I have used simple arrays for the vectors; you will need to use these to initialize any vector types you use within your code.

Besides the camera parameters, scenes include an array called surfaces that specifies a list of spheres. Spheres are specified by their center, radius, and ambient color.
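
Putting these pieces together, a scene file might look something like the following; the camera values here are arbitrary, and the exact key for the ambient color should be taken from the provided samples (the "color" key below is an assumption):

```json
{
  "eye": [0, 0, 8],
  "lookat": [0, 0, 4],
  "up": [0, 1, 0],
  "fov_angle": 45,
  "width": 512,
  "height": 512,
  "surfaces": [
    { "center": [0, 0, 0],  "radius": 1.0, "color": [1.0, 0.2, 0.2] },
    { "center": [2, 1, -1], "radius": 0.5, "color": [0.2, 0.2, 1.0] }
  ]
}
```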

We have included a couple of sample scene files in your repository to test with, as well as the images our ray tracer produces for these scenes.

Part 5: Written Questions

Please answer the following written questions. You are not required to typeset your answers in any particular format, but you may want to take the opportunity to include images (either photographed hand drawings or images produced using an image editing tool).

These questions are intended both to give you additional material for considering the conceptual aspects of the course and to provide sample questions in a format similar to those on the midterm and final exams. Most questions should be answerable in 100 words or less.

Please create a separate directory in your repo called written and post all files (text answers and images) to this directory. Recall that the written component is due BEFORE the programming component.

  1. Briefly describe your implementation design choices for how you plan to represent 3D vectors. Considerations can include (1) ease of writing mathematical expressions, (2) performance, and (3) utility for constructing vectors and converting from other types.

  2. Given an eye position \((0,0,8)\), lookat position \((0,0,4)\), and up vector \((0,1,0)\), compute the \(\mathbf{u}\), \(\mathbf{v}\), and \(\mathbf{w}\) vectors associated with the orthonormal basis of this camera. See Sections 2.4.5-2.4.7 of the textbook. Show your work.

  3. What is the difference between orthographic (parallel) and perspective projection? Be precise, in particular with how view rays are defined.

  4. Explain the three possible cases for a ray-sphere intersection.

  5. Exercise 4.1 on pg. 88 of the textbook.

Submission

You should use git to submit all source code files. The expectation is that your code will be graded by cloning your repo and then executing it within a modern browser (Chrome, Firefox, etc.).

Please provide a README.md file that provides a text description of how to run your program and any parameters that you used. Also document any idiosyncrasies, behaviors, or bugs of note that you want us to be aware of.

To summarize, my expectation is that your repo will contain:

  1. A README.md file
  2. Answers to the written questions in a separate directory named written
  3. An index.html file
  4. An a03.js file
  5. Any other .js files that you authored.
  6. Your sample scene file, myscene.js, as well as the PPM file your code generated from it, myscene.ppm.
  7. (Optionally) any other test images that you want to include.

Grading

Deductions

| Reason | Value |
| --- | --- |
| Program crashes due to bugs | -10 each bug, at grader's discretion to fix |


Point Breakdown of Features

| Requirement | Value |
| --- | --- |
| Consistent modular coding style | 10 |
| External documentation (README.md) | 5 |
| Class documentation, internal documentation (block and inline), wherever applicable / for all files | 15 |
| Expected output / behavior based on the assignment specification (70 total), including: | |
| • Parsing the scene file | 10 |
| • Computing view rays correctly from the specification of the camera | 20 |
| • Computing ray-surface intersections for spheres | 20 |
| • Displaying your ray traced scene | 10 |
| • Designing a scene of your choice that shows off your ray tracer's capabilities, submitted as myscene.js and myscene.ppm | 10 |
| Total | 100 |


Extra Credit

Implementing features above and beyond the specification may result in extra credit; please document these in your README.md.

As the next assignment will involve additional lighting and shading, you may want to check with the instructor before implementing an extra feature that would be redundant to work in the next assignment. Only features that are not part of future work will be eligible for extra credit, at the discretion of the instructor.