Graduate students will implement a distribution ray tracer, as discussed in Chapter 13 of the textbook as well as in the 1984 SIGGRAPH paper Distributed Ray Tracing by Cook, Porter, and Carpenter.

Objectives

This assignment is designed to teach you advanced concepts for ray tracing, using a methodology that corrects for many of the artifacts in basic ray tracing. Your distribution ray tracer will use recursion and sampling to:

  • Understand models for reflection, translucency, refraction, and shadows of shapes.
  • Apply advanced models for lights that allow for soft shadowing.
  • Modulate how cameras are used to implement antialiasing and depth of field effects.
  • Put all of these components together to produce realistic images of scenes using the framework of distribution ray tracing.

Software Design for Distribution Ray Tracing

As with a traditional ray tracer, your goal will be to develop a system for inverse modeling of the light transport within a scene. But, as you will discover, a number of the components for a distribution ray tracer need to be considered differently from the ground up for this task. Thus, your code will end up having a number of similar components to A03, but mixed in different ways.

You will also need to consider carefully how you generate random numbers and distributions. For this assignment, JavaScript's default random number method, Math.random(), should be sufficient. Nevertheless, be warned that certain browsers implement this a bit differently and you may end up with slightly different results depending on your browser.
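
For example, a minimal helper for uniform sampling in an interval, built on Math.random() (the name rand_range is my own, not a required interface); later sketches in this document reuse it:

// Uniform random number in [min, max). Math.random() returns a
// value in [0, 1).
function rand_range(min, max) {
  return min + (max - min) * Math.random();
}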

Most of the architecture designed around a standard ray tracer provides generic functionality to compute when a ray hits the scene. When a hit is encountered, the color for the pixel is determined using simple lighting models, like Blinn-Phong, and simple visibility checks to skip occluded lights, producing hard shadows. In a distribution ray tracer, the central computation is instead not what the ray hits, but rather what color the ray should return. This method, called ray_color(), returns the amount of accumulated color intensity that a ray cast into the scene has found. This computation is done recursively: when a ray hits a reflective or translucent object, the ray will bounce. Notably, this is why distribution ray tracers are a type of recursive ray tracer.

As a result, like hit() from A03, ray_color() must also track a range of possible \(t\) values, \([t_\min, t_\max]\), to avoid hitting the same spot twice, as well as a depth value to prevent infinite recursive bounces.
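
As a sketch, the recursive entry point might look like the following, where the signature and helper names (scene.hit(), background_color(), direct_illumination(), bounce_color()) are illustrative placeholders for your own design:

// Illustrative sketch of the recursive entry point.
function ray_color(ray, t_min, t_max, depth) {
  if (depth <= 0) {
    return [0, 0, 0];                  // out of bounces: no light
  }
  const hit = scene.hit(ray, t_min, t_max);
  if (hit === null) {
    return background_color(ray);      // ray escaped the scene
  }
  // Direct lighting: Blinn-Phong with sampled shadow rays (Part 4).
  const direct = direct_illumination(hit);
  // Recursive bounces (Part 5); a small epsilon for t_min avoids
  // re-hitting the surface we just left.
  const bounced = bounce_color(hit, 1e-4, Infinity, depth - 1);
  return [direct[0] + bounced[0], direct[1] + bounced[1],
          direct[2] + bounced[2]];
}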

Part 1: Distribution Ray Tracing Algorithm

The concept of recursive ray tracing suggests that a ray can bounce at each hit. Similarly, distribution ray tracing suggests that multiple rays (a distribution of them!) are bounced for each hit. This collection of rays models a distribution of possibilities. Distribution ray tracing is most easily understood from the summary of the algorithm at the end of Cook et al.'s paper, which highlights where such distributions are appropriate. In this assignment, you are required to implement all features from that paper except motion blur. Thus, your algorithm will look like:

  1. Choose a position of the eye (for depth of field) and choose a (sub-)pixel (for antialiasing). Cast a ray into the scene and determine what object is hit.
  2. Compute illumination from each light source by choosing a random position on the light (for soft shadows).
  3. Compute reflection by choosing a random direction near the ideal reflected direction (for glossy reflections) and recursively compute the color.
  4. For dielectric materials, choose a random direction near the ideal refracted direction (for glossy refraction) and recursively compute the color.

In the above, every time I mention the word choose, you need to compute a random ray. There are a variety of ways to sample such rays, and you will need to be careful to test each.

Because of the stochastic nature of the above, you will then have to modify the high level loop around which you color each pixel. In pseudocode, your loop will look something like this:

for each pixel p {
  color total_c = (0, 0, 0)         // accumulator starts at black
  for i in num_samples {            // necessary for distribution sampling
    Ray r = camera.get_ray(p)       // a sampled ray, per Parts 3-5
    c = ray_color(r, ...)           // additional parameters for t and depth
    total_c += c
  }
  total_c = total_c / num_samples   // necessary for distribution sampling
  p.set_color(total_c)
}

In the above, num_samples is the number of samples that you will take for each pixel. Fortunately, this structure prevents you from having to separately perform a sampling loop for each of the above features. You can assume that if you sample enough, averaging all the resulting colors will allow you to blend together the effects of each element. A concrete version of this loop is sketched below.
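
To make the pseudocode concrete, a JavaScript version might look like the following sketch, where EPSILON and MAX_DEPTH fill in the additional \(t\) and depth parameters (all of these names are illustrative, not required):

// Concrete rendering loop; camera.get_ray(), ray_color(), and
// set_pixel() stand in for your own interfaces.
const EPSILON = 1e-4;   // t_min, to avoid self-intersection
const MAX_DEPTH = 5;    // bound on recursive bounces
for (let j = 0; j < height; j++) {
  for (let i = 0; i < width; i++) {
    let total = [0, 0, 0];
    for (let s = 0; s < num_samples; s++) {
      const r = camera.get_ray(i, j);   // re-sampled every iteration
      const c = ray_color(r, EPSILON, Infinity, MAX_DEPTH);
      total = [total[0] + c[0], total[1] + c[1], total[2] + c[2]];
    }
    // Average the accumulated intensities over all samples.
    set_pixel(i, j, total.map(v => v / num_samples));
  }
}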

As a result, your interface for displaying the image should allow the user to adjust num_samples. Your interface should also include controls for enabling and disabling: (1) depth of field, (2) antialiasing, (3) shadows, and (4) glossy reflection.

Part 2: Shapes and Colors

Graduate students need only support Spheres for this assignment.

We will expand upon the color model used for Spheres. As in A04UG, we will support ambient, diffuse, and specular colors with a phong_exponent. Your ray_color() method should accumulate lighting as in the Blinn-Phong model, but since it will handle reflection/refraction, you will also support additional parameters. To account for reflection and refraction, surfaces will use a mirror reflection color (specified as an RGB tuple), a refractive index (notated as \(n_t\) in equations), as well as a glossiness value. How these values are used is described below. Any surface where these new values are undefined should be treated as a standard Blinn-Phong surface.

Part 3: Cameras for Antialiasing and Depth of Field

A standard ray tracing approach is to compute the ray from the eye through the center of each pixel. This produces staircase-like artifacts when sampling curved surfaces and sharp features, which are the result of aliasing. One method to fix this is to sample within the screen coordinate space bounded by the pixel. Such a strategy can be achieved by treating the integer pixel coordinates as floating point values and sampling within plus or minus half a pixel in both the horizontal and vertical directions. Thus, you will replace the expressions \(i+0.5\) and \(j+0.5\) with \(i+r_i\) and \(j+r_j\) where \(r_i,r_j \in [0,1]\).

While these floating point values no longer refer to the center of a pixel, they can still be mapped to a three-dimensional world space coordinate by the camera, and thus can serve as a target to define the ray direction. Averaging across all of these samples is similar to blurring the image, but the colors that you average together are the accumulated light intensities for rays, as opposed to just the pixel colors. Thus, it works much better than image-based smoothing.
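
As a sketch, the jittered sample might be computed as follows (jittered_pixel is my own name; your camera then maps the floating point coordinates to a world-space target):

// Jittered sub-pixel coordinates for antialiasing. Each call
// produces a slightly different target within the same pixel.
function jittered_pixel(i, j) {
  const ri = Math.random();    // r_i in [0, 1)
  const rj = Math.random();    // r_j in [0, 1)
  return [i + ri, j + rj];     // replaces (i + 0.5, j + 0.5)
}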

Similarly, perturbing the position of the eye within a small square (or disc) orthogonal to the view direction allows for creating depth of field effects. The idea is that you are sampling light from a lens with positive area as opposed to a pinhole camera. Because of this, it will also become important to make sure that you set an appropriate focal length. Luckily, you can use the lookat point to define the target focal length. Whereas in the standard ray tracer the ray direction could be defined independently of the eye, as in Chapter 4 of the textbook, you will now need to compute a new ray direction, as the eye has moved away from \((0,0,0)\) in the \((u,v,w)\) coordinate space.

You are welcome to let the user vary this effect with an interface. In my implementation (and the example images I provided), I perturbed the eye within a small square centered at the eye position, sampling the offset along each axis as a random number in \([-0.01d, 0.01d]\), where \(d\) is the original focal length.
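
A sketch of this perturbation, assuming camera basis vectors cam_u and cam_v, vector helpers add(), sub(), scale(), and normalize(), and the rand_range() helper from earlier (none of these names are required):

// Depth-of-field sketch: jitter the eye within a small square in
// the (u, v) plane and re-aim the ray at the in-focus target so
// that points on the focal plane stay sharp while others blur.
function dof_ray(eye, focal_target, d) {
  // d is the original focal length, e.g., |lookat - eye|.
  const du = rand_range(-0.01 * d, 0.01 * d);   // offset along cam_u
  const dv = rand_range(-0.01 * d, 0.01 * d);   // offset along cam_v
  const origin = add(eye, add(scale(cam_u, du), scale(cam_v, dv)));
  return new Ray(origin, normalize(sub(focal_target, origin)));
}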

Put together, depth of field varies the position of the eye and antialiasing effectively varies the position of the pixel, but both modify the camera to sample a distribution of possible view rays.

Part 4: Lights and (Soft) Shadows

Given a sampled ray from Part 3, our next step will be to incorporate distribution sampling within the ray_color() method. This method must support two types of lights: point lights (as described in A04UG) and area lights.

Unlike a point light, which has a fixed position defined by a single three-dimensional vector, area lights are defined by a parallelogram. To specify this, a user will set a position vector \(\mathbf{p}\) for one corner of the light, and two directional vectors \(\mathbf{a}\) and \(\mathbf{b}\). The parallelogram of light positions will then be defined as all points \(\mathbf{l}(\alpha, \beta) = \mathbf{p} + \alpha\mathbf{a} + \beta\mathbf{b}\) for \(\alpha, \beta \in [0,1]\).

Both lights will have a single color specified as an RGB tuple.

To create a soft shadow, one simply samples from the set of light positions defined in this area when computing illumination. Scenes that have point lights are equivalent to area lights where \(\mathbf{a}\) and \(\mathbf{b}\) are both \((0,0,0)\). So, you can generically compute directions to lights by sampling offsets \(\alpha\) and \(\beta\) for both.
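
In code, sampling a light position might look like this sketch (treating a point light as a degenerate area light):

// Sample a position l(alpha, beta) = p + alpha*a + beta*b on an
// area light. For a point light, a and b are (0,0,0), so this
// always returns p.
function sample_light_position(light) {
  const alpha = Math.random();   // in [0, 1)
  const beta = Math.random();    // in [0, 1)
  return [
    light.p[0] + alpha * light.a[0] + beta * light.b[0],
    light.p[1] + alpha * light.a[1] + beta * light.b[1],
    light.p[2] + alpha * light.a[2] + beta * light.b[2],
  ];
}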

Otherwise, the (non-recursive) step of our approach will compute shadows and illumination using a similar approach to the standard Blinn-Phong ray tracer when computing ambient and diffuse components.

Part 5: Glossy Reflection and Refraction

First, for basic reflection/refraction, one must cast an additional ray (recursively) and accumulate an additional color for each hit. Specifically, when a surface is hit, one will make an additional call to ray_color(), based on a sampling approach and the type of material of the hit surface.

Reflections should be taken into account for any surface that has a refractive index value greater than zero. Reflections are straightforward and should accumulate based on the mirror color.

We will approximate the refractive index of the ambient “air” medium as \(n = 1.0\), meaning that refractive surfaces should only need to specify \(n_t\). To make our lives a little easier, we will use the convention that if the index is undefined, the surface should neither reflect nor refract. Note that by the Schlick approximation, if \(n_t = 0\), then \(R_0 = (\frac{n_t-1}{n_t+1})^2 = 1.0\), which would imply no refraction and perfect reflection.

Because we are using distribution ray tracing, instead of computing both a refracted ray and a reflected ray, we will use the value of \(R_0\) to send either a reflected ray or a refracted ray. In particular, after computing \(R_0\), compute a random number \(r \in [0,1]\). If \(r < R_0\), then proceed with a standard reflection. If \(r \geq R_0\), then proceed with a standard refraction calculation (where you also check if you are entering or exiting the shape).
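
In code, this choice might look like the following sketch inside ray_color(), where reflect_ray() and refract_ray() stand in for your own routines:

// Probabilistically choose between reflection and refraction using
// the Schlick R0 term; n_t is the surface's refractive index.
const R0 = Math.pow((n_t - 1) / (n_t + 1), 2);
let bounce;
if (Math.random() < R0) {
  bounce = ray_color(reflect_ray(hit), EPSILON, Infinity, depth - 1);
} else {
  // refract_ray() must check whether the ray is entering or exiting
  // the shape to use the correct ratio of indices.
  bounce = ray_color(refract_ray(hit, n_t), EPSILON, Infinity, depth - 1);
}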

Finally, distribution ray tracers can also produce reflection and refraction that is glossy. To achieve glossy effects, you should choose a random ray that is near the ideal reflected/refracted ray. To do this, you will need to compute a coordinate system that is orthogonal to the ideal bounce direction, and sample within a small span around this direction. To set this span, I used a square of side length \(g\), where \(g\) is the glossiness parameter specified in the scene. I then computed two random numbers, \(u_g, v_g \in [-\frac{g}{2},\frac{g}{2}]\), and built a new ray in the direction of the bounce, offset in the \(\mathbf{u},\mathbf{v}\) directions of this coordinate space.
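
A sketch of this perturbation, assuming vector helpers cross(), normalize(), add(), and scale(), plus the rand_range() helper from earlier:

// Perturb the (normalized) ideal bounce direction within a g-by-g
// square in the plane orthogonal to it, yielding a glossy bounce.
function glossy_direction(ideal, g) {
  // Build an orthonormal basis (u, v) around the ideal direction,
  // starting from any vector t not parallel to it.
  const t = Math.abs(ideal[0]) > 0.9 ? [0, 1, 0] : [1, 0, 0];
  const u = normalize(cross(t, ideal));
  const v = cross(ideal, u);
  const ug = rand_range(-g / 2, g / 2);   // u_g
  const vg = rand_range(-g / 2, g / 2);   // v_g
  return normalize(add(ideal, add(scale(u, ug), scale(v, vg))));
}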

Finally, shadows behave a bit differently around glossy materials, in particular for materials that are translucent. The right way to handle translucent materials is, in fact, to bend the shadow ray from the hit position to the light source. This is a bit tricky to do, so instead you can simply skip computing shadows for any dielectric material with \(n_t > 0\). Essentially, this assumes that translucent materials do not cast shadows.

Part 6: Written Questions

Please answer the following written questions. You are not required to typeset these questions in any particular format, but you may want to take the opportunity to include images (either photographed hand-drawings or produced using an image editing tool).

These questions are intended both to provide you additional material to consider the conceptual aspects of the course and to provide sample questions in a format similar to the questions on the midterm and final exam. Most questions can be answered in 100 words or less.

Please create a separate directory in your repo called written and post all files (text answers and any images) to this directory. Recall that the written component is due BEFORE the programming component.

  1. Explain your software design choices in how you organized your code to prepare it for distribution ray tracing. In particular, in what ways did you change the loop to color each pixel from A03, and how did you account for multiple sampling for each effect?

  2. Explain the motivation for using distribution ray tracing to model glossy reflections as opposed to using ideal reflection.

  3. Given a vector \((1,1,0)\), construct an orthonormal basis. See Sections 2.4.5-2.4.7 of the textbook. State which vector \(\mathbf{t}\) you used. Show your work.

  4. Explain how the specular term of the Blinn-Phong shading model works and what visual effect it is trying to approximate. In particular, explain the vector \(\mathbf{h}\) and the Phong exponent \(p\).

  5. Exercise 10.2 on pg. 241 of the textbook.

Grading

Deductions

Reason                         Value
Program crashes due to bugs    -10 per bug; at grader's discretion to fix


Point Breakdown of Features

Requirement                                                        Value
Consistent modular coding style                                     10
External documentation (README.md)                                   5
Class documentation, internal documentation (block and inline),
wherever applicable / for all files                                 15
Expected output / behavior based on the assignment
specification, including:                                           70
  Updated your scene file parser for lights and shape materials     10
  Computing depth-of-field effects                                    5
  Computing antialiasing                                              5
  Computing soft shadows                                             10
  Implementing glossy reflection                                     10
  Implementing glossy refraction                                     10
  Displaying your ray traced scene                                   10
  Designing a scene of your choice that shows off your
  ray tracer's capabilities (submitted as myscene.js
  and myscene.ppm)                                                   10
Total                                                              100