Published 2025-11-13
Based on the insights from part 1, we have the means to define illuminant and reflectance spectra for our scene data. Then the color of a pixel arises from an integral over a product of spectra. The main goal of this blog post is to find efficient ways to evaluate such integrals using Monte Carlo integration and importance sampling. Additionally, we discuss how to define a BRDF based on known reflectance spectra, how to implement that and look at results. I have extended my educational path tracer to support both spectral and RGB rendering. You can take a look at this code to see specifics of how all of this is implemented (see the links at the bottom).
A density for wavelength sampling
As a reminder, we are interested in integrals of the form
[\begin{pmatrix}X\\ Y\\ Z\end{pmatrix} =\int_{360~\mathrm{nm}}^{830~\mathrm{nm}} \begin{pmatrix}\bar{x}(\lambda)\\ \bar{y}(\lambda)\\ \bar{z}(\lambda)\end{pmatrix} i(\lambda) \prod_{j=1}^{n-1} a_j(\lambda) \,\mathrm{d}\lambda\text{,}]
where (i(\lambda)) is an illuminant spectrum, (a_1(\lambda),\ldots,a_{n-1}(\lambda)) are surface reflectance spectra at each path vertex and (\bar{x}(\lambda), \bar{y}(\lambda), \bar{z}(\lambda)) are the CIE XYZ color matching functions. Out of that, we get the color for that path in the XYZ color space, which we can then convert to sRGB or another color space for display. We can also define RGB color matching functions to combine two steps into one:
[ \begin{pmatrix}\bar{r}(\lambda) \\ \bar{g}(\lambda) \\ \bar{b}(\lambda)\end{pmatrix} := \begin{pmatrix} 3.2406255 & -1.5372080 & -0.4986286 \\ -0.9689307 & 1.8757561 & 0.0415175 \\ 0.0557101 & -0.2040211 & 1.0569959 \end{pmatrix} \begin{pmatrix} \bar{x}(\lambda) \\ \bar{y}(\lambda) \\ \bar{z}(\lambda) \end{pmatrix} ]
Then
[\begin{pmatrix}R_\mathrm{linear}\\ G_\mathrm{linear}\\ B_\mathrm{linear}\end{pmatrix} =\int_{360~\mathrm{nm}}^{830~\mathrm{nm}} \begin{pmatrix}\bar{r}(\lambda)\\ \bar{g}(\lambda)\\ \bar{b}(\lambda)\end{pmatrix} i(\lambda) \prod_{j=1}^{n-1} a_j(\lambda) \,\mathrm{d}\lambda\text{.}]
How do we compute these integrals? A somewhat common approach in rendering (that fell out of fashion in recent years) is to replace RGB triples by higher-dimensional vectors. For example, we may sample each spectrum using a fixed set of 16 wavelengths. Where an RGB renderer would use component-wise multiplication of RGB triples, we now perform component-wise multiplication for these vectors of spectral samples. This has a few disadvantages though: First of all, 16 wavelengths are not enough for truly accurate color reproduction. At the same time, 16 floats are quite a lot of data to compute and carry around. It would be nice to avoid such approximations while also reducing the overall cost.
Therefore, we will rely on Monte Carlo integration: To evaluate the integral, we sample one or more random wavelengths (I usually use (m=4) wavelengths). My path tracing lectures explain the basics of Monte Carlo integration. If we have sampled (m) wavelengths (\lambda_0, \ldots, \lambda_{m-1}) using the probability density function (p(\lambda)), an unbiased Monte Carlo estimate for the integral is
[\begin{pmatrix}R_\mathrm{linear}\\ G_\mathrm{linear}\\ B_\mathrm{linear}\end{pmatrix} \approx \frac{1}{m} \sum_{k=0}^{m-1}\frac{\begin{pmatrix}\bar{r}(\lambda_k)\\ \bar{g}(\lambda_k)\\ \bar{b}(\lambda_k)\end{pmatrix} i(\lambda_k) \prod_{j=1}^{n-1} a_j(\lambda_k)}{p(\lambda_k)}\text{.}]
Now the big question is how we should choose the density (p(\lambda)). We assume here that we know what illuminant (i(\lambda)) we are dealing with. In my spectral path tracer, that is indeed the case because I only use a single illuminant spectrum per scene. We discuss the general case below but it is a bit of an open problem. When the illuminant spectrum (i(\lambda)) is known, it should be a factor in the density, because it is also a factor in the integrand. And as we saw in part 1, illuminant spectra can be quite spiky, so we need the importance sampling to ensure that we sample their peaks often enough. Additionally, we should account for the color matching functions, because they are always known. I am not sure what the best way is to do so, but for my renderer I settled on the 1-norm of the RGB color matching functions. The resulting density is
[p(\lambda):=\frac{(|\bar{r}(\lambda)|+|\bar{g}(\lambda)|+|\bar{b}(\lambda)|)\,i(\lambda)}{\int_{360~\mathrm{nm}}^{830~\mathrm{nm}} (|\bar{r}(\lambda^\prime)|+|\bar{g}(\lambda^\prime)|+|\bar{b}(\lambda^\prime)|)\,i(\lambda^\prime) \,\mathrm{d}\lambda^\prime}\text{.}]
Note how we explicitly normalize this density so that it integrates to (1).
This importance sampling density completely neglects the reflectance spectra, but that is acceptable for two reasons: First of all, we have no way to know upfront which combination of reflectance spectra a light transport path will encounter, so implementing importance sampling based on the combined reflectance spectra would be difficult. Secondly, we know that reflectance spectra are relatively smooth functions bounded between zero and one, and in practice they tend to stay away from these extremes. Thus, their dynamic range is relatively limited and neglecting them in the importance sampling does not cause too much variance.
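To make the estimator concrete, here is a small Python sketch that compares the importance-sampled Monte Carlo estimate against a quadrature reference. Note that the Gaussian stand-ins for the color matching functions, the illuminant and the reflectance below are made up for illustration; a real implementation would use tabulated CIE data.

```python
import numpy as np

# Tabulate everything on a regular wavelength grid from 360 nm to 830 nm.
lambdas = np.linspace(360.0, 830.0, 941)
dl = lambdas[1] - lambdas[0]

def gauss(l, mu, sigma):
    return np.exp(-0.5 * ((l - mu) / sigma) ** 2)

# Made-up smooth stand-ins for the RGB color matching functions, the
# illuminant spectrum and one reflectance spectrum (NOT real CIE data).
r_bar = gauss(lambdas, 600.0, 40.0)
g_bar = gauss(lambdas, 550.0, 40.0)
b_bar = gauss(lambdas, 450.0, 40.0)
illuminant = 1.0 + 0.5 * np.sin(lambdas / 30.0)
reflectance = 0.2 + 0.6 * gauss(lambdas, 650.0, 80.0)

# Density proportional to the 1-norm of the CMFs times the illuminant
weight = (np.abs(r_bar) + np.abs(g_bar) + np.abs(b_bar)) * illuminant
p = weight / (np.sum(weight) * dl)

# Reference value for the RGB integral by quadrature
cmf = np.stack([r_bar, g_bar, b_bar])
reference = np.sum(cmf * illuminant * reflectance, axis=1) * dl

# Monte Carlo estimate: sample wavelengths proportional to p, then
# average the integrand divided by the density
rng = np.random.default_rng(42)
m = 200000
idx = rng.choice(len(lambdas), size=m, p=p * dl)
estimate = np.mean(cmf[:, idx] * illuminant[idx] * reflectance[idx] / p[idx], axis=1)
```

With this density, the only variance comes from the neglected reflectance and from the per-channel ratio of each color matching function to the 1-norm, so the estimate converges quickly to the quadrature reference.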
Implementing wavelength sampling
We know what illuminant spectra we want to use before we start rendering and have to convert them to a suitable data format anyway. While we are at it, we can perform preprocessing that will facilitate efficient importance sampling. If we want to sample proportional to (p(\lambda)), we can use inverse CDF sampling: First, we compute the cumulative distribution function (CDF)
[F(\lambda):=\int_{360~\mathrm{nm}}^{\lambda} p(\lambda^\prime) \,\mathrm{d}\lambda^\prime \text{.}]
Then we feed a uniform random number (u_k\in[0,1)) into the inverse CDF to obtain a wavelength (\lambda_k := F^{-1}(u_k)). This wavelength (\lambda_k) is distributed according to the density (p(\lambda)).
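Assuming the density is tabulated on a regular wavelength grid, inverse CDF sampling can be sketched in Python like this (the density here is an arbitrary example, not one of the illuminants from the post):

```python
import numpy as np

# Tabulated density on a regular wavelength grid (made-up example density)
lambdas = np.linspace(360.0, 830.0, 941)
dl = lambdas[1] - lambdas[0]
density = np.exp(-0.5 * ((lambdas - 550.0) / 70.0) ** 2)
density /= np.sum(density) * dl  # normalize so the density integrates to 1

# Tabulate the CDF F(lambda) on the same grid
cdf = np.cumsum(density) * dl
cdf /= cdf[-1]  # guard against round-off so the last entry is exactly 1

def sample_wavelength(u):
    """Map a uniform random number u in [0, 1) to a wavelength F^{-1}(u)
    by inverting the tabulated CDF with linear interpolation."""
    return np.interp(u, cdf, lambdas)

# The resulting samples are distributed according to the density
rng = np.random.default_rng(7)
samples = sample_wavelength(rng.random(200000))
expected_mean = np.sum(lambdas * density) * dl
```

Since the CDF is monotonic, the interpolated inversion is exact up to the grid resolution.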
Once we have a wavelength, we want to compute its contribution to the Monte Carlo estimate, namely
[\frac{\begin{pmatrix}\bar{r}(\lambda_k)\\ \bar{g}(\lambda_k)\\ \bar{b}(\lambda_k)\end{pmatrix} i(\lambda_k)}{p(\lambda_k)} \prod_{j=1}^{n-1} a_j(\lambda_k)\text{.}]
As explained in part 1, we have to convert the wavelength (\lambda_k) to a phase (\varphi_k) before we can evaluate reflectance spectra (a_j(\lambda_k)). Thus, it makes sense to store a lookup table that maps one random number (u_k\in[0,1)) to the values of
[\frac{\begin{pmatrix}\bar{r}(\lambda_k)\\ \bar{g}(\lambda_k)\\ \bar{b}(\lambda_k)\end{pmatrix} i(\lambda_k)}{p(\lambda_k)}\quad\text{and}\quad\varphi_k]
for (\lambda_k = F^{-1}(u_k)). These four values nicely fit into a 1D RGBA texture. In my implementation, I use a default resolution of 1024 and 16-bit floats ((8~\mathrm{kiB}) per illuminant spectrum).
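The preprocessing for this lookup table might be sketched as follows. The color matching function stand-ins, the illuminant and the linear wavelength-to-phase warp are placeholders for illustration; the real implementation uses CIE data and the phase mapping from part 1.

```python
import numpy as np

lambdas = np.linspace(360.0, 830.0, 941)
dl = lambdas[1] - lambdas[0]

def gauss(l, mu, sigma):
    return np.exp(-0.5 * ((l - mu) / sigma) ** 2)

# Placeholder CMFs and illuminant (not real CIE data)
r_bar = gauss(lambdas, 600.0, 40.0)
g_bar = gauss(lambdas, 550.0, 40.0)
b_bar = gauss(lambdas, 450.0, 40.0)
illuminant = 1.0 + 0.5 * np.sin(lambdas / 30.0)

# Sampling density and its CDF, as defined above
p = (r_bar + g_bar + b_bar) * illuminant
p /= np.sum(p) * dl
cdf = np.cumsum(p) * dl
cdf /= cdf[-1]

# For each texel, store (r, g, b) * i / p and the phase at lambda = F^{-1}(u)
RESOLUTION = 1024
u_grid = (np.arange(RESOLUTION) + 0.5) / RESOLUTION
lam = np.interp(u_grid, cdf, lambdas)
rgb_over_p = np.stack(
    [np.interp(lam, lambdas, c * illuminant / p) for c in (r_bar, g_bar, b_bar)],
    axis=1)
phase = np.pi * (2.0 * (lam - 360.0) / (830.0 - 360.0) - 1.0)  # placeholder warp
table = np.concatenate([rgb_over_p, phase[:, None]], axis=1).astype(np.float16)
# 1024 RGBA texels at 16 bits per channel take exactly 8 kiB
```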
To take (m) samples, we could use (m) independent random numbers (u_0,\ldots,u_{m-1}). However, it is beneficial to use a form of stratified sampling here that is known as uniform jittered sampling. We use our random number generator to produce a single random number (u\in[0,1)). From that we derive the random numbers (u_k:=\frac{u+k}{m}) for (k\in\{0,\ldots,m-1\}). This scheme is inspired by Hero wavelength sampling [Wilkie14] and helps to achieve a lower variance (i.e. less noise) at the same sample count.
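In code, this jittering is a one-liner (the function name is mine):

```python
def jittered_samples(u, m):
    """Derive m stratified random numbers in [0, 1) from a single uniform u,
    placing one sample in each of the m equally sized strata."""
    return [(u + k) / m for k in range(m)]
```

For example, `jittered_samples(0.3, 4)` yields samples near 0.075, 0.325, 0.575 and 0.825, one in each quarter of the unit interval.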
We now have a fairly complete description of our spectral rendering algorithm: When we start constructing a path, we produce one random number (u) and sample our precomputed 1D lookup table for importance sampling (m) times (usually (m=4)). The radiance estimate is still stored as RGB but the path throughput weight (explained in my path tracing workshop) is now a vector with (m) entries, which is initialized to all ones. At each path vertex, we multiply the path throughput weight by BRDF times geometry term divided by density (as in a usual path tracer). The only difference is that the BRDF incorporates the reflectance spectrum (a_j(\lambda_k)) now. When we connect to a light source with spectrum (i(\lambda)), we make a contribution to the radiance and that incorporates the factor
[\frac{\begin{pmatrix}\bar{r}(\lambda_k)\\ \bar{g}(\lambda_k)\\ \bar{b}(\lambda_k)\end{pmatrix} i(\lambda_k)}{p(\lambda_k)}\text{,}]
which we already obtained from our 1D lookup table. I provide run time measurements later, but at this point it should already be clear that this procedure is not drastically more expensive than RGB rendering.
Spectral BRDFs
One aspect that I glossed over above is how exactly the BRDF incorporates the reflectance spectrum (a_j(\lambda)). For a Lambertian diffuse BRDF, it is simply (\frac{a_j(\lambda)}{\pi}) and there is nothing more to say about that. But of course, we care about more sophisticated reflectance models. I will focus on the one used in my spectral renderer, namely the Frostbite BRDF, but the underlying idea is quite general.
For the Frostbite BRDF, we do not control the albedo directly. Instead, we specify a base color texture in sRGB. This is the texture to which I apply the Fourier sRGB conversion. Thus, the reconstructed reflectance spectra pertain to the base color. The Frostbite BRDF then constructs all other colors from this base color in a fairly simple way: For dielectrics, the specular component will be pure white, for metals it will match the base color. The diffuse albedo is a scaled version of the base color. Either way, all colors are a linear combination of the base color and pure white.
Thus, we change the representation of colors produced by BRDF models as follows: Instead of mixing colors together directly, our functions produce a 2D vector consisting of weights for the base color and pure white. Changing the implementation of the BRDF in this way is relatively straightforward. Once these weights are available, we can easily compute the spectral reflectance (using the constant (1) for the pure white spectrum). Of course, this could easily be generalized to linear combinations of a greater number of different reflectance spectra. We could also have used the true spectral Fresnel equations here. These things become easy once you have a spectral renderer, but I did not do so since one goal of this blog post series is to provide an apples-to-apples comparison of RGB and spectral rendering.
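As a hedged sketch (the function names and the plain metalness blend are my own simplification, not the exact Frostbite math), this representation might look as follows:

```python
def specular_color_weights(metalness):
    """Weights (w_base, w_white) for the specular color: dielectrics
    (metalness 0) reflect pure white, metals (metalness 1) the base color."""
    return (metalness, 1.0 - metalness)

def diffuse_color_weights(metalness, scale=1.0):
    """Weights for the diffuse albedo: a scaled base color, none for metals."""
    return (scale * (1.0 - metalness), 0.0)

def spectral_reflectance(weights, base_reflectance):
    """Combine the weights with the base color reflectance a_j(lambda) at one
    wavelength, using the constant 1 as the spectrum of pure white."""
    w_base, w_white = weights
    return w_base * base_reflectance + w_white * 1.0
```

An RGB renderer would multiply the weights onto RGB triples directly; here the same weights are instead combined with the reconstructed base color spectrum at each sampled wavelength.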
Color noise
At this point, you may be convinced that it is viable to use spectral rendering with legacy assets and that it does not make rendering unreasonably inefficient. But what about noise from the Monte Carlo integration? Will it drive up the number of paths that we need for reasonably converged images or overwhelm our denoisers?
Figure 1 provides an interactive comparison in a scenario where the RGB renderer produces no noise at all: Indirect illumination, shadows, antialiasing and multiple importance sampling are disabled and we only use a single small light source. All results use 1 sample per pixel. There are two things to observe here: First of all, the colors look a bit different in general. That is expected and part 3 analyzes these differences extensively. Secondly, and for now more importantly, spectral rendering with (m=4) wavelength samples introduces a moderate amount of color noise. This color noise has fairly low dynamic range and thus little variance, but compared to the noise-free RGB rendering in this setup, this is a worse result. The figure uses a warm white LED with a relatively smooth illuminant spectrum.
Figure 1: A scene illuminated by a single small warm white LED and rendered with direct illumination only using either RGB or spectral rendering.
Figure 2 repeats this experiment using a metal halide lamp, which has an extremely complex and spiky spectrum. Nonetheless, the conclusions remain largely the same. While RGB rendering produces no noise at all, spectral rendering produces noise with moderate variance. On red surfaces, the noise is a bit stronger this time and there are stronger color shifts between spectral and RGB rendering.
Figure 2: A scene illuminated by a single small metal halide lamp and rendered with direct illumination only using RGB or spectral rendering.
Figure 3 uses multiple metal halide lamps and enables path tracing with longer paths (up to 3 bounces). As a result, there is considerably more noise with both RGB rendering and spectral rendering. The difference in terms of noise levels is hard to discern since the path tracer noise is much stronger than the noise due to spectral rendering.
Figure 3: A scene illuminated by multiple metal halide lamps and rendered with global illumination using RGB or spectral rendering.
That is of course a result of our importance sampling strategy, which is perfectly adapted to the used illuminant. Without that, spectral rendering would produce more noisy output. Nonetheless, these results demonstrate that color noise caused by spectral rendering with good importance sampling tends to be quite negligible relative to the noise caused by other Monte Carlo strategies in rendering. A denoiser that can deal with noise from path tracing would have no trouble with the color noise from spectral rendering.
Handling multiple illuminants
The fact that we know which illuminant to use for importance sampling is key to the efficiency of our sampling strategy. In a more fully-featured path tracer, that is unlikely to be the case. We construct a path with many vertices and at each vertex we sample a connection towards one or more light sources. In this scenario, there is no single illuminant spectrum to use for sampling. And even if there were one, we would not know which one it is when we start constructing our path.
A relevant case where we can sidestep this problem is when we use direct illumination only. In this case, we can simply decide which illuminant to sample first and sample wavelengths after that. In the more general case of longer paths, we can store the Fourier sRGB triple (or the Lagrange multipliers) and the two weights characterizing the spectral BRDF for each path vertex. Then once we have selected an illuminant, we sample wavelengths and evaluate the path throughput based on the data of all path vertices. That works but it takes linear storage and quadratic run time in the length of the path. For short paths, that is unproblematic but that changes quickly as paths grow longer.
Ideally, we would be able to tell from the very start which illuminants are important for the pixel being rendered and act accordingly. That can be realized with special guiding approaches [Ruit21]. First, render a low-resolution image with few paths and filter it to get rough estimates of the spectra reaching the primary hit points. Then perform importance sampling of wavelengths accordingly during rendering. A drawback of this approach is that it relies on a relatively coarse discretization of spectra, so it will still struggle with extremely narrow peaks in illuminant spectra. Nonetheless, this is a promising direction.
Finally, I want to mention the solution that has been used as baseline to which the wavelength guiding was compared [Ruit21]. Combine all illuminant spectra in the scene into a single illuminant spectrum, weighting them by the power of the light sources in the scene. Then use this spectrum for importance sampling as if there were only one illuminant spectrum in the scene. If the scene is small enough and does not have too much variation in how its different parts are illuminated, that will work well. It is also simple to implement. But for larger scenes with many different types of illuminants, it will result in increased color noise.
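This baseline is easy to sketch: given tabulated illuminant spectra and the total power of each light source (the function and parameter names below are mine), the combined spectrum is just a power-weighted average.

```python
import numpy as np

def combined_illuminant(spectra, powers):
    """Combine illuminant spectra into one, weighted by light source power.
    spectra has shape (light_count, sample_count), powers has shape
    (light_count,). Returns one spectrum of shape (sample_count,)."""
    powers = np.asarray(powers, dtype=float)
    weights = powers / np.sum(powers)
    return weights @ np.asarray(spectra, dtype=float)
```

The result is then preprocessed into a sampling lookup table exactly like a single illuminant spectrum.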
Timings
To support the claim that the overhead of spectral rendering is quite limited, the following overview provides full frame times for my path tracer (spectral or RGB). Frames are rendered at (1920\times1080) with one path per pixel on an NVIDIA RTX 5070 Ti with GPU and memory clocks locked to 2452 MHz and 13801 MHz, respectively.
| Scene       | Path length | RGB timing | Spectral timing |
| ----------- | ----------- | ---------- | --------------- |
| Cornell box | 2           | 0.26 ms    | 0.31 ms         |
| Cornell box | 8           | 0.83 ms    | 1.13 ms         |
| Bistro      | 2           | 2.71 ms    | 2.89 ms         |
| Bistro      | 8           | 14.3 ms    | 14.6 ms         |
The main conclusion here is that spectral rendering as described in this blog post series introduces only a modest overhead. In absolute numbers, it is never greater than 0.3 ms. In relative numbers, the overhead is 2%-7% for the Bistro and 19%-36% for the Cornell box. The larger relative numbers for the Cornell box are due to the simple geometry of this scene, which makes ray tracing more efficient.
Conclusions
Spectral rendering has a reputation for having a much higher computational cost than RGB rendering. This reputation may have been justified at some point in the past. For example, when it was common to just sample a large number of wavelengths instead of relying on Monte Carlo integration, the overhead was higher. By now, however, spectral rendering can be nearly as efficient as RGB rendering. In terms of bandwidth, the only additional cost incurred by my approach is the lookups in the 1D lookup tables for importance sampling of the illuminant. One such lookup table takes exactly (8~\mathrm{kiB}), so these reads are extremely cache coherent. Other than that, the main cost is the arithmetic instructions needed to evaluate the reflectance spectra using the methods from my papers. These computations are not completely negligible, but given how much the compute capabilities of GPUs have grown over the past decades, there is a good chance that the compute units would be idle without this additional work. And even under pessimistic assumptions, I measured an overhead of merely 0.3 ms above.
Spectral rendering is a mature technology that can be deployed now, even in real-time renderers. It is compatible with path tracers and rasterizers alike. At this point, the main downside is that it is more complicated than RGB rendering. Though, RGB rendering intertwines color spaces modeling human perception with light transport in a rather unnatural way and that can become a major headache, too. Spectral rendering decouples concepts in a physically meaningful way. For example, if you suddenly need to output colors for a different gamut because you are transitioning from LDR to HDR displays, spectral rendering makes that trivial, unlike RGB rendering. If you want to use spectral sensitivity curves for a specific camera instead of using the CIE XYZ color matching functions designed for human observers, that is also easily possible.
In the third and final part of this blog post series, I will show how the results of spectral rendering differ from RGB rendering and what spectral rendering can do that RGB rendering cannot.
Links to relevant code sections
To make everything discussed in this blog post more tangible, here is a list of direct links to the corresponding sections in the source code of my implementation:
- Conversion of textures from sRGB to Fourier sRGB
- Preprocessing of illuminant spectra
- Spectral Frostbite BRDF
- Spectral albedo specification
- Spectral path tracing
- Wavelength importance sampling
- Conversion from Fourier sRGB to spectra
References
Peters, Christoph and Merzbach, Sebastian and Hanika, Johannes and Dachsbacher, Carsten (2019). Using Moments to Represent Bounded Signals for Spectral Rendering. ACM Transactions on Graphics (Proc. SIGGRAPH), 38(4). Official version | Author’s version
Peters, Christoph and Merzbach, Sebastian and Hanika, Johannes and Dachsbacher, Carsten (2019). Spectral Rendering with the Bounded MESE and sRGB Data. Workshop on Material Appearance Modeling. The Eurographics Association. Official version | Author’s version
van de Ruit, Mark and Eisemann, Elmar (2021). A Multi-Pass Method for Accelerated Spectral Sampling. Computer Graphics Forum (Proc. Pacific Graphics), 40(7). Official version | Author’s version
Wilkie, Alexander and Nawaz, Sehera and Droske, Marc and Weidlich, Andrea and Hanika, Johannes (2014). Hero Wavelength Spectral Sampling. Computer Graphics Forum (Proc. EGSR), 33(4). Official version | Author’s version