Physically-Based Rendering

Lights and Rays ...

 



Chapter 6: Sampling and Reconstruction


Effective sampling and reconstruction methods are essential for generating high-quality images with minimal noise. This section covers various sampling theories, techniques, and reconstruction processes that help achieve realistic rendering results. We'll discuss sampling theory, sampling interfaces, stratified sampling, and various sampling methods, along with image reconstruction concepts and the imaging pipeline.

Sampling Theory


Sampling theory deals with the principles of converting a continuous signal (like light) into a discrete one (sampled data). The key concept is the Nyquist-Shannon sampling theorem, which states that to avoid aliasing, a signal must be sampled at a rate of at least twice its highest frequency component.

In computer graphics, sampling techniques are used to discretize the process of gathering light information from a scene. This helps approximate the color and intensity values that contribute to the final image.

The mathematical representation of a sampled function can be expressed as:

\[
f(x) = \sum_{n=-\infty}^{\infty} f(nT) \cdot \delta(x - nT)
\]

where:
\( f(x) \) is the continuous function,
\( T \) is the sampling interval,
\( \delta \) is the Dirac delta function, which samples \( f(x) \) at discrete points.
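
To make the theorem concrete, here is a small sketch (the function name, rates, and durations are chosen purely for illustration) that samples a 5 Hz sine wave above and below its Nyquist rate:

// Sample a 5 Hz sine wave at two different rates. The Nyquist rate for a
// 5 Hz signal is 10 samples per second; sampling below that aliases the
// signal to a lower apparent frequency.
function sampleSine(frequency, sampleRate, duration) {
    const samples = [];
    const T = 1 / sampleRate; // Sampling interval
    for (let n = 0; n * T < duration; n++) {
        samples.push(Math.sin(2 * Math.PI * frequency * n * T));
    }
    return samples;
}

const wellSampled = sampleSine(5, 50, 1); // 50 Hz >= 2 * 5 Hz: the 5 Hz tone is preserved
const aliased     = sampleSine(5, 6, 1);  // 6 Hz  <  2 * 5 Hz: appears as a ~1 Hz tone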

Sampling Interface


A sampling interface serves as a common structure to handle various sampling techniques. It defines the methods for generating random samples and managing sample distributions.

Here's an example of a JavaScript sampling interface:

class Sampler {
    constructor(sampleCount) {
        this.sampleCount = sampleCount; // Number of samples to generate
    }

    getSamples() {
        throw new Error("getSamples() must be implemented by subclasses.");
    }
}


Stratified Sampling


Stratified sampling improves the sampling process by dividing the sample space into distinct strata or segments, ensuring that each segment is adequately represented. This approach minimizes variance and improves convergence in rendering.

In stratified sampling, the sample space is divided into \( N \) regions, and a fixed number of samples are taken from each region.

Here's a simple JavaScript implementation of stratified sampling:

class StratifiedSampler extends Sampler {
    constructor(sampleCount, strataCount) {
        super(sampleCount);
        this.strataCount = strataCount;
    }

    getSamples() {
        const samples = [];
        const samplesPerStratum = Math.floor(this.sampleCount / this.strataCount);

        for (let i = 0; i < this.strataCount; i++) {
            const offset = i / this.strataCount; // Start of stratum i in [0, 1)
            for (let j = 0; j < samplesPerStratum; j++) {
                // Jitter the sample randomly within its stratum
                const x = offset + Math.random() / this.strataCount;
                samples.push(x);
            }
        }
        return samples;
    }
}
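
A quick usage example (the counts here are illustrative):

const sampler = new StratifiedSampler(16, 4);
const samples = sampler.getSamples();
console.log(samples.length); // 16 values: 4 jittered samples in each quarter of [0, 1)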


The Halton Sampler


The Halton sampler is a quasi-random sampling method that produces low-discrepancy sequences. It uses different prime bases to generate samples in a multi-dimensional space, making it useful for reducing visual artifacts in rendered images.

The Halton sequence generates numbers using a base \( b \):

\[
H(n) = \sum_{k=1}^{\infty} \frac{d_k(n)}{b^k}
\]

where \( d_k(n) \) is the \( k \)-th digit of \( n \) in base \( b \).

Here's a simple implementation of a Halton sampler in JavaScript:

function haltonSequence(n, base) {
    let result = 0;
    let f = 1 / base;

    while (n > 0) {
        result += (n % base) * f; // Add the next digit of n, scaled by its place value
        n = Math.floor(n / base);
        f /= base;
    }
    return result;
}

class HaltonSampler extends Sampler {
    constructor(sampleCount) {
        super(sampleCount);
    }

    getSamples() {
        const samples = [];
        for (let i = 0; i < this.sampleCount; i++) {
            const x = haltonSequence(i, 2); // Base 2 for x-coordinate
            const y = haltonSequence(i, 3); // Base 3 for y-coordinate
            samples.push({ x, y });
        }
        return samples;
    }
}
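
As a quick check, the first few points produced by the sampler above can be verified by hand:

const halton = new HaltonSampler(4);
console.log(halton.getSamples());
// i = 0 -> (0,    0)
// i = 1 -> (0.5,  0.3333...)
// i = 2 -> (0.25, 0.6666...)
// i = 3 -> (0.75, 0.1111...)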


Maximized Minimal Distance Sampler


The Maximized Minimal Distance (MMD) sampler ensures that the distance between any two samples is maximized, helping to minimize clustering and ensure better coverage of the sample space.

The MMD technique is useful for generating sample points that are evenly distributed across a defined area.

Here's a basic implementation of MMD sampling in JavaScript, using simple dart throwing with a fixed minimum distance:

class MMD_Sampler extends Sampler {
    constructor(sampleCount) {
        super(sampleCount);
        this.samples = [];
    }

    distance(p1, p2) {
        return Math.sqrt((p1.x - p2.x) ** 2 + (p1.y - p2.y) ** 2);
    }

    generateSample() {
        return { x: Math.random(), y: Math.random() }; // Random sample in unit square
    }

    getSamples() {
        // Dart throwing: accept a candidate only if it keeps at least the
        // minimum distance (0.1 here) from every previously accepted sample.
        while (this.samples.length < this.sampleCount) {
            const newSample = this.generateSample();
            if (this.samples.every(existingSample => this.distance(existingSample, newSample) > 0.1)) {
                this.samples.push(newSample);
            }
        }
        return this.samples;
    }
}


Sobol' Sampler


The Sobol' sampler is another low-discrepancy sequence generator that creates quasi-random samples, making it well-suited for numerical integration and rendering applications.

Sobol' sequences can be generated using a linear transformation matrix specific to the dimension of the sample space. The method is based on generating binary representations for sample indices.

Here's a simple JavaScript implementation of a Sobol' sampler:

function sobolSequence(n) {
    // Placeholder for the actual Sobol' generation algorithm
    return { x: 0.5, y: 0.5 }; // Replace with actual Sobol' logic
}

class SobolSampler extends Sampler {
    constructor(sampleCount) {
        super(sampleCount);
    }

    getSamples() {
        const samples = [];
        for (let i = 0; i < this.sampleCount; i++) {
            samples.push(sobolSequence(i));
        }
        return samples;
    }
}
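
The placeholder above leaves the actual sequence generation open. A useful starting point is the base-2 radical inverse, which mirrors the bits of the index around the binary point and coincides with the first dimension of a Sobol' sequence; higher dimensions additionally require per-dimension direction numbers, which are omitted in this sketch:

// Base-2 radical inverse: reverse the bits of n around the binary point.
// This matches the first dimension of the Sobol' sequence; a full sampler
// would XOR precomputed direction numbers for each further dimension.
function radicalInverseBase2(n) {
    let result = 0;
    let f = 0.5;
    while (n > 0) {
        result += (n & 1) * f; // Place the lowest bit after the binary point
        n >>>= 1;              // Move on to the next bit
        f *= 0.5;
    }
    return result;
}

// radicalInverseBase2(1) === 0.5, radicalInverseBase2(2) === 0.25, radicalInverseBase2(3) === 0.75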


Image Reconstruction


Image reconstruction is the process of converting sampled data back into a continuous image. This is typically done using reconstruction filters to smooth out the pixel values and reduce aliasing artifacts.

One common filter used in image reconstruction is the box filter, which averages all sample values that fall within a pixel's region.

Mathematically, the reconstruction of an image \( I \) can be represented as:

\[
I(x, y) = \int \int f(x', y') \cdot R(x - x', y - y') \, dx' \, dy'
\]

where \( R \) is the reconstruction filter.
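
In discrete form, a box filter simply averages every sample whose position falls inside a pixel's footprint. A minimal sketch (the sample layout { x, y, value } is assumed for illustration):

// Box-filter reconstruction: average every sample that lands inside the
// pixel's 1x1 footprint centred on (pixelX, pixelY).
function reconstructPixel(pixelX, pixelY, samples) {
    let sum = 0;
    let count = 0;
    for (const s of samples) { // Each sample is assumed to be { x, y, value }
        if (Math.abs(s.x - pixelX) <= 0.5 && Math.abs(s.y - pixelY) <= 0.5) {
            sum += s.value;
            count++;
        }
    }
    return count > 0 ? sum / count : 0; // Average, or 0 if no samples landed here
}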



Imaging Pipeline Mediums (Film Quality)


In ray tracing, the film is the medium onto which the final rendered image is accumulated. The imaging pipeline includes several stages that process data from the scene into that image; a code sketch of these stages follows the list below.

1. Ray Generation: Rays are cast from the camera through pixels on the film.
2. Scene Intersection: Rays intersect with scene geometry to gather color and intensity information.
3. Shading and Lighting: Based on the material properties and light sources, color values are calculated.
4. Image Reconstruction: Samples are combined using reconstruction filters to produce the final image.
5. Output: The final image is displayed or saved to a file.
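
A top-level sketch of how these stages fit together is shown below; generateRay, intersectScene, and shade are hypothetical helpers standing in for stages 1-3, and reconstructPixel is the box-filter routine from the previous section.

// Hypothetical end-to-end render loop over the film; the helper functions
// are placeholders for the stages listed above.
function renderFilm(camera, scene, width, height, samplesPerPixel) {
    const image = new Float32Array(width * height);
    for (let y = 0; y < height; y++) {
        for (let x = 0; x < width; x++) {
            const pixelSamples = [];
            for (let s = 0; s < samplesPerPixel; s++) {
                const ray = generateRay(camera, x, y);  // 1. Ray generation
                const hit = intersectScene(scene, ray); // 2. Scene intersection
                const value = shade(hit, scene);        // 3. Shading and lighting
                pixelSamples.push({ x, y, value });
            }
            image[y * width + x] = reconstructPixel(x, y, pixelSamples); // 4. Reconstruction
        }
    }
    return image; // 5. Output: display or save the image elsewhere
}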






