LearnWebGPU Series

Lights and Rays ...

 



Chapter 3: Meshes and Models


Introduction


We assume that a world is composed of objects. We need to model the following properties of each object:

• its location - where is an object in reference to the scene?
• its orientation - which way is the object turned or facing?
• its volume - what 3-dimensional space does the object take up?
• its surface properties - what color is the object? Is the object smooth or rough?

Our description of the world must be in mathematical values, symbols and operations that a computer is capable of manipulating. This means you need to understand some math! But don't let the math scare you. The math will be introduced slowly with plenty of explanations and examples. Just make sure you fully understand one topic before proceeding to the next. The math is more easily understood in 2-dimensions, so we will introduce the ideas in 2-dimensional space and then extend them to 3-dimensional space.

So let's start with the basics of location and build from there...


Model Location


In WebGPU, positioning models in a scene involves defining their locations in 3D space. The way objects are positioned, translated, or manipulated in relation to other objects is a core aspect of working with 3D rendering.

Location Is Relative


In 3D graphics, positions are often relative to a particular coordinate system. This means that objects are located based on a parent object’s position or a global coordinate system. For example, the position of a model relative to the world origin (0, 0, 0) could be different when viewed relative to the camera’s position.

Example: Relative Position



If a model is placed at coordinates (2, 0, 0) relative to the world origin, and then the world origin shifts, the model’s relative position may remain unchanged, but its absolute position will be different from the viewer’s perspective.

const worldOrigin = [0, 0, 0];   // Global coordinate system
const modelLocation = [2, 0, 0]; // Relative to world origin


Describing Locations


A 3D location is typically described by an ordered triplet of values `(x, y, z)`. In WebGPU, these values represent positions on the X (horizontal), Y (vertical), and Z (depth) axes, respectively. A point in space can be represented as a simple vector:

let modelPosition = [1.0, 2.0, -3.0]; // 3D coordinates


This point lies 1 unit to the right, 2 units upward, and 3 units into the screen.

3-dimensional Locations


Three-dimensional locations in WebGPU define where an object is placed in space using Cartesian coordinates. These coordinates can be transformed using various operations such as translation, rotation, and scaling to control an object’s position.

Equation: 3D Coordinate Representation



The position of any point \(P\) in 3D space can be written as:

\[
P = (x, y, z)
\]

Where \(x\), \(y\), and \(z\) represent the position along the horizontal, vertical, and depth axes, respectively.

Manipulating Locations


Manipulating the location of an object involves transformations like translation, rotation, and scaling. Translation moves an object along an axis, rotation changes its orientation, and scaling adjusts its size.

Example: Translation of an Object



To move an object 2 units along the x-axis:

modelPosition[0] += 2.0; // Move by +2 units on X-axis


This will adjust the object's position horizontally.
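Rotation follows the same idea of rewriting the coordinates. As a minimal sketch (the `rotateY` helper below is purely illustrative and not part of WebGPU), a rotation about the Y axis applies the standard 2D rotation to the x and z components:

// Illustrative helper: rotate an [x, y, z] position about the Y axis
function rotateY(position, angleRadians) {
  const [x, y, z] = position;
  const c = Math.cos(angleRadians);
  const s = Math.sin(angleRadians);
  return [c * x + s * z, y, -s * x + c * z];
}

modelPosition = rotateY(modelPosition, Math.PI / 4); // Turn 45 degrees about Y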

WebGPU Points


In WebGPU, points are used to represent the vertices of geometric primitives. Each point has a position in 3D space, which can be used to define more complex structures like triangles or polygons.

Example: Defining Vertices as Points



const vertices = new Float32Array([
   0.0,  0.5, 0.0,  // Point 1
  -0.5, -0.5, 0.0,  // Point 2
   0.5, -0.5, 0.0   // Point 3
]);


These points represent the vertices of a triangle in 3D space.



Direction / Orientation


In addition to specifying the position of objects, it’s important to define their orientation or direction. Vectors are used to describe these directions, which are essential for lighting, motion, and camera systems.

Vectors


A vector in 3D space represents both direction and magnitude. It is a fundamental concept for describing how an object moves or is oriented within a scene.

Equation: Vector Representation



A vector is typically represented as:

\[
\mathbf{v} = \langle v_x, v_y, v_z \rangle
\]

Where \(v_x\), \(v_y\), and \(v_z\) are the components along the x, y, and z axes.

Example: Defining a Vector in JavaScript



const directionVector = [1.0, 0.0, 0.0];  // Vector pointing along the X-axis


Vectors vs. Points


Points and vectors are often confused, but they serve different purposes. A point describes a position in space, whereas a vector describes a direction or motion. Vectors have magnitude (length), while points do not.

Example: Using a Point and a Vector



let point = [1.0, 2.0, 3.0];   // Represents a location in space
let vector = [0.0, 1.0, 0.0];  // Represents the direction "up"


WebGPU Vectors


In WebGPU, vectors are commonly used for operations like lighting, where surface normals (which are vectors) are critical for determining how light reflects off surfaces.

Example: Normal Vector for Lighting



const normalVector = [0.0, 1.0, 0.0]; // Surface normal pointing up


Manipulating Vectors


Vectors can be scaled, normalized, or added together. Normalizing a vector means converting it into a unit vector, which retains the same direction but has a length of 1.

Example: Normalizing a Vector



function normalize(vec) {
  const length = Math.sqrt(vec[0] ** 2 + vec[1] ** 2 + vec[2] ** 2);
  return vec.map(component => component / length);
}


A JavaScript Vector Object


To make working with vectors easier, you can encapsulate vector operations in a JavaScript class:

class Vector3 {
  constructor(x, y, z) {
    this.x = x;
    this.y = y;
    this.z = z;
  }

  length() {
    return Math.sqrt(this.x ** 2 + this.y ** 2 + this.z ** 2);
  }

  normalize() {
    const len = this.length();
    this.x /= len;
    this.y /= len;
    this.z /= len;
  }
}




Volume / Surfaces


In 3D graphics, surfaces define the visible boundaries of objects. These surfaces are usually constructed from triangles, as triangles are the simplest polygon that can describe a surface.

Defining Triangles


A triangle in 3D space is defined by three vertices. It is the fundamental building block for constructing complex surfaces in WebGPU. Every surface can be represented as a collection of triangles.

Example: Triangle Defined by Vertices



const triangleVertices = new Float32Array([
  -1.0, -1.0, 0.0,  // Vertex 1
   1.0, -1.0, 0.0,  // Vertex 2
   0.0,  1.0, 0.0   // Vertex 3
]);


Defining 3D Objects


More complex 3D objects are built by combining multiple triangles into what’s called a mesh. The mesh is a collection of vertices and edges that define the surface of a 3D object.

Example: Cube Defined by Triangles



const cubeVertices = new Float32Array([
  // Front face
  -1.0, -1.0, 1.0,
   1.0, -1.0, 1.0,
   1.0,  1.0, 1.0,
  -1.0,  1.0, 1.0,
  // Other faces...
]);


WebGPU Triangle Rendering Modes


Unlike some older graphics APIs, WebGPU does not expose a separate fill/wireframe/point polygon mode. Instead, the primitive topology chosen when creating a render pipeline determines how the vertices are assembled and drawn:

Filled triangles: the `triangle-list` (or `triangle-strip`) topology renders each triangle with its interior filled.
Wireframe-style: there is no built-in wireframe mode; the edges of triangles are instead submitted as line primitives using the `line-list` topology.
Points: the `point-list` topology renders each vertex as an individual point.
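
A minimal sketch of selecting the topology when creating a pipeline (other pipeline fields are omitted here, as in the later examples in this chapter):

const trianglePipeline = device.createRenderPipeline({
  primitive: {
    topology: 'triangle-list',  // Every three vertices form one filled triangle
  },
  // Other pipeline settings (shaders, layout, targets)...
});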



Material Properties


Material properties describe how an object's surface interacts with light and how it appears to the viewer. In WebGPU, materials determine the color, shininess, transparency, and texture of a surface.

Surface Properties to Consider



Color: Solid or multi-colored?
Reflectivity: Does the surface reflect light? Is it shiny or dull?
Texture: Is the surface smooth or rough?
Transparency: Does the surface allow light to pass through? Does it refract light?

Example: Material Properties in WebGPU



const material = {
  color: [0.7, 0.2, 0.5],  // RGB color
  shininess: 30,           // Specular shininess
  reflectivity: 0.8,       // How much light it reflects
  roughness: 0.3           // Roughness level
};




Color


Color in WebGPU is defined using a variety of models, most commonly the RGB (Red, Green, Blue) model, where each component is a floating-point number between 0.0 and 1.0.

Color Models


In addition to RGB, other color models like HSV (Hue, Saturation, Value) are used to represent colors in a more human-perceptual way.
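
As a small illustrative sketch (the `hsvToRgb` helper is not part of WebGPU), an HSV colour with hue in degrees and saturation/value in [0, 1] can be converted to the RGB triplets used elsewhere in this chapter:

// Illustrative sketch: convert HSV (h in [0, 360), s and v in [0, 1]) to RGB
function hsvToRgb(h, s, v) {
  const c = v * s;                                  // Chroma
  const x = c * (1 - Math.abs(((h / 60) % 2) - 1)); // Intermediate component
  const m = v - c;                                  // Offset added to all channels
  let rgb;
  if (h < 60)       rgb = [c, x, 0];
  else if (h < 120) rgb = [x, c, 0];
  else if (h < 180) rgb = [0, c, x];
  else if (h < 240) rgb = [0, x, c];
  else if (h < 300) rgb = [x, 0, c];
  else              rgb = [c, 0, x];
  return rgb.map(component => component + m);
}

const red = hsvToRgb(0, 1.0, 1.0);  // [1, 0, 0]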

Example: RGB Color in WebGPU



const color = [1.0, 0.0, 0.0];  // Pure red


Device Independent Color


Device-independent color models, such as the CIE XYZ model, aim to represent colors consistently across different devices.

Transparency


Transparency is handled by the alpha component in RGBA (Red, Green, Blue, Alpha), where alpha controls the opacity.

Example: Semi-Transparent Color



const color = [0.0, 1.0, 0.0, 0.5];  // Semi-transparent green


Shades of Color


Shades can be created by varying the intensity of color components. A darker shade of blue can be created by lowering the RGB values:

const darkBlue = [0.0, 0.0, 0.5];


WebGPU Color


In WebGPU, colors are typically passed to shaders, where they define the appearance of objects.
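
As a rough sketch of how a colour reaches a shader (the buffer layout of a single RGBA vec4 is an assumption for illustration), the values can be uploaded into a uniform buffer:

const colorData = new Float32Array([0.7, 0.2, 0.5, 1.0]); // RGBA colour
const colorBuffer = device.createBuffer({
  size: colorData.byteLength,
  usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
});
device.queue.writeBuffer(colorBuffer, 0, colorData); // Upload to the GPU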



Texture Maps


What is a Mapping?


Texture mapping is the process of applying a 2D image (or texture) onto a 3D surface. Each point on the surface corresponds to a coordinate in the texture, known as UV coordinates.
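
For instance, here is a sketch of interleaving a (u, v) coordinate with each vertex position of the earlier triangle, so that every vertex maps to a point in the 2D texture; the specific UV values are illustrative:

const vertexData = new Float32Array([
  // x,    y,   z,     u,   v
   0.0,  0.5, 0.0,    0.5, 0.0,  // Top vertex -> top-centre of the texture
  -0.5, -0.5, 0.0,    0.0, 1.0,  // Bottom-left -> bottom-left of the texture
   0.5, -0.5, 0.0,    1.0, 1.0,  // Bottom-right -> bottom-right of the texture
]);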

Procedural Texture Maps


Procedural textures are generated algorithmically rather than being pre-designed. For example, a noise texture can simulate surfaces like marble or wood.
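
A minimal sketch of a procedural texture: filling a pixel array with a checkerboard pattern that could later be uploaded to a GPU texture (the size, cell count and colours are arbitrary choices):

// Illustrative sketch: generate an RGBA checkerboard pattern
function makeCheckerboard(size = 64, cell = 8) {
  const data = new Uint8Array(size * size * 4); // RGBA, 8 bits per channel
  for (let y = 0; y < size; y++) {
    for (let x = 0; x < size; x++) {
      const on = ((Math.floor(x / cell) + Math.floor(y / cell)) % 2) === 0;
      const i = (y * size + x) * 4;
      data[i + 0] = on ? 255 : 0;  // R
      data[i + 1] = on ? 255 : 0;  // G
      data[i + 2] = on ? 255 : 0;  // B
      data[i + 3] = 255;           // A (fully opaque)
    }
  }
  return data;
}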

Image Texture Maps


Image-based texture maps use bitmap images to provide detailed surface textures.

Example: Loading a Texture in WebGPU



const image = await loadImage('texture.png');  // loadImage: a helper, e.g. fetch + createImageBitmap
const texture = device.createTexture({
  size: [image.width, image.height, 1],
  format: 'rgba8unorm',
  usage: GPUTextureUsage.TEXTURE_BINDING | GPUTextureUsage.COPY_DST,
});


WebGPU Implementation


WebGPU supports textures by creating texture objects that are used in shaders to sample colors based on UV coordinates.
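
A short sketch of that setup, assuming the `texture` from the earlier example and a `pipeline` created elsewhere; the binding numbers are illustrative:

// Sampler: controls how the texture is filtered when sampled by UV coordinate
const sampler = device.createSampler({
  magFilter: 'linear',  // Smooth filtering when the texture is magnified
  minFilter: 'linear',  // Smooth filtering when it is minified
});

// Bind group: makes the sampler and texture view visible to the shader
const textureBindGroup = device.createBindGroup({
  layout: pipeline.getBindGroupLayout(0),
  entries: [
    { binding: 0, resource: sampler },
    { binding: 1, resource: texture.createView() },
  ],
});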



Light Modeling


Lighting is a critical aspect of 3D rendering, as it affects the way surfaces appear. WebGPU implements various lighting models, including ambient, diffuse, and specular reflections.

Light Sources


Common light sources include:

Directional Light: Light with parallel rays, like sunlight.
Point Light: Light that emanates from a single point, like a light bulb.
Spotlight: Light focused in a specific direction.

Ambient Reflected Light


Ambient light is the general light present in a scene, illuminating all objects equally. It's used to simulate indirect lighting.
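
In its simplest form the ambient term just scales the surface colour by a constant, as in this small illustrative sketch:

const ambientStrength = 0.1;           // Illustrative constant ambient level
const surfaceColor = [0.7, 0.2, 0.5];  // Base RGB colour of the surface
const ambient = surfaceColor.map(c => c * ambientStrength);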

Diffuse Reflected Light


Diffuse reflection occurs when light hits a rough surface, scattering in many directions. The intensity of the reflection depends on the angle between the light and the surface normal.

Equation: Diffuse Reflection



\[
I_{diffuse} = I_{light} \cdot (\mathbf{N} \cdot \mathbf{L})
\]

where:
• \(I_{light}\) is the intensity of the light.
• \(\mathbf{N}\) is the surface normal.
• \(\mathbf{L}\) is the light direction.
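
Evaluated on the CPU purely for illustration (shaders normally do this per pixel), and reusing the `normalize` helper defined earlier in this chapter, the diffuse term looks like the sketch below; the dot product is clamped at zero so surfaces facing away from the light receive none:

// Dot product of two 3-component vectors
function dot(a, b) {
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

const N = [0.0, 1.0, 0.0];             // Surface normal (unit length)
const L = normalize([0.5, 1.0, 0.3]);  // Direction towards the light
const lightIntensity = 1.0;
const diffuse = lightIntensity * Math.max(dot(N, L), 0.0);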

Specular Reflected Light


Specular reflection creates highlights and occurs on shiny surfaces where light is reflected in a single direction.

Equation: Specular Reflection



\[
I_{specular} = I_{light} \cdot (\mathbf{R} \cdot \mathbf{V})^{shininess}
\]

Where:
• \(\mathbf{R}\) is the reflected light direction.
• \(\mathbf{V}\) is the view direction.
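
Continuing the CPU sketch from the diffuse example (reusing `dot`, `N`, `L` and `lightIntensity`), the reflected direction is R = 2(N.L)N - L, and the specular term is:

// Reflect the light direction L about the normal N
function reflect(L, N) {
  const d = 2.0 * dot(N, L);
  return [d * N[0] - L[0], d * N[1] - L[1], d * N[2] - L[2]];
}

const V = [0.0, 0.0, 1.0];   // Direction towards the viewer
const R = reflect(L, N);     // Reflected light direction
const shininess = 30;
const specular = lightIntensity * Math.pow(Math.max(dot(R, V), 0.0), shininess);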

WebGPU Implementation


Lighting in WebGPU is handled in shaders, where the light source, surface properties, and normals are used to calculate the final color of each pixel.
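
As a sketch of what this looks like in practice, here is a small WGSL fragment shader (held in a JavaScript string) that combines a constant ambient term with a diffuse term from a hard-coded light direction; the attribute location and colours are illustrative:

const lightingShader = /* wgsl */ `
@fragment
fn fs_main(@location(0) normal : vec3<f32>) -> @location(0) vec4<f32> {
  let N = normalize(normal);
  let L = normalize(vec3<f32>(0.5, 1.0, 0.3));  // Direction towards the light
  let ambient = 0.1;
  let diffuse = max(dot(N, L), 0.0);
  let baseColor = vec3<f32>(0.7, 0.2, 0.5);
  return vec4<f32>(baseColor * (ambient + diffuse), 1.0);
}`;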



Surface Normals


Surface normals are vectors perpendicular to a surface and are essential for lighting calculations. They help determine how light interacts with the surface.

One Normal Vector Per Triangle


In simple models, a single normal is defined per triangle, meaning that the entire triangle shares the same normal vector.

Example: Single Normal Per Triangle



const normal = [0.0, 1.0, 0.0]; // Normal pointing upwards


One Normal Vector Per Vertex


A more advanced method involves defining a normal per vertex. This results in smooth lighting across the surface of the object, making curves and organic shapes appear more realistic.
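
A sketch of one common way to obtain per-vertex normals (the `computeVertexNormals` helper is illustrative, assuming a flat xyz position array and an index list with three entries per triangle): each triangle's face normal is accumulated into its three vertices and the result is normalized.

// Illustrative: average the face normals of all triangles sharing each vertex
function computeVertexNormals(positions, indices) {
  const normals = new Float32Array(positions.length); // Accumulated per vertex
  for (let i = 0; i < indices.length; i += 3) {
    const [ia, ib, ic] = [indices[i], indices[i + 1], indices[i + 2]];
    const a = positions.slice(ia * 3, ia * 3 + 3);
    const b = positions.slice(ib * 3, ib * 3 + 3);
    const c = positions.slice(ic * 3, ic * 3 + 3);
    // Two edge vectors of the triangle
    const e1 = [b[0] - a[0], b[1] - a[1], b[2] - a[2]];
    const e2 = [c[0] - a[0], c[1] - a[1], c[2] - a[2]];
    // Face normal = cross product of the edges
    const n = [
      e1[1] * e2[2] - e1[2] * e2[1],
      e1[2] * e2[0] - e1[0] * e2[2],
      e1[0] * e2[1] - e1[1] * e2[0],
    ];
    for (const idx of [ia, ib, ic]) {
      normals[idx * 3 + 0] += n[0];
      normals[idx * 3 + 1] += n[1];
      normals[idx * 3 + 2] += n[2];
    }
  }
  // Normalize each accumulated normal to unit length
  for (let v = 0; v < normals.length; v += 3) {
    const len = Math.hypot(normals[v], normals[v + 1], normals[v + 2]) || 1.0;
    normals[v] /= len; normals[v + 1] /= len; normals[v + 2] /= len;
  }
  return normals;
}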

One Normal Vector Per Pixel


Per-pixel normals are calculated in the fragment shader for more detailed lighting effects. This technique is commonly used in bump mapping or normal mapping.

WebGPU Implementation


In WebGPU, normals are passed as vertex attributes and used in the shader to calculate the final color based on the lighting model.
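
For example, a sketch of a vertex buffer layout describing interleaved position and normal attributes (the shader locations and the interleaving scheme are assumptions for illustration):

const vertexBufferLayout = {
  arrayStride: 6 * 4,  // 6 floats per vertex (xyz position + xyz normal)
  attributes: [
    { shaderLocation: 0, offset: 0,     format: 'float32x3' }, // position
    { shaderLocation: 1, offset: 3 * 4, format: 'float32x3' }, // normal
  ],
};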



Lines


In 3D rendering, lines can be used to represent edges of models or wireframes of complex objects. WebGPU supports rendering lines using vertex buffers, where each pair of vertices defines a segment.

Rendering Wireframes


Wireframe rendering is a common technique used to visualize the structure of a 3D model by drawing only the edges of the polygons.

Example: Defining Line Segments



const lineVertices = new Float32Array([
  0.0, 0.0, 0.0,   // Start of line
  1.0, 1.0, 0.0    // End of line
]);

// Create buffer for line vertices
const lineBuffer = device.createBuffer({
  size: lineVertices.byteLength,
  usage: GPUBufferUsage.VERTEX,
  mappedAtCreation: true,
});

new Float32Array(lineBuffer.getMappedRange()).set(lineVertices);
lineBuffer.unmap();


In this example, a simple line segment is defined by two vertices. Lines are useful for rendering wireframes or simple geometric representations of more complex objects.

Line Rendering in WebGPU



In WebGPU, line rendering can be achieved by specifying the primitive topology as `line-list` when creating the pipeline. This allows each pair of vertices to be treated as a distinct line segment.

const pipeline = device.createRenderPipeline({
  primitive: {
    topology: 'line-list',  // Specifies the use of lines
  },
  // Other pipeline settings
});


Applications of Line Rendering


Wireframe models: Visualizing the underlying structure of 3D meshes.
Graphs and charts: Drawing lines to represent connections between data points.
Debugging tools: Highlighting edges or intersections within 3D scenes.



Points


Points are essential in 3D graphics for representing individual vertices, stars, particles, or other small objects that don't need to be rendered as full shapes. In WebGPU, points can be rendered directly by setting the primitive topology to `point-list`.

Defining Points in WebGPU


Each point in WebGPU is treated as a single vertex. Unlike lines or triangles, points drawn with the `point-list` topology are rasterized as individual pixels on the screen (WebGPU has no built-in point-size control), which makes them useful for specific visual effects like particle systems; larger particles are typically drawn as small camera-facing quads instead.

Example: Rendering Points



const pointVertices = new Float32Array([
  0.0, 0.0, 0.0,  // Point 1
  1.0, 1.0, 0.0,  // Point 2
]);

const pointPipeline = device.createRenderPipeline({
  primitive: {
    topology: 'point-list',  // Specifies the use of points
  },
  // Other pipeline settings
});


In this example, two points are defined and rendered using a WebGPU pipeline set to handle points as individual primitives.

Applications of Point Rendering


Particle systems: Points can be used to represent particles such as dust, sparks, or stars in the sky.
Point clouds: A collection of points can represent 3D shapes without connecting them with lines or triangles, useful in applications like 3D scanning and visualization.
Vertex highlighting: Points can be used to highlight specific vertices on a mesh for debugging or user interaction.

Example: Particle System with Points


A particle system can be simulated by rendering many points in 3D space, with each point representing a particle.

const particlePositions = new Float32Array([
   0.1, 0.2, -0.3,  // Particle 1
  -0.4, 0.3,  0.2,  // Particle 2
  // More particles...
]);

// Buffer and rendering setup similar to other examples


By updating the positions of these points each frame, you can create dynamic effects like fireworks, snowfall, or smoke.
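
A minimal sketch of such an update loop, assuming `particlePositions` from above has been uploaded into a `particleBuffer` created with `GPUBufferUsage.COPY_DST` (both names are illustrative):

// Drift every particle downwards a little each frame and re-upload the buffer
function updateParticles() {
  for (let i = 0; i < particlePositions.length; i += 3) {
    particlePositions[i + 1] -= 0.01;  // Move down on Y, like snowfall
  }
  device.queue.writeBuffer(particleBuffer, 0, particlePositions);
  requestAnimationFrame(updateParticles);
}
requestAnimationFrame(updateParticles);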


Summary


In this chapter, we delved into several key concepts that are fundamental to working with 3D graphics in WebGPU. We explored how to define and manipulate locations in 3D space, focusing on points, vectors, and orientations. Understanding how to work with positions, directions, and transformations is critical for positioning models accurately in a scene.

We then discussed how surfaces are defined using triangles, how material properties affect the visual appearance of objects, and the importance of lighting and surface normals for creating realistic scenes. Lighting models were introduced, emphasizing the roles of ambient, diffuse, and specular reflections in determining how light interacts with surfaces.

Additionally, we examined how texture maps are applied to 3D objects, using both procedural and image-based textures to enhance the realism of materials. Finally, we covered the rendering of basic geometric primitives like lines and points, which are essential for wireframes, particle systems, and other effects.

These concepts form the backbone of creating visually compelling and interactive 3D applications using WebGPU. As you proceed, you’ll combine these elements to construct increasingly complex and dynamic scenes in your WebGPU-powered applications.










 