Matrices


Matrices introduce high-level control which is well suited to the GPU. GPUs love matrices! Matrix operations are highly parallel and can be added to the vertex pipeline very easily.


Matrices and triangles - bringing transforms, projections, cameras, ... into the picture using uniforms and layouts.


Functions Used: setVertexBuffer(), setIndexBuffer(), drawIndexed(), createBuffer(), getMappedRange(), getContext(), requestAdapter(), getPreferredCanvasFormat(), createCommandEncoder(), beginRenderPass(), setPipeline(), draw(), end(), submit(), getCurrentTexture(), createView(), createShaderModule()

What are Matrices? (Just Numbers)


Before we jump into adding matrices into our WebGPU program, let's just clarify what matrices are! Matrices are just arrays of numbers (similar to vectors). For example, a 3x3 matrix is just 9 floating point numbers:

[0, 0, 0, 0, 0, 0, 0, 0, 0]


While it's stored in memory as a flat array of numbers, we usually visualize and think of matrices in their 2D form (3 rows and 3 columns):

[ 0 0 0
  0 0 0
  0 0 0 ]


Matrices are built on top of linear algebra. There is a set of rules for how to combine them so that the result has a special meaning. For example, to combine two matrices - you don't add them, you `multiply` them.
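
For instance, here is a minimal sketch (plain JavaScript, no libraries, and assuming row-major storage purely for illustration) of the multiplication rule for two 3x3 matrices stored as flat arrays - each output element is the dot product of a row of `a` with a column of `b`:

// Minimal sketch: multiply two 3x3 matrices stored as flat arrays of 9 numbers (row-major)
function multiply3x3(a, b)
{
    let out = new Array(9).fill(0);
    for (let row = 0; row < 3; row++)
    for (let col = 0; col < 3; col++)
    for (let k = 0; k < 3; k++)
    {
        // out[row][col] += a[row][k] * b[k][col]
        out[row*3 + col] += a[row*3 + k] * b[k*3 + col];
    }
    return out;
}

Matrix libraries (such as gl-matrix, used below) do exactly this for you - just with 4x4 matrices, column-major storage and optimized code.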


Power of Matrices in Computer Graphics


Matrices are a powerful (indispensable) concept in computer graphics. They enable us to combine multiple simple and complex transforms into a single form which can be quickly and easily applied to all vectors.

The 3 main matrices you use in computer graphics are the `model` transform, the `view` transform and the `projection` transform. When these three matrices are combined they're referred to as the `modelviewprojection` or `MVP`.

They represent the local transform for the shape, the transform to position the shape relative to the camera, and the final transform (the projection) which adds depth (things get smaller as they get further away).
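
For example, once the three matrices exist (built with a matrix library such as gl-matrix, introduced below), combining them is just two multiplications - a minimal sketch (in this tutorial the combination actually happens per-vertex in the shader, and `projectionMatrix`, `viewMatrix` and `modelMatrix` are assumed to have already been created):

// Sketch: combine projection, view and model into a single MVP matrix (gl-matrix style)
let mvp = mat4.create();
mat4.multiply(mvp, projectionMatrix, viewMatrix);   // mvp = projection * view
mat4.multiply(mvp, mvp, modelMatrix);               // mvp = projection * view * model
// every vertex is then transformed: clipPosition = mvp * vec4(position, 1.0)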


Matrix Libraries


To make our life easier, there are a number of free open source libraries available online - one of the most popular and well known is gl-matrix.

The gl-matrix Homepage: https://glmatrix.net/.

You can include the gl-matrix like any other script:

<script src='https://cdnjs.cloudflare.com/ajax/libs/gl-matrix/2.6.0/gl-matrix-min.js'></script>


We can also load the script dynamically on-the-fly - however, to ensure the script is loaded and ready in time, we fetch it with a fetch call (using asynchronous await):

let promise      = await fetch('https://cdnjs.cloudflare.com/ajax/libs/gl-matrix/2.6.0/gl-matrix-min.js');
let text         = await promise.text();
let script       = document.createElement('script');
script.type      = 'text/javascript';
script.async     = false;
script.innerHTML = text;
document.body.appendChild(script); 


Some test cases of how to use the matrix library:

let m = mat4.create();
console.log( m );
console.log( String(m) );

console.log('m instanceof Float32Array: ', m instanceof Float32Array );

let v = vec4.create();
console.log( v );
console.log( String(v) );

v.x = 2;
console.log( v );

let r = vec4.create();
vec4.transformMat4( r, v, m );
console.log( r );

console.log('**list all methods available in vec4:**');
for (let key in vec4)
{
    console.log( key );
}

m[4] = 4;
m[5] = 5;
console.log( String(m) );


The output for the test cases:

[{"0":1,"1":0,"2":0,"3":0,"4":0,"5":1,"6":0,"7":0,"8":0,"9":0,"10":1,"11":0,"12":0,"13":0,"14":0,"15":1}]
[
"1,0,0,0,0,1,0,0,0,0,1,0,0,0,0,1"]
[
"m instanceof Float32Array: ",true]
[{
"0":0,"1":0,"2":0,"3":0}]
[
"0,0,0,0"]
[{
"0":0,"1":0,"2":0,"3":0,"x":2}]
[{
"0":0,"1":0,"2":0,"3":0}]
[
"**list all methods available in vec4:**"]
[
"sub"]
[
"mul"]
[
"div"]
[
"dist"]
[
"sqrDist"]
[
"len"]
[
"sqrLen"]
[
"forEach"]
[
"create"]
[
"clone"]
[
"fromValues"]
[
"copy"]
[
"set"]
[
"add"]
[
"subtract"]
[
"multiply"]
[
"divide"]
[
"ceil"]
[
"floor"]
[
"min"]
[
"max"]
[
"round"]
[
"scale"]
[
"scaleAndAdd"]
[
"distance"]
[
"squaredDistance"]
[
"length"]
[
"squaredLength"]
[
"negate"]
[
"inverse"]
[
"normalize"]
[
"dot"]
[
"lerp"]
[
"random"]
[
"transformMat4"]
[
"transformQuat"]
[
"str"]
[
"exactEquals"]
[
"equals"]
[
"1,0,0,0,4,5,0,0,0,0,1,0,0,0,0,1"]



Of course, you don't want to manually loop over all of your vertices and apply the transform on the CPU! Instead, you can take advantage of the GPU and the vertex shader. You set up the matrix, copy it across to the GPU, link it up with the graphics pipeline - and you're all set to go.


Key things:
• You'll usually work with mat4 matrices and vec4 vectors - so you can do all transforms (including projection)
• Usually the `projection` and `view` matrices are common to the scene but each mesh/shape has its own local `model` transform
• Matrix transforms are combined through multiplication
• The identity matrix is a special matrix (all zeros except for 1's along the diagonal) - combining it with any other matrix does not change anything (a bit like multiplying a number by 1) - see the sketch below
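
A quick sanity check of the last two points - a minimal sketch (assuming gl-matrix has already been loaded as shown above):

// Sketch: identity and multiplication (assumes gl-matrix is loaded)
let identity  = mat4.create();                    // mat4.create() returns the identity matrix
let translate = mat4.create();
mat4.fromTranslation(translate, [2, 0, 0]);       // a transform that moves 2 units along x

let result = mat4.create();
mat4.multiply(result, identity, translate);       // identity * translate == translate (unchanged)
console.log( String(result) );

mat4.multiply(result, translate, translate);      // combining two translations by multiplication
console.log( String(result) );                    // net translation of 4 units along x

Combining the two translations by multiplication accumulates them - exactly the behaviour we rely on when stacking the model, view and projection transforms.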


Matrix Helper Function (Build Matrices)


To help you manage your matrices you can construct a simple helper function - pass in the position, rotation and scale for the local (model) transform; any values not set fall back to defaults. The view matrix is then built from the camera location (and target), and the projection matrix uses the canvas dimensions.

function buildMatrix(p, r, s) // position, rotation, scale
{
    // if not set fall back to default values
    if (!s) s = {x:1, y:1, z:1};
    if (!r) r = {x:0, y:0, z:0};
    if (!p) p = {x:0, y:0, z:0};
  
    // Create the matrix in Javascript (using matrix library)
    const modelMatrix = mat4.create();

    // create the model transform with a rotation and translation
    let translateMat = mat4.create();   mat4.fromTranslation( translateMat, Object.values(p) );
    let rotateXMat   = mat4.create();   mat4.fromXRotation( rotateXMat, r.x );
    let rotateYMat   = mat4.create();   mat4.fromYRotation( rotateYMat, r.y );
    let rotateZMat   = mat4.create();   mat4.fromZRotation( rotateZMat, r.z );
    let scaleMat     = mat4.create();   mat4.fromScaling( scaleMat, Object.values(s) );

    mat4.multiply(modelMatrix, modelMatrix, translateMat);
    mat4.multiply(modelMatrix, modelMatrix, rotateXMat);
    mat4.multiply(modelMatrix, modelMatrix, rotateYMat);
    mat4.multiply(modelMatrix, modelMatrix, rotateZMat);
    mat4.multiply(modelMatrix, modelMatrix, scaleMat);
    return modelMatrix;
}

// build a model matrix (scale, rotate and position it wherever we want)
let modelMatrix = buildMatrix();
   
// setup the projection
let projectionMatrix = mat4.create(); 
mat4.perspective(projectionMatrix, Math.PI / 2, canvas.width / canvas.height, 0.001, 5000.0);

// camera `lookat` - camera is at -4 units down the z-axis looking at '0,0,0'
let viewMatrix = mat4.create();
mat4.lookAt(viewMatrix, [0,0,-4], [0,0,0], [0,1,0]);


As a simple example, we'll place a few triangles in the scene and have the camera revolve around them using a sine function.

All the triangles will be positioned using a `model` matrix (so it's the same triangle, but with different transforms).


We construct a GPU buffer for the 3 matrices as follows:

let mvpUniformBuffer = device.createBuffer({
  size:  64*3, // three 4x4 f32 matrices (64 bytes each)
  usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST
});


We then copy across the values from our library to the GPU buffer using the GPU queue:

device.queue.writeBuffer(mvpUniformBuffer,      0,      modelMatrix);
device.queue.writeBuffer(mvpUniformBuffer,      64,     viewMatrix);
device.queue.writeBuffer(mvpUniformBuffer,      128,    projectionMatrix);
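
Later, in the draw loop, the `view` slot (at offset 64) is updated every frame to orbit the camera around the scene. A minimal sketch of the idea (this is what the full example below does; `counter` is simply a value that increases each frame):

// Sketch: orbit the camera around the origin using cosine/sine of an increasing counter
let cameraEye = [ Math.cos(counter)*3.0, 0.0, Math.sin(counter)*3.0 ];
mat4.lookAt(viewMatrix, cameraEye, [0,0,0], [0,1,0]);           // eye, target, up
device.queue.writeBuffer(mvpUniformBuffer, 64, viewMatrix);     // overwrite the 'view' slot
counter += 0.001;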


The shader has to have a structure so that it knows about the matrices; these extra lines are added to declare the uniform in the vertex shader:

const vertWGSL = `
struct Transforms {
    model      : mat4x4<f32>,
    view       : mat4x4<f32>,
    projection : mat4x4<f32>,
};
@group(0) @binding(0) var<uniform> transforms : Transforms;


.... rest of vertex shader code


The pipeline also needs a `layout` of what's connected to what and the format/sizes:


let sceneUniformBindGroupLayout = device.createBindGroupLayout({
  entries: [
    { binding: 0, visibility: GPUShaderStage.VERTEX,   buffer: { type: "uniform" } } 
  ]
});

let uniformBindGroup = device.createBindGroup({
  layout:   sceneUniformBindGroupLayout,
  entries: [
    { binding: 0, resource: { buffer: mvpUniformBuffer } }
   ],
});

// ----------------------------------------------------------------

const pipeline = device.createRenderPipeline({
  layout: device.createPipelineLayout({bindGroupLayouts: [sceneUniformBindGroupLayout]}),
  
  ....


The bind group also needs to be set in the draw loop - on the render pass:

renderPass.setBindGroup(0, uniformBindGroup);


As we rotate the camera around - so that we can see triangles from the front and back (i.e., not culled) - we set the cull mode to 'none' when the render pipeline is created:

...
  primitive: {
    topology:  "triangle-list",
    frontFace: "cw",
    cullMode:  'none' // disable culling
  },
...


Since we're going to do multiple render passes with different transforms, we only clear on the first pass (so the background is cleared) - on every subsequent pass the new contents are added to the existing output.

....
    const renderPassDescription = {
      colorAttachments: [{
        view: context.getCurrentTexture().createView(),
        loadOp: (k==0 ? "clear" : "load"),  // only clear for the first render pass draw
        clearValue: [0, 0.5, 0.5, 1], // clear screen to color
        storeOp: 'store'
....


A final update is to the vertex shader - each vertex is transformed by the model, view and projection matrices. We combine them (by multiplication) and then apply the result to the position - also by multiplication! It's that easy!

@vertex
fn main(@location(0) inPos  : vec3<f32>,
        @location(1) inColor: vec3<f32>) -> VSOut 
{
    var mvp = transforms.projection * transforms.view * transforms.model;
    
    var vsOut: VSOut;
    vsOut.Position = mvp * vec4<f32>(inPos, 1.0); // apply transform to position
    vsOut.color    = inColor;
    return vsOut;
}



Full Working Example


Bringing all the bits and pieces together, here is the complete code:

/*
    WebGPU Example - Matrices and Triangles
    Simple uncluttered example (uses the gl-matrix library)
    
    Key details:
    - Index buffer (faces), vertices, color, depth buffer, shaders, model/view/projection matrices
*/
let promise      = await fetch('https://cdnjs.cloudflare.com/ajax/libs/gl-matrix/2.6.0/gl-matrix-min.js');
let text         = await promise.text();
let script       = document.createElement('script');
script.type      = 'text/javascript';
script.async     = false;
script.innerHTML = text;
document.body.appendChild(script); 

let canvas = document.createElement('canvas');
document.body.appendChild( canvas );
canvas.width = canvas.height = 512;

const adapter = await navigator.gpu.requestAdapter();
const device  = await adapter.requestDevice();
const context = canvas.getContext('webgpu');

const presentationSize   = [ canvas.width,   
                             canvas.height ];

const presentationFormat = navigator.gpu.getPreferredCanvasFormat();

context.configure({ device: device,
                    format: presentationFormat,
                    size:   presentationSize });
const vertWGSL = `
struct Transforms {
    model      : mat4x4<f32>,
    view       : mat4x4<f32>,
    projection : mat4x4<f32>,
};
@group(0) @binding(0) var<uniform> transforms : Transforms;

struct VSOut {
    @builtin(position) Position: vec4<f32>,
    @location(0)       color   : vec3<f32>,
};

@vertex
fn main(@location(0) inPos  : vec3<f32>,
        @location(1) inColor: vec3<f32>) -> VSOut 
{
    var mvp = transforms.projection * transforms.view * transforms.model;
    
    var vsOut: VSOut;
    vsOut.Position = mvp * vec4<f32>(inPos, 1.0);
    vsOut.color    = inColor;
    return vsOut;
}
`;

const fragWGSL = `
@fragment
fn main(@location(0) inColor: vec3<f32>) -> @location(0) vec4<f32> 
{
    return vec4<f32>(inColor, 1.0);
}
`;

const positions = new Float32Array([-1.0, -1.0, 0.0,   // Position Vertex Buffer Data
                                     1.0, -1.0, 0.0,
                                     0.0,  1.0, 0.0 ]);
const colors    = new Float32Array([ 1.0, 0.0, 0.0,    // Color Vertex Buffer Data
                                     0.0, 1.0, 0.0, 
                                     0.0, 0.0, 1.0 ]);
const indices   = new Uint16Array( [ 0, 1, 2 ]);       // Index Buffer Data

const createBuffer = (arrData, usage) => {
  const buffer = device.createBuffer({ size            : ((arrData.byteLength + 3) & ~3),
                                       usage           : usage,
                                       mappedAtCreation: true  });
  if ( arrData instanceof Float32Array )
  { (new Float32Array(buffer.getMappedRange())).set(arrData) }
  else 
  { (new Uint16Array (buffer.getMappedRange())).set(arrData) }
  buffer.unmap();
  return buffer;
}

// Declare buffer handles (GPUBuffer)
var positionBuffer = createBuffer(positions, GPUBufferUsage.VERTEX);
var colorBuffer    = createBuffer(colors,    GPUBufferUsage.VERTEX);
var indexBuffer    = createBuffer(indices,   GPUBufferUsage.INDEX);

// ----------------------------------------------------------------

function buildMatrix(p, r, s) // position, rotation, scale
{
    // if not set fall back to default values
    if (!s) s = {x:1, y:1, z:1};
    if (!r) r = {x:0, y:0, z:0};
    if (!p) p = {x:0, y:0, z:0};
  
    // Create the matrix in Javascript (using matrix library)
    const modelMatrix = mat4.create();

    // create the model transform with a rotation and translation
    let translateMat = mat4.create();   mat4.fromTranslation( translateMat, Object.values(p) );
    let rotateXMat   = mat4.create();   mat4.fromXRotation( rotateXMat, r.x );
    let rotateYMat   = mat4.create();   mat4.fromYRotation( rotateYMat, r.y );
    let rotateZMat   = mat4.create();   mat4.fromZRotation( rotateZMat, r.z );
    let scaleMat     = mat4.create();   mat4.fromScaling( scaleMat, Object.values(s) );

    mat4.multiply(modelMatrix, modelMatrix, translateMat);
    mat4.multiply(modelMatrix, modelMatrix, rotateXMat);
    mat4.multiply(modelMatrix, modelMatrix, rotateYMat);
    mat4.multiply(modelMatrix, modelMatrix, rotateZMat);
    mat4.multiply(modelMatrix, modelMatrix, scaleMat);
    return modelMatrix;
}

// build a model matrix (scale, rotate and position it wherever we want)
let modelMatrix = buildMatrix();
   
// setup the projection
let projectionMatrix = mat4.create(); 
mat4.perspective(projectionMatrix, Math.PI / 2, canvas.width / canvas.height, 0.001, 5000.0);

// default camera `lookat` - camera is at -4 units down the z-axis looking at '0,0,0'
let viewMatrix = mat4.create();
mat4.lookAt(viewMatrix, [0,0,-4], [0,0,0], [0,1,0]);


let mvpUniformBuffer = device.createBuffer({
  size:  64*3,
  usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST
});

device.queue.writeBuffer(mvpUniformBuffer,      0,      modelMatrix);
device.queue.writeBuffer(mvpUniformBuffer,      64,     viewMatrix);
device.queue.writeBuffer(mvpUniformBuffer,      128,    projectionMatrix);

// ----------------------------------------------------------------

let sceneUniformBindGroupLayout = device.createBindGroupLayout({
  entries: [
    { binding: 0, visibility: GPUShaderStage.VERTEX,   buffer: { type: "uniform" } } 
  ]
});

let uniformBindGroup = device.createBindGroup({
  layout:   sceneUniformBindGroupLayout,
  entries: [
    { binding: 0, resource: { buffer: mvpUniformBuffer } }
   ],
});

// ----------------------------------------------------------------

const pipeline = device.createRenderPipeline({
  layout: device.createPipelineLayout({bindGroupLayouts: [sceneUniformBindGroupLayout]}),
  vertex:    { module     : device.createShaderModule({ code: vertWGSL }),
               entryPoint : 'main',
               buffers    : [ { arrayStride: 12, attributes: [{ shaderLocation: 0,
                                                                format: "float32x3",
                                                                offset: 0  }]         },
                              { arrayStride: 12, attributes: [{ shaderLocation: 1,
                                                                format: "float32x3",
                                                                offset: 0  }]         }
    ]
  },
  fragment:  { module     : device.createShaderModule({ code: fragWGSL }),
               entryPoint : 'main',
               targets    : [ { format: presentationFormat } ],
  },
  primitive: {
    topology:  "triangle-list",
    frontFace: "cw",
    cullMode:  'none'
  },
  depthStencil: {
    format: "depth24plus",
    depthWriteEnabled: true,
    depthCompare: "less"
  }
});

const depthTexture = device.createTexture({
  size: [canvas.width, canvas.height, 1],
  format: "depth24plus",
  usage:  GPUTextureUsage.RENDER_ATTACHMENT
})


let counter = 0.0;

function frame() 
{  
  // setup a transform for each triangle 
  let tris = [  { p:{x:0,y:0,z:0}, r:{x:0,y:0.0,z:0.0}, s:{x:1.0, y:1.0, z:1.0} },
                { p:{x:1,y:0,z:0}, r:{x:0,y:0.2,z:0.0}, s:{x:1.0, y:1.1, z:1.0} },
                { p:{x:0,y:0,z:2}, r:{x:0,y:2.0,z:0.0}, s:{x:0.7, y:1.2, z:1.0} },
                { p:{x:1,y:0,z:1}, r:{x:0,y:1.4,z:0.0}, s:{x:1.0, y:0.5, z:0.5} } ];
  
  // loop over each triangle and render it
  tris.forEach( (t,k)=>{
   
    let modelMatrix = buildMatrix(t.p, t.r, t.s);
    // update the local matrix for each triangle draw differently
    device.queue.writeBuffer(mvpUniformBuffer,      0,      modelMatrix);

    // Rotate the camera around the origin in a circle
    let cameraEye = [ Math.cos(counter)*3.0, 0.0, Math.sin(counter)*3.0 ];
    mat4.lookAt(viewMatrix, cameraEye,  [0,0,0], [0,1,0]);
    device.queue.writeBuffer(mvpUniformBuffer,      64,     viewMatrix);
    
    // simple counter
    counter += 0.001;
    
    const renderPassDescription = {
      colorAttachments: [{
        view: context.getCurrentTexture().createView(),
        loadOp: (k==0 ? "clear" : "load"), 
        clearValue: [0, 0.5, 0.5, 1], // clear screen to color
        storeOp: 'store'
      }],
      depthStencilAttachment: {
        view: depthTexture.createView(),
        depthLoadOp: (k==0 ? "clear" : "load"), 
        depthClearValue: 1,
        depthStoreOp: "store",
      }
    };
    
    renderPassDescription.colorAttachments[0].view = context.getCurrentTexture().createView();
    const commandEncoder = device.createCommandEncoder();
    const renderPass = commandEncoder.beginRenderPass(renderPassDescription);
    
    renderPass.setBindGroup(0, uniformBindGroup);
    renderPass.setPipeline(pipeline);
    renderPass.setVertexBuffer(0, positionBuffer);
    renderPass.setVertexBuffer(1, colorBuffer);
    renderPass.setIndexBuffer(indexBuffer, 'uint16');
    renderPass.drawIndexed(3, 1);
    renderPass.end();
    device.queue.submit([commandEncoder.finish()]);
  });
  
  // animate - keep updating
  requestAnimationFrame(frame);
}

frame();


console.log('ready...');


Resources


• WebGPU Lab Example (Matrices & Triangles) [LINK]










































