www.xbdev.net
xbdev - software development
WebGPU/WGSL Tutorials and Articles

Graphics and Compute ...

 


Compute Shader (WGSL) and Web Canvas (NOT the Graphics Pipeline)



Compute to Canvas (Screen) using Textures. Loads and maps a texture onto the circle surface (more than just a flat color).


The purpose of this example is to use a compute shader to generate graphics and write the output directly to the canvas texture (not via the render pipeline).

For this example, you'll do a simple ray-tracing example - well, sort of: it's a signed distance calculation for a circle (negative inside the circle, positive outside).
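The distance test the shader performs can be sketched in plain JavaScript - a minimal helper (name `sdfCircle` is ours, not from the demo), matching the `length(uv) - radius` expression used later in the WGSL:

```javascript
// Signed distance from point (x, y) to a circle of the given radius,
// centred at the origin: negative inside, zero on the edge, positive outside.
function sdfCircle(x, y, radius) {
  return Math.sqrt(x * x + y * y) - radius;
}
```

The compute shader evaluates exactly this per pixel and branches on the sign to decide whether the pixel gets the texture color or the background color.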

The meat of the code is actually creating the textures with the appropriate usage flags (e.g., the canvas texture must be configured to allow GPU copying so the compute output can be copied onto it).
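The usage flags are single-bit constants (values per the WebGPU spec), so multiple usages combine with bitwise OR - a small sketch of the mask used for the canvas below:

```javascript
// GPUTextureUsage flag values as defined by the WebGPU specification
const COPY_SRC          = 0x01;
const COPY_DST          = 0x02;
const TEXTURE_BINDING   = 0x04;
const STORAGE_BINDING   = 0x08;
const RENDER_ATTACHMENT = 0x10;

// Canvas texture: renderable, plus a valid copy source/destination
const canvasUsage = RENDER_ATTACHMENT | COPY_SRC | COPY_DST;

// Test an individual capability with a bitwise AND
const canCopyTo = (canvasUsage & COPY_DST) !== 0;
```

In the browser these constants are available as `GPUTextureUsage.COPY_DST` etc., which is the readable form the demo uses.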

To make the example extra juicy - instead of just rendering a flat-colored circle to the output - we also load an image and bind it to the compute pipeline as a texture_2d, which the shader uses to color the circle (in this example it's a rainbow zebra pattern).


The steps for the code:
• 1. Load the image (basic JavaScript image-loading code)
• 2. Initialize WebGPU
• 3. Get the WebGPU canvas context (and the texture handle for it) - it's important the correct usage flags are set in the context configuration
• 4. Set up the texture buffers (for the pipeline to render to)
• 5. Create the compute bind group layout/group (so the shader and pipeline both know what goes in/out)
• 6. Create the pipeline
• 7. Write the WGSL compute shader code (compile this and pass it to the pipeline)
• 8. After the compute shader has finished, do a 'texture-to-texture' copy to put the output on the canvas.
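Step 1 has a subtle detail worth isolating: pixel rows may need repacking so the row pitch is aligned to 256 bytes (the alignment WebGPU requires for buffer/texture copies). A hypothetical standalone helper (our own sketch; the demo inlines the same idea):

```javascript
// Repack tightly-packed RGBA pixel rows into a buffer whose row pitch is
// aligned to 256 bytes. The tail of each destination row is left as zero.
function padRows(rgba, width, height) {
  const tight = width * 4;                     // bytes per source row
  const pitch = Math.ceil(tight / 256) * 256;  // aligned bytes per row
  if (pitch === tight) return new Uint8Array(rgba);
  const out = new Uint8Array(pitch * height);
  for (let y = 0; y < height; ++y) {
    // copy one whole source row to the start of the padded destination row
    out.set(rgba.subarray(y * tight, (y + 1) * tight), y * pitch);
  }
  return out;
}
```

When uploading with `device.queue.writeTexture`, the `bytesPerRow` you pass must describe the actual (padded) layout of the data, not the tight width.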


The demo only does a single-shot pass - however, you can modify the final lines of code so they loop (run the compute shader, copy to the canvas, ...repeat).
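One way that loop could be sketched (untested here; assumes the `device`, `context`, `computePipeline`, `bindGroup` and `texture1` objects from the demo below are in scope). The key detail: a canvas texture is only valid for the current frame, so `context.getCurrentTexture()` must be called again inside every frame:

```javascript
// Sketch: turn the single-shot demo into a per-frame loop.
function startRenderLoop(device, context, computePipeline, bindGroup, texture1, size) {
  function frame() {
    const encoder = device.createCommandEncoder();
    const pass = encoder.beginComputePass();
    pass.setPipeline(computePipeline);
    pass.setBindGroup(0, bindGroup);
    pass.dispatchWorkgroups(size / 8, size / 8); // shader workgroup_size is 8x8
    pass.end();
    // grab a fresh canvas texture every frame - last frame's handle is stale
    encoder.copyTextureToTexture(
      { texture: texture1 },
      { texture: context.getCurrentTexture() },
      { width: size, height: size, depthOrArrayLayers: 1 });
    device.queue.submit([encoder.finish()]);
    requestAnimationFrame(frame);
  }
  requestAnimationFrame(frame);
}
```

To animate, you'd also update a uniform (e.g., a time value) before each dispatch - the demo as written has no per-frame inputs, so looping it redraws the same image.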


/*
  Create a `compute' pipeline which will do all the work - compute -> texture -> canvas (screen)
*/

let div = document.createElement('div');
document.body.appendChild( div );
div.style['font-size'] = '20pt';
function log( s )
{
  console.log( s );
  let args = [...arguments].join(' ');
  div.innerHTML += args + '<br><br>';
}


const img = document.createElement("img");
img.src = "https://webgpulab.xbdev.net/var/images/zebra.jpg";

await img.decode();
let imgFileWidth  = img.width;
let imgFileHeight = img.height;
  
let textureData = null;

const imageCanvas = document.createElement('canvas');
imageCanvas.width =  imgFileWidth;
imageCanvas.height = imgFileHeight;
const imageCanvasContext = imageCanvas.getContext('2d');
imageCanvasContext.drawImage(img, 0, 0, imgFileWidth, imgFileHeight);
const imageData = imageCanvasContext.getImageData(0, 0, imgFileWidth, imgFileHeight);
  
// Pad each row so the row pitch is a multiple of 256 bytes (the alignment
// WebGPU requires for buffer/texture copies)
const rowPitch = Math.ceil(imgFileWidth * 4 / 256) * 256;

if (rowPitch == imgFileWidth * 4) {
  textureData = imageData.data;
} else {
  textureData = new Uint8Array(rowPitch * imgFileHeight);
  for (let y = 0; y < imgFileHeight; ++y) {
    for (let x = 0; x < imgFileWidth; ++x) {
      let src = x * 4 + y * imgFileWidth * 4; // tightly-packed source index
      let dst = x * 4 + y * rowPitch;         // padded destination index
      textureData[dst]     = imageData.data[src];
      textureData[dst + 1] = imageData.data[src + 1];
      textureData[dst + 2] = imageData.data[src + 2];
      textureData[dst + 3] = imageData.data[src + 3];
    }
  }
}
  
// -----------------------------------------------------------------------------

log('WebGPU Compute Example');

if (!navigator.gpu) { log("WebGPU is not supported (or is it disabled? flags/settings)"); return; }

const adapter = await navigator.gpu.requestAdapter();
const device  = await adapter.requestDevice();


const imgWidth = 256;
const imgHeight = imgWidth;

let canvasa = document.createElement('canvas');
document.body.appendChild( canvasa );
canvasa.width = canvasa.height = imgWidth;

const context = canvasa.getContext('webgpu');
const presentationFormat = navigator.gpu.getPreferredCanvasFormat(); 
console.log('presentationFormat:', presentationFormat );


context.configure({ device: device,
                    usage: GPUTextureUsage.RENDER_ATTACHMENT | GPUTextureUsage.COPY_SRC | GPUTextureUsage.COPY_DST,
                    // 'rgba8unorm' so the canvas format matches texture1 for the
                    // texture-to-texture copy (the preferred canvas format may
                    // be 'bgra8unorm' on some platforms)
                    format: "rgba8unorm" /*presentationFormat*/  });

let canvasTexture = context.getCurrentTexture();

// ----------------------------------------------------

const texture0 = device.createTexture({
  size: [imgFileWidth, imgFileHeight, 1],
  format: "rgba8unorm",
  usage: GPUTextureUsage.COPY_DST | GPUTextureUsage.TEXTURE_BINDING // 0x2 | 0x4
});

// writeTexture() is synchronous - no await needed
device.queue.writeTexture(
    { texture: texture0 },
    textureData,
    { bytesPerRow: rowPitch }, // row pitch of the (possibly padded) data
    [ imgFileWidth, imgFileHeight, 1 ]
  );

// -----------------------------------------------------------------------------

const texture1 = device.createTexture({
  size: [imgWidth, imgHeight, 1],
  format: "rgba8unorm",
  usage: GPUTextureUsage.COPY_DST | GPUTextureUsage.COPY_SRC | GPUTextureUsage.TEXTURE_BINDING | GPUTextureUsage.STORAGE_BINDING
});


const sampler = device.createSampler({
  minFilter: "linear",
  magFilter: "linear"
});

const GCOMPUTE = GPUShaderStage.COMPUTE;

// Bind group layout and bind group
const bindGroupLayout = device.createBindGroupLayout({
  entries: [ {binding: 0, visibility: GCOMPUTE, sampler: { type: "filtering"   }   }, 
             {binding: 1, visibility: GCOMPUTE, texture: { sampleType: "float" }   }, 
             {binding: 2, visibility: GCOMPUTE, storageTexture: {format:"rgba8unorm", access:"write-only", viewDimension:"2d"}   }
           ]
});

const bindGroup = device.createBindGroup({
  layout: bindGroupLayout,
  entries: [ { binding: 0, resource: sampler               },
             { binding: 1, resource: texture0.createView() },
             { binding: 2, resource: texture1.createView() }
           ]
});


// Compute shader code
const computeShader = ` 
@group(0) @binding(0) var mySampler: sampler;
@group(0) @binding(1) var myTexture0: texture_2d<f32>;
@group(0) @binding(2) var myTexture1: texture_storage_2d<rgba8unorm, write>;

@compute @workgroup_size(8, 8)
fn main(@builtin(global_invocation_id) globalId      : vec3<u32>,
        @builtin(local_invocation_id)  localId       : vec3<u32>,
        @builtin(workgroup_id)         workgroupId   : vec3<u32>,
        @builtin(num_workgroups)       numWorkgroups : vec3<u32>
        ) 
{
    var coords = vec2<f32>( f32(globalId.x), f32(globalId.y) ) * 3.0;
    
    var uv = vec2<f32>( f32(globalId.x), f32(globalId.y) ); // uvs * 2.0 - 1.0; 
    uv = uv / ${imgWidth}; // 0 - 1
    uv = uv * 2.0 - 1.0; // -1 to 1.0

    var radius = 0.5; // circle radius
    var sdf = length(uv) - radius;

    var color = vec4(0.0, 1.0, 0.0, 1.0); // outside the circle - green

    if (sdf < 0.0) {
        // color = vec4(1.0, 0.0, 0.0, 1.0); // inside the circle - flat red (no texture)

        // inside the circle - sample the loaded zebra image instead
        color = textureLoad( myTexture0, vec2<i32>( i32(coords.x), i32(coords.y) ), 0 );

        color.a = 0.0;
    }

    textureStore(myTexture1, vec2<i32>( i32(globalId.x), i32(globalId.y) ), color );
}
`;
  

// Pipeline setup
const computePipeline = device.createComputePipeline({
    layout :   device.createPipelineLayout({bindGroupLayouts: [bindGroupLayout]}),
    compute: { module    : device.createShaderModule({code:computeShader}),
               entryPoint: "main" }
});


// Command submission
const commandEncoder = device.createCommandEncoder();
const passEncoder = commandEncoder.beginComputePass();
passEncoder.setPipeline(computePipeline);
passEncoder.setBindGroup(0, bindGroup);
passEncoder.dispatchWorkgroups( imgWidth/8, imgHeight/8 ); // 256/8 = 32 workgroups per axis
passEncoder.end(); // end() is synchronous - no await needed


// copyTextureToTexture() just records the command - no await needed
commandEncoder.copyTextureToTexture( { texture: texture1 },
                                     { texture: canvasTexture },
                                     { width:imgWidth, height:imgHeight, depthOrArrayLayers:1 } );

// Submit GPU commands.
const gpuCommands = commandEncoder.finish();
device.queue.submit([gpuCommands]); // submit() is synchronous - no await needed

console.log('all good...');





Resources and Links


• Live Example on WebGPU Lab [LINK]






























 
Copyright (c) 2002-2026 xbdev.net - All rights reserved.
Designated articles, tutorials and software are the property of their respective owners.