WebGPU 'Compute'..

Compute, Algorithms, and Code.....

 


Sound is very important - a fun thing to get the ball rolling - from the famous movie - Dumb and Dumber - "I can't hear you. la la la la la....".


Audio Echo


We'll write a simple WGSL compute shader for audio processing - specifically for applying an echo effect.

We'll then set up the buffers in JavaScript to pass the audio data to the GPU and visualize the waveform before and after processing.


Adding an audio echo effect to sound files


Functions Used: requestAdapter(), requestDevice(), createBuffer(), writeBuffer(), createBindGroupLayout(), createBindGroup(), createShaderModule(), createComputePipeline(), createCommandEncoder(), beginComputePass(), setPipeline(), setBindGroup(), dispatchWorkgroups(), end(), finish(), submit(), copyBufferToBuffer(), mapAsync(), getMappedRange(), unmap()

Loading an audio sample and mixing it with a delayed version of itself (on the compute shader).

Just for fun - it also visualizes the original and the new sound signal.

Plays the sound as well - so you can actually hear it!! For this test example it's just someone saying 'hello'.
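As a reference, here's a minimal CPU sketch of that delay-and-mix idea - the 'echoCPU' helper is illustrative only (not part of the demo code), with defaults that mirror the shader constants used later. It's handy for sanity-checking the GPU output:

// Minimal CPU reference of the echo mix (illustrative helper - not part of
// the demo): output[i] = input[i] + factor * input[i - delay]
function echoCPU(input, delaySamples = 2200, delayFactor = 0.5) {
    const output = new Float32Array(input.length);
    for (let i = 0; i < input.length; i++) {
        output[i] = input[i];
        if (i >= delaySamples) {
            output[i] += delayFactor * input[i - delaySamples];
        }
    }
    return output;
}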


The steps:
• Create an AudioContext to load and play the audio file.
• Then load the audio file using fetch and decode it using decodeAudioData.
• Convert the audio data to mono and normalize it to ensure it's in the range [-1, 1].
• Set up GPU buffers for the audio data and copy the audio data to these buffers (WebGPU).
• Run the compute shader to apply the echo effect to the audio data.
• Visualize the waveform before and after processing using the visualizeWaveform function.


JavaScript code to set up the buffers and visualize the waveform:

// Create an AudioContext to load and play audio
const audioContext = new AudioContext();

let audioFileURL = 'https://webgpulab.xbdev.net/var/resources/hello3.mp3';

// Load audio file
let fp          = await fetch(audioFileURL);
let arrayBuffer = await fp.arrayBuffer();
let audioBuffer = await audioContext.decodeAudioData(arrayBuffer);
console.log('audioBuffer:', audioBuffer);


// Convert audio data to mono (if stereo) and normalize it
const channels = audioBuffer.numberOfChannels;
const samples  = audioBuffer.getChannelData(0);
const normalizedSamples = new Float32Array(samples);
for (let i = 0; i < samples.length; i++) {
    normalizedSamples[i] /= Math.max(Math.abs(samples[i]), 1.0);
}

const sampleRate = audioBuffer.sampleRate;
console.log('sampleRate:', sampleRate);

// Initialize WebGPU
const adapter = await navigator.gpu.requestAdapter();
const device  = await adapter.requestDevice();

// Create GPU buffers for audio data
const audioBufferGPU = device.createBuffer({
    size:  normalizedSamples.byteLength,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC | GPUBufferUsage.COPY_DST
});
const processedBufferGPU = device.createBuffer({
    size:  normalizedSamples.byteLength,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC | GPUBufferUsage.COPY_DST
});

// Copy audio data to GPU buffers
device.queue.writeBuffer(audioBufferGPU, 0, normalizedSamples);


const BUFFER_SIZE = normalizedSamples.length;


const bindGroupLayout = device.createBindGroupLayout({
    entries: [ { binding: 0, visibility: GPUShaderStage.COMPUTE, buffer: { type: "storage" } },
               { binding: 1, visibility: GPUShaderStage.COMPUTE, buffer: { type: "storage" } }
             ]
});

const bindGroup = device.createBindGroup({
    entries: [
        { binding: 0, resource: { buffer: audioBufferGPU     } },
        { binding: 1, resource: { buffer: processedBufferGPU } }
    ],
    layout: bindGroupLayout
});


const computeShaderModule = device.createShaderModule({
    code: wgslcompute,
});

const computePipe = device.createComputePipeline({
    layout:  device.createPipelineLayout({ bindGroupLayouts: [bindGroupLayout] }),
    compute: { module:     computeShaderModule,
               entryPoint: "main" }
});

{
const commandEncoder = device.createCommandEncoder();
const passEncoder    = commandEncoder.beginComputePass();
passEncoder.setPipeline(computePipe);
passEncoder.setBindGroup(0, bindGroup);
// A single workgroup is enough here - the shader loops over the whole buffer itself
passEncoder.dispatchWorkgroups( 1 );
passEncoder.end();

const gpuCommands = commandEncoder.finish();
device.queue.submit([gpuCommands]);
}

// ------------------------------------------------------------------

// Get the buffer back to visualize the difference

// Note this buffer has no 'STORAGE' usage - it's only used to bring the data back to the CPU
const bufferTmp  = new Float32Array( samples.length );
const gbufferTmp = device.createBuffer({ size: bufferTmp.byteLength, usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ });

const commandEncoder = device.createCommandEncoder();
// Encode commands for copying buffer to buffer.
commandEncoder.copyBufferToBuffer(
    processedBufferGPU,   // source buffer
    0,                    // source offset
    gbufferTmp,           // destination buffer
    0,                    // destination offset
    bufferTmp.byteLength  // size
);

// Submit GPU commands.
const gpuCommands = commandEncoder.finish();
device.queue.submit([gpuCommands]);


// Map and read the buffer
await gbufferTmp.mapAsync(GPUMapMode.READ);
const arrayBuffer0 = gbufferTmp.getMappedRange();
const dataTmp      = new Float32Array(arrayBuffer0);
const buf0         = Array.from(dataTmp);
console.log('arrayBuffer0:', dataTmp[0]);
// Clean up
gbufferTmp.unmap();

console.log("Value after computation:", buf0[0]);

// Visualize the waveform before and after processing
visualizeWaveform(normalizedSamples, "original-waveform");
visualizeWaveform(buf0,              "processed-waveform");


// Function to visualize the waveform
function visualizeWaveform(samples, canvasId) {
    const canvas = document.createElement('canvas');
    document.body.appendChild( canvas );
    canvas.id = canvasId; // tag the canvas so the two plots can be told apart
    canvas.style.border = '1px solid blue';

    // Size the canvas before touching the context (resizing resets context state)
    canvas.width  = window.innerWidth;
    canvas.height = 200;

    const context = canvas.getContext("2d");
    context.lineWidth = 1.0;

    context.clearRect(0, 0, canvas.width, canvas.height);
    context.beginPath();
    context.strokeStyle = '#ff0000';
    context.moveTo(0, canvas.height / 2);

    const scale = canvas.width / samples.length;
    console.log('samples.length:', samples.length);

    for (let i = 0; i < samples.length; i++)
    {
        const x = i * scale;
        const y = (1.0 + samples[i]) * canvas.height / 2; // map [-1,1] onto the canvas height
        context.lineTo(x, y);
    }

    context.stroke();
}


// Play audio

// After processing the audio and copying it back to CPU, create a new audio buffer
const processedAudioBuffer = audioContext.createBuffer(
    1,                       // Number of channels (mono)
    BUFFER_SIZE,             // Length of the buffer
    audioContext.sampleRate  // Sample rate
);

await gbufferTmp.mapAsync(GPUMapMode.READ);
// Fill the processed audio buffer with the processed audio data
const processedSamples = new Float32Array(gbufferTmp.getMappedRange());
processedAudioBuffer.copyToChannel(processedSamples, 0, 0);
gbufferTmp.unmap();

// Create an AudioBufferSourceNode and connect it to the audio context destination
const source  = audioContext.createBufferSource();
source.buffer = processedAudioBuffer;
source.connect(audioContext.destination);

// Start playing the processed audio
source.start();



The compute shader is very simple - but it's a powerful example of audio processing on the GPU, essentially showing you how to pass in an original signal and create a new output signal.

The compute shader for adding the echo effect is given below:

// Input buffer containing the audio data
@group(0) @binding(0) var<storage, read_write> audioBuffer     : array<f32, ${BUFFER_SIZE}>;
// Output buffer to store the processed audio data
@group(0) @binding(1) var<storage, read_write> processedBuffer : array<f32, ${BUFFER_SIZE}>;

// Define the size of the audio buffer
const BUFFER_SIZE = ${BUFFER_SIZE};

// Define the parameters for the echo effect
const DELAY_SAMPLES = 2200;  // Delay time in samples
const DELAY_FACTOR  = 0.5;   // Echo intensity factor

// Define the main function to apply the echo effect
@compute @workgroup_size(1, 1)
fn main(@builtin(global_invocation_id) globalId      : vec3<u32>,
        @builtin(local_invocation_id)  localId       : vec3<u32>,
        @builtin(workgroup_id)         workgroupId   : vec3<u32>,
        @builtin(num_workgroups)       workgroupSize : vec3<u32>
        )
{
    // Apply the echo effect (a single thread loops over every sample)
    for (var i = 0u; i < BUFFER_SIZE; i = i + 1u)
    {
        if (i >= DELAY_SAMPLES)
        {
            // Mix the current sample with the sample DELAY_SAMPLES behind it
            processedBuffer[i] = audioBuffer[i] + DELAY_FACTOR * audioBuffer[i - DELAY_SAMPLES];
        }
        else
        {
            processedBuffer[i] = audioBuffer[i];
        }
    }
}


For the compute example, it only uses a single thread (one workgroup is dispatched and the shader loops over the whole buffer) - but you can change the 'i' so that it uses the 'globalId' and ramp up the workgroup size (distribute the calculation over multiple threads), as sketched below.
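A minimal sketch of that parallel version, assuming the same buffers, bindings, and constants as the shader above (the workgroup size of 64 is just an example choice):

// Parallel variant (sketch) - one invocation per sample instead of one
// thread looping over everything; assumes the same audioBuffer /
// processedBuffer bindings and BUFFER_SIZE / DELAY_SAMPLES / DELAY_FACTOR
// constants as the echo shader above.
@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) globalId : vec3<u32>)
{
    let i = globalId.x;
    if (i >= BUFFER_SIZE) { return; } // guard against the rounded-up dispatch

    if (i >= DELAY_SAMPLES) {
        processedBuffer[i] = audioBuffer[i] + DELAY_FACTOR * audioBuffer[i - DELAY_SAMPLES];
    } else {
        processedBuffer[i] = audioBuffer[i];
    }
}

On the JavaScript side the dispatch then becomes passEncoder.dispatchWorkgroups(Math.ceil(BUFFER_SIZE / 64)); so that every sample gets its own invocation.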

The important thing is that you've sent the audio to the GPU, received it back, and checked it's actually getting processed (visualize and play the newly generated audio).

The output plot looks similar - as it's the same signal but with an 'echo' (the same signal added back at an offset). You can notice small differences if you look closely.

If you want to check it's working, change the 'DELAY_FACTOR' to a larger value (e.g., 2.0) - you'll see the signal amplitude jump, with a noticeable step in size when the echo signal kicks in.


Output plot of the audio signals (original on top and the new signal with the echo on the bottom)


When you run the code - the audio also plays - so you'll hear someone say 'hello' but with an echo overlaid (like they're saying it in a tunnel).


Things to Try


• Try modifying the audio delay (the 'DELAY_SAMPLES' constant in the compute shader)
• Scale the echo (louder than the original)
• Mix a 'sine' wave with the echo
• Try other sounds (longer ones)
• Load in 2 sound buffers and pass them to the compute shader - do the mixing on the compute shader (add the two sounds and create a new one) - see the sketch after this list
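For that last one, a possible starting point is sketched below - the buffer names, the third binding, and the 'MIX' constant are assumptions for illustration, and both sounds are assumed to be BUFFER_SIZE samples long:

// Hypothetical mixing shader (sketch) - two input sounds at bindings 0 and 1,
// mixed output at binding 2; buffers set up like the echo example's.
@group(0) @binding(0) var<storage, read_write> soundA : array<f32, ${BUFFER_SIZE}>;
@group(0) @binding(1) var<storage, read_write> soundB : array<f32, ${BUFFER_SIZE}>;
@group(0) @binding(2) var<storage, read_write> mixed  : array<f32, ${BUFFER_SIZE}>;

const BUFFER_SIZE = ${BUFFER_SIZE};
const MIX = 0.5; // equal blend of the two sounds

@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) globalId : vec3<u32>)
{
    let i = globalId.x;
    if (i >= BUFFER_SIZE) { return; }
    // Weighted sum of the two signals, clamped to keep it in [-1, 1]
    mixed[i] = clamp(MIX * soundA[i] + (1.0 - MIX) * soundB[i], -1.0, 1.0);
}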



Resources and Links


• WebGPU Demo (Echo Sample) [LINK]
