Neural Networks and WebGPU Compute..

Learning from Data.....

 


Client Side Neural Network



The client-side implementation of a neural network is very compact and flexible. You can use it to try out all sorts of different configurations and run simple test cases before scaling things up (running a larger version on the GPU).

Most importantly, it provides a reference for your parallel compute version - if there are any problems or bugs, you can compare the two versions. For instance, you can pre-train the network weights and biases, save them, and load them into your GPU version (checking that the forward pass generates the same results).
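One way to do that cross-check (a minimal sketch, assuming the `weights`, `biases`, and `activate` defined in the listing further down; `gpuForward` is a hypothetical stand-in for the GPU forward pass) is to serialise the trained parameters to JSON and compare the two forward passes within a small tolerance:

```
// Sketch - serialise the CPU-trained parameters so the GPU version can load the same values.
function exportParameters() {
    return JSON.stringify({ weights, biases });
}

// Sketch - compare the CPU and GPU forward-pass outputs within a floating-point tolerance.
function sameForwardPass(cpuOut, gpuOut, epsilon = 1e-5) {
    return cpuOut.every((v, i) => Math.abs(v - gpuOut[i]) < epsilon);
}

// Usage (hypothetical):
// const saved = exportParameters();                                    // after CPU training
// ... rebuild the GPU buffers from `saved` ...
// console.log( sameForwardPass( activate([0,1]), gpuForward([0,1]) ) );
```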

Data is the real beast! Neural networks are very elegant and beautiful - I guess you could say that bringing them together creates a 'beauty and the beast' scenario. Data is huge, non-linear, dirty, and difficult to model - but with a bit of time, neural networks and the data come together.


XOR (2-3-1 Network)


The implementation of the neural network is very lean and straightforward - it doesn't include any bells and whistles at this stage. It's just about getting things up and running, then porting it to the compute shader.

The aim is to check that the code works, so later versions can be matched against this CPU (JavaScript) one.

• Simple neural network - tested with XOR and back propagation
• 'Arrays' of data - shifting all the neural network data into large blocks of arrays, accessed using indexes
• Scalable solution - define the network dimensions, e.g., layers = [2,3,4,...] (see the small sketch after this list)
• Be warned - adding lots of extra layers really slows down the training
• Debug check code is left in - e.g., manual unrolling of arrays/loops for the fixed test case [2,3,1] - XOR
• Some asserts scattered around to check basic array size alignment/data
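
To make the scaling/speed point concrete, here is a small sketch (not part of the original code) that counts how many weights and biases a given `layers` configuration needs - the work per training sample grows roughly with this count:

```
// Sketch - count the trainable parameters for a fully-connected network description.
function countParameters(layers) {
    let weights = 0, biases = 0;
    for (let i = 0; i < layers.length - 1; i++) {
        weights += layers[i] * layers[i + 1]; // every neuron in layer i connects to every neuron in layer i+1
        biases  += layers[i + 1];             // one bias per neuron (ignoring the unused input-layer biases)
    }
    return { weights, biases };
}

console.log( countParameters([2, 3, 1]) );      // { weights: 9, biases: 4 }
console.log( countParameters([2, 16, 16, 1]) ); // grows quickly as layers/neurons are added
```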

The following shows what the output of the implementation looks like. Note that there will be slight differences in the numerical results, as the starting weights are picked randomly.

["epoch 0 mean squared error: 0.2505780004933572"]
[
"epoch 1000 mean squared error: 0.2500711230602899"]
[
"epoch 2000 mean squared error: 0.24999781937254667"]
[
"epoch 3000 mean squared error: 0.24964737287776492"]
[
"epoch 4000 mean squared error: 0.22870728006502195"]
[
"epoch 5000 mean squared error: 0.12208624314358762"]
[
"epoch 6000 mean squared error: 0.019941250349789837"]
[
"epoch 7000 mean squared error: 0.006708260529154022"]
[
"epoch 8000 mean squared error: 0.0036642395367303708"]
[
"epoch 9000 mean squared error: 0.0024493101347769163"]
[
"epoch 10000 mean squared error: 0.001817150331867134"]
[
"for input 0,0 expected 0 predicted 0.0328 which is correct"]
[
"for input 0,1 expected 1 predicted 0.9503 which is correct"]
[
"for input 1,0 expected 1 predicted 0.9717 which is correct"]
[
"for input 1,1 expected 0 predicted 0.0540 which is correct"]
```
  

A 2-input XOR (exclusive OR) gate produces a true (1) output only when the number of true (1) inputs is odd. 

Here is the truth table for a 2-input XOR gate:

| A | B | XOR |
|---|---|-----|
| 0 | 0 |  0  |
| 0 | 1 |  1  |
| 1 | 0 |  1  |
| 1 | 1 |  0  |
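
For reference, the same truth table can be generated directly with JavaScript's bitwise XOR operator - handy if you want to build larger target datasets programmatically:

```
// Sketch - the target function the network is learning, using the built-in ^ operator.
for (const a of [0, 1]) {
    for (const b of [0, 1]) {
        console.log(`${a} XOR ${b} = ${a ^ b}`);
    }
}
```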


The complete JavaScript code for the nested-array version, using an XOR dataset:

const VARIANCE_W = 0.5;
//const randomUniform = (min, max) => { return 0.123; }; // Math.random() * (max - min) + min;
const randomUniform = (min, max) => Math.random() * (max - min) + min;

const ru = ()=>{ return randomUniform(-VARIANCE_W, VARIANCE_W); }

const xordataset = [ { inputs: [0,0], outputs: [0] },
{ inputs: [0,1], outputs: [1] },
{ inputs: [1,0], outputs: [1] },
{ inputs: [1,1], outputs: [0] } ];

const layers = [ 2, 3, 1 ];

console.assert( xordataset[0].outputs.length == layers[ layers.length-1 ] );


const weights = [];
for (let i=0; i<layers.length-1; i++) {
    weights[i] = Array( layers[i] ).fill(0);
    for (let k=0; k<layers[i]; k++) {
        weights[i][k] = Array( layers[i+1] ).fill(0);
        for (let b=0; b<layers[i+1]; b++) {
            weights[ i ][ k ][ b ] = ru();
        }
    }
}

const biases = [];
const outputs = [];
const errors = [];
for (let i=0; i<layers.length; i++) {
    biases[i] = Array( layers[i] ).fill( 0 );
    outputs[i] = Array( layers[i] ).fill( 0 );
    errors[i] = Array( layers[i] ).fill( 0 );
}

/*
const weights = [
    [ [ ru(), ru(), ru() ],   // 2x3 - layers[0] x layers[1]
      [ ru(), ru(), ru() ] ],
    [ [ ru() ],               // 3x1 - layers[1] x layers[2]
      [ ru() ],
      [ ru() ] ]
];
*/

/*
const biases = [ [ 0, 0 ],
                 [ 0, 0, 0 ],
                 [ 0 ] ];

let outputs = [ [ 0, 0 ],    // input
                [ 0, 0, 0 ], // hidden layer
                [ 0 ] ];     // output

let errors = [ [ 0, 0 ],
               [ 0, 0, 0 ],
               [ 0 ] ];
*/


const sigmoid = (x) => 1.0 / (1.0 + Math.exp(-x));

// note: expects the already-activated output (i.e., sigmoid(x)), not the raw input
const sigmoidPrime = (x) => x * (1 - x);

const activate = (iin) => {
    console.assert( iin.length == outputs[ 0 ].length );
    console.assert( weights[0].length == layers[0] );
    console.assert( weights[0][0].length == layers[1] );
    console.assert( weights[1].length == layers[1] );
    console.assert( weights[1][0].length == layers[2] );

    for (let i=0; i<layers.length; i++) {
        if ( i==0 )
        {
            // input layer - just copy the inputs across
            for (let k=0; k<layers[0]; k++) {
                outputs[ 0 ][ k ] = iin[ k ];
            }
        }
        else
        {
            // weighted sum of the previous layer's outputs, plus bias, through the activation
            for (let k=0; k<layers[i]; k++) {
                var sum = 0.0;
                for (let b=0; b<layers[i-1]; b++) {
                    sum += outputs[i-1][b] * weights[i-1][b][k];
                }
                outputs[i][k] = sigmoid( sum + biases[i][k] );
            }
        }
    }

    /*
    // manually unrolled forward pass for the fixed [2,3,1] case (debug check)
    outputs[0][0] = iin[0];
    outputs[0][1] = iin[1];

    outputs[1][0] = sigmoid( outputs[0][0] * weights[0][0][0] + outputs[0][1] * weights[0][1][0] + biases[1][0] );
    outputs[1][1] = sigmoid( outputs[0][0] * weights[0][0][1] + outputs[0][1] * weights[0][1][1] + biases[1][1] );
    outputs[1][2] = sigmoid( outputs[0][0] * weights[0][0][2] + outputs[0][1] * weights[0][1][2] + biases[1][2] );

    outputs[2][0] = sigmoid( outputs[1][0] * weights[1][0][0] +
                             outputs[1][1] * weights[1][1][0] +
                             outputs[1][2] * weights[1][2][0] + biases[2][0] );
    */

    return outputs[ outputs.length-1 ];
};

const propagate = (iin, target, alpha = 0.2) => {

    /*
    // manually unrolled error calculation for the fixed [2,3,1] case (debug check)
    errors[2][0] = ( target[0] - outputs[2][0] ) * sigmoidPrime( outputs[2][0] );

    errors[1][0] = errors[2][0] * weights[1][0][0] * sigmoidPrime( outputs[1][0] );
    errors[1][1] = errors[2][0] * weights[1][1][0] * sigmoidPrime( outputs[1][1] );
    errors[1][2] = errors[2][0] * weights[1][2][0] * sigmoidPrime( outputs[1][2] );
    */
    for (let i=layers.length-1; i>0; i--)
    {
        for (let k=0; k<layers[i]; k++) {
            if ( i==layers.length-1 )
            {
                // output layer - error comes straight from the target
                errors[i][k] = ( target[k] - outputs[i][k] ) * sigmoidPrime( outputs[i][k] );
            }
            else
            {
                // hidden layers - accumulate the error propagated back from the next layer
                errors[i][k] = 0.0;
                for (let g=0; g<layers[i+1]; g++) {
                    errors[i][k] += errors[i+1][g] * weights[i][k][g] * sigmoidPrime( outputs[i][k] );
                }
            }
        }
    }

    /*
    // manually unrolled weight/bias update for the fixed [2,3,1] case (debug check)
    weights[1][0][0] += alpha * outputs[1][0] * errors[2][0];
    weights[1][1][0] += alpha * outputs[1][1] * errors[2][0];
    weights[1][2][0] += alpha * outputs[1][2] * errors[2][0];
    biases[2][0] += alpha * errors[2][0];

    weights[0][0][0] += alpha * outputs[0][0] * errors[1][0];
    weights[0][1][0] += alpha * outputs[0][1] * errors[1][0];
    biases[1][0] += alpha * errors[1][0];

    weights[0][0][1] += alpha * outputs[0][0] * errors[1][1];
    weights[0][1][1] += alpha * outputs[0][1] * errors[1][1];
    biases[1][1] += alpha * errors[1][1];

    weights[0][0][2] += alpha * outputs[0][0] * errors[1][2];
    weights[0][1][2] += alpha * outputs[0][1] * errors[1][2];
    biases[1][2] += alpha * errors[1][2];
    */

    for (let i=0; i<layers.length; i++) {
        for (let k=0; k<layers[i]; k++) {
            if ( i < layers.length-1 )
                for (let g=0; g<layers[i+1]; g++) {
                    weights[i][k][g] += alpha * outputs[i][k] * errors[i+1][g];
                }

            biases[i][k] += alpha * errors[i][k];
        }
    }

};

// - test the neural network using iteration loop - xor dataset

console.log( new Date() );


for (let epoch = 0; epoch <= 10000; epoch++) {
    // shuffle the training order each epoch
    let indexes = Array.from(Array( xordataset.length ).keys());
    indexes.sort(() => Math.random() - 0.5);
    for (let j of indexes) {
        activate( xordataset[j].inputs );
        propagate( xordataset[j].inputs, xordataset[j].outputs, 0.2);
    }

    if (epoch % 1000 === 0) {
        let cost = 0;
        for (let j = 0; j < xordataset.length; j++) {
            let o = activate( xordataset[j].inputs );
            for (let b=0; b<o.length; b++) {
                cost += Math.pow( xordataset[j].outputs[b] - o[b], 2);
            }
        }
        cost /= 4;
        console.log(`epoch ${epoch} mean squared error: ${cost}`);
    }
}


for (let i = 0; i < xordataset.length; i++)
{
    const result = activate( xordataset[i].inputs );

    console.log(`for input ${xordataset[i].inputs} expected ${xordataset[i].outputs} predicted ${result[0].toFixed(4)} which is ${Math.round(result[0]) === xordataset[i].outputs[0] ? "correct" : "incorrect"}`);
}


Transitioning to Array 'Blocks'

At this point you need to start thinking ahead to when the implementation transitions to the GPU. The nested arrays of arrays (of different sizes and dimensions) might be a problem for the GPU (managing and keeping track of the data).

The data for the weights and biases will shift to **fixed-size** array blocks.

• Test the implementation using XOR and back propagation
• Shift to fixed-size array 'blocks' for the neural network data (layer outputs, weights, errors, ...)
• Scalable solution - specify the network dimensions, e.g., [2,3,1]...
• Some asserts scattered around to check basic array size alignment/data


To manage the code complexity, we add helper functions: ``getWeight``, ``getBias``, and ``getOutput`` (and their matching setters) calculate the offset into the block arrays for the weights, biases, and outputs respectively.
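
As a quick sanity check of the indexing scheme, here is a small sketch (using hypothetical helper names, for the [2,3,1] network where `maxLayerSize` is 3): each layer reserves a fixed-size slice, so every lookup is simple offset arithmetic rather than a nested array traversal.

```
// Sketch - the flat-block offset arithmetic used by the helpers in the listing below.
const MAX = 3; // maxLayerSize for layers = [2,3,1]

const weightIndex = (layer, from, to) => layer * MAX * MAX + from * MAX + to;
const neuronIndex = (layer, neuron)   => layer * MAX + neuron;

console.log( weightIndex(0, 1, 2) ); // 0*9 + 1*3 + 2 = 5 (weight from neuron 1 of layer 0 to neuron 2 of layer 1)
console.log( neuronIndex(2, 0) );    // 2*3 + 0 = 6 (slot for the single output neuron in loutputs/biases/errors)
```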

  
  

const VARIANCE_W = 0.5;
//const randomUniform = (min, max) => { return 0.123; }; // Math.random() * (max - min) + min;
const randomUniform = (min, max) => Math.random() * (max - min) + min;

const ru = ()=>{ return randomUniform(-VARIANCE_W, VARIANCE_W); }

const layers = [ 2, 3, 1 ];
const maxLayerSize = [...layers].sort( (a,b)=>b-a )[0];

const xordataset = [ { inputs: [0,0], outputs: [0] },
{ inputs: [0,1], outputs: [1] },
{ inputs: [1,0], outputs: [1] },
{ inputs: [1,1], outputs: [0] } ];
console.assert( xordataset[0].outputs.length == layers[ layers.length-1 ] );

const weights = Array( layers.length * maxLayerSize * maxLayerSize ).fill( 0 );
const biases = Array( layers.length * maxLayerSize ).fill(0);
const loutputs = Array( layers.length * maxLayerSize ).fill(0);
const errors = Array( layers.length * maxLayerSize ).fill(0);

const MAX_NEURONS_PER_LAYER = maxLayerSize;
const NUM_LAYERS = layers.length;

//------------------------------------------------------------------------------

// initialise all the weights to small random values (using the helper below)
for (let i=0; i<layers.length-1; i++) {
    for (let k=0; k<layers[i]; k++) {
        for (let g=0; g<layers[i+1]; g++) {
            setWeight( i, k, g, ru() );
        }
    }
}

//------------------------------------------------------------------------------

function getBias(layer, neuron) {
    return biases[layer * MAX_NEURONS_PER_LAYER + neuron];
}

function setBias(layer, neuron, value) {
    biases[layer * MAX_NEURONS_PER_LAYER + neuron] = value;
}

function setOutput(layer, neuron, value) {
    loutputs[layer * MAX_NEURONS_PER_LAYER + neuron] = value;
}

function getOutput(layer, neuron) {
    return loutputs[layer * MAX_NEURONS_PER_LAYER + neuron];
}

function getWeight(layer, fromNeuron, toNeuron) {
    return weights[layer * MAX_NEURONS_PER_LAYER * MAX_NEURONS_PER_LAYER
                   + fromNeuron * MAX_NEURONS_PER_LAYER
                   + toNeuron];
}

function setWeight(layer, fromNeuron, toNeuron, value) {
    weights[layer * MAX_NEURONS_PER_LAYER * MAX_NEURONS_PER_LAYER
            + fromNeuron * MAX_NEURONS_PER_LAYER
            + toNeuron] = value;
}

function setError(layer, neuron, value) {
    errors[layer * MAX_NEURONS_PER_LAYER + neuron] = value;
}

function getError(layer, neuron) {
    return errors[layer * MAX_NEURONS_PER_LAYER + neuron];
}

//------------------------------------------------------------------------------

const sigmoid = (x) => 1.0 / (1.0 + Math.exp(-x));

// note: expects the already-activated output (i.e., sigmoid(x)), not the raw input
const sigmoidDerivative = (x) => x * (1 - x);

const relu = (x) => { return Math.max(0.0, x); }

const reluDerivative = (x) => { if (x > 0.0) { return 1.0; } return 0.0; }

const leakyRelu = (x, alpha = 0.01) => { return x > 0 ? x : alpha * x; }

const leakyReluDerivative = (x, alpha = 0.01) => { return x > 0 ? 1 : alpha; }

const activate = (iin) => {
    //console.assert( iin.length == outputs[ 0 ].length );
    //console.assert( weights[0].length == layers[0] );
    //console.assert( weights[0][0].length == layers[1] );
    //console.assert( weights[1].length == layers[1] );
    //console.assert( weights[1][0].length == layers[2] );

    for (let i=0; i<NUM_LAYERS; i++) {
        if ( i==0 )
        {
            // input layer - just copy the inputs across
            for (let k=0; k<layers[0]; k++) {
                setOutput(0, k, iin[ k ] );
            }
        }
        else
        {
            // weighted sum of the previous layer's outputs, plus bias, through the activation
            for (let k=0; k<layers[i]; k++) {
                var sum = 0.0;
                for (let b=0; b<layers[i-1]; b++) {
                    sum += getOutput( i-1, b ) * getWeight( i-1, b, k );
                }
                setOutput( i, k, sigmoid( sum + getBias( i, k ) ) );
            }
        }
    }

    return [ getOutput( NUM_LAYERS-1, 0 ) ];
};

//------------------------------------------------------------------------------


const propagate = (target, alpha = 0.2) => {

    // back-propagate the errors, from the output layer towards the input
    for (let i=NUM_LAYERS-1; i>0; i--)
    {
        for (let k=0; k<layers[i]; k++) {
            if ( i==NUM_LAYERS-1 )
            {
                // output layer - error comes straight from the target
                let error = ( target[k] - getOutput( i, k ) ) * sigmoidDerivative( getOutput( i,k ) );
                setError( i, k, error );
            }
            else
            {
                // hidden layers - accumulate the error propagated back from the next layer
                setError( i, k, 0.0 );
                for (let g=0; g<layers[i+1]; g++) {
                    let error = getError( i, k ) + getError( i+1, g ) * getWeight( i,k,g ) * sigmoidDerivative( getOutput(i,k) );
                    setError( i, k, error );
                }
            }
        }
    }

    // update the weights and biases using the calculated errors
    for (let i=0; i<NUM_LAYERS; i++) {
        for (let k=0; k<layers[i]; k++) {
            if ( i < NUM_LAYERS-1 )
                for (let g=0; g<layers[i+1]; g++) {
                    var weight = getWeight( i, k, g );
                    weight += alpha * getOutput(i,k) * getError(i+1,g);
                    setWeight( i, k, g, weight );
                }

            let bias = getBias( i,k );
            bias += alpha * getError( i, k );
            setBias( i, k, bias );
        }
    }
};

//--------------------------------------------------------------------------------------

// - test the neural network using iteration loop - xor dataset

console.log( new Date() );

for (let epoch = 0; epoch <= 10000; epoch++) {
    // shuffle the training order each epoch
    let indexes = Array.from(Array( xordataset.length ).keys());
    indexes.sort(() => Math.random() - 0.5);
    for (let j of indexes) {
        activate( xordataset[j].inputs );
        propagate( xordataset[j].outputs, 0.2);
    }

    if (epoch % 1000 === 0) {
        let cost = 0;
        for (let j = 0; j < xordataset.length; j++) {
            let o = activate( xordataset[j].inputs );
            for (let b=0; b<o.length; b++) {
                cost += Math.pow( xordataset[j].outputs[b] - o[b], 2);
            }
        }
        cost /= 4;
        console.log(`epoch ${epoch} mean squared error: ${cost}`);
    }
}

for (let i = 0; i < xordataset.length; i++)
{
    const result = activate( xordataset[i].inputs );

    console.log(`for input ${xordataset[i].inputs} expected ${xordataset[i].outputs} predicted ${result[0].toFixed(4)} which is ${Math.round(result[0]) === xordataset[i].outputs[0] ? "correct" : "incorrect"}`);
}

console.log('done');

The example output for the array-block version should correlate with the previous version; however, this layout is much better suited to the GPU. In the next part, we'll shift the code over to a WebGPU compute shader.


["epoch 0 mean squared error: 0.2500029761479372"]
["epoch 1000 mean squared error: 0.2500005087277707"]
["epoch 2000 mean squared error: 0.24999930163495981"]
["epoch 3000 mean squared error: 0.24997979905862644"]
["epoch 4000 mean squared error: 0.24987478009350658"]
["epoch 5000 mean squared error: 0.24830663938750314"]
["epoch 6000 mean squared error: 0.19418730685335842"]
["epoch 7000 mean squared error: 0.08972768776423953"]
["epoch 8000 mean squared error: 0.013158872406992626"]
["epoch 9000 mean squared error: 0.005263020428948086"]
["epoch 10000 mean squared error: 0.0031027467682457916"]
["for input 0,0 expected 0 predicted 0.0469 which is correct"]
["for input 0,1 expected 1 predicted 0.9501 which is correct"]
["for input 1,0 expected 1 predicted 0.9453 which is correct"]
["for input 1,1 expected 0 predicted 0.0687 which is correct"]
["done"]
```
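
Ahead of that WebGPU port, here is a minimal sketch (my assumption of how the hand-off might look, not the article's final implementation) of uploading one of the flat parameter blocks into a storage buffer that a compute shader can read:

```
// Sketch - upload the flat weights block to a GPU storage buffer.
// Assumes `device` is a GPUDevice from navigator.gpu.requestAdapter()/requestDevice(),
// and `weights` is the flat array built in the listing above.
const weightData   = new Float32Array( weights );
const weightBuffer = device.createBuffer({
    size : weightData.byteLength,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST
});
device.queue.writeBuffer( weightBuffer, 0, weightData );
// The biases, loutputs, and errors blocks would be uploaded the same way,
// then exposed to the WGSL compute shader through a bind group.
```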




Resources and Links


• Notebook Example (Arrays) [LINK]

• Notebook Example (Arrays to Blocks) [LINK]
















