Shift from a single thread to multiple threads on the GPU.
Forward phase (activation) - Single to Multiple Threads
As the input propagates from the input layer to the output layer, the neurons near the output cannot perform their calculations until the data reaches them.
However, the neurons within a layer can be processed in parallel (each layer can contain lots of neurons).
Shift from all the layers (Input - Hidden - Output) running on a single thread to splitting the layers across dispatch calls (executed one after the other). This then lets us distribute the perceptron calculations onto different threads.
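The idea can be sketched on the CPU (a minimal illustrative Python sketch, not the article's GPU code; the sigmoid activation, network shape, and weight values are assumptions). Each call to `dispatch_layer` plays the role of one dispatch call, and inside it every neuron depends only on the previous layer's outputs, so the neurons could each run on a separate GPU thread:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neuron(prev_layer, weights, bias):
    # One 'thread': reads only the previous layer's outputs,
    # so it never depends on its sibling neurons.
    return sigmoid(sum(p * w for p, w in zip(prev_layer, weights)) + bias)

def dispatch_layer(prev_layer, layer_weights, layer_biases):
    # One 'dispatch call': every neuron in this layer is independent
    # of the others, so this loop could run concurrently on the GPU.
    return [neuron(prev_layer, w, b)
            for w, b in zip(layer_weights, layer_biases)]

# Hypothetical 2-3-1 network with made-up weights.
inputs = [0.5, -0.25]
hidden = dispatch_layer(inputs,
                        [[0.1, 0.4], [-0.2, 0.3], [0.5, -0.1]],
                        [0.0, 0.1, -0.1])
output = dispatch_layer(hidden, [[0.3, -0.6, 0.2]], [0.05])
print(output)
```

The dispatches themselves still run one after the other (hidden before output), mirroring the data dependency between layers.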
Forward propagation - the layers can be grouped into dispatch calls - within each layer, every neuron is independent of the others, so they can be processed concurrently on the GPU without any issues.
Take the single-threaded activation function, as shown below; notice that the calculation moves from input to output over the layers.
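A single-threaded forward pass can be sketched as follows (an illustrative Python stand-in, not the article's own listing; the sigmoid activation and the small example network are assumptions). The outer loop walks the layers in order from input towards output, and the inner loop evaluates one neuron at a time:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(layers, inputs):
    # Single thread: process every layer, and every neuron within it,
    # strictly one after the other, from input towards output.
    values = inputs
    for weights, biases in layers:             # outer loop: layers in order
        nxt = []
        for w_row, b in zip(weights, biases):  # inner loop: one neuron at a time
            s = sum(v * w for v, w in zip(values, w_row)) + b
            nxt.append(sigmoid(s))
        values = nxt
    return values

# Hypothetical 2-2-1 network: (weights, biases) per layer.
net = [([[0.2, -0.3], [0.4, 0.1]], [0.0, 0.0]),
       ([[0.5, -0.5]], [0.1])]
result = forward(net, [1.0, 0.5])
print(result)
```

In the GPU version, the outer loop becomes a sequence of dispatch calls, and the inner loop is what gets spread across threads.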
Copyright (c) 2002-2025 xbdev.net - All rights reserved.
Designated articles, tutorials and software are the property of their respective owners.