The WebGPU programming API is one of my all-time favorites. Though the future might not be so bright! Why?
New and untested, steep learning curve, patchy browser support, security risks, ...
WebGPU with WGSL is great for web-based graphics and compute tasks. It has a steep learning curve, and it may be unavailable or disabled by default on some browsers and managed machines (such as university labs), but the additional power and results can't be argued with. It has a great deal of potential. But if you need something to be accepted on the web, it has to be accessible to everyone and run everywhere.
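In practice, that means every WebGPU page needs a feature check and, ideally, a fallback path. A minimal sketch in TypeScript (the WebGL rendering path is assumed to live elsewhere; the navigator.gpu typings come from the @webgpu/types package):

    async function initCanvas(canvas: HTMLCanvasElement) {
        if (navigator.gpu) {
            const adapter = await navigator.gpu.requestAdapter();
            const context = canvas.getContext('webgpu');
            if (adapter && context) {
                const device = await adapter.requestDevice();
                context.configure({ device, format: navigator.gpu.getPreferredCanvasFormat() });
                return { api: 'webgpu' as const, device, context };
            }
        }
        // WebGPU missing or disabled - fall back so the page still runs everywhere
        const gl = canvas.getContext('webgl2') ?? canvas.getContext('webgl');
        return { api: 'webgl' as const, gl };
    }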
It's the old tale - "Why are you still using that old library?" - because it still works for old and new customers alike, and we don't want to risk losing any clients.
Recently, I've been developing more prototypes and demos with WebGPU - which I ask friends and colleagues to try - but I'm often told: "it doesn't work on my tablet", "I can't see anything, I think it's disabled", "my graphics card doesn't seem to like it".
Ralph Breaks the Internet (2018) - Ralph Sees the Internet.
WebGPU has the ability to unlock the true potential of the internet (computing, graphics, security, encryption, ...) - we just need to see what new things it can do (not just repackage WebGL programs).
It might seem a bit controversial to write this article, as I'm a strong fan of WebGPU and strongly believe it's the way forward - but it's regularly hit with challenges (e.g., when teaching a web programming course, I'd usually use WebGL, as I can't be sure all students will be able to run WebGPU programs).
Who uses WebGPU?
There is a mix of beginner and hardcore developers across groups (e.g., academics, scientists, business analysts, etc.) - and I often stumble across interesting WebGPU projects that really show its potential (e.g., WebLLM and WebDiffusion). Nevertheless, the vast majority of graphics demos online still use WebGL.
I guess they just haven't needed the extra punch yet? Or they're holding on until WebGPU is as well supported as WebGL?
Thoughts
The following items are my thoughts and perceptions. I might be wrong about some or all of them - please correct me if I am.
• WebGPU adoption is growing (but slowly) as browser support and applications mature
• WebGPU API changes have settled down - there was a point when programs would break every other week due to standards and specification updates, meaning older versions/syntax would stop working. Of course, these changes were important - better to tighten up and sharpen the language early on than later
• Performance keeps improving - new browser versions continue to make significant updates
• Debugging and tooling support is currently limited (little web-based, WebGPU-targeted software exists) - for now, the API's built-in error scopes are the main workaround, as shown in the sketch after this list
• I don't think WebGPU is taking WebGL's work/programs - another way to look at WebGPU is as a new tool that can do things WebGL couldn't (compute shaders being the obvious example - see the compute sketch below) - so as new, innovative web technologies are developed, the WebGPU API will become more and more important (it won't happen if we keep doing the same things)
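Until richer tooling arrives, the API's own error scopes are the main way to catch problems. A minimal sketch, assuming an existing GPUDevice named device and a hypothetical pipelineDescriptor, run inside an async function:

    device.pushErrorScope('validation');
    const pipeline = device.createComputePipeline(pipelineDescriptor); // wrap any suspect call
    const error = await device.popErrorScope();
    if (error) {
        console.warn('WebGPU validation error:', error.message);
    }

    // Errors not caught by any scope surface here instead
    device.onuncapturederror = (e) => console.error('Uncaptured WebGPU error:', e.error.message);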
While WebGPU has settled down recently, hopefully it will take a bigger role in web technologies in the future. The biggest push for WebGPU will come when developers have no choice but to use it for their project (when existing APIs and libraries can't do the job).
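To make the "things WebGL couldn't do" point concrete, here is a minimal TypeScript compute sketch that doubles an array of floats on the GPU - WebGL has no equivalent of compute shaders. The workgroup size and data are illustrative, and the WebGPU typings again come from @webgpu/types:

    async function doubleOnGpu(input: Float32Array): Promise<Float32Array> {
        const adapter = await navigator.gpu.requestAdapter();
        if (!adapter) throw new Error('WebGPU unavailable');
        const device = await adapter.requestDevice();

        // WGSL compute shader: each invocation doubles one element
        const module = device.createShaderModule({ code: `
            @group(0) @binding(0) var<storage, read_write> data: array<f32>;
            @compute @workgroup_size(64)
            fn main(@builtin(global_invocation_id) id: vec3<u32>) {
                if (id.x < arrayLength(&data)) { data[id.x] = data[id.x] * 2.0; }
            }` });

        // Upload the input into a storage buffer
        const storage = device.createBuffer({
            size: input.byteLength,
            usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC,
            mappedAtCreation: true,
        });
        new Float32Array(storage.getMappedRange()).set(input);
        storage.unmap();

        const pipeline = device.createComputePipeline({
            layout: 'auto',
            compute: { module, entryPoint: 'main' },
        });
        const bindGroup = device.createBindGroup({
            layout: pipeline.getBindGroupLayout(0),
            entries: [{ binding: 0, resource: { buffer: storage } }],
        });

        // Staging buffer to read the result back on the CPU
        const readback = device.createBuffer({
            size: input.byteLength,
            usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ,
        });

        const encoder = device.createCommandEncoder();
        const pass = encoder.beginComputePass();
        pass.setPipeline(pipeline);
        pass.setBindGroup(0, bindGroup);
        pass.dispatchWorkgroups(Math.ceil(input.length / 64));
        pass.end();
        encoder.copyBufferToBuffer(storage, 0, readback, 0, input.byteLength);
        device.queue.submit([encoder.finish()]);

        await readback.mapAsync(GPUMapMode.READ);
        return new Float32Array(readback.getMappedRange().slice(0));
    }

Calling doubleOnGpu(new Float32Array([1, 2, 3, 4])) should resolve to [2, 4, 6, 8] - a round trip that simply has no WebGL counterpart without abusing fragment shaders and textures.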
WebGPU or no WebGPU?
From my perspective: I switched a while ago and will continue to use WebGPU - it's great to see what it's capable of and what cool new things I can do with it.
However, from a commercial and developer's point of view, the ultimate decision behind using the WebGPU API will be: "do I have to use it?" Especially for products making money - if it's about profit and sales, then it's about ensuring as many people as possible can use your solution (not about what's the latest and best).
Do the majority of people still only use their web browser for basic surfing and small (non-GPU-heavy) graphics and games?