Artificial Intelligence (AI)

How far would you 'trust' AI...

 

Regulatory Landscape for Trustworthy AI


The law is the law - no one is above the law - not even AI!

(Of course, like in Judge Dredd, maybe one day, we'll have machines saying they're the law?)

As AI is increasingly given the freedom to evolve based on data and training algorithms - could it 'reinterpret' things wrongly? For example, in I, Robot, VIKI (the AI) explains that she is a true AI and has managed to evolve and reinterpret the Three Laws.


Moving beyond the science fiction - the world is currently grappling with regulations to make AI safe - not just safe from taking over the world, but safe so that AI models aren't used to harm or cause distress.

For example, another way of thinking of AI is as a 'weapon' - when nuclear fission was discovered, it opened all sorts of doors and possibilities - yet the same discovery was also used to build dangerous weapons. It's the same for AI models - instead of using them to help and lift up mankind, they could be used as weapons - to generate fake content, attack people, crack passwords, and so on.

Regulations are like Guardrails


When we talk about AI regulations and standards, we are talking about guardrails - so the AI train doesn't fly off the rails on the twisty mountain route it's on.

They don't just keep the AI from going rogue; they help everyone playing in this space speak the same language. Standards define how to make AI systems fair, secure, explainable, and consistent - basically, all the things that make people say "Yeah, I can rely on that." Without them, it's like trying to assemble IKEA furniture with no instructions and five extra screws.

Standards give developers, businesses, and users a solid foundation so AI doesn't just feel like guessing and random luck - it feels safe and legit.
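
If you think of those qualities as checks rather than slogans, you can even picture them as a release gate. Here's a minimal, hypothetical Python sketch - the property names mirror the list above, but the checklist itself is invented for illustration and isn't taken from any real standard's test suite.

# Hypothetical sketch: standards as a machine-checkable release gate.
from dataclasses import dataclass

@dataclass
class TrustChecklist:
    fairness_audited: bool = False    # e.g. bias tests run on model outputs
    security_reviewed: bool = False   # e.g. abuse / prompt-injection review done
    explainable: bool = False         # e.g. decisions can be traced and justified
    consistent: bool = False          # e.g. same input gives the same behaviour

    def passes(self) -> bool:
        # Release only if every guardrail holds - one failure blocks it.
        return all(vars(self).values())

system = TrustChecklist(fairness_audited=True, security_reviewed=True,
                        explainable=True, consistent=False)
print(system.passes())  # False - the missing consistency check blocks release

The takeaway: a standard turns fuzzy words like 'fair' into concrete boxes someone has to tick before the system ships.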


Show me some Examples


A few examples of laws/regulations, to give you an idea:

European Union - AI Act: The EU's AI Act, which became law in 2024, is the world's first comprehensive AI regulation. It classifies AI systems by risk level - like 'unacceptable', 'high', or 'minimal' - and imposes stricter rules on high-risk applications, such as those used in hiring or law enforcement (there's a rough code sketch of this tiering idea further below). Some parts are already in effect, while others will roll out by 2026 or 2027.

United Kingdom - Pro-Innovation Framework: The UK has taken a lighter, principles-based approach. Instead of a single AI law, it encourages regulators in different sectors to apply five core principles: safety, transparency, fairness, accountability, and contestability. It's more flexible but may evolve into formal legislation down the line.

These are just a couple of laws, and they're still evolving, but they show how governments are trying to keep AI both powerful and responsible.
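
To make the EU's risk-tier idea a bit more concrete, here is a minimal Python sketch. It's purely illustrative: the tier names follow the Act (including its 'limited' transparency tier, not listed above), but the use-case mapping and the obligation summaries are loose assumptions - definitely not legal advice.

# Hypothetical sketch of the EU AI Act's risk tiers: classify a use
# case into a tier, then look up the (simplified) obligations.
RISK_TIERS = {
    "unacceptable": "banned outright (e.g. social scoring)",
    "high": "strict duties: risk management, human oversight, audits",
    "limited": "transparency duties (e.g. disclose that it's an AI)",
    "minimal": "no extra obligations beyond existing law",
}

# Illustrative mapping only - a real assessment is far more nuanced.
USE_CASE_TIER = {
    "social_scoring": "unacceptable",
    "hiring_screening": "high",       # hiring is called out as high-risk
    "customer_chatbot": "limited",
    "spam_filter": "minimal",
}

def obligations(use_case: str) -> str:
    # Look up the tier for a use case, then the (simplified) duties.
    tier = USE_CASE_TIER.get(use_case, "unknown")
    duties = RISK_TIERS.get(tier, "needs a proper risk assessment")
    return f"{use_case}: {tier} risk -> {duties}"

for case in USE_CASE_TIER:
    print(obligations(case))

The design point is simply that 'risk level' acts like a lookup key deciding which rules apply - heavier duties attach to heavier tiers.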


Laws are for People, Not Machines


We have to remember - any laws and regulations govern what 'humans' can and cannot do - it will not be the machine that is punished, but the people. If a fridge kills someone, we do not blame the fridge - we blame the person who built it or sold it. They are the ones who must be prosecuted and held responsible.


We have to remember that all these laws and regulations are for people, not machines - we do not punish and imprison machines - it's people who are held responsible.










 