

What’s next for AI regulations?

Since artificial intelligence (AI) entered the public spotlight with the launch of ChatGPT less than three years ago, it has changed the way our society works. This is not limited to the technology sector, as AI adoption is gaining ground in many other industries too. But what have been the legal ramifications of it all?

The EU is emerging as a global frontrunner in AI governance. The AI Act, which officially entered into force in August 2024, is the first comprehensive legal framework for AI oversight, and it is a positive sign that AI and its usage – both within the workplace and in our day-to-day lives – are now being closely regulated by government entities.

Specifically, the AI Act introduces a multi-layered regime that sorts AI systems into distinct risk categories. This landmark act is an attempt to regulate how AI firms use data and to inspire other global leaders to do the same. It responds to the largely unregulated growth of some AI developers, and the EU's goal is to prohibit technology that poses an unacceptable level of societal risk.

In particular, the EU’s regulations outright ban several types of AI, including social scoring mechanisms, invasive biometric technology (such as real-time facial recognition and identification), and manipulative systems that could control the behaviour of human beings, especially those who may be vulnerable. While strict, the legislation was designed to be as fair as possible while still enabling companies to use AI. This includes small-to-medium-sized businesses, which could make the best use of AI’s autonomous potential, freeing up company time, resources, and effort for more important tasks.

The EU hasn’t been alone in its push to regulate AI. Several US states have followed in its footsteps, with California, for example, having already introduced penalties for AI violations. Across the globe, China has also implemented strict guidelines on AI usage, with a focus on restricting machine learning and biometric recognition technology. China’s current legal priority is to create at least fifty national and industry-level AI standards by 2026.

In September 2024, an international AI treaty was officially opened for signature. Dubbed the Framework Convention on Artificial Intelligence, the treaty has already seen signatories including the EU, the UK, Australia, Israel, Japan, and others commit to a shared, fair framework for addressing the risks of AI while promoting its responsible use for the good of society. Soon, countries from all over the world will be eligible to join and commit to complying with its provisions.

Governments aren’t the only entities promoting the responsible use of AI. Tech giant Microsoft is pioneering its Responsible AI Standard, a framework built on six fundamental principles: accountability, transparency, inclusiveness, privacy and security, fairness, and reliability and safety. The goal of the Standard is to ensure that AI system development always prioritises human wellbeing.

Companies all over the world are becoming aware of the risks that come with the irresponsible use of AI: those that ignore them may lose customers and tarnish their reputations. That awareness should only strengthen support for further regulation as AI developers continue to push the boundaries of what’s possible.

Now that several years have passed since AI became a publicly available tool, companies are increasingly using it for more advanced tasks. The B2B world is slowly starting to understand the potential – as well as the limitations – of AI.

Five years ago, the day-to-day use of artificial intelligence was a mere fantasy. Now, it’s a reality and the world is starting to realise its full potential.