AI Regulation: ADGM, Abu Dhabi

With AI making waves, from Senate hearings in the US to the European AI Act, it's clear that governance is urgently needed. Questions around the ethics and ownership of training data, as well as the environmental and financial impact, are becoming impossible to ignore.

Good morning! Welcome to another insightful look at the evolving world of AI. Today's focus is a topic that's making headlines: AI regulation, seen from the perspective of Abu Dhabi Global Market (ADGM), a significant global financial hub that also happens to be a leading technology regulator.

Let's dive into what AI regulation might look like and how it will affect us on multiple fronts, from technology to social dynamics.

The Growing Importance of AI Regulation

Abu Dhabi Global Market is one of the world's most progressive regulators. It has been proactive in providing guidance on emerging technologies such as cryptocurrency, and I expect we'll hear more from it on AI regulation. These are exciting times for tech governance, with closed-door Senate hearings in the US and the European AI Act making waves. Against a backdrop of global events like chip export bans and Nvidia's soaring market share, it's clear that AI regulation is not just imminent but necessary.

The Intricacies of Training Data

When we talk about AI, the question of training data is unavoidable. Consider large language models like ChatGPT and GPT-4, which have caught the mainstream's eye for their human-like interactivity. These models rely heavily on extensive data sets, raising vital questions. Where does this data come from? Who owns it? How should it be managed, especially in relation to intellectual property laws?
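
To make the provenance question a little more concrete, here is a minimal sketch, purely illustrative and not drawn from any regulator's requirements, of the kind of metadata a training-data record might need to carry before ownership and IP questions can even be audited. The field names and the exclusion rule are assumptions made for the sake of the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrainingDataRecord:
    """Provenance metadata for one document in a training corpus (illustrative fields only)."""
    source_url: str
    licence: Optional[str]          # e.g. "CC-BY-4.0", "proprietary", or None if unknown
    copyright_holder: Optional[str]
    consent_obtained: bool          # did the rights holder agree to ML training use?
    collected_on: str               # ISO date the document was scraped or ingested

def usable_for_training(record: TrainingDataRecord) -> bool:
    """A deliberately strict rule of thumb: unknown licence or missing consent means exclude."""
    return record.licence is not None and record.consent_obtained

# Example: a scraped page with no known licence would be excluded under this rule.
page = TrainingDataRecord(
    source_url="https://example.com/article",
    licence=None,
    copyright_holder=None,
    consent_obtained=False,
    collected_on="2023-09-01",
)
print(usable_for_training(page))  # False
```

Even a toy rule like this shows how quickly licensing and consent gaps surface once provenance is recorded at all.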

The Environmental and Financial Toll

Another angle for potential regulation is the environmental and financial cost of training these large models. The immense computational requirements translate into significant energy consumption and capital expenditure. As the world wrestles with climate change—a topic of special relevance to the UAE, given its role in hosting COP28 in Dubai—it's clear that AI's environmental impact will likely be a regulatory focal point.
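
To give a sense of scale, here is a rough back-of-envelope calculation. Every number in it is an assumption chosen for illustration, not a measured figure for any particular model or data centre.

```python
# Back-of-envelope estimate of the energy and cost of a large training run.
# All inputs below are illustrative assumptions, not measurements of any real model.

gpu_count = 1_000            # assumed number of accelerators used
gpu_power_kw = 0.7           # assumed average draw per accelerator, in kilowatts
training_days = 30           # assumed wall-clock duration of the run
pue = 1.2                    # assumed data-centre power usage effectiveness
electricity_price = 0.10     # assumed price in USD per kWh
gpu_hourly_rate = 2.0        # assumed cloud rental price in USD per GPU-hour

hours = training_days * 24
energy_kwh = gpu_count * gpu_power_kw * hours * pue
energy_cost = energy_kwh * electricity_price
compute_cost = gpu_count * hours * gpu_hourly_rate

print(f"Energy consumed: {energy_kwh:,.0f} kWh")
print(f"Electricity cost: ${energy_cost:,.0f}")
print(f"Compute rental cost: ${compute_cost:,.0f}")
```

With those assumed inputs, the electricity bill is a small fraction of the compute rental cost, but the energy figure, roughly 600 MWh for a single run, is exactly the kind of number regulators may start asking to see disclosed.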

Balancing Innovation and Ethics

AI offers massive potential for addressing challenges like climate change, but this brings us to a conundrum: should we curb AI's progress to enforce regulations, or should we forge ahead? Regulatory stances could dramatically affect how AI evolves and how its benefits are distributed globally.

Trust and Safety Layers

Once a model is operational, it starts influencing various aspects of our lives. For instance, I use ChatGPT and GPT-4 almost daily as a virtual colleague. However, the model's trust and safety layer, governed by OpenAI, sometimes restricts the kind of questions I can ask. This raises questions about the universality of AI governance, which may be based on a particular geopolitical worldview.
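
As a rough mental model only (OpenAI's actual trust and safety stack is not public in this detail), you can think of such a layer as a policy check that runs before a prompt ever reaches the model. The policy list and wording below are invented for illustration; real systems use trained classifiers and far more nuanced policies than keyword matching.

```python
# A toy sketch of a trust-and-safety layer sitting in front of a model.
# The blocked-topic list and refusal text are hypothetical, for illustration only.

BLOCKED_TOPICS = {"weapons synthesis", "self-harm instructions"}  # hypothetical policy

def call_model(prompt: str) -> str:
    """Stand-in for the underlying language model call."""
    return f"(model response to: {prompt})"

def answer(prompt: str) -> str:
    """Apply the policy check first; only compliant prompts reach the model."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "I can't help with that request."
    return call_model(prompt)

print(answer("Summarise the EU AI Act for me."))
```

The point is that whoever writes that policy list effectively governs what the model will discuss, which is why the worldview behind it matters.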

Inclusion and Diversity in AI

On that note, there is genuine concern, especially among countries in Africa, about not having enough digitised information to train models that reflect their local contexts. Similarly, there's apprehension about the social implications of AI, such as job displacement, prompting urgent discussions about universal basic income (UBI) and retraining programmes.

The Future: Data Ownership and Collaboration

Companies are keen to adopt AI technologies but are equally concerned about data privacy. The result is that the same models end up hosted separately for different companies, which is inefficient. Given the increasing focus on data privacy and regulations such as the GDPR, could this be an opportunity to redefine data ownership?

A Global Dialogue on AI Regulation

What I find particularly encouraging is the consultative approach of regulators like ADGM, which share ideas and seek feedback from both the public and experts. This stands in contrast to the closed-door Senate hearings we see in other jurisdictions.

Your Thoughts?

If you've got any thoughts on how AI regulation is going to shape up, I'd love to hear from you. AI is a double-edged sword, and the nuances of its regulation will impact its utility, ethics, and social integration in ways we can't yet fully predict.

So, here's to a more collaborative and inclusive future, where the regulatory frameworks for AI are both robust and flexible, allowing us to harness its full potential while mitigating its risks.

Thank you for reading or watching, and have a lovely evening!