I am all for AI Safety (via regulation), but the way governments and organisations are going about it right now is untenable. AI is not a gadget; it’s a new layer of capability being woven into everything.

Lessons from History

Look back a century, to when electricity was becoming commonly available. If governments at the time had decided, “Oh, people can get electrocuted, or electric chairs can be invented, so we should control electricity by whitelisting only legitimate uses,” innovation would have been crippled, and most genuine, legitimate uses would have been stifled before anyone had discovered them. Governments cannot begin with a prescriptive whitelist on day one and declare that anything beyond their definition is illegal or requires permission.

One more example, from a few millennia ago, when metallurgy was just emerging:
Imagine if kingdoms had said, “Metallurgy is dangerous because you can make swords, knives, and daggers” (and, much later, guns and bullets). Pre-emptive restrictions on the underlying tools would have delayed societal advancement.

In both cases, top-down whitelisting would have been counter-productive. What proved tenable was regulating outcomes after the fact, once the many uses of metallurgy and electricity were better understood. These post-facto regulations are the reason a person cannot walk into a store and buy an electric chair, and why children cannot buy a sword or bullets at the shop next door. Regulation is designed to limit identifiable dangerous uses while keeping the door open to experimentation and innovation.

Similarly, we cannot begin AI regulation with a whitelist of permitted actions that defaults everything else to a blacklist. That is simply not how fundamental technologies work. Like it or not, AI is a modern-day fundamental technology, just as electricity and metallurgy were in their time.
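
To make the contrast concrete, here is a minimal Python sketch of the two default postures. Everything in it is hypothetical and purely illustrative: the action names and the contents of both lists are invented, not drawn from any real regulation.

    # Toy contrast between the two regulatory postures discussed above.
    # The action names and lists below are hypothetical placeholders.

    PERMITTED = {"approved_medical_diagnosis", "approved_translation"}   # whitelist
    PROHIBITED = {"automated_electric_chair_sales"}                      # blacklist

    def whitelist_regime(action: str) -> bool:
        # Default-deny: anything not explicitly permitted is blocked,
        # including novel uses that nobody thought to list.
        return action in PERMITTED

    def adaptive_regime(action: str) -> bool:
        # Default-allow: only uses identified as harmful after
        # observation are prohibited; everything else may proceed.
        return action not in PROHIBITED

    novel_use = "crop_disease_detection"   # a use case nobody listed up front
    print(whitelist_regime(novel_use))     # False: blocked by default
    print(adaptive_regime(novel_use))      # True: allowed until shown harmful

The only difference between the two regimes is the default. The whitelist blocks the novel, unlisted use; the adaptive blacklist permits it unless and until it is demonstrably harmful.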

What do I mean by “Fundamental Technology”?

A fundamental technology is one that is general-purpose, deeply transformative, and widely applicable across sectors and domains. It doesn’t serve a narrow use case; it becomes part of the infrastructure of society. Electricity powers everything from factories to homes to hospitals. Metallurgy enables tools, automobiles, infrastructure, and art. Similarly, AI isn’t just a chatbot or a recommendation engine. It is already shaping healthcare, agriculture, logistics, education, governance, and media, and these are still early days. Trying to regulate AI as if it were a finished product, rather than a platform, misses the point entirely.

I believe the way forward is through Adaptive Regulation:

  • Don’t start with a rigid whitelist of permitted AI activities
  • Observe AI’s actual outputs and developments
  • Create targeted restrictions on demonstrably harmful applications
  • Recognise AI as a fundamental technology that will evolve dynamically

Let learning, not fear, guide us:

Regulation should come from a place of learning, not fear. We should regulate the impacts, not the tools themselves.

If we treat AI like a tool whose safe uses are known upfront, we risk doing more harm than good.

A flexible, adaptive regulation model that grows alongside the technology is our best chance at fostering both innovation and safety.
