I was recently re-reading, and thoroughly enjoying, some of my favourite Isaac Asimov books. It’s remarkable how relevant these books, written decades ago, remain today, particularly in the field of AI. Beyond the prominently detailed physical characteristics of the robots, we also get a glimpse of how AGI, or even ASI (Artificial Superintelligence), might behave once technological singularity is achieved.

Perhaps Asimov’s most notable contribution to science fiction was the Three Laws of Robotics, which appear throughout his Robot series. These laws mandate that every robot incorporate three basic governing principles, designed to ensure that humans are not harmed by robots, whether intentionally or unintentionally. In Asimov’s universe, no robot can be constructed without these laws embedded within it.

Maybe what we need now is a similar set of Three Laws for AI. Current discussions about AI are full of concerns about ethics, bias, hallucinations, and the potential for AGI to annihilate humanity. The debate over AI regulation and governance is everywhere, yet reaching consensus on which rules and controls to implement is challenging: each group, institution, or government seems to want its own set of laws and controls.

However, if we could establish three simple, universal, all-encompassing laws to ensure that neither individual humans nor humanity as a whole is harmed by AI, it would be a triumph for the human race. Every AI implementation could be required to incorporate these three basic laws, with compliance checks at every interface.
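To make the idea of “compliance checks at every interface” concrete, here is a minimal sketch in Python. Everything in it is hypothetical illustration: the `LawViolation` exception, the `check_request` screen, and the keyword test are toy stand-ins, not a real safety mechanism, and the crude keyword match is precisely where the hard problem of defining ‘harm’ shows up.

```python
# Hypothetical sketch: three "laws" enforced as a check at an interface
# boundary, before any other processing runs. Names and logic are
# illustrative assumptions, not an actual implementation.

LAWS = (
    "1. An AI may not harm a human, or through inaction allow harm.",
    "2. An AI must obey human instructions, unless that conflicts with Law 1.",
    "3. An AI may protect its own operation, unless that conflicts with Laws 1-2.",
)


class LawViolation(Exception):
    """Raised when a request fails a law check at an interface boundary."""


def check_request(request: str) -> str:
    # Toy screen: a naive keyword match standing in for the (unsolved)
    # problem of actually recognising 'harm' in a request.
    if "harm" in request.lower():
        raise LawViolation(f"Blocked under: {LAWS[0]}")
    return request


def ai_interface(request: str) -> str:
    # The compliance check runs first, at the boundary; only requests
    # that pass reach the rest of the system.
    checked = check_request(request)
    return f"Processing: {checked}"
```

A benign request such as `ai_interface("summarise this article")` passes through, while one containing the flagged word raises `LawViolation` before any further logic runs. The gap between this toy filter and a check that genuinely understands harm is the whole difficulty the rest of this piece discusses.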

Achieving this is no small feat. Defining ‘harm’ and its many interpretations, whether concerning an individual or a group (consider the trolley problem), and crafting a solution simple enough for universal implementation present nearly insurmountable challenges. Perhaps it’s time for some of the brightest minds in the world to work together on this. Another Manhattan Project (this time to save the world), anyone?

By Finny Mathews