NJORD Estonia: Should we regulate artificial intelligence?
Businesses are investing billions of dollars in artificial intelligence (AI) development, prompting government institutions to advance their own AI strategies. The hype around AI has also sparked a discussion about whether the field needs regulation. Elon Musk’s public dispute with Mark Zuckerberg brought the subject to the general public’s attention: the former insists on regulating AI, while the latter sees only benefits, not threats. Differing opinions on the issue are easy to find online.
Arguments for AI regulation
There are many reasons behind the call for regulation. The most radical is the fear that AI-powered robots could one day take away individual freedoms. While that scenario may be hard to believe, there are real AI-related threats that already affect our lives today.
Another common argument for AI regulation concerns the creation of fake news. Elon Musk recently shared an AI system developed by OpenAI, a non-profit organization, that can generate fake news content reading like natural human writing. During testing, the sentences in the generated articles were logically connected and the text cited relevant sources. The full code of the AI system has not been released to the public for fear that it could be misused.
Alongside fake news, there is the concern that AI will affect employment, as jobs are cut in favour of robots performing the same tasks. It is also debated whether an employer should pay taxes on deploying AI-powered robots in the same way it does for natural persons. Even the concept of a guaranteed minimum income (i.e. everyone receives a minimum level of income regardless of their employment or socioeconomic status) is widely discussed in this context, with some arguing that such income should be paid by AI technology companies rather than by the state.
Another reason in favour of regulating AI is to make clear who will be responsible if something goes wrong. For example, if a device powered by AI harms a human, who should take responsibility and what would be the consequences? It could be the manufacturer, the seller, or the end user: identifying the culprit will come down to the details of the specific incident. Since AI relies on machine learning, performing tasks based on learned patterns without further human input once deployed, it may be difficult to determine the responsible party.
Arguments against AI regulation
Skeptics of AI regulation emphasize that there is currently nothing to regulate, as AI is still in its infancy. Development so far has produced only narrow AI, which can perform a single, well-defined task. The general level (also known as strong AI) will be reached when AI can handle complex tasks the way humans do. However, it is difficult to predict when, if ever, AI will reach that level.
Consequently, some argue against regulating the field at all, as regulation may impede the development of AI. Restricting the development or use of AI without knowing how it will affect society is a risky game that may stop innovation.
There is also the view that existing laws, especially general legal principles, already apply to AI and that, given its current state, there is no need for AI-specific regulation.
Conclusion
Even among those who would like to regulate AI, there is no single clear vision of what to regulate and how. Various initiatives exist, such as creating an AI ethics code, drafting an international agreement on AI regulation, or adopting specific local laws on AI.
At the end of the day, it is beneficial that people are paying attention to AI technologies and their further development. Such discussion helps everyone understand what AI is and how it can affect us. Laws on new technologies are usually written only after those technologies have already had a significant impact on society. With AI, waiting that long could prove risky. Perhaps Elon Musk, with his proactive approach to AI regulation, is right: considering how AI can be used and the potential threats that could stem from that use, it would be better to have a legal basis in place already.