IP expert wary of nationalistic approaches blocking policy harmonisation on AI

Published: Monday | September 11, 2023 | 12:08 AM | Sashana Small/Staff Reporter
Intellectual property lawyer Shabbi S. Khan.
Massachusetts, United States:

Policy harmonisation in artificial intelligence (AI) is critical in ensuring the responsible and safe development, deployment, and use of the technology. But, with some countries taking a nationalistic approach to their AI evolution, this could be challenging, argued intellectual property lawyer Shabbi S. Khan.

Khan, a partner at the law firm Foley & Lardner LLP, based in Boston, Massachusetts, contends that the adoption of similar laws and policies will result in fair competition and promote a global marketplace for the use of AI technologies, with countries across jurisdictions abiding by the same rules.

“If Europe has a certain AI system that the US ban or doesn’t allow, you’re gonna run into this issue of that company cannot operate in the United States and now you kind of made it harder for trade to really happen,” he said.

He was speaking to journalists on an international reporting tour exploring innovations in tech policy and navigating AI, organised by the US Department of State’s Foreign Press Centre.

His comments come as Senator Dr Dana Morris Dixon, Jamaica’s minister without portfolio in the Office of the Prime Minister with responsibility for skills and digital transformation, recently announced the formation of a task force to conduct research into AI. The task force will be directed to provide an evidence-based foundation for the development of a national AI policy.

Khan, whose practice focuses on intellectual property due diligence, primarily in the fields of AI and machine learning, computer software, digital therapeutics, medical devices, and artificial intelligence in healthcare, pointed to the Nuclear Non-Proliferation Treaty of 1968 as an example of the significance of policy harmonisation in managing global technology. The treaty, signed by the United Kingdom, the US, the Soviet Union, and 59 other states, aimed to prevent the spread of nuclear weapons and weapons technology.

However, he noted that this agreement came almost 25 years after the atomic bombs were dropped on the Japanese cities of Hiroshima and Nagasaki. He said he feared that generative AI has the potential to be “just as catastrophic”.

Generative AI allows users to create new content such as text, images, sounds, animation, and 3D models by leveraging existing data from a variety of inputs. Its usage has raised a number of ethical concerns, including data privacy violations, the amplification of existing biases, and a lack of transparency.

“We’re in an interesting time,” he stated. “And it’s going to be interesting to see what kind of regulation they’re gonna have either before, during … when people recognise the issue, but then maybe many years after it’s taken place.”

REGULATORY APPROACHES

Three major players in AI development – the US, China, and the European Union (EU) – have taken different approaches to regulating AI. The EU took the lead on AI regulation through the EU AI Act, which is expected to be passed this year. The legislation works by classifying AI systems based on the risk they pose to users and banning or allowing them on that basis.

China’s regulations include measures governing recommendation algorithms, as well as new rules for synthetically generated images and chatbots in the mould of ChatGPT.

In the US, where much of the innovation in AI is taking place, including ChatGPT, there is no comprehensive federal AI regulation, and Khan believes the political climate is making it even more difficult to enact.

The country’s Office of Science and Technology Policy has disseminated a blueprint for an AI bill of rights consisting of five principles: that citizens be protected from unsafe and ineffective AI systems; that they be protected against algorithmic bias and discrimination; that their data be protected; that users know when they are interacting with an AI system; and that a human alternative be provided if someone is not comfortable communicating with a machine.

Khan outlined that states such as California, Illinois, and New York have enacted their own legislation. Additionally, he said federal agencies have the power to preside over the use of AI technology, but their oversight is industry-specific.

Although he considers the EU regulations to lack detail, Khan believes the bloc’s law will form the foundation on which policy harmonisation can be built.

“Once EU adopts its legislation, the US is going to have to create something similar,” he said.

Ultimately, Khan asserted that AI regulation should strike a balance between fostering innovation and protecting society from risk.

“If you have lax regulations you are taking on a lot more risks to society at large,” he said. “On the other side, when you think about strict regulation, it can actually hinder growth because it will make it slower to innovate because you have to go through all regulatory processes, but ultimately with the end goal of making it safer.”

sashana.small@gleanerjm.com