AI & Regulation

Why regulate AI?

Drug discovery, intelligent cybersecurity, automated web scraping: these are just a few of the numerous applications of artificial intelligence (AI) currently employed around the world. Whilst developments in AI have undoubtedly intensified the pace of global innovation, such seismic changes have inevitably drawn scrutiny from governments and regulators. There is, however, a growing fear of a so-called ‘information gap’, in which authorities, regulators and external agencies struggle to understand the inner workings of AI models and algorithms in an industry that prides itself on rapid innovation and secretive methodologies.

While AI is fostering widespread technological development and automating both simple and complex tasks, we must consider the other side of the coin. If AI is left unsupervised and unregulated, or perhaps worse, supervised by agencies and regulations that implement ill-advised measures, the results could be catastrophic, stifling any potential benefits the technology could bring. The question is how to negotiate this complex interaction between public and private bodies, balancing market competitiveness and innovation against ethical and social concerns. In exploring this, we will examine a seemingly positive application of AI, the screening of job candidates, and then consider a murkier one: the creation and dissemination of deepfakes.

AI in employment

Nowadays, many job interviews are conducted via video, and an increasing number of companies use AI to assess them. These systems use speech-to-text transcription alongside visual cues to score each candidate (Figure 1) [1].

Figure 1 – Representation of AI in a video interview

Source: PSFK.

In 2019, the State of Illinois passed the “Artificial Intelligence Video Interview Act”, which imposes strict transparency and consent requirements on video interviews analysed by AI software [2]. The Act requires each applicant’s consent to be obtained before an AI-led interview begins, and restricts the sharing of applicant videos. While this regulation may address privacy concerns, many maintain that AI in the job interview process is simply another ‘blanket screening’ procedure, one that perpetuates an identical workforce rather than a diverse range of character types [3]. Law professor Andrew Murray argues that AI screening removes choices from the selection process based on static, pre-selected values, leading the algorithm to eliminate potentially strong candidates from consideration before a human can make a decision [4]. Furthermore, vendors of these programs are not legally required to publish robustness tests of the techniques used, or to carry out studies assessing potential discriminatory effects. This is the issue at hand: AI is being used freely to make life-changing decisions, yet regulators do not understand the intricacies of the methods well enough to implement the correct regulation.
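To make the critique concrete, below is a heavily simplified sketch of what a scorer built on static, pre-selected values might look like. It is a toy illustration under our own assumptions: the features, weights and caps are invented, and no commercial vendor’s actual method is public.

```python
# Toy sketch of an AI interview scorer (illustrative assumptions only;
# commercial systems are far more complex and their methods are opaque).
from dataclasses import dataclass

@dataclass
class InterviewFeatures:
    transcript_keyword_hits: int  # role-relevant terms found in the speech-to-text output
    speaking_rate_wpm: float      # words per minute, from the transcript
    eye_contact_ratio: float      # fraction of frames facing the camera (a visual cue)
    smile_ratio: float            # fraction of frames with a detected smile

def score_candidate(f: InterviewFeatures) -> float:
    """Combine features into a 0-100 score using fixed, hand-picked weights."""
    score = (min(f.transcript_keyword_hits, 10) * 4                    # cap keyword credit
             + max(0.0, 1 - abs(f.speaking_rate_wpm - 140) / 140) * 20  # penalise deviation from 140 wpm
             + f.eye_contact_ratio * 25
             + f.smile_ratio * 15)
    return round(score, 1)

print(score_candidate(InterviewFeatures(7, 150, 0.8, 0.4)))  # -> 72.6
```

The point of the sketch is that every weight and cap is fixed before any candidate is seen, so an applicant who falls outside those pre-selected values is filtered out without a human ever reviewing them.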

Rules differ worldwide

The world has not yet reached a consensus on how, or how much, AI should be regulated. The European Union is one of the world’s most progressive regions for AI regulation: in 2018 it published guidelines suggesting ways to address potential issues with AI, and in 2020 it proposed a regulatory framework for AI technology [5]. The framework involves sorting AI applications into high-risk and low-risk ‘buckets’. Regulation in the UK is primarily focused on data governance, with the Data Protection Act 2018 being the main vehicle by which AI may be regulated [7]. There are also pan-European calls for a decentralised exploratory agency, a so-called ‘CERN for AI’, to avoid competing national interests and to enable regulators to operate with the same ethos and innovative streak as competitive private firms [6].

Looking further afield to one of Europe’s primary competitors in AI innovation, China has yet to establish a distinct regulatory body to govern AI. In keeping with much of China’s semi-private market system (think Alibaba and WeChat), this has enabled a partial merging of public and private bodies. Chinese firms can draw on information, expertise, and support from the vast reservoirs of government, whilst sharing their developments with the state. This system has enabled Chinese AI to make great strides in sectors where Western firms have felt unable, or unwilling, to operate (notably privacy, data surveillance, and urban management), but such synergies have also drawn criticism over divided national interests, unfair competition in international markets, and compromised scientific integrity [8].

The dark side of innovation – deepfakes

Deepfakes are falsified videos created by AI, in which a person in an existing image or video is digitally altered to take on someone else’s likeness. Deepfakes use a specific type of deep neural network, namely a generative adversarial network (GAN) [9]. To create a deepfake, two algorithms are pitted against each other. The first (known as the generator) is fed random noise, which it turns into an image. This image is then added to a stream of real images (of celebrities, for example), and the second algorithm (the discriminator) tries to tell the generated images apart from the real ones. The discriminator’s feedback is used to improve the generator, and the process iterates many thousands of times, with both networks optimising themselves after every iteration, until the generator produces a high-fidelity, realistic representation of a face.
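The adversarial loop described above can be sketched in a few lines. The following is a minimal, illustrative PyTorch example with toy fully-connected networks and random stand-in data; real deepfake systems use far larger convolutional architectures and face-specific pipelines, so the sizes and training details here are assumptions for exposition only.

```python
# Minimal GAN training loop (illustrative only; real deepfake systems use
# much larger convolutional networks and face-specific pipelines).
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28  # toy sizes, not from any real system

# Generator: turns random noise into a flattened "image".
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())

# Discriminator: scores an image as real (1) or generated (0).
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(10_000):  # real systems iterate far longer
    real = torch.rand(32, img_dim) * 2 - 1  # stand-in for a batch of real photos
    noise = torch.randn(32, latent_dim)
    fake = G(noise)

    # 1) Train the discriminator to separate real from generated images.
    d_loss = (loss_fn(D(real), torch.ones(32, 1)) +
              loss_fn(D(fake.detach()), torch.zeros(32, 1)))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # 2) Train the generator to fool the discriminator.
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```

Each pass makes the discriminator a slightly better judge and the generator a slightly better forger, which is why the images become progressively harder to distinguish from real footage.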

Deepfakes have been used to add a new dynamic to museums through interactive displays, and to bridge language barriers: in the “Malaria Must Die” campaign, for example, David Beckham appeared to speak nine languages through the use of this technology [10]. More infamously, however, deepfakes are often deployed to spread fake news. Last year, Cameroon’s minister of communication had to dismiss a deepfake that appeared to show the country’s soldiers executing civilians [11]. The fidelity with which deepfakes can misrepresent reality clearly poses problems for regulators, and several major social media platforms, including Facebook, Instagram and Twitter, have officially banned them [12].

One could therefore argue that the more malign applications of AI, such as deepfakes, should take higher priority in the search for regulatory solutions than ethically ambiguous cases such as video interviews. Ironically, AI may be the answer to debunking deepfakes: various technology firms are currently working on systems that detect and flag fake videos as they appear.
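As an illustration of what such detection systems build on, here is a minimal sketch of a frame-level classifier: a small convolutional network scores each frame as real or manipulated, and the per-frame scores are averaged over the clip. The architecture is our own toy assumption, not any firm’s actual detector; a real system would be trained on large labelled datasets of genuine and manipulated video.

```python
# Illustrative frame-level deepfake detector (a toy sketch, not any
# vendor's actual system): a small CNN classifies each video frame
# as real or manipulated, and frame scores are averaged per clip.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),  # one logit per frame: > 0 suggests "fake"
)

def video_fake_score(frames: torch.Tensor) -> float:
    """frames: (num_frames, 3, H, W), pixel values in [0, 1]."""
    with torch.no_grad():
        logits = detector(frames)      # one logit per frame
        probs = torch.sigmoid(logits)  # per-frame probability of being fake
    return probs.mean().item()         # aggregate over the whole clip

# Usage with dummy data (an untrained network returns roughly 0.5):
clip = torch.rand(8, 3, 224, 224)
print(f"estimated probability fake: {video_fake_score(clip):.2f}")
```

Averaging over frames is one simple aggregation choice; production detectors also exploit temporal inconsistencies between frames, which single-frame classifiers cannot see.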

The Big Picture

Georgetown University Professor Mark MacCarthy summarises the issue concisely:

“AI is too important and too promising to be governed in a hands-off fashion, waiting for problems to develop and then trying to fix them after the fact.” [13]

For AI to be regulated while still being allowed to innovate, it is therefore crucial that industry leaders work with policymakers in a way that protects consumers while leaving room for innovation and growth. We believe solutions will be most effective when they are preventative rather than implemented ex post. Communication channels between the public and private sectors must be encouraged, and policymakers should be drawn from the industries they are expected to govern, alongside academic and political circles.

References

[1] Psycruit, 2019. Using Artificial Intelligence in Video Interviews. Available at: https://www.psycruit.com/blog/using-artificial-intelligence-in-hiring

[2] Illinois General Assembly, 2019. Artificial Intelligence Video Interview Act. Available at: https://www.ilga.gov/legislation/fulltext.asp?DocName=&SessionId=108&GA=101&DocTypeId=HB&DocNum=2557&GAID=15&LegID=&SpecSess=&Session=

[3] MacCarthy, M., Brookings, 2020. AI needs more regulation, not less. Available at: https://www.brookings.edu/research/ai-needs-more-regulation-not-less/

[4] Opinio Juris, 2020. ‘The Time has Come for International Regulation on Artificial Intelligence’ – An Interview with Andrew Murray. Available at: http://opiniojuris.org/2020/11/25/the-time-has-come-for-international-regulation-on-artificial-intelligence-an-interview-with-andrew-murray/

[5] European Commission, 2018. Communication: Artificial Intelligence for Europe. Available at: https://digital-strategy.ec.europa.eu/en/library/communication-artificial-intelligence-europe

[6] Kelly, É., Science Business, 2021. Available at: https://sciencebusiness.net/news/call-cern-ai-parliament-hears-warnings-risk-killing-sector-over-regulation

[7] Gov.UK, 2018. The Data Protection Act. Available at: https://www.gov.uk/data-protection

[8] GlobalData Thematic Research, 2021. Artificial Intelligence: Regulatory Trends. Available at: https://www.verdict.co.uk/artificial-intelligence-regulatory-trends/

[9] Shen et al., 2018. “Deep Fakes” using Generative Adversarial Networks (GAN). Available at: http://noiselab.ucsd.edu/ECE228_2018/Reports/Report16.pdf

[10] Think Automation, 2021. Yes, positive deepfake examples exist. Available at: https://www.thinkautomation.com/bots-and-ai/yes-positive-deepfake-examples-exist/

[11] The Guardian, 2020. What are deepfakes – and how can you spot them? Available at: https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them

[12] DAC Beachcroft, 2020. The legal implications and challenges of deepfakes. Available at: https://www.dacbeachcroft.com/es/gb/articles/2020/september/the-legal-implications-and-challenges-of-deepfakes/

[13] Eightfold, 2021. The State of International Regulation of Artificial Intelligence. Available at: https://eightfold.ai/blog/artificial-intelligence-international-regulation/