Artificial Intelligence – regulated or not?
In our previous article, Artificial Intelligence – our daily companion, we discussed how AI has infiltrated almost every aspect of our daily routines, both personal and professional.
We sometimes use AI without even knowing it. But far from being an intelligence that will take over the world and destroy humanity as we know it (think The Terminator), AI has helped us by simplifying tasks, taking laborious work off our hands, and making our workdays more efficient and therefore more productive. In short, AI has added immense value to our society.
We discussed how AI has helped the healthcare profession, improved the processes for assessing and preventing crime and fraud, and even managed to make the lives of lawyers easier. All in all, positive outcomes.
But we did say that – like with everything in life – even the things that offer immense value carry risk. AI, with its machine learning capabilities, uses vast amounts of data to perform its tasks. And as the use of AI has increased, this collection of data has inevitably picked up speed and grown in volume. That makes data an extremely valuable asset – when it is in the right hands. When it is not, legal risks inevitably increase.
Some businesses, by their very nature, use data for a myriad of reasons, but businesses shouldn’t be allowed to use data in any manner they choose – especially when that data is confidential. Issues also arise when third parties (“hackers”) manage to gain access to data held by a business, as has been the case with data breaches around the world – the Twitter data breach with its 200-million-user email leak, and the 2022 Uber data breach which resulted in the acquisition of email addresses and Windows Active Directory information for over 77,000 Uber employees.
And that kind of spells disaster!
First and foremost - data must be protected!
It’s clear that with the increased use of AI, data breaches have become not just a common occurrence but an expensive one too.
Each time a company is hacked and private information is stolen, it often costs millions of dollars to “fix” the problem. The solution to this mess? Legal and regulatory compliance.
Both South Africa’s Protection of Personal Information Act 4 of 2013 (POPIA) and the EU’s General Data Protection Regulation (GDPR) regulate the automated processing of data, which makes for a good start.
From a local perspective, businesses will need to ensure that their computer systems, and specifically their use of AI, are compliant with POPIA.
Of particular importance for AI systems are the following –
Ø Section 71(1) governs automated decision-making – “a data subject may not be subject to a decision which results in legal consequences for him, her or it, or which affects him, her or it to a substantial degree, which is based solely on the basis of the automated processing of personal information intended to provide a profile of such person including his or her performance at work, or his, her or its credit worthiness, reliability, location, health, personal preferences or conduct”. An example of this is a bank loan application. An AI system may be used to profile potential clients applying for the loan by determining their creditworthiness based on other loans they may have, their income and their indebtedness. Section 71(1) prohibits banks from deciding to grant or reject a loan application based solely on the profile created by the AI system (see the sketch after this list).
Ø Section 57(1)(a) governs prior authorisation for processing – “the responsible party must obtain prior authorisation from the Regulator, in terms of section 58, prior to any processing if that responsible party plans to— process any unique identifiers of data subjects — for a purpose other than the one for which the identifier was specifically intended at collection; and with the aim of linking the information together with information processed by other responsible parties”. An example here would be a job application. One company (A) wants to use an AI system to determine whether a potential employee is at a higher risk of work-related injuries based on age or other qualifying factors. To do so, it needs to use information held by the potential employee’s previous employer (B). Before A may do so, it must obtain authorisation from the Information Regulator. The responsible party must consider not only what information will be processed by the AI system but also how the AI system will use it, to ensure that all data protection compliance requirements have been met.
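To make these two obligations concrete, here is a minimal sketch in Python of how a business might build them into an AI-assisted workflow. POPIA prescribes outcomes, not implementations, so every name and structure below is our own hypothetical illustration: a guard that blocks decisions resting solely on an automated profile (section 71(1)), and a gate that blocks the cross-party linking of unique identifiers until the Information Regulator has authorised it (section 57(1)(a)).

```python
from dataclasses import dataclass

# Hypothetical compliance guards illustrating POPIA sections 71(1) and 57(1)(a).
# POPIA does not prescribe any implementation; all names here are illustrative.

@dataclass
class LoanDecision:
    applicant_id: str
    ai_profile_score: float   # creditworthiness profile produced by the AI system
    human_review_done: bool   # has a natural person reviewed the profile?
    outcome: str              # "grant" or "reject"

def finalise_decision(decision: LoanDecision) -> LoanDecision:
    """Section 71(1) guard: a decision with legal consequences may not be based
    solely on automated profiling, so a human reviewer must sign off first."""
    if not decision.human_review_done:
        raise PermissionError(
            "POPIA s71(1): decision may not rest solely on automated profiling; "
            "route the application to a human reviewer before finalising."
        )
    return decision

def check_identifier_linking(purpose_at_collection: str,
                             intended_purpose: str,
                             links_other_responsible_party: bool,
                             regulator_authorised: bool) -> None:
    """Section 57(1)(a) gate: processing unique identifiers for a new purpose,
    linked with another responsible party's data, needs prior authorisation."""
    new_purpose = intended_purpose != purpose_at_collection
    if new_purpose and links_other_responsible_party and not regulator_authorised:
        raise PermissionError(
            "POPIA s57(1)(a): obtain prior authorisation from the Information "
            "Regulator before this processing begins."
        )
```

In the loan example, the AI system’s profile becomes one input into a human decision rather than the decision itself; in the employment example, company A’s system simply refuses to run until the Regulator’s authorisation is recorded.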
From a global perspective, the GDPR, regarded as one of the toughest privacy and security laws in the world, was drafted and passed by the European Union (EU) and came into effect on 25 May 2018.
While it imposes obligations on organisations within the EU, it also applies to organisations anywhere in the world, so long as they target or collect data relating to people in the EU. The GDPR levies harsh fines against those who violate its privacy and security standards, with penalties reaching into the tens of millions of euros.
With the GDPR, Europe is signalling its firm stance on data privacy and security at a time when more people are entrusting their personal data to cloud services and breaches are a daily occurrence. The regulation itself is large, far-reaching, and fairly light on specifics, making GDPR compliance a daunting prospect, particularly for small and medium-sized enterprises (SMEs) (GDPR.EU).
But… is AI being successfully regulated?
As set out in Regulating AI: 3 experts explain why it’s difficult to do and important to get right -
“To regulate AI well, you must first define AI and understand anticipated AI risks and benefits. Legally defining AI is important to identify what is subject to the law. But AI technologies are still evolving, so it is hard to pin down a stable legal definition”.
And it would seem – as of 2023 – that global legal systems agree, because the formal regulation of AI is still patchy, to say the least.
In fact, the Future of Life Institute published an open letter, dated 22 March 2023, with signatories including Elon Musk and Steve Wozniak, calling for a six-month moratorium on the development of high-functionality AI to allow the world to decide how to ensure that AI serves rather than destroys humanity (again, The Terminator running through our collective consciousness).
But when we looked around the world to see which countries have actually enacted laws that regulate AI, we came up somewhat empty-handed.
For example, South Africa currently has no laws that specifically regulate AI (besides, of course, POPIA, which as we already know regulates the automated processing of data). There was a positive step towards formalising AI in 2019 in the form of the Presidential Commission on the Fourth Industrial Revolution (4IR), aimed at prioritising interventions to take advantage of rapid technological change. However, Wits has pointed out that there is a “huge gap between SA’s 4IR strategy and what the commission recommends”.
This means that, should South Africa look to draft its own AI legislation, it may have to use foreign legislation as a basis – adapted, of course, to meet local challenges. In this regard, read the Daily Maverick’s article South Africa faces many challenges in regulating the use of artificial intelligence for more information.
What about elsewhere?
Ø In April 2021, the European Commission presented its proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules for AI, also known as the AI Act of the European Union.
The EU AI Act sets out an overarching framework for governing AI at EU level, imposing requirements and obligations on its developers, deployers and users, together with regulatory oversight. The framework is underpinned by a risk-categorisation system for AI, with 'high risk' systems subject to the most stringent obligations and 'unacceptable risk' AI banned outright (a toy sketch of such a tiering scheme follows after this survey). The EU is hoping to pass the legislation by the end of 2023 (Taylor Wessing).
Ø On 16 June 2022, the Canadian federal government tabled Bill C-27, also known as the Digital Charter Implementation Act, 2022. The Digital Charter Implementation Act, 2022 introduces three proposed acts: the Consumer Privacy Protection Act; the Artificial Intelligence and Data Act (AIDA), Canada’s first AI Act; and the Personal Information and Data Protection Tribunal Act.
The proposed Consumer Privacy Protection Act will address the needs of Canadians who rely on digital technology and respond to feedback received on previously proposed legislation. It will ensure that the privacy of Canadians is protected and that innovative businesses can benefit from clear rules as technology continues to evolve.
AIDA seeks to introduce new rules to strengthen Canadians’ trust in the development and deployment of AI systems, including:
- protecting Canadians by ensuring high-impact AI systems are developed and deployed in a way that identifies, assesses, and mitigates the risks of harm and bias;
- establishing an AI and Data Commissioner to support the Minister of Innovation, Science, and Industry in fulfilling ministerial responsibilities under the Act, including by monitoring company compliance, ordering third-party audits, and sharing information with other regulators and enforcers as appropriate; and
- outlining clear criminal prohibitions and penalties regarding the use of data obtained unlawfully for AI development or where the reckless deployment of AI poses serious harm and where there is fraudulent intent to cause substantial economic loss through its deployment (Government of Canada).
Ø The United States – on 4 October 2022, the White House Office of Science and Technology Policy published its Blueprint for an AI Bill of Rights, guidance for the development, use and deployment of automated systems. The Blueprint consists of voluntary guidelines and recommendations and is therefore not a regulation in the normal sense of the word. It lists five principles intended to minimise potential harm from AI systems. On 18 August 2022, the National Institute of Standards and Technology (NIST) published the second draft of its AI Risk Management Framework for comment. The original version dates back to March 2022 and is based on a concept paper from December 2021. The AI Risk Management Framework is intended to help companies that develop or deploy AI systems to assess and manage the risks associated with these technologies (Taylor Wessing).
Ø On 29 March 2023, the United Kingdom’s Department for Science, Innovation and Technology published a White Paper on AI regulation. The UK White Paper sets out the ambition of making the UK "the best place in the world to build, test and use AI technology". To realise this ambition, it sets out five principles to guide the growth, development and use of AI across sectors –
- Principle 1: Safety, security, and robustness. This principle requires potential risks to be robustly and securely identified and managed;
- Principle 2: Appropriate transparency and “explainability”. Transparency requires that channels be created for the communication and dissemination of information on the AI tool. Explainability, as referred to in the UK White Paper, requires that people, to the extent possible, have access to and be able to interpret the AI tool’s decision-making process (a minimal sketch of this idea follows after this list);
- Principle 3: Fairness. AI tools should treat their users fairly and should not discriminate or lead to unfair outcomes;
- Principle 4: Accountability and governance. Measures must be deployed to ensure oversight over the AI tool and steps must be taken to ensure compliance with the principles set out in the UK White Paper; and
- Principle 5: Contestability and redress. Users of an AI tool should be entitled to contest and seek redress against adverse decisions made by AI (the Daily Maverick).
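What might “explainability” look like in practice? Here is a minimal sketch in Python – our own illustration with hypothetical weights, since the UK White Paper prescribes no implementation – of an AI tool that reports, alongside each decision, which inputs drove it and by how much:

```python
# A minimal sketch of 'explainability' as the UK White Paper uses the term:
# alongside its decision, the AI tool exposes which inputs drove the outcome.
# The model, weights and threshold are hypothetical, purely for illustration.

WEIGHTS = {"income": 0.5, "existing_debt": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0

def decide_with_explanation(applicant: dict[str, float]) -> dict:
    """Score an applicant and report each feature's contribution, so a person
    affected by the decision can interpret how it was reached."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    return {
        "decision": "approve" if score >= THRESHOLD else "decline",
        "score": round(score, 2),
        # reasons ranked by influence, strongest first
        "reasons": sorted(contributions.items(), key=lambda kv: -abs(kv[1])),
    }

print(decide_with_explanation(
    {"income": 3.0, "existing_debt": 1.0, "years_employed": 2.0}))
```

Even a toy explanation like this gives the affected person something concrete to contest under Principle 5.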
Ø Closer to home, Kenya published the Artificial Intelligence Practitioners' Guide in April 2023. According to the Global Partnership for Sustainable Development Data, the “AI Practitioners’ Guide is an actionable guidance framework to give concrete practical guidance for those involved in AI-based development and use and also help in shaping upcoming regulatory efforts undertaken by Kenyan regulators”. Taken from the foreword of the guide itself, the following is set out -
“The Guide was created by the Global Partnership for Sustainable Development Data (the Global Partnership) in collaboration with the Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ), under the umbrella of the Digital Transformation Centre Kenya. These artificial intelligence (AI) guidelines for practitioners have been co-developed with the AI community in Kenya and come at a critical time in the country’s development discourse, especially with increased focus on tech and digitization. We learned there’s a large and highly progressive, but siloed AI sector in Kenya. Areas span tackling critical elements related to the building blocks of AI (infrastructure, data, capacity and skills, investments, and financing), the principles of responsible AI and enabling factors (tools, barriers, bias, risks, and harms), and the legal landscape for AI in Kenya (legislative, regulatory, and ethical environment, national policy, and institutional frameworks)”.
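To give a feel for the risk-based approach that the EU AI Act takes (referred to in the survey above), here is a toy sketch in Python of how a risk-tier lookup might be organised. The four tiers mirror the Act’s broad categories, but the example obligations are simplified summaries of our own, not the Act’s legal text:

```python
from enum import Enum

# A toy illustration of the EU AI Act's proposed risk-categorisation approach.
# Tier names mirror the Act's broad categories; the obligations listed are
# simplified summaries for illustration, not the Act's legal definitions.

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring)
    HIGH = "high"                  # allowed, but the strictest obligations apply
    LIMITED = "limited"            # transparency obligations apply
    MINIMAL = "minimal"            # largely unregulated

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited - may not be placed on the EU market"],
    RiskTier.HIGH: ["conformity assessment", "risk management system",
                    "human oversight", "logging and traceability"],
    RiskTier.LIMITED: ["disclose to users that they are interacting with AI"],
    RiskTier.MINIMAL: ["no mandatory obligations (voluntary codes encouraged)"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Look up what a deployer would need to do for a system in a given tier."""
    return OBLIGATIONS[tier]

# CV-screening tools used in recruitment are commonly cited as 'high risk'
# under the proposal:
print(obligations_for(RiskTier.HIGH))
```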
Despite the developments set out above, there is currently no global approach to the regulation of AI. Various countries are at different stages of developing AI regulation, but with such differing views on exactly how to regulate AI, those developing and/or using it are left with little comfort.
Perhaps the six-month moratorium called for in the open letter is the right approach – pause the development of AI and wait for regulation to catch up. Whether that call will be taken to heart, we will have to wait and see. In the meantime, South Africa can only look to its international counterparts and see what route they take.
But one thing is extremely clear – AI needs to be (better) regulated.
If you have any questions on the information we have set out above or have a personal issue which you want to discuss with us, please don’t hesitate to contact us at NVDB Attorneys.
We are a law firm that considers honesty to be core to our business. We are a law firm that will provide you with clear advice and smart strategies - always keeping your best interests at heart!