The alarming results from thispersondoesnotexist.com suggest that many other AI systems may harbour similar flaws. Such flaws were echoed by Charles (2023), who argued that mixed-race individuals are at high risk of being misclassified as African, Indian, or Hispanic. Charles raised these concerns following a racial classification experiment with computer vision; his findings, drawn from an undisclosed model, suggest that the system struggled to categorise nuanced racial categories. These concerns are alarming and highlight the challenges faced by AI developers. Racial identities are diverse, and in the case of South Africa, numerous ethnic groups need to be catered for by this encroaching AI.
Thispersondoesnotexist.com’s poor performance thus points to the need for sufficient guardrails to curb extreme discrimination by AI algorithms. As artificial intelligence becomes further mainstreamed, South Africa’s lack of regulation exposes the country to a range of context-specific risks. The forecast cited by Rutkin (2013) suggests that AI might erase nearly half of all jobs, while a less pessimistic view from Strack, Carrasco, Kolo, Nouri, Priddis and George (2021) anticipates the creation of new jobs in the realm of these new technologies. Job creation is critical in the South African context, and one of the central risks is the loss of opportunity for the historically marginalised. The issue of race should therefore not be allowed to become a hindrance when these technologies are deployed.
In some instances, the poor performance of AI witnessed in this study could even pose a hazard to human health and safety. The results noted above suggest that algorithmic bias is a reality in some AI systems. This paper found that the studied website failed to generate a single black face, and in a country where black people are the majority, adopting such technology could be hazardous. In the case of self-driving cars and facial recognition software, the failure to recognise historically subjugated races may lead to the proliferation of dysfunctional AI. The risk of lives lost and of entrenched prejudice poses a dilemma for policymakers globally and in South Africa. The South African government's Presidential Commission on the Fourth Industrial Revolution was established to ensure that South Africa is well positioned for the era of digitisation; however, I fear that the state is too quick in its adoption, and that eagerness should give way to patience.
It is thus imperative to ensure that the algorithms adopted are sufficiently vetted and studied before launch; otherwise, the encroaching self-driving cars may pose a serious challenge to the country if racial bias remains embedded in these algorithms. There are cases in which algorithmic biases have been partly remedied. Raji and Buolamwini (2019) explored numerous attempts at algorithmic auditing, and one audited approach in their results achieved a 17.7–30.4% improvement in classification accuracy for darker-skinned women. These encouraging results highlight the benefits of algorithmic auditing. It is equally important not to romanticise AI: if left unregulated, the technology could be prompted to help create computer viruses, chemical weapons, bombs, and more. Urbina, Lentzos, Invernizzi and Ekins (2022) found that actors with the right expertise and malicious intent could use AI to identify toxic compounds for chemical weapons.
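To make the mechanics of such an audit concrete, the sketch below illustrates one common form it can take: disaggregating a classifier's accuracy by demographic subgroup so that gaps of the kind Raji and Buolamwini report become visible rather than hidden inside an aggregate score. This is a minimal illustration under assumed inputs, not the authors' actual audit pipeline; the records, group labels, and sample values are hypothetical.

```python
# Minimal sketch of a disaggregated (per-subgroup) accuracy audit.
# Hypothetical data: each record holds the model's prediction, the true
# label, and a self-identified demographic group. None of this reflects
# a real dataset or any vendor's audit pipeline.

from collections import defaultdict

def audit_by_group(records):
    """Return overall and per-group accuracy for (prediction, true_label, group) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for prediction, true_label, group in records:
        total[group] += 1
        if prediction == true_label:
            correct[group] += 1
    per_group = {g: correct[g] / total[g] for g in total}
    overall = sum(correct.values()) / sum(total.values())
    return overall, per_group

# Illustrative (fabricated) example: accuracy can look acceptable overall
# while one subgroup is served far worse -- exactly what disaggregation exposes.
sample = [
    ("smiling", "smiling", "lighter-skinned men"),
    ("smiling", "smiling", "lighter-skinned men"),
    ("neutral", "smiling", "darker-skinned women"),
    ("smiling", "smiling", "darker-skinned women"),
]
overall, per_group = audit_by_group(sample)
print(f"overall accuracy: {overall:.2f}")
for group, accuracy in sorted(per_group.items()):
    print(f"  {group}: {accuracy:.2f}")
```

In a real audit, the same disaggregation would be run on a properly sampled benchmark, and the per-group gap, rather than the overall figure, would be the headline result.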
This paper’s focus on the AI's failure to generate black faces is worrying because, if such poorly trained algorithms are weaponised in defence drones, the result could be poor performance, misidentification, and attacks on innocent people (Nasim, Ali and Kulsoom). This misidentification concern links to the racial profiling concern raised by Curtis and Brolan (2023). By acknowledging these risks and implementing the necessary regulations, it may be possible to mitigate some of them. Algorithmic auditing is one of several approaches that have been used to govern AI. There are also mathematical approaches, such as reinforcement learning, in which an AI system is retrained on its limited datasets to enhance accuracy (Mbalaka, 2023), although Mbalaka (2023) argues that such attempts can also contribute to algorithmic hallucinations in ill-trained AI. This study explored issues with computer vision and the generation of images of the marginalised, but future studies can address other niches of AI algorithms to inform better algorithmic design.

It is important to acknowledge that regulating AI will be a serious challenge for the state. The EU AI Act, noted earlier, is a proposal that addresses risks including algorithmic bias. However, guaranteeing compliance on the internet is a cause for concern. Historically, the internet has been a chaotic landscape, encompassing heinous websites that circumvented state regulation, including piracy websites such as The Pirate Bay and the infamous Silk Road. Silk Road, which was eventually shut down, is merely one of a plethora of illegal websites that have operated on the internet. These circumvention dilemmas create an opening for the immoral and unethical publishing of AI tools that disregard these concerns. One tool gaining notoriety, however, is DarkBERT, an AI model trained on a large corpus of dark web data and framed as a crime detector for dark web activity. If a similar programme were repurposed, it might be possible to identify and regulate problematic and illegal AI tools that disregard algorithmic justice or act as tools for criminal activity. Such tools could support the global effort towards warranted internet censorship. In the case of South Africa, internet censorship policies are legislatively enforced by the Films and Publications Amendment Act (11 of 2019).
The Films and Publications Amendment Act (11 of 2019) was introduced to help censor harmful and illegal content in South Africa. The Act aimed to curb issues such as child pornography, pornography, piracy, extremist organisations, and other harmful material that may incite violence or spread misinformation. The Act, however, now has to be amended to function in the era of artificial intelligence; one could also argue that real solutions would require entirely new legislation and new departments for AI affairs. The enhanced specificity could be guided by the tenets of the EU AI Act model and adapted to include South Africa-specific problems. Another issue seen in some AI algorithms is the cultural homogenisation raised by Mbalaka (2023). This issue was evident when two AI algorithms generated errors, or hallucinations, in depictions of cultural attire, suggesting poor training of the models. Such cultural hallucinations can produce offensive outputs that may offend minority groups. The issue of AI regulation transcends these cultural considerations and extends into the realm of criminality, with examples including revenge pornography and cybercrime (Fido, Rao and Harper, 2022; Henry, Powell and Flynn, 2018).
If guardrails are not put in place, the technology could be exploited by malicious actors who weaponise it for insidious ends. South Africa, like much of the globe, therefore needs to ensure that algorithmic guardrails are considered to help mitigate both criminality and structurally induced underperformance. Proposed regulations should address malicious uses such as the revenge pornography discussed above and ensure that sufficient guardrails are in place for these AI models. AI has come a long way since its inception at the Dartmouth conference (Moore 2006), but the technology remains in its infancy and encompasses numerous issues that can be difficult, if not impossible, to mitigate (Srinivasan and Chander 2021). It is important that this technology’s evolution is closely monitored by policymakers and regulatory authorities to address the risks of this escalating age of AI. The dark web, however, remains a dilemma that poses a serious challenge for policymakers. The sections that follow explore how the dark web makes regulation problematic, and how the open-source movement and the availability of source code could pose a further challenge for regulators. These discussions highlight some of the limitations that may accompany attempts at regulation.
6.4. The Moral Dilemmas of Race-based Algorithmic Auditing
The South African government, like all governments globally, is in a precarious position in trying to remedy the predicament of structurally induced limitations in data-driven applications such as AI. Chi, Lurie and Mulligan (2021) raised the same concerns in their study of the ramifications of this structurally induced omission of minorities. They found that structural inequalities have forced corporations to re-strategise and redesign workflows and processes to mitigate the impact these racial issues may have on AI and other data-driven applications. These scholars made one unorthodox remark about this predicament: companies like Google cannot resolve the injustices that civil rights movements seek to address, and instead need to strategise and build systems that navigate this environment. This paper’s results (Fig. 1) are merely an indicator of how data-driven tools can fail when provisions are not made for these systemically induced data deficiencies. The point made by Chi, Lurie and Mulligan is important because technology firms cannot simply wait for systemic change to arrive; they need to restructure their workflows and personnel to ensure that diverse groups are not negated in this technological evolution.
Another moral dilemma, raised by Cave and Dihal, concerns how these systemic frailties can lead to the racialisation of AI systems. Reasoning from critical race theory, Cave and Dihal argue that the historical subjugation of races deemed inferior has contributed to computational systems being racialised to portray a certain image. In their study, this point is supported by an evaluation of the phenotypical appearance of humanoid and robotic bodies in AI systems. Their point is warranted; however, some could argue that the racialisation of the humanoid body is merely a cosmetic design choice, akin to the design of a Barbie doll. The problem that needs to be emphasised instead is the performance of these AI systems, because cosmetic designs are correctable. Even with the intention to remedy this predicament, technology firms face another moral dilemma: who gets to determine the classification criteria (Mishra and Gorana, 2021)?
Similarly to how Buolamwini (2017) argued for representation, a lack of representation may lead to an inability to discern certain racial nuances, for example the distinction between Latin Americans and mixed-race South Africans. The paper by Charles noted above, together with this paper's own limitations, highlights the ambiguity involved in identifying mixed-race people. One could struggle to distinguish between a Latin American and a coloured or fair-skinned black person. However, the quest for race identification could become unnecessary if the performance issues were resolved. The problem only intensifies when performance appears discriminatory, as in Fig. 1’s poor representation of minorities. One remedy is to acknowledge that systemic issues are prevalent and to ensure AI developers make provisions for them through iterative software updates as reviews arrive. Another approach is to emphasise customer feedback and subsequent customer-informed revisions; the availability of feedback could help guide companies towards better-informed AI. This qualitative feedback approach is, of course, problematic in that it is a slow process (Fabijan, Olsson, and Bosch, 2015). There are, however, ways to streamline the customer feedback experience to address glaring coding flaws, and it seems sensible to implement such changes. The next section delves into how restrictive parameters could be used to regulate AI.
6.5. Restrictive Parameters for AI Applications
The state can compel ISPs to block websites and applications that have not been approved by a regulatory authority. The use of restrictive parameters needs to be considered because it can potentially mitigate some illegal use cases of AI. Some companies opt to govern themselves before the government enforces it, and OpenAI is one such example: it imposes such parameters on its DALL-E text-to-image program. According to OpenAI’s GitHub, DALL-E is guided by the following restrictive parameters (OpenAI 2022a; OpenAI 2022b), and a minimal sketch of how such a filter might operate follows the list:
- The prevention of hyper-realistic faces of people and celebrities.
- The prevention of offensive content, such as (American) hate symbols.
- The prevention of illegal image renders.
- Sexualised content.
- Suggestive images of children.
- Violent content.
- Political content.
- Toxic content.
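To illustrate how restrictive parameters of this kind can be operationalised, the sketch below shows a deliberately naive request filter placed in front of a text-to-image model. It is an assumption-laden simplification, not OpenAI's actual moderation system, which relies on trained classifiers and human review rather than keyword lists; the category names and keywords here are hypothetical.

```python
# Naive sketch of a "restrictive parameter" layer placed in front of a
# text-to-image model. This is NOT OpenAI's implementation; production
# systems use trained moderation classifiers, not keyword lists.

BLOCKED_CATEGORIES = {
    "sexual": ["nude", "naked"],
    "violence": ["gore", "beheading"],
    "hate": ["hate symbol"],
}

def screen_prompt(prompt: str):
    """Return (allowed, reason). Reject prompts that match a blocked category."""
    lowered = prompt.lower()
    for category, keywords in BLOCKED_CATEGORIES.items():
        if any(keyword in lowered for keyword in keywords):
            return False, f"request refused: {category} content is not permitted"
    return True, "request accepted"

def generate_image(prompt: str):
    allowed, reason = screen_prompt(prompt)
    if not allowed:
        return reason  # refuse to render, as the policy list above requires
    return f"[image rendered for: {prompt}]"  # placeholder for the actual model call

print(generate_image("a watercolour of Table Mountain at sunrise"))
print(generate_image("a naked woman"))
```

The design point is the control flow rather than the keyword list: the request is screened, refused with a stated reason, or passed to the model, which is the behaviour the restrictive parameters above describe.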
The first category already works to prevent the rendering of real faces, which could otherwise be used to commit fraud. FraudGPT is a concerning tool created without such restrictive parameters (Economic Times, 2023); it gives its users the ability to generate material for cybercriminal activities such as phishing scams. AI needs restrictive parameters to prevent this kind of weaponisation. In practice, OpenAI’s DALL-E, upon receiving requests to generate offensive images, such as naked women or other socially unacceptable content, can refuse to render them. If this approach were made a requirement for all AI applications, the emergence of offensive AI might be reduced. An already existing regulatory instrument is the European Union’s (EU) AI Act, which was created to establish a global regulatory initiative for the use of AI. The word global may be somewhat hegemonic given the Act's proposed imposition beyond the EU, but the law does encompass various necessary considerations that may warrant the proposed global adoption. There remains a need to ensure that the needs of non-EU member states are considered in this law so that geopolitically specific considerations are identified and accommodated. The aspects the Act covers include:
- Banning AI systems that manipulate human behaviour in a way that could be harmful or deceptive.
- Establishing rigorous testing of AI algorithms and approval certification before they can be deployed.
- Ensuring that clear and transparent information is provided for users interacting with an AI system.
- Establishing requirements for developers to ensure that AI systems do not reinforce or exacerbate existing biases or discrimination.
- Establishing stronger data protections and rules for AI systems that process personal data.
- Establishing a European Artificial Intelligence Board that oversees the regulation of AI within the EU.
The argument raised in this paper appears to be something this Act actively covers, but the potential migration of illicit activity to the dark web may remain a problem for regulators in the future (McCoy, Bauer, Grunwald, Kohno, and Sicker, 2008). Still, a global effort is required to help ensure that published AI qualifies to serve society ethically. To ensure that these parameters are adhered to, mandatory pre-launch testing could be introduced so that regulators can assess flaws at a peer-review stage. Much as peer review is standard for academic publications, this process could be made mandatory for private companies operating under a regulatory authority.
6.6. A Need for Pre-launch Testing?
If the technology is not tested before it is launched to the public, potential issues may not be identified and revised. It is thus imperative that published AI algorithms are rigorously vetted and discussed by data ethicists and computational social scientists. This can be done through a required certificate, attained after a rigorous peer-review test by diverse groups and researchers. It is also important to build state capacity and competency in AI. These audits matter because they help ensure that published AI cannot generate dangerous or unfair biases or facilitate malicious activity. The case of Tay, the racist, sexist AI chatbot, highlights the importance of close monitoring. Tay was launched in 2016 and, upon its launch, quickly absorbed harmful, dangerous, and prejudiced worldviews (Davis 2016), whether through purposeful sabotage or a design flaw. What is important to note is that, since Tay’s failure, major AI producers have built restrictions into their algorithms to mitigate such biases. Unfortunately, these regulatory efforts may become difficult, especially where the code used to create AI systems is public knowledge: the internet could quickly become flooded with copycat software that lacks embedded restrictive parameters. The Fig. 1 results from Nvidia’s image generator need to be looked at critically to help foster a future of inclusive AI, AI that is tested for performance on minorities so that such poor performance can be mitigated.
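One way the pre-launch certification proposed here could be operationalised is as an acceptance test run before release: sample a batch of generated faces, have them labelled by population group (by human annotators or an independent, vetted classifier), and fail the release if any group falls below an agreed representation floor. The sketch below is a hypothetical illustration of such a gate, not an existing regulator's procedure; the group list, threshold, and sample values are assumptions introduced for illustration.

```python
# Hypothetical pre-launch "representation gate" for a face-generation model.
# Assumes an external labelling step (human annotators or a vetted classifier)
# has already assigned each generated face to a population group. The group
# list and threshold are illustrative assumptions, not drawn from any existing
# regulation or from this paper.

from collections import Counter

REQUIRED_GROUPS = ["black", "coloured", "indian/asian", "white"]
MIN_SHARE = 0.10  # each group must make up at least 10% of the sampled output

def representation_gate(group_labels):
    """Return (passed, report) for a list of per-image group labels."""
    counts = Counter(group_labels)
    report = {}
    passed = True
    for group in REQUIRED_GROUPS:
        share = counts.get(group, 0) / len(group_labels)
        report[group] = round(share, 3)
        if share < MIN_SHARE:
            passed = False
    return passed, report

# Illustrative run: a sample containing no black faces, echoing the Fig. 1
# finding, fails certification outright.
labels = ["white"] * 400 + ["indian/asian"] * 60 + ["coloured"] * 40
passed, report = representation_gate(labels)
print("certification passed:", passed)
print("observed shares:", report)
```

A regulator could treat a failed gate of this kind as grounds to withhold the certificate described above until the developer retrains or rebalances the model.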