Respect, protection, and promotion of human rights, fundamental freedoms, and human dignity
Living in peaceful, just, and interconnected societies
Ensuring diversity and inclusiveness
Environment and ecosystem flourishing
The UK's Alan Turing Institute defines AI ethics as a set of values, principles, and techniques that employ widely accepted standards of "right" and "wrong" to guide the development and use of AI technologies.
The Stanford Encyclopedia of Philosophy delves more deeply into the ethical issues that arise from AI systems as objects or tools made and used by humans. It provides general explanations of ethical issues such as privacy, manipulation, opacity, bias, human-robot interaction, employment, and the effects of autonomy, as well as ethics for machine systems. It also analyzes how existing positions and arguments interact with current technologies and, finally, what policy consequences may be drawn.
Wikipedia defines algorithmic bias as "systematic and repeatable errors that create unfair outcomes, such as privileging one arbitrary group of users over others." Algorithmic bias can present itself in many ways. One example provided by the Brookings Institution is:
Bias in online recruitment tools
Online retailer Amazon, whose global workforce is 60 percent male and where men hold 74 percent of the company’s managerial positions, discontinued a recruiting algorithm after discovering gender bias. The data that engineers used to create the algorithm were derived from the resumes submitted to Amazon over a 10-year period, which came predominantly from white males. The algorithm was taught to recognize word patterns in the resumes rather than relevant skill sets, and these data were benchmarked against the company’s predominantly male engineering department to determine an applicant’s fit. As a result, the AI software penalized any resume that contained the word “women’s” in the text and downgraded the resumes of women who attended women’s colleges.
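The mechanism behind this kind of bias is easy to reproduce in miniature. The following sketch is purely illustrative (the resumes, hiring labels, and model are invented here and are not Amazon's actual system): a simple text classifier trained on historically skewed hiring outcomes learns a negative weight for the token "women," even though the word carries no information about skill.

```python
# Toy illustration of bias inherited from historical data.
# All resumes and outcomes below are invented for this example.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "chess club captain, java developer",                  # historically hired
    "java developer, systems programming",                 # historically hired
    "captain of the women's chess club, java developer",   # historically rejected
    "women's college graduate, systems programming",       # historically rejected
]
hired = [1, 1, 0, 0]  # past (biased) decisions, not actual skill

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: the token "women" receives a negative
# coefficient, so any resume containing it is penalized.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(sorted(weights.items(), key=lambda kv: kv[1])[:3])
```

The model never sees a rule saying "penalize women"; it simply reproduces the pattern present in the historical outcomes it was given, which is exactly how word-pattern screening goes wrong.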
Several elements of facial recognition technology are widely viewed as unethical because of their potential for abuse and misuse. For example, some governments have used facial recognition technology to monitor and track citizens, violating their privacy and civil liberties. Facial recognition could also be used to target specific groups or individuals, leading to discrimination and bias.
Additionally, facial recognition software is less accurate for certain groups, such as people of color and women, due to bias in the algorithms and data used to train them. This can lead to false positives and false accusations, further perpetuating discrimination and injustice.
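This kind of disparity becomes visible only when accuracy is measured separately for each demographic group rather than in aggregate. The minimal sketch below, using invented match records rather than any real system's output, computes a per-group false positive rate, which is the basic measurement behind such audits.

```python
# Disaggregated evaluation: false positive rate per demographic group.
# Records are invented; each is (group, predicted_match, true_match).
from collections import defaultdict

results = [
    ("group_a", True, False), ("group_a", False, False),
    ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False),
    ("group_b", True, False), ("group_b", False, False),
]

false_pos = defaultdict(int)
negatives = defaultdict(int)
for group, predicted, actual in results:
    if not actual:              # only true non-matches can be false positives
        negatives[group] += 1
        if predicted:
            false_pos[group] += 1

for group in sorted(negatives):
    print(f"{group}: false positive rate = {false_pos[group] / negatives[group]:.0%}")
```

An aggregate accuracy number would hide the gap between the groups; reporting the rates side by side is what exposes it.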
Another concern is the lack of transparency and consent in facial recognition technology. Many people may not be aware that their image is being captured and analyzed, and may not have consented to the use of their data. According to Politico, Amazon has shared video from Ring doorbell cameras with law enforcement without owners' permission.
Chatbots are trained through Natural Language Processing (NLP). IBM defines Natural Language Processing as:
"A branch of computer science—and more specifically, the branch of artificial intelligence or AI—concerned with giving computers the ability to understand text and spoken words in much the same way human beings can.
"NLP combines computational linguistics—rule-based modeling of human language—with statistical, machine learning, and deep learning models. Together, these technologies enable computers to process human language in text or voice data and ‘understand’ its full meaning, complete with the speaker or writer’s intent and sentiment."
This means that the more conversations a chatbot processes, the more it learns and the more capable it becomes.
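As a rough illustration of the statistical half of that definition, the sketch below trains a tiny bag-of-words classifier to map user messages to chatbot intents. The intents and phrases are invented for this example; production chatbots train on vastly larger conversation logs, which is why more conversations make them better.

```python
# Minimal statistical NLP: classify user messages into chatbot intents.
# Intents and example phrases are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "hi there", "hello, anyone around?",       # greeting
    "where is my package", "track my order",   # order status
    "i want my money back", "refund please",   # refund
]
intents = ["greet", "greet", "order", "order", "refund", "refund"]

chatbot = make_pipeline(TfidfVectorizer(), LogisticRegression())
chatbot.fit(messages, intents)

print(chatbot.predict(["hello!", "can you track my package?"]))
# expected: ['greet' 'order']
```

Every additional labeled conversation refines these statistical associations, which is the sense in which a chatbot "learns" by processing more conversations.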
Training AI bots can also be a highly exploitative practice.
An investigative report by Time revealed that OpenAI, the company behind ChatGPT, outsourced work to Kenyan laborers who earned less than $2 per hour. OpenAI is one of the most valuable AI companies and has been in talks with investors to raise funds at a valuation of $29 billion.
Another element of the "ethical grey area" of training AI chatbots is the use of public websites, like Reddit, as sources of conversations to analyze and interpret for further training. As The New York Times reported, "In recent years, Reddit's chats have been a free teaching aid for companies like Google, OpenAI, and Microsoft. Those companies are using Reddit's conversations in developing the giant artificial intelligence systems that many in Silicon Valley think are on their way to becoming the tech industry's next big thing."
While generative AI tools can help users with tasks such as brainstorming new ideas, organizing existing information, mapping out scholarly discussions, or summarizing sources, they are also notorious for not relying entirely on factual information or rigorous research strategies. In fact, they are known for producing "hallucinations," a term used in AI research to describe false information that the system generates and presents as fact. Often, these "hallucinations" are delivered in a confident manner and consist of partially or fully fabricated citations or facts.
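One practical response to fabricated citations is to verify them against an authoritative index rather than trusting the tool's confident tone. The hedged sketch below uses the public Crossref REST API to check whether a cited DOI actually exists; this is only a partial safeguard, since a real DOI can still be attached to a claim it does not support, and the example DOIs here are chosen purely for illustration.

```python
# Check whether a DOI cited by a generative AI tool actually resolves.
# Uses the public Crossref REST API; a 404 response means Crossref has
# no record of the DOI, a strong hint that the citation was fabricated.
import requests

def doi_exists(doi: str) -> bool:
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

print(doi_exists("10.1038/nature14539"))     # a real paper -> True
print(doi_exists("10.9999/not.a.real.doi"))  # fabricated-looking -> False
```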
Certain AI tools have even been used to intentionally produce false images or audiovisual recordings to spread misinformation and mislead audiences. Referred to as "deepfakes," these materials can be used to subvert democratic processes and are therefore particularly dangerous.
Additionally, the information presented by generative AI tools may lack currency, as some systems do not necessarily have access to the latest information. Rather, they may have been trained on past datasets, thus generating dated representations of current events and the related information landscape.