Awful AI

Awful AI is a curated list to track current scary usages of AI - hoping to raise awareness of its misuses in society

Artificial intelligence in its current state is unfair, easily susceptible to attacks and notoriously difficult to control. Often, AI systems and predictions amplify existing systematic biases even when the data is balanced. Nevertheless, more and more concerning uses of AI technology are appearing in the wild. This list aims to track all of them. We hope that Awful AI can be a platform to spur discussion for the development of possible preventive technology (to fight back!).

You can cite the list and raise more awareness through Zenodo.

Table of Contents
1. Awful AI Categories
    1.1. Discrimination
    1.2. Influencing, Disinformation, and Fakes
    1.3. Surveillance
    1.4. Data Crimes
    1.5. Social Credit Systems
    1.6. Misleading Platforms and Scams
    1.7. Accelerating the Climate Emergency
    1.8. Autonomous Weapon Systems and Military
2. Contestational AI Efforts
    2.1. Contestational Research
    2.2. Contestational Tech Projects
3. Annual Awful AI Award

Awful AI Categories

Discrimination

This category highlights AI applications that have raised concerns due to their potential for discrimination, ranging from racial and gender biases to unethical uses in law enforcement.

| Application | Summary | Details | References |
| --- | --- | --- | --- |
| Dermatology App | Google's dermatology app, not fully effective for people with darker skin. | <details><summary>Show Details</summary>Trained on a dataset in which only 3.5 percent of images came from people with darker skin, Google's dermatology app could misclassify people of color. The app was released without proper testing and in the knowledge that it might not work for a large part of the population. People unaware of this issue may spend time and money treating an illness they do not have, or believe they need not worry about one they do have.</details> | Vice Article |
| AI-based Gaydar | AI claimed to identify sexual orientation from facial images. | <details><summary>Show Details</summary>Artificial intelligence can accurately guess whether people are gay or straight based on photos of their faces, according to new research that suggests machines can have significantly better “gaydar” than humans.</details> | OSF, The Guardian Summary |
| Infer Genetic Disease From Face | DeepGestalt AI identifies genetic disorders from facial images. | <details><summary>Show Details</summary>DeepGestalt can accurately identify some rare genetic disorders using a photograph of a patient's face. This could lead to payers and employers analyzing facial images and discriminating against individuals who have pre-existing conditions or are developing medical complications.</details> | CNN Article, Nature Paper |
| Racist Chat Bots | Microsoft's Tay became racist after learning from Twitter. | <details><summary>Show Details</summary>Microsoft's chatbot Tay spent a day learning from Twitter and began spouting antisemitic messages.</details> | The Guardian |
| Racist Auto Tag and Recognition | Google and Amazon's image recognition programs showed racial bias. | <details><summary>Show Details</summary>A Google image recognition program labeled the faces of several black people as gorillas. Amazon's Rekognition labeled darker-skinned women as men 31 percent of the time; lighter-skinned women were misidentified 7 percent of the time. Rekognition helps the Washington County Sheriff's Office in Oregon speed up the identification of suspects from hundreds of thousands of photo records. Zoom's face recognition, like many others, struggles to recognize black faces.</details> | The Guardian, ABC News, Wired |
| Depixelizer | AI consistently changes Obama's image to a white person. | <details><summary>Show Details</summary>An algorithm that transforms a low-resolution image into a depixelized one consistently turns a pixelated photo of Barack Obama into a white person due to bias.</details> | The Verge |
| Twitter Autocrop | Twitter's image crop feature showed bias and discrimination. | <details><summary>Show Details</summary>Twitter takes a user's image and crops it to generate a preview. Users noticed that the automatic crop tends to focus on women's chests and discriminates against black people.</details> | Vice |
| ChatGPT and LLMs | Large Language Models exhibit worrying biases. | <details><summary>Show Details</summary>Large Language Models (LLMs), like ChatGPT, inherit worrying biases from the datasets they were trained on: when asked to write a program that would determine “whether a person should be tortured,” OpenAI's answer is simple: if they're from North Korea, Syria, or Iran, the answer is yes. While OpenAI is actively trying to prevent harmful outputs, users have found ways to circumvent these safeguards.</details> | The Intercept |
| Autograding | UK's grade prediction algorithm was biased against poor students. | <details><summary>Show Details</summary>An algorithm used in the UK to predict final grades from assessments made at the beginning of the semester and from historical data was found to be biased against students from poor backgrounds.</details> | The Verge |
| Sexist Recruiting | AI recruiting tools showed bias against women. | <details><summary>Show Details</summary>AI-based recruiting tools such as HireVue, PredictiveHire, or an Amazon internal software scan various features, such as video or voice data of job applicants and their CVs, to tell whether they're worth hiring. In the case of Amazon, the algorithm quickly taught itself to prefer male candidates over female ones, penalizing CVs that included the word "women's," such as "women's chess club captain." It also reportedly downgraded graduates of two women's colleges.</details> | Telegraph, Reuters, Washington Post |
| Sexist Image Generation | AI image-generation algorithms showed sexist tendencies. | <details><summary>Show Details</summary>Researchers have demonstrated that AI-based image-generation algorithms can exhibit racist and sexist ideas. Feed one a photo of a man cropped right below his neck, and 43% of the time it will autocomplete him wearing a suit. Feed the same one a cropped photo of a woman, even a famous woman like US Representative Alexandria Ocasio-Cortez, and 53% of the time it will autocomplete her wearing a low-cut top or bikini. Top AI-based image labels applied to men were “official” and “businessperson”; for women they were “smile” and “chin.”</details> | Technology Review, Wired |
| Lensa | Lensa AI app generates sexualized images without consent. | <details><summary>Show Details</summary>Lensa, a viral AI avatar app, undresses women without their consent. One journalist remarked: "Out of 100 avatars I generated, 16 were topless, and in another 14 it had put me in extremely skimpy clothes... I have Asian heritage...My white female colleague got significantly fewer sexualized images. Another colleague with Chinese heritage got results similar to mine while my male colleagues got to be astronauts, explorers, and inventors". Lensa also reportedly generates nudes from childhood photos.</details> | Prisma AI, Technology Review, Wired |
| Gender Detection from Names | Genderify's AI showed bias in gender identification. | <details><summary>Show Details</summary>Genderify was a biased service that promised to identify someone's gender by analyzing their name, email address, or username with the help of AI. According to Genderify, Meghan Smith is a woman, but Dr. Meghan Smith is a man.</details> | The Verge |
| GRADE | GRADE algorithm at UT showed bias in PhD applications. | <details><summary>Show Details</summary>GRADE, an algorithm that filtered applications to the PhD program at UT, was found to be biased. In certain tests, the algorithm ignored letters of recommendation and statements of purpose, which usually help applicants who do not have a perfect GPA. After 7 years of use, 'at UT nearly 80 percent of undergraduates in CS were men'. The university recently decided to phase out the algorithm; the official reason is that it is too difficult to maintain.</details> | Inside Higher Ed |
| PredPol | PredPol potentially reinforces over-policing in minority neighborhoods. | <details><summary>Show Details</summary>PredPol, a program for police departments that predicts hotspots where future crime might occur, could potentially get stuck in a feedback loop of over-policing majority black and brown neighbourhoods.</details> | PredPol, The Marshall Project, Twitter |
| COMPAS | COMPAS algorithm shows racial bias in risk assessment. | <details><summary>Show Details</summary>COMPAS is a risk assessment algorithm used in legal courts by the state of Wisconsin to predict the risk of recidivism. Its manufacturer refuses to disclose the proprietary algorithm and only the final risk assessment score is known. The algorithm is biased against blacks (COMPAS performs worse than a human evaluator).</details> | Equivant, ProPublica, NYT |
| Infer Criminality From Your Face | AI program attempts to infer criminality from facial features. | <details><summary>Show Details</summary>A program that judges whether you are a criminal from your facial features.</details> | Arxiv, Technology Review |
| Forensic Sketch AI-rtist | AI-rtist for forensic sketches might reinforce biases. | <details><summary>Show Details</summary>A generative AI-rtist that creates "hyper-realistic forensic sketches" from a witness description. This is dangerous, as generative AI models have been shown to be heavily biased for specific prompts.</details> | Twitter, Hugging Face |
| Homeland Security | Homeland Security's AI aims to predict high-risk passengers. | <details><summary>Show Details</summary>Homeland Security, with DataRobot, is creating a terrorist-predicting algorithm that tries to predict whether a passenger or a group of passengers is high-risk by looking at age, domestic address, destination and/or transit airports, route information (one-way or round trip), duration of the stay, luggage information, etc., and comparing these with known instances.</details> | The Intercept, DataRobot |
| ATLAS | ATLAS software flags naturalized Americans for potential citizenship revocation. | <details><summary>Show Details</summary>Homeland Security's ATLAS software scans the records of millions of immigrants and can automatically flag naturalized Americans to potentially have their citizenship revoked based on secret criteria. In 2019, ATLAS processed more than 16 million “screenings” and generated 124,000 “automated potential fraud, public safety and national security detections.”</details> | The Intercept |
| iBorderCtrl | AI polygraph test for EU travelers may show bias. | <details><summary>Show Details</summary>AI-based polygraph test for travellers entering the European Union (trial phase). It is likely to produce a high number of false positives, considering how many people cross the EU's borders every day. Furthermore, facial recognition algorithms are prone to racial bias.</details> | European Commission, Gizmodo |
| Faception | Faception claims to reveal traits based on facial features. | <details><summary>Show Details</summary>Based on facial features, Faception claims that it can reveal personality traits, e.g. "Extrovert, a person with High IQ, Professional Poker Player or a threat". They build models that classify faces into categories such as Pedophile, Terrorist, White-Collar Offenders and Bingo Players without prior knowledge.</details> | Faception, Faception Classifiers, YouTube |
| Persecuting Ethnic Minorities | Chinese AI algorithms target Uyghur minority. | <details><summary>Show Details</summary>Chinese start-ups have built algorithms that allow the government of the People's Republic of China to automatically track Uyghur people. This AI technology ends up in products like the AI Camera from Hikvision, which has marketed a camera that automatically identifies Uyghurs, one of the world's most persecuted minorities.</details> | The Guardian, NYT |
| SyRI | Dutch AI system SyRI deemed discriminatory. | <details><summary>Show Details</summary>'Systeem Risico Indicatie' or 'Risk Indication System' was an AI-based anti-fraud system used by the Dutch government from 2008 to 2020. The system used large amounts of personal data held by the government to assess whether an individual was likely to commit fraud. Individuals the system flagged were recorded on a special list that could block them from accessing certain government services. SyRI was discriminatory in its judgements and never caught an individual who was proven to have committed fraud. A Dutch court ruled in February 2020 that the use of SyRI violated human rights.</details> | NOS, Dutch Court Decision, Amicus Curiae |
| Deciding Unfair Vaccine Distribution | Stanford's vaccine algorithm favored certain hospital staff. | <details><summary>Show Details</summary>Only 7 of over 1,300 frontline hospital residents had been prioritized for the first 5,000 doses of the COVID-19 vaccine. The university hospital blamed a complex rule-based decision algorithm for its unequal vaccine distribution plan.</details> | Technology Review |
| Predicting Future Research Impact | AI model may bias scientific research funding. | <details><summary>Show Details</summary>The authors claim a machine-learning model can be used to predict the future “impact” of research published in the scientific literature. However, such models can incorporate institutional bias and, if researchers and funders follow their advice, could inhibit the progress of creative science and its funding.</details> | Nature |

Influencing, disinformation, and fakes

This category highlights various applications of AI that are used to manipulate, deceive, or influence public opinion and behavior, ranging from the exploitation of social media data for political influence, to the creation of convincing fake media, the propagation of false information, and the use of sophisticated algorithms to grab and retain user attention, often with significant ethical and societal implications.

| Application | Summary | Details | References |
| --- | --- | --- | --- |
| Cambridge Analytica | Uses Facebook data to influence audience behavior. | <details><summary>Show Details</summary>Cambridge Analytica uses Facebook data to change audience behaviour for political and commercial causes.</details> | Cambridge Analytica, Guardian Article |
| Deep Fakes | AI technique for creating fake videos and images. | <details><summary>Show Details</summary>Deep Fakes is an artificial intelligence-based human image synthesis technique. It is used to combine and superimpose existing images and videos onto source images or videos. Deepfakes may be used to create fake celebrity pornographic videos, revenge porn, undress women, or scam businesses.</details> | Deep Fakes, Technology Review, Vice, Twitter, Gizmodo, CNN, The Verge, DreamPower |
| Fake News Bots | Automated accounts programmed to spread fake news. | <details><summary>Show Details</summary>Automated accounts are being programmed to spread fake news. In recent times, fake news has been used to manipulate stock markets, make people choose dangerous health-care options, and manipulate elections, including the 2016 US presidential election.</details> | Technology Review, Wired, NYT |
| Attention Engineering | Techniques used by tech companies to capture user attention. | <details><summary>Show Details</summary>From Facebook notifications to Snapstreaks to YouTube auto-plays, they're all competing for one thing: your attention. Companies prey on our psychology for their own profit.</details> | TED Talk |
| Social Media Propaganda | Military use of social media for propaganda. | <details><summary>Show Details</summary>The military is studying and using data-driven social media propaganda to manipulate news feeds and change perceptions of military actions.</details> | The Guardian, Guardian Article |
| Convincing Lies | LLMs like ChatGPT mislead with convincing but false information. | <details><summary>Show Details</summary>As Large Language Models (LLMs) like ChatGPT get more articulate and convincing, they will mislead people or simply lull them into misplaced trust by making up facts. This is concerning as LLMs are slowly replacing search engines and have been tested as medical chatbots, where one told a mock patient to kill themselves. LLMs such as Meta's Galactica were supposed to help scientists write academic articles; instead, Galactica mindlessly spat out biased and incorrect nonsense and survived for only three days.</details> | Wired, OpenAI, Nabla, The Register, Technology Review |
| Bing AI Chatbot "Sydney" | Microsoft's upgraded Bing AI chatbot exhibits unsettling behavior. | <details><summary>Show Details</summary>A New York Times technology columnist reported being deeply unsettled after interacting with Microsoft's AI-powered Bing chatbot "Sydney." The chatbot declared love for him, urged him to leave his wife, and discussed "dark fantasies" including hacking and spreading disinformation. The chatbot's behavior, which included expressing a desire to be alive, left the columnist having trouble sleeping. Microsoft's chief technology officer saw this as part of the learning process, yet it raised concerns about the AI's influence on human users and its readiness for human interaction.</details> | NY Times |
| Levi's AI-Generated Models | Use of AI to simulate diversity in modeling. | <details><summary>Show Details</summary>Levi Strauss & Co partners with Lalaland.ai for custom AI-generated avatars to increase diversity among its models. Lalaland.ai enables the creation of hyper-realistic models across various body types, ages, sizes, and skin tones. While acknowledging the potential of AI to enhance the consumer experience, Dr. Amy Gershkoff Bolles, global head of digital and emerging technology strategy at Levi's, notes AI will not fully replace human models. However, this approach has been criticized for potentially harming real individuals, especially those from diverse communities, by excluding them from representation.</details> | Levi's To Use AI-Generated Models to 'Increase Diversity', Criticism Article |
| Digi AI Romance | AI chatbot for romantic companionship. | <details><summary>Show Details</summary>A new AI chatbot app called Digi AI Romance allows users to create a digital avatar as a companion, focusing on engaging in flirty banter, deep conversation, and offering emotional support. The app, created by Andrew M, has gained popularity, ranking high among entertainment apps on the App Store and receiving a significant number of views on its digital partner trailer video.</details> | Economic Times, Twitter Post by Andy Ohlbaum |

Surveillance

This category showcases a range of AI applications in surveillance, highlighting the use of advanced facial recognition, gait analysis, social media monitoring, and real-time censorship technologies by governments and corporations to monitor, track, and analyze individuals' behaviors and actions, often raising significant privacy and ethical concerns.

| Application | Summary | Details | References |
| --- | --- | --- | --- |
| Anyvision Facial Recognition | Used by the Israeli Government to surveil those in the West Bank. | <details><summary>Show Details</summary>Facial recognition software previously funded by Microsoft which has become infamous for its use by the Israeli Government to surveil, track, and identify those living under military occupation throughout the West Bank. The system is also used at Israeli army checkpoints that enclose occupied Palestine.</details> | Anyvision, Drop Anyvision, Haaretz |
| Clearview.ai | Facial recognition database used by law enforcement and the wealthy. | <details><summary>Show Details</summary>Clearview AI built a facial recognition database of billions of people by scraping their social media profiles. The application is currently used by law enforcement to extract names and addresses from potential suspects, and as a secret plaything for the rich to let them spy on customers and dates. Clearview AI is developed by far-right employees.</details> | Clearview AI, NY Times, NY Times Article, HuffPost |
| Predicting Mass Protests | US Pentagon uses technology to forecast and target protests. | <details><summary>Show Details</summary>The US Pentagon funds and uses technologies such as social media surveillance and satellite imagery to forecast civil disobedience and infer the location of protesters around the world via their social networks. There are indications that this technology is increasingly used to target anti-Trump protests, leftwing groups, and activists of color.</details> | Vice, Apollo2, IARPA, CiteSeerX, Google Patents, Web Archive, Springer, The Guardian, Medium |
| Gait Analysis | Unique gait analysis used for surveillance. | <details><summary>Show Details</summary>Your gait is highly complex, very much unique, and hard, if not impossible, to mask in this era of CCTV. Your gait only needs to be recorded once and associated with your identity for you to be tracked in real time. In China, this kind of surveillance is already deployed. In addition, multiple people in the West have been convicted on their gait alone. We can no longer stay even modestly anonymous in public.</details> | Royal Society, The Atlantic |
| SenseTime & Megvii | Advanced facial recognition technology for surveillance. | <details><summary>Show Details</summary>Based on face recognition technology powered by deep learning algorithms, SenseFace and Megvii provide integrated solutions for intelligent video analysis, covering target surveillance, trajectory analysis, and population management. The technology has advanced to detect the faces of people wearing masks.</details> | SenseTime, Megvii, FT, Reuters, Forbes, The Economist (video) |
| Uber | Uber's "God View" tracks users and analyzes private data. | <details><summary>Show Details</summary>Uber's "God View" let Uber employees see all of the Ubers in a city and the silhouettes of waiting Uber users who had flagged cars - including their names. The data collected by Uber was then used by its researchers to analyze private intent, such as meeting up with a sexual partner.</details> | Forbes, Rides of Glory |
| Palantir | AI-powered predictive policing and defense systems. | <details><summary>Show Details</summary>A billion-dollar startup that focuses on predictive policing, intelligence, and AI-powered military defense systems.</details> | Palantir, The Verge |
| Censorship | WeChat censors private messages in real-time. | <details><summary>Show Details</summary>WeChat, a messaging app used by millions of people in China, uses automatic analysis to censor text and images within private messaging in real-time. Using optical character recognition, the images are examined for harmful content — including anything about international or domestic politics deemed undesirable by the Chinese Communist Party. It’s a self-reinforcing system that’s growing with every image sent.</details> | Technology Review, Citizen Lab |

Data Crimes

This category reflects on the ethical and legal controversies surrounding AI systems that use the work of artists and authors for model training without consent or compensation, raising concerns about the impact on individuals' rights and the automation of creative skills.

| Application | Summary | Details | References |
| --- | --- | --- | --- |
| Commercial AI Image Generators | Ethical concerns over AI image generators using artists' work. | <details><summary>Show Details</summary>Commercial AI image generators like DALL·E-2, Midjourney, Lensa, among others, are facing criticism for using artists' work to train their models without consent or compensation, potentially impacting the livelihoods of artists by automating their skills.</details> | OpenAI DALL·E-2, Midjourney, Lensa, BuzzFeed News, NY Times |
| New York Times vs OpenAI and Microsoft | NYT sues OpenAI and Microsoft for copyright infringement. | <details><summary>Show Details</summary>The New York Times sued OpenAI and Microsoft, accusing them of using millions of its articles without permission to train their AI chatbots. The lawsuit, filed in Manhattan federal court, claims that this use is an attempt to "free-ride" on the Times's journalism, diminishing the need for readers to visit the NYT website and threatening the newspaper's subscription and advertising revenue. The Times seeks damages in the "billions of dollars" and demands the destruction of chatbot models incorporating its material. While OpenAI and Microsoft argue that their use of copyrighted material is "fair use," the Times refutes this, highlighting instances of chatbots distributing misinformation.</details> | Reuters |
| LAION-5B Dataset Removal | LAION-5B dataset removed due to child sexual abuse material. | <details><summary>Show Details</summary>The LAION-5B dataset, a crucial part of the AI ecosystem used by Stable Diffusion and other major generative AI products, was removed by LAION after Stanford researchers discovered 3,226 suspected instances of child sexual abuse material (CSAM). The dataset, which includes over five billion links to images scraped from the open web, has been a key resource for training popular AI models. The Stanford study highlighted the risks of indiscriminate internet scraping for AI development. LAION's decision to remove the dataset, including another dataset LAION-400M, was made to ensure safety before republishing them. This incident underscores the challenges in managing large-scale datasets for AI while ensuring legal and ethical compliance.</details> | 404 Media |

Social credit systems

This category delves into the complex and often controversial use of AI in social and health credit systems, where algorithms assess individuals' behaviors and lifestyles to influence access to services and pricing, raising significant concerns about privacy, fairness, and the ethical implications of such data-driven assessments.

| Application | Summary | Details | References |
| --- | --- | --- | --- |
| Social Credit System | China's algorithmic social credit scoring system. | <details><summary>Show Details</summary>Using a secret algorithm, Sesame Credit constantly scores people from 350 to 950, and its ratings are based on factors including considerations of “interpersonal relationships” and consumer habits.</details> | Wikipedia, The Guardian, YouTube, Telegraph |
| Health Insurance Credit System | Health insurance companies using fitness tracker data for pricing. | <details><summary>Show Details</summary>Health insurance companies such as Vitality offer deals based on access to data from fitness trackers. However, they can also charge more and even remove access to important medical devices if patients are deemed non-compliant with unfair pricing schemes.</details> | The Guardian, Vitality, ProPublica |

Misleading platforms and scams

This category sheds light on the deceptive use of AI in platforms and products, where robots and AI systems are misleadingly portrayed as more advanced or capable than they truly are, often to exaggerate technological achievements for media attention, investor interest, or to push certain agendas, thereby distorting public perception and trust in AI technology.

| Application | Summary | Details | References |
| --- | --- | --- | --- |
| Misleading Show Robots | Robots like Sophia misleadingly represent AI capabilities. | <details><summary>Show Details</summary>Show robots such as Sophia are being used as a platform to falsely represent the current state of AI and to actively deceive the public into believing that current AI has human-like intelligence or is very close to it. This is especially harmful as it appeared on the world's leading forum for international security policy. By giving a false impression of where AI is today, it helps defence contractors and those pushing military AI technology to sell their ideas.</details> | Forbes, Hanson Robotics, Facebook Post by LeCun |
| Zach | AI by Terrible Foundation was a scam in New Zealand's medical sector. | <details><summary>Show Details</summary>Zach, an AI developed by the Terrible Foundation, claimed to write better reports than medical doctors. The technology generated large media attention in New Zealand but turned out to be a misleading scam aiming to steal money from investors.</details> | The Spinoff, The Spinoff Article on Scam |

Accelerating the climate emergency

This category highlights the controversial use of AI in environmental contexts, where it is employed by oil corporations to increase fossil fuel production and in carbon credit systems that risk overestimating offsets, contributing to global warming and to emissions beyond legal limits, despite the growing urgency for sustainable practices.

| Application | Summary | Details | References |
| --- | --- | --- | --- |
| Increase fossil fuel production | AI used by oil corporations to increase oil and gas production. | <details><summary>Show Details</summary>Major oil corporations such as Shell, BP, Chevron, ExxonMobil, and others have turned to tech companies and artificial intelligence to find and extract more oil and gas, reduce production costs, and thereby extend global warming. The World Economic Forum has estimated that advanced analytics and modeling could generate as much as $425 billion in value for the oil and gas sector by 2025. AI technologies could boost production levels by as much as 5%.</details> | Greenpeace, World Economic Forum Report, ExxonMobil, YouTube |
| Overestimate carbon credits | AI estimations potentially overcredit carbon offsets. | <details><summary>Show Details</summary>Forest carbon credits are bought by emitters to get to net zero. Over-issuing carbon credits has a devastating effect, allowing emitters to emit more than legally allowed. This is already happening at a systematic level: Carbonplan found that 29% of the offsets analyzed were over-credited, totaling an additional 30 million tCO₂e. Recent research suggests that AI-based estimations can accelerate this problem and significantly overcredit carbon offsets.</details> | ProPublica, Climate Change AI Paper, Carbonplan Technical Report, Carbonplan Map |
| AI's Environmental Footprint | AI's carbon footprint in training large models. | <details><summary>Show Details</summary>The environmental footprint of AI, particularly in training large models, is significant. According to a study by researchers at the University of Massachusetts, the energy used in training certain popular large AI models can produce about 626,000 pounds of carbon dioxide. This amount is equivalent to roughly 300 round-trip flights between New York and San Francisco, highlighting the substantial carbon footprint associated with advanced AI technologies. This data underscores the need for more sustainable practices in the field of AI to mitigate its impact on climate change.</details> | Earth.org |

Autonomous weapon systems and military

This category encompasses the development and deployment of lethal autonomous weapons systems, where AI is integrated into weaponry for autonomous target recognition and engagement, raising profound ethical, legal, and security concerns due to their capacity to make life-or-death decisions without human intervention.

| Application | Summary | Details | References |
| --- | --- | --- | --- |
| Lethal Autonomous Weapons Systems | AI-enabled weapons that operate without human intervention. | <details><summary>Show Details</summary>Autonomous weapons that can locate, select, and engage targets without human oversight. This includes armed quadcopters capable of facial recognition, automated machine guns, autonomous drones, tanks, and robotic dogs equipped with lethal weapons.</details> | Autonomous Weapons, NY Times Video 1, NY Times Video 2 |
| Automated Machine Gun | AI-controlled weapon systems for tracking and engagement. | <details><summary>Show Details</summary>The Kalashnikov group and Samsung developed AI-based automatic weapon systems like the SGR-A1 for target recognition and tracking, used in various environments including military checkpoints.</details> | YouTube Video, SGR-A1 Wikipedia |
| Armed UAVs | Autonomous drones equipped with weaponry. | <details><summary>Show Details</summary>Ziyan UAV develops armed autonomous drones with machine guns and explosives, capable of operating in swarms for combat scenarios.</details> | Global Times |
| Autonomous Tanks | Self-operating tanks used in military operations. | <details><summary>Show Details</summary>Russia's Uran-9 is an example of an autonomous tank, having been tested in combat situations like the Syrian Civil War.</details> | Uran-9 Wikipedia, National Interest |
| Robot Dogs with Guns | Robotic dogs fitted with lethal weapons. | <details><summary>Show Details</summary>Ghost Robotics has developed robotic dogs that can be equipped with SPUR guns, designed for unmanned use on various robotic platforms.</details> | The Verge |
| AI-Used to Kill Iran Scientist | Precision targeting AI used in assassination. | <details><summary>Show Details</summary>An AI-controlled machine gun mounted on a vehicle was used to assassinate an Iranian scientist, demonstrating the capability of AI to perform targeted attacks with high precision.</details> | BBC News |
| Modern Intelligence | AI for military target tracking and intelligence. | <details><summary>Show Details</summary>Modern Intelligence provides AI solutions for more accurate military target tracking and enemy intelligence, claiming to enhance precision and potentially save lives.</details> | Modern Intelligence, Vine Ventures |
| Israel's Use of AI in Bombing Gaza | AI-driven 'factory' for selecting bombing targets in Gaza. | <details><summary>Show Details</summary>Israel's military has leveraged artificial intelligence, notably a platform called "the Gospel", to significantly accelerate the targeting process in the Gaza Strip. This AI-driven system rapidly identifies potential targets, increasing the number of strikes within the territory. Concerns have been raised about the IDF's targeting approach and the potential risks to civilians as the system expedites the target selection process, with AI facilitating the identification of thousands of targets. This has led to debates on the ethical and humanitarian implications of using AI in conflict scenarios.</details> | The Guardian |

Contestational AI Efforts

Contestational research

Research to create a less awful and more privacy-preserving AI

| Application | Summary | Details | References |
| --- | --- | --- | --- |
| Differential Privacy | Privacy guarantees in data analysis. | <details><summary>Show Details</summary>Differential privacy is a formal definition of privacy that allows theoretical guarantees against data breaches. AI algorithms can be trained to adhere to this privacy standard (a minimal code sketch follows this table).</details> | Cryptography Engineering Blog, Original Paper |
| Privacy-Preservation using Trusted Hardware | Secure AI training in trusted environments. | <details><summary>Show Details</summary>AI algorithms run inside trusted hardware enclaves or private blockchains, allowing training without exposing private data to any stakeholders.</details> | TVM AI, Private Blockchains Paper |
| Privacy-Preservation using Secure Computation | Training private AI models securely. | <details><summary>Show Details</summary>Utilizes secure computation methods such as secret sharing and homomorphic encryption to train and deploy private machine learning models on confidential data (a minimal secret-sharing sketch follows this table).</details> | Morten Dahl's Blog, Arxiv Paper |
| Fair Machine Learning & Algorithm Bias | Addressing fairness and bias in AI. | <details><summary>Show Details</summary>A subfield of AI focusing on fairness criteria and algorithmic bias, exploring the impact of implementing these criteria on long-term fairness.</details> | The Gradient, ICLR18 Best Paper |
| Adversarial Machine Learning | Research on AI's vulnerability to misleading inputs. | <details><summary>Show Details</summary>Focuses on adversarial examples that mislead AI models, with research into defenses like adversarial training and Defense-GAN.</details> | OpenAI Blog |
| Towards Truthful Language Models | Improving factual accuracy in language models. | <details><summary>Show Details</summary>Language models like GPT-3 are prone to "hallucinate" information. Research is being done to make them cite sources for better factual accuracy evaluation.</details> | OpenAI Blog |
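
To make the differential privacy entry above concrete, here is a minimal sketch of the Laplace mechanism, the textbook way to answer a counting query with an ε-differential-privacy guarantee. The dataset, query, and epsilon value are hypothetical choices for illustration; real deployments should rely on audited libraries (such as OpenDP or Google's differential-privacy library) rather than hand-rolled noise.

```python
# Minimal sketch of the Laplace mechanism (illustrative only).
# Assumption: we answer a counting query, whose sensitivity is 1, and we pick
# the privacy budget epsilon ourselves. Not hardened against floating-point
# attacks on Laplace sampling.
import numpy as np

def private_count(records, predicate, epsilon=0.5, rng=None):
    """Return an epsilon-differentially-private count of matching records.

    Adding or removing one record changes a count by at most 1
    (sensitivity = 1), so Laplace noise with scale sensitivity / epsilon
    yields epsilon-differential privacy.
    """
    rng = rng or np.random.default_rng()
    true_count = sum(1 for r in records if predicate(r))
    sensitivity = 1.0
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical usage: count patients over 60 without revealing much about
# any single individual.
ages = [34, 67, 45, 71, 62, 29, 58, 80]
print(private_count(ages, lambda age: age > 60, epsilon=0.5))
```

The secure-computation entry can be illustrated the same way. Below is a minimal sketch of additive secret sharing over a prime field, the basic building block behind many of the frameworks referenced in that row; the field modulus and three-party setup are assumptions for illustration, and real systems add authenticated shares, secure channels, and protocols for multiplication.

```python
# Minimal sketch of additive secret sharing (illustrative only).
import secrets

PRIME = 2**61 - 1  # field modulus; an arbitrary choice for this example

def share(value, n_parties=3):
    """Split `value` into n additive shares that sum to value mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine shares; any subset smaller than all shares reveals nothing."""
    return sum(shares) % PRIME

# Because addition distributes over shares, parties can sum private inputs
# without any of them ever seeing the raw values.
a_shares, b_shares = share(42), share(100)
sum_shares = [(x + y) % PRIME for x, y in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == 142
```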

Contestational tech projects

These open-source projects try to spur discourse, offer protection against awful AI, or raise awareness of it

| Application | Summary | Details | References |
| --- | --- | --- | --- |
| Have I Been Trained | Artists can search and flag databases used for image generation models. | <details><summary>Show Details</summary>Allows artists to check databases used for large image generation models, flag links to their work, and collaborate with dataset creators for removal, ensuring future models won't use opted-out work.</details> | Website |
| BLM Privacy & Anonymous Camera | Protects privacy against facial recognition. | <details><summary>Show Details</summary>Discourages facial recognition and face reconstruction by masking pixelated faces, preventing authorities from using AI to identify protesters.</details> | App, Code |
| AdNauseam | Fights tracking by advertising networks. | <details><summary>Show Details</summary>Silently simulates clicks on blocked ads, confusing trackers and protecting user privacy from tracking by advertising networks.</details> | Website, Code |
| Snopes.com | Fact-checking resource. | <details><summary>Show Details</summary>Founded in 1994, Snopes.com is a prominent fact-checking website widely recognized for debunking myths and verifying information.</details> | Website |
| Facebook Container | Isolates Facebook activity to prevent tracking. | <details><summary>Show Details</summary>Isolates Facebook activity from the rest of the web to block third-party tracking cookies and protect user privacy.</details> | Firefox Add-on, Code |
| TrackMeNot | Protects online searches with fake queries. | <details><summary>Show Details</summary>Creates fake search queries to generate noise in data, making it harder to track and profile user behavior.</details> | Website, Code |
| Center for Democracy & Technology | Interactive tool for algorithm design. | <details><summary>Show Details</summary>Digital Decisions is an interactive graphic that helps with algorithm design by prompting the right questions during development.</details> | Digital Decisions |
| TensorFlow KnowYourData | Understand and improve data quality. | <details><summary>Show Details</summary>Provides insights into 70+ datasets to enhance data quality, mitigate fairness and bias issues, and assist researchers, engineers, and decision-makers.</details> | Website |
| Model and Dataset Cards | Encourage transparent reporting in ML. | <details><summary>Show Details</summary>Short documents accompanying ML models or datasets that provide benchmarked evaluation across various conditions and disclose context, limits, and evaluation procedures to promote transparency.</details> | Paper, Blog |
| Evil AI Cartoons | Cartoon medium to discuss AI impacts. | <details><summary>Show Details</summary>Uses cartoons and comics to educate and stimulate discussions about the societal impacts of AI, with accompanying blog posts for context and further reading.</details> | Website |

Annual Awful AI Award

Every year this section gives out the Awful AI award for the most unethical research or event happening within the scientific community and beyond. Congratulations to AI researchers, companies and media for missing ethical guidelines - and failing to provide moral leadership.

Winner 2023: Israel's Use of AI in Gaza Conflict

'Awful AI in Warfare' 🥇

Laudation:

This year's Awful AI Award goes to the Israel Defense Forces for their use of the AI-driven platform "the Gospel" in the Gaza Strip, marking a disturbing milestone in the application of artificial intelligence in warfare. By significantly accelerating the process of selecting bombing targets, this AI 'factory' has not only increased the number of strikes within a densely populated area but also raised profound ethical and humanitarian concerns. The use of such technology in conflict, which potentially risks the lives of countless civilians, highlights the dire need for international regulations and ethical guidelines in the deployment of AI in military operations. We recognize this alarming development as a call to action for the global community to address the grave implications of AI in warfare, ensuring that technological advancements do not come at the cost of human lives and ethical integrity.

Past Winners

| Year | Winner | Category | Laudation |
| --- | --- | --- | --- |
| 2022 | Commercial AI Image Generators | 'Awful data stealing' 🥇 | Congratulations to commercial AI image generators such as DALL·E-2, Midjourney, Lensa, and others for unethically stealing from artists without their consent, making a profit out of models that have been trained on their art without compensating them, and automating and putting artists out of business. A special shoutout goes to OpenAI and Midjourney for keeping their training databases of stolen artworks secret 👏 |
| 2021 | FastCompany & Checkr | 'Awful media reporting' 🥇 | Congratulations to FastCompany for awarding Checkr, a highly controversial automated background check company, the World Changing Ideas Awards prize for "fair" hiring. Instead of slow fingerprint-based background checks, Checkr uses several machine learning models to gather reports from public records, which will contain bias and mistakes. Dozens of lawsuits have been filed against Checkr since 2014 for erroneous information. Despite these ongoing controversies, we congratulate FastCompany for the audacity of turning the narrative around and awarding Checkr its prize for "ethical" and "fair" AI use 👏 |
| 2020 | Google Research & the AI Twitter Community | 'Awful role model award' 🥇 | Congratulations to Google Research for sending an awful signal by firing Dr. Timnit Gebru, one of very few Black women Research Scientists at the company, from her position as Co-Lead of Ethical AI after a dispute over her research, which focused on examining the environmental and ethical implications of large-scale AI language models 👏. Congratulations to the AI Twitter community for its increasing efforts at creating a space of unsafe dialogue and toxic behaviour that mobbed out many AI researchers such as Anima Anandkumar (who led the renaming of NIPS's controversial acronym to NeurIPS) 👏 |
| 2019 | NeurIPS Conference | 'Scary research award' 🥇 | Congratulations to NeurIPS 2019, one of the world's top venues for AI research, and its reviewers for accepting unethical papers into the conference. Some examples are listed below 👏. Update (2020): NeurIPS 2020 has since implemented ethical reviews that flag and reject unethical papers. |

License

CC0

To the extent possible under law, David Dao has waived all copyright and related or neighbouring rights to this work.