Ethics of AI: a librarian's perspective
Nausicaa L. Rose
2025-10-03
A talk given at Iowa Library Association 2025
Table of Contents
- 1. Introduction
- 2. What is AI?
- 3. What is generative AI?
- 4. ALA Code of Ethics: 1
- 5. Accuracy and AI
- 6. ALA Code of Ethics: 2
- 7. ALA Code of Ethics: 3
- 8. Privacy and AI
- 9. ALA Code of Ethics: 4
- 10. Intellectual property and AI
- 11. ALA Code of Ethics: 5
- 12. AI and working conditions
- 13. Working conditions for AI data workers
- 14. ALA Code of Ethics: 6 & 7
- 15. ALA Code of Ethics: 8
- 16. ALA Code of Ethics: 9
- 17. Racial, gender, and other biases in AI
- 18. Beyond the ALA Code of Ethics
- 19. Conclusion
- 20. Footnotes
1. Introduction
Hello, and welcome to "Ethics of AI." I'm here to share my perspective on the ethics of using AI within libraries. None of the facts or opinions shared during this presentation represent the views of my employer.
I would like to note, I am not the first librarian to explore the ethics of AI through the lens of the ALA Code of Ethics. Partway through drafting this presentation, I realized it pretty closely mirrored the structure of Violet Fox's "A Librarian Against AI or I Think AI Should Leave" zine, which provides a good, and often entertaining, overview of various ethical concerns about AI.1
2. What is AI?
Before we begin in earnest, I'd like to define artificial intelligence, or AI: what it is and what it isn't.
AI is, above all else, a marketing term.2 Since the mid-1950s, AI has been applied to a wide variety of technologies and fields of study, such as machine learning, natural language processing, neural networks, image recognition, transformers, deep learning, rules-based systems, machine translation, search engines, production systems, robotics, expert systems, recommendation engines, video games, information filtering systems, and autonomous vehicles. Some of these technologies are closely related; some have very little in common with the others. What they do all have in common is that they are attempts, to paraphrase John McCarthy, who coined the term artificial intelligence, to make machines simulate intelligence.3
From its first use to the present day, AI has been used to make big promises while delivering little. McCarthy coined the term in a proposal for a workshop where he expected that he and nine of his colleagues could make "a significant advance" in making machines simulate intelligence in only two months. They didn't.4
This initial workshop set the pattern for AI ever since. Throughout the 1960s and early 1970s, artificial intelligence research garnered significant funding. Researchers made grandiose statements, but failed to deliver promised results.5 The failure to deliver on promises led to reduced federal research funding in the mid-1970s.6 In the wake of this first boom and bust came an even larger one, where AI would see commercial breakthroughs in the early 1980s only to have research funding and commercial interest largely disappear by the later part of the decade.7 We are now in the midst of another boom.
The failures of AI in the past do not mean that no useful technologies came out of such research, just that what has emerged has usually been more modest than what marketers promised, or has come far later. Some truly useful technological advances may come out of this current boom, but they almost certainly won't live up to the promises AI vendors are making.
An important aspect of information literacy is being able to spot misinformation and promises that are too good to be true. It's my hope that a small side effect of this talk will be to provide some tools to allow those who are not deeply immersed in tech to more accurately evaluate the claims made around AI.
While I have outlined a broad, though incomplete, overview of the various technologies marketed as AI, this presentation will focus on one particular kind: generative AI.
3. What is generative AI?
I've chosen to focus on generative AI because that is the technology being most heavily marketed toward consumers, businesses, and academia. Generative AI has become so ubiquitous over the last few years that it's what most people mean when they say "AI." I will likewise treat "AI" and "generative AI" as interchangeable in this talk. I will be focusing primarily on text-based generative AI programs, also known as large language models or LLMs, software like ChatGPT, Gemini, and Copilot.
Generative AI is a label used for various programs that are able to output text, images, audio, or video, based on large amounts of human-created text, images, audio, or video. This includes programs like ChatGPT, Microsoft Copilot, DALL-E, Stable Diffusion, MusicLM, Sora, and others. It's also the technology behind the creation of deepfakes, seemingly real audio, images, and video, that have been used to interfere in elections, spread misinformation, and generate nonconsensual pornographic images and videos of adults and children.8
On the surface, generative AI programs can seem impressive and even human. This effect is enhanced by the massive sums of money that the AI industry has spent on advertising in the past few years to promote the idea that AI is a near-magical solution to all our problems.9 But much of what makes AI programs seem so impressive is due to the massive amount of human labor behind them.10 As AI researcher Adio-Adet Dinika put it, "AI isn’t magic; it’s a pyramid scheme of human labor."11 From the vast body of creative work that is used to build their input data, to the software engineers that design them, to the countless people who work to label data and rate responses before a program is released to the public, the ability of AI programs to simulate intelligence relies entirely on human labor and actual human intelligence. Without massive amounts of human input, these programs wouldn't even function. What we get when ChatGPT responds to a question isn't so much the output of a sophisticated algorithm as it is the mangling of an almost immeasurable amount of human labor. In the end, despite the promises of corporate propaganda, generative AI programs are little more than plagiarism machines and statistical word spitters.12
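To make that last phrase concrete, here is a deliberately tiny sketch in Python of text generation driven purely by word-following statistics. It is an illustration only: the toy corpus, function name, and sample output are my own assumptions, and real commercial systems use neural networks trained on billions of documents rather than a simple word-pair table, but the underlying idea of sampling likely continuations of human-written text, with no model of truth, is the same.

    import random
    from collections import defaultdict

    # A toy corpus standing in for the billions of words of human writing
    # that real systems ingest; every word here was written by a person.
    corpus = (
        "the library provides access to books the library provides access "
        "to information the library protects patron privacy"
    ).split()

    # Count which word follows which. This is pure statistics, with no
    # understanding of libraries, books, or privacy.
    following = defaultdict(list)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word].append(next_word)

    def generate(start_word, length=8):
        # Emit words by repeatedly sampling whatever tended to follow the
        # previous word in the human-written corpus.
        words = [start_word]
        for _ in range(length):
            options = following.get(words[-1])
            if not options:
                break
            words.append(random.choice(options))
        return " ".join(words)

    # e.g. "the library provides access to information the library protects"
    print(generate("the"))

Every word this sketch can ever emit was written by a person, and nothing in it represents whether what it produces is true.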
4. ALA Code of Ethics: 1
For the discussion of ethics, I'll be leaving aside deepfakes. Since their uses range from the merely plagiaristic to the personally and socially destructive, I believe their ethical problems need no further explication. Instead I'll be exploring how more apparently benign uses of generative AI, like information seeking, productivity boosting, and personal amusement, relate to the ALA Code of Ethics.
The ALA Code of Ethics' first point states:
1. We provide the highest level of service to all library users through appropriate and usefully organized resources; equitable service policies; equitable access; and accurate, unbiased, and courteous responses to all requests.13
5. Accuracy and AI
So can we, as library workers, use generative AI to provide the highest level of service and give our communities accurate and unbiased information? I imagine most of you have heard about AI hallucinations. "Hallucination" is the misleading term used by AI researchers to describe inaccurate results, what anyone in any other field would simply call errors. The term is misleading because it implies the error was caused by a malfunctioning mind when no mind exists, only statistics. It also inaccurately suggests that the error is an aberration rather than a result of an AI program working as designed.14
No generative AI program is designed to or capable of providing only factually accurate responses. They can't be. They have no capacity to understand or reason.15 They're designed to provide responses that appear correct. Sometimes they do provide factual responses. The process by which they do so is identical to the process they use to provide fictional ones. The accuracy or inaccuracy of a response is happenstance, a byproduct of how responses are generated. Moreover, much of the meaning perceived in AI output comes from users reading meaning into statistically generated text.16
That generative AI programs have no means to understand user input or the content of their output does not in and of itself mean their output is not usable. Perhaps the designers of these programs have crafted statistical systems so sophisticated that accuracy in AI output is a mathematical certainty. They didn't, and it isn't. Generative AI programs frequently produce inaccurate answers, often presenting their misinformation in confident language while failing to cite their sources. When researchers Klaudia Jaźwińska and Aisvarya Chandrasekar compared OpenAI’s ChatGPT Search, Perplexity, Perplexity Pro, DeepSeek Search, Microsoft’s Copilot, xAI’s Grok-2 and Grok-3 (beta), and Google’s Gemini, they found that each service frequently provided incorrect answers, with ChatGPT, DeepSeek Search, Copilot, both versions of Grok, and Gemini providing far more incorrect answers than correct ones.17 Another study found that when asked about federal court cases, OpenAI's ChatGPT 4's output was incorrect 58 percent of the time and Facebook's Llama 2 was incorrect 88 percent of the time.18 Inaccurate responses are so common, it's hard to keep up with all of them. Whether it's lawyers including fabricated quotes and cases in court documents,19 AI programs providing dangerous medical advice, even in clinical settings,20 or Google telling people to put glue on pizza and to eat rocks,21 news of AI's untrustworthy output is nearly unavoidable.
Even apparently simple tasks, like summarizing documents, can be exceedingly difficult for generative AI programs. A test of ChatGPT, Microsoft Copilot, Google Gemini, and Perplexity found that 91 percent of responses had some issue: slightly more than half of the answers had significant issues, 19 percent of them had factual errors, and 13 percent had manufactured or mangled quotations.22 In evaluating different AI programs for this purpose, the Australian Securities and Investments Commission found that, of the three programs tested, the highest performing one only managed to score 60 percent on their test criteria, a score that would earn any schoolchild a D-. When they tested this program against human-written summaries, it scored worse across all five criteria tested, even after it had been optimized for the task.23 The problem is bad enough that even OpenAI, the company responsible for ChatGPT, published a study that found incorrect responses are inevitable given current implementation practices.24 This finding echoes a study published a few months prior that characterized factually incorrect output as an "innate limitation" of large language models.25
In the library context, AI inaccuracy is, unsurprisingly, just as much of a problem as it is in the broader world. Ingrid Reiche found in a study of two different open source applications for creating digital image metadata that neither tool was capable of producing accurate metadata on its own, necessitating human intervention.26 This echoes a study from the University of Utah that found providing accurate, detailed input data that included proper nouns dramatically reduced AI accuracy. That is, the high-quality, human-produced metadata from their digital collections made their AI program worse. The kind of metadata that enhances discoverability and provides useful information to researchers is something their AI program couldn't handle. They used a script to strip proper nouns from the input data and fed this genericized data into their program. The only evidence they presented to demonstrate their model functioned at all was a single set of keywords for a single image. Of the eight keywords provided, only one accurately described anything in the image. According to their abstract, this one-for-eight hit rate for a single image only cost $34,700 to develop.27 Sai Deng explored using AI programs to generate MARC catalog records and demonstrated that AI programs from Google, OpenAI, and Microsoft all produced numerous errors.28 Ex Libris's own documentation for the "AI assistant" built into the Alma Metadata Editor notes that the "assistant" program returns "inaccurate or generic data," doesn't "follow cataloging standards," doesn't reliably "return all requested metadata fields," fails to accurately process "some languages," doesn't "return correct subjects for certain authority vocabularies," and cannot always process images.29
This inability to produce accurate results is bad enough that even proponents of using AI in libraries note that AI chatbots have trouble handling all but the most basic of inquiries and that ChatGPT's output is often "inaccurate to the point of complete fabrication."30 If even its boosters openly admit how wildly inaccurate generative AI is, it hardly seems like something that would benefit institutions that are trusted to be repositories of knowledge. Instead, using AI to provide information would lower the quality of our service and prevent us from providing accurate, unbiased information.31
6. ALA Code of Ethics: 2
Point 2 in the Code of Ethics states:
2. We uphold the principles of intellectual freedom and resist all efforts to censor library resources.32
Being able to learn about and use generative AI fits well within a person's right to intellectual freedom and within the ALA Code of Ethics.
That said, AI does have some potential to harm individuals' intellectual freedom. Catherine Smith notes that the use of AI in description and discovery can potentially contribute to a false information landscape for users.33 She further notes that how many AI programs function is unknown or unclear due to vendor secrecy or the way their workings are often portrayed as nearly magical and too complicated for most people to understand.34 This obscurity can make it hard to evaluate how such programs may or may not impact the information environments they build for the communities we serve.
7. ALA Code of Ethics: 3
3. We protect each library user's right to privacy and confidentiality with respect to information sought or received and resources consulted, borrowed, acquired or transmitted.35
8. Privacy and AI
When it comes to privacy, long a cornerstone of library service, AI has a checkered history. AI vendors appear to have little to no interest in protecting user privacy.36 In 2024, Microsoft began to secretly collect LinkedIn user data without prior consent.37 Microsoft has also been developing programs like Copilot Recall and Copilot Vision that capture sensitive information, including credit card numbers and social security numbers, and share it with Microsoft.38 In May 2025, generative AI vendor Luka was fined 5 million euros by Italy for illegally collecting user data via its Replika program.39 Meta's AI app made it easy for users to accidentally share their private usage of the chatbot to their public Facebook timelines.40 This year, ChatGPT has begun monitoring people's chats in order to facilitate turning chats over to law enforcement.41 Many users of ChatGPT and Grok have seen their prompts become part of the public internet, showing up in search results.42 This is just a small sampling of the privacy issues that have come to light in the last few years.43
The general refusal of AI programs to respect user privacy is viewed by some who promote its use in library services as a benefit. In an article promoting the use of AI in reference services, Md. Ashikuzzaman cites being able to collect and analyze user data as one of the primary benefits of employing an AI chatbot.44 Fatouh and Hamam likewise see ignoring patron privacy in favor of conducting "in-depth analysis of user interactions with chatbots" to "gain meaningful insights into patron behavior, preferences, and information needs" as a benefit of using chatbots in libraries.45
Privacy concerns can be mitigated by only using AI programs that are vetted for proven privacy protections, especially those that can be run by a library itself and don't rely on vendors uninterested in protecting privacy.
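As a hedged illustration of what "run by a library itself" could look like, the sketch below assumes a small open-weights model loaded locally through the Hugging Face transformers library, with GPT-2 standing in purely as an example; a library would still need to vet whatever model, hardware, and logging practices it actually adopted. The point is only that prompts are processed on library-controlled machines rather than sent to a vendor's servers.

    # A minimal sketch of a locally hosted text generator, assuming the
    # Hugging Face transformers library and the small open GPT-2 model as
    # a stand-in. The model weights are downloaded once; after that,
    # prompts are processed on the library's own hardware and are not
    # transmitted to an outside vendor.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    prompt = "Our library's interlibrary loan policy"
    result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

    # The output is still statistical text, so it would need human review
    # before going anywhere near a patron.
    print(result[0]["generated_text"])

Local hosting addresses only the data-transmission side of privacy; the accuracy, bias, labor, and environmental concerns discussed elsewhere in this talk remain.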
9. ALA Code of Ethics: 4
4. We respect intellectual property rights and advocate balance between the interests of information users and rights holders.46
10. Intellectual property and AI
The different generative AI programs all have some features in common. One, they require an enormous amount of input data. In the case of text-based programs like ChatGPT, that means a huge volume of words written by people: news articles, books, Wikipedia, and so on. For image-based programs it means a vast corpus of human-made art and photographs. Two, they use statistics to generate output, using elements pulled directly from their input data that are then reassembled as output appropriate for the provided prompt. At their core, generative AI programs are plagiarism engines that use statistics to recombine human input into computer output.47 Often the plagiarism is a pastiche of inputs, but other times it is unattributed verbatim or near-verbatim quotes from sources used in the input data.48 The New York Times, in the lawsuit it filed against Microsoft and OpenAI, thoroughly documents GPT-4, a program used by both companies, reproducing verbatim passages of Times stories.49
If all the text and images used as input data for various AI programs were legally acquired, used with permission, and its creators were fairly compensated, then the fact that the output of generative AI is entirely a regurgitation of its input data might seem like a minor complaint. Instead, the input data is a combination of freely available and usable sources and wholesale copyright and licensing violations.50
To populate the AI responses in its search results, Google takes content from publishers without their permission and, according to an internal document, chose to give them only limited options to opt out.51 Danielle Coffey, CEO of News/Media Alliance, characterizes this practice as "theft."52
AI companies' refusal to respect existing intellectual property laws has resulted in lawsuits being filed against OpenAI, Anthropic, Facebook/Meta, Ross Intelligence, Stability AI, Udio, ElevenLabs, Perplexity AI, Suno, MiniMax, Microsoft, GitHub, and Cohere.53 On September 25, 2025, Anthropic received preliminary approval for a $1.5 billion settlement over its use of pirated books as input data for its AI program Claude.54
Given commercial AI's problem with plagiarism and piracy, thorough vetting is necessary to verify that any such products used by libraries are built using only legally acquired input data. Given the previously addressed problems with accuracy, there's no way to ensure proper citations from existing commercial AI programs. Their use in libraries, then, results in a tacit endorsement of plagiarism.
If libraries build their own AI programs using only input data they have a right to, they can avoid most of the intellectual property issues inherent in commercial AI. However, the problem of plagiarizing prior work, often without proper attribution, remains.
11. ALA Code of Ethics: 5
5. We treat co-workers and other colleagues with respect, fairness, and good faith, and advocate conditions of employment that safeguard the rights and welfare of all employees of our institutions.55
12. AI and working conditions
One of the purposes of AI is to replace human workers.56 This usage is hitting young workers hard as many employers have replaced entry-level positions with AI.57 Companies including Klarna, UPS, Duolingo, Intuit, and Cisco have pivoted to AI while laying off tens of thousands of workers.58 AI is also being used to replace thousands of Information Technology (IT) and Human Resources (HR) workers at companies including Microsoft, IBM, and Walmart.59 This summer, Amazon CEO Andy Jassy announced the company's plan to replace workers with AI.60 The federal government is also exploring replacing workers with generative AI.61
Libraries are not immune to this approach. Cory Halaychik encourages us to "celebrate" AI replacing library jobs. He encourages using AI for technical services and cataloging, where its impact will be invisible to the public, while avoiding its use in public-facing services because that "looks better." Although he makes grandiose claims about what AI can accomplish, Halaychik presents no evidence to support those claims.62
Md. Ashikuzzaman suggests using chatbots for reference work would be a "cost-saving measure" by reducing "the need for additional personnel to handle reference services."63 Of course, the only way adding the expense of another software system, plus the IT costs of maintaining it and integrating it with library services, comes out cheaper than not adding those expenses is by cutting staff or, at best, not refilling vacated positions.
Carlo Iacono envisions a future for libraries so different that many traditional library jobs, and implicitly the workers who fill them, will go by the wayside. Where traditional jobs remain, they are greatly reduced in his vision. Gone are reference and instruction staff. In their place are so-called "AI Literacy Specialists," whose only purpose is to guide others' usage of AI.64 Of course, it's possible in this imagining that libraries could retrain all their existing workers into new roles, assuming, of course, that doing so makes budgetary sense.
When AI is not touted as a way to replace workers, it's promoted as a means of improving worker efficiency and freeing them from drudgery. It hasn't yet been shown to do either, but it can do the opposite. Despite widespread adoption,65 generative AI has yet to demonstrate measurable productivity boosts.66 A report from MIT put it starkly, "Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return."67
A study from the Harvard Business Review posits that one reason for the failure may be that many workers use AI to generate low-quality work, or "workslop." This workslop often requires coworkers to work harder to rewrite it or even understand its meaning and can reduce productivity. Workslop lowers others' opinions of the coworkers who produce it and reduces their desire to collaborate with those coworkers in the future.68 Even when it's not being used to produce workslop, the assistive approach to using AI in workplaces can result in skilled workers being replaced by lower-paid workers fulfilling the same duties while babysitting generative AI programs.69
Within the library world, the picture appears much the same. Even boosters of using AI in libraries often demonstrate its troubling shortcomings. In a paper supportive of using ChatGPT for library metadata, Jenny Bodenhamer found that its attempts to generate call numbers and subject headings were riddled with errors, although she found its ability to extract keywords satisfactory.70 Halaychik argues that using AI for public-facing services could "negatively impact" public services and should be avoided.71 Since their output can't be trusted, AI programs that generate metadata would require human workers to verify and correct that output to ensure accuracy. Such a workflow reduces a skilled worker to a babysitter for a misbehaving machine while still requiring specialized knowledge to ensure correct call numbers and subject headings.
Beyond the direct negative effects AI can have on workers, the constant barrage of messaging telling us AI will take our jobs or transform our work lives, privacy concerns around AI, the use of AI to replace human interaction, the rise of AI-generated misinformation, and concerns about bias all contribute to increased anxiety among workers, further degrading workplace morale.72
Instead of improving working conditions, generative AI more often demoralizes and devalues human workers while adding to their drudgery and stress.
13. Working conditions for AI data workers
Beyond our own institutions, artificial intelligence companies rely on exploitative labor practices to generate and prepare data for input into AI programs and to rate and refine the output of their AI programs. How AI companies treat their workers may seem beyond the scope of the ALA Code of Ethics, but if we bring these technologies into our workplaces, then the people that built them become our colleagues. Our labor, which is fully within the scope of the Code of Ethics, becomes part of their labor. If you think I'm venturing a little far afield, I hope you'll indulge me all the same. I will get to one tech giant whose labor practices include extracting unpaid labor out of everyone in this room.
Generative AI programs require enormous amounts of input data, which usually needs to be labeled by people so the statistical models at the core of these programs have values to weight different pieces of input by. This work is often done by people paid well below poverty rates. Some rate image labels through Amazon's Mechanical Turk. Others are refugees, prisoners, and others in positions of precarity around the world. All of these people's work is absolutely vital to the functioning of the AI industry. They are the foundation upon which the entire generative AI industry is built. Yet, despite the crucial role they play, they are paid appallingly low wages while often having to endure horrid work environments.73 When these workers seek better pay or work conditions, they risk losing their jobs, as has happened on at least two occasions to workers contracted to work on Google's AI products.74
Many companies, including those working in other areas of AI, also rely on fully uncompensated labor by people who are often unaware that they are working to label data for AI or otherwise generating data for AI-related use. This includes children, prisoners, and refugees.75 It also includes everyone who has been forced to use Google's reCAPTCHA service to prove their humanity on some website.76 Whether you're aware of it or not, whether you consent to it or not, you are forced to become an unpaid Google employee every time you access some basic website function hidden behind a reCAPTCHA.
14. ALA Code of Ethics: 6 & 7
6. We do not advance private interests at the expense of library users, colleagues, or our employing institutions. 7. We distinguish between our personal convictions and professional duties and do not allow our personal beliefs to interfere with fair representation of the aims of our institutions or the provision of access to their information resources.77
When considering AI, we must base our decisions about and discussions of AI firmly in facts and ensure our opposition to or support of its use is in line with the rest of the Code of Ethics.
15. ALA Code of Ethics: 8
8. We strive for excellence in the profession by maintaining and enhancing our own knowledge and skills, by encouraging the professional development of co-workers, and by fostering the aspirations of potential members of the profession.78
When it comes to enhancing our knowledge and skills, staying informed on developments in AI is certainly in line with the ALA Code of Ethics. While doing so, of course, it is important to make sure we distinguish facts from misleading marketing hype.
16. ALA Code of Ethics: 9
9. We affirm the inherent dignity and rights of every person. We work to recognize and dismantle systemic and individual biases; to confront inequity and oppression; to enhance diversity and inclusion; and to advance racial and social justice in our libraries, communities, profession, and associations through awareness, advocacy, education, collaboration, services, and allocation of resources and spaces.79
17. Racial, gender, and other biases in AI
Generative AI programs have an extensive and well-documented problem with racial and gender biases.80 They promote harmful racial and gender stereotypes around what kind of people hold what kind of job and what race "criminals" are, and they hypersexualize women.81 They do so even based on the dialect a person speaks. Current techniques to reduce racial bias can worsen the problem by obscuring, but not removing, the racist weighting an AI program uses.82 Racial bias in AI appears to get worse the more input data they have.83
The bigotry built into commercial AI sometimes becomes apparent in dramatic ways, like when chatbots start spewing racist, misogynistic, and anti-Semitic language as has happened with Microsoft's Tay and xAI's Grok.84
Other times, it makes itself apparent in subtler, but more damaging ways, like when AI is used to discriminate in hiring practices,85 as it has on more than one occasion. Amazon scrapped an AI recruiting program that exhibited bias against female job candidates.86 iTutorGroup settled a lawsuit in 2023 over its AI recruiting software filtering out female job candidates over the age of 55 and male candidates over the age of 60.87
AI bias also shows up when people seek medical treatment.88 Studies have found that AI used in clinical settings and the data used to train it both demonstrate racial bias.89 One such study found that AI used in mental health treatment recommended inferior treatment for Black patients.90 Another recent study found similar bias against female patients. The AI programs they studied treated male patients more consistently but were much more likely to erroneously reduce care for female patients.91
The combined racism and misogyny embedded in generative AI programs means that BIPOC women are subject to a double dose of discrimination.
As long as racial, gender, and other biases remain in AI programs, we cannot endorse or promote their use within libraries. To do so violates point nine of the Code of Ethics and would inhibit the ability of affected members of our community to fully exercise their right to intellectual freedom within our libraries. Morally and ethically it would take us back to the days, not that long ago, when white librarians across the US proudly enforced segregation in their libraries.92 It would take us back to the very ethical stance that point 9 was so recently adopted to address.93
18. Beyond the ALA Code of Ethics
There are various ethical concerns around the use of generative AI programs beyond the scope of the Code of Ethics that are worth considering. I will briefly touch on a few of them.
Some studies have shown that use of generative AI can have negative effects on learning outcomes and cognition.94 One study comparing essay writers who used AI to ones who didn't found that the AI user group demonstrated reduced neural connectivity and activity, showed reduced memory performance, reported a reduced sense of agency in the writing process, and wrote more homogeneous essays than the groups that did not use AI.95 A study of high school students using AI tutors found that those who used a standard GPT chatbot performed worse on tests when they couldn't use the program compared to students who had no AI assistance. Those who used a GPT chatbot specifically designed for educational use showed a statistically insignificant performance reduction compared to the control group.96 In the medical field, several studies have found that reliance on AI programs can reduce the skill of medical professionals, including making it harder for them to correctly identify cancer.97 As knowledge workers in institutions of learning, we have a responsibility not to promote tools that inhibit learning.
There have also been instances where use of generative AI has exacerbated or possibly caused mental health problems.98 In the case of two teenagers who died by suicide, ChatGPT and a Character.AI chatbot have been accused of encouraging the teens to kill themselves and of helping them plan their suicides.99 A custom chatbot used by the National Eating Disorders Association gave users weight loss advice, potentially worsening their existing eating disorders.100 A man in Connecticut killed himself and his mother after months of having his paranoid delusions confirmed and reinforced by ChatGPT.101 The danger this poses only increases as more people turn to generative AI for therapy, a task such programs are woefully unsuited for, as they tend to reinforce delusions and stigmatize mental health problems.102
Beyond the direct human impacts of AI, we also need to consider its environmental impacts. As the effects of human-driven climate change continue to become more severe, we need to be more conscious than ever about the environmental impact of the technology we use. Generative AI programs are wildly inefficient and require enormous amounts of energy to function. For answer-seeking tasks, AI requires far more energy than search engines.103 The voracious energy consumption of generative AI programs has led to a massive increase in carbon emissions.104 This unfettered energy consumption requires billions of gallons of water to cool the machines powering commercial AI programs and threatens the water supply for communities that live near AI-hosting data centers.105 To use commercially available AI in libraries or to promote its use is to participate in environmentally devastating business practices that worsen living conditions for people around the globe.
Using commercial AI in libraries also means doing business with companies that have long histories of unethical business practices. As previously detailed, exploitative labor practices and disregard for intellectual property rights and user privacy are endemic to the industry. These issues alone put these companies and products at odds with our professional ethics. Additionally, the industry has a documented history of making misleading and exaggerated statements about their products that the uncharitable could characterize as fraud.106 Do we really want to use our institutions to launder the reputations of companies like these all for inferior products that don't work as advertised?
19. Conclusion
Given how deeply at odds with our professional ethics most existing AI programs are, we face great difficulties in using them without violating those ethics. There may be ways to use or create generative AI programs that are ethical, but utmost caution must be taken at every step of their development and evaluation to avoid repeating the ethical pitfalls of existing options. Often, it may be better to heed the words of Rea N. Simons and recognize the wisdom that sometimes inaction is the best path.107 If we cannot use AI the right way, it is better we do not use it at all rather than collaborate in the harm it causes. Does this mean we'll miss out on the AI revolution? Maybe, but since generative AI has yet to produce any demonstrable benefits, we're not missing out on much.
20. Footnotes
Violet Fox, "A Librarian Against AI or I Think AI Should Leave," (pub. by author, 2025), https://violetbfox.info/against-ai/.
Emily M. Bender and Alex Hanna, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, Harper 2025, 5. Ali Alkhatib, "Defining AI," December 6, 2024, https://ali-alkhatib.com/blog/defining-ai. Mandy Brown, "Toolmen," A working library, May 30, 2025, https://aworkinglibrary.com/writing/toolmen.
Peter Norvig and Stuart J. Russell, Artificial Intelligence: A Modern Approach (Pearson, 2022), 36.
Ibid.
Daniel Crevier, AI: The Tumultuous History of the Search for Artificial Intelligence (BasicBooks, 1993), 114-115. Norvig and Russell, 39.
Crevier, 117.
Crevier, 210-212.
Cristian Vaccari and Andrew Chadwick, "Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News," Social Media + Society 6, no. 1, (2020): https://doi.org/10.1177/2056305120903408. Shannon Bond, "How AI deepfakes polluted elections in 2024," NPR, December 21, 2024, https://www.npr.org/2024/12/21/nx-s1-5220301/deepfakes-memes-artificial-intelligence-elections. Wayne Unger, "AI-generated child pornography is surging − a legal scholar explains why the fight against it is complicated and how the law could catch up," The Conversation, February 11, 2025, https://theconversation.com/ai-generated-child-pornography-is-surging-a-legal-scholar-explains-why-the-fight-against-it-is-complicated-and-how-the-law-could-catch-up-247980. Muhammad Tuhin, "The Ethics of Deepfake Technology," Science News Today, March 28, 2025, https://www.sciencenewstoday.org/the-ethics-of-deepfake-technology. Taylor Percival James, " Not Her Fault: AI Deepfakes, Nonconsensual Pornography, and Federal Law’s Current Failure to Protect Victims," BYU Law Review 50, no. 4 (2025): https://digitalcommons.law.byu.edu/lawreview/vol50/iss4/10.
Chris Wood, "AI ad spending has skyrocketed this year," MarTech, November 3, 2023, https://martech.org/ai-ad-spending-has-skyrocketed-this-year/. Marty Swant, "From the Super Bowl to the Olympics, AI companies are spending more on AI-related advertising," Digiday, August 2, 2024, https://digiday.com/marketing/from-the-super-bowl-to-the-olympics-ai-companies-are-spending-more-on-ai-related-advertising/. Shira Ovide, "Here's how much tech companies are spending to tell you AI is amazing," The Washington Post, August 13, 2024, https://www.washingtonpost.com/technology/2024/08/13/ai-ads-olympics-commercials-overload-google/.
Paola Tubaro, Antonio A. Casilli, and Marion Coville, "The trainer, the verifier, the imitator: Three ways in which human platform workers support artificial intelligence," Big Data & Society 7, no. 1 (2020): https://doi.org/10.1177/2053951720919776. Niamh Rowe, "Millions of Workers are Training AI Models for Pennies," Wired, October 16, 2023, https://www.wired.com/story/millions-of-workers-are-training-ai-models-for-pennies/. Stephanie Wangari and Gayathri Vaidyanathan, "How Big Tech hides its outsourced African workforce," Rest of World, https://restofworld.org/2025/big-tech-ai-labor-supply-chain-african-workers/. Varsha Bansal, "How thousands of ‘overworked, underpaid’ humans train Google’s AI to seem smart," The Guardian, September 11, 2025, https://www.theguardian.com/technology/2025/sep/11/google-gemini-ai-training-humans.
Bansal, "Overworked, Underpaid."
Bender and Hanna, AI Con 53. Emily M. Bender, Angelina McMillan-Major, Timnit Gebru, Shmargaret Shmitchell, "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?," FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 611-612, https://doi.org/10.1145/3442188.3445922.
American Library Association, "ALA Code of Ethics," American Library Association, June 29, 2021, https://www.ala.org/tools/ethics.
Baldur Bjarnason, "Hallucinations," Generative AI: What You Need To Know, undated, https://needtoknow.fyi/card/hallucinations/. Michael Townsen Hicks, James Humphries, and Joe Slater, "ChatGPT is bullshit," Ethics and Information Technology 26, no. 38 (2024): https://doi.org/10.1007/s10676-024-09775-5. Bender and Hanna, AI Con, 167. Wim Vanderbauwhede, "Demystifying AI," Musings of an Accidental Computing Scientist, September 18, 2025, https://limited.systems/articles/demystifying-ai/. Lauren Leffer, "AI Chatbots Will Never Stop Hallucinating," Scientific American, April 5, 2024, https://www.scientificamerican.com/article/chatbot-hallucinations-inevitable/.
Emily M. Bender and Alexander Koller, "Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data," Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (2020), 5185-5198: https://doi.org/10.18653/v1/2020.acl-main.463. Will Douglas Heaven, "OpenAI’s new language generator GPT-3 is shockingly good—and completely mindless," MIT Technology Review, July 20, 2020, https://www.technologyreview.com/2020/07/20/1005454/openai-machine-learning-language-generator-gpt-3-nlp/. Parshin Shojaee, Iman Mirzadeh, Keivan Alizadeh, Maxwell Horton, Samy Bengio, and Mehrdad Farajtabar, "The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity," July 18, 2025, https://doi.org/10.48550/arXiv.2506.06941. Alex Hanna and Emily M. Bender, "'AI' Hurts Consumers and Workers – and Isn’t Intelligent," Tech Policy Press, August 3, 2023, https://www.techpolicy.press/ai-hurts-consumers-and-workers-and-isnt-intelligent/. Baldur Bjarnason, The Intelligence Illusion, 2nd ed. (pub. by author, 2024), 30-31. Matt White, "I Think Therefore I am: No, LLMs Cannot Reason," March 2, 2025, https://matthewdwhite.medium.com/i-think-therefore-i-am-no-llms-cannot-reason-a89e9b00754f. Vanderbauwhede, "Demystifying AI." Bender and Hanna, AI Con, 23-28. Ivy B. Grey, "Why You're Thinking About 'Reasoning' All Wrong," WordRake, 2025, https://www.wordrake.com/resources/youre-thinking-about-reasoning-wrong.
Bender, et al., "Stochastic Parrots." Baldur Bjarnason, "The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con," Out of the Software Crisis, July 4, 2023, https://softwarecrisis.dev/letters/llmentalist/. Takuya Maeda and Anabel Quan-Haase, "When Human-AI Interactions Become Parasocial: Agency and Anthropomorphism in Affective Design," FAccT '24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (2024): 1068-1077, https://doi.org/10.1145/3630106.3658956. Stephanie M. Tully, Chiara Longoni, and Gil Appel, "Lower Artificial Intelligence Literacy Predicts Greater AI Receptivity," Journal of Marketing 89, no. 5 (2025): 1-20, https://doi.org/10.1177/00222429251314491. Bender and Hanna, AI Con, 28-31.
Klaudia Jaźwińska and Aisvarya Chandrasekar, "AI Search Has A Citation Problem," Columbia Journalism Review, March 6, 2025, https://www.cjr.org/tow_center/we-compared-eight-ai-search-engines-theyre-all-bad-at-citing-news.php.
Matthew Dahl, Varun Magesh, Mirac Suzgun, and Daniel E. Ho, "Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models," Journal of Legal Analysis 16, no. 1 (2024): 66, https://doi.org/10.1093/jla/laae003.
Benjamin Weiser, "Here’s What Happens When Your Lawyer Uses ChatGPT," The New York Times, May 27, 2023, https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html. Rod McGuirk, "Australian lawyer apologizes for AI-generated errors in murder case," Associated Press, August 15, 2025, https://apnews.com/article/australia-murder-artifical-intelligence-34271dc1481e079c3583b55953a67c38. Khari Johnson, "California issues historic fine over lawyer’s ChatGPT fabrications," Cal Matters, September 22, 2025, https://calmatters.org/economy/technology/2025/09/chatgpt-lawyer-fine-ai-regulation/.
Gary Marcus, "Serious medical error from Perplexity’s chatbot," Marcus on AI, February 29, 2024, https://garymarcus.substack.com/p/serious-medical-error-from-perplexitys. Audrey Eichenberger, Stephen Theilke, Adam Van Buskirk, "A Case of Bromism Influenced by Use of Artificial Intelligence," Annals of Internal Medicine: Clinical Cases 4, No. 8 (2024): https://doi.org/10.7326/aimcc.2024.1260. Shruthi Shekar, Pat Pataranutaporn, Chethan Sarabu, Guillermo A. Cecchi, and Pattie Maes, "People Overtrust AI-Generated Medical Advice despite Low Accuracy", The New England Journal of Medicine 2, no. 6 (2025): https://ai.nejm.org/doi/full/10.1056/AIoa2300015. Frank Landymore, "A Single Typo in Your Medical Records Can Make Your AI Doctor Go Dangerously Haywire," Futurism, August 31, 2025, https://futurism.com/typo-ai-doctor-haywire.
Liv McMahon and Zoe Kleinman, "Glue pizza and eat rocks: Google AI search errors go viral," BBC, May 24, 2024, https://www.bbc.com/news/articles/cd11gzejgz4o.
Oli Elliot, "Representation of BBC News content in AI Assistants," BBC, February 2025, https://www.bbc.co.uk/aboutthebbc/documents/bbc-research-into-ai-assistants.pdf.
AWS Service Professionals, "Generative Artificial Intelligence (AI) Document Summarization Proof of Concept," March, 2024, https://www.aph.gov.au/DocumentStore.ashx?id=b4fd6043-6626-4cbe-b8ee-a5c7319e94a0.
Adam Tauman Kalai, Ofir Nachum, Santosh S. Vempala, and Edwin Zhang, "Why Language Models Hallucinate," September 4, 2025, https://doi.org/10.48550/arXiv.2509.04664.
Ziwei Xu, Sanjay Jain, and Mohan Kankanhalli, "Hallucination is Inevitable: An Innate Limitation of Large Language Models," February 13, 2025, https://arxiv.org/abs/2401.11817.
Ingrid Reiche, "The viability of using an open source locally hosted AI for creating metadata in digital image collections," Code4Lib Journal 56 (2023): https://journal.code4lib.org/articles/17186.
Harish Maringanti, Dhanushka Samarakoon, and Bohan Zhu, "Machine learning meets library archives: Image Analysis to generate descriptive metadata," https://doi.org/10.48609/pt6w-p810.
Sai Deng, "AI, Cataloging & Metadata," November 15, 2023, https://stars.library.ucf.edu/ucfscholar/1251/.
Ex Libris, "The AI Metadata Assistant in the Metadata Editor," ExLibris Knowledge Base, undated, https://knowledge.exlibrisgroup.com/Alma/Product_Documentation/010Alma_Online_Help_(English)/Metadata_Management/005Introduction_to_Metadata_Management/The_AI_Metadata_Assistant_in_the_Metadata_Editor.
Md. Ashikuzzaman, "Artificial Intelligence (AI) Chatbots for Library Reference Services," LIS Education Network, June 27, 2025, https://www.lisedunetwork.com/artificial-intelligence-ai-chatbots-for-library-reference-services/. Jenny Bodenhamer, "Reliability and usability of ChatGPT for library metadata," 2023, https://hdl.handle.net/20.500.14446/339626.
The problem with AI accuracy is so thoroughly documented it could be its own presentation. I eventually had to stop including examples, but here are a few choice ones that didn't quite make the cut: Michael Roberts, Derek Driggs, Matthew Thorpe, Julian Gilbey, Michael Yeung, Stephan Ursprung, Angelica I. Aviles-Rivero, Christian Etmann, Cathal McCague, Lucian Beer, Jonathan R. Weir-McCall, Zhongzhao Teng, Effrossyni Gkrania-Klotsas, AIX-COVNET, James H. F. Rudd, Evis Sala, and Carola-Bibiane Schönlieb, "Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans," Nature Machine Intelligence 3 (2021): 199-217, https://doi.org/10.1038/s42256-021-00307-0; Rachel Metz, "Zillow’s home-buying debacle shows how hard it is to use AI to value real estate," CNN, November 9, 2021, https://edition.cnn.com/2021/11/09/tech/zillow-ibuying-home-zestimate; Miles Klee, "'Historical Figures' AI Lets Famous Dead People Lie to You," Rolling Stone, January 20, 2023, https://www.rollingstone.com/culture/culture-news/historical-figures-ai-chat-bot-lies-dead-people-1234664257/; Jon Christian, "CNET’s Article-Writing AI Is Already Publishing Very Dumb Errors," Futurism, January 29, 2023, https://futurism.com/cnet-ai-errors; Jon Christian, "Magazine Publishes Serious Errors in First AI-Generated Health Article," Futurism, February 18, 2023, https://futurism.com/neoscope/magazine-mens-journal-errors-ai-health-article; Complaint, The New York Times Company v. Microsoft Corporation et al., December 12, 2023, 52-55, https://nytco-assets.nytimes.com/2023/12/NYT_Complaint_Dec2023.pdf; Sharon Adarlo, "Airline’s Chatbot Lies About Bereavement Policy After Passenger’s Grandmother Dies," Futurism, February 17, 2024, https://futurism.com/the-byte/airline-chatbot-bereavement-funeral; Colin Lecher, "NYC’s AI Chatbot Tells Businesses to Break the Law," The Markup, March 29, 2024, https://themarkup.org/news/2024/03/29/nycs-ai-chatbot-tells-businesses-to-break-the-law; Tom Gerken, "Bacon ice cream and nugget overload sees misfiring McDonald's AI withdrawn," BBC, June 18, 2024, https://www.bbc.co.uk/news/articles/c722gne7qngo; Garance Burke and Hilke Schellman, "Researchers say an AI-powered transcription tool used in hospitals invents things no one ever said," Associated Press, October 26, 2024, https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14; Dan Mihalopoulos, "Syndicated content in Sun-Times special section included AI-generated misinformation," Chicago Sun-Times, May 20, 2025, https://chicago.suntimes.com/news/2025/05/20/syndicated-content-sunday-print-sun-times-ai-misinformation; Paulina Okunytė and Niamh Ancell, "AI coding tool wipes production database, fabricates 4,000 users, and lies to cover its tracks," Cybernews, July 23, 2025, https://cybernews.com/ai-news/replit-ai-vive-code-rogue/; Joe Hindy, "After 2 Million AI Orders, Taco Bell Admits Humans Still Belong in the Drive-Thru," CNET, August 28, 2025, https://www.cnet.com/tech/services-and-software/after-2-million-ai-orders-taco-bell-admits-humans-still-belong-in-the-drive-thru/; Tom Warren, "Microsoft launches 'vibe working' in Excel and Word," The Verge, September 29, 2025, https://www.theverge.com/news/787076/microsoft-office-agent-mode-office-agent-anthropic-models.
American Library Association, "Code of Ethics."
Catherine Smith, "Automating intellectual freedom: Artificial intelligence, bias, and the information landscape," IFLA Journal 48, no. 3, 2022, 427-428, https://repository.ifla.org/rest/api/core/bitstreams/e94c5092-f15f-4b2e-95ff-b4f03ff3cc1d/content. For an example of one way AI can create a false information landscape, besides its aforementioned usage to create disinformation, see Jason Koebler, "AI Slop Silo Machine Is Here," 404 Media, July 15, 2025, https://www.404media.co/the-ai-slop-niche-machine-is-here/.
Ibid., 428-429.
American Library Association, "Code of Ethics."
Andrea Baer, "Unpacking Predominant Narratives about Generative AI and Education: A Starting Point for Teaching Critical AI Literacy and Imagining Better Futures," Library Trends 73, no. 3 (2025): 146, https://muse.jhu.edu/pub/1/article/961189. Kailyn "Kay" Slater, "Against AI: Critical Refusal in the Library," Library Trends 72, no. 4 (2025): 594, https://muse.jhu.edu/pub/1/article/968497.
Kate Irwin, "LinkedIn Is Quietly Training AI on Your Data—Here's How to Stop It," PC Mag, September 20, 2024, https://www.pcmag.com/news/linkedin-is-quietly-training-ai-on-your-data-heres-how-to-stop-it.
Amy Castor and David Gerard, "Windows AI Copilot+ Recall stores screenshots of sensitive data, regardless of 'sensitive information' filter," Pivot to AI, December 13, 2024, https://pivot-to-ai.com/2024/12/13/windows-ai-copilot-recall-stores-screenshots-of-sensitive-data-regardless-of-sensitive-information-filter/. David Gerard, "Copilot Vision AI sends your data to Microsoft," Pivot to AI, July 29, 2025, https://pivot-to-ai.com/2025/07/29/copilot-vision-ai-sends-your-data-to-microsoft/.
Alex McFarland, "Replika’s $5.6M Fine Exposes AI Privacy Concerns in 2025," Techopedia, May 27, 2025, https://www.techopedia.com/ai-privacy-concerns.
Amanda Silberling, "The Meta AI app is a privacy disaster," TechCrunch June 12, 2025, https://techcrunch.com/2025/06/12/the-meta-ai-app-is-a-privacy-disaster/. James Pero, "PSA: Get Your Parents Off the Meta AI App Right Now," Gizmodo, June 12, 2025, https://gizmodo.com/psa-get-your-parents-off-the-meta-ai-app-right-now-2000615122.
OpenAI, "Helping people when they need it most," OpenAI, August 26, 2025, https://openai.com/index/helping-people-when-they-need-it-most/.
Bernard Marr, "AI Chatbots Are Quietly Creating A Privacy Nightmare," Forbes, September 15, 2025, https://www.forbes.com/sites/bernardmarr/2025/09/15/ai-chatbots-are-quietly-creating-a-privacy-nightmare/.
For a sampling of other AI privacy concerns see: "How AI Is Affecting Information Privacy and Data," WGU Blog, September 28, 2021, https://www.wgu.edu/blog/how-ai-affecting-information-privacy-data2109.html; Benji Edwards, "Artist finds private medical record photos in popular AI training data set," Ars Technica, September 21, 2022, https://arstechnica.com/information-technology/2022/09/artist-finds-private-medical-record-photos-in-popular-ai-training-data-set/; Ryan Brown, "OpenAI CEO admits a bug allowed some ChatGPT users to see others’ conversation titles," CNBC, March 23, 2023, https://www.cnbc.com/2023/03/23/openai-ceo-says-a-bug-allowed-some-chatgpt-to-see-others-chat-titles.html; Amy Castor and David Gerard, "UnitedHealthcare Optum leaves internal AI chatbot open to the world," Pivot to AI, December 14, 2024, https://pivot-to-ai.com/2024/12/14/unitedhealthcare-optum-leaves-internal-ai-chatbot-open-to-the-world/; Muhammad Tuhin, "The Dark Side of AI: Cybersecurity Threats and Privacy Concerns," Science News Today, March 27, 2025, https://www.sciencenewstoday.org/the-dark-side-of-ai-cybersecurity-threats-and-privacy-concerns; Elena Constantinescu, "What are the AI privacy concerns?," Proton, September 11, 2025, https://proton.me/blog/ai-privacy-concerns; Dan Goodin, " New attack on ChatGPT research agent pilfers secrets from Gmail inboxes," Ars Technica, September 18, 2025, https://arstechnica.com/information-technology/2025/09/new-attack-on-chatgpt-research-agent-pilfers-secrets-from-gmail-inboxes/.
Ashikuzzaman, "Chatbots."
Amr Hassan Fatouh and Ahmed Ammar Hamam, "Investing of Chatbots to Enhance the Library Services," American Journal of Information Science and Technology 8, no. 1 (2024): 18, https://doi.org/10.11648/j.ajist.20240801.12.
American Library Association, "Code of Ethics."
Gary Marcus, "Partial Regurgitation and how LLMs really work," Marcus on AI, May 22, 2024, https://garymarcus.substack.com/p/partial-regurgitation-and-how-llms. Complaint, 30-52.
Bender and Hanna, AI Con, 53.
Complaint, 30-52.
Matthew Butterick, "This Copilot Is Stupid and Wants to Kill Me," Matthew Butterick, June 25, 2022, https://matthewbutterick.com/chron/this-copilot-is-stupid-and-wants-to-kill-me.html. Elaine Atwell, "GitHub Copilot Isn't Worth the Risk," Kolide, November 2022, https://www.kolide.com/blog/github-copilot-isn-t-worth-the-risk. Katie Paul, "Exclusive: Multiple AI companies bypassing web standard to scrape publisher sites, licensing firm says," Reuters, June 21, 2024, https://www.reuters.com/technology/artificial-intelligence/multiple-ai-companies-bypassing-web-standard-scrape-publisher-sites-licensing-2024-06-21/. Kali Hays, "OpenAI and Anthropic are ignoring an established rule that prevents bots scraping online content," Business Insider, June 21, 2024, https://www.businessinsider.com/openai-anthropic-ai-ignore-rule-scraping-web-contect-robotstxt. Ed Newton-Rex, "How AI models steal creative work — and what to do about it," TED, October 2024, https://www.ted.com/talks/ed_newton_rex_how_ai_models_steal_creative_work_and_what_to_do_about_it. David Carson, "Theft is not fair use," April 21, 2025, https://jskfellows.stanford.edu/theft-is-not-fair-use-474e11f0d063. Blake Brittain, "Meta knew it used pirated books to train AI, authors say," Reuters, January 9, 2025, https://www.reuters.com/technology/artificial-intelligence/meta-knew-it-used-pirated-books-train-ai-authors-say-2025-01-09/. Prashant Chaudhary, "AI Companies Are Stealing Creative Work Without Permission, What You Need to Know?," Gadget Insiders, April 9, 2025, https://www.gadgetinsiders.com/artificial-intelligence/ai-companies-are-stealing-creative-work-without-permission-what-you-need-to-know/. Murtaza Hussain, Ryan Grim, and Waqas Ahmed, "LEAKED: A New List Reveals Top Websites Meta Is Scraping of Copyrighted Content to Train Its AI," Drop Site, August 6, 2025, https://www.dropsitenews.com/p/meta-facebook-tech-copyright-privacy-whistleblower.
Davey Alba and Julia Love, "Google Decided Against Offering Publishers Options in AI Search," Bloomberg, May 19, 2025, https://www.bloomberg.com/news/articles/2025-05-19/google-gave-sites-little-choice-in-using-data-for-ai-search.
Sam Quigley, "News/Media Alliance Statement on Google AI Mode," News/Media Alliance, May 21, 2025, https://www.newsmediaalliance.org/google-ai-mode-statement/.
Thomas Claburn, "GitHub accused of varying Copilot output to avoid copyright allegations," The Register, June 9, 2023, https://www.theregister.com/2023/06/09/github_copilot_lawsuit/. Alexandra Alter and Elizabeth A. Harris, "Franzen, Grisham and Other Prominent Authors Sue OpenAI," The New York Times, September 20, 2023, https://www.nytimes.com/2023/09/20/books/authors-openai-lawsuit-chatgpt-copyright.html. Blake Brittain, "Music publishers sue AI company Anthropic over song lyrics," Reuters, October 19, 2023, https://www.reuters.com/legal/music-publishers-sue-ai-company-anthropic-over-song-lyrics-2023-10-18/. Kate Knibbs, "Every AI Copyright Lawsuit in the US, Visualized," Wired, December 19, 2024, https://www.wired.com/story/ai-copyright-case-tracker/. Matt Growcoot, "Getty Images Wants $1.7 Billion From its Lawsuit With Stability AI," PetaPixel, December 19, 2024, https://petapixel.com/2024/12/19/getty-images-wants-1-7-billion-from-its-lawsuit-with-stability-ai/. Blake Brittain, "Tech companies face tough AI copyright questions in 2025," Reuters, December 27, 2024, https://www.reuters.com/legal/litigation/tech-companies-face-tough-ai-copyright-questions-2025-2024-12-27/. Transparency Coalition, "Two big rulings: Courts are starting to expose AI piracy of copyrighted material," Transparency Coalition, February 12, 2025, https://www.transparencycoalition.ai/news/two-big-rulings-courts-are-starting-to-expose-ai-piracy-of-copyrighted-material. Tori Noble, "Copyright and AI: the Cases and the Consequences," Electronic Frontier Foundation, February 19, 2025, https://www.eff.org/deeplinks/2025/02/copyright-and-ai-cases-and-consequences. Nigel Bowen, "Hollywood Studios Sue Chinese AI Company Over Piracy Claims," Channel News, September 17, 2025, https://www.channelnews.com.au/hollywood-studios-sue-chinese-ai-company-over-piracy-claims/. Rachel Scharf, "Labels Claim Suno Pirated Songs from YouTube in Bulked-Up AI Copyright Lawsuit," Billboard, September 19, 2025, https://www.billboard.com/pro/suno-lawsuit-ai-company-pirated-youtube-songs-record-labels/.
Blake Brittain, "US judge preliminarily approves $1.5 billion Anthropic copyright settlement," Reuters, September 25, 2025, https://www.reuters.com/sustainability/boards-policy-regulation/us-judge-approves-15-billion-anthropic-copyright-settlement-with-authors-2025-09-25/.
American Library Association, "Code of Ethics."
Matt Egan, "AI is replacing human tasks faster than you think," CNN, June 20, 2024, https://www.cnn.com/2024/06/20/business/ai-jobs-workers-replacing. Jim VandeHei and Mike Allen, "Behind the Curtain: A white-collar bloodbath," Axios, May 28, 2025, https://www.axios.com/2025/05/28/ai-jobs-white-collar-unemployment-anthropic. Joe Wilkins, "CEOs Using AI to Terrorize Their Employees," Futurism, June 20, 2025, https://futurism.com/ceo-ai-scare-labor.
Megan Cerullo, "Recent college graduates face a new obstacle in finding a job: AI," CBS News, July 11, 2025, https://www.cbsnews.com/news/ai-jobs-unemployment-college-graduate/. Erik Brynjolfsson, Bharat Chandar, and Ruyu Chen, "Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence," 2025, https://digitaleconomy.stanford.edu/publications/canaries-in-the-coal-mine/.
Jack Kelly, "It’s Time To Get Concerned As More Companies Replace Workers With AI," Forbes, May 9, 2025, https://www.forbes.com/sites/jackkelly/2025/05/04/its-time-to-get-concerned-klarna-ups-duolingo-cisco-and-many-other-companies-are-replacing-workers-with-ai/.
Grant Gross, "Company boards push CEOs to replace IT workers with AI," CIO, June 5, 2025, https://www.cio.com/article/4000546/company-boards-push-ceos-to-replace-it-workers-with-ai.html.
Emma Roth, "Amazon CEO says it will cut jobs due to AI’s 'efficiency'," The Verge, June 17, 2025, https://www.theverge.com/news/688679/amazon-ceo-andy-jassy-ai-efficiency.
Matteo Wong, "DOGE’s Plans to Replace Humans With AI Are Already Under Way," The Atlantic, March 10, 2025, https://www.theatlantic.com/technology/archive/2025/03/gsa-chat-doge-ai/681987/.
Cory Halaychik, "Embrace AI in Libraries: Freeing Staff for Meaningful Work While Preserving Human Touch," IFLA Academic and Research Libraries Section Blog, July 24, 2024, https://blogs.ifla.org/arl/2024/07/24/embrace-ai-in-libraries-freeing-staff-for-meaningful-work-while-preserving-human-touch/.
Ashikuzzaman, "Chatbots."
Carlo Iacono, "How AI Will Transform Libraries & Librarianship 2025-2035?," Hybrid Horizons: Exploring Human-AI Collaboration, March 16, 2025, https://hybridhorizons.substack.com/p/how-ai-will-transform-libraries-and.
Ryan Pendall, "AI Use at Work Has Nearly Doubled in Two Years," Gallup, June 15, 2025, https://www.gallup.com/workplace/691643/work-nearly-doubled-two-years.aspx.
Atwell, "Isn't Worth the Risk." Baldur Bjarnason, "The hard truth about productivity research," Baldur Bjarnason, April 10, 2023, https://www.baldurbjarnason.com/2023/ai-research-again/. Joel Becker, Nate Rush, Elizabeth Barnes, David Rein, "Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity," July 25, 2025, https://doi.org/10.48550/arXiv.2507.09089. Aditya Challapally, Chris Pease, Ramesh Raskar, Pradyumna Chari, "The GenAI Divide: The State of AI in Business 2025," MIT NANDA, July 2025, 3, https://nanda.media.mit.edu/ai_report_2025.pdf. Paul Kunert, "UK government trial of M365 Copilot finds no clear productivity boost, The Register, September 4, 2025, https://www.theregister.com/2025/09/04/m365_copilot_uk_government/. Melissa Heikkilä, Chris Cook, and Clara Murray, "America’s top companies keep talking about AI — but can’t explain the upsides," The Financial Times, September 22, 2025, https://www.ft.com/content/e93e56df-dd9b-40c1-b77a-dba1ca01e473; Victor Tangermann, "AI Coding Is Massively Overhyped, Report Finds," Futurism, September 28, 2025, https://futurism.com/artificial-intelligence/new-findings-ai-coding-overhyped.
Challapally et al., "GenAI Divide," 3.
Kate Niederhoffer, Gabriella Rosen Kellerman, Angela Lee, Alex Liebscher, Kristina Rapuano, and Jeffrey T. Hancock, "AI-Generated 'Workslop' Is Destroying Productivity," Harvard Business Review, September 22, 2025, https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity.
Heikkilä, Cook, and Murray, "America's top companies." Alex Hanna and Emily M. Bender, "'AI' Hurts Consumers and Workers – and Isn’t Intelligent," Tech Policy Press, August 4, 2023, https://www.techpolicy.press/ai-hurts-consumers-and-workers-and-isnt-intelligent/.
Bodenhamer, "Usability of ChatGPT."
Halaychik, "Embrace AI."
Lauren Leffer, "'AI Anxiety' Is on the Rise—Here’s How to Manage It," Scientific American, October 2, 2023, https://www.scientificamerican.com/article/ai-anxiety-is-on-the-rise-heres-how-to-manage-it/. Jeff J. H. Kim, Junyoung Soh, Shrinidhi Kadkol, Itay Solomon, Hyelin Yeh, Adith V. Srivatsa, George R. Nahass, Jeong Yun Choi, Sophie Lee, Theresa Nguyen, and Olusola Ajilore, "AI Anxiety: a comprehensive analysis of psychological factors and interventions," AI and Ethics 5, no. 4 (2025): 3993-4009, https://link.springer.com/article/10.1007/s43681-025-00686-9. Sally Helm, "AI is causing anxiety about the future of the workforce. But are there AI-proof jobs?," NPR, September 19, 2025, https://www.npr.org/2025/09/19/nx-s1-5544378/ai-is-causing-anxiety-about-the-future-of-the-workforce-but-are-there-ai-proof-jobs. The Editors, "Large Language Muddle," n+1, no. 51 (2025): https://www.nplusonemag.com/issue-51/the-intellectual-situation/large-language-muddle/.
Angela Chen, "Inmates in Finland are training AI as part of prison labor," The Verge, March 28, 2019, https://www.theverge.com/2019/3/28/18285572/prison-labor-finland-artificial-intelligence-data-tagging-vainu. Andy Newman, "I Found Work on an Amazon Website. I Made 97 Cents an Hour," The New York Times, November 15, 2019, https://www.nytimes.com/interactive/2019/11/15/nyregion/amazon-mechanical-turk.html. Phil Jones, "Refugees help power machine learning advances at Microsoft, Facebook, and Amazon," Rest of World, September 22, 2021, https://restofworld.org/2021/refugees-machine-learning-big-tech/. Milagros Miceli and Julian Posada, "The Data-Production Dispositif," Proceedings of the ACM on Human-Computer Interaction 6, no. CSCW2 (2022): 1-37, https://doi.org/10.1145/3555561. Adrienne Williams, Milagros Miceli, and Timnit Gebru, "The Exploited Labor Behind Artificial Intelligence," Noema, October 13, 2022, https://www.noemamag.com/the-exploited-labor-behind-artificial-intelligence/. Niamh Rowe, "Millions of Workers Are Training AI Models for Pennies," Wired, October 16, 2023, https://www.wired.com/story/millions-of-workers-are-training-ai-models-for-pennies/. Krystal Kauffman and Adrienne Williams, "Turk Wars: How AI Threatens the Workers Who Fuel It," Stanford Social Innovation Review, October 11, 2023, https://ssir.org/articles/entry/ai-workers-mechanical-turk. Adio Dinika, "The Human Cost Of Our AI-Driven Future," Noema, September 25, 2024, https://www.noemamag.com/the-human-cost-of-our-ai-driven-future/. Bansal, "Overworked, Underpaid."
Davey Alba and Josh Eidelson, "Google Illegally Cut Contract Staffers Who Worked on AI, Union Alleges," Bloomberg, August 3, 2023, https://www.bloomberg.com/news/articles/2023-08-03/google-illegally-cut-contract-staffers-who-worked-on-ai-union-alleges. Varsha Bansal, "Hundreds of Google AI Workers Were Fired Amid Fight Over Working Conditions," Wired, September 15, 2025, https://www.wired.com/story/hundreds-of-google-ai-workers-were-fired-amid-fight-over-working-conditions/.
Adrienne Williams, "Zombie Trainers and a New Era of Forced Labor," Newsweek, May 3, 2024, https://www.newsweek.com/zombie-trainers-new-era-forced-labor-opinion-1896624.
James O'Malley, "Captcha if you can: how you’ve been training AI for years without realising it," TechRadar, January 12, 2018, https://www.techradar.com/news/captcha-if-you-can-how-youve-been-training-ai-for-years-without-realising-it. "By Typing Captcha, you are Actually Helping AI's Training," Access Newswire, November 27, 2020, https://www.accessnewswire.com/618585/By-Typing-Captcha-you-are-Actually-Helping-AIs-Training.
American Library Association, "Code of Ethics."
American Library Association, "Code of Ethics."
American Library Association, "Code of Ethics."
Olga Akselrod, "How Artificial Intelligence Can Deepen Racial and Economic Inequities," ACLU, July 13, 2021, https://www.aclu.org/news/privacy-technology/how-artificial-intelligence-can-deepen-racial-and-economic-inequities. Kristina Lorch, "Regulating AI: Opportunities to Combat Algorithmic Bias and Technological Redlining," Journal of Public & International Affairs, December 4, 2023, https://jpia.princeton.edu/news/regulating-ai-opportunities-combat-algorithmic-bias-and-technological-redlining. Dayo Ajanaku, "How Artificial Intelligence Impacts Marginalized Communities," UC Berkeley Law, January 26, 2022, https://sites.law.berkeley.edu/thenetwork/2022/01/26/how-artificial-intelligence-impacts-marginalized-communities/. UNESCO International Research Centre on Artificial Intelligence, "'I don’t have a gender, consciousness, or emotions. I’m just a machine learning model'," 2023, https://unesdoc.unesco.org/ark:/48223/pf0000387189. Leonardo Nicoletti and Dina Bass, "Humans are Biased. Generative AI Is Even Worse," Bloomberg, June 9, 2023, https://www.bloomberg.com/graphics/2023-generative-ai-bias/. UNESCO International Research Centre on Artificial Intelligence, "Challenging systematic prejudices: an investigation into bias against women and girls in large language models," 2024, https://unesdoc.unesco.org/ark:/48223/pf0000388971?posInSet=1&queryId=49226140-7668-440e-884a-0fcbef89ac23. Heaven, "Mindless." Oscar Schwartz, "In 2016, Microsoft’s Racist Chatbot Revealed the Dangers of Online Conversation," IEEE Spectrum, January 4, 2024, https://spectrum.ieee.org/in-2016-microsofts-racist-chatbot-revealed-the-dangers-of-online-conversation. Ye Sul Park, "White Default: Examining Racialized Biases Behind AI-Generated Images," Art Education 77, no. 4 (2024): https://doi.org/10.1080/00043125.2024.2330340. Valentin Hofmann, Pratyusha Ria Kalluri, Dan Jurafsky, and Sharese King, "AI generates covertly racist decisions about people based on their dialect," Nature 633 (2024): 147-154, https://doi.org/10.1038/s41586-024-07856-5. Abeba Birhane, Sepehr Dehdashtian, Vinay Uday Prabhu, and Vishnu Boddeti, "The Dark Side of Dataset Scaling: Evaluating Racial Classification in Multimodal Models," FAccT '24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (2024): 1229-1244, https://doi.org/10.1145/3630106.3658968. Ashwini K.P., "Contemporary forms of racism, racial discrimination, xenophobia and related intolerance," United Nations General Assembly, June 3, 2024, https://docs.un.org/en/A/HRC/56/68. Reece Rogers and Victoria Turk, "OpenAI’s Sora Is Plagued by Sexist, Racist, and Ableist Biases," Wired, March 23, 2025, https://www.wired.com/story/openai-sora-video-generator-bias/. Slater, "Against AI," 593-594. Beth Carpenter, "The Bias Is Inside Us: Supporting AI Literacy and Fighting Algorithmic Bias," Library Trends 73, no. 4, 482-483, https://muse.jhu.edu/pub/1/article/968492. Lisa Hagen, Huo Jingnan, and Audrey Nguyen, "Elon Musk's AI chatbot, Grok, started calling itself 'MechaHitler'," NPR, July 9, 2025, https://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-antisemitic-racist-content.
Nicoletti and Bass, "Even Worse." Park, "White Default."
Hofmann et al., "Covertly Racist."
Birhane et al., "Dark Side."
Schwartz, "Racist Chatbot." Hagen et al., "MechaHitler." Raphael Boyd, "'Just the start': X’s new AI software driving online racist abuse, experts warn," The Guardian, Monday 13, 2025, https://www.theguardian.com/technology/2025/jan/13/just-the-start-xs-new-ai-software-driving-online-racist-abuse-experts-warn.
Aditya Malik, "AI Bias in Recruitment: Ethical Implications And Transparency," Forbes, September 25, 2023, https://www.forbes.com/councils/forbestechcouncil/2023/09/25/ai-bias-in-recruitment-ethical-implications-and-transparency/. "ACLU Files FTC Complaint Against Major Hiring Technology Vendor for Deceptively Marketing Online Hiring Tests as 'Bias Free'," ACLU, May 30, 2024, https://www.aclu.org/press-releases/aclu-files-ftc-complaint-against-major-hiring-technology-vendor-for-deceptively-marketing-online-hiring-tests-as-bias-free.
Jeffrey Dastin, "Insight - Amazon scraps secret AI recruiting tool that showed bias against women," Reuters, October 10, 2018, https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/.
Daniel Wiessner, "Tutoring firm settles US agency's first bias lawsuit involving AI software," Reuters, August 10, 2023, https://www.reuters.com/legal/tutoring-firm-settles-us-agencys-first-bias-lawsuit-involving-ai-software-2023-08-10/. ACLU, "FTC Complaint."
Sharona Hoffman and Andy Podgurski, "Artificial Intelligence and Discrimination in Healthcare," Yale Journal of Health Policy, Law, and Ethics 19, no. 3 (2020): https://ssrn.com/abstract=3747737. Andrada-Mihaela-Nicoleta Moldovan, Andreea Vescan, and Crina Grosan, "Healthcare Bias in AI: A Systematic Literature Review," 20th International Conference on Evaluation of Novel Approaches to Software Engineering (2025): https://kclpure.kcl.ac.uk/portal/en/publications/healthcare-bias-in-ai-a-systematic-literature-review. Paola Cozzi, "New critical reflections on the risk of algorithmic discrimination in healthcare," Tech4Future, July 1, 2025, https://tech4future.info/en/algorithmic-discrimination-healthcare/.
Robert Shanklin, Michele Samorani, Shannon Harris, Michael A. Santoro, "Ethical Redress of Racial Inequities in AI: Lessons from Decoupling Machine Learning from Optimization in Medical Appointment Scheduling," Philosophy & Technology 25, no. 96 (2022): https://doi.org/10.1007/s13347-022-00590-8. Sarah El-Azab and Paige Nong, "Clinical algorithms, racism, and “fairness” in healthcare: A case of bounded justice," Big Data & Society 10, no. 2 (2023): https://doi.org/10.1177/20539517231213820. Syed Ali Haider, Sahar Borna, Cesar A. Gomez-Cabello, Sophia M. Pressman, Clifton R. Haider, and Antonio Jorge Forte, "The Algorithmic Divide: A Systematic Review on AI-Driven Racial Disparities in Healthcare," Journal of Racial and Ethnic Health Disparities (2024): https://doi.org/10.1007/s40615-024-02237-0. Kadija Ferryman, Nina Cesare, Melissa Creary, Elaine O Nsoesie, "Racism is an ethical issue for healthcare artificial intelligence," Cell Reports Medicine 5, no. 6 (2024): https://doi.org/10.1016/j.xcrm.2024.101617. Ayoub Bouguettaya, Elizabeth M. Stuart, and Elias Aboujaoude, "Racial bias in AI-mediated psychiatric diagnosis and treatment: a qualitative comparison of four large language models," npj Digital Medicine 8, no. 332 (2025): https://www.nature.com/articles/s41746-025-01746-4.
Bouguettaya et al., "Psychiatric diagnosis."
Abinitha Gourabathina, Walter Gerych, Eileen Pan, and Marzyeh Ghassemi, "The Medium is the Message: How Non-Clinical Information Shapes Clinical Decisions in LLMs," FAccT '25: Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (2025): 1811, https://doi.org/10.1145/3715275.3732121.
Megan Summers, "Bodies of Knowledge," The Culture Crush, undated, https://www.theculturecrush.com/feature/bodies-of-knowledge. Anna Gooding-Call, "A History of Racism in American Public Libraries," Book Riot, March 8, 2021, https://bookriot.com/racism-in-american-public-libraries/. Willis N. Hackney Library, "Segregated Libraries," African American Libraries, undated, https://barton.libguides.com/c.php?g=1359309&p=10037850.
American Library Association, "Code of Ethics."
Hamsa Bastani, Osbert Bastani, Alp Sungu, Haosen Ge, Özge Kabakcı, and Rei Mariman, "Generative AI Can Harm Learning," July 15, 2024, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4895486. Michael Gerlich, "AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking," Societies 15, no. 1 (2025): https://www.mdpi.com/2075-4698/15/1/6. Nataliya Kosmyna, Eugene Hauptmann, Ye Tong Yuan, Jessica Situ, Xian-Hao Liao, Ashly Vivian Beresnitzky, Iris Braunstein, and Pattie Maes, "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task," June 10, 2025, https://doi.org/10.48550/arXiv.2506.08872. Krzysztof Budzyń, Marcin Romańczyk, Diana Kitala, Paweł Kołodziej, Marek Bugajski, Hans O Adami, Johannes Blom, Marek Buszkiewicz, Natalie Halvorsen, Cesare Hassan, Tomasz Romańczyk, Øyvind Holme, Krzysztof Jarus, Shona Fielding, Melina Kunar, Maria Pellise, Nastazja Pilonis, Michał Filip Kamiński, Mette Kalager, Michael Bretthauer, and Yuichi Mori, "Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy: a multicentre, observational study," The Lancet Gastroenterology & Hepatology 10, no. 10 (2025): 896-903, https://www.thelancet.com/journals/langas/article/PIIS2468-1253(25)00133-5/. Chiara Natali, Luca Marconi, Leslye Denisse Dias Duran, and Federico Cabitza, "AI-induced Deskilling in Medicine: A Mixed-Method Review and Research Agenda for Healthcare and Beyond," Artificial Intelligence Review 58, no. 356 (2025): https://doi.org/10.1007/s10462-025-11352-1.
Kosmyna et al., "Your Brain," 133-141.
Bastani et al., "Harm Learning," 7-9.
Natali et al., "AI-induced Deskilling." Budzyń et al., "Endoscopist Deskilling."
Miles Klee, "People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies," Rolling Stone, May 4, 2025, https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/. Maggie Harrison Dupré, "People Are Becoming Obsessed with ChatGPT and Spiraling Into Severe Delusions," Futurism, June 10, 2025, https://futurism.com/chatgpt-mental-health-crises. Joe Wilkins, "A Prominent OpenAI Investor Appears to Be Suffering a ChatGPT-Related Mental Health Crisis, His Peers Say," Futurism, July 18, 2025, https://futurism.com/openai-investor-chatgpt-mental-health. Tracy Swartz, "Bots like ChatGPT are triggering ‘AI psychosis’ — even with no history of mental illness," New York Post, August 7, 2025, https://nypost.com/2025/08/07/health/bots-like-chatgpt-are-triggering-ai-psychosis-how-to-know-if-youre-at-risk/. Noor Al-Sibai, "OpenAI Says It’s Scanning Users’ ChatGPT Conversations and Reporting Content to the Police," Futurism, https://futurism.com/openai-scanning-conversations-police. Marlynn Wei, "The Emerging Problem of 'AI Psychosis,'" Psychology Today, September 4, 2025, https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis.
Kate Payne, "An AI chatbot pushed a teen to kill himself, a lawsuit against its creator alleges," Associated Press, October 25, 2024, https://apnews.com/article/chatbot-ai-lawsuit-suicide-teen-artificial-intelligence-9d48adc572100822fdbc3c90d1456bd0. Angela Yang, Laura Jarrett, and Fallon Gallagher, "The family of teenager who died by suicide alleges OpenAI's ChatGPT is to blame," NBC News, August 26, 2025, https://www.nbcnews.com/tech/tech-news/family-teenager-died-suicide-alleges-openais-chatgpt-blame-rcna226147.
Kate Wells, "Eating disorder helpline takes down chatbot after it gave weight loss advice," NPR, June 8, 2023, https://www.npr.org/2023/06/08/1181131532/eating-disorder-helpline-takes-down-chatbot-after-it-gave-weight-loss-advice.
Julie Jargon and Sam Kessler, "A Troubled Man, His Chatbot and a Murder-Suicide in Old Greenwich," The Wall Street Journal, August 28, 2025, https://www.wsj.com/tech/ai/chatgpt-ai-stein-erik-soelberg-murder-suicide-6b67dbfb.
Jared Moore, Declan Grabb, William Agnew, Kevin Klyman, Stevie Chancellor, Desmond C. Ong, and Nick Haber, "Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers," FAccT '25: Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (2025): 599-627, https://doi.org/10.1145/3715275.373203. Hamilton Morrin, Luke Nicholls, Michael Levin, Jenny Yiend, Udita Iyengar, Francesca DelGuidice, Sagnik Bhattacharyya, James MacCabe, Stefania Tognin, Ricardo Twumasi, Ben Alderson-Day, and Thomas Pollak, "Delusions by design? How everyday AIs might be fuelling psychosis (and what can be done about it)," August 22, 2025, https://doi.org/10.31234/osf.io/cmy7n_v5.
Allison Parshall, "What Do Google’s AI Answers Cost the Environment?," Scientific American, June 11, 2024, https://www.scientificamerican.com/article/what-do-googles-ai-answers-cost-the-environment/. Kosmyna, et al., "Your Brain," 142. Wim Vanderbauwhede, "Estimating the Increase in Emissions caused by AI-augmented Search," arXiv, https://doi.org/10.48550/arXiv.2407.16894.
Alexandra Sasha Luccioni, Sylvain Viguier, and Anne-Laure Ligozat, "Estimating the carbon footprint of BLOOM, a 176B parameter language model," The Journal of Machine Learning Research 24, no. 1, https://dl.acm.org/doi/10.5555/3648699.3648952. Renée Cho, "AI's Growing Carbon Footprint," State of the Planet, June 9, 2023, https://news.climate.columbia.edu/2023/06/09/ais-growing-carbon-footprint/. Lauren Leffer, "The AI Boom Could Use a Shocking Amount of Electricity," Scientific American, October 13, 2023, https://www.scientificamerican.com/article/the-ai-boom-could-use-a-shocking-amount-of-electricity/. Melissa Heikkilä, "AI’s carbon footprint is bigger than you think," MIT Technology Review, December 5, 2023, https://www.technologyreview.com/2023/12/05/1084417/ais-carbon-footprint-is-bigger-than-you-think/. Noman Bashir, Priya Donti, James Cuff, Sydney Sroka, Marija Ilic, Vivienne Sze, Christina Delimitrou, and Elsa Olivetti, "The Climate and Sustainability Implications of Generative AI," An MIT Exploration of Generative AI, March 27, 2024, https://mit-genai.pubpub.org/pub/8ulgrckc/release/2. Parshall, "AI Answers Cost." Adam Zewe, "Explained: Generative AI's environmental impact," MIT News, January 17, 2025, https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117. Wim Vanderbauwhede, "The real problem with the AI hype," Limited Systems, January 24, 2025, https://limited.systems/articles/the-real-problem-with-AI/. Michael Vereb, "AI's Environmental Impact: Calculated and Explained," Arbor, August 19, 2025, https://www.arbor.eco/blog/ai-environmental-impact. David Gerard, "Google quietly vanishes its net zero carbon pledge," Pivot to AI, September 5, 2025, https://pivot-to-ai.com/2025/09/05/google-quietly-vanishes-its-net-zero-carbon-commitment/. Victor Tangermann, "OpenAI’s New Data Centers Will Draw More Power Than the Entirety of New York City, Sam Altman Says," Futurism, September 28, 2025, https://futurism.com/artificial-intelligence/openai-new-data-centers-more-power-new-york-city.
Shannon Osaka, "A new front in the water wars: Your internet use," The Washington Post, April 25, 2023, https://www.washingtonpost.com/climate-environment/2023/04/25/data-centers-drought-water-use/. Parshall, "AI Answers Cost." "Data Centers and Groundwater Usage," The Joyce Foundation, August 6, 2024, https://www.joycefdn.org/news/data-centers-and-groundwater-usage. Zewe, "Explained." Pengfei Li, Jianyi Yang, Mohammad A. Islam, Shaolei Ren, "Making AI Less 'Thirsty': Uncovering and addressing the secret water footprint of AI models," Communications of the ACM 68, no. 7 (2025): https://doi.org/10.1145/3724499. Joe Wilkins, "Small Towns Are Rising Up Against AI Data Centers," Futurism, May 4, 2025, https://futurism.com/small-towns-ai-data-centers. Ana Valdivia, "The supply chain capitalism of AI: a call to (re)think algorithmic harms and resistance through environmental lens," Information, Communication & Society 28, no. 12 (2025): https://doi.org/10.1080/1369118X.2024.2420021. "Inside the Water Crisis of Data Centers: Google, Meta, and the Hidden Costs of AI Growth," Digital Information World, August 21, 2025, https://www.digitalinformationworld.com/2025/08/inside-water-crisis-of-data-centers.html. Michelle Fleury and Nathalie Jimenez, "'I can't drink the water' - life next to a US data centre," BBC, July 10, 2025, https://www.bbc.com/news/articles/cy8gy7lv448o.
Baldur Bjarnason, "Why you should ignore most AI research you hear about on social media," Baldur Bjarnason, March 13, 2023, https://www.baldurbjarnason.com/2023/ignore-most-ai-research/. ACLU, "FTC Complaint." Bjarnason, "Intelligence Illusion," 76-80. Deepa Seetharaman, "SEC Investigating Whether OpenAI Investors Were Misled," The Wall Street Journal, February 28, 2024, https://www.wsj.com/tech/sec-investigating-whether-openai-investors-were-misled-9d90b411. Edward Zitron, "Sam Altman Is Full Of Shit," Where's Your Ed At?, May 21, 2024, https://www.wheresyoured.at/sam-altman-is-full-of-shit/. Edward Zitron, "Reality Check," Where's Your Ed At?, April 28, 2025, https://www.wheresyoured.at/reality-check/. Bender and Hanna, "AI Con," 5-23. Edward Zitron, "Oracle and OpenAI Are Full Of Crap," Where's Your Ed At?, September 12, 2025, https://www.wheresyoured.at/oracle-openai/. Tangermann, "Massively Overhyped."
Rea N. Simons, "Beyond 'If We Use It Wisely': Character Ethics, the Virtue of Wisdom, and GenAI in Libraries," Library Trends 73, no. 4 (2025): 669, https://muse.jhu.edu/pub/1/article/968500.