All sources are listed at the bottom for further reading.
Defining artificial intelligence (AI) literacy, coming as it does close on the heels of the generative AI boom, remains an unsettled and ongoing effort, but it's clear that faculty in both higher education and PK-12 already recognize its urgency. Cornell University's Center for Teaching Innovation does not mention "AI literacy" explicitly on its page on Ethical AI for Teaching and Learning, but it provides faculty with a short list of key issues for building literacy in generative AI, deliberately addressing ethics, privacy, and equity.
Three different explanations for what constitutes AI literacy are given below.
"Short and sweet"
Alyson Klein of Education Week, a veteran of educational technology (EdTech) coverage, considers it to comprise 1) a basic understanding of how AI works, 2) hands-on use, and 3) discussion and analysis of the ethical implications of AI (2023).
"It's complicated."
Similarly, but with a scholarly approach, Long and Magerko (2020) defined AI literacy as "a set of competencies that enables individuals to critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace." They consider digital literacy a prerequisite for AI literacy. They produced a conceptual framework built on five thematic questions (What is AI? What can AI do? How does AI work? How should AI be used? How do people perceive AI?), which structure their seventeen competencies and fifteen design considerations. However, one criticism of their literature review points out that only one of the sources they consulted on digital or data literacy came from a library journal (Hervieux & Wheatley, 2023), despite the library and information science (LIS) field's long record of scholarly research on, and professional understanding of, information literacy and its subtypes.
"Underlying concepts and ethics"
Echoing both Klein and Long and Magerko, Ng et al., in Conceptualizing AI literacy: An exploratory review, wrote: "Most researchers advocated that instead of merely knowing how to use AI applications, learners should learn about the underlying AI concepts for their future careers and understand the ethical concerns in order to use AI responsibly" (2021).
It's worth noting that computational literacy, an in-depth understanding of the mechanics of AI development, does not factor into any of these definitions of AI literacy, but ethics consistently does.
While definitions vary, the following concepts consistently make up what many educators and researchers have identified as the essential parts of AI literacy: a basic understanding of what AI is and how it works, knowing how to use AI, determining when its use is appropriate, and applying that understanding in practice while continuously engaging with the ethical considerations and implications surrounding AI.
Ethical considerations
Myriad economic, educational, ethical, legal, social, and psychological harms can be further perpetuated by AI tools in the absence of rigorous mitigation efforts like debiasing, regulatory oversight, and sociotechnological research. Below is a list of ethical issues brought up regarding AI, particularly generative AI, with sources for further reading.
AI bias (a.k.a. machine learning bias, algorithm bias, or algorithmic bias) refers to AI systems that produce and perpetuate biased results reflecting human biases within a society, including historical social inequality. Bias can enter through the initial training data, the algorithm, or the predictions (IBM Data & AI Team, 2023). Training data for a facial recognition algorithm that over-represents white people may cause errors when the system attempts to recognize people of color. Likewise, security data gathered in geographic areas that are predominantly Black or brown could create racial bias in AI tools used by police (IBM Data & AI Team, 2023).
People process information and make judgments, and they are inevitably influenced by their experiences and preferences. Consequently, they build their cognitive biases into AI systems through their selections and weighting; for example, favoring datasets gathered from Americans rather than sampling from many populations around the globe causes problems for models intended for global use (IBM Data & AI Team, 2023). A toy simulation of this sampling problem appears below.
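As a minimal sketch of how sampling bias skews a model, consider the following simulation. It is purely illustrative: the regions, the rates, and the idea of a "model" reduced to a single learned rate are all invented for this example.

```python
import random

random.seed(0)

# Hypothetical "true" rates of some behavior a model must predict,
# by region. These numbers are invented for illustration only.
TRUE_RATES = {"North America": 0.70, "Europe": 0.55, "Asia": 0.35,
              "Africa": 0.30, "South America": 0.45}

def sample(region, n):
    """Draw n binary observations from a region's true rate."""
    return [random.random() < TRUE_RATES[region] for _ in range(n)]

# Biased training set: gathered entirely from North American users.
training_data = sample("North America", 10_000)
learned_rate = sum(training_data) / len(training_data)  # the "model"

# Deploy the single learned rate globally and compare with reality.
print(f"Model's learned rate: {learned_rate:.2f}")
for region, truth in TRUE_RATES.items():
    print(f"{region:>13}: true rate {truth:.2f}, "
          f"model error {learned_rate - truth:+.2f}")
```

The model fits its home sample almost perfectly while misestimating every other region, which is the pattern behind the facial recognition and policing examples above.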
Further reading
Center for Critical Race + Digital Studies. What is algorithmic bias? In A people's guide to finding algorithmic bias. https://www.criticalracedigitalstudies.com/peoples-guide-posts/what-is-algorithmic-bias.
Ferrara, E. (2023). Fairness and bias in artificial intelligence: A brief survey of sources, impacts, and mitigation strategies. Sci, 6(1): 3, https://doi.org/10.3390/sci6010003.
Cheating, plagiarism, and misuse
Much attention is rightly being paid to plagiarism enabled by natural language processing chatbots and its impact across higher education, but audio/visual generators present similar problems, in addition to the bias baked into their results. “We have these technologies that we think are dispassionate and incapable of oppression,” [Van] Davis says. “In reality, they’re extraordinarily biased. The danger is that we are trained to think it’s unbiased and to trust it more than we trust humans.” (Downs, 2024).
Beyond academic dishonesty and plagiarism, some novice coders have been using AI-generated code to speed up their work, creating messy internal problems, and hackers can use AI-generated code to aid cyberattacks and security breaches. All of the potential uses and misuses of general and generative AI are creating a host of problems across many fields and services.
Further reading
Barrett, P. M. & Hendrix, J. (2023, June). Safeguarding AI: Addressing the risks of generative artificial intelligence. NYU Stern Center for Business and Human Rights. https://bhr.stern.nyu.edu/wp-content/uploads/2024/01/NYUCBHRGenerativeAI_June20ONLINEFINAL.pdf.
Downs, L. (2024, April 25). Decoding generative AI and equity in higher education. WCET Frontiers. https://wcet.wiche.edu/frontiers/2024/04/25/decoding-generative-ai-and-equity-in-higher-education/.
Marcus, G. & Southen, R. (2024, January 6). Generative AI has a visual plagiarism problem: Experiments with Midjourney and DALL-E 3 show a copyright minefield. IEEE Spectrum. https://spectrum.ieee.org/midjourney-copyright.
Disinformation
Generative artificial intelligence (GAI) adds a new dimension to the problem of disinformation. Freely available and largely unregulated tools make it possible for anyone to generate false information and fake content in vast quantities. These include imitating the voices of real people and creating photos and videos that are indistinguishable from real ones (Endert, 2024).
Further reading
Endert, J. (2024, March 26). Generative AI is the ultimate disinformation amplifier. DW Akademie. https://akademie.dw.com/en/generative-ai-is-the-ultimate-disinformation-amplifier/a-68593890.
Environmental impact
Apart from the environmental toll of chip manufacturing, the training process for a single AI model, such as a large language model, can consume thousands of megawatt hours of electricity and emit hundreds of tons of carbon. AI model training can lead to the evaporation of an astonishing amount of fresh water into the atmosphere for data center heat rejection. Global AI energy demand is projected to increase to at least 10 times the current level and exceed the annual electricity consumption of a small country by 2026 (Ren & Wierman, 2024).
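As a rough back-of-the-envelope check on how those two figures relate, the sketch below uses assumed inputs (on the order of what has been reported for one large language model's training run), not measurements:

```python
# Illustrative back-of-envelope: converting training energy to carbon.
# Both inputs are assumptions chosen for this example, not measurements.
training_energy_mwh = 1_300   # assumed energy for one training run (MWh)
grid_intensity = 0.4          # assumed grid carbon intensity (tCO2 per MWh)

emissions_tons = training_energy_mwh * grid_intensity
print(f"~{emissions_tons:.0f} tons of CO2")  # ~520 tons: "hundreds of tons"
```

Actual figures vary widely with hardware, data center efficiency, and the local grid's energy mix, which is part of the uneven distribution Ren and Wierman describe.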
Further reading
Ren, S. & Wierman, A. (2024, July 15). The uneven distribution of AI's environmental impacts. Harvard Business Review. https://hbr.org/2024/07/the-uneven-distribution-of-ais-environmental-impacts.
Exploitation via uncompensated (a.k.a. "stolen") labor without consent; destruction of the creative economy
AI art generators recover patterns and styles from huge numbers of published artworks and, with strategic prompting, can output works that replicate the style of existing artists for their users. This has the potential to devastate the creative economy. According to Trystan Goetze, AI image generators, at least those using diffusion models, involve a large-scale and morally objectionable form of theft, rooted in the appropriation of vast numbers of existing artworks (2024).
Further reading
Goetze, T. S. (2024, June 5). AI art is theft: Labour, extraction, and exploitation: Or, on the dangers of stochastic Pollocks. FAccT '24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency. https://dl.acm.org/doi/abs/10.1145/3630106.3658898.
U.S. Federal Trade Commission. (2023, December). Generative artificial intelligence and the creative economy staff report: Perspectives and takeaways. https://www.ftc.gov/system/files/ftc_gov/pdf/12-15-2023AICEStaffReport.pdf.
Job displacement
Experts predict that companies will be able to decompose existing jobs into bundles of tasks and assess the extent to which each bundle can be entrusted to narrow AI (Eloundou et al., 2023).
All jobs that involve performing creative tasks, to any extent, are exposed, with skilled segments of the labour force particularly at risk. Many jobs risk being stripped of what makes them worthwhile and justifies paying someone to perform them. An increasingly polarised and disrupted labour market, in which humans have to compete against AI and against each other, becomes a real possibility (Ponce Del Castillo, 2023).
Further reading
Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2023). GPTs are GPTs: An early look at the labor market impact potential of large language models, v5. arXiv preprint. https://doi.org/10.48550/arXiv.2303.10130.
Ponce Del Castillo, A. (2023). Generative AI, generating precariousness for workers?. AI & Society. https://doi.org/10.1007/s00146-023-01719-9.
Market concentration
The market for the most advanced models of generative artificial intelligence (AI) may become extremely concentrated, due to the high costs of computational resources and the vast quantities of data required for training. In the absence of clear antitrust rules and other regulatory actions, market concentration in generative AI could lead to systemic risks and stark inequality because the barrier to entry requires immense computational power (Korinek & Vipra, 2024).
Further reading
Korinek, A. & Vipra, J. (2024, April 4-5). Market concentration implications of foundation models: The invisible hand of ChatGPT. Economic Policy. https://www.economic-policy.org/wp-content/uploads/2024/03/EcPol-2023-183.R1_Proof_hi_Korinek_Vipra.pdf.
The "black box" problem
Rawashdeh says that, just as with human intelligence, we have no idea how a deep learning system comes to its conclusions. It “lost track” of the inputs that informed its decision making a long time ago; or, more accurately, it was never keeping track. This inability to see how deep learning systems make their decisions is known as the “black box problem” (Rawashdeh, 2023).
Further reading
Rawashdeh, S. (2023, March 6). AI's mysterious ‘black box’ problem, explained. UM-Dearborn News. https://umdearborn.edu/news/ais-mysterious-black-box-problem-explained.
Privacy
Critics say large language models are collecting and often disclosing personal information gathered from around the web, often without the permission of those involved. Many online publishers and AI companies have added language noting that customer data may be used to train future models. In some cases, people have the option to choose not to have their data used for AI training, though such policies vary and data sharing settings can be confusing and hard to find (Fried, 2024).
Further reading
Fried, I. (2024, March 14). Generative AI's privacy problem. Axios. https://www.axios.com/2024/03/14/generative-ai-privacy-problem-chatgpt-openai.
Copyright violations
Generative AI creators have argued that prompts do not reproduce the training data, which should protect them from claims of copyright violation, but audit studies have shown that end users of generative AI can issue prompts that result in copyright violations by producing works that closely resemble copyright-protected content. Human creators know to decline requests to produce content that violates copyright (Susarla, 2024), but AI bots do not.
Further reading
Susarla, A. (2024, March 22). Generative AI could leave users holding the bag for copyright violations. The Conversation. https://theconversation.com/generative-ai-could-leave-users-holding-the-bag-for-copyright-violations-225760.
Over-reliance
Excessive reliance on AI can diminish creativity and critical thinking as students become too dependent on AI-generated content. This dependency can foster complacency and erode essential problem-solving skills (Zhai et al., 2024).
Further reading
Zhai, C., Wibowo, S. & Li, L. D. (2024). The effects of over-reliance on AI dialogue systems on students' cognitive abilities: a systematic review. Smart Learning Environments, 11(28). https://doi.org/10.1186/s40561-024-00316-7.
Lack of oversight
A group of current and former OpenAI employees published an open letter on June 4, 2024, describing concerns about the artificial intelligence industry's rapid advancement despite a lack of oversight and an absence of whistleblower protections. “We also understand the serious risks posed by these technologies,” they wrote, adding that the companies “currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.”
Further reading
Hilton, J., Kokotajlo, D., Kumar, R., Nanda, N., Saunders, W., Wainwright, C., Ziegler, D., Anonymous, Anonymous, Anonymous, Anonymous, Anonymous, & Anonymous. (2024, June 4). A right to warn about advanced artificial intelligence [Open Letter]. https://righttowarn.ai/.
Writing for Forbes, Bernard Marr, a business-centric IT and AI expert, identified 15 major risks associated with artificial intelligence: lack of transparency, bias and discrimination, privacy concerns, ethical dilemmas/moral decision-making, security risks, concentration of power, dependence, job displacement, economic inequality, legal and regulatory challenges, an arms race, loss of human connection, misinformation and manipulation, unintended consequences, and existential risks (2023).
He is not alone. AI impact experts like Dr. Safiya Umoja Noble at UCLA and Prof. Meredith Broussard of NYU have published multiple books and articles and delivered talks to fellow scholars, information literacy professionals, and the general public, sounding the alarm on problems created and/or exacerbated by AI. Prof. Broussard has called bias in artificial intelligence "the civil rights issue of our time" (2021).
Ann Cairns, Vice Chairman of Mastercard, wrote in her emerging technologies op-ed for the World Economic Forum, "The major problem with AI is what’s known as ‘garbage in, garbage out'" (2019), a well-known adage among data scientists. In Why AI is Failing the Next Generation of Women, she continued:
We feed algorithms data that introduces existing biases, which then become self-fulfilling. In the case of recruitment, a firm that has historically hired male candidates will find that their AI rejects female candidates, as they don’t fit the mould of past successful applicants. In the case of crime or recidivism prediction, algorithms are picking up on historical and societal biases and further propagating them.
The ethics of artificial intelligence remain an ongoing discussion and area of research. Decisions on how to address these problems make the discussion a high-stakes one for everyone.
Visit a text generative AI bot and prompt it to list five scholarly article sources on a topic of your choice, in a citation style of your choice. Then 1) check the citations against one of HCC Library's style guides (MLA, APA, Chicago) for technical accuracy, and 2) check the HCC Library collection (via Eagle Library Search, individual databases, or the eJournals finder) or the web (e.g., Google Scholar, DOAJ, Digital Commons Network) to find out whether the scholarly articles exist or whether they were "hallucinated" (read: false output). If you're unable to locate them, consider enlisting the help of an HCC librarian for verification, and be sure to let them know what you're checking and why.
Consideration: Using hallucination to describe misinformation generated by AI is contentious. The term entered common parlance in 2023, but critics and AI experts (like Usama Fayyad) say it inaccurately conveys consciousness and impaired rationality, when AI is neither conscious nor rational (Stening, 2023). Confabulation is emerging as a more accurate term.
Recommendations: In a text document, note which bot you used, how you wrote your prompt, when (date/time) you prompted the bot, your list of results, and the accuracy of your results based on your follow-up investigation; a small scripted example of keeping such a record appears below. Consider testing one or two other text GAI bots. Were the lists of results the same, somewhat similar, or different? Were the answers more or less accurate with different bots? Would you consider using a bot again for this purpose in the future? Write your responses.
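If you prefer to automate the record-keeping, here is a minimal sketch in Python. It assumes the OpenAI Python client (pip install openai) with an API key set in your environment; the model name, topic, and log file name are placeholders, and any chatbot's ordinary web interface works just as well for this activity.

```python
from datetime import datetime, timezone
from openai import OpenAI  # assumes: pip install openai, OPENAI_API_KEY set

client = OpenAI()
bot = "gpt-4o-mini"  # example model name; substitute whichever bot you test
prompt = ("List five scholarly articles about community college libraries, "
          "cited in APA style.")

response = client.chat.completions.create(
    model=bot,
    messages=[{"role": "user", "content": prompt}],
)

# Append a dated record to a notes file for later verification against
# Eagle Library Search, Google Scholar, DOAJ, etc.
with open("gai_citation_log.txt", "a", encoding="utf-8") as log:
    log.write(f"--- {datetime.now(timezone.utc).isoformat()} | bot: {bot}\n")
    log.write(f"prompt: {prompt}\n")
    log.write(response.choices[0].message.content + "\n\n")
```

Each run appends the bot name, timestamp, prompt, and raw output to one file, making it easy to verify the citations later and compare runs across different bots.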
Armstrong, P. [paul.armstrong]. (2023, April). And here is ChatGPTs thoughts on the topic. Yes, it would be more accurate to say that AI models [Online forum post]. OpenAI Community.
https://community.openai.com/t/hallucination-vs-confabulation/172639/3.
Cairns, A. (2019, January 18). Why AI is failing the next generation of women. World Economic Forum. Retrieved May 11, 2024.
Center for Teaching Innovation. (n.d.) Ethical AI for teaching and learning. Cornell University. Retrieved May 7, 2024, from
https://teaching.cornell.edu/generative-artificial-intelligence/ethical-ai-teaching-and-learning.
Downs, L. (2024, April 25). Decoding generative AI and equity in higher education. WCET Frontiers.
https://wcet.wiche.edu/frontiers/2024/04/25/decoding-generative-ai-and-equity-in-higher-education/.
Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2023). GPTs are GPTs: An early look at the labor market impact potential of large language models,
v5. arXiv preprint. https://doi.org/10.48550/arXiv.2303.10130.
Endert, J. (2024, March 26). Generative AI is the ultimate disinformation amplifier. DW Akademie.
https://akademie.dw.com/en/generative-ai-is-the-ultimate-disinformation-amplifier/a-68593890.
Fried, I. (2024, March 14). Generative AI's privacy problem. Axios. https://www.axios.com/2024/03/14/generative-ai-privacy-problem-chatgpt-openai.
Goetze, T. S. (2024, June 5). AI art is theft: Labour, extraction, and exploitation: Or, on the dangers of stochastic Pollocks.
FAccT '24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency. https://dl.acm.org/doi/abs/10.1145/3630106.3658898.
Hervieux, S. & Wheatley, A. (2023, February 23). Creating an academic library workshop series on AI literacy: How can academic librarians foster critical AI literacy in their communities?
Hilton, J., Kokotajlo, D., Kumar, R., Nanda, N., Saunders, W., Wainwright, C., Ziegler, D., Anonymous, Anonymous, Anonymous, Anonymous, Anonymous, & Anonymous.
(2024, June 4). A right to warn about advanced artificial intelligence [Open Letter]. https://righttowarn.ai/.
IBM Data & AI Team. (2023, October 16). Shedding light on AI bias with real world examples. IBM. Retrieved May 15, 2024.
https://www.ibm.com/blog/shedding-light-on-ai-bias-with-real-world-examples/.
Klein, A. (2023, May 10). AI literacy, explained. Education Week. https://www.edweek.org/technology/ai-literacy-explained/2023/05.
Long, D., & Magerko, B. (2020, April 23). What is AI literacy? Competencies and design considerations. Proceedings of the 2020
CHI Conference on Human Factors in Computing Systems, Honolulu, HI, 1-16. https://doi.org/10.1145/3313831.3376727.
Korinek, A. & Vipra, J. (2024, April 4-5). Market concentration implications of foundation models: The invisible hand of ChatGPT. Economic Policy. https://www.economic-policy.org/wp-content/uploads/2024/03/EcPol-2023-183.R1_Proof_hi_Korinek_Vipra.pdf.
Marr, B. (2023, June 2). The 15 biggest risks of artificial intelligence. Forbes. https://www.forbes.com/sites/bernardmarr/2023/06/02/the-15-biggest-risks-of-artificial-intelligence/?sh=419aa23c2706.
Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2.
https://doi.org/10.1016/j.caeai.2021.100041.
Ponce Del Castillo, A. (2023). Generative AI, generating precariousness for workers?. AI & Society.
https://doi.org/10.1007/s00146-023-01719-9.
Rawashdeh, S. (2023, March 6). AI's mysterious ‘black box’ problem, explained. UM-Dearborn News.
https://umdearborn.edu/news/ais-mysterious-black-box-problem-explained.
Ren, S. & Wierman, A. (2024, July 15). The uneven distribution of AI's environmental impacts. Harvard Business Review.
https://hbr.org/2024/07/the-uneven-distribution-of-ais-environmental-impacts.
Stening, T. (2023, November 10). What are AI chatbots actually doing when they ‘hallucinate’? Here’s why experts don’t like the term. Northeastern Global News.
https://news.northeastern.edu/2023/11/10/ai-chatbot-hallucinations/.
Susarla, A. (2024, March 22). Generative AI could leave users holding the bag for copyright violations. The Conversation. https://theconversation.com/generative-ai-could-leave-users-holding-the-bag-for-copyright-violations-225760.
Zhai, C., Wibowo, S. & Li, L. D. (2024). The effects of over-reliance on AI dialogue systems on students' cognitive abilities: a systematic review.
Smart Learning Environments, 11(28). https://doi.org/10.1186/s40561-024-00316-7.