
Artificial Intelligence in Higher Education: Recommended Listening & Viewing

An introduction to artificial intelligence, particularly generative AI, and its impacts and ethical considerations for higher education. It includes resources on best practices, potential instructional applications, and mitigation strategies.



Annotated Bibliography


Databricks. (2019, April 25). Understanding the limitations of AI: When algorithms fail | Timnit Gebru (Google Brain). [Video]. YouTube. https://www.youtube.com/watch?v=aKf6pB4p06E.

[15m 33s] Automated decision making tools are currently used in high stakes scenarios. From natural language processing tools used to automatically determine one’s suitability for a job, to health diagnostic systems trained to determine a patient’s outcome, machine learning models are used to make decisions that can have serious consequences on people’s lives. In spite of the consequential nature of these use cases, vendors of such models are not required to perform specific tests showing the suitability of their models for a given task. Nor are they required to provide documentation describing the characteristics of their models, or disclose the results of algorithmic audits to ensure that certain groups are not unfairly treated. I will show some examples to examine the dire consequences of basing decisions entirely on machine learning based systems, and discuss recent work on auditing and exposing the gender and skin tone bias found in commercial gender classification systems. I will end with the concept of an AI datasheet to standardize information for datasets and pre-trained models, in order to push the field as a whole towards transparency and accountability. [Site]

Gu, A. (Director). (2024). Bad input: Medical devices; Mortgage lending; Facial recognition. Consumer Reports. https://www.consumerreports.org/badinput/.

[Medical Devices, 5m 28s] As COVID-19 and other respiratory illnesses sweep the U.S., healthcare professionals rely on pulse oximeters to measure blood oxygen levels. But the devices are less accurate for patients with darker skin. What happens when a gold standard in diagnostics doesn’t work as well for some groups as it does for others? [Site]

[Mortgage Lending, 5m 49s] Historically, lenders have considered non-white neighborhoods to be at high risk for default, and unfair redlining practices have prevented generations of people of color from accumulating the wealth typically facilitated by homeownership. Many years later, computer algorithms may continue to perpetuate these biased practices. [Site]

[Facial Recognition, 5m 06s] This form of artificial intelligence, which detects physical features to identify individuals, is no longer the stuff of science fiction. In today’s world, this groundbreaking and controversial technology not only unlocks smartphones but also helps corporations and governments in the surveillance of citizens. [Site]

Hall, D. (Producer). (2017, September 5). The age of the algorithm (No. 274) [Audio podcast episode]. In 99% Invisible. https://99percentinvisible.org/episode/the-age-of-the-algorithm/.

[28m 48s] O’Neil also sees a more fundamental issue at work: people tend to trust results that look scientific, like algorithmic risk scores. “I call that the weaponization of an algorithm … an abuse of mathematics,” she says, “and it makes it almost impossible to appeal these systems.” And this, in turn, provides a convenient way for people to avoid difficult decision-making, deferring to “mathematical” results. [Site]

Kantayya, S. (Director). (2020). Coded bias [Film]. 7th Empire Media.

[1h 26m] When MIT Media Lab researcher Joy Buolamwini discovers that facial recognition does not see dark-skinned faces accurately, she embarks on a journey to push for the first-ever U.S. legislation against bias in algorithms that impact us all (IMDb, 2020). Accolades: Sundance Film Festival, Grand Jury Prize nominee; Hamptons International Film Festival, Excellence in Documentary Filmmaking; Calgary International Film Festival, Best International Documentary.

Marcus, G. F. (Host). (2020-2023). Humans vs. machines with Gary Marcus [Audio podcast series]. In The World As You Know It. Apple Podcasts. https://podcasts.apple.com/us/podcast/humans-vs-machines-with-gary-marcus/id1532110146.

A series about the perils and promise of artificial intelligence with cognitive scientist Gary Marcus. For all the progress in artificial intelligence over the last 70 years — computers can now beat people at chess and Go, detect fraud, give driving instructions and write like Shakespeare — we still don’t know how to build AI we can trust. The risks are serious, but the potential benefits of AI are too great to be ignored. In this special edition series, host Gary Marcus — cognitive scientist, best-selling author and AI entrepreneur — digs into AI’s history, present and future, bringing to life some of the technology’s most significant breakthroughs and failures. He enlists engineers, scientists, philosophers and journalists working at the forefront of AI to explore what’s wrong with our current approach and ways we might change it. [Site]

Young, J. R. (Producer). (2024, February). Inside the push to bring AI literacy to schools and colleges [Audio podcast episode]. In EdSurge Podcast. SoundCloud. https://soundcloud.com/edsurge/inside-the-push-to-bring-ai-literacy-to-schools-and-colleges.

[53m 54s] There’s a growing push to add AI literacy as a subject in schools and colleges. But what exactly is AI literacy, and can educators promote curiosity about the subject amid their own concerns, and in some cases fear, around ChatGPT and other generative AI? [Site]
