Book Review: Atlas of AI

In Atlas of AI, Kate Crawford challenges the conventional comparison between human intelligence and artificial intelligence. She argues that likening AI to human minds often neglects the embodiment, relationships, and environments integral to human functioning. While AI is typically perceived as disembodied and abstract, Crawford demonstrates that it is deeply material and embodied—constructed from natural resources, human labor, and vast amounts of data.

Crawford highlights that AI involves classification decisions impacting identities, attempts to interpret human emotions, and serves as a tool for state power. Moreover, she emphasizes that AI is not autonomous; it heavily relies on human-provided data and predefined rules. This perspective forms the backbone of the book’s narrative, offering a fresh lens through which to view AI.

An interesting choice in the introduction is Crawford’s decision to treat AI and machine learning as near-synonyms, on the grounds that “AI” dominates marketing and popular discourse while “machine learning” is the term favored in technical literature. This equation is somewhat narrow, however. AI encompasses more than machine learning: it also includes search algorithms, optimization, planning, scheduling, and knowledge representation. Major AI conferences and journals cover this broader range of topics, and researchers do actively use the term “artificial intelligence” in their scientific work.

Crawford also brings attention to parties not traditionally considered in discussions about AI ethics: contract workers, immigrants, environmental impacts, and harm to other species. She sheds light on the complex web of inequality and politics that underpins AI development and deployment.

Chapter 1 delves into AI’s dependency on raw materials such as lithium, essential for the rechargeable batteries in AI-powered devices. Crawford visits Silver Peak, Nevada, home to a vast underground lake of lithium brine, to highlight the environmental damage, miners’ health problems, and displaced communities that result from what she terms “computational extraction.” The phrase refers to the stripping of natural resources to support computational technologies, revealing the often overlooked environmental and human costs of AI.

The computation required to build AI models doesn’t exist in a vacuum. Computers must be manufactured, materials extracted, and energy generated, often through environmentally harmful means. Crawford cites research estimating that training a single natural language processing (NLP) model can produce more than 626,000 pounds of carbon dioxide emissions, the equivalent of roughly 125 round-trip flights between New York and Beijing. This stark statistic underscores the significant carbon footprint of AI development.
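The flight comparison can be sanity-checked with back-of-the-envelope arithmetic. The per-mile emission factor and the New York–Beijing distance below are rough illustrative assumptions, not figures from the book; under them, the emissions figure works out to the same order of magnitude as the cited 125 round trips.

```python
# Rough sanity check of the "125 round-trip flights" comparison.
# All flight figures below are illustrative assumptions, not from the book.
MODEL_EMISSIONS_LB = 626_000        # cited NLP-model training emissions (lb CO2)
ROUND_TRIP_MILES = 2 * 6_800        # assumed New York-Beijing round trip
LB_CO2_PER_PASSENGER_MILE = 0.4     # assumed airline emission factor

per_flight_lb = ROUND_TRIP_MILES * LB_CO2_PER_PASSENGER_MILE
equivalent_flights = MODEL_EMISSIONS_LB / per_flight_lb
print(f"~{equivalent_flights:.0f} round trips")  # same order as the cited 125
```

Small changes in the assumed emission factor move the result by tens of flights, which is why such comparisons are best read as order-of-magnitude illustrations.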

In Chapter 2, Crawford explores the evolving relationship between humans and technology in the workplace, focusing on human-robotic hybrid work in warehouses and assembly lines. She situates current practices within a historical context of standardization, simplification, optimization, and scaling—processes that predate AI’s emergence in industries like car manufacturing and meat processing.

The chapter examines how technology is used for worker surveillance and tracking, shedding light on the changing dynamics of labor. Crawford discusses the gig economy and the often precarious conditions faced by workers who support AI systems, highlighting issues of inequality and exploitation.

Chapter 3 addresses AI’s insatiable demand for vast amounts of data. Data about people, such as voices, facial images, and texts, became essential for training and testing AI systems, and it was typically used detached from the individuals and contexts it was collected from. Early AI development paid little attention to privacy, but growing awareness and regulation now require some form of consent.

With the advent of the internet, diverse types of data became readily available, blurring the lines between public and private information. Crawford highlights the complex interplay between data collection and privacy concerns, emphasizing that data is not a neutral resource but is embedded with human values and biases.

Data labeling necessitates predefined classes, making the decision on these categories crucial in building AI models. Chapter 4 focuses on the assumptions and implications of these decisions. Classifications reflect specific worldviews and often assume universality, particularly concerning gender, race, and other human characteristics.

Crawford connects the issue of bias to classification, emphasizing that the absence or misrepresentation of certain labels can perpetuate systemic biases in AI systems. She argues that biases are not merely technical errors but are inherent in the data itself, which carries human values and normative frameworks. For example, categorizing race poses complex issues regarding how racial identities are conceptualized and understood.

In Chapter 5, Crawford examines AI’s application in recognizing human emotions through facial analysis. This task is based on the controversial assumption that emotions are universal and can be accurately detected by machines. She traces this assumption back to pre-AI studies on physiognomy and affect recognition, which have faced significant skepticism regarding their validity.

Crawford critiques companies like Emotient (acquired by Apple) and Affectiva (a startup spun out of the MIT Media Lab) that claim to detect emotions from facial expressions. She argues that these speculative ventures rest on questionable scientific foundations, notably Paul Ekman’s theory of universal facial expressions, which lacks rigorous empirical support. This historical perspective underscores the challenges and ethical concerns in AI’s efforts to interpret human emotions accurately.

Chapter 6 explores how many AI practices have roots in military priorities and methodologies, influenced by early military funding for AI research. Crawford illustrates how military-driven classification and surveillance frameworks have permeated civilian life, affecting areas like banking, airport security, and public surveillance.

She discusses the ethical implications of AI’s use in state power and control, noting that despite corporate ethical guidelines—like Google’s AI Principles prohibiting the development of weapons or technologies that facilitate harm—AI applications in warfare and surveillance raise significant concerns. Bias or discrimination in these contexts can have dire, life-altering consequences.

Atlas of AI presents a compelling argument for understanding AI as an embodied and material phenomenon deeply intertwined with natural resources, human labor, and data. Crawford challenges the conventional view of AI as disembodied intelligence, urging readers to consider the broader implications of AI’s development and deployment.

By examining the historical, social, and ethical dimensions of AI, the book offers a nuanced perspective that enriches our understanding of this rapidly evolving field. Crawford’s insights prompt critical reflection on the often unseen costs of AI and encourage a more responsible and equitable approach to technology.

This book is a thought-provoking read for anyone interested in AI, ethics, and the societal impacts of technology. It serves as a crucial reminder that AI is not just about algorithms and data—it’s about people, resources, and the world we live in.
