Seeing Like a Computer, Reading Like a Computer

A pair of books contrasting the training of computers to see like humans with the teaching of humans to read like computers.

Reading Like a Computer

Facebook’s content moderators are stationed around the globe and are most often hired by third-party companies, which create slide decks to train their armies of subcontractors. These training slides are the only known repository of Facebook’s guidelines, addressing a vast range of topics, from censoring pornography to defining hate speech. Reading Like a Computer organizes and graphically presents the contents of leaked slide decks that describe Facebook’s hate speech moderation rules.

These training materials exhibit PowerPoint design at its worst: bullet-point oversimplification, a lack of organizational coherence between rules and examples, and internal inconsistencies. Reading Like a Computer considers the syntactic and semantic discrepancies between what moderators are expected to allow and to block on the social networking platform. The title implies that content moderators are trained to think like an algorithm, rendering yes/no decisions on complex and contextually layered communication.

Reading Like a Computer exposes underlying problems in Facebook’s policies and questions Facebook’s decision to develop AI that operates on these ad hoc rules. The book is not only an exercise in decoding a set of byzantine guidelines, but also a guide to the compounding dangers that culturally agnostic, ahistorical, algorithmic thinking can inflict when imposed on a complex, global “community of friends.”


Seeing Like a Computer

Automatic Alternative Text is a Facebook feature that generates image captions. The goal of these keyword descriptions is to provide context for photos to visually impaired users who rely on screen readers to interact with Facebook. The computer-generated texts are not visible on the interface, but can be found in the alt attribute of the image tag in the HTML source code.
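To make the mechanics concrete, here is a minimal sketch of how such hidden captions can be pulled out of a page’s source using only the Python standard library. The sample markup below is hypothetical, patterned on the “Image may contain: …” phrasing Facebook’s automated captions have used; it is an illustration, not Facebook’s actual markup.

```python
from html.parser import HTMLParser


class AltTextExtractor(HTMLParser):
    """Collects the alt attribute of every <img> tag encountered."""

    def __init__(self):
        super().__init__()
        self.captions = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            alt = dict(attrs).get("alt")
            if alt:
                self.captions.append(alt)


# Hypothetical markup resembling a computer-generated caption.
sample = '<img src="photo.jpg" alt="Image may contain: one person, smiling">'

parser = AltTextExtractor()
parser.feed(sample)
print(parser.captions)  # ['Image may contain: one person, smiling']
```

The point of the sketch is simply that the caption lives in the page’s source rather than in anything a sighted reader sees rendered on screen.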

Seeing Like a Computer documents images from Mark Zuckerberg’s Facebook Wall alongside their automated captions. The images in the book are black-and-white inverted so that their contents are not immediately recognizable to the reader. This manipulation also heightens the contrast in each photo so that the outlines of figures are more prominent. These images complement the alternative text’s focus on objects, in phrases like “one person, smiling” or “person, outdoor, nature.” The pairing of robotic captions with distorted photos of Facebook’s founder produces an eerie effect, reducing people to objects of surveillance while literally and figuratively hiding important content and context from view.

Diagram from Facebook’s Automatic Alternative Text patent application

The book’s exploration of automated captions raises questions about what the computer is trained to recognize and what is left out. While automated captions may be more helpful than no alternative text at all, the book probes what accessibility means when it is delegated to a machine.

These books can be purchased at unknownunknowns.org.
