In this book, Zuboff explains the new economic and social system that has allowed the likes of Google, Facebook, and Amazon to become some of the largest companies in the world today. She then explains the implications of this system and why we need to be cautious about its current and future developments.
This book caught my eye because it was recommended on multiple best-book lists in 2019 (including TIME, Bloomberg, the New York Times, and Obama's personal book list). Is it worth your time? I think the ideas contained in this book are very important. But it is a BIG book, at times unnecessarily wordy, and in parts poorly structured. I hope these notes help distill its core ideas.
This book summary is divided into the following sections:
(1) What is surveillance capitalism?
(2) The three stages of the surveillance capitalist system
(3) Should we be worried about surveillance capitalism?
"Surveillance capitalism" (SC) is the name that Zuboff gives to the economic system that has promoted the growth of firms that profit from owning and selling data about human behaviour (e.g. our consumption patterns). These firms are "surveillance capitalists" - e.g. Google, Facebook, and Amazon. She defines SC as follows:
"A new economic order that claims human experience as free raw material for hidden commercial practices of extraction, prediction, and sales."
Let's break this down. Zuboff's model of the SC machinery looks like this:
To see the difference between SC and MC, let's think about the process of manufacturing cars (a typical production process in an MC system):
Here are two key differences between SC and MC:
The first stage of surveillance capitalism is the extraction of data from users. Crucially, Zuboff distinguishes between data collected for the purpose of improving the product (e.g. how well the search engine works) and data collected for reasons unrelated to product improvement (i.e. for the benefit of the firms to which the surveillance capitalists sell prediction products). She calls the latter "behavioural surplus".
A big claim of this book is that:
"Surveillance capitalism unilaterally claims human experience as free raw material for translation into behavioral data. Although some of these data are applied to product or service improvement, the rest are declared as proprietary behavioural surplus."
As people started to search online more, Google started to produce new kinds of data. For example, in addition to keywords, each Google search query produces data such as the number and pattern of search terms, how a query is phrased, its spelling, punctuation, dwell times, click patterns, and location.
In its early days, Google used these data by-products to improve its search engine. By figuring out what people searched for and what they clicked on, Google's algorithms could learn to produce more relevant and comprehensive search results for its users. The logic is as follows: more queries mean more learning; more learning means more relevance; more relevance means a better search engine.
However, Google soon figured out that these data could serve as a broad sensor of human behaviour. In the early 2000s, Google started to feed this "behavioural surplus" into a predictive model called "matching" in order to target ads at specific users. The basic idea is to match ads with queries: each time a user queries Google's search engine, the system simultaneously presents a specific configuration of a particular ad, generated by the matching system.
Google could then track when users actually clicked on an ad (the "click-through" rate). The higher the click-through rate, the more other firms would want to advertise with Google. This matching system was formalized in a patent Google filed in 2003, titled "Generating User Information for Use in Targeted Advertising". Zuboff argues that this is problematic because the main beneficiaries of targeted advertising were the advertisers, rather than the person doing the searching on Google.
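To make the "matching" idea concrete, here is a minimal toy sketch of matching ads to a query, ranked by an estimated click-through rate. Everything in it (the scoring formula, the keyword-overlap heuristic, the data) is hypothetical and of my own invention; Google's actual system is vastly more sophisticated.

```python
# Toy sketch only: match each query to the ad with the highest
# estimated click-through rate (CTR). All names, weights, and data
# are illustrative assumptions, not Google's actual method.

def estimate_ctr(ad, query_terms, history):
    """Estimate CTR from keyword overlap plus the ad's past click rate."""
    overlap = len(set(ad["keywords"]) & set(query_terms))
    clicks, impressions = history.get(ad["id"], (0, 1))
    past_rate = clicks / impressions
    return overlap * 0.1 + past_rate  # naive linear blend

def match_ad(ads, query, history):
    """Pick the ad whose estimated CTR for this query is highest."""
    terms = query.lower().split()
    return max(ads, key=lambda ad: estimate_ctr(ad, terms, history))

ads = [
    {"id": "a1", "keywords": ["running", "shoes"]},
    {"id": "a2", "keywords": ["coffee", "beans"]},
]
# (clicks, impressions) observed so far -- the "behavioural surplus"
history = {"a1": (5, 100), "a2": (10, 100)}

print(match_ad(ads, "best running shoes", history)["id"])
```

The point of the sketch is the feedback loop Zuboff describes: every click feeds back into `history`, which makes the next match more certain, which makes the ad inventory more valuable to advertisers.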
The second stage of surveillance capitalism is the prediction of our behaviour based on the raw data extracted in stage 1. (This is also called "economies of scope" in the book.)
Surveillance capitalist companies build models to predict what you're going to do next based on the data extracted in stage 1. This is the backbone of the Netflix/YouTube "recommendation" system - based on what you've watched in the past, the model can suggest what you will like in the future. This prediction is therefore often spoken about under the banner of "personalization".
The key thing to remember is that, for these models to make accurate predictions, surveillance capitalist firms need lots of data to "train" them (i.e. you need to feed in past information about how users behaved in order for these models to learn how current users will behave in the future). So as competition between surveillance capitalist companies grows, so does their incentive to extract more data from us in order to build better prediction models.
The third stage in surveillance capitalism is the modification of our behaviour. (This is also called "economies of action" in the book.)
At this stage, machine processes are configured to intervene in the real world among real people and things. These interventions are designed to enhance the certainty that the user will do certain things - they nudge, tune, herd, and modify behaviour in specific directions.
There are three main paths to behavioural modification:
Pokémon Go is a striking example of gamification as a way to change behaviour. The game planted Pokémon inside cafés to draw customers in, and companies would pay Niantic (the maker of Pokémon Go) to be featured as locations on the virtual game board as a way of generating foot traffic!
“Niantic’s distinctive accomplishment was to manage gamification as a way to guarantee outcomes for its actual customers: companies participating in the behavioral futures markets that it establishes and hosts. Hanke’s game proved that surveillance capitalism could operate in the real world much as it does in the virtual one, using its unilateral knowledge (scale and scope) to shape your behavior now (action) in order to more accurately predict your behavior later.”
Zuboff's answer is a massive "yes". I think all of her arguments boil down to the following two main concerns:
The source of this anti-democratic and anti-egalitarian nature is twofold.
Zuboff claims that SC operations are "a challenge to the right to the future tense, our ability to imagine, intend, promise, and construct a future." The idea here is that if SC operations can predict what we're going to do, and then use their machinery to eventually modify what we're going to do, then our freedom is gradually being stripped away.