I recently read an article in The Atlantic by Tristan Harris, a former Product Manager at Google who studies the ethics of how the design of technology influences people’s psychology and behavior. The piece, titled “The Binge Breaker,” covers similar ground to his previous piece “How Technology Hijacks People’s Minds—from a Magician and Google’s Design Ethicist.”
Harris favors “technology designed to enhance our humanity over additional screen time. Instead of a ‘time spent’ economy where apps and websites compete for how much time they take from people’s lives, [he] hopes to re-structure design so apps and websites compete to help us live by our values and spend time well.” (For more see “Screen Time Statistics: Average Screen Time in US vs. the rest of the world.”)
Harris’ basic thesis is that “our collective tech addiction” results more from the technology itself than from “personal failings, like weak willpower.” Our smartphones, tablets, and computers seize our brains and control us, hence Harris’ call for a “Hippocratic oath” that implores software designers not to exploit “psychological vulnerabilities.” Harris and his colleague Joe Edelman compare “the tech industry to Big Tobacco before the link between cigarettes and cancer was established: keen to give customers more of what they want, yet simultaneously inflicting collateral damage on their lives.”
[I think this analogy is weak. The tobacco industry made a well-documented effort to make their physically deadly products more addictive, while there is no compelling evidence of a similarly sinister plot by software companies, nor are their products deadly. Tobacco will literally kill you while your smartphone will not.]
Harris began developing the social-scientific basis for these insights as a member of the Stanford Persuasive Technology Lab. “Run by the experimental psychologist B. J. Fogg, the lab has earned a cult-like following among entrepreneurs hoping to master Fogg’s principles of ‘behavior design’—a euphemism for what sometimes amounts to building software that nudges us toward the habits a company seeks to instill.” As a result:
Harris learned that the most-successful sites and apps hook us by tapping into deep-seated human needs … [and] He came to conceive of them as ‘hijacking techniques’—the digital version of pumping sugar, salt, and fat into junk food in order to induce bingeing … McDonald’s hooks us by appealing to our bodies’ craving for certain flavors; Facebook, Instagram, and Twitter hook us by delivering what psychologists call “variable rewards.” Messages, photos, and “likes” appear on no set schedule, so we check for them compulsively, never sure when we’ll receive that dopamine-activating prize.
[Note, though, that the fact that we may become addicted to technology, and to many other things too, doesn’t mean that someone is intentionally addicting us to that thing. For example, you may become addicted to your gym or to jogging, but that doesn’t mean that the gym or the running-shoe store has nefarious intentions.]
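The “variable rewards” mechanism quoted above can be made concrete with a small simulation. This is my own illustrative sketch, not anything from Harris or the psychology literature: it models checking a feed as a variable-ratio schedule, where each check pays off with some fixed probability, so the gaps between rewards vary unpredictably (the `simulate_checks` function and the probability value are assumptions for illustration).

```python
import random

def simulate_checks(n_checks, p_reward, seed=0):
    """Variable-ratio schedule: each check of the feed pays off with
    probability p_reward, independently of the others, so the player
    can never predict when the next reward will arrive."""
    rng = random.Random(seed)
    rewards = [rng.random() < p_reward for _ in range(n_checks)]
    # Collect the gaps (number of checks) between rewarded checks.
    # On a fixed schedule these would all be equal; here they vary.
    gaps = []
    last = -1
    for i, hit in enumerate(rewards):
        if hit:
            gaps.append(i - last)
            last = i
    return gaps
```

Running this with, say, 200 checks and a 30% payoff chance produces gaps of wildly different lengths, which is exactly the unpredictability that makes compulsive checking feel rational in the moment.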
Harris worked on Gmail’s Inbox app and is “quick to note that while he was there, it was never an explicit goal to increase time spent on Gmail.” In fact,
His team dedicated months to fine-tuning the aesthetics of the Gmail app with the aim of building a more ‘delightful’ email experience. But to him that missed the bigger picture: Instead of trying to improve email, why not ask how email could improve our lives—or, for that matter, whether each design decision was making our lives worse?
[This is an honorable view, but it is extraordinarily idealistic. First of all, improving email does improve our lives, at least minimally, as anyone in the past who waited weeks or months for correspondence would surely attest. If the program works, allows us to communicate with our friends, etc., then it makes our lives a bit better. Of course, email doesn’t directly help us obtain beauty, truth, goodness, or world peace, if that’s your goal, but that seems to be a lot to ask of an email program! Perhaps then it is a case of lowering our expectations of what a technology company, or any business, is supposed to do. Grocery stores make our lives go better, even if grocers are mostly concerned with profit. I’m not generally a fan of Smith’s “invisible hand,” but sometimes the idea provides insight. Furthermore, if Google or any company tried to improve people’s lives without showing a profit, they would soon go out of business. The only way to ultimately improve the world is to effect change in the world in which we live, not in some idealistic one that doesn’t exist.]
Harris makes a great point when he notes that “Never before in history have the decisions of a handful of designers (mostly men, white, living in SF, aged 25–35) working at 3 companies”—Google, Apple, and Facebook—“had so much impact on how millions of people around the world spend their attention … We should feel an enormous responsibility to get this right.”
Google responded to Harris’ concerns. He met with CEO Larry Page, the company organized internal Q&A sessions, and he was given a job researching ways that Google could adopt ethical design. But he says he came up against “inertia”: “Product roadmaps had to be followed, and fixing tools that were obviously broken took precedence over systematically rethinking services.” Despite these problems, “he justified his decision to work there with the logic that since Google controls three interfaces through which millions engage with technology—Gmail, Android, and Chrome—the company was the ‘first line of defense.’ Getting Google to rethink those products, as he’d attempted to do, had the potential to transform our online experience.”
[This is one of the most insightful things that Harris says. Again, the only way to change the world is to begin with the world you find yourself in, for you really can’t begin in any other place. I agree with what Erich Fromm taught me long ago, that we should be measured by what we are, not what we have. But, on the other hand, if we have nothing, we have nothing to give.]
Harris’ hope is that:
Rather than dismantling the entire attention economy … companies will … create a healthier alternative to the current diet of tech junk food … As with organic vegetables, it’s possible that the first generation of Time Well Spent software might be available at a premium price, to make up for lost advertising dollars. “Would you pay $7 a month for a version of Facebook that was built entirely to empower you to live your life?,” Harris says. “I think a lot of people would pay for that.” Like splurging on grass-fed beef, paying for services that are available for free and disconnecting for days (even hours) at a time are luxuries that few but the reasonably well-off can afford. I asked Harris whether this risked stratifying tech consumption, such that the privileged escape the mental hijacking and everyone else remains subjected to it. “It creates a new inequality. It does,” Harris admitted. But he countered that if his movement gains steam, broader change could occur, much in the way Walmart now stocks organic produce. Even Harris admits that often when your phone flashes with a new text message it is hard to resist. It is hard to feel like you are in control of the process.
[There is much to say here. First of all, there are many places to spend time well on the internet. I’d like to think that some readers of this blog find something substantive here. I also believe that “mental hijacking” is a loaded term. It implies intent on the part of the hijacker that may not be present. Yes, Facebook, or something much worse like the sewer of alt-right politics, might hijack our minds, but religious belief, football on TV, reading, stamp collecting, or even compulsive meditating could be construed as hijacking our minds. In the end, we may have to respect individual autonomy. A few prefer to read my summaries of the great philosophers; others prefer reading about the latest Hollywood gossip.]
Concluding Reflections – I begin with a disclaimer. I know almost nothing about software product design. But I did teach philosophical issues in computer science for many years in the computer science department at UT-Austin, and I have an abiding interest in the philosophy of technology. So let me say a few things.
All technologies have benefits and costs. Air conditioning makes summer endurable, but it has the potential to release hydrofluorocarbons into the air. Splitting the atom unleashes great power, but that power can be used for good or ill. Robots put people out of work, but give people potentially more time to do what they like to do. On balance, I find email a great thing, and in general, I think technology, which is applied science, has been the primary force for improving the lives of human beings. So my prejudice is to withhold critique of new technology. Nonetheless, the purpose of technology should be to improve our lives, not make us miserable. Obviously.
Finally, as for young people considering careers, if you want to make a difference in the world I can think of no better place than at any of the world’s high-tech companies. They have the wealth, power, and influence to change the world for the better. Whether they do that or not is up to the people who work there. So if you want to change the world, join in the battle. But whatever you do, given the world as it is, you must take care of yourself. For if you don’t do that, you will not be able to care for anything else either. Good luck.
7 thoughts on “Summary of “How Technology Hijacks People’s Minds — from a Magician and Google’s Design Ethicist””
I can offer a narrow perspective on this issue. I come from the world of computer game design. I begin by noting the addictive power of games. This software can be addictive even though the actual challenge is inane: solving puzzles, navigating spaces, managing limited resources, and hand-eye coordination. Nobody in the world would proudly declare, “I can navigate mazes like a clairvoyant rat!” Nobody would want their tombstone to read “A great solver of obscure puzzles”.
The two factors that make games so addictive, despite their puerile content, are interactivity and a finely tuned learning curve.
Interactivity is intrinsically fascinating. What little boy would want to look at pictures of butterflies when he can catch one and rip its wings off to see what happens? You can watch a hundred learned lectures on the Internet, and learn more in an hour of conversation with a professor. Are not the most powerful experiences in our lives the interactions we had with other people?
This is how a juvenile game with absurd premises, a cliched hero, monsters with teeth that don’t fit inside their mouths, and female prizes with mechanically impossible breasts can make more money than any novel or movie. Dumb as it may be, the power of its interactivity more than compensates for its banal content. The immense artistry that goes into a great novel, play, or movie cannot hold a candle to even childish interactivity.
The second factor at work here is the finely tuned learning curve. The “learning curve” is the sequence of increasingly difficult tasks that challenge the player through the course of the game. The game starts off clean, simple, and easy to master. As you progress to new levels, the challenges grow more complex, but each step in the sequence is tiny. The player is then encouraged to play “just one more game” in the reasonable expectation that this time he’ll get it right.
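The tuning Chris describes can be sketched in a toy model. This is my own illustration, not his actual design practice: difficulty rises by a tiny fixed fraction per level while the player's skill grows almost as fast, so the chance of winning hovers just below even and the next attempt always feels winnable (the exponential difficulty function and all the parameter values here are assumptions).

```python
def difficulty(level, base=1.0, step=0.08):
    """Each level is only a small fixed fraction harder than the last,
    so no single step feels like a wall."""
    return base * (1 + step) ** level

def win_probability(skill, level):
    """Toy model: chance of winning depends on how far the player's
    skill sits above or below the level's difficulty, clamped to [0, 1]."""
    d = difficulty(level)
    return max(0.0, min(1.0, 0.5 + (skill - d) / d))

def play_session(levels=10, skill=1.0, skill_gain=0.07):
    """Track the win probability across a session. Skill grows slightly
    with each level played (practice), so the probability stays pinned
    near 'just one more try' territory instead of collapsing."""
    history = []
    for level in range(levels):
        history.append(round(win_probability(skill, level), 2))
        skill *= 1 + skill_gain
    return history
```

Because skill grows at 7% per level while difficulty grows at 8%, the win probability drifts down only very slowly from 0.5, which is the point: the player is perpetually almost succeeding.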
I left the games industry because it doesn’t make the world any better. Indeed, the millions of hours that people expend playing games do nothing to make them smarter or better people. Many years ago, I mastered some of the early games. Now, thirty years later, what have I gained from those victories? Absolutely nothing.
Chris – Thanks so much for your unique insights here, which I think are mostly spot on. I actually wrote a piece years ago, “The Impact of Video Games on College Students,” Communications of the ACM, March 2004, reprinted as “Online Game Playing Can Be Addictive,” in Addiction, Opposing Viewpoints Series (Farmington Hills, MI: Greenhaven Press, 2009), in which I discussed the problem of so many college students becoming addicted to games. So again your points are well taken. As usual, your comments deserve an in-depth response. Perhaps some of my readers might join in. Thanks again for your thoughts. JGM
“Tobacco will literally kill you while your smartphone will not.” Get your broad point there, Doc, and it seems accurate to say that (cellphone “brain tumor risk” notwithstanding, as the data so far suggests), unless one happens to be driving while using it.
In my experience (of my own observed impairment and that of other drivers on their phones I’ve encountered on the road) it can indeed be risky, if not always lethal (you know, like tobacco), as more than a few documented cases of “phone-impaired” driving suggest.
I suppose almost anything can kill you—you can even drink too much water—but cell phones and video games aren’t generally as deadly as tobacco.
Thank you, Mr. Messerly. The responses so far are interesting. I am neither technophile nor technophobe. Mostly, I just don’t participate. Some folks just won’t understand this stance… and they make it difficult to live without, so far as they can. I did not need a thousand-dollar bicycle either. So, I’m restoring a dumpster junker—because I CAN.
I’ve never played a modern video game, have a dumb phone, and didn’t know what my computer science students were talking about when they discussed the web. I do like the easy access to information though. Saves all those trips to libraries that I used to live in. The problem though is all the disinformation out there.
“Robots… give people potentially more time to do what they like to do”
And the plaint that robots are merely circuitry doesn’t hold. The human nervous system is circuitry, as well—organic circuitry wired to the killer hominid.
But it is true that technologies are dehumanizing; that was and is the trajectory: to escape our killer-ape origins.