"Hooked: How to Build Habit-Forming Products" is a how-to guide for how to help smartphones unlock their true potential as Skinner boxes, and it does a good job at it. It's a book I've wanted to read for a while, and my recent thinking on addiction made it seem like a good time to do so. It also feels like a good book for my first ever book review.
When reading business books like this, the biggest risk factor is not how dense they are but how sparse in new insight they turn out to be. Zero to One by Peter Thiel, probably the most popular book for new startup founders, is already quite sparse and very philosophical (a blog post summarizes it quite well).
Hooked was distinctive in that, beyond what felt like superficial discussions of its own morality (you can probably guess their contents yourself: the knowledge can be used for bad and for good, etc.), it is a very down-to-earth, practical book, with Nir Eyal leading the reader through the components of what he calls the "Hook Model", interspersed with many practical examples from existing businesses.
The most striking aspect of the book is that, while it was written in 2013, practically every brand Nir Eyal chooses as a pedagogical example of some aspect of his "Hook Model" is more successful now than when he wrote the book. The same goes for every individual feature he showcases. He gushes over the genius of the "Infinite Scroll" feature, because it makes user action easier (by entirely removing the action of navigating to the next page), and over its inclusion in Pinterest three years before it was added to Instagram and Twitter. It makes the importance of his concepts much more believable.
Without further ado, this is Nir Eyal's Hook Model. If you've read Atomic Habits, you might notice some similarities.
The Hook Model
Trigger
A hook starts with a trigger, which is external when the user first starts using the product: a notification, an ad, etc. After the user becomes... habituated... the trigger is internal. Nir likes the example of a girl feeling the fear that a moment is going to be lost forever, and that fear triggering her to habitually take out her phone and take a picture.
It's notable that these internal triggers are almost always negative — for instance, a sudden need for validation making you open social media, or a need to escape making you open Netflix. Nir frames this as the user learning, on a subconscious level, that your product is the best solution whenever this problem is encountered; but that learning takes time, which is why you must rely on external triggers initially.
Both to make the trigger effective initially and to help the user connect it to their problem faster, the ideal is to time the external trigger so that it intersects the internal one. Nir's example of this is Any.Do (a to-do list app) sending you a notification to add items immediately after a meeting on your calendar ends, intersecting with the internal trigger: your fear that you will forget a task that was brought up.
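To make the timing idea concrete, here is a toy sketch of that kind of trigger scheduling. It is purely my own illustration, not anything from the book or from Any.Do's actual implementation; it just scans a list of calendar events and fires a (printed) notification for any meeting that ended in the last few minutes.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class CalendarEvent:
    title: str
    start: datetime
    end: datetime

def due_notifications(events: list[CalendarEvent], now: datetime,
                      window: timedelta = timedelta(minutes=5)) -> list[str]:
    """External triggers timed to the internal one: nudge the user to capture
    to-dos in the short window right after a meeting ends, when the fear of
    forgetting something is strongest."""
    return [
        f"'{event.title}' just ended - anything to add to your to-do list?"
        for event in events
        if event.end <= now <= event.end + window
    ]

if __name__ == "__main__":
    now = datetime(2024, 1, 15, 11, 2)
    events = [CalendarEvent("Weekly sync", datetime(2024, 1, 15, 10, 0),
                            datetime(2024, 1, 15, 11, 0))]
    for message in due_notifications(events, now):
        print(message)  # in a real app this would be a push notification
```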
And of course, Nir encourages frequency. Nir implies you can't create a habit at all if your trigger happens less than weekly, and that even the least beneficial activity can become a habit if you simply make your user perform it frequently enough. His rule-of-thumb target for social apps is that your habitual users need to use the app multiple times per day. I did get a bit upset reading this - I wouldn't be surprised if part of the reason I have to disable notifications on every single app on my phone is this book, though realistically it is just Moloch.
Action
The next element of the model is the craving — oh wait, that was Atomic Habits. Nir actually skips it, I assume because it's self-evident that there is a craving between trigger and response. Anyway, the next element of the Hook Model is the Action. The user interacts with your product because of the external or internal trigger from before. Here the ideal is to make the action as easy as possible, with the platonic ideal of an action being no action at all.
This is why Nir loves the infinite scroll so much - there's no easier navigation interface than just skipping the action entirely, and it explains why it has become so universal (as a tool to get people addicted to social media).
Nir discusses how, if you can't remove an action, there are still many ways to make it easier: making it faster to perform, requiring fewer brain cycles from the user (make the button big, bright, and singular), or making it more routine (there's a certain type of "I agree" checkbox that we all just click these days without even thinking about it).
This is the most clearly universally applicable part of the Hook Model: not every product has a reward to vary or a way to trigger the user to use it frequently, but every product can probably be made simpler.
Variable Reward
A discussion of variable reward is the only thing I was really expecting from this book, and it did deliver. You hear constantly about how Loot Boxes are infinitely more addictive than any previous incarnation of microtransactions, but why is that the case? Shouldn't behaviour be reinforced by the average reward? How can some randomness make a net-negative behaviour feel rewarding?
As always, Nir is most focused on convincing you that his model is correct (through studies) and showing you how to apply it (through many detailed examples), and is less interested in the why of things (in his defense, probably a question that is harder to answer and less interesting to most of the book's audience).
He does give some directional speculations - his best guess at an explanation is some degenerate form of curiosity. When you get a random reward, in addition to the reward itself, your brain also rewards you for the knowledge you are gaining. You didn't know what the reward was going to be, and now you know. You got to sample from a distribution.
One can see how this could extend to slot machines - each pull of the lever is objectively net negative in a vacuum, but each pull is also a sample from a distribution, which the brain counts as a positive in itself. The small expected loss of money is counteracted by that intrigue, and you end up in a situation where the brain thinks releasing dopamine is an appropriate response to witnessing the lever.
This wasn't my guess, and I'm not sure I'm convinced — I assumed that it's because the brain is bad at modeling the distribution, and is actually not sure it really is slightly net negative, as opposed to slightly net positive. On the other hand it's well known that habitual gamblers are typically fully aware that the house always wins, which makes it seem that at least on a conscious level they do know the distribution — and there's no reason to think my intuition on this is better than Eyal's. Mentally I'm keeping this problem open - I would read a book on this question specifically.
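To put some numbers on the "net negative in expectation" part, here is a toy simulation with entirely made-up payout figures (my own illustration, not anything from the book): a machine that loses 0.2 credits per pull on average, yet hands out occasional large rewards.

```python
import random

# Toy slot machine (hypothetical numbers): each pull costs 1 credit and pays out
# 10 credits with probability 0.08, so the expected value per pull is
# 0.08 * 10 - 1 = -0.2 credits. Every pull is a loss on average, but the outcome
# of any single pull is highly variable.
COST_PER_PULL = 1
PAYOUT = 10
WIN_PROBABILITY = 0.08

def pull() -> int:
    """One pull: a sample from the reward distribution, minus the fixed cost."""
    reward = PAYOUT if random.random() < WIN_PROBABILITY else 0
    return reward - COST_PER_PULL

if __name__ == "__main__":
    pulls = 100_000
    total = sum(pull() for _ in range(pulls))
    print(f"Average outcome per pull over {pulls:,} pulls: {total / pulls:+.3f} credits")
    # Typically prints something close to -0.200: mostly -1s with occasional +9s,
    # a wide distribution around a reliably negative mean.
```

The average is boringly, reliably negative; what varies wildly is the individual sample, and on Eyal's account that sampling is the part the brain values.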
He also gives examples of how variable rewards can be added naturally — for instance, users who post on a blogging platform get a variable reward from other users in the form of the number of likes they receive. The uncertainty is highly effective at turning them into habitual users.
Investment
The final stage of the Hook Model is Investment, in the sense of the user investing in your product through time, content, skill, money, or data. What does this mean? Imagine you put 1,000 hours into the video game League of Legends. You're pretty damn good at it. Someone tells you that actually Dota 2 is better. Do you switch? Probably not. You've invested so much time into getting better at it that you're not going to consider switching to something that is practically the same. This is doubly true if you've also spent $100 on cosmetics and built a "reputation" in the LoL community.
I keep seesawing on whether this is the most or least icky aspect of the Hook Model. On the one hand, when a user invests in the product, it really does make the product better for them. Some level of investment is obviously mandatory — a user can't spend time in your product without becoming proficient (and as such invested) in it. On the other hand, the min-maxed version of this is user lock-in — making migration as difficult as possible. Make it hard to extract your data, make the interface different from that of competitors, make it a closed ecosystem.
I also feel less confident that tech companies are maximising Investment as aggressively as they are maximising variable rewards, triggers, or (the lack of) actions. Many products I've seen take off recently are explicitly about being cross-compatible and easy to abandon, which may be a very positive sign that we are wising up. For example, Obsidian is rising meteorically, and it's remarkable how simple it is to leave. The same goes for the VSCode IDE, which has recently been killed by Cursor with not much pain, since they are fundamentally identical.
Some thoughts
This is a very convincing book, in the sense that it makes a very strong argument that one can view a massive portion of recent technological advancement through the lens of the accelerating weaponization of addiction, that this weaponization is surprisingly easy to implement in practice, and that it represents a race to an exceptionally deep bottom.
Nir is optimistic that humans will build metaphorical antibodies to handle these developments, but I'm not so sure (I'm hopeful for something closer to literal antibodies, such as a version of Ozempic with stronger anti-addictive effects). His newer book, "Indistractable: How to Control Your Attention and Choose Your Life", seems to be his way to move this process along, and I'm excited to read it at some point (this pattern of helping to create addictive technologies, and then pivoting to trying to fight them, seems common — the creator of the infinite scroll went through the same career transformation).
Personally, I really do believe that this is a problem we as a society were always doomed to bump up against, and I very much hope it won't end up being our great filter, as has previously been posited.
On a lighter note, which book should I review next? Comment and I’ll pick randomly assuming it’s semi-reasonable.