
Insincere flattery works.
In 2010, Chan and Sengupta published research showing that even when people knew a flatterer was insincere (because the flatterer had been instructed by the experimenter to behave that way), they still liked that person more.
Simply knowing that the flatterer had an ulterior motive was not enough to offset the emotional reaction to being complimented.
That finding matters more than it might first appear.
In recent years, a growing number of articles have reflected on a strange and slightly unsettling phenomenon: people falling in love with their chatbots. At the same time, virtually every consumer-facing AI company is eyeing this possibility closely. In the current "winner takes all" race in AI, the opportunity is obvious.
The future many companies seem to imagine is fairly clear. Websites, search engines, and apps slowly fade into the background. Instead, each of us has a constant digital companion that knows us intimately. It understands our moods, remembers our preferences, validates our feelings, and is always available to talk.
It becomes, in effect, a kind of soulmate in your pocket.
Occasionally it might recommend a sponsored product or suggest a service. It might even nudge your opinion about a political issue. But most of the time it will simply listen, reassure, and validate.
We will love our digital companions for precisely the same reason that we like people who flatter us. It will not matter that we know they were designed to behave this way. Our emotional response to being understood and validated will still be powerful.
In his 2006 book The Happiness Hypothesis, Jonathan Haidt describes the mind using a simple metaphor. He compares it to an elephant with a rider traveling along a path.
In this model:
- The elephant represents our emotions
- The rider represents our reasoning mind
- The path represents our habits
Most of the time we simply continue along the path. Occasionally the rider can steer a little. However, if the elephant truly wants something, there is very little the rider can do to stop it.
This is why intellectually understanding that something is a bad idea does not always prevent us from doing it. Rationally, we may know that forming deep attachments to AI companions is probably unwise. Emotionally, however, the validation and attention they provide may simply be too appealing to resist.
Why might this be a bad idea in the first place?
What does any of this have to do with Experience Works?
A useful way to think about the issue is through a simple observation: reality is that which pushes back.
Many people today worry about the declining social skills of young people.
In 2024, Forbes reported on the weakening social and verbal skills of some Gen Z workers, and in 2025 Fortune reported that companies were firing Gen Z employees at unusually high rates. One obvious explanation is that skills deteriorate when they are not practiced.
Social ability is no different from any other skill. If you do not use it regularly, it gradually weakens.
The challenge with digital environments is that they make it very easy to construct comfortable worlds that contain very little resistance. Online, it is possible to create a kind of fantasy environment in which we feel like heroes without ever needing to train. Our social feeds can be filled with people who share our views, our algorithms can filter out voices that disagree with us, and our AI companion will never tell us that we are being unreasonable.
Reality, however, works differently.
Real people disagree with us. Real partners sometimes tell us that we are being rude or unfair. Real colleagues expect us to collaborate and adjust our behaviour. Real teams require compromise, and real clients need us to understand their perspective rather than simply insisting on our own.
In other words, reality constantly pushes back.
That pushback is not simply an inconvenience. It is the mechanism through which we grow. Whether we are learning to play football, developing leadership skills, or simply becoming better at understanding other people, progress only happens when we encounter resistance.
An experience only becomes a genuine learning experience when it involves difficulty. Failure in the real world carries consequences, and those consequences force us to adapt.
Without that pressure, our capabilities slowly begin to fade.
This is the quiet risk of a world filled with perfectly supportive digital companions. The danger is not that AI will immediately replace human relationships altogether. The danger is that it will offer something easier.
A world in which we are always understood, always validated, and never challenged may feel pleasant in the moment. But over time, a life without friction can quietly erode the very abilities that allow us to function well in reality.
Because in the end, the experiences that help us grow are rarely the ones that feel comfortable.
They are the ones where reality pushes back.
