Nicholas Chater On Behavioral Economics

Nicholas Chater is Professor of Behavioural Science at Warwick Business School, University of Warwick, and a former professor at University College London and the University of Oxford. He is co-founder of Decision Technologies Ltd., and serves on the advisory board of the Behavioural Insights Team and the UK Climate Change Committee.

By Aiden Singh, April 12, 2026*

 

Value and Comparison in Decision Making

Aiden Singh: Does the brain compute value when making decisions, or does it rely primarily on comparative processes?

Nicholas Chater: This is a very fundamental question in many theories of judgment and decision making, in neuroeconomics, and elsewhere.

One perspective on how people make decisions is that you look at the consequences. Your decision might involve taking an action, and then asking what is going to happen when that action is completed. You might think, for example, that if I am taking a gamble, I might win or I might lose. Then I work out how likely I am to win and how good that would be, or how likely I am to lose and how bad that would be. It might be bad because I lose the money, or it might be bad because I get criticized or embarrassed. There are lots of reasons why outcomes can be good or bad.

In any case, a key part of this kind of story is usually evaluating just how bad or how good the outcomes are. Another part of the story, which we will not worry about here, is where the probabilities come from. In risky decisions, one common perspective is to think about expected value. But to calculate expected value, you have to have values for the different outcomes.
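The expected-value calculation mentioned here can be made concrete. The following is a minimal sketch of my own, not something from the interview: each outcome gets a probability and a value, and the expected value is their probability-weighted sum. The gamble's numbers are invented for illustration.

```python
# Expected value of a risky option: sum of probability * value
# over its possible outcomes (illustrative sketch).

def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs; probabilities sum to 1."""
    return sum(p * v for p, v in outcomes)

# A hypothetical gamble: 30% chance to win 100, 70% chance to lose 20.
gamble = [(0.3, 100.0), (0.7, -20.0)]
print(expected_value(gamble))  # 0.3*100 - 0.7*20 = 16
```

As the interview notes, the hard part for a psychological theory is not this arithmetic but where the value numbers would come from in the first place.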

This applies not only to risky decisions, but also to more ordinary ones. For example, is it worth spending my money on this meal? I have to evaluate how good the meal is, and how much pleasure I am going to get from it, compared with other things I might get from that same amount of money.

So from a rational economic perspective, whether I am choosing between risky things like gambling or more stable things like meals, I am trying to work out the value of the options available to me. I have divided things into risky and non risky, but of course, in reality, I never quite know what meal I am going to get, or whether I am going to like it. There is always some element of risk, but let us put that aside. Let us just ask whether that general perspective is the right way to think about making decisions.

It is a natural economic approach, but I think there are many reasons to think it is not a natural psychological approach at all. One reason is that it seems people, when making decisions, are often much more comparative, at least in many contexts.

If I have two possibilities for a choice of meal, or a choice of car, or any kind of choice, then often I am not thinking in terms of assigning some absolute measure of value. I am not thinking, let me work out how many units of pleasure this is worth. We do not even have a good scale for that. We do not have a way of talking to ourselves about how much a car is going to give me, or how much pleasure the car is going to give me, versus all the meals I could have instead. I just do not think about them in those terms.

Instead, I think comparatively. I look at one car and another car, and I think this one is smarter, or this one is more efficient, but that one has some other advantage. So I end up with a point by point, dimension by dimension comparison. I might judge that this new option is better on three dimensions and worse on one. I might have a story that says, well, three is bigger than one, so I will choose it. There are lots of good reasons to go for it and not many against it. Or I might have a story where I say, yes, but there is one really important reason, and that one important reason is decisive.

That whole strategy does not involve any evaluation of the object itself. It is really about formulating comparative arguments. I think you can see the same issue with something like deciding what to eat, or whether you want to watch television or read a book. There is a kind of value based approach there as well. Television might give me four units of utility, or whatever they are, but instead I might think qualitatively: I am a bit too tired, or once I get going with the book I really enjoy it. Again, it is much more comparative.

I myself am much more persuaded by the comparative story. I think the brain can get a long way without explicitly figuring out values at all.

Now there are two things to say about that. First, that is controversial. It is a very open question in the field whether people are calculating value at all. Second, it is certainly true that there are neural signals, both at the level of individual neurons and in brain imaging, that are correlated with value. So if you get something really nice or unexpectedly nice, then certain neurons and brain areas seem to jump into action. You could say that is the value signal right there.

The reason to be a little skeptical of that, although it is very important data, is that the value you are getting there is itself very relative. It is not that this thing is a fixed five. It is more that it was better than I thought it would be. So if I get the same stimulus, the same piece of chocolate, in different contexts, and if I really wanted a nice, stable, useful value function, I would want to say that chocolate is always a five, and television is always a three. Then I could compare them and make decisions. But in fact, those readings jump around all over the place.

If I am expecting really nice chocolate and I get average chocolate, I think that is terrible. On the other hand, if I am expecting something quite boring and I get nice chocolate, I think that is great. So the very flexibility of the signal makes it difficult to use it in the way standard economic rational theories would want you to use it.

There is also a more general story about the way perception works that argues against the idea that the brain is a value calculator. If you look at the way we encode information, such as how bright a light is, how loud a sound is, or how heavy something is, it seems that we are extremely bad at making absolute judgments, but pretty good at relative judgments. I know one thing is heavier than another, but I am incredibly bad at saying exactly how heavy something is. The experiments have shown just how bad we are at making absolute judgments.

So if the brain were able to calculate value for things like chocolate, or the pleasure of watching a television program, or buying a car, that would be very unusual compared with everything else, because it cannot figure out how bright a light is or how loud a sound is. So prima facie, I would argue that it is the same story.

Pain is a good example of this. Pain seems like a perception. It is a perception of your state. If you believe the brain is a value calculating machine and is able to use that value in a sensible way, then it ought to be possible to measure pain in a stable fashion. But if pain is a bit like brightness, then we are going to be really hopeless at that. There are quite a lot of experiments where people try to assign numerical values to how severe pain is, or how much money people would pay to avoid pain. I think the argument would be that pain looks very much like one of those other psychophysical quantities, and that it is really unstable.

But that is very much a personal opinion, and as I say, it is a controversial issue in the field.



How the Brain Handles Uncertainty

Aiden Singh: How does the brain deal with uncertainty? 

Nicholas Chater: This is a large topic, but there are at least two very broad classes of theory that are worth considering. The first is the classic rational choice approach. In this view, an action can lead to a range of possible outcomes. Because those outcomes are uncertain, one assigns probabilities to each option, combines those probabilities with the value of each outcome, computes the expected value, and then chooses accordingly. That is the standard way of dealing with uncertainty in choice.

A similar rational choice perspective is often applied to uncertainty about beliefs. In that case, the standard Bayesian approach is used. Suppose I have some prior belief about whether my theory is true, or about who committed a murder, and then I receive new data. In a murder case, for example, I may have several clues, some of which point toward one suspect and others toward another. I then revise my beliefs in light of that evidence, and I do so according to the rules of probability theory. The general idea is that we should identify the mathematically correct way to reason about probabilities, or about probabilities and utilities in the context of choice, and then ask whether people are in fact reasoning in that way.

I want to raise the possibility that people do not actually think in this way, and that the brain deals with uncertainty rather differently, at least to some extent. An alternative approach is to think of the brain as trying to build a probabilistic model of the world, albeit a very crude and local one. Of course, it is far too complicated to construct a complete model of the world, but such a model would not need to give us exact probabilities. Instead, it would allow us to sample possible ways in which the world might be.

This is rather like having a physical model. Suppose I want to know the probability of, say, 55 heads out of 100 coin tosses. One option is to do the calculation directly using probability theory. Another is to take a set of coins, throw them repeatedly, count the number of heads, and repeat the process several times. After doing that, I might conclude that 55 heads does not seem especially unusual, whereas 75 heads would be much more unusual. In other words, I am not computing the probability directly. I am using a model to generate examples and then judging the plausibility of the outcome from those examples. 
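The two routes described here can be sketched in code. This is my own illustration, not the interview's: one function computes the probability of exactly 55 heads in 100 fair tosses directly from the binomial formula, while the other repeatedly "throws the coins" and judges plausibility from the spread of simulated outcomes.

```python
# Exact calculation vs. sampling-based judgment for coin-toss outcomes.
import random
from math import comb

def exact_prob(k, n=100, p=0.5):
    """Binomial probability of exactly k heads in n tosses."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def sampled_counts(n=100, p=0.5, trials=10_000, seed=0):
    """Simulate head counts from repeated runs of n tosses."""
    rng = random.Random(seed)
    return [sum(rng.random() < p for _ in range(n)) for _ in range(trials)]

counts = sampled_counts()
# Deviations of 5 from the mean (like 55 heads) show up often in the samples...
print(sum(abs(c - 50) >= 5 for c in counts) / len(counts))
# ...whereas 75 heads essentially never occurs.
print(sum(c >= 75 for c in counts) / len(counts))
# The exact route agrees: 55 heads is vastly more probable than 75.
print(exact_prob(55), exact_prob(75))
```

The sampler never computes a probability directly; it just generates examples and lets us see that 55 heads looks ordinary while 75 looks extreme, which is the judgment strategy the interview describes.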

I might do the same thing with a normal distribution, for instance by imagining a pinball-like process in which a marble falls left, right, left, right, and so on, until it lands. If I have many such marbles, I can resample repeatedly and obtain different outcomes. Thus, when reasoning about stochastic processes, I can either use a mathematical model of the process or I can use a sampling based approximation.

I think the basic starting point for a more psychological account of how people reason under uncertainty is to assume that they rely on this sampling approach. The brain appears to be trying to build local models of the world. If one is trying to predict what will happen next, one can simply keep sampling from such a model. Consider a coin that appears random, but may in fact be biased toward heads. One might then predict a few heads, followed by tails, followed by a few more heads. If the coin is believed to be biased toward heads, then one will predict more heads than tails, but still some mixture of both. 

By contrast, a purely rational choice account would suggest that once it is clear that the coin is biased toward heads, one should always predict heads, because heads is always more likely. One should therefore say heads every time, since that yields a better set of predictions than any mixture of heads and tails. Yet in practice, and not only in my own case but in the case of subjects in these experiments, people tend to reproduce the variability they observe. This is the well known matching law in psychology. So even when people are being paid to make accurate predictions, they often do not simply reason that the real probability is 0.6 and therefore always guess heads. What they are actually doing seems closer to building a model of the uncertain process and sampling from it.
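The contrast between the two prediction strategies can be simulated. This sketch is my own illustration, not the experiment described: for a coin biased 60 percent toward heads, one strategy always calls the more likely side, while the "matching" strategy reproduces the observed mixture, as people tend to do.

```python
# Probability matching vs. maximizing for a biased coin (illustrative sketch).
import random

def accuracy(strategy, p_heads=0.6, trials=100_000, seed=1):
    """Fraction of correct predictions made by a strategy."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        outcome = "H" if rng.random() < p_heads else "T"
        correct += strategy(rng, p_heads) == outcome
    return correct / trials

maximize = lambda rng, p: "H"                            # always call the likelier side
match = lambda rng, p: "H" if rng.random() < p else "T"  # reproduce the mixture

print(accuracy(maximize))  # close to 0.60
print(accuracy(match))     # close to 0.6*0.6 + 0.4*0.4 = 0.52
```

The simulation makes the rational-choice point in the paragraph above concrete: matching the observed variability is strictly worse as a prediction strategy, yet it is what sampling from a model of the process naturally produces.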

That is one reason to suspect that the brain is not doing probability calculations in the way a mathematician would. Rather, it is building a model and reasoning from that model. This approach is useful for explaining a variety of quirks in human judgment. For example, if one wants to explain why a sequence such as 7 heads and 3 tails often seems more likely than 10 heads, the answer is not that people are applying probability theory correctly. If the coin is biased toward heads, probability theory may well say that 10 heads is more probable, and that is what one should report. But psychologically, 10 heads looks odd, whereas 7 heads and 3 tails looks normal.

One way to understand this is to note that the coin is the same in both cases. If I imagine possible outcomes, or even actually toss the coin a few times, I generally obtain a mixture, with mostly heads and not many tails. So outcomes of that kind, such as 7 heads and 3 tails, resemble what I have seen before. They therefore seem plausible. 

By contrast, if someone shows me 10 heads, I think that I have never seen anything like that, so it seems strange and unlikely. This is the sort of strategy that in statistics is called approximate Bayesian computation. When the exact mathematics is too difficult to work out, one samples from the system and asks whether the target resembles the sampled outcomes. If it does, it is judged to be fairly likely. If it looks very different, it is judged to be unlikely. This is not, in general, a mathematically exact procedure, but it is a useful and inexpensive approximation, especially when sampling is easy.
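The approximate-Bayesian-computation idea can be sketched directly. This is my own illustration with invented parameters: for a coin believed to lean 70 percent toward heads, we sample runs of 10 tosses and ask how often the samples resemble a target head count, using "within one head" as the resemblance criterion.

```python
# Approximate Bayesian computation, crudely: judge plausibility by how
# often model samples resemble the target outcome (illustrative sketch).
import random

def resemblance_rate(target_heads, n=10, p_heads=0.7, samples=20_000, seed=2):
    """Fraction of simulated runs whose head count is within 1 of the target."""
    rng = random.Random(seed)
    close = 0
    for _ in range(samples):
        heads = sum(rng.random() < p_heads for _ in range(n))
        close += abs(heads - target_heads) <= 1
    return close / samples

# 7 heads out of 10 resembles what the model typically produces,
# while 10 straight heads resembles very little of what it produces.
print(resemblance_rate(7))   # common among samples: judged plausible
print(resemblance_rate(10))  # rare among samples: judged unlikely
```

No probability is ever computed exactly; plausibility is read off from how often sampled outcomes look like the target, which is the inexpensive approximation the interview describes.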

I am therefore inclined to think that the brain is a good probabilistic reasoner only insofar as it can sample. If one asks people to estimate probabilities explicitly, they often perform quite poorly, even though they can do so to some degree. I would apply the same general story to decisions. 

Suppose I am wondering whether I should buy a lottery ticket. One account would say that I should calculate the probability that I will win, perhaps one in ten million, and then compare that with the size of the possible payoff. Another account says that I imagine the possibility of winning. I think, perhaps I could win, and that would be wonderful, but I also know that I probably will not. 

In that case, the decision whether to buy the ticket depends more on the process of mental sampling. That sampling may depend on things such as whether I have ever won before, whether I know anyone who has won, or how easily I can imagine winning. In this way, what is more salient and what is less salient can push judgment in different directions. That does not make perfect sense from a purely mathematical point of view, but from the sampling perspective, the thoughts that come most readily to mind are precisely what shape behavior.



Behavioral Insights and Reshaping Government Design

Aiden Singh: Could you explain the origins of the Nudge Unit, what impact it has had on policy, and where you think behavioural insights can and cannot go in government?

Nicholas Chater: A very interesting question is how behavioral science can be applied in practice, and of course the UK has been influential globally here. A major place in which that has happened has been the so-called Nudge Unit, the Behavioural Insights Team, which started in the Cabinet Office in the UK government about 10 years ago, or slightly longer, and then grew and grew. It is now increasingly influential and is an independent body, owned partly privately and partly by the government. I was fairly closely involved in the early days. I was part of their academic advisory board, and that connection has faded with time, but I know they are doing pretty well.

I think it is really interesting to think about the evolution of practical behavioral insights and where it is going in the future. Historically, the big success for the Nudge Unit early on was the famous tax letter study. This is a study where you vary the information you give people about late tax payment. The crucial manipulation is that instead of just saying, “You are late and you should get on with it,” you say something like, “Fully 90 percent of people in your town have submitted, and you have not, so get on with it.” That is an example of social proof. It says, “Everyone else is doing it, and that is clearly the right thing to do, and I have not, so I am feeling really worried now.” That does encourage people to submit, not by a gigantic factor, but by a factor that really makes a difference materially to the Treasury.

What is wonderful about that, of course, is that the manipulation is tiny. You are just making a tiny change in wording. You do not usually see material effects coming from tiny changes in wording in many contexts, at least not until then. And so there is a whole world of variations of letters about all kinds of things you can take from that.

You can think that every time the government touches somebody, every touchpoint between the government and the individual, you should think very carefully about how those touchpoints are working. It is very easy to imagine a kind of perfect, rational government interacting with a perfectly rational citizen, where the details of how the information is conveyed, exactly what the format is, and whether you say, “Your fellow citizens think that,” should be irrelevant. The idea would be that, “I know I have got to pay my taxes, I know I have got to fill this form in, I will get a fine if I do not, so just send me a reminder.” But the fact that all these details matter is, of course, very interesting.

The same issues arise if you are trying to get people to sign up to be blood donors, whether you want them to insure their cars, whether you want them to sign up for training, or whatever it is that, as a government, you wish people to do more or less of. There is a potential way of changing the interface, and the way I like to think about this is that it is a bit like the importance of design in something like the iPhone, or the modern mobile phone, or the modern desktop computer, in terms of human interaction. The research is vast. It did not spring into existence in a flash. It was an incredibly long process and very careful work.

Similarly, with car design, cars are really easy to drive, but that is not an accident. You could try to get people to work directly with all the different parts of their car and control the different levers, like the carburetor and the engine, in some direct way, but that would be hopeless. What you end up with is a pedal which makes you go faster or slower, at least if you have an automatic, then you have a pedal for the brake, and you can have a steering wheel, and that is about it really. All the complexity of the car is hidden away.

The more we can make the interface with government like interfacing with your car, or your iPhone, or your computer, the better. There is an enormous amount of potential there, and I think that has been massively underexploited. So a really interesting thing about what the Nudge Unit is doing is that it is trying to create a kind of ergonomic government. Of course, the same applies to business, and there has been lots of interest in the commercial world for the same reasons.

But there is one caveat I would add. Although this is true, and this is important, we need to be cautious that we do not lure ourselves into a trap of thinking that every problem, however gigantic and however pressing to society, can be solved by nudges alone. There is a real danger, for me personally, of thinking that whatever the problem is, we need to find a nudge to fix it. That can make one less focused on the system-level changes you need.

The example I suppose I am thinking of here is something like trying to get people to use less energy, or to switch to green energy. Certainly, you can do that by giving people better feedback about what other people are using, or what temperature the house is set at, and you can absolutely get people to reduce their energy bills by a few percent. That is not a trivial thing, because across the population there might be many power stations worth of savings. But if you want to decarbonize the entire economy, you are not going to get there with that strategy.

Another example would be very impressive recent studies in Switzerland, where if you default people, the classic nudge, into green energy, but say they can switch out of it if they like, they tend to stick with it even though it is a bit more expensive, because it is greener. That sounds great, and it is easy. The effect is massive, and people are much more likely to stay with green energy if you default them into it. But it does not solve the problem of producing energy. What it is likely to do is just reallocate the energy that is already being produced to the people who have been defaulted into it. If you have not been defaulted in, you are not going to get it, because it has all been taken up by the people who have.

So you can fool yourself into thinking, “We are really going to pull the carbon down a lot, because we can turn people towards green energy and get them to turn the thermostats down slightly.” But in reality, as we see on the UK Climate Change Committee, something I am quite involved with, if you are going to do something as drastic as bringing the major industrialized economies down to net zero by 2050, this is not going to do it. These are going to be small marginal effects, though worth having.

The reason to be cautious about nudges as the whole story is that they are not big enough. And I know the Nudge Unit in the UK, and units around the world, of which there are many now, are increasingly aware of this kind of issue. They are much more focused on upstream questions as well as downstream questions. So rather than saying, “We take the world as we find it, the policy landscape as we find it, energy pricing as we find it, and we are going to try and nudge people in the right direction,” the broader question is: how do we design ways, for example, of implementing a carbon tax? How can we make that be perceived as equitable and popular, and how can we get public support behind it?

A nice illustration of something that has worked very well is the plastic bag tax in many countries, including the UK. It has been phenomenally successful, at least in some ways, certainly in terms of cutting people’s use of single-use plastic bags by 80 or 90 percent. I think that has been fantastically successful, but it was ultimately a piece of legislation that said shops must charge people. But it is also successful for behavioral reasons. The reason it succeeds is that we all basically have been persuaded, for other reasons, that plastic bags are a bad thing, and so every time I pay my 10p, I feel bad. I think, “Oh no, I failed again.” I am embarrassed, the other people in the queue are embarrassed, the cashier is embarrassed for me. I am thinking, “This is a bad thing,” and I am kind of on board with that.

If we as a society felt plastic was fine, the only problem would be that we would feel annoyed by these charges. We would think, “I am just going to pay it. I am not going to be bullied into not using my plastic bags.” There are all kinds of difficulties, such as people buying smaller, stronger reusable bags and then forgetting to reuse them, so there are lots of difficult issues about how much plastic we are actually saving. But at least that is a nudge which works incredibly well, and it is behavioral insight that made a legislative policy effective. So charging people a little bit is much more effective than, for example, giving them a discount if they do not take a bag.

So I think that kind of use of behavioral insights alongside legislative standards and legislative taxation measures is going to be very important, and in general pushing behavioral insights upstream, not just into the implementation but into the basic design of the policy, is, I think, going to be the future.

*Video interview conducted in 2022 and transcribed into article form in 2026.