Preferences are logically prior to incentive structures

On my list of “Opinions I currently hold”, number six reads as follows:

While “improve incentive structures” is a good way to improve outcomes, it has obvious limits if everyone is particularly immoral, so “improve people” is both an inescapable goal and also an almost domain-independent Pareto improvement

A friend recently commented that this was very oblique, so this is a short post intended to elucidate what I mean.

[Image: a generic slide from an economics course on preference relations.]

Let’s consider a classic economic incentive problem: the principal–agent problem. I restate it here as follows. A principal desires that some outcome be achieved (e.g. a shareholder desires that the stock of a company increase in value), so they delegate their authority to an agent, who acts on their behalf (e.g. the shareholder delegates authority to the CEO). But the agent may have incentives that fail to align with the principal’s (e.g. the CEO may have an incentive to increase their own salary past some optimum point, which would not be in the interests of the company but would be in the interests of the CEO), so the principal has to put some set of checks and balances in place to ensure that the agent’s incentives align with the principal’s.

The principal–agent problem is trivially “solvable” if we just define the agent’s preference relation to be the same as the principal’s preference relation: no checks and balances will be needed, since the agent will by construction always act in the interests of the principal. For a principal–agent “problem” that we solve with incentives to arise at all, we must therefore assume that their preference relations don’t align. In other words, preferences are (clearly) logically prior to incentive structures. The principal–agent problem, externality analysis, game theory, and indeed any other way of modelling decisions using preference relations can by definition tell us nothing about how such preferences are formed or what types of preferences exist in the real world, because these frameworks always assume some preference relation and then work from there.
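
To make the triviality claim concrete, here is a minimal sketch in Python. Everything in it (the outcomes, the utility numbers, the function names) is invented for illustration; the point is only that an agent maximises its own preference relation, so substituting the principal’s relation for the agent’s makes the misalignment vanish by construction.

```python
# All outcomes, utilities, and numbers below are invented for illustration.

OUTCOMES = ["grow_company", "raise_own_salary"]

def principal_utility(outcome):
    """The shareholder's preference relation: company value is all that matters."""
    return {"grow_company": 10.0, "raise_own_salary": 0.0}[outcome]

def selfish_agent_utility(outcome):
    """A CEO whose preference relation privately favours a larger salary."""
    return {"grow_company": 2.0, "raise_own_salary": 5.0}[outcome]

def act(utility):
    """An agent simply picks whichever outcome maximises its own utility."""
    return max(OUTCOMES, key=utility)

# Misaligned preference relations: the classic principal-agent problem appears.
assert act(selfish_agent_utility) == "raise_own_salary"

# Define the agent's preference relation to be the principal's:
# the "problem" vanishes by construction, with no checks and balances needed.
assert act(principal_utility) == "grow_company"
```

Nothing about the incentive machinery changed between the two cases; only the preference relation fed into `act` did.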

I do not believe I have said anything interesting or insightful in the above, and yet it is seemingly forgotten in almost all discussions about incentive structures. For instance, large numbers of economic problems become trivially solvable if we assume that we can change people’s preferences to (e.g.) internalise externalities. Political science as a discipline largely falls away if you assume you have some means of ensuring that only people of moral integrity ever run for office. The constant refrain among replication-crisis commentators that “we must reform the incentives” seems to assume that you could never reform people’s preferences to achieve the same effect. To belabour the point: all of these discussions take preferences as given and then discuss how to fix incentive structures to ensure outcomes, but it’s very rare that anyone gives a reason to take preferences as given and not to consider the possibility that we could get people to have different preferences.
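
As a toy version of the externality point (the cost functions below are mine, invented for this sketch, not drawn from any particular model): the moment the pollution cost enters the producer’s own utility, the chosen output falls to the lower, socially preferable level with no tax, quota, or incentive scheme in sight.

```python
# Invented toy numbers: a factory choosing an output level q.

QUANTITIES = range(11)

def private_profit(q):
    """Profit as the producer sees it, ignoring pollution: 10q - q^2."""
    return 10 * q - q ** 2

def pollution_cost(q):
    """Harm borne by everyone else: 4q."""
    return 4 * q

def internalised_utility(q):
    """A 'reformed' preference relation that counts the harm as its own."""
    return private_profit(q) - pollution_cost(q)

selfish_choice = max(QUANTITIES, key=private_profit)         # q = 5
reformed_choice = max(QUANTITIES, key=internalised_utility)  # q = 3

print(selfish_choice, reformed_choice)  # 5 3: no Pigouvian tax required
```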

The reason this silence on preference formation is so bizarre is that preferences clearly are malleable and varied: literally every will is a potential principal–agent problem, and yet many executors execute wills to the letter because they want to uphold the wishes of the deceased! And if we wanted to study how and why these preference relations develop, and what we might want to do to ensure that people’s preferences are a bit less dodgy, there are whole sub-disciplines that study preference formation (in psychology, sociology, anthropology, philosophy, etc.)! If we want to say “it is immoral to try to influence people’s preferences because [insert boringly stupid Rawlsian reason here]”, then we should just say that, not pretend that the problem is far harder to solve than it actually is because we’ve restricted ourselves to assuming that everyone’s preference relation is purely self-interested and that we just have to fix incentives to counter that.
