Bouke Klein Teeselink on AI, Automation, & the Future of Work
Bouke Klein Teeselink is a Lecturer in Economics in the Department of Political Economy at King’s College London. He is also affiliated with the King’s Institute for Artificial Intelligence and the AI Objectives Institute.
By Aiden Singh, May 7, 2026
The Impact of AI on the UK Labor Market
Aiden Singh: You are currently working on research that shows how large language models (LLMs) have affected the UK labor market. And you’ve looked at how these effects have differed between firms and also between occupations. What did you find?
Bouke Klein Teeselink: Yes. Let me first very briefly explain how I am finding what I am finding. Indeed, I am looking at both occupations and firms. The logic of the analysis is to compare those that are more exposed to AI with those that are less exposed to AI. Of course, that immediately raises the question of how you measure AI exposure.
Here I am using data from others who classify, for each occupation and for all the tasks in that occupation, which of those tasks can be done by AI in, let us say, 50 percent of the time of the people who are currently doing them. That gives you, for each occupation, the fraction of tasks that can be done by AI, which is your occupation-level AI exposure score.
Then, to get to the firm level, I use the universe of people working in those firms, and I look at the average exposure scores of all the occupations of those people. That then gives you your firm-level exposure score. So this gives us high-exposure versus less-exposed firms, and high- versus low-exposure occupations.
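To make that construction concrete, here is a minimal sketch of how such exposure scores could be computed. The occupations, task lists, time ratios, and the 50 percent threshold below are illustrative placeholders, not the data or classification used in the paper.

```python
# Illustrative sketch of occupation- and firm-level AI exposure scores.
# All names, task lists, and time ratios are made-up placeholders,
# not the data or classification used in the paper.

# For each occupation: (task, ai_time_ratio), where ai_time_ratio is the
# time AI needs for the task relative to the people currently doing it.
occupation_tasks = {
    "financial_analyst": [("build model", 0.4), ("client meeting", 1.5), ("write report", 0.3)],
    "warehouse_picker":  [("pick items", 1.8), ("log inventory", 0.4)],
}

def occupation_exposure(tasks, threshold=0.5):
    """Fraction of tasks AI can do in at most `threshold` of the human time."""
    exposed = sum(1 for _, ratio in tasks if ratio <= threshold)
    return exposed / len(tasks)

occupation_score = {occ: occupation_exposure(tasks) for occ, tasks in occupation_tasks.items()}

# Firm-level exposure: average the occupation scores over everyone the firm employs.
firm_workforce = {
    "FirmA": ["financial_analyst", "financial_analyst", "warehouse_picker"],
    "FirmB": ["warehouse_picker", "warehouse_picker"],
}

firm_score = {
    firm: sum(occupation_score[occ] for occ in staff) / len(staff)
    for firm, staff in firm_workforce.items()
}

print(occupation_score)  # financial_analyst ~0.67, warehouse_picker 0.5
print(firm_score)        # FirmA ~0.61, FirmB 0.5
```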
What I then do is compare these two groups over time, both before and after the introduction of ChatGPT. If we look at both firms and occupations, we can see that those high- and low-exposure firms and occupations behave very similarly before ChatGPT. They have similar numbers of people working there, similar numbers of new jobs opening up, similar salaries, and so on.
We see that these trends remain very stable over time until the introduction of ChatGPT. After that, we see a gradual divergence, where, in the UK, firms and occupations that are more exposed by the definitions I just gave have seen a decrease in the number of new people being hired, a decrease in salaries, and, in particular, a decrease in the number of junior people employed.
In other words, we see the first evidence emerging that highly-exposed occupations and highly-exposed firms have seen a labor market slowdown after the introduction of ChatGPT.
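In regression terms, this kind of before-and-after comparison of more- and less-exposed groups is usually captured by a difference-in-differences specification along the following lines. The paper's exact specification is not given in the interview, so treat this as a stylized version:

```latex
Y_{jt} = \alpha_j + \gamma_t + \beta \,\bigl(\text{Exposure}_j \times \text{Post}_t\bigr) + \varepsilon_{jt}
```

Here j indexes firms or occupations, t indexes time, Exposure_j is the exposure score described above, Post_t switches on after the release of ChatGPT, alpha_j and gamma_t are unit and time fixed effects, and beta captures how outcomes such as hiring, salaries, or junior headcount move for more-exposed units relative to less-exposed ones after that point.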
What is also interesting is that this is not constant across the board. Most of these effects are highly concentrated in high-wage industries and high-wage occupations. Within high-wage industries and occupations, we see a clear decline for exposed groups relative to less-exposed groups. In low-wage industries and occupations, by contrast, high-exposure and low-exposure firms and occupations behave very similarly even after the introduction of ChatGPT. So that is the broad finding of that paper.
Aiden Singh: You mentioned briefly entry-level roles being affected. There is a lot of concern, for example in the financial sector, about entry-level analysts being replaced by AI. Your research seems to show that this is actually a legitimate concern.
Bouke Klein Teeselink: Yes. That is what my research is showing. That is what similar research in the US is showing. There is a pretty famous “canaries in the coal mine” paper that is analytically very similar to my paper, but in the US rather than the UK. The findings are super consistent with each other, where they also find a slowdown in hiring and employment, especially for younger and lower-seniority people.
This is a very big concern because, traditionally, this is how we develop talent. It is not just that we want to give young people opportunities, although young people not being able to find jobs is a tragedy in and of itself. We also want a labor force that has a certain level of expertise and certain capabilities.
The ability to work with AI typically requires a degree of expertise in the domain. The way we traditionally develop that is through this sort of exchange where young people do the more tedious grunt work for these companies, they do not get paid all that much for it, but in return they gain that expertise implicitly through doing the job. Over time, they grow into the role, their salaries increase, and they become experts.
If AI does those initial tasks, that pipeline dries up. So this is not only a problem for young people right now; it is also a problem for society in the future. I think that is one of the main policy problems we need to grapple with, and we do not really have a clear solution to it as of now.
Aiden Singh: One of the big debates around AI is whether it will wipe out jobs and cause massive unemployment.
But many economists who study technological change tend to argue that, while new technologies often do destroy some jobs, they also create new jobs.
You’ve looked at how automation can reduce the need for labor but also potentially increase the pool of qualified workers for specific jobs. What did you find?
Bouke Klein Teeselink: The paper you are alluding to is one I put out together with Dan Carey. This is part of my work with the AI Objectives Institute, where I am the chief economist.
What we do in that paper is look at which tasks in a job are exposed to AI and what that tells us about labor market trajectories. But as it turns out - and I think there is good reason to believe this is true - it is not just the number of tasks that are automatable that matters. It also matters a lot which tasks are automatable, and in particular whether it is the high-expertise or the low-expertise tasks in the job that can be done by AI.
If it is the high-expertise tasks, the job becomes easier because the hard parts disappear. If the job becomes easier, more people can do it. And we should also expect a decrease in salaries, because the expertise premium gets eroded.
If, however, the easy parts of the job get automated, the job actually becomes harder. Fewer people would be able to do the job, because automating the easy tasks raises the expertise requirements. That reduces the number of qualified people for those kinds of jobs and potentially increases salaries.
That is exactly what we are seeing: we’re finding early evidence that, for jobs where the exposure is in the low-expertise tasks, salaries increase compared with jobs that have the exact same number of exposed tasks but where it is the high-expertise tasks that are exposed.
I think that is a really interesting and important margin of adjustment we need to think about when considering automation: it is not just how many tasks, but also which tasks.
Another element that I and some others have written about, although this is still very much work in progress, is that one of the most important questions when thinking about the future of work is the elasticity of demand.
The basic story is this. Imagine I am a freelance researcher and I am selling research products. I am selling, say, ten units of research a year, and each unit of research is $10,000. So I earn $100,000 a year from my research.
Now suppose we automate half of my job away, so I only need half the time to produce a unit of research. What will happen, of course, is that the price will drop. The price of a unit of research will probably drop by half, because there is competition and researchers will undercut each other. The automation logic would then say that is where it ends: all of the researchers are automated and they are earning a lot less.
But that is not where the story ends. If the price of research drops by half, the demand for research goes up. The degree to which it goes up is the elasticity of demand. If demand is elastic, the demand for research will more than double when the price halves.
If demand for research more than doubles when the price halves, I will actually be selling more research. Either I get paid more to do the same amount of research, or we need more researchers, because half the work is automated but demand for research more than doubles. So we would actually have more people working in research because of automation.
It all depends on the elasticity of demand. We actually do not know that much about which jobs have a high elasticity of demand. This is one of the big open questions in economics right now, and I think we should be devoting many more resources to figuring out where we should expect demand to increase when prices drop because of automation.
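To put rough numbers on the researcher example above, here is a small sketch under a constant-elasticity demand curve. The elasticity value, the hours figure, and the assumption that the price falls one-for-one with the time saved are made-up illustrations, not estimates from any study.

```python
# Illustrative sketch of the freelance-researcher example with a
# constant-elasticity demand curve. All numbers are made up for illustration.

price_0, quantity_0 = 10_000, 10      # $10,000 per unit of research, 10 units a year
hours_per_unit_0 = 200                # hypothetical time cost per unit before automation

automation_share = 0.5                # half of the work per unit is automated away
price_1 = price_0 * (1 - automation_share)               # competition pushes price down with cost
hours_per_unit_1 = hours_per_unit_0 * (1 - automation_share)

elasticity = 1.5                      # elastic demand (> 1): quantity rises more than price falls
quantity_1 = quantity_0 * (price_0 / price_1) ** elasticity

revenue_0, revenue_1 = price_0 * quantity_0, price_1 * quantity_1
hours_0, hours_1 = quantity_0 * hours_per_unit_0, quantity_1 * hours_per_unit_1

print(f"units sold:     {quantity_0} -> {quantity_1:.1f}")    # ~28, i.e. more than doubles
print(f"annual revenue: {revenue_0:,} -> {revenue_1:,.0f}")   # ~141,000, up despite the price cut
print(f"hours of research demanded: {hours_0:,} -> {hours_1:,.0f}")  # ~2,800 vs 2,000
```

With an elasticity above one, quantity rises by more than the price falls, so both revenue per researcher and total hours of research work demanded go up in this sketch; with an elasticity below one, the same arithmetic would show them falling.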
There are a few examples I always use here. The obvious historical example is the car industry. When we got assembly lines, people became much more productive at producing cars. Did we get a massive decrease in the number of people working in the car industry? No. We got the exact opposite, because suddenly people could afford cars. Rather than 1 percent of the population having a car, it became 50 percent of the population having a car, because demand increased so much with the drop in prices. As a result, we actually got more people working in the car industry despite the productivity increase.
This can happen again with many industries, especially those where demand will increase. If you want to think about it that way, think about the things that are currently luxury products for rich people. For example, if the price of an interior designer drops by 80 percent, I would get an interior designer every single time I move, which would be wonderful. Right now I cannot afford it.
There are many other examples. I would get a full-body scan at a hospital every month if that cost me very little. There are all kinds of products where, if the price drops a lot, we would see a lot more people buying them. Basically, whatever rich people currently buy would be done more widely.
- - - - - - -
Automation or Augmentation
Aiden Singh: Based on this finding, and the example you brought up of the auto industry, where automation augmented workers’ capacity and actually increased demand for labor, do you believe AI will primarily augment workers’ capabilities or serve as a substitute, broadly speaking?
Bouke Klein Teeselink: It is very hard to answer that in the aggregate. If I think about my UK finding at face value, one might say it suggests automation rather than augmentation. My reading at the moment, though, is that this could all just reflect uncertainty. Firms that are highly uncertain because of AI and are asking whether they will still need people in a few years are the ones that are currently reducing hiring. That may not be because they have already automated people away, but because they do not quite know whether they will need people in the future. So I do not think my UK study decisively shows either automation or augmentation.
Be that as it may, there will be elements of both. My own job is hugely augmented. I work with AI about 80 percent of the day. But that does mean that for certain things, I do not need people anymore. I do not write code anymore, which is remarkable. I check the code. That is all I do. I basically tell Claude Code or Codex, “Write this thing for me,” and then I check whether it is correct, and that is it. Which means I may not need a research assistant anymore to write my code.
If your job is to write code, that job is exposed. If your job is to evaluate code, or to think about what code we need, that is hugely complementary to AI. So I think about this in terms of substitutes versus complements, which is similar to augmentation versus automation, but is probably the way an economist would think about it.
AI has certain skills. For example, it can translate really well. It can code really well. It can write reasonably well. If your task or your job is exactly doing that thing, that is a substitutable task. Those tasks can now be done by AI, perhaps even better.
But there are all kinds of things that are complementary to AI. If I were only a software developer right now, I would be starting a company yesterday, checking other people’s code. There is going to be a huge increase in the amount of code generated, maybe a 100x increase, because everyone is now generating code. You do not need to code anymore. But no one knows whether that code is correct, what the vulnerabilities are, or what the weaknesses are. You need domain expertise, like a software engineer, to figure that out. So the ability to check other people’s code and vet vibe-coded projects for vulnerabilities is hugely complementary to AI.
Translators are another example. I would be quite worried if I were a translator, but I also expect that with AI we are going to have a massive increase in the amount of translation we do, because the cost of translation is going down enormously. What I expect is that a lot of translators will end up doing cultural sensitivity work. Learning a language is not just learning a language; it is learning a culture. If I want to know whether my ad is offensive to Hungarians, I have no idea. I can generate something in Hungarian with whatever AI I use, but I would very much like a translator to check whether it would actually come across well to Hungarians if I were to run that ad.
So, in a sense, I think there is going to be a shift in what people do rather than a simple automation versus augmentation story.
- - - - - - -
AI and Inequality
Aiden Singh: I recently spoke to Robert Seamans, who is over at NYU Stern and was Obama’s senior advisor on tech and innovation. He expressed concern about the risk of a widening wage gap between software-literate workers and manual workers. Do you share a similar view?
Bouke Klein Teeselink: Yes, I am very worried.
I do not know if I would draw the line between software workers and manual workers in quite that way - physical AI is a whole different field, with robotics and so on.
In some ways, I would be quite worried if I were a taxi driver right now. Ten years ago, I was very bearish on self-driving cars. At this point, I think self-driving cars have passed a market test, and we are going to reach a moment in the next ten years where it would be hugely unethical to have human drivers. So I do think there is a wave of automation there that will be quite problematic for taxi drivers.
I think the first big increase in inequality will probably come from several sources. One is that the returns to capital might go up, because we now have technology that can do all these things, so the owners of capital will see a higher return. And owners of capital are richer, so that increases inequality.
Another source is that, even among people who do not do manual work, there are big differences in how well people use these AI tools, and also which AI tools they have available to them. My impression so far is that it is very much the high-earning, high-potential people who are fully using these tools, burning through millions of tokens with frontier coding agents and so on.
It is already the case that high earners are adopting these tools to make themselves much more productive. I see that among academics as well. So far, I think it is mostly a few people at top UK and US universities who are using AI to substantially speed up their research, which has been incredible for me personally. But of course it does increase inequality, because it is an endogenous adoption problem: the people who are high achievers are the ones who adopt first, are probably most curious about these tools, and know how best to use them. So those people become much more productive, whereas others may be left behind. I would be quite worried about that.
Then there is a third element, which is that these tools are really incredible. Anyone who is not blown away at least once a week by what these tools can do is either not using them well or does not fully understand their capabilities. My willingness to pay for a frontier AI model, compared with having no generative AI tool at all, is basically whatever amount of work it saves me every month. The amount of extra work I can do with these tools is remarkable. I pay much less than that, but I still pay more than many people in the world can afford. A 200-pounds-a-month or 200-dollars-a-month subscription to OpenAI or Anthropic is a lot. Many people cannot afford that. So only people who can afford these tools get the full productivity increase. That is another way inequality might increase.
The only pushback I would make to the increasing-inequality story is that, at least in theory, AI could be an incredible tool for learning. If you are in a remote village in the developing world where your teacher is relatively unlikely to show up at school, and the facilities available to you are subpar, then even if you really want to invest in your education, you have very limited options. India is an interesting country in this respect, because demand for education is so much higher than supply. Suddenly we have a huge increase in the supply of education. Anyone with a phone can get a really good personalized education plan, whether they want to learn linear algebra, a particular language, or some other skill.
Until now, that was not available to many people. It was mostly available to rich people with private tutors. So the most optimistic take is that there is an opportunity here to level the playing field, and that would of course decrease inequality.
- - - - - - -
Long-Term Macroeconomic Effects
Aiden Singh: On an aggregate level, do you have any thoughts on the long-term macroeconomic effects?
Bouke Klein Teeselink: Yes. I should preface this by saying I am not a macroeconomist, so take this with some caution. If I think about how the transition is likely to unfold, in the short run I am quite worried. AI is going to fully transform the labor market, and there will be winners and losers. Historically, and I have no reason to expect this to be different here, that transition is painful. It could lead to all kinds of social upheaval. It might also lead to a gradual disempowerment, where people feel increasingly distant from society. So I am quite worried about that.
In the slightly longer run, there are reasons to believe society will adapt. Humans will find other things to do. If technology really goes where technologists think it will, to the point where it can basically do whatever humans do better, we might move to an economy where all we do is entertain each other. My job as an academic might not be to do research anymore, because that would be automated. It might not even be to teach in the usual sense, because people would learn through their AI. But I might end up running highbrow reading groups with smart, interested people, where I interpret the AI model and its explanations of economic models.
Basically, it would just be human-to-human entertainment, where the humanness matters. Someone could do it with AI, but it would not be the same, in the same way that no one would go see a robot dance, even if the robot danced better than a professional ballet dancer. What matters is the humanness that is part of the product.
In a really extreme case, where AI can do all productive work, we might end up doing the kinds of things where humanness is, in a sense, the product. That is not really a macroeconomic take, but it is at least a hope that even if AI becomes incredibly capable, there may still be some role for humans to play.
——————
Editing by Harpreet Chohan.