In an ingenious study published this summer, US researchers showed that within months of ChatGPT’s launch, copywriters and graphic designers on major online freelance platforms saw a significant drop in the number of jobs that they got and even steeper declines in income. This suggested not only that generative AI was taking their work, but also devaluing the work they still do.
[…]
Most surprisingly, the study found that the self-employed who previously had the highest earnings and completed the most jobs were no less likely to see their employment and income decline than other workers. If anything, they had worse results. In other words, being more qualified was not a shield against losing jobs or earnings.
This must be unsettling for many people. The chart below illustrates the effect:
“[W]ithin months of ChatGPT’s launch, copywriters and graphic designers on major online freelance platforms saw a significant drop in the number of jobs they got and even steeper declines in earnings”. https://t.co/TGaym4Iivb pic.twitter.com/e0PKP700BR
— Steve Stewart-Williams (@SteveStuWill) November 10, 2023
But wait, there is more:
But the online freelance job market covers a very particular form of white-collar work and labor market. What about looking higher in the ranks of the knowledge worker class?
For that, we can turn to a fascinating recent study from Harvard Business School, which monitored the impact of giving GPT-4, OpenAI’s newest and most advanced offering, to Boston Consulting Group employees.
BCG staff randomly assigned to use GPT-4 when performing a set of consulting tasks were significantly more productive than their peers who did not have access to the tool. AI-assisted consultants not only performed tasks 25% faster and completed 12% more tasks overall, but their work was also rated as 40% higher in quality than that of their unassisted peers.
Employees across the skill distribution benefited, but in a pattern now common in studies of generative AI, the biggest performance gains occurred among the least skilled workers. This makes intuitive sense: large language models are best understood as excellent regurgitators and summarizers of existing public-domain human knowledge. The closer one’s own knowledge already is to that frontier, the less benefit one gets from using them.
Fascinating stuff.