Is generative AI really making you more productive?

Performance in an environment of generative AI is not simply a matter of copying and pasting the output of a tool.
This article was first published in the February edition of People Matters Perspectives.
I started my career in book editing, at a time before digital print-on-demand had become widely used locally. The publishing house I worked at used offset films - clear bluish plastic films that acted like photographic negatives in the printing process. Errors had to be manually rectified by painstakingly cutting out the letter or word from the problematic film, cutting out another, correct letter or word from a spare film that was printed just for this use, and patching it into the hole with clear tape.
Needless to say, there were always errors. There was a running joke that you could check your manuscript and your films 10 times, 50 times, 100 times, and when the book was finally printed you’d open it at a random page and find a typo staring back at you.
Digital printing made things vastly more efficient. We cleared the little containers of spare letters and punctuation off our desks. There were still errors, but now we generated fresh print files in a fraction of the time it took to cut and paste bits of film. Even if the error was major, there was no need to put in an order to the printer for new films and wait a day - we could fix it ourselves.
By the time I moved on to full-time business journalism a few years later, the software had improved significantly. Errors were much easier to handle. We could correct PDFs directly in the file without needing to regenerate it each time. And most importantly, there were fewer errors, because spellcheck and autocorrect got better at finding and fixing them.
(I am not exonerating the tools; in one memorable incident some years ago, the autosuggest feature attempted to replace every instance of ‘boyfriend’ with ‘hamster’.)
Speed? Cost? Quality?
Book printing isn’t so much of a thing now. Over the last 20 years, technology slimmed down the entire publication process to the point where copy editing, layout, illustration, conversion into reader-friendly formats, publication, and dissemination can all be automated. Anyone can turn their manuscript into an e-book and release it online in a matter of minutes without ever so much as glancing at an agent, editor, or publisher.
For that matter, LLM-based tools today allow anyone to create a manuscript completely free of typographical errors, without so much as thinking about the actual writing...also in a matter of minutes.
But does all this mean the publication process has gotten more productive? Does it mean that authors and publishers are performing better?
If you measure performance by speed and quantity, then the productivity of the authoring process has skyrocketed. Anyone can write and publish a book or essay with only as much effort as it takes to type in a prompt.
If you measure performance by cost, the answer becomes more ambiguous. There are at least four costs associated with externalising the content creation process to an AI tool: the immediate resource cost of curating and editing the output, the mid-term environmental footprint of the technology, its cybersecurity implications, and the long-term personal impact on our own thinking and creative capabilities. The latter three are large, nebulous, and extremely difficult to quantify until the negative impacts are seen. How does one weigh them against the short-term savings of getting the tool to write that essay?
If you measure performance by quality, the answer becomes not only ambiguous but polarising. On the one hand, content creators who might previously have been shut out of the industry for geographical, economic, social, or other reasons are now able to access audiences they could otherwise never have reached, and vice versa. On the other hand, we are today inundated by ‘noise’ - the book and essay equivalent of a thousand artificially generated websites built from plagiarised text with the singular purpose of scraping ad revenue, clogging up search results and overwhelming the genuine, original content.
Human performance amid generative AI tools
Let’s take the question of performance into the workplace. If someone creates a report in five minutes, but it is simply a mass of motherhood statements that fails to reference the appropriate business data and does not answer the board of directors’ questions - what, exactly, is being evaluated?
If everyone operates an AI tool to generate the same output at equal speed and the same cost, what makes one worker different from another?
The answer to both of these questions is fundamentally the same. When generative AI proliferates, the only differentiating factor in two users’ performance is what value they add to the output after it is generated.
There is a great deal of angst and debate going on now about the kind of skills that people will need to acquire and/or master in order to do well in workplaces that rely heavily on generative AI. The conversation has swung to every point of the compass: hard technical skills for managing the LLM, prompt-writing skills for using it, communication skills for the parts that the algorithm can’t manage, empathy and leadership, even learning as a skill in itself.
But one thing connects all of these: the ability to add value to an LLM’s output. Performance in an environment of generative AI is not simply a matter of copying and pasting the output of a tool. A very large component of human judgement and human perspective goes into making that output fit for purpose - qualities that are extremely difficult to quantify, and that sometimes only become clear when something goes wrong.
Yes, generative AI can help us to perform better. But it can also make us perform worse - or perhaps it simply exposes our inability to perform at certain tasks. Open that book: instead of finding a typo staring back at you, you may find a hundred plagiarised pages; or you may find a hamster romance.
Did you enjoy this article? People Matters Perspectives is the official LinkedIn newsletter of People Matters, bringing you exclusive insights from the People and Work space across four regions and more. Read the January and February 2025 editions here, and keep an eye out for the March edition coming this Friday, 28 March.