Are we using AI to automate the wrong things?
Large Language Models (LLMs) are storming the workplace. And why not - their ability to summarize large amounts of text and generate responses can make our lives easier.
More interesting still is their potential to power new architectures based on agents, or services that can understand the intent of a request and work out how to respond. The impact of agents could be as disruptive a development for software architecture as the internet or cloud computing.
Although LLMs may excel at orchestration and technical tasks, their use in content creation may undermine authentic communication. There is a growing tendency to embed LLMs across the board, from emails to reports and documentation. Using AI in this context bypasses the direct human communication that makes a lot of this content worthwhile. AI can certainly make us more productive, but it risks making us bland.
Encouraging lazy authorship
LLMs by their very nature tend to express middle-of-the-road opinions. Given that they are based on probability, they return a sequence of words that are most likely to satisfy any given prompt. They don't apply any genuine insight. They just assess the most appropriate response - one word at a time - based on the huge corpus of information that they have been trained on. This means that there's no controversial opinion, no hot takes, no moments of inspiration, and no surprises.
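To make that word-by-word selection concrete, here is a toy sketch of the idea. The vocabulary and probabilities are invented for illustration, and a real model scores tens of thousands of tokens at every step, but the principle is the same: favour whichever word is statistically most likely.

```python
# A toy sketch of next-word selection, with an invented vocabulary and
# made-up probabilities. Real models score tens of thousands of tokens
# at every step, but the principle is the same: favour the likeliest word.
import random

def next_token(context):
    # Pretend the model has already scored a handful of candidate words
    # given the context so far.
    candidates = {
        "predictable": 0.55,
        "bland": 0.15,
        "surprising": 0.15,
        "controversial": 0.10,
        "inspired": 0.05,
    }
    words = list(candidates)
    weights = list(candidates.values())
    # Sampling from this distribution (or simply taking the most likely
    # word) almost always lands on the safest option.
    return random.choices(words, weights=weights, k=1)[0]

sentence = ["The", "model's", "output", "is"]
sentence.append(next_token(sentence))
print(" ".join(sentence))  # most runs: "The model's output is predictable"
```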
On the surface, the outputs of LLMs can appear confident and on point. However, they lack judgement. LLMs will merrily reproduce any misinformation from their sources without proper consideration for different perspectives. They are expert summarizers but won't generate any new arguments or insights. There can be a tendency towards selective inclusion and even hallucination. If you want a vaguely unreliable summary of any subject, then an LLM is a perfect solution.
This can encourage lazy authorship where written work is produced without proper consideration of nuance. It's the kind of writing you'd see from a lazy arts undergraduate (I should know - I used to be one). Generic structure, surface-level research, nothing worth engaging with.
The importance of the authentic voice
There is an arms race going on between those who would use LLMs to write, and those who are building AI detectors that expose them. There are signals that give AI writing away. Given that LLM outputs are based on choosing the most likely words, their outputs tend to be fluid and predictable - far more so than any human author. They also tend to be consistent, where human authors are prone to fluctuations in sentence construction.
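One such signal is sometimes called "burstiness": how much sentence length fluctuates across a passage. The snippet below is a simplified sketch rather than how any real detector works - actual tools combine many signals - but it shows how uniform, machine-like prose separates from messier human writing.

```python
# A simplified look at one detection signal, sometimes called "burstiness":
# how much sentence length fluctuates. The texts and interpretation are
# illustrative only; real detectors combine many signals.
import re
import statistics

def burstiness(text):
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0

human = ("I wrote this in a hurry. Sorry. It rambles, doubles back on itself, "
         "and then stops without warning.")
machine = ("The report covers three topics. Each topic contains four sections. "
           "Every section includes two examples.")

print(round(burstiness(human), 2))    # higher: sentence lengths vary wildly
print(round(burstiness(machine), 2))  # lower: every sentence is much the same
```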
This flow of samey sentences and unusually perfect grammar lacks authenticity and robs the writing of any element of surprise. The outputs feel bland and lack character. Nobody speaks in uniform sentences organised into perfectly symmetrical bullet points. You cannot imagine that there is a person on the other end trying to communicate something.
It's important to recognise the importance of genuine authorship in written communication. After all, the written word should help to spread connection and share understanding. This requires that you communicate using an authentic voice so that your writing reflects the way you think, speak and feel. This isn't possible when you delegate self-expression to an LLM.
After all, who are you writing for? Do you care if anybody reads it and how they respond to it? How can you expect anybody to relate to a piece of writing if it was generated by an AI model? If you can't be bothered to write the entire article, you can't really expect anybody else to be bothered to read it.
It might have words - but that doesn't make it "writing"
This has parallels in the creative industries, which are up in arms about the potential of generative AI to undermine artistic self-expression. Our cultural landscape could be flooded with automatically generated "stuff" that has no creative impulse or emotional content behind it, only a corpus of prior work that has been replicated and re-purposed.
You can generate something that might sound like music or look like a film, but it won't be an authentic piece of work that speaks to other people. GenAI will never compose Mozart's "Requiem", paint "Guernica", or write "Anna Karenina". To create "art" you need to have something human to say that people can connect with. No matter how sophisticated GenAI gets, only humans can ever hope to create this sense of connection.
There does appear to be a greater willingness to accept AI-generated content in the workplace. When we write an email or compose a document we may not be creating "art", but we are trying to communicate something to other humans. Passing responsibility for this creation to an LLM robs the output of any authenticity. We risk infecting the workplace with a mountain of bland, automatically generated rubbish.
We also risk bypassing the learning that comes from articulating ideas when we use LLMs to generate documentation, reports, or emails. The very act of writing something down helps people to clarify their thinking and develop sharper conclusions. Delegating this work to an LLM may erode domain expertise and organisations may lose the knowledge transfer that happens when people write their ideas down.
The gradual erosion of quality and trust
LLMs might make the work of writing a document or producing a diagram faster, but are we sacrificing clarity and communication for the sake of efficiency? Engineers might be able to generate more lines of code with GenAI, but does this come at the expense of the stability and security of a solution?
Being able to produce less satisfactory outputs a little faster might not be the progress that we hoped for. We'll get more done, but we may not like what we end up building.
Who will be responsible or accountable for this content? By delegating human intellectual creativity to LLMs are we risking a descent into a bland hellscape of AI-generated content that is only ever consumed by other AI models?
What happens when nobody creates anything new? We may eventually run out of material to train these models on - if that hasn't happened already. Much has been written about the idea of "model collapse", where machine learning models degrade over time when they are trained on synthetic data. If the quality of content declines due to our reliance on LLMs, it doesn't suggest a rosy future for the workplace.
Perhaps the answer here is to use LLMs as drafting tools rather than authors. Many people regard LLMs as helpers that allow them to put a basic framework in place for a piece of writing. Models can help you to sharpen a piece of writing, identify missing arguments, and improve the overall flow. The important thing is to retain human authenticity and remember that writing is nothing without communication and learning.