No deep dive into open-source models or agentic systems here. Just a reflection on what our enthusiasm for GenAI says about us. Our readiness to embrace it as a substitute for communication suggests we stopped thinking long ago.
A couple of years ago, I was with a senior politician at a big media conference. Looking through the agenda, she rolled her eyes and said, “AI, AI, AI - it’s so boring.” Perhaps that’s an indictment of the political class’s failure to think seriously about big issues. But I can’t help feeling terribly sympathetic to that sentiment.
I mean this specifically in relation to GenAI, the LLM tools like ChatGPT that we increasingly rely on instead of a personality. The tools themselves are, of course, insentient and blameless. Rather, their usage highlights our own insipidity. It’s captured in the way many people think “here’s what I asked ChatGPT earlier” is the start of a riveting story.
The philosopher Sir Karl Popper argued that any meaningful statement should be falsifiable: if I say all swans are white, you can disprove that by showing me a black swan. This set a standard for serious discussion. Ideas only mean something if they can be disproven. GenAI highlights that we gave up on this some time ago. So much of what we say and do is meaningless. It’s no wonder we think well-polished but vapid prose is an adequate replacement.
Corporate Claptrap
The brilliant journalist and teacher Lucy Kellaway fought a brave battle against this in the private sector, calling out “corporate claptrap” in the Financial Times. My former employer Accenture was named and shamed for this gem:
With the rise of the multi-polar world, the task of finding and managing talent has become more complex, turbulent and contradictory.
But back in 2017, Kellaway was already writing with resignation about the futility of stemming the tide. Things were only going to get more bullshitty. I was reminded of her foresight last week when I heard a “geopolitical strategist” offer such profound insights as:
We’re in the midst of a geopolitical risk supercycle. Geopolitics is no longer a news cycle - it’s a power dynamic.
Our astonishment that LLMs can replicate such acumen speaks not to the wonders of GenAI but to our own banality. Aside from generating rich content for identikit LinkedIn posts and press releases, GenAI’s other contribution to corporate life is to amplify trivial tasks. It’s become a crucial tool in the battle for “inbox zero”. Now you can look interested in everyone’s message and prolong exchanges without any need for action. Businesses may eventually realise that days spent exchanging AI-generated emails aren’t delivering great productivity gains. But is the lesson that humans are replaceable? Or that what we were doing wasn’t all that important anyway?
Understanding and Doing
Of course, some drudgery has a greater purpose: paying attention to details and getting the simple things right before you’re trusted with bigger tasks, or learning a particular expertise. Take trainee accountants who refine their craft through laborious Excel modelling. These outputs are rarely business critical. Rather, they’re an investment in future talent. And that balance between educational and immediate value is more lopsided now that GenAI can perform the same tasks faster and more efficiently.
It shows, however, that we’ve long recognised the difference between understanding and doing things. Top accountants don’t earn a reputation for their ability to keep churning out financial models. They are prized instead for their subsequent creativity with valuations or tax bills. That initial graft feeds a deeper comprehension. Just because LLMs offer words as well as numbers doesn’t change that distinction between understanding and doing. As a remedial mathematician who scraped through a GCSE, I can nonetheless, armed with a computer, solve an equation quicker than Terence Tao. But no one sees that as a threat to mathematics. We accept that solving equations isn’t the same as understanding them. Language may not share the strict rules of numbers, but that doesn’t mean there’s no logic or depth to it.
Reading, writing and meaning
Outsourcing communication doesn’t just diminish clarity, but comprehension. We stop understanding what we’re trying to say. And a recent MIT study highlights further long-term implications. Tracking the consequences of LLM-assisted essay writing, it showed that users consistently underperformed at neural, linguistic and behavioural levels compared to those who wrote unassisted. The more you offload your thinking, the less capable you become of doing it at all.
Reading and writing force you to grapple with meaning. As one writer puts it, reading isn’t just about importing information into your head. It’s a two-way process that changes the reader. It forms new connections, deepens understanding and builds what they call “cognitive endurance” - the ability to properly engage with complex ideas.
Skin in the game
In bottom-set maths we’d despairingly ask, “Why do we need to learn this when we’ll always have a calculator?” Only now do I understand our poor teacher’s answer that it was about learning how numbers work and thinking logically. The same applies to language. Turning to ChatGPT is like using a calculator: a very helpful tool at times, but only if you understand what you want to get out of it. Use it to interrogate ideas or add context - “give me the counter-arguments to this” rather than “write me 500 words on my synergistic, innovative company” (remember, it only spits out the latter so fluently because it’s drawing on all the human-generated guff we’ve already produced).
It’s why, for instance, Warren Buffett spends six hours a day reading. And not just financial reports, but biographies and literature. Materials that don’t strike us as directly relevant to playing the markets, but which all help form wider mental models and test hypotheses. Fund managers are a good example because their claims are falsifiable. Say the housing market will crash and you either profit or you don’t.
They have skin in the game. Beliefs drive action in the form of buying or selling. In contrast, our use of GenAI is boring because it reflects our inertia. It’s the employee prioritising verbose emails over action or the politician worried only about optics. Instead of making difficult decisions, we deal in empty phrases. I think of the UN Secretary-General António Guterres’ comments on the recent Israel-Iran conflict:
The only thing predictable about this conflict is its unpredictability.
As with our earlier geopolitics expert, it’s glibness masquerading as profundity. At least Trump put it clearly when he said “they don’t know what the fuck they’re doing”.
The same thing struck me while watching Netflix’s 7/7 documentary. High-minded liberals reach for cliches like “diversity is our strength” to avoid the awkward reality of why four UK citizens decided to blow themselves up. And in case it looks like I’m just bashing the liberal class, how helpful was “Brexit means Brexit”? It’s no better than the socialists who claim real socialism hasn’t been tried yet. There’s no outcome that disproves the thesis. It doesn’t pass Popper’s test.
Waffle in, waffle out
GenAI can waffle brilliantly. It’s why educators worry so much about its effects on university students. It’s not that it will stop them thinking (although the MIT study above suggests it may do that too). Rather, it perpetuates an existing tendency to pad out 2,000 words with “on one hand, on the other hand”. We latch on to GenAI with such enthusiasm because it polishes our own emptiness and ambiguity.
Instead of automating the worst of us, this should be a wake-up call to say things differently - to be incisive, to be falsifiable. To know what we mean rather than obfuscate. AI promises wonderful possibilities. But it can only reflect what we want to get out of it. And if that’s only to magnify the superficial, it all seems rather boring.
Almost all the uses of Generative AI I currently see are for making things that don't really need to exist.
It's adding waffle to a short email.
It's sending a new business email to someone who's not a serious prospect.
It's being used to do things that don't matter that much, because the things that do matter we need to do better than AI can.
Work is expanding to fit the technology available.
Rather than delete, we automate.
Rather than reduce bureaucracy, we make it easier to do more.