When innovative solutions go in search of a problem
Caroline Scotter Mainprize

Caroline Scotter Mainprize argues against technological innovation for innovation’s sake.
A couple of weeks ago I sent a client contact a discussion document to kick off a new project.
After the document had been reviewed internally, she emailed me a copy of the meeting notes. And I was appalled: they seemed to have misunderstood the whole concept of the project. Not only that, but they had evidently spent a disproportionate amount of time discussing a minor point that had hardly featured in my paper.
I rang my contact immediately and she was quick to reassure me. The meeting notes had in fact been generated by AI, and if I listened to the two-hour audio recording that she was just about to send, I would realise that their discussion had been positive, sensible, and nuanced.
I can’t say that I found this episode very encouraging. Using AI to produce the meeting notes was essentially pointless. It did not save either of us any time, and would not have done so even if my client had done the sensible thing and checked it before sending. And that’s because the notes were not any good. Generative AI has no intuitive ability to distinguish the critical from the trivial.
Presumably it could be prompted to do so, but defining and writing such a prompt would take more thought and time than just writing the notes. Finally – and this will mean something to chiefs of staff – by delegating the note-taking to AI, my client was missing an important trick. Whoever takes the minutes holds the power: they can decide what is recorded as significant and what is quietly dropped; and they can ‘finish off’ decisions that, in many meetings, are never properly made. As the fictional UK Cabinet Secretary Sir Humphrey Appleby advised his junior, Bernard Woolley, in the 1980s sitcom Yes, Prime Minister, ‘The minutes do not record everything that was said at a meeting … You choose from a jumble of ill-digested ideas a version which represents the Prime Minister’s views as he would, on reflection, have liked them to emerge’.
AI-powered minute-taking tools are an example of a technological innovation designed to solve a problem that does not exist – and that, in the process, has managed to create new problems. My example was relatively harmless, but there are others that are much more serious. The same AI tools that can help flood social media with vacuous ‘travel’ images (as if anyone has ever complained that there is not enough doom-scrolling fodder available) can be, and have been, adapted to create deceptive deepfakes and to ‘nudify’ images of children – to say nothing of the planet-destroying quantities of energy and water required to power these tools in the first place. In addition, the rush to automate tasks and even whole jobs risks exacerbating existing inequality, raising levels of unemployment, and, ultimately, feeding a deflationary and dangerous demand deficiency.
This is not to say that all applications of AI are unwelcome. Some agentic AI tools are already contributing to significant and beneficial innovations in, for example, healthcare, speeding up clinical trials and facilitating personalised treatment plans; in energy, optimising electricity flows in smart grids; and in cybersecurity, improving threat detection and automating responses. Neither is it the case that automation of physical tasks is inherently negative, especially when those tasks are potentially dangerous.
But the recklessness and speed with which many new technologies are deployed – easily outpacing the development of legislation to control them – are a cause for concern. This is especially so when combined with the customary bandwagon-jumping of businesses and governments, who are desperate to claim to be ahead of the curve in adopting the latest and shiniest toys. It can feel as if innovation – and technological innovation in particular – has its own internal momentum, developing tools that no one asked for but that, once developed, we need to use or risk being left behind.
This is not actually the case, of course. The implementation of any new service or product is the result of a series of decisions made by humans on both the supply and demand sides. Those decisions are made in a variety of contexts and subject to different motivations on the part of the individuals making them. As in so many cases, the chief of staff can play a valuable role here by acting as a sounding board and devil’s advocate to ensure that decision-making slows down and that all perspectives and possible consequences are taken into account. They can do this by asking three key questions.
1. What is the problem that we are trying to solve?
This question could do with being asked a lot more, and not just about technology. Products – material and digital – that are designed to solve non-problems just end up creating actual harms. From USB-powered pet rocks to plastic banana-slicers, products for which there is no need waste money and resources, and clog up landfill. On the generative AI front, we already have an excess of clichéd and derivative novels, short stories, and screenplays. Why add to them? There are also millions of good marketing copywriters, designers, and other creatives who do not need any virtual ‘assistance’ to come up with original and effective ideas, so why are AI firms so insistent on providing that help? And how have companies allowed themselves to be persuaded that it is necessary? The answer, I suspect, is that good creatives do not need the help, but indifferent ones do. With AI assistance, unskilled writers and designers can produce a passable impression of ability, which may be enough for undiscerning companies – especially if they’re cheaper.
2. Is the proposed solution good enough?
Anyone who has hung on the telephone to a customer services department has longed for a quicker response, a fruitful conversation by text, or the miraculous appearance of exactly the right content on the website. Chatbots that can triage customer questions, answer straightforward queries, and direct those with complex issues to a knowledgeable and responsive person would be the answer to a valid problem for customers and organisations. If only the chatbots currently deployed by organisations were capable of doing that; and if only there were enough knowledgeable and responsive people still employed to deal with all the complex queries.
There are exceptions, but for the most part it feels as if organisations have rushed to implement chatbot solutions that are not up to the job – just as they rushed to implement automated telephone answering services or self-service supermarket checkouts. As a result, they have only moved the queues rather than shortened them.
The innovative product is not always the only solution or the whole solution; and even an effective solution should not be indiscriminately scaled. Broader thinking and experimentation – running and evaluating pilot projects, doing a little more of what works and less of what doesn’t – are preferable to wholesale implementation of a single, untested tool.
3. What are the potential long-term or system-wide downsides?
We are increasingly being told that AI can, or will, eventually replace most jobs. Generative AI is ‘coming for’ white-collar professions, while robots are on the march to replace physical labour. And somehow, if you listen to the wrong parts of LinkedIn, not being OK with this is deeply uncool.
However, much as they may want to, tech companies cannot replace labour on their own. They can make the tools, but organisations decide to buy and implement them. And before they make that decision, they need to consider not only the two questions above, but also the long-term and ecosystem-wide consequences of their choices.
If organisations were really honest, they would admit that the main problem that they are aiming to solve by replacing jobs – or even just tasks – with generative AI is that currently they have to pay humans fairly for their work and treat them decently. Fewer employees doing the same work faster means more profit.
But that is only in the short term; in the long term there are considerable drawbacks. For example, the slightly dull and commodified tasks typically carried out by entry-level employees serve an additional purpose. They allow young people to learn the nuts and bolts of the industry or profession they are joining and socialise them into the organisation. While a large proportion of these jobs could be automated, it would come at the cost of failing to transfer institutional knowledge, failing to develop skills, and cutting short the talent and leadership pipeline.
Overall, the thoughtless replacement of jobs, or even just tasks, is likely to lead to higher unemployment and/or the devaluing of some occupations, reducing the number of people able to buy the products and services on which our consumption-based economies depend. Organisations focusing on short-term cost-savings and profits are in danger of self-sabotage in the long term.
However, this is not inevitable. Organisations may be able to replace low-level jobs with technology while creating more interesting, higher value jobs for the people who have been displaced. But that will take careful strategic thinking in the context of acute awareness of potential consequences and the dynamics between all actors in the wider ecosystem. Chiefs of staff, with their boundary-spanning relationships, flexibility, and systems-based mindsets, are in a position to influence that thinking. I would argue that it is their responsibility to do so.
Caroline Scotter Mainprize is the Chief Editor of The Chief of Staff