Far from being messy, the meta middle is the space where we can see the disruption coming but aren’t yet sure how to steer through it.
The Messy Middle is a phrase used to describe the part of a project where we’re prone to get lost or mess things up, or, as Scott Belsky puts it, “finding your way through the hardest and most crucial part of any bold project or new venture.”
Generative AI, like GPT-4 and the many other large language models available for public use, is throwing us into something else. I’m calling this the meta middle. This is not a reference to the company formerly known as Facebook; rather, it draws on the original sense of the term:
Making or showing awareness of reference to oneself or to the activity that is taking place, especially in an ironic or comic way.
(American Heritage Dictionary, definition of meta)
Futures work is beset with prediction problems: it’s hard to fully anticipate what might come, no matter how much data or how many scenarios you have. In the case of generative AI, the future we once thought was far off is now here.
This past week, many of the leading scientists, investors, and developers of AI models signed and shared an open letter calling for a pause on large AI experiments. The pause would allow more thoughtful consideration of the potential harms these models could generate. While admirable, many of the issues tied to AI are already well known. We’ve seen some of what these models can do and how they are biased or blind to certain issues, because the data they were trained on reflects the blindness and biases of the humans who produced it. These models lack morals, too.
None of this is surprising to me. What I’m focused on is what comes next.
Short Termism Meets The Long Now
There’s a quote, attributed to many, that speaks to a quirk of human perception:
We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.
(The Internet)
The volume of blog posts, news features, podcast episodes, and videos outlining predictions about AI is too great to attend to, but as a snapshot I’d sum up the arguments in two ways:
- AI is coming for our jobs!
- AI is not coming for our jobs and will unleash a new creative age through disruption. We will just do different jobs.
This is the meta part, and it requires a zoom-in/zoom-out approach. If we zoom in, we see millions of people using tools like ChatGPT; it’s all over the news and is already disrupting tools like Google. Yet in the five months since ChatGPT was released, the sky hasn’t fallen, jobs haven’t disappeared to AI, and economies haven’t been upended. Yes, school children and university students are looking to use AI to write their essays, but counter-programs are already helping limit the ability to fool a professor.
Zooming in, we see plenty of examples of what AI can do, including passing graduate-level law and business exams. But in practice, people are still studying for entrance exams, writing essays, and using many of the same approaches to learning, only perhaps with a little help.
Zooming out, things look different. One of the reasons we’ve yet to see massive disruption in the labour market is that it takes organizations a long time to adapt. First, they need to understand the technology and its potential. Second, they need to be able to rely on that technology to do something consistently and effectively in a better manner (cheaper, faster, more reliably) than what it replaces. In some cases, that ‘something’ is currently done by a human.
Take self-serve checkout systems. They’ve been around for years, yet only now are we starting to see them replace humans. Most stores I’ve seen using them switched to having more self-checkouts than staffed ones only this year. The reason is that a lot of human energy goes into automation — at least at first. But unlike self-checkouts, AI learns and adapts. The human energy that goes into it is minor relative to its output potential. That’s not easily seen in the short term, but look a little farther out and we start to see things change.
When the Money Catches Up
Right now, the idea of replacing your lawyer with an AI bot is far too risky. Legal matters require attention to detail and an understanding of context in ways that AI can’t guarantee. Today. But just as ‘robo-investing’ seemed risky a few years ago and nearly every investment company now uses AI assists, soon we’ll get legal advice from a chatbot and have an AI prompt system write our research reports.
The money will catch up. For all the claims that ‘our people are our strength’, the truth is far less attractive when money is on the line. There is also the very real fear of missing out by failing to invest in and deploy AI. Do you want to be the CEO of an organization that pays humans — many of them, with all their needs for salaries, benefits, leave, and so on — to do what your competitor does X times faster for a fraction of the price?
Just think of all the compassionate payments to furloughed staff we saw during the early stages of the COVID-19 pandemic, only for those same staff to be laid off or made redundant in the months after we returned to previous practices. The money caught up to the reality of the situation.
When I look at these issues as a designer and psychologist using my futurist lens, I don’t see all these wonderful innovations coming without massive human disruption. We’ll soon see many companies with leases coming up for renewal start to rethink what it means to work in an office, or to have staff at all. This is the meta middle. We can see where things are going, and while we can’t know exactly what will happen, the general destination feels assured. The money will catch up to the tech.
The question becomes: what are we going to do about it? This is more than a foresight exercise; it’s about strategically designing for the world we want, knowing what is likely to come. It’s about steering off the road from the meta middle and into something else.