Evidence and Learning in the New AI-driven Arms Race

What happens when AI-powered algorithms take what was once a peer-to-peer network and turn it into an AI arms race?

Cory Doctorow coined the term enshittification to describe the way many large electronic media platforms, from Google to Facebook to the platform now called X, degrade over time. He's speaking to that feeling I have and hear from others: the Internet isn't what it used to be.

Tools like Google are far more likely to serve up ads for products vaguely related to whatever I'm searching for than quality search results. Instagram and Facebook have found new ways to serve up ads and traffic bait through recommended posts, drowning out the posts from the people you chose to follow. Amazon hosts a vast storehouse of cheaply made, poorly designed, environmentally and ethically dubious crap (that's paying for promotion), making it ever more onerous to sift and sort to find what you're actually looking for.

All marketing aside, the situation isn't much better for those seeking thought leadership, networked learning, or new insights from the field via social platforms. A colleague of mine recently shared a 2023 guide to the LinkedIn algorithm for users looking to increase their exposure. It was a short document and an exhausting read. Why? The amount of work now required simply to stay visible keeps growing.

My colleague Mark Leung calls this the AI-driven arms race. AI-powered algorithms are constantly reworking content contributions to serve them up to us in ways that suit the latest demands of their human overlords. It's why these guides are only as useful as they are recent: with AI, things are always changing.

The value is in the clicks. Once we exhaust a strategy for coping with one change to the algorithm, another change appears. It keeps going.

This has implications for trust, which influences evidence and learning in profound ways.

Trust Fall

What this begins to do is erode trust. Fake news and mis- or disinformation aside, the reasons I stopped engaging with Facebook and X were tied to a lack of trust. I lost trust that I could find what I wanted. I lost trust that the people I opted in to hear from weren't being silenced (or muffled) by the algorithm. Let's face it: if people are offering knowledge, not selling products or outrage, that's not worthy of clicks.

In the AI arms race, the destination is most certainly the bottom. From a design perspective, once we lose trust we stop using something. Elon Musk thinks he can charge a fee to make X a better platform, except he's already lost most of the best users. I used to 'tweet' multiple times per day, every day, and engage in many conversations with people I met, knew, or wanted to learn from. Now, days go by and I don't even read anything on the platform. What used to be daily posts are down to 3-4 per week. I post on Facebook about once every 6 weeks.

LinkedIn is where I've found the most value, yet this new algorithmic adjustment is the latest step toward degrading trust in that platform, too. I've grown used to social posting and sharing — it's been nearly 20 years since Web 2.0 captured our imagination. Now? I see little value in these platforms. Where am I going? Increasingly, places like Substack. That community — as of this writing — seems more robust, although significantly smaller. It's a group of writers and core readers, which is what the early stages of social platforms were about. I don't think it's going to be a recreation of the "good old days", but it might return some of that feeling.

When we design for trust, we are making a commitment to each other, not to an algorithm. AI doesn't get that. Nor should it — trust is a human quality. AI is about results — whether or not people trust them.

What this means is giving up on systems that were once at the heart of who and how we trusted. It means relying on a smaller group of people, maybe reading deeper instead of wider, and giving up on the idea of a World Wide Web in favour of a more displaced, narrower web. Maybe that's a good thing.

All I know is that I’m not looking to participate in the arms race as much as it’s asking me to.

Evidence and Learning

What suffers when we close off from others are many forms of evidence (verification and generation) and learning. We build evidence from trust. People need to trust data, and they need to trust others enough to provide their input and consent for generating that data. AI is scraping our data — what we say, produce, and share — mostly without our consent. The large language models being developed take mostly decontextualized data generated for one circumstance and purpose, gather it, and use it to provide recommendations and products for other purposes.

This erodes trust in evidence. When I conduct research, I know what is being collected. When I use data, I have to trust in the knowledge of how it was collected, where it was gathered, and how it was treated. If I can’t trust it, is it evidence?

If I can't trust it, what can I learn from it? I learned not to jump up on the counter to get cookies (I wasn't supposed to have) because one time when I did, the stove was on and I put my hand on the burner. Ouch! I learned from that. I had very good, trustworthy evidence to support that choice — especially because my brothers managed to do the same thing later on.

Designing for trust is the skill for the future. Creating tools and evidence that we can learn from is what comes from using this skill. I wonder how much that's being considered by our tech leaders?

Image credits:  Marvin Meyer on Unsplash and Sincerely Media on Unsplash
