Trust has never been worth more than it is now – and its value is only increasing. Designing for trust will be the most important focus of our organizations in the months and years to come.
Trust-making is going to be the design skill for the future. That’s my prediction.
You might be inclined to say “trust-building,” but I mean trust-making. We’ve always had to build trust; in this new age of AI, we’ll need to make it, just as with any design problem.
Let’s look at the two words:
bɪld | verb
build, construct, make: make by combining materials and parts
build, progress, build up, work up: form or accumulate steadily
(Oxford English Dictionary & Visual Thesaurus)
meɪk | verb
form (something) by putting parts together or combining substances; create.
alter (something) so that it forms (something else).
(Oxford English Dictionary)
Build tends to imply working from something to make it grow, evolve, and develop. Making implies starting more from scratch. I know the distinctions are small, but in my mind, they make a big difference in how we proceed.
AI-powered tools for creating new content in various forms have evolved at a speed that has prompted even the founding creators of the underlying technology to voice concerns about what’s happening with it.
These concerns about AI are many, and I want to address one of them: trust.
Trust in 2023: A Case Study
On May 22, 2023, the stock market briefly dropped in the wake of a fake news report circulating on Twitter (with an accompanying photo) of smoke coming from the Pentagon, implying a terrorist attack. The report came from the account @BloombergFeed, which had a blue check mark beside it that, up until earlier this year, meant the account had been verified as a trusted source. Now, it just means you paid for it.
The story came via an account that put a deceptive twist on Bloomberg News: the story was fake, and the photo was fake, but the checkmark was real. The brief financial implications (and who knows what other anxiety it produced in people who read and believed the story) were also real.
This is not to suggest that AI-generated content is bad; rather, it’s the deceptive use of it that I take issue with. Artists, authors, and creators might find enormous benefit from AI. However, the presence of good things doesn’t mean there won’t be a lot of bad stuff, too.
Trust-Making Is Beyond AI
Trust-making involves being trustworthy, which is what earns trust. As organizations, it will mean doing what we say and being clear about saying what we do. It also means creating the kind of organization where we engender trust both within the organization and outside of it. We do this by design.
Within the organization means being kind to your staff, nurturing a community of well-being, and leading with integrity, openness, and care. That includes creating an organization where everyone feels they can be open, honest, and supportive of each other. Psychological safety is key. Designing governance, leadership, and communication systems within your organization is part of this, too.
Outside the organization, it means marketing and communicating your work with integrity. It means avoiding things like clickbait, deceptive visuals, and hyperbole in what we say. It also means living our values through our work and doing what we say we’re going to do. I’ve had the chance to work with some remarkable organizations that have made this a priority even when they are governed (as in the case of public services) by leaders who may not share these values.
The AI itself is secondary. We can still generate content from AI prompts or assists; we just do so to make our work better, not to say things that aren’t true or to mislead people into believing something was created by humans when it wasn’t, in contexts where that matters.
Islands of Trust
Once you make trust by design, you can build it by demonstrating trustworthiness in interactions. Only those organizations that make, build, and demonstrate trust will be worth working with. Too much is at stake to constantly re-evaluate what we see and hear in situations where it’s not warranted (like our everyday encounters and interactions). Let’s save the critical evaluation of things for our discovery, sensemaking, and strategy.
We are about to see a tsunami of fake everything come at us. The only way to keep our heads above water is to create what Margaret Wheatley calls Islands of Sanity. Actually, it’s more like Islands of Trust.
That island will have to be human-made.
Let’s grab a virtual coffee if you want to design in this kind of thing for your organization or team.