
Contests are seen by many in the social sector as a way to engage audiences and generate new thinking about important issues. Yet when the fruits of all these crowd-sourced contributions are largely left to rot on the vine, are we undermining the very aims of social innovation, and what is the true cost of harvesting them?
Kevin Starr, writing on the Stanford Social Innovation Review blog, recently pointed to the many ways in which well-meaning contests seeking public engagement with social innovation ideas undermine their very goals. After watching many a contest come and go, I felt he was channeling my inner curmudgeon:
After years of watching and participating in this stuff, I’ve concluded that it does more harm than good—and by “this stuff” I mean the whole contest/challenge/prize/award industry. Yes, this lumps together way too many disparate things; yes, there are exceptions to everything I say here; and yes, it deserves a more nuanced discussion. That’s all true, but on the whole, I think we could dump it all and not miss a thing.
His reasoning is four-fold:
- It wastes huge amounts of time.
- There is way too much emphasis on ideation* and not nearly enough on implementation.
- It gets too much wrong and too little right.
- It serves as a distraction from the social sector’s big problem.
Starr makes reference to a scenario posed by futurist Thomas Frey, who illustrates the false wisdom of crowds by imagining the idiocy of having passengers vote on how to fly a plane while it is en route, as a way of democratizing the experience of flight.
Crowds and their crowds
Crowd-based anything seems to be popular. With the rise of behavioural economics, social network influence maps, and the proliferation of crowd-enabled funding projects, there is much on offer for those looking at how the ‘wisdom of crowds’ can be harnessed. The popularity of opinion-based journalism reveals that you need not know much about what you’re talking about; just having an opinion matters. Indeed, we are asking people for their opinions on things they know nothing about, yet making enormous decisions based on that feedback.
This is not about experts versus novices; it’s about knowing when expertise or more information is needed and when new, fresh thinking is necessary. The two aren’t mutually exclusive, but there is a place for knowing what information to trust, when, and where. The mad enthusiasm for crowds has lost this subtlety.
In the case of contests, Kevin Starr remarks:
The current enthusiasm for crowdsourcing innovation reflects this fallacy that ideas are somehow in short supply. I’ve watched many capable professionals struggle to find implementation support for doable—even proven—real-world ideas, and it is galling to watch all the hoopla around well-intentioned ideas that are doomed to fail. Most crowdsourced ideas prove unworkable, but even if good ones emerge, there is no implementation fairy out there, no army of social entrepreneurs eager to execute on someone else’s idea. Much of what captures media attention and public awareness barely rises above the level of entertainment if judged by its potential to drive real impact.
There is this common notion that ideas will change the world. That’s nonsense.
Doing something with a good idea is what changes the world. It’s what Seth Godin, as well as Scott Belsky and his group at 99u, have been pushing: making ideas happen is what counts most. The world has never been changed by inventions left solely in people’s minds. Putting ideas out into the world also opens them up to critique and to other kinds of innovation through addition, iteration, and prototyping.
Ideas themselves are plentiful and easy to cultivate; an idea is a seed, not a tree.
As the late George Carlin put it so well (as he so often did):
Ideas? I have plenty of amazing ideas! I have lots of ideas. Trouble is, most of them suck.
There is a crowd of cheerleaders who presume that crowdsourcing ideas or problem-solving will work for almost anything, and that is a myth. Much of the data on effective crowd-based decision-making points to very specific circumstances, namely those where individual judgments can be averaged. Predictions of movement, estimates of quantity, and dichotomous outcomes are all good areas for crowdsourcing. What has happened is that this effective use of crowds to make sense of large phenomena has been over-extended to areas it is far less adept at handling. There are also specific processes that help engage crowds in ways that yield better data.
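To make the distinction concrete, here is a minimal Python sketch (purely illustrative, with made-up numbers) of the kind of task crowds do handle well: estimating a quantity, where many noisy but independent guesses can be averaged and the error shrinks as the crowd grows. There is no comparable way to “average” thousands of open-ended contest entries into one good, implementable idea.

```python
import random
import statistics

random.seed(42)

TRUE_VALUE = 850  # e.g., jelly beans in a jar: a quantity crowds estimate well


def individual_guess(true_value: float) -> float:
    """One person's noisy, independent estimate of the quantity."""
    # Each guess is off by a random error of up to ~40% in either direction.
    return true_value * random.uniform(0.6, 1.4)


for crowd_size in (10, 100, 10_000):
    guesses = [individual_guess(TRUE_VALUE) for _ in range(crowd_size)]
    crowd_estimate = statistics.mean(guesses)
    error_pct = abs(crowd_estimate - TRUE_VALUE) / TRUE_VALUE * 100
    print(f"crowd of {crowd_size:>6}: estimate {crowd_estimate:7.1f} "
          f"({error_pct:4.1f}% off)")

# Averaging works here because the individual errors are independent and
# cancel out as the crowd grows. Open-ended idea contests have no analogous
# averaging operation, which is why the 'wisdom of crowds' result does not
# simply transfer to them.
```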
Contests play into this mindset when they seek the ‘best idea’ from the many on issues where people are very often ill-informed about the scope of the projects involved.
Systems thinking about impact
Some might argue that enlisting many people’s involvement in a topic is still good value because it gets people thinking about an issue. This might be true for some things, but does that thinking produce any change in anything else? Starr points to a recent report from the Knight Foundation that looks at the energy that went into its contests and the wider impact it saw when it stepped back and looked at the contests’ winners and losers in their totality.
The Knight Foundation recently released a thoughtful, well-publicized report on its experience running a dozen or so open contests. These are well-run contests, but the report states that there have been 25,000 entries overall, with only 400 winners. That means there have been 24,600 losers. Let’s say that, on average, entrants spent 10 hours working on their entries—that’s 246,000 hours wasted, or 120 people working full-time for a year. Other contests generate worse numbers. I’ve spoken with capable organization leaders who’ve spent 40-plus hours on entries for these things, and too often they find out later that the eligibility criteria were misleading anyway. They are the last people whose time we should waste.
Putting aside the motivation for giving a prize, the bigger issue is what these prizes cost the social benefit sector by drawing out so much energy that ends up stored in one place, for one purpose, and likely never used again. Unlike academic science grants (which introduce their own system of waste, but generally have annual calls that allow people to rework failed proposals), these contests are often one-shot opportunities, which means creating large amounts of content from scratch to meet the idiosyncratic circumstances of each contest. Starr adds:
And it’s exploitive. For social sector organizations, money is the oxygen they need to stay alive, so leaders have to chase prizes just like they do other, more sensible sources of funding. Some in the industry justify this as a useful learning process. It’s not. Few competitions (with some notable exceptions) provide even the most rudimentary feedback. Too many of these contests and prizes seem like they are more about the givers than the getters anyway.
If we are looking to create impact, perhaps we need more systems thinking and design thinking about what it is we intend to produce and how we can better design our initiatives to produce it. Otherwise, we’ll create a great deal of creative noise and very little innovation signal, while reducing the impact of the system as a whole in the process.
* Starr uses the word ‘innovation’ in the original text; however, my definition of innovation is one that necessitates implementation: you must actually do something different than before to innovate, not just have a good idea. It requires some rearrangement of the social and technological relationships to the product or service being designed.