Just like with drugs, dosage and interactions matter with learning and innovation; it’s about amount, mix, and more.
If you know someone over the age of 80 (including yourself), there is a good chance they are taking drugs, and lots of them. Medical science has done a remarkable job of finding pharmaceutical therapies for many of the ailments and conditions that come with aging. While these might work well individually, things become far more complicated when they are combined with medications for other conditions, and seniors are likely to have several.
Just as doctors and patients wrestle with drug combinations and their effects, those working to improve learning and innovation must grapple with dosing: the practice of finding the right amount and combination of ingredients to produce safe and optimal outcomes.
What's true for drugs is also true for innovation and learning outputs, and for the evaluation and design research that goes into them. To illustrate, let's go back to the 1980s and the Cola Wars.
A Matter of Taste
Remember New Coke? In 1985 the Coca-Cola Company stunned the world by replacing its original flavour with something called 'New Coke'. Less than three months later, after massive consumer outcry, the original, rebranded as 'Coca-Cola Classic', was back on shelves.
New Coke is widely considered a spectacular product failure, yet it was based on sound research. That research suggested that people found the sweeter taste of New Coke more appealing; the sweetness was meant to counter Pepsi, which was publicly 'beating' Coca-Cola in its ads with the Pepsi Challenge taste test. The problem is that the appeal was limited to small doses, exactly the kind the Pepsi Challenge capitalized on.
If you compare drinks using small doses, people prefer sweeter beverages. Yet that appeal wanes as more is consumed. It's a classic (pun intended) problem of dosage, and it can bedevil our efforts to understand learning and impact if we aren't careful.
In the case of innovation and learning, scale is a matter of expected or potential effects. The bias in our culture toward making things bigger, bolder, and more widely available comes partly from consumer capitalism, but also from a genuine belief that something good at a small scale will be even better when spread more widely.
That can be a fallacy. It can also be the wrong question, as my colleague John Gargani and Robert McLean have written in their recent book on scaling impact. Understanding scale requires considerable attention and sophistication. Without an understanding of how effects translate across contexts and scales, we risk creating 'New Coke' problems: something attractive and useful at one scale fails at another, more practical one.
Service design is beset by these problems, and so is entrepreneurship. What makes a business a success at one location, with a personalized approach to service and hand-crafted products, might not work when scaled to a second, third, tenth, or thousandth location.
It's the reason the work experience at a small start-up differs from that at a mid-sized business and beyond. The benefits of growth might outweigh the costs, but that can't be assumed; it takes the right research and data to understand what dose matters.
It’s also — like with medications — a matter of the mix.
Perhaps the least discussed issue with innovation and learning (which I combine because good innovation requires good learning) is mix: the right combination of elements to produce an effect.
A recent study of learning and performance found this to be true. Contrary to what some believe, the best way to learn is not to perform perfectly on every test, but to fail some of the time. An 85% success rate on performance measures appears to be the 'sweet spot', the Goldilocks zone where the mix of failure and success leads to the greatest retention.
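To make the idea concrete, here is a toy sketch (my own illustration, not the study's method) of a weighted 'staircase' procedure that adapts task difficulty after each trial so a learner's success rate settles near a chosen target such as 85%. The function name, the linear success model, and all parameter values are hypothetical.

```python
import random

def adaptive_difficulty(target=0.85, trials=1000, step=0.02, seed=42):
    """Toy weighted staircase (illustrative only): nudge difficulty up a
    little after each success and down more after each failure, so the
    long-run success rate settles near `target`."""
    rng = random.Random(seed)
    skill = 0.5        # hypothetical fixed learner ability
    difficulty = 0.5   # starts matched to skill (p_success = 0.5)
    successes = 0
    for _ in range(trials):
        # Assumed linear model: success is likelier when difficulty < skill.
        p_success = min(1.0, max(0.0, 0.5 + (skill - difficulty)))
        if rng.random() < p_success:
            successes += 1
            difficulty += step * (1 - target)  # small step harder
        else:
            difficulty -= step * target        # larger step easier
    return successes / trials

rate = adaptive_difficulty()  # long-run success rate, near the 0.85 target
```

The asymmetric step sizes make the process balance exactly when the success probability equals the target (p * (1 - target) = (1 - p) * target implies p = target), the same logic used in weighted up-down staircases in psychophysics.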
That is a matter of mix related to outcomes; it’s also a matter of inputs.
Input mix is the combination of activities (prototyping, feedback, discussion, synthesis) and resources (human and otherwise) that produces optimal results. Just as a chef or baker tries many different combinations before settling on the final recipe for a successful dish, service and product designers need to consider what mix of inputs shapes the use and impact of their products.
Personal attention might matter, but how much? When should it happen? Does it matter who is doing it or what kind of qualities or knowledge they have? These questions could make the difference.
Mixing Up Everything But Quality
People will put up with an inferior product if they have a positive service experience. In healthcare, we know that even small things, such as indicators of relative progress (e.g., estimated time to be seen) while waiting for a physician, can substantially increase satisfaction and reduce anxiety even when the absolute wait time doesn't change.
These factors are too often neglected in our studies. They point to the need for high-quality, appropriate evaluation in our work, lest we miss or misread the reasons why things happen and design the best out or the worst in.
Good innovation evaluation requires that we consider what we measure and how it contributes to effects in the near and long term, so that we can create sustainable innovations. The methods for doing this demand the same attention to mix that we seek to study, drawing on qualitative, quantitative, and combined sources.
Without doing this, we run a high risk of missing opportunities or compounding risks, and that might mean ending up with New Coke when all you really wanted was more refreshment.
Note: Getting the mix right matters, and it requires a blend of design and design thinking with evaluation (a mix of its own). Contact me if you need help introducing something new to your organization.