Successful evaluators know the power of benchmarks. The Oxford English Dictionary defines the verb ‘to benchmark’ as to “evaluate or check (something) by comparison with a standard.” The Wikipedia definition of benchmarking is:
“Benchmarking is the process of comparing the business processes and performance metrics including cost, cycle time, productivity, or quality to another that is widely considered to be an industry standard benchmark or best practice. Essentially, benchmarking provides a snapshot of the performance of your business and helps you understand where you are in relation to a particular standard.”
From an evaluation standpoint, a benchmark provides us with a comparator to help assess how well (or poorly) a particular program is doing. For corporate leaders, university presidents, and healthcare administrators alike, benchmarking serves as the referent and focus for programming activities and the foundation for ‘best practice’. But what if best practice isn’t good enough? Or, put another way, what if following the leader means going the wrong way?
In the world of consumer or behavioural eHealth, much of what we use as our benchmarks is derived from a healthcare model that is institution-centred, and often technology-centred, rather than patient-centred. These benchmarks are more often tied to the medical treatment of specific problems, focused on technology, and built on a highly linear approach to treatment.
Yet in the age of Google Wave, these linear models seem unlikely to fare well. The future of healthcare, as Frog Design recently opined, is social. What are the benchmarks when your eHealth intervention is not a single technology, but a suite of interacting tools that are online, collaborative, and mobile in different measures at different times, within a diverse context of treatment and preventive behaviour? How do we measure success? What happens when the ‘effect’ of an intervention is social in nature and supported by multiple tools working in different combinations each time?
In evaluation, we often look for the most likely cause of a particular effect. Yet, what is the effect of any one wave in an ocean of influence? While it is impossible to deconstruct the influence of that wave, it is possible to anticipate what a wave might do under certain conditions and, if the timing is right, it might be possible to get on top of that wave and surf it to shore.
What if we took a wave model and, like surfers, read the seas to determine the appropriate time to dive in, acknowledging that the break will occur differently, the velocity might vary, and the height can’t be predicted, but that through activity and practice we can enhance our anticipatory guidance systems to better select waves that might lead to some fine surfing? My research team at the University of Toronto has begun working on these models and methods because, as anyone in public health can tell you, the tide is high, and with complex problems like chronic disease, the waves are getting big. Twitter, Facebook, blogs, and iPhone apps big and small are all collectively influencing people’s behaviour in subtle ways. Only by acknowledging that these collective tools are both the cause and consequence of change can we begin to develop evaluation models that make sense of their impact on the world around us.