Evaluation, Evidence and Moving Beyond the Tyranny of ‘I Think’

I think, you think

Good evidence provides a foundation for program decision-making that is dispassionate, comparable, and open to debate and review, yet it is often ignored in favour of opinion. Practice-based evidence makes room for expert opinion; the way to bring that opinion into the discussion, however, is through the very means we use to generate traditional evidence.

Picture a meeting of educators, health practitioners, or social workers discussing a case or an issue with a program. Envision those people presenting evidence for a particular decision based on what they know and can find. If they are operating in a complex, dynamic field, chances are the evidence will be incomplete, but some will exist. Once these studies and cases are presented, the switch invariably comes when someone says “I think...” and offers an opinion.

Incomplete evidence on its own settles little, and complex systems require a very different use of evidence altogether. As Ray Pawson illustrates in The Science of Evaluation, science in the realm of complexity requires a kind of sensemaking and deliberation that differs greatly from extrapolating findings from simple or even complicated program data.

Practice-based evidence: The education case

Larry Green, a thought leader in health promotion, has been advocating to the health services community that if we want more evidence-based practice, we need more practice-based evidence. Green argues that systems science has much to offer by drawing connections between the program as a system and the systems the program operates in. He further argues that we are not truly creating evidence-based programs without adding practice-based knowledge to the equation.

Yet practice-based evidence can quickly devolve into “I think” statements: opinion built on unreflective bias, personal prejudice, convenience, and lack of information. To illustrate, consider curriculum decision-making at universities (and likely many primary and secondary schools too). Having been part of training programs at different institutions — in-house degree programs and multi-centre networks between universities — I can say I have rarely seen evidence come into play in decisions about what to teach, how to teach, what is learned, and what the purpose of the programs is, yet I always hear “I think”.

Part of the reason is that there is little useful data for designing programs. In post-secondary education we use remarkably crude metrics to assess student learning and progress. Most often, we rely on imperfect time-series data such as assignments, aggregated together to form a grade. In undergraduate education, that usually means multiple-choice exams, because they are easier to distribute, grade, and administer within resource constraints. For graduate students we still use exams, though perhaps papers as well. But rarely do we have any process data on which to base decisions.
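
As a toy illustration (the scores here are invented), a short sketch in Python shows how aggregation erases the very trajectory that a time series carries:

    from statistics import mean

    # Hypothetical assignment scores in chronological order
    improving = [55, 62, 70, 78, 85]   # a student gaining ground all term
    declining = [85, 78, 70, 62, 55]   # a student losing ground all term

    # Aggregated into a grade, the two trajectories are indistinguishable
    print(mean(improving), mean(declining))  # 70 70

Two very different learning stories collapse into the same number; the difference between them is precisely the process data the grade book discards.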

Yet we recruit students based on quotas and rarely look at where the intellectual and career ‘markets’ are going when setting our programs. Instead, we use opinion: “I think we should be teaching X” or “I think we should teach X this way”, with little evidence for why these decisions are made. It is remarkable how learning in organizations — whether formal or informal — is left without a clear sense of what the point is. Asking why we teach something, why people need to learn it, and what they are expected to do with that knowledge is far more than a luxury. To get a sense of this absurdity, NYC principal Scott Conti’s talk at TEDx Dumbo is worth a watch. He points to the mismatch between what we seek to teach and what we expect from students in their lives.

Going small to go big

There is, as Green points out, a need for a systems approach to making these anecdotes and opinions useful. Many of the issues with education have to do with resources and policy directions set at levels well beyond the classroom, which is why a systems approach to evaluation matters: applied across the system, with its structures and complexity taken into account, evaluation can be an enormous asset. But how does this work for teachers, or for anyone who operates at the front line of their profession?

Evaluation can provide the raw materials for discussion in a program. By systematically collecting data on the way we form decisions, design our programs, and make changes, we create a layer of transparency and an opportunity to integrate practice-based evidence more fully. The term “system” in evaluation or programming often evokes a sense of despair because of a perception of size. Yet systems exist at multiple scales, and a classroom and the teaching within it are systems too.

One of the best ways to cultivate practice-based evidence is to design evaluations that take into account the way people learn and make decisions. That begins with a design-oriented approach: paying attention to how people operate in the classroom, both as teachers and learners. From there, we can match those activities to the goals of the classroom — the larger goals, the ones that ask “what’s the point of people being here?” — and to the metrics for assessment within the culture of the school. Next, consider ways to collect data on a smaller scale through things like reflective practice journals, recorded videos of teaching, observational notes, and markers of significance such as moments of insight, heightened emotion, or decisions.
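
To make that data-collection step concrete, here is a minimal sketch in Python of what a structured reflective-practice record might look like. The schema and field names are my own illustration, not a prescribed format; the point is simply that entries captured consistently across sessions can be tallied and compared rather than remaining private anecdotes.

    from dataclasses import dataclass, field
    from datetime import date
    from collections import Counter

    @dataclass
    class JournalEntry:
        # One reflective-practice record for a single class session
        session_date: date
        activity: str                  # what was taught, and how
        goal: str                      # the larger purpose the activity serves
        markers: list = field(default_factory=list)  # e.g. "insight", "heightened emotion", "decision"
        notes: str = ""

    def marker_frequencies(entries):
        # Tally markers of significance across sessions so that patterns,
        # not single anecdotes, inform the discussion
        return Counter(m for e in entries for m in e.markers)

    # Hypothetical term log with two sessions
    log = [
        JournalEntry(date(2014, 1, 14), "case discussion", "apply theory to practice",
                     markers=["insight", "decision"], notes="Switched groups mid-class."),
        JournalEntry(date(2014, 1, 21), "lecture and quiz", "recall core concepts",
                     markers=["heightened emotion"], notes="Quiz anxiety was visible."),
    ]
    print(marker_frequencies(log))

Over a term, a teacher could review these tallies alongside video or observational notes to ask whether moments of insight cluster around particular activities, which is exactly the kind of process data the grade book never captures.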

By capturing these small decisions, it is possible to generate practice-based evidence that goes beyond “I think”. It also lets others in. Rather than ideas forming exclusively in one person’s head, we can show where people’s knowledge comes from and allow those around them to learn from it. Too often, the talents and tools of great leaders and teachers are accessible only in formal settings — like lectures or discussions — and not evident to others in the fire of everyday practice.

What are you doing to support evaluation at a small scale and to let others access your practice-based knowledge, creating that practice-based evidence?

References:

Green, L. W. (2006). Public health asks of systems science: To advance our evidence-based practice, can you help us get more practice-based evidence? American Journal of Public Health, 96(3), 406–409.

Green, L. W. (2008). Making research relevant: If it is an evidence-based practice, where’s the practice-based evidence? Family Practice, 25(Suppl 1), i20–i24.

Pawson, R. (2013). The science of evaluation: A realist manifesto. London, UK: Sage Publications.
