Tag: metrics

evaluation

Meaning and metrics for innovation


Metrics are at the heart of evaluating impact and value in products and services, although they are rarely straightforward. Determining what makes a good metric requires first thinking about what a metric means.

I recently read a story on what makes a good metric from Chris Moran, Editor of Strategic Projects at The Guardian. Chris’s work is about building, engaging, and retaining audiences online so he spends a lot of time thinking about metrics and what they mean.

Chris, with support from many others, outlines the five characteristics of a good metric as being:

  1. Relevant
  2. Measurable
  3. Actionable
  4. Reliable
  5. Readable (less likely to be misunderstood)

(What I liked was that he also pointed to additional criteria that didn’t quite make the cut but, as he suggests, could.)

This list was developed in the context of communications initiatives, which is exactly the point we need to consider: context matters when it comes to metrics. Context is also holistic, so we need to consider these five criteria (plus the others?) as a whole if we’re to develop, deploy, and interpret data from these metrics.

As John Hagel puts it: we are moving from the industrial age, where standardized metrics and scale dominated, to the contextual age.

Sensemaking and metrics

Innovation is entirely context-dependent. A new iPhone might not mean much to someone who has had one, but could be transformative to someone who’s never had that computing power in their hand. Home visits by a doctor or healer were once the only way people were treated for sickness (and this is still the case in some parts of the world); now home visits are novel and represent an innovation in many areas of Western healthcare.

Demographic characteristics are one area where sensemaking is critical when it comes to metrics and measures. Sensemaking is the process of literally making sense of something within a specific context. It’s used when there are no standard or obvious means to understand the meaning of something at the outset; rather, meaning is made through investigation, reflection, and other data. It is a process that involves asking questions about value, and value is at the core of innovation.

For example, identity questions on race, sexual orientation, gender, and place of origin all require intense sensemaking before, during, and after use. Asking these questions gets us to consider: what value is it to know any of this?

How is a metric useful without an understanding of the value it is meant to reflect?

What we’ve seen from population research is that failure to ask these questions has left many at the margins without a voice: their experience isn’t captured in the data used to make policy decisions. We’ve seen the opposite when we do ask these questions unwisely: strange claims about associations, over-generalizations, and stereotypes formed from data that somehow ‘links’ certain characteristics to behaviours without critical thought. In short, we create policies that exclude because we have data.

The lesson we learn from behavioural science is that, if you have enough data, you can pretty much connect anything to anything. Therefore, we need to be very careful about what we collect data on and what metrics we use.

The role of theory of change and theory of stage

One reason for these strange associations (or their absence) is the lack of a theory of change to explain why any of these variables ought to play a role in explaining what happens. A good theory of change provides a rationale for why something should lead to something else and what might come from it all. It is anchored in data, evidence, theory, and design (which ties it together).

Metrics are the means by which we assess the fit of a theory of change. What often gets missed is that fit is also a matter of timing: some metrics fit better at different points in an innovation’s development.

For example, a particular metric might be more useful in later-stage research where there is an established base of knowledge (e.g., when an innovation is mature) than when we are looking at the early formation of an idea. The proof-of-concept stage (i.e., ‘can this idea work?’) is very different from the ‘can this scale?’ stage. To that end, metrics need to be paired with something akin to a theory of stage, which would help explain how an innovation might develop at the early stages versus later ones.

Metrics are useful. Blindly using metrics, or using the wrong ones, can be harmful in ways that might be unmeasurable without proper thinking about what they do, what they represent, and which ones to use.

Choose wisely.

Photo by Miguel A. Amutio on Unsplash

education & learning

The Quality Metric in Education


What goes on the pedestal of learning?

What is quality when we speak of learning? In this third post in a series on education and evaluation metrics, the issue of quality within graduate and professional education is explored, with more questions than answers about the very nature of learning itself.

But what does learning really mean? And do we set the system up to adequately assess whether people do it, and whether that learning has any positive impact on what they do in their practice?

What do you mean when you say learning?

The late psychologist Seymour Sarason asked the above question with the aim of provoking discussion and reflection on the nature and possible outcomes of educational reform. Far from being glib, Sarason felt this question exposed the slippery nature of the concept of learning as used in the context of educational programming and policy. It’s a worthwhile question when considering the value of university and professional education programming. What do we mean when we say learners are learning?

The answer to this question exposes the assumptions behind the efforts to provide quality educational experiences to those we call learners. To be a learner one must learn…something.

The Oxford English Dictionary defines learning this way:

learning |ˈlərniNG|

noun

the acquisition of knowledge or skills through experience, practice, or study, or by being taught: these children experienced difficulties in learning | [ as modifier ] : an important learning process.

• knowledge acquired in this way: I liked to parade my learning in front of my sisters.

ORIGIN Old English leornung (see learn, -ing¹).

This might sufficiently answer Dr Sarason, except there is no sense of what the content is or whether that content is appropriate, sufficient, timely, or well-supported with evidence (research- or practice-based): in other words, the quality of learning.

Knowledge translation professionals know that learning through evidence is not achieved through a one-size-fits-all approach and that the match between what professionals need and what is available is rarely clean and simple (if it were, there would be little need for KT). The very premise of knowledge translation is that content itself is not enough and that sometimes another process is needed to help people learn from it. This echoes what Larry Green argues: practice-based evidence is needed to get better evidence-based practice.

How do we know when learning is the answer (and what are the questions)?

If our metric of success in education is that those who engage in educational programming learn, how do we know whether what they have learned is of good quality? How do we know what is learned is sufficient or appropriately timed? Who determines what is appropriate and how is that tested? These are all questions pertaining to learning and the answers to them depend greatly on context. Yet, if context matters then the next question might be: what is the scope of this context and how are its parameters set?

Some might choose academic discipline as the boundary condition. To take learning itself as an example, how might we know if learning is a psychology problem or a sociology problem (or something else)? If it is a problem for the field of psychology, when does it become educational psychology, cognitive psychology, community psychology or one of the other subdisciplines looking at the brain, behaviour, or social organization? Successful learning through all of these lenses means something very different across conditions.

Yet, consider the last time you completed some form of assessment on your learning. Did you get asked about the context in which that learning took place? When you were asked questions about what you learned on your post-learning assessment:

  • Did it take into account the learning context of delivery, reception, use, and possible ways to scaffold knowledge to other things?
  • Did your learner evaluation form ask how you intended to use the material taught? Did you have an answer for that and might that answer change over time?
  • Did it ask if your experience of the learning event matched what the teachers and organizers expected you to gain, and did you know what that really was?
  • Did you know at the time of completing the evaluation whether what you were exposed to was relevant to the problems you needed to solve or would need to solve in the future?
  • Did you get asked if you were interested in the material presented and did that even matter?
  • Was there an assumption that the material you were exposed to could only be thought of in one way and did you know what that way was prior to the experience? If you didn’t think of the material in the way that the instructors intended did you just prove that the first of these two questions is problematic?

Years of work in post-secondary teaching and continuing professional education suggests to me that your answer to these questions was most likely “no”, except for the very last one.

These many questions are not posed to antagonize educators (or “learners”), for there are no single or right answers to any of them. Rather, they are intended to extend Seymour Sarason’s question to the present day and put it in the context of graduate and professional education at a time when both areas are being rethought and rationalized.

Learning to innovate (and being wrong)

A problem with the way much of our graduate and professional education is set up is that it presumes to have the answers to what learning is and seeks to deliver the content that fills a gap in knowledge within a very narrow interpretation. This rests on an assumption that what was relevant in the past is both still appropriate now and will be in the future, unless we are speaking of a history lesson. However, innovation and discovery, and indeed learning itself, are based on failure, discomfort, and not knowing the answers as much as on building on what has come before us. There is no doubt that a certain base level of knowledge is required to do most professional and scientific work and that building a core is important, but it is far from sufficient.

The learning systems we’ve created for ourselves are based on a factory model of education, not for addressing complexity or dynamic systems like we find in most social worlds. We do not have a complex adaptive learning system in place, one that supports innovation (and the failures that produce new learning) because:

If you’re not prepared to be wrong, you’ll never come up with anything original. – Sir Ken Robinson, TED Talk 2006

The above quote comes from education advocate Sir Ken Robinson in a humorous and poignant TED talk delivered in 2006 and then built on further in a second talk in 2010. Robinson lays bare the assumptions behind much of our educational system and how it is structured. He also exposes the problem we face in advancing innovation (my choice of term) because we have designed a system that actively seeks to discourage wide swaths of learning that could support it, particularly with the arts.

Robinson points to the conditions of interdisciplinary learning and creativity that emerge when we free ourselves of the factory model of learning on which much of our education is set up, one that “produces” educated people. If we are assessing learning and we go outside our traditional disciplines, how can we assess whether what we teach is “learned” if we have no standard to compare it to? Therein lies the rub with the current models and metrics.

If we are to innovate and create the evidence to support it we need to be wrong. That means creating educational experiences that allow students to be wrong and have that be right. If that is the case, then it means building an education system that draws on the past, but also creates possibilities for new knowledge and learning anchored in experimentation and transcends disciplines when necessary. It also means asking questions about what it means to learn and what quality means in the context of this experimental learning process.

If education is to transform itself and base that transformation on any form of evidence then getting the right metrics to evaluate these changes is imperative and quality of education might just need to be one of them.

Image: Shutterstock

education & learning

Rationalized Education and The Futures of the University

Hallowed Halls, Empty Promises?


Next to the church, the university may be the most enduring formal institution in our society. And like nearly every institution from banking to manufacturing to healthcare and even the church, the university is facing a major disruption from social and technological change.

The church’s purpose, simplified, is to provide a place of worship, communion, and education on matters of faith and spiritual guidance.

The university is a place for preparing people to be better citizens, scientists or scholars, and professionals and to advance understanding of our world and universe.

Just as many question how well the church is realizing its purpose, so too are many questioning the university and how it is faring in its mission and purpose.

CENSEmaking returns to a discussion started last year with a requiem for the dream of a university no longer experienced by someone who aspired to serve within it. Following my advice to new scholars and attempts to peel back the curtain to show more about what university looks like for those outside it, it seems appropriate to revisit that discussion to explore the state of post-secondary education as another year passes.

This is the first in a series of upcoming posts looking at the future(s) of learning and professional education.

Rationalizing Education

Universities are rethinking things in a big way, led by changes to the way they are funded. Quoting from a recent article in the Globe and Mail on the state of funding for Canadian universities:

Midsize Canadian universities are starting a new kind of cost-cutting exercise as they face the prospect of prolonged austerity and sustained pressure to show their graduates are succeeding.

Administrators have tended to slash budgets equally across the board, leaving it up to each dean and department to set targets inside their faculties. Now, Canadian schools are importing a movement from the United States in which economic hardship is viewed as an opportunity to refocus scarce dollars on faculties that deliver.

If we parse this language, we can see that it points to a new way of evaluating the impact of the university and how it makes decisions about what to invest in:

“Instead of making decisions based on internal political factors or you-scratch-my-back-I’ll-scratch-yours, or whatever else has gone on in the past, it’s time for us to shift to a culture of evidence,” said Robert C. Dickeson, the U.S. consultant at the heart of the crusade against across-the-board cuts.

Ah, evidence. This powerful concept is the bedrock of science, has transformed the way medicine is practiced, and is now being applied to the ‘business’ of education. In Canada, universities are now seeking data about their product to inform their strategic decisions. Some universities are going further than others in applying some form of evidence to their policy and strategy to deal with current funding challenges:

The University of Guelph has gone furthest. Facing a $32-million shortfall over the next four years, Guelph’s leaders hired Dr. Dickeson for help after an invitation to a workshop he runs landed in provost Maureen Mancuso’s inbox. He was on hand at a Guelph University town-hall meeting in late November where president Alastair Summerlee laid out the challenge: rising costs, flat government funding and capped tuition, combined with a shortage of space to keep boosting enrolment.

“People outside of our institutions are full of a rhetoric around ‘do we produce quality, a quality product?’” Dr. Summerlee told a crowd of about 300. “These things make a case for actually trying to prioritize what we’re doing. … We need to act now.”

The plan is Darwinian. Each of the university’s nearly 600 programs and services, from undergraduate biology to the parking office, has to complete a “program information report” answering 10 criteria, to be reviewed and ranked by a task force of faculty, staff and students.

Embedded in the middle of this quote is the line: ‘do we produce quality, a quality product?’

I have been involved in academic governance and policy making for 20 years, first as a student representative at the undergraduate and graduate levels and later as a full-time faculty member. The timing of my post-secondary life coincided with the last major shift in educational funding and rationalization, which began in the early 1990s with the first introduction of student fees and the start of philanthropic named sponsorship in Canadian universities. Prior to this time, tuition was all students paid to access services and get an education, and buildings, faculties, and facilities were named based on criteria that were not tied to specific donations.

Despite all of this, quality was rarely a term used explicitly to shape strategy.

Money Matters and Defining Quality

I have never, not once, witnessed a major decision made on the basis of educational quality when juxtaposed against financial concerns. I’ve been a student, trainee, or faculty member at five different universities and a visiting or guest lecturer or examiner at many more institutions worldwide, and never have I seen quality of education trump fiscal or logistical issues on matters of great significance. Sure, there are small decisions to include particular content in a course or program, or to invite or disinvite a particular speaker based on perceptions of quality, but no program I’ve known chose, for example, to limit recruitment or enrolment because there were not enough resources to give students a quality experience.

So if universities are now being judged on quality, what does this mean in practice?

Is quality about jobs? If so, then are they the jobs that students want, the ones they get (which may not be the same thing), the ones that students are trained for, or the ones that the market produces?

Is quality about what gets taught, what gets learned, or what gets applied? If it is some combination, then in what measure?

Is quality about what the market asks for or what the world’s citizens and its ecosystem (including plants, animals and oceans) demand?

Is quality about training people for jobs and roles that have traditionally existed, exist now, or may emerge in the future?

Is quality about the canon, questioning the canon, or re-discovering or creating new canons? Or all of them?

These are some of the questions worth asking if we wish to understand what the futures of the university might be and whether any of those possible futures mean not existing at all. Stay tuned.

Photo: University by martybell from DeviantArt.