Category: evaluation

art & design, complexity, design thinking, evaluation, systems science

Systems Thinking and Design: A Case for Egypt?

Politics provides a great analogy for why systems thinking and design fit together, and for how effective design and systems thinking work in concert. It's time our politicians and policy makers started considering the role of design and systems thinking a little more, and Egypt provides a great example of what happens when those areas come together.

Designing (Building) a New Egypt?

Over the past month I've done a lot of reading on the role of design and the culture of designers. The reasons are many, but mostly it is because I see the challenges we face as a society as ones of poor design and of an inability to see systems, think about them clearly, and translate that thinking into action. This last part is really where design comes into play.

Take the current situation in Egypt, which is in many ways a design problem. Many years ago, Egyptians were comfortable, if not always happy, to accept a government designed to be under the heavy influence of one person. Despite the flaws of that leadership, there was acceptance and general, if somewhat muted, support for that model through the last part of the 20th century and the early years of this one. The recent events in Tunisia showed Egyptians that there were alternatives to their current model of politics and that the people could design a new leadership. The past few days have seen a remarkable chain of events that represents the culmination of the Egyptian people's desire for change.

Design thinking provides a lens for viewing problems and developing contextual solutions, or new situations. Exploring a problem and developing new solutions through design thinking involves a number of steps, usually including the following (a rough sketch of the full loop, in code, follows the list):

  • Define the scope, scale and context of the problem at hand;
  • Research the problem and determine causes, consequences, alternatives and opportunities related to that problem;
  • Start developing possible options (ideate) in ways that include the wild and outlandish, to ensure there is sufficient opportunity to build on ideas encompassing the fullest possible perspective on the issue, even if some of those ideas seem impractical;
  • Prototype some new options. In the case of political systems, this might mean trying new ways to organize parts of the government, shifting the leadership structure, or conducting local experiments with new models of governance in relatively safe environments;
  • Evaluate the implementation of the prototype, incorporate the findings into successive models, and re-implement them as new prototypes. This rapid-cycle prototyping on small-scale experiments enables a safe-fail culture to form, rather than aiming for the impractical fail-safe models that almost never work in complex systems;
  • Implement and repeat. Take the lessons learned from a utilization-focused evaluation approach or a developmental evaluation approach (or a combination of the two), implement the necessary changes and repeat.
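For readers who like to see a process as code, here is a minimal, purely illustrative sketch of that loop in Python. Everything in it is a hypothetical stand-in: real design work has no numeric "fit" score or known target. The point is the shape of the cycle: small, safe-to-fail experiments whose findings feed the next prototype.

```python
import random

def evaluate(prototype: float, context: float) -> float:
    """Stand-in for a utilization-focused evaluation: score how well
    the prototype fits its context (here, crudely, 1 - distance)."""
    return max(0.0, 1.0 - abs(prototype - context))

def refine(prototype: float, context: float) -> float:
    """Fold findings into the next cycle: a small, safe-to-fail
    adjustment rather than one big fail-safe leap."""
    step = random.uniform(0.0, 0.2)
    return prototype + step if prototype < context else prototype - step

def design_cycle(ideas, context, good_enough=0.95, max_cycles=25):
    """Define, research, ideate, prototype, evaluate, repeat."""
    # Begin with the most contextually plausible of the ideated options.
    prototype = min(ideas, key=lambda idea: abs(idea - context))
    for _ in range(max_cycles):
        fit = evaluate(prototype, context)
        if fit >= good_enough:          # contextual "good enough", not perfect
            return prototype, fit
        prototype = refine(prototype, context)
    return prototype, evaluate(prototype, context)

# Ideation deliberately includes the wild and outlandish:
ideas = [0.1, 0.4, 2.5, -1.0]
prototype, fit = design_cycle(ideas, context=1.0)
print(f"settled on {prototype:.2f} with fit {fit:.2f}")
```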

The manner in which the outcomes and implementation of the new model are assessed can be viewed from a systems thinking perspective, largely because what is being designed is indeed a system. By considering the boundaries and the interconnections between civil society and government, and by articulating the intention guiding the change, it is possible to design a new political system for one of the world's oldest societies in a manner that honours the past and creates a compelling, healthy future.

Change isn’t usually so straightforward, but that’s often because there isn’t the planning and process in place for change. Now is a perfect time to bring that about as Egyptians struggle along with their Northern African neighbours in Tunisia to find ways to bring their intention to bear on the way their country is governed and, in doing so, create one of the most significant opportunities for design and systems thinking in either of those nations’ histories.

complexity, education & learning, evaluation, social systems

Complexity and Child-Rearing: Why Amy Chua is Neither Right nor Wrong

Family

Science strives for precision, for finding the right or at least the best answers to questions. The science of complexity means shifting our thinking from right answers to appropriate ones, and from what is best to what is good. The recent debate over parenting (particularly among Chinese families) illustrates how framing the issue and the outcomes makes a big difference.

Amy Chua “is probably the most reviled mother in America,” according to Margaret Wente writing in the Globe and Mail. In her column, Wente looks at the phenomenon Chua writes about in her new book on parenting, Battle Hymn of the Tiger Mother. What has drawn such attention to Chua and her book is that she advocates a very strict method of parenting aimed at achieving very specific objectives with her children. The payoff? Her children are very successful. This is not a new argument, particularly when it comes to Chinese and other Asian cultural stereotypes. But like many stereotypes, it emerges from a kernel of truth that gets applied as a universal rather than in context. Judging by the comments on the original Wall Street Journal story that attracted attention and on the Globe and Mail's review page, I would say there is some truth to this stereotype and some wild overstatement as it gets applied universally to parenting.

The comments and commentary on this crudely fall into two camps (which, for reasons I'll elaborate on later, is ironic given how problematic the whole idea of reducing arguments to twos is, but go with me on this): 1) Amy Chua is recalling my childhood or my parenting reality and it's nice to hear someone acknowledge it; and 2) Amy Chua is promoting harmful, inaccurate, racist stereotypes.

Child-raising is a common example of a complex system, showing how past experience is not necessarily a formula for future success. Thus, you can have the same parents, same household, even same genes (in the case of twins) and get two very different outcomes. Complex systems do not lend themselves to recipes or “best practices”. You can’t shoehorn complexity into “right” / “wrong” and either/or positions.
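A tiny piece of code makes the point concrete. The logistic map below is a standard toy example of a sensitive system; it is emphatically not a model of parenting, and every number in it is arbitrary, but it shows how two nearly identical starting points, run through the same "recipe", can end up in completely different places:

```python
# Classic toy illustration of sensitivity in complex systems, not a
# model of parenting: identical rules, near-identical starts, wildly
# different outcomes.

def logistic_map(x: float, r: float = 3.9) -> float:
    """One step of the logistic map, which behaves chaotically at r = 3.9."""
    return r * x * (1 - x)

twin_a, twin_b = 0.5000000, 0.5000001   # the same "recipe", almost the same start
for _ in range(40):
    twin_a = logistic_map(twin_a)
    twin_b = logistic_map(twin_b)

# The tiny initial difference has grown until the trajectories bear
# no resemblance to one another:
print(f"twin_a: {twin_a:.4f}, twin_b: {twin_b:.4f}")
```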

What is interesting about the discussion around Chua's parenting style, which she claims reflects traditional Chinese behaviour (I am not Chinese, so this is out of my realm for comment), is that the focus is on raising successful children, not necessarily happy, well-adjusted, self-determined or even creative children. And success, in the terms referred to, means achieving or exceeding certain prescriptive standards for socially acceptable activities. This might mean acceptance at a prestigious school, an error-free performance, or a straight-A report card. It is a rather narrowly prescribed form of achievement based upon a particular set of cultural conditions and assumptions.

One of the problems I see in this debate is that people are conflating the two types of outcomes, which is where the complexity comes in. What Chua has done is treat parenting as a set of complicated activities and outputs, rather than as part of a complex system. She has sought to reduce the complexity in the system of parenting by focusing on what can be tangibly measured, creating a familial system aimed at reducing the likelihood that those objectives go unmet. Her benchmarks for success are visible outcomes, not the kind that come from growing one's self-esteem, building true friendships, or learning to love. This isn't to say that her children, or those raised by "tiger parents", don't have such experiences, but this isn't what her method of parenting is focused on. And therein lies the rub, and why much of the debate surrounding Chua's book is misaligned.

If you are assessing the life of a person and their total experience as a human being, Chua's method of parenting is quite problematic. Success in this situation has many different paths and may not even have a clear outcome. What does it really mean to be successful if love, happiness, and self-fulfilment are the outcomes of interest, particularly when all of those things change and evolve over a week, a month or a lifetime? This is the kind of task one might use developmental evaluation to assess, if the aim were to determine what kind of impact a particular form of parenting has on children's lives. Margaret Wente's article draws on some examples of "tiger parenting" among those who achieved much "success" by the benchmarks of externally validated standards, and finds mixed results when "success" is viewed as part of a whole person. Andre Agassi grew to loathe tennis because of his experience, while Lang Lang appears to love his piano playing. Both have achieved success in some ways, but not all.

These two examples also go to show that with human systems there is little ability to truly control the outcomes and the process. Even if one can reduce outcomes to complicated or simplistic terms, those outcomes are still influenced by complex interactions. Complicated systems can be embedded within complex ones, and the opposite. So no matter what kind of prescription a person uses, no matter how tightly the controls are set, complexity has a way of finding its way into human affairs.

So is Amy Chua’s method of parenting successful or not, supportive or harmful, right or wrong? The answer is yes.

behaviour change, design thinking, evaluation, research, science & technology

The Science of Design & the Design of Science

Glasgow Science Centre (by bruce89, used under Creative Commons Licence)

As the holidays approach I've been spending an increasing amount of time looking at a field that has become my passion: design. Design is relevant to my work in part because it frequently deals with the complex, requires excellent communication, and, as Herbert Simon would suggest, is all about those interested in changing existing situations into preferred ones.

Yet for all the creativity, innovation and practicality that design has, I find it lacking in the scientific rigour it requires to gain the widespread acceptance it deserves.

This is not to say that designers do not employ rigorous methods or that there is no science informing design. Architecture, for example, a field where design is embedded and entwined, employs high levels of both rigour and science in its practice. The issue isn't that these two concepts aren't applied; they just aren't applied to each other. I was heartened this week to see Dexigner profile a new pamphlet on the science of design. Although true in spirit, it wasn't what I expected, as it largely profiled ways to assess the quality of design projects from the perspective of design.

What if we could assess the impact of design on a larger scale, a social and human scale?

Interaction designers speak of this need to connect to the human in design work. The emergent field of social design is exemplified by groups like Design 21, which aim to produce better products for social good. All of this is important, but it's important largely because we say it is so. Rhetorical arguments are fine, but at some point design needs to confront the problem of evidence.

Does “good” design lead to better products than “bad” design?

What components of design thinking are best suited to addressing certain kinds of problems? Or are there simply problems that design thinking is just better at addressing than other ways of approaching them?

What methods of learning produce effective design thinkers? And what is effective design thinking anyway? Does it exist?

What is the comparative advantage of a design-forward approach to addressing complex problems over one where design is less articulated, or not articulated at all?

These are just some of the many questions for which there seems to be little supporting evidence. A scientific approach to design might be one of the first ways of addressing this. A scientifically grounded design field is far more likely to garner the support of the decision makers who will approve and fund the kind of projects that can have wide-scale impact. Design is making serious inroads into fields such as business, education, and health, but it remains a niche market when it has the potential to be much larger.

Roger Martin has argued that the reliance on scientific approaches to problem solving runs counter to much of design thinking. This assumes that science is applied in a very detached, prescriptive manner, which is common, but it is not the only way. Michael Gibbons and colleagues have described two forms of science, which they call Mode 1 and Mode 2. Mode 1 is the one most people think of when they hear the term "scientist": the (usually) lone researcher working in a lab on problems driven by curiosity, with the aim of generating discoveries. For this reason, it is often referred to as discovery-oriented research.

Mode 2 research is designed to be problem-centred, aimed at answering questions posed by practical issues, with a strong emphasis on knowledge translation. This is terrain more familiar to the designer.

Design presents the opportunity to transcend both of these Modes into something akin to Mode 3 research, which I surmise would blend the abductive reasoning inherent in Roger Martin's view of design thinking with a discovery-oriented approach that goes beyond the problem at hand to create value beyond the contracted issue. A design-oriented approach to the science of design would leverage the creative processes of designers alongside some of the tools and methods familiar to researchers in Mode 1 and Mode 2 science. Can we not do detailed ethnographic studies looking at the process of design itself? Is there any reason why we cannot, with limits acknowledged and in appropriate contexts, attempt randomized controlled trials looking at certain design thinking activities and situations?

If design is to make a leap beyond niche market situations, a new field must dawn within design + science and that is the science of design and the design of science.

design thinking, education & learning, evaluation, innovation

More Design Thinking & Evaluation

Capturing Design in Evaluation (CameraNight by Dream Sky, Used under Creative Commons License)

On the last day of the American Evaluation Association conference, which wrapped up on Saturday, I participated in an interactive session on design thinking and evaluation by a group from the Savannah College of Art and Design.

One of the first things that was presented was some of the language of design thinking for those in the audience who are not accustomed to this way of approaching problems (which I suspect was most of those in attendance).

At the heart of this was the importance of praxis, which is the link between theory, design principles and practice (see below).

Design Thinking

Part of this perspective is the belief that design is less a field about the creation of things in themselves than a problem-solving discipline.

Design is a problem solving discipline

When conceived of this way, it becomes easier to see why design thinking is so important and more than a passing fad. Inherent in this way of thinking are principles that demand inclusion of multiple perspectives on the problem, collaborative ideation, and purposeful wandering through a subject matter.

Another perspective is to view design as serious play in support of learning.

Imagine taking the perspective of design as serious play to support learning.

The presenters also introduced a quote from Bruce Mau, which I will inaccurately capture here, but is akin to this:

One of the revelations in the studio is that life is something that we create every single day. We create our space and place.

Within this approach is a shift from sympathy with others in the world to empathy. It is less about evaluating the world and more about engaging with it to come up with new insights that can inform its further development. This is really a nod (in my view) to developmental evaluation.

The audience was enthralled and engaged and, I hope, willing to take the concept of design and design thinking further in their work as evaluators. In doing so, I can only hope that evaluation becomes one of the homes for design thinking beyond the realm of business and industrial arts.

design thinking, education & learning, evaluation, innovation, research

Design Thinking & Evaluation

Design Thinking Meets Evaluation (by Lumaxart, Creative Commons Licence)

This morning at the American Evaluation Association meeting in San Antonio I attended a session very near and dear to my heart: design thinking and evaluation.

I have been a staunch believer that design thinking ought to be one of the most prominent tools for evaluators and that evaluation ought to be one of the principal components of any design thinking strategy. This morning, I was with my “peeps”.

Specifically, I was with Ching Ching Yap, Christine Miller and Robert Fee and about 35 other early risers hoping to learn about ways in which the Savannah College of Art and Design (SCAD) uses design thinking in support of their programs and evaluations.

The presenters went through a series of outlines for design thinking and what it is (more on that in a follow-up post), but what I wanted to focus on here was the way in which evaluation and design thinking fit together more broadly.

Design thinking is an approach that encourages participatory engagement in planning and setting out objectives, as well as in ideation, development, prototyping, testing, and refinement. In evaluation terms, it is akin to action research and utilization-focused evaluation. But perhaps its closest correlate is developmental evaluation (DE). DE uses complexity-science concepts to inform an iterative approach to evaluation centred on innovation: the discovery of something new (or the adaptation of something into something else) and the application of that knowledge to problem solving.

Indeed, the speakers today positioned design thinking as a means of problem solving.

Evaluation, at least DE, is about problem solving: collecting data that serves as feedback to inform the next iteration of decision making. It is also a form of evaluation that is intimately connected to program planning.
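As a rough sketch of that feedback loop in code (not an official DE method; the class, the "engagement" signal and the adaptation rule are all invented for illustration):

```python
import random

class DevelopmentalEvaluationLoop:
    """Toy illustration of developmental evaluation as feedback:
    each delivery cycle generates data that shapes the next decision.
    All names, numbers and rules here are invented for this sketch."""

    def __init__(self, intensity: float = 0.5):
        self.intensity = intensity   # a stand-in program design parameter
        self.findings = []           # evaluation data kept from every cycle

    def deliver_and_observe(self) -> float:
        """Run one program cycle and observe a noisy engagement signal
        (pretend participants respond best to an intensity near 0.7)."""
        engagement = 1.0 - abs(self.intensity - 0.7) + random.gauss(0, 0.05)
        self.findings.append(engagement)
        return engagement

    def adapt(self) -> None:
        """Use the latest data as feedback on the next design decision."""
        if len(self.findings) >= 2 and self.findings[-1] < self.findings[-2]:
            self.intensity -= 0.1    # the last change hurt; back off
        else:
            self.intensity += 0.05   # keep probing in this direction

loop = DevelopmentalEvaluationLoop()
for cycle in range(6):
    print(f"cycle {cycle}: engagement {loop.deliver_and_observe():.2f}")
    loop.adapt()
```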

What design thinking offers is a way to extend that planning in new ways that optimize opportunities for feedback, new information, participation, and creative interaction. Design thinking approaches, like the workshop today, also focus on people's felt needs and experiences, not just their ideas. In our session today, six audience members were recruited to play three facets of a store clerk or three facets of a customer: the rational, emotional and executive mind of each. A customer comes looking for a solution to a home improvement/repair problem, not sure of what she needs, while the store clerk tries to help.

What this design-oriented approach does is greatly enhance the participants' sense of the whole: the needs, desires and fears both parties are dealing with, not just the executive or rational elements. More importantly, this strategy looks at how these different components might interact by simulating a condition in which they might play out. Time didn't allow us to explore what might have happened had we NOT done this and just designed an evaluation to capture the experience, but I can confidently say that this exercise got me thinking about all the different elements that could, and indeed SHOULD, be considered when trying to understand and evaluate an interaction.

If design thinking isn’t a core competency of evaluation, perhaps we might want to consider it.


complexity, evaluation, research, systems thinking

Systems Thinking, Logic Models and Evaluation

San Antonio at Night, by Corey Leopold (CC License)

The American Evaluation Association conference is on right now in San Antonio and with hundreds of sessions spread over four days it is hard to focus on just one thing. For those interested in systems approaches to evaluation, the conference has had a wealth of learning opportunities.

The highlight was a session on systems approaches to understanding one of evaluation’s staples: the program logic model.

The speakers, Patricia Rogers from the Royal Melbourne Institute of Technology, and consultants Richard Hummelbrunner and Bob Williams, spoke to the challenges posed by the traditional forms of logic models by looking at the concepts of beauty, truth and justice. These models tend to take the shape of the box model (the approach most common in North America), the outcome hierarchy model, or the logic framework, which is popular in international development work.

The latter was the focus of Hummelbrunner's talk, which critiqued the 'log frame' and showed how its highly structured approach to conceptualizing programs tends to lead to a preoccupation with the wrong things and a rigidity in the way programs are run. Log frames work well in environments that are linear and straightforward, and in situations where funders need simple, rapid overviews of programs. But as Hummelbrunner says:

Logframes fail in messy environments

The reason is often that people make assumptions of simplicity when such programs are really complicated or complex. Patricia Rogers illustrated ways of conceptualizing programs using the traditional box models, showing how different outcomes can emerge from one program, or how multiple programs may need to work simultaneously to achieve a particular outcome.
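Rogers's point can be made concrete with a toy structure in code. The program and outcome names below are invented; the point is that a logic model drawn as a graph, rather than a single chain of boxes, can express one program feeding several outcomes and several programs converging on one:

```python
from collections import defaultdict

# Illustrative only: a logic model as a small directed graph rather
# than one linear chain. Program and outcome names are invented.
links = defaultdict(list)   # each element maps to what it contributes to

def link(source: str, target: str) -> None:
    links[source].append(target)

# One program producing divergent outcomes:
link("Program A", "Outcome 1")
link("Program A", "Outcome 2")

# Multiple programs converging on a shared outcome:
link("Program B", "Outcome 3")
link("Program C", "Outcome 3")

for source, targets in links.items():
    print(f"{source} -> {', '.join(targets)}")
```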

What Rogers emphasized was the need for logic models to have a sense of beauty to them.

Logic models need to be beautiful, to energize people. They can't just be the equivalent of a wiring diagram for a program.

According to Rogers, the process of developing a logic model is most effective when it maintains harmony between the program and the people within it. Too often such model development processes are dispiriting events rather than exciting ones.

Bob Williams concluded the session by furthering the discussion of beauty, truth and justice, expanding the definitions of these terms within the context of logic models. Beauty is the essence of relationships, which is what logic models show. Truth is about providing opportunities for multiple perspectives on a program. And a boundary critique is an opportunity for ethical decision making.

On that last point, Williams made some important arguments about how, in systems-related research and evaluation, the act of choosing a boundary is a profound ethical decision. Who is in, who is out, what counts and what does not: these are all questions critical to the issue of justice.
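To make that concrete, here is a toy sketch in Python. The stakeholder groups and satisfaction scores are entirely invented, and no real evaluation would reduce justice to an average, but it shows how the verdict changes with the boundary:

```python
# Toy sketch of boundary critique: the same program, judged with two
# different boundaries. Groups and scores below are invented.

satisfaction = {
    "funders": 0.9,
    "staff": 0.7,
    "participants": 0.4,
    "neighbouring community": 0.2,
}

def verdict(groups):
    """Average satisfaction across whoever the boundary lets in."""
    return sum(satisfaction[g] for g in groups) / len(groups)

narrow = ["funders", "staff"]   # deciding who is 'in' is an ethical choice
wide = list(satisfaction)       # widen the boundary, change the finding

print(f"narrow boundary: {verdict(narrow):.2f}")   # the program looks great
print(f"wide boundary:   {verdict(wide):.2f}")     # much less so
```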

To conclude, Williams also challenged us to look at models in new ways, asking:

Why should models be the servant of data, rather than have data serve the models?

In this last point, Williams highlights the current debates within the knowledge management community, which is grappling with a decade in which trillions of data points have been generated to make policy and programming decisions, yet better decisions still elude us. Is more data better?

The session was a wonderful punctuation to the day. It really advanced the discussion on something as fundamental as logic models, yet took us to a new set of places by considering them as things of artful design and beauty, as ethical decision-making tools, and as vehicles for exploring the truths that we live. Pretty profound stuff for a session on something as seemingly benign as a planning tool.

The session ended with a great question from Bob Williams to the audience, one that speaks to why systems are also about the people within them. He implored evaluators to consider:

Why don't we start with the people instead of the intervention, rather than the other way around like we normally do?

evaluation, systems thinking

American Evaluation Association Conference

Over the next few days I'll be attending the American Evaluation Association conference in San Antonio, Texas, the biggest gathering of evaluators in the world. Depending on the Internet connections, I will try to do some live tweeting from my @cdnorman account and post some blog reflections along the way, so do follow along if you're interested. In addition to presenting some of the work on team science that I've been engaged in with my colleagues at the University of British Columbia and Texas Tech University, I will be looking to connect with the groups and individuals doing work on systems evaluation and developmental evaluation, with an eye to spotting the trends and developments (no pun intended) in those fields.

Evaluation is an interesting area to be a part of. It has no disciplinary home; it has a set of common practices but much diversity as well, and it brings together a fascinating blend of people from all walks of professional life.

Stay tuned.