Tag: developmental evaluation

Tags: complexity, education & learning, evaluation, social systems

Complexity and Child-Rearing: Why Amy Chua is Neither Right nor Wrong

Family

Science strives for precision and for finding the right, or at least the best, answers to questions. The science of complexity means shifting our thinking from right answers to appropriate ones, and from what is best to what is good. The recent debate over parenting (particularly among Chinese families) illustrates how the framing of the issue and its outcomes makes a big difference.

Amy Chua “is probably the most reviled mother in America,” according to Margaret Wente, writing in the Globe and Mail. In her column, Wente looks at the phenomenon that Chua writes about in her new book on parenting, Battle Hymn of the Tiger Mother. What has drawn such attention to Chua and her book is that she advocates a very strict method of parenting aimed at achieving very specific objectives with her children. The payoff? Her children are very successful. This is not a new argument, particularly when it comes to Chinese and other Asian cultural stereotypes. But like many stereotypes, it emerges from a kernel of truth that gets applied universally rather than in context. Judging by the comments on the original Wall Street Journal story that attracted attention and on the Globe and Mail’s review page, I would say there is some truth to this stereotype and some wild overstatements as it gets applied universally to parenting.

A summary of the comments and commentary on this crudely falls into two camps (which, for reasons I’ll elaborate on later, is ironic given how problematic the whole idea of reducing arguments into twos is, but go with me on this): 1) Amy Chua is recalling my childhood or parenting reality and it’s nice to hear someone acknowledge it, and 2) Amy Chua is promoting harmful, inaccurate, racist stereotypes.

Child-raising is a common example of a complex system, showing how past experience is not necessarily a formula for future success. Thus, you can have the same parents, same household, even same genes (in the case of twins) and get two very different outcomes. Complex systems do not lend themselves to recipes or “best practices”. You can’t shoehorn complexity into “right” / “wrong” and either/or positions.

What is interesting about the discussion around Chua’s parenting style, which she claims reflects traditional Chinese behaviour (I am not Chinese, so this is out of my realm for comment), is that the focus is on raising successful children, not necessarily happy, well-adjusted, self-determined or even creative children. And success, in the terms referred to, means achieving or exceeding certain prescriptive standards for socially acceptable activities. This might mean acceptance at a prestigious school, an error-free performance, or a straight-A report card. It is a rather narrowly prescribed form of achievement based upon a particular set of cultural conditions and assumptions.

One of the problems I see in this debate is that people are conflating the two types of outcomes, which is where the complexity comes in. What Chua has done is treat parenting as a set of complicated activities and outputs, rather than as part of a complex system. She has sought to reduce the complexity in the system of parenting by focusing on tangible measures and has created a familial system aimed at reducing the likelihood that these objectives will not be met. Her benchmarks for success are visible outcomes, not the kind that come from growing one’s self-esteem, building true friendships, or learning to love. This isn’t to say that her children, or those raised by “tiger parents”, don’t have such experiences, but this isn’t what her method of parenting is focused on. And therein lies the rub, and why much of the debate surrounding Chua’s book is misaligned.

If you are assessing the life of a person and their total experience as a human being, Chua’s method of parenting is quite problematic. Success in this situation has many different paths and may not even have a clear outcome. What does it really mean to be successful if love, happiness, and self-fulfilment are the outcomes of interest, particularly when all of those things change and evolve over a week, a month or a lifetime? This is the kind of task for which one might use developmental evaluation, if the aim were to determine what kind of impact a particular form of parenting has on children’s lives. Margaret Wente’s article cites some examples of “tiger parenting” among people who achieved much “success” by the benchmark of externally validated standards, and finds mixed results when “success” is viewed in terms of the whole person. Andre Agassi grew to loathe tennis because of his experience, while Lang Lang appears to love his piano playing. Both have achieved success in some ways, but not all.

These two examples also show that with human systems, there is little ability to truly control outcomes and process. Even if one can reduce outcomes to complicated or simplistic terms, those outcomes are still influenced by complex interactions. Complicated systems can be embedded within complex ones, and vice versa. So no matter what kind of prescription a person uses, no matter how tight the controls, complexity has a way of finding its way into human affairs.

So is Amy Chua’s method of parenting successful or not, supportive or harmful, right or wrong? The answer is yes.

Tags: design thinking, education & learning, evaluation, innovation

More Design Thinking & Evaluation

Capturing Design in Evaluation (CameraNight by Dream Sky, Used under Creative Commons License)

On the last day of the American Evaluation Association conference, which wrapped up on Saturday, I participated in an interactive session on design thinking and evaluation by a group from the Savannah College of Art and Design.

One of the first things that was presented was some of the language of design thinking for those in the audience who are not accustomed to this way of approaching problems (which I suspect was most of those in attendance).

At the heart of this was the importance of praxis, the link between theory, design principles and practice (see below).

Design Thinking

Part of this perspective is the belief that design is less a field about the creation of things in themselves than a problem-solving discipline.

Design is a problem solving discipline

When conceived of this way, it becomes easier to see why design thinking is so important and more than a passing fad. Inherent in this way of thinking are principles that demand inclusion of multiple perspectives on the problem, collaborative ideation, and purposeful wandering through a subject matter.

Another principle is to view design as serious play in support of learning.

Imagine taking the perspective of design as serious play to support learning?

The presenters also introduced a quote from Bruce Mau, which I will capture here only approximately, but it is akin to this:

One of the revelations in the studio is that life is something that we create every single day. We create our space and place.

Within this approach is a shift from sympathy with others in the world to empathy. It is less about evaluating the world than engaging with it to come up with new insights that can inform its further development. This is really a nod (in my view) to developmental evaluation.

The audience was enthralled and engaged and, I hope, willing to take the concept of design and design thinking further in their work as evaluators. In doing so, I can only hope that evaluation becomes one of the homes for design thinking beyond the realm of business and industrial arts.

Tags: design thinking, education & learning, evaluation, innovation, research

Design Thinking & Evaluation

Design Thinking Meets Evaluation (by Lumaxart, Creative Commons Licence)

This morning at the American Evaluation Association meeting in San Antonio I attended a session very near and dear to my heart: design thinking and evaluation.

I have been a staunch believer that design thinking ought to be one of the most prominent tools for evaluators and that evaluation ought to be one of the principal components of any design thinking strategy. This morning, I was with my “peeps”.

Specifically, I was with Ching Ching Yap, Christine Miller and Robert Fee and about 35 other early risers hoping to learn about ways in which the Savannah College of Art and Design (SCAD) uses design thinking in support of their programs and evaluations.

The presenters went through a series of outlines of design thinking and what it is (more on that in a follow-up post), but what I wanted to focus on here was the way in which evaluation and design thinking fit together more broadly.

Design thinking is an approach that encourages participatory engagement in planning and setting out objectives, as well as in ideation, development, prototyping, testing, and refinement. In evaluation terms, it is akin to action research and utilization-focused evaluation (PDF). But perhaps its closest correlate is Developmental Evaluation (DE). DE uses complexity-science concepts to inform an iterative approach to evaluation centred on innovation: the discovery of something new (or the adaptation of something into something else) and the application of that knowledge to problem solving.

Indeed, the speakers today positioned design thinking as a means of problem solving.

Evaluation, at least DE, is about problem solving: collecting data that serves as feedback to inform the next iteration of decision making. It is also a form of evaluation that is intimately connected to program planning.

What design thinking offers is a way to extend that planning in new ways that optimize opportunities for feedback, new information, participation, and creative interaction. Design thinking approaches, like the workshop today, also focus on people’s felt needs and experiences, not just their ideas. In our session today, six audience members were recruited to play either three facets of a store clerk or three facets of a customer: the rational, emotional and executive mind of each. A customer comes looking for a solution to a home improvement/repair problem, not sure of what she needs, while the store clerk tries to help.

What this design-oriented approach does is greatly enhance the participants’ sense of the whole: the needs, desires and fears both parties are dealing with, not just the executive or rational elements. More importantly, this strategy looks at how these different components might interact by simulating a condition in which they might play out. Time didn’t allow us to explore what might have happened had we NOT done this and just designed an evaluation to capture the experience, but I can confidently say that this exercise got me thinking about all the different elements that could, and indeed SHOULD, be considered when trying to understand and evaluate an interaction.

If design thinking isn’t a core competency of evaluation, perhaps we might want to consider making it one.

Tags: evaluation, systems thinking

American Evaluation Association Conference

Over the next few days I’ll be attending the American Evaluation Association conference in San Antonio, Texas. The conference is the biggest gathering of evaluators in the world. Depending on the Internet connections, I will try to do some live tweeting from my @cdnorman account and some blog reflections along the way, so do follow along if you’re interested. In addition to presenting some of the work that I’ve been engaged in on team science with my colleagues at the University of British Columbia and Texas Tech University, I will be looking to connect more with the groups and individuals doing work on systems evaluation and developmental evaluation, with an eye to spotting the trends and developments (no pun intended) in those fields.

Evaluation is an interesting area to be a part of. It has no disciplinary home; it has a set of common practices but much diversity as well, and it brings together a fascinating blend of people from all walks of professional life.

Stay tuned.

Tags: complexity, education & learning, evaluation, social systems

Developmental Evaluation And Accountability

Today I’ll be wrapping up a two-day kick-off to an initiative aimed at building a community of practice around Developmental Evaluation (PDF), working closely with DE leader and chief proponent Michael Quinn Patton. The initiative, founded by the Social Innovation Generation group, is designed in part to bring a cohort of learners (or fellows? We don’t have a name for ourselves yet) together to explore the challenges and opportunities inherent in Developmental Evaluation as practiced in the world.

In our introductions yesterday I was struck by how much DE clashes with accountability in the minds of many funders and evaluation consumers. That clash strikes me as strange given that DE is ideal for providing the close, narrative study of programs as they evolve and innovate, one that clearly demonstrates what a program is doing (although, due to the complex nature of the phenomenon, it may not be able to fully explain it). But as we each shared our experiences and programs, it became clear that tied to accountability is an absence of understanding of complexity and the ways it manifests itself in social programs and problems.

Our challenge over the next year together will be how to address these and other issues in our practice.

What surprises me is that while DE is seen by some as not rigorous, there is strong adherence to other methods that might be rigorous but are completely inappropriate for the problem, and that is considered OK. It is as if doing the wrong thing well is better than doing something a little different.

This is strange stuff. But that’s why we keep learning it and telling others about it so that they might learn too.

Tags: complexity, evaluation, research, social systems, systems science

Developmental Evaluation: Problems and Opportunities with a Complex Concept

Everyone's Talking About Developmental Evaluation

“When it rains, it pours,” so says the aphorism about how things tend to cluster. Albert-László Barabási has found that pattern to be indicative of a larger complex phenomenon he calls ‘bursts’, something worth discussing in another post.

This week, that ‘thing’ seems to be developmental evaluation. I’ve had more conversations, emails and information nuggets placed in my consciousness this week than I have in a long time. It must be worth a post.

Developmental evaluation is a concept widely attributed to Michael Quinn Patton, a true leader in the field of evaluation and its influence on program development and planning. Patton first wrote about the concept in the early 1990s, although it didn’t really take off until recently, in parallel with the growing popularity of complexity science and systems thinking approaches to understanding health and human services.

At its root, Developmental Evaluation (DE) is about evaluating a program in ‘real time’ by looking at programs as evolving, complex adaptive systems operating in ecologies that share this same set of organizing principles. This means that there is no definitive manner to assess program impact in concrete terms, nor is any process that is documented through evaluation likely to reveal absolute truths about the manner in which a program will operate in the future or in another context. To traditional evaluators or scientists, this is pure folly, madness or both. When your business is coming up with the answer to a problem, any method that fails to give you ‘the’ answer is problematic.

But as American literary critic H.L. Mencken noted:

“There is always a well-known solution to every human problem — neat, plausible and wrong.”

Traditional evaluation methods work when problems are simple or even complicated, but they rarely provide the insight necessary for programs with complex interactions. Most community-based social services fall into this realm, as does much of the work done in public health, eHealth, and education. The reason is that there are few ways to standardize programs that are designed to adapt to changing contexts or that operate in an environment where there is no stable benchmark to compare against.

Public health often operates within the former situation. Disaster management, disease outbreaks, or wide-scale shifts in lifestyle patterns all produce contexts that shift, sometimes radically, so that the practice that works best today might not be the one that works best tomorrow. We can see this problem in the difficulty with ‘best practice’ models of public health and health promotion, which don’t really look like ‘best’ practices but rather provide examples of things that worked well in a complex environment. (It is for this reason that I don’t favour or use the term ‘best practice’ in public health: I view too much of the field as operating in the realm of the complex, for which the term is not suited.)

eHealth provides an example of the latter. The idea that we can develop, test and implement successful eHealth interventions and tools in a manner that fits the normal research and evaluation cycle is impractical at best and dangerous at worst. Three years ago Twitter existed only in the minds of a few thousand people, and it now has a user population bigger than a large chunk of Europe. Geo-location services like Foursquare, Gowalla and Google Latitude are becoming popular and morphing so quickly that it is impossible to develop a clear standard to follow.

And that is OK, because that is the way things are, not the way evaluators want them to be.

DE seeks to bring some rigour, method and understanding to these problems by creating opportunities to learn from this constant change and by using the science of systems to help make sense of what has happened, what is going on now, and what possible futures a program might have. While it is impossible to fully predict what will happen in a complex system, due to the myriad interacting variables, we can develop an understanding of a program that accounts for this complexity and creates useful means of identifying opportunities. This only really works if you embrace complexity rather than pretend that things are simple.

For example, evaluation in a complex system considers the program ecology as interactive, relationship-based (and often networked) and dynamic. Many traditional evaluation methods seek to understand programs as if they were static, assuming that the lessons of the past can predict the future. What isn’t mentioned is that we evaluators can ‘game the system’ by developing strategies that generate data that fit well into a model; but if the questions are not suited to a dynamic context, the least important parts of the program will be highlighted, and the true impact of a program might be missed in the service of producing an acceptable evaluation. It is what Russell Ackoff called doing the wrong things righter.

DE also takes evaluation one step further and fits it with Patton’s utilization-focused evaluation approach, which frames evaluation in a manner that focuses on actionable results. This approach integrates problem framing, data collection, analysis, interpretation and use, akin to the concept of knowledge integration. Knowledge integration is the process by which knowledge is generated and applied together, rather than independently, and reflects a systems-oriented approach to knowledge-to-action activities in health and other sciences, with an emphasis on communication.

So hopefully these conversations will continue, and DE will no longer be something that peaks in certain weeks but rather infuses my colleagues’ conversations about evaluation and knowledge translation on a regular basis.