Category: psychology

Tags: psychology

The Developmental Psychology of Organizations

Organizations start change somewhere

Every living thing has a journey that starts somewhere and ends eventually. Our ability to see this, understand it, and apply what we know about how humans grow and develop (as individuals and organizations) is what helps us determine how this journey unfolds and where it ends up.

The psychology of individuals is a complicated affair that involves understanding a variety of matters, from personal and family history to genetics, cultural context, education, and social situation. While all of these contribute to who we are as people, the degree and mix of their influence differs from person to person. We are each the product of a collection of forces that combine in various ways, and this holistic complexity is what makes understanding how we change such a challenge.

For example, some of us might have behaviours and preferences associated with a certain personality type (extroverted or introverted) and find that quality to be relatively stable across the lifespan. While there are times we might exhibit qualities of another type, those are more situational than stable. For those who are more ambiverted, identification with a particular preference might be more challenging. Whatever investment you place in this kind of personality assessment, what is important is that the stability and consistency of certain characteristics are what largely shape our identity to others (and ourselves). It’s what makes us ‘us’.

From Individuals to Organizations

It has been argued that organizations exhibit much the same kind of characteristic habits of their own, aggregating (to varying degrees) the characteristics of the people within them. Personality theory has been applied to organizational behaviour as a means of understanding how certain actions, activities, habits, and patterns form within organizations. Taking ideas developed for individuals and applying them to groups in this way has considerable implications.

If we are to take seriously the idea that organizations are similar to humans, it has significant implications for the way we engage in organizational change efforts. Much of the research on organizational change is tied to the development and implementation of a strategy. Strategy, in most conventional applications, is an expression of intent manifest through specific choices of focus and action. This approach rests largely on a cognitive rational model of change, where information (e.g., data, ‘facts’, perceptions, beliefs, and opinion) guides an assessment of the situation that forms the basis for a plan of action. The idea is that we see and learn things, then plan and act according to that knowledge.

Most individual behaviour change models are founded on this approach, with thinking preceding action in a relatively rational, logical manner based on an objective assessment of the facts and evidence (with some emotional contributions here and there to make life interesting). So if we tie organizational change to the same kinds of mechanisms and models that we use to understand individuals, should we not apply similar modes of change facilitation? We do — but it’s how we do it that might be the problem.

Change Theory to Change Reality

One of the most vexing (and little discussed) issues for behavioural scientists is that the application of the cognitive rational model to personal, organizational, and social change has a rather unimpressive track record. A look at how people change finds that relatively little change comes from rationally reviewing a threat or opportunity and planning out a strategy (never mind executing the planned strategy as envisioned). Even when the effects are modest, factors such as the match between the person, the technique or intervention approach, and the problem being addressed continue to mediate the outcomes.

What happens when our theories and our practices don’t really work? Or at least don’t work as well as we think they do?

The answer — using the very argument that we are looking to disprove — is that we will address the matter as many individuals might: with disagreement, resistance, and denial.

The field of organizational decision-making and innovation is littered with case studies that show how, in the face of overwhelming evidence to the contrary, organizations (like many individuals) resist change. Whether it was how slowly those aboard the Titanic accepted that their ship would sink after hitting the iceberg (never mind the perception that the ship was invulnerable to begin with) or companies that persist with a strategy that no longer matches the times (e.g., Kodak and its photographic film business, Sears and its retail model), the inability to see, and the unwillingness to perceive or accept, changing situations has led to major problems.

These problems are a matter of failing to change or adapt. To quote from The Leopard:

If we want things to stay as they are, things will have to change

Change is something we need to do, even if only to maintain the status quo.

Person-Centred Organizational Change

Erik Erikson, the German-American psychoanalyst whose work focused on identity formation and development, was among the few to challenge the belief that people’s essential character was immutable and resistant to change. (The dominant view was that thinking and behaviour could change, but not ‘how one was’ as a person.) He did, however, acknowledge that changing who we are is not easy and takes a lifetime. This flies in the face of the dominant thinking in Western societies that we can make dramatic changes in an instant.

While talk shows and popular self-help books are filled with stories of dramatic transformation and inspiration about how you can change everything in an instant, the truth is that these cases are outliers (and often exaggerations or misrepresentations). Much like the artist who ‘breaks out’ and becomes an ‘overnight sensation’, the journey to stardom is usually a long one that follows a Pareto distribution (that is, a long, slow climb over time followed by a very quick rise at the end). What is missed in these success stories is that the rapid change is the product of a long, protracted build-up.

While there are some things that do follow this pattern, much change is also linear and progressive. We see this in the work of another Ericsson: Anders Ericsson. His work is widely cited (and mis-cited) as being behind the ‘10,000 Hour Rule’, which suggests that expertise — a change from unskilled novice to skilled expert — is developed over that much time of practice. While the time itself is important, what is often missed in citations of this work is that the key is deliberate practice, which makes all the difference.

If we extrapolate from the work of both Erikson and Ericsson, we might develop a model of behaviour change that looks quite different from what we have at present. Instead of five-year plans, strategic goals, and inspirational visions of the future, we might be better off delving into an organization’s past, its formation, its core beliefs and personality, and spend more time looking at what it is already doing than at what it seeks to do.

Developmental Organizations

We might then find what it deliberately practices day in and day out and emphasize ways to amplify the feedback that helps people learn deliberately and consistently. We might take these lessons — much like those small, tiny adjustments that expert violinists, athletes, and surgeons make to hone their craft — and make them visible and build on them. We would look upon organizations as developing organizations, using approaches that fit with them developmentally (e.g., developmental evaluation). We would treat organizations like we would people.

Which is kind of funny because organizations are made of people. That’s some change.

Photo by Stanislav Kondratiev on Unsplash

Tags: design thinking, psychology, research

Elevating Design & Design Thinking

 

Design thinking has brought the language of design into popular discourse across different fields, but its failings threaten to undermine the benefits it brings if they aren’t addressed. In this third post in a series, we look at how Design (and Design Thinking) can elevate themselves above their failings and match the hype with real impact.

In two previous posts, I called out ‘design thinkers’ to get the practice out of its ‘bullshit’ phase, characterized by high levels of enthusiastic banter, hype, and promotion and low levels of evidence, evaluation, or systematic practice.

Despite the criticism, it’s time for Design Thinking (and the field of Design more broadly) to be elevated beyond its current station. I’ve been critical of Design Thinking for years: its popularity has been helpful in some ways, problematic in others. Others have been critical, too. Bill Storage, writing in 2012 (in a post now unavailable), said:

Design Thinking is hopelessly contaminated. There’s too much sleaze in the field. Let’s bury it and get back to basics like good design.

Bruce Nussbaum, who helped popularize Design Thinking in the early 2000s, called it a ‘failed experiment’, seeking to promote the concept of Creative Intelligence instead. While many have called for Design Thinking to die, that’s not going to happen anytime soon. Since I first published a piece on Design Thinking’s problems five years ago, the practice has only grown. Design Thinking is going to continue to grow despite its failings, and that’s why it matters that we pay attention to it — and seek to make it better.

Lack of quality control, standardization or documentation of methods, and evidence of impact are among the biggest problems facing Design Thinking if it is to achieve anything substantive beyond generating money for those promoting it.

Giving design away, better

It’s hard to imagine that the concepts of personality, psychosis, motivation, and performance measurement from psychology were once unknown to most people. Yet, before the 1980s, much of the public’s understanding of psychology was confined to largely distorted beliefs about Freudian psychoanalysis, mental illness, and rat mazes. Psychology is now firmly ensconced in business, education, marketing, public policy, and many other professions and fields. Daniel Kahneman, a psychologist, won the Nobel Prize in Economics in 2002 for his work applying psychological and cognitive science to economic decision making.

The reason for this has much to do with George Miller who, as President of the American Psychological Association, used his position to advocate that professional psychology ‘give away’ its knowledge to ensure its benefits were more widespread. This included creating better means of communicating psychological concepts to non-psychologists and generating the kind of evidence that could show its benefits.

Design Thinking is at a stage where we are seeing similar broad adoption beyond professional design into these same fields of business, education, the military, and beyond. While there has been much debate about whether design thinking as practiced by non-designers (like MBAs) is good for the field as a whole, there is little debate that it’s become popular, just as psychology did.

What psychology did poorly was give so much away that it failed to engage other disciplines enough to support quality adoption and promotion while, simultaneously, weakening itself as newfound enthusiasts pursued training in those other disciplines. Now, some of the best psychological practice is done by social workers, and the most relevant research comes from areas like organizational science and new ‘sub-disciplines’ like behavioural economics.

Design Thinking is already being taught, promoted, and practiced by non-designers. What these non-designers often lack is the ‘crit’ and craft of design to elevate their designs. And what Design lacks is the evaluation, evidence, and transparency to elevate its work beyond itself.

So what next?


Elevating Design

As Design moves beyond its traditional realms of products and structures to services and systems (enabled partly by Design Thinking’s popularity), the implications are enormous — as are the dangers. Poorly thought-through designs have the potential to exacerbate problems rather than solve them.

Charles Eames knew this. He argued that innovation (which is what design is all about) should be a last resort and that it is the quality of the connections (ideas, people, disciplines, and more) we create that determines what we produce and its impact on the world. Eames and his wife Ray deserve credit for contributing to the elevation of the practice of design through their myriad creations and their steadfast documentation of their work. The Eameses did not allow themselves to be confined by labels such as product designer, interior designer, or artist. They stretched their profession by applying craft, learning with others, and practicing what they preached in terms of interdisciplinarity.

It’s now time for another elevation moment. Designers can no longer be satisfied with client approval as the key criterion for success. Sustainability, social impact, and learning and adaptation through behaviour change are now criteria that many designers will need to embrace if they are to operate beyond the field’s traditional domains (as we are now seeing more often). This requires that designers know how to evaluate and study their work. They need to communicate with their clients better on these issues, and they must make what they do more transparent. In short: designers need to give away design (and not just through a weekend design thinking seminar).

Not every designer must get a Ph.D. in behavioural science, but they will need to know something about that domain if they are to work on matters of social and service design, for example. Designers don’t have to become professional evaluators, but they will need to know how to document and measure what they do and what impact it has on those touched by their designs. Understanding research — including a basic understanding of statistics and of quantitative and qualitative methods — is another area that requires shoring up.

Designers don’t need to become researchers, but they must have research or evaluation literacy. Just as it is becoming increasingly unacceptable for program designers from fields like public policy and administration, public health, social services, and medicine to lack an understanding of design principles, so it is no longer acceptable for designers to be ignorant of proper research methods.

It’s not impossible. Clinical psychologists went from being mostly practitioners to scientist-practitioners. Professional social workers are now well versed in research, even if they typically focus on policy and practice. Elevating the field of Design means accepting that being an effective professional requires certain skills, and research and evaluation are now part of that skill set.


Designing for elevated design

This doesn’t have to fall on designers alone; it can come from the very people who are attracted to Design Thinking. Psychologists, physicians, and organizational scientists (among others) can all provide the means to support designers in building their literacy in this area.

Research courses that go beyond ethnography and observation (exposing design students to survey methods, secondary data analysis, ‘big data’, Grounded Theory approaches, and blended models of data collection) are all options. Bring behavioural and data scientists into the design curriculum (and get designers into the curricula that train those professionals).

Create opportunities for designers to do research, publish, and present their research using the same ‘crit’ that they bring to their designs. Just as behavioural scientists expose themselves to peer review of their research, designers can do the same with their research. This is a golden opportunity for an exchange of ideas and skills between the design community and those in the program evaluation and research domains.

This last point is what the Design Loft initiative has sought to do. Now in its second year, the Design Loft is a training program aimed at exposing professional evaluators to design methods and tools. It’s not to train them as designers, but to increase their literacy and confidence in engaging with Design. The Design Loft can do the same thing with designers, training them in the methods and tools of evaluation. It’s but one example.

In an age where interdisciplinarity is spoken of frequently, this provides a practical means to do it, and in a way that offers a chance to elevate design much as the Eameses and Milton Glaser did, and as George Miller did for psychology. The time is now.

If you are interested in learning more about the Design Loft initiative, connect with Cense. If you’re a professional evaluator attending the 2017 American Evaluation Association conference in Washington, the Design Loft will be held on Friday, November 10th.

Image Credit: Author

Tags: complexity, education & learning, psychology, systems thinking

Complex problems and social learning


Adaptation, evolution, innovation, and growth all require that we gain new knowledge and apply it to our circumstances — that is, that we learn. While much focus in education is on how individuals attend to, process, and integrate information to create knowledge, it is the social part of learning that may best determine whether we simply add information to our brains or truly learn.

Organizations are scrambling to convert what they do and what they are exposed to into tangible value. In other words: learn. A 2016 report from the Association for Talent Development (ATD) found that “organizations spent an average of $1,252 per employee on training and development initiatives in 2015”, which works out to an average cost per learning hour of $82 based on an average of 33 hours spent in training programs per year. Learning and innovation are expensive.

The massive marketplace for seminars, keynote addresses, TED talks, conferences, and workshops points to a deep pool of opportunities for exposure to content, yet when we look past these events to where and how such learning is converted into changes at the organizational level we see far fewer examples.

Instead of building more educational offerings like seminars, webinars, retreats, and courses, what might happen if organizations devoted resources to creating cultures where learning can take place? Consider how often you may have been sent off to some learning event, perhaps taken in some workshops or seen an engaging keynote speaker, been momentarily inspired, and then returned home to find yourself no better off in the long run. The reason is that you have no system — time, resources, organizational support, social opportunities in which to discuss and process the new information — and thus a potential learning opportunity turns into neural ephemera.

Or consider how you may have read an article on something interesting, relevant and important to what you do, only to find that you have no avenue to apply or explore it further. Where do the ideas go? Do they get logged in your head with all the other content that you’re exposed to every day from various sources, lost?

Technical vs. Social

My colleague and friend John Wenger recently wrote about what we need to learn, arguing that our quest for technical knowledge as a stand-in for learning might be missing a bigger point: what we need at this moment. Wenger suggests shifting our focus from mere knowledge to capability and states:

What is the #1 capability we should be learning?  Answer: the one (or ones) that WE most need; right now in our lives, taking account of what we already know and know how to do and our current situations in life.

Wenger argues that, while technical knowledge is necessary to improve our work, it is our personal capabilities that require attention if learning is to take hold. These capabilities are always contingent: we humans lead situated lives, and so our learning must be applied to what we, in our situation, require. It’s not about what the best practice is in the abstract, but what is best for us, now, at this moment.

The usual ‘stuff’ we are exposed to under the guise of learning is so often decontextualized and presented to us without that sense of how, whether, or why it matters to us in our present situation.

To illustrate, I teach a course on program evaluation for public health students. No matter how many examples, references, anecdotes, or colourful illustrations I provide, most of my students struggle to integrate what they are exposed to into anything substantive from a practical standpoint. At least, not at first. Without the ability to apply what they are learning, to expose the methods to the realities of a client, colleague, or context, they are left abstracting from the classroom to a hypothetical situation.

But, as Mike Tyson said so truthfully and brutally: “Every fighter has a plan until they get punched in the mouth.”

In a reflection on that quote years later, Tyson elaborated saying:

“Everybody has a plan until they get hit. Then, like a rat, they stop in fear and freeze.”

Tyson’s quote applies to much more than boxing and complements Wenger’s assertions around learning for capability. If you develop a plan knowing that it will fail the moment you get hit (and you know you’re going to get hit), then you learn for the capability to adapt. This builds on another quote, attributed to Dwight D. Eisenhower, who said:

“I have always found that plans are useless but planning is indispensable.”

Better social, better learning

Plans don’t exist in a vacuum, which is why they don’t always turn out. While sometimes a failed plan is due to poor planning, with human systems it is more likely due to complexity. Complexity requires a learning strategy different from those typically employed in so many educational settings: social connection.

When information is technical, it may be simple or complicated, but it has a degree of linearity to it: one can connect the pieces together through logic and persistence to arrive at a particular set of knowledge outcomes. Thus, didactic classroom learning, or the many online course modules that require reading, viewing, or listening to a lesson, work well to this effect. However, human systems require attention to changing conditions that are created largely in social situations. Thus, learning itself requires some form of ‘social’ to fully integrate information and to know what information is worth attending to in the first place. These are the kinds of capabilities Wenger was talking about.

My capabilities within my context may look very much like those of my colleagues, but the kind of relationships I have with others, the experiences I bring, and the way I scaffold what I’ve learned in the past with what I require in the present are going to be completely different. The better organizations can create the social contexts for people to explore this, learn together, verify what they learn, and apply it, the more likely they are to reap far greater benefits from the time and money they invest in education.

Design for learning, not just education

We need a means to support learning and the intentional integration of what we learn into what we do: even promising learning fails in bad systems.

It also means getting serious about learning: investing in it from a social, leadership, and financial standpoint. Most importantly, we need to invest in it emotionally. Emotional investment is the kind of attractor that motivates people to act. It’s why we so often attend to the small, insignificant ‘goals’ of every day, like responding to email or attending meetings, at the expense of larger, more substantial, long-term goals.

As an organization, you need to set yourself up to support learning. This means creating and encouraging social connections; time to dialogue and explore ideas; organizational space to integrate, share, and test lessons learned from things like conferences or workshops (even if they turn out to be less useful than first thought); and structurally built-in moments of reflection and attention to ongoing data that serve as developmental lessons and feedback.

If learning is meant to take place at retreats, conferences or discrete events, you’re not learning for human systems. By designing systems that foster real learning focused on the needs and capabilities of those in that system, you’re more likely to reap the true benefit of education and grow accordingly. That is an enormous return on investment.

Learning requires a plan, and one that recognizes you’re going to get punched in the mouth (and can do just fine anyway).

Can this be done for real? Yes, it can. For more information on how to create a true learning culture in your organization and what kind of data and strategy can support that, contact Cense and they’ll show you what’s possible. 

Image credit: Social by JD Hancock used under Creative Commons license.

Tags: business, innovation, psychology, science & technology, social systems

The logic of a $1000 iPhone


Today Apple is expected to release a new series of iPhone handsets, with the base price for one set at more than $1000. While many commentators are focusing on the price, the bigger issue is less about what these new handsets cost than about what value they’ll hold.

The idea that a handset — once called a phone — that is the size of a piece of bread could cost upward of $1000 seems mind-boggling to anyone who grew up with a conventional telephone. The new handsets coming to market have more computing power built into them than was required for the entire Apollo space missions and dwarf even the most powerful personal computers from just a few years ago. And to think that this computing power all fits into your pocket or purse.

The iPhone pictured above was ‘state of the art’ when it was purchased a few years ago and has now been retired to make way for the latest (until today) version, not because the handset broke, but because it could no longer handle the demands placed on it by the software that powered it and the storage space required to house it all. This was never an issue when people used a conventional telephone, because it always worked and it did just one thing really well: allowing people to talk to each other at a distance.

Changing form, transforming functions

The iPhone is as much a technology as it is a vector of change in social life, one that is both a product of and a contributor to new ways of interacting. The iPhone (and its handset competitors) did not create the habits of text messaging, photo sharing, tagging, social chat, or augmented reality, but it wasn’t just responding to humans’ desire to communicate, either. Adam Alter’s recent book Irresistible outlines how technology has contributed to behaviours that we would now call addictive. This includes a persistent ‘need’ to look at one’s phone while doing other things, constant social media checking, and an inability to be fully present in many social situations without touching one’s handset.

Alter presents evidence from a variety of studies and clinical reports showing how tools like the iPhone, and the many apps that run on it, are engineered to encourage the kind of addictive behaviour we see permeating society. Everything, from the design of the interface, to the type of information an app offers a user (and when it provides it), to the architecture of social tools that encourage a kind of reliance and engagement that draws people back to their phone, creates the conditions for a device that no longer sits as a mere tool, but has the potential to play a central role in many aspects of life.

These roles may be considered good or bad for social welfare, but in labelling behaviours or outcomes this way we risk losing the bigger picture of what is happening amid our praise or condemnation. Dismissing something as ‘bad’ can mean we ignore social trends and the deeper meaning behind why people do things. By labelling things as ‘good’ we risk missing the harm that our tools and technology are doing and how that harm could be mitigated or prevented outright.


Changing functions, transforming forms

Since the iPhone was first launched, it’s moved from being a phone with a built-in calendar and music player to something that can now power a business, serve as a home theatre system, and function as a tour guide. As apps and software have evolved to accommodate mobile technology, the ‘clunkiness’ of doing many things on the go, like doing the accounting, taking high-quality photos, or managing data files, has been removed. Now laptops seem bulky, and even tablets, which have evolved in power and performance to mimic desktops, are feeling big.

The handset now serves as our tether to each other, creating a connected world. Who wants to lug cables and peripherals to and from the office when much of the work can be done from the device in your hand? It is now possible to run a business without a computer. It’s still awkward, but it’s genuinely possible. Financial tools like Freshbooks or Quickbooks allow entrepreneurs to do their books from anywhere, and tools like Shopify can transform a blog into a full-fledged e-commerce site.

Tools like Apple Pay have turned your phone into a wallet. Paying with your handset is now a viable option in an increasing number of places.

This wasn’t practical before, and now it is. With today’s release from Apple, new tools like 3-D imaging, greatly improved augmented reality support, and enhanced image capture will all be added to the users’ toolkit.

Combine all of this with the social functions of text, chat, and media sharing and the handset has now transformed from a device to a social connector, business driver and entertainment device. There is little that can be done digitally that can’t be done on a handset.

Why does this matter?

It’s easy to get wrapped up in all of this as technological hype, but to do so is to miss some important trends. We may have concerns about the addictive behaviours these tools engender, the changes in social decorum the phone instigates, and the fact that it becomes harder to escape the social world when the handset also serves as your navigation tool, emergency response system, and e-reader. But the demand to have everything in your pocket, and not strapped to your back, sitting on your desk (and your kitchen table), or scattered across different tools and devices, comes from a desire for simplicity and convenience.

In the midst of the discussion about whether these tools are good or bad, we often forget to ask what they are useful for and not useful for. Socially, they are useful for maintaining connections, but they have been shown to be not so useful for building lasting human connections at depth. They are useful for providing us with near-time and real-time data, but not as useful at allowing us to focus on the present moment. These handsets free us from our desks, but also keep us ‘tied’ to our work.

At the same time, losing your handset has enormous social, economic, and (potentially) security consequences. It’s no longer about missing your music or not being able to text someone: when most of one’s communications, business, and social navigation functions are routed through a single device, the implications of losing that device become enormous.

Useful and not useful/good and bad

By asking how a technology is useful and not useful, we can escape the dichotomy of good and bad, which causes us to miss the bigger picture of the trends we see. Our technologies are principally useful for connecting people to each other (even if superficially), enabling quick action on simple tasks (e.g., shopping, making a reservation), finding simple information (e.g., Google search), and navigating unknown territory with known features (e.g., navigation systems). This is based on a desire for connection, a need for data and information, and the alleviation of fear.

Those underlying qualities are what makes the iPhone and other devices worth paying attention to. What other means have we to enhance connection, provide information and help people to be secure? Asking these questions is one way in which we shape the future and provide either an alternative to technologies like the iPhone or better amplify these tools’ offerings. The choice is ours.

There may be other ways we can address these issues, but thus far we haven’t found any that are as compelling. Until we do, $1000 for a piece of technology that does all this might be a bargain.

Seeing trends and developing a strategy to meet them is what foresight is all about. To learn more about how better data and strategy through foresight can help you, contact Cense.

Image credits: Author

 

Tags: behaviour change, complexity, design thinking, evaluation, psychology

Exploding goals and their myths


Goal-directed language guides much of our social policy, investment, and quests for innovation without much thought about what that means in practice. Looking at the way ideas start and where they carry us might offer reasons to pause when fashioning goals, and to ask whether we need them at all.

In a previous article, I discussed the problems with goals for many of the problems being dealt with by organizations and networks alike. (Thanks to the many readers who offered comments and kudos and also alerted me that subscribers received the wrong version, minus part of the second paragraph!) The target was the use of SMART goal-setting and the many presumptions it makes that rarely hold true.

This follow-up discusses how focusing on the energy directed toward a goal, and integrating that energy more tightly with how we organize our actions at the outset, might offer a better option than addressing the goals themselves.

Change: a matter of energy (and matter)

goal |ɡōl| noun: the object of a person’s ambition or effort; an aim or desired result • the destination of a journey

A goal is a call to direct effort (energy) toward an object (real or imagined). Without energy and action, the goal is merely a wish. Thus, if we are to understand goals in the world we need to have some concept of what happens between the formation of the goal (the idea, the problem to solve, the source of desire), the intention to pursue such a goal, and what happens on the journey toward that goal. That journey may involve a specific plan or it may mean simply following something (a hunch, a ‘sign’ — which could be purposeful, data-driven or happenstance, or some external force) along a pathway.

SMART goals, and most of the goal-setting literature, rest on the assumption that a plan is a critical success factor in accomplishing a goal.

If you follow SMART (Specific, Measurable, Attainable, Realistic, and Time-bound, or Timely), the plan needs to have these qualities attached to it. This approach makes sense when your outcome is clear and the pathway to achieving the goal is also reasonably clear, as with smoking cessation, drug or alcohol use reduction, weight loss, and exercise. It’s the reason so much of the behaviour change literature includes goals: most of it involves studies of these kinds of problems. These are problems with a clear, measurable outcome (even if that has some variation to it). You smoke cigarettes or you don’t. You weigh X kilograms at this time point and Y kilograms at that point.

These outcomes (goals) are the areas where the energy is directed, and there is ample evidence about the means of getting to the goal, the energy (actions) used to reach it, and the moment it is achieved. (Of course, there are things like relapse, temporary setbacks, and non-linear changes, but researchers don’t particularly like to deal with these because they complicate things, something clinicians know too well.)

Science, particularly social science, has a well-noted publication bias toward studies that show something significant happened — i.e., seeing change. Scientists know this and thus consciously and unconsciously pick problems, models, methods and analytical frameworks that better allow them to show that something happened (or clearly didn’t), with confidence. Thus, we have entire fields of knowledge like behaviour change that are heavily biased by models, methods and approaches designed for the kind of problems that make for good, publishable research. That’s nice for certain problems, but it doesn’t help us address the many ones that don’t fit into this way of seeing the world.

Another problem rests much less on the energy than on the matter. We look at specific, tangible outcomes (weight, presence of cigarettes, etc.) and pay little attention to the energy directed outward. Further, these perspectives assume a largely linear journey. What if we don’t know where we’re going? Or we don’t know what, specifically, it will take to get to our destination (see my previous article for some questions on this)?

Beyond carrots & sticks

The other area where there is evidence to support goals comes from management and the study of executives or ‘leaders’ (i.e., those who are labelled leaders because of title or role, though whether they actually inspire real, productive followership is another matter). These leaders call out a directive and their employees respond. If employees don’t respond, they might be fired or re-assigned — two outcomes that are not particularly attractive to most workers. On the surface it seems like a remarkably effective way of getting people motivated to do something or reach a goal, and for some problems it works well. However, those types of problems are small and specific.

Yet, as much of the research on organizational behaviour has shown, the ‘carrot and stick’ approach to motivation is highly limited and ineffective in producing long-term change, let alone organizational commitment. Fostering self-determination, or creating beauty in work settings — something done not by force but by co-development — are ways to nurture employee happiness, commitment, and engagement overall.

A 2009 study, appropriately titled ‘Goals Gone Wild’, looked at the systemic side-effects of goal-setting in organizations and found: “specific side effects associated with goal setting, including a narrow focus that neglects non-goal areas, a rise in unethical behavior, distorted risk preferences, corrosion of organizational culture, and reduced intrinsic motivation.” The authors go on to say in the paper — right in the abstract itself: “Rather than dispensing goal setting as a benign, over-the-counter treatment for motivation, managers and scholars need to conceptualize goal setting as a prescription-strength medication that requires careful dosing, consideration of harmful side effects, and close supervision.”

Remember the last time you were in a meeting when a senior leader (or anyone) ensured that there was sufficient time, care and attention paid to considering the harmful side-effects of goals before unleashing them? Me neither.

How about the ‘careful dosing’ or ‘close supervision’ of activities once goal-directed behaviour is put forth? That doesn’t happen much, because process-focused evaluation and the related ongoing sense-making require changes in the way we organize ourselves and our work. And, as a recent HBR article points out, organizations like to use the excuse that organizational change is hard as a reason not to make the necessary changes.

Praxis: dropping dualisms

The absolute dualism of goal + action is as false as that of theory + practice or thought + activity. There are areas, like those mentioned above, where that conception might be useful, yet these are selective and restrictive and can keep us focused on a narrow band of problems and activity. Climate change, healthy workplaces, building cultures of innovation, and creating livable cities and towns are not problem sets with a single answer, a straightforward path, specific goals, or boundless arrays of evidence guiding how to address them with high confidence. They do require a lot of energy, pivoting, adapting, sense-making, and collaboration. They are also design problems: they are about making the world we want and reacting to the world we have at the same time.

If we’re to better serve our organizations and their greater purpose, leaders, managers, and evaluators would be wise to focus on the energy that is being used, by whom, when, how, and to what effect, at closer intervals, to understand the dynamics of change and not just its outcomes. This approach is oriented toward praxis: an orientation that sees knowledge, wisdom, learning, strategy, and action as combined processes that ought not to be separated. We learn from what we do, and that informs what we do next and what we learn further. It’s also about focusing on the process of design — that creation of the world we live in.

If we position ourselves as praxis-oriented individuals or organizations, evaluation becomes part of regularly attending, through data and sensemaking, to the systems we design to support goals or outcomes. Strategy is linked to this evaluation, and the outcomes that emerge from it all are what our energy produces. Design is how we put it all together. This means dropping our dualisms and focusing more on integrating ourselves, our aspirations, and our activities toward achieving something that might be far greater than any goal we could devise.

Image credit: Author

 

 

Tags: psychology, systems thinking

Smart goals or better systems?


If you’re working toward some sort of collective goal — as an organization, a network, or even an individual — you’ve most likely been asked to use SMART goal setting to frame your task. While SMART is a popular tool for management consultants and scholars, does it make sense when you’re looking to make inroads on complex, unique, or highly volatile problems, or does the answer lie in the systems we create to advance goals in the first place?

Goal setting is nearly everywhere.

Globally, we had the UN-backed Millennium Development Goals and now have the Sustainable Development Goals; look at the missions and visions of most corporations, non-profits, government departments, and universities and you will see language framed in terms of goals, either explicitly or implicitly.

A goal, for these purposes, is:

goal |ɡōl| noun: the object of a person’s ambition or effort; an aim or desired result • the destination of a journey

Goal setting is the process of determining what it is that you seek to achieve, usually combined with mapping some form of strategy to achieve it. Goals can be challenging enough when a single person is determining what they want, need, or feel compelled to do, and even more so when aggregated to the level of an organization or a network.

How do you keep people focused on the same thing?

A look at the literature finds a visible presence of one approach: setting SMART goals. SMART is an acronym that stands for Specific, Measurable, Attainable, Realistic, and Time-bound (or Timely, in some examples). The origin of SMART has been traced back to an article by George Doran in a 1981 issue of the AMA’s journal Management Review. In that piece, Doran comments on how unpleasant it is to set objectives and that this is one of the reasons organizations resist doing it. Yet, in an age where accountability is held in high regard, the role of the goal is not only strategic but operationally critical to attracting and maintaining resources.

SMART goals are part of a larger process called performance management, which is a means of enhancing collective focus and the alignment of individuals within an organization. Dartmouth College has a clearly articulated explanation of how goals are framed within the context of performance management:

“Performance goals enable employees to plan and organize their work in accordance with achieving predetermined results or outcomes. By setting and completing effective performance goals, employees are better able to:

  • Develop job knowledge and skills that help them thrive in their work, take on additional responsibilities, or pursue their career aspirations;
  • Support or advance the organization’s vision, mission, values, principles, strategies, and goals;
  • Collaborate with their colleagues with greater transparency and mutual understanding;
  • Plan and implement successful projects and initiatives; and
  • Remain resilient when roadblocks arise and learn from these setbacks.”

Heading somewhere, destination unknown

Evaluation professionals and managers alike love SMART goals and performance measurement. What’s not to like about something that outlines specifically what is to be done, the date it’s required by, and in a manner that is achievable? It’s like checking off all the boxes in your management performance chart at once! Alas, the problems with this approach are many.

Specific is pretty safe, so we won’t touch that. It’s good to know what you’re trying to achieve.

But what about measurable? This is what evaluators love, but what does it mean in practice? Metrics and measures reflect a certain type of evaluative approach and require the right kinds of questions, data collection tools, and data to work effectively. If the problem being addressed doesn’t lend itself to quantification using measures, or to data that can easily define part of an issue, then measurement becomes inconclusive at best, useless at worst.

What if you don’t know what is achievable? This might be because you’ve never tried something before or maybe the problem set has never existed before now.

How do you know what realistic is? This is tricky because, as George Bernard Shaw wrote in Man and Superman:

“The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.”

This issue of reasonableness is an important one because innovation, adaptation, and discovery are not about reason, but about aspiration and hope. Were it left to reasonableness, we might never have achieved much of what we’ve set out to accomplish in terms of being innovative, adaptive, or creative.

Reasonableness is also the most dangerous criterion for those seeking to make real change and do true innovation. Innovation is not often reasonable, nor are the asks ‘reasonable’. Most social transformations did not come about because they were reasonable. Women’s right to vote, or the right of African Americans to be recognized and treated as human beings in the United States, are but two examples from the 20th century marked by a lack of ‘reasonableness’.

Lastly, what if you have no idea what the timeline for success is? If you’ve not tackled something before, are working on a dynamic problem, or have uncertain or unstable resources, it might be impossible to say how long it will take to solve.

Rethinking goals and their evaluation

One of the better discussions on goals, goal setting, and the hard truths associated with pursuing a goal comes from James Clear, who draws on research on strategy and decision making to build his list of recommendations. Clear’s summary pulls together a variety of findings on how individuals construct goals and seek to achieve them, and the results suggest that the problem lies less in the strategy used to reach a goal than in the goals themselves.

What is most relevant for organizations is the concept of ‘rudders and oars’, which is about creating systems and processes for action and focusing less on the goal itself. In complex systems, our ability to exercise control is highly variable and constrained, and goals provide an illusory sense that we have control. So either we fail to achieve our goals or we set goals that we can achieve, which may not be the most important things to aim for. We essentially rig our system to achieve something that might be achievable, but utterly unimportant.

Drawing on this work, we are left to re-think goals and commit to the following (a rough sketch of what this might look like follows the list):

  1. Commit to a process, not a goal
  2. Release the need for immediate results
  3. Build feedback loops
  4. Build better systems, not better goals
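To make the contrast concrete, here is a minimal, illustrative sketch in Python. The names and structure are my own invention (nothing here comes from Clear or the goal-setting literature), but it illustrates the difference between committing to a fixed goal check and committing to a feedback-driven process:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Goal-based framing: success is a single, fixed end-state check.
def goal_met(state: Dict[str, float], target: float) -> bool:
    return state.get("outcome", 0.0) >= target

# Process-based framing (hypothetical): the commitment is to a recurring
# cycle of acting, sensing feedback, and adapting, i.e., the 'rudders and oars'.
@dataclass
class ProcessLoop:
    act: Callable[[Dict], Dict]          # do the work for one cycle
    sense: Callable[[Dict], Dict]        # gather feedback on what happened
    adapt: Callable[[Dict, Dict], Dict]  # revise the approach from feedback
    history: List[Dict] = field(default_factory=list)

    def run_cycle(self, approach: Dict) -> Dict:
        result = self.act(approach)
        feedback = self.sense(result)
        self.history.append({"approach": approach, "feedback": feedback})
        # No fixed goal is consulted here: the loop itself is the commitment,
        # and the accumulated history is the feedback loop to learn from.
        return self.adapt(approach, feedback)
```

In the first framing, all the attention goes to goal_met; in the second, it goes to the sensing and adapting steps, which is where the energy discussed above is actually spent.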

To realize this requires an adaptive approach to strategy and evaluation where the two go hand-in-hand and are used systemically. It means pushing aside and rejecting more traditional performance measurement models for individuals and organizations and developing more fine-tuned, custom evaluative approaches that link data to decisions and decisions to actions in an ongoing manner.

It means thinking in systems, about systems, and designing ways to do both on an ongoing, not episodic, basis.

The irony is, by minimizing or rejecting the use of goals, you might better achieve a goal of a more impactful, innovative, responsive and creative organization with a real mission for positive change.

 

 

Image credit: Author

 

Tags: innovation, psychology, systems thinking

Decoupling creators from their creations


The world is transformed by creators — the artists, the innovators, the leaders — and their creations are what propel change and stand in the way of it. Change can be hard, and it’s made all the harder when we fail to decouple the creator from the created, inviting resistance rather than enticing better creations.

If you want to find the hotspot for action or inaction in a human system, look at what is made in that system. Human beings defend, promote and attend to what they made more than anything. Just watch.

Children will rush to show you what they made: a sandcastle, a picture, a sculpture, a… whatever it is they are handing you. Adults don’t grow out of that: we are just big kids, only more subtle in how we promote and share what we do, and we still place an enormous amount of psychological energy on our creations. Sometimes it’s our kids, our spouses (a relationship we created), our ideas, our way of life, our programs, policies, or businesses. Even something like a consumer purchase is, in some ways, a reflection of us, in that we are making an identity or a statement with it.

Social media can feel like one big echo chamber sometimes, but it’s that way because we are often so busy sharing our opinions that we’re not listening — focusing on what we made, not what others made. Social media can be so harsh because when we attack ideas — our 140-character creations, sometimes — we feel as if we are being attacked. This is one of the reasons we are having such traumatized, distorted discourse in the public domain.

Creations vs creators

The problem with any type of change is that we often end up challenging both the creator and the creation at the same time. It’s saying that the creation needs to change and that can hurt the creator, even if done unintentionally.

The interest in failure in recent years is something I’ve been critical of, but a positive feature of it is that people are now talking more openly about what it means to not succeed, and that is healthy. There are lots of flaws in the failure talk out there, most notably that it fails (no pun — even a bad one — intended… but let’s go with it) to decouple the creator from the creation. In many respects, creators and their creations are intimately tied together, as one is the raw material and vehicle for the other.

But when we constantly need to apologize for, make amends for, or defend our creations, we are using a lot of energy. In organizational environments an inordinate amount of time is spent strategizing, evaluating, and re-visiting innovation initiatives, but the simple truth is that — as we will see below — it doesn’t make a lick of difference, because we are confusing the creators with the creations and with the systems in which they are creating.

Sometimes we need to view the creator and creation separately and that is all a matter of trust – and reality.

 

A matter of trust, a matter of reality

If you are part of a team that has been tasked with addressing an issue and given a mandate to find and possibly create a solution, why should it matter what you produce? That seems like an odd question or statement.

At first it seems obvious: for accountability, right?

But consider this a little further.

If a team has been given a charge, it is presumably the best group available given the time and resources at hand. Maybe there are others more suited or talented for the job; perhaps there is a world expert out there who could help; but because of time, money, logistics, or some combination of reasons, in a given situation that isn’t possible. We are now dealing with what is, not what we wish to be. This is sometimes called the real world.

If that team does not have the skills or talent to do it, why is it tasked with the problem? If those talents and skills are unknown and there is no time, energy, commitment — or means — to assess them in practice, then you are in a truly innovative space: let’s see what happens.

In this case, accountability is simply a means of exploring how something is made, what is learned along the way, and what kinds of products are produced, knowing that there is no real way to determine their comparative merit, significance, or worth — the hallmark tenets of evaluation. It’s experimental, and there is no way to fail, except to fail to attend to it and learn from it.

This is a matter of trust. It’s about trusting that you’re going to do something useful with what you have, and that’s all. The right/wrong debate makes no sense because, if you’re dealing with reality as discussed above, there are no other options aside from not doing anything at all. So why does failure have to come into it?

This is about trusting creators to create. It has nothing to do with what they create because, if you’ve selected a group to do something that only they are able to do (for the reasons mentioned above), accountability has nothing to do with their creation.

Failing at innovation

The real failure we speak of might be failing to grasp reality, failing to support creative engagement in the workplace, and failing to truly innovate. These are the products of tasking individuals, groups, units, or some other entity within our organizations, matching them with systems that have no known means forward, and providing them with no preparation, tools, or intelligence about the problem to support doing what they are tasked with. The not-knowing part of the problem is not uncommon, particularly in innovative spaces. It’s nothing to sneer at; rather, it is a true learning opportunity. But we need to call it what it is, not what we want or pretend it to be.

My colleague Hallie Preskill from FSG is big on learning and is not averse to rolling her eyes when people speak of it, because the word is so often used flippantly and without thought. True learning means paying attention and giving time and focus to what material — experience, data, reflections, goals, and contexts — is available. Learning has a cost, and it has many myths attached to it. Its difficulty is why many simply don’t do it. In truth, many are not as serious about really learning as they are about talking about learning. This is what Hallie sees a lot.

The material for learning is what these innovators, these creators, are producing. So if we value creation and innovation, we need to pay attention to this material and entice creators to continue generating quality ‘content’ for us to learn from, and worry less about what these creators produce when we task them with innovation missions that have no chance to ‘succeed’, just as they have no chance to ‘fail’.

A poor question leads us to poor answers.

Consider what we ask of our innovators and of innovation and you’ll see that, if we want more and better creators, and more and better creations, we need to see them in a new light and create a culture that sees them both for what they are, not what we want them to be.

Image credit: Author