Defining the New Designer

Who is the real designer?

It’s been suggested that anyone who shapes the world intentionally is a designer; however, those who train and practice as professional designers question whether such a definition obscures the skill, craft, and ethics that come from formal disciplinary affiliation. Further complicating things is the notion that design thinking can be taught and that the practice of design can be applied far beyond its traditional bounds. Who is right, and what does it mean to define the new designer?

Everyone designs who devises courses of action aimed at changing existing situations into preferred ones. – Herbert Simon, Scientist and Nobel Laureate

By Herb Simon’s definition, anyone who intentionally directs their energy towards shaping their world is a designer. Renowned design scholar Don Norman (no relation) has said that we are all designers [1]. Defined this way, the term design becomes more accessible and commonplace, removing it from the sense of elitism it has often been associated with. That sounds attractive to many, but it has raised significant concerns among those who identify as professional designers, and it opens up the question of what defines the new designer as we move into an age of designing systems, not just products.

Designer qualities

Design is what links creativity and innovation. It shapes ideas to become practical and attractive propositions for users or customers. Design may be described as creativity deployed to a specific end – Sir George Cox

Sir George Cox, the former head of the UK Design Council, wrote the above statement in the seminal 2005 Cox Review of Creativity in Business in the UK. His definition sees design as the strategic deployment of creative energy to accomplish something. This can be done mindfully, skilfully and ethically, with a sense of style and fit, or it can be done haphazardly, unethically, incompetently and foolishly. It would seem that designers must put their practice of design thinking into use, which includes holding multiple contradictory ideas in one's head at the same time. Contradictions of this sort are a key quality of complexity, and abductive reasoning, a means of thinking through such contradictions, is considered a central feature of design thinking.

Indeed, this ‘thinking’ part of design is considered a seminal feature of what makes a designer what they are. Writing on Cox’s definition of design, Mat Hunter, the Design Council’s Chief Design Officer, argues that a designer embodies a particular way of thinking about the subject matter and weaves it through active practice:

Perhaps the most obvious attribute of design is that it makes ideas tangible, it takes abstract thoughts and inspirations and makes something concrete. In fact, it’s often said that designers don’t just think and then translate those thoughts into tangible form, they actually think through making things.

Hunter might be getting closer to distinguishing what makes a designer so. His perspective assumes that people are reflective, abductive, and actively employ design thinking, which is an assumption I’ve often found to be incorrect. Even with professionals, you can instill design thinking, but you can’t make them apply it (why this is the case is a topic for another day).

This invites the question: how do we know people are doing this kind of thinking when they design? The answer isn’t trivial if certain thinking is what defines a designer. And if they are applying design thinking through making, does that qualify them as a designer, whether or not they have accreditation or a formal degree in design?

Designers recognize that their training and professional development confer specific advantages and require skill, discipline, craft and a code of ethics. Discipline is a cultural code of identity. For the public and consumers of designed products, it can also provide some sense of quality assurance to have credentialed designers working with them.

Yet those who practice what Herb Simon speaks of are also designers of something. They are shaping their world, doing so with intent, and many are doing it with a level of skill and attention comparable to that of professional designers. So what does it mean to be a designer, and how do we define this in light of the new spaces where design is needed?

Designing a disciplinary identity

Writing in Design Issues, Bremner & Rodgers (2013) [2] argue that design’s disciplinary heritage has always been complicated and that its current situation is being affected by three crisis domains: 1) professional, 2) economic, and 3) technological. The first is partly a product of the latter two as the shaping and manufacture of objects are transformed. Materials, production methods, knowledge, social context and the means of transporting the objects of design, whether physically or digitally, have transformed the market for products, services and ideas in ways that have necessarily shaped the process (and profession) of design itself. They conclude that design is not disciplinary, interdisciplinary or even transdisciplinary, but largely alterdisciplinary: boundless in time and space.

Legendary German designer Dieter Rams is among the most strident critics of the everyone-is-a-designer label and believes this wide use of the term designer takes away from the seriousness of what design is all about. Certainly, if one believes John Thackara’s assertion that 80 per cent of the impact of any product is determined at the design stage, the case for serious design is clear. Our ecological wellbeing, social services, healthcare, and industries are all designed and have enormous impact on our collective lives, so it makes sense that we approach designing seriously. But is reserving the term design(er) for an elite group the answer?

Some have argued that elitism in design is not a bad idea and that this democratization of design has led to poorly crafted, unusable products. Andrew Heaton, writing about User Experience (UX) design, suggests that this elitist view is less about moral superiority and more about better products and greater skill:

I prefer this definition: elitism is the belief that some individuals who form an elite — a select group with a certain intrinsic quality, specialised training, experience and other distinctive attributes — are those whose views on a matter are to be taken the most seriously or carry the most weight. By that definition, Elitist UX is simply an insightful and skilled designer creating an experience for an elevated class of user.

Designing through and away from discipline

Designers recognize that their training and professional development confer specific advantages and require skill, discipline, craft and a code of ethics, but there is little concrete evidence that they produce better-designed outcomes. Design thinking has enabled journalists like Helen Walters, healthcare providers like those at the Mayo Clinic, and business leaders to take design and apply it to their fields and beyond. Indeed, it was business-journalist-cum-design-professor Bruce Nussbaum who is widely credited with contributing to the cross-disciplinary appeal of design thinking (and its critique) through his work at BusinessWeek.

Design thinking is now something that has traversed whatever discipline it was originally rooted in — which seems to be science, design, architecture, and marketing all at the same time. Perhaps unlocking it from discipline and the practices (and trappings) of such structure is a positive step.

Discipline is a cultural code of identity, and for the public it can be a measure of quality. Quality, though, is a matter of perspective, and in complex situations we may not even know what quality means until products are developmentally evaluated after being used. For example, what should a 3-D printed t-shirt feel like? I don’t know whether it should be like silk, cotton, polyester, nylon mesh, or something else entirely, because I have never worn one, and if I were to compare such a shirt to my current wardrobe I might be using the wrong metric. We will soon be testing this theory, with 3-D printed footwear already being developed.

Evaluating good design

The problem of metrics is the domain of evaluation. What is the appropriate measurement for good design? Much has been written on the concept of good design, but part of the issue is that what constitutes good design for a bike or a chair might be quite different for a poverty reduction policy, or perhaps a program to support mothers and their children escaping family violence. The idea of delight (a commonly used goal or marker of good design) as an outcome might be problematic in the latter context. Should mothers be delighted by such a program to support them in a time of crisis? That is a worthy goal, but I think if those involved feel safe, secure, cared for, and supported in dealing with their challenges, that is still a worthwhile design. Focusing on delight as a criterion for good design in this case is using the wrong metric. And what about the designers who bring about such programs?

Or should such a program be based on a designer’s ability to empathize with users and create adaptive, responsive programs that build on evidence and need simultaneously, without delight being the sole goal? Just as healthy food is not always as delightful for children as ice cream or candy, there is still a responsibility to ensure that design outcomes are appropriate. The new designer needs to know when to delight, when and how to incorporate evidence, and how to bring all of the needs and constraints together to generate appropriate value.

Perhaps that ability is the criterion by which we should judge the new designer, encouraging our training programs, our clients (and their asks and expectations), our funders and our professional associations to consider what good design means in this age of complexity and then figure out who fits that criterion. Rather than building from discipline, consider deriving quality from the outcomes and processes used in design itself.

[1] Norman, D. (2004). Emotional Design: Why We Love (or Hate) Everyday Things. New York, NY: Basic Books.

[2] Bremner, C. & Rodgers, P. (2013). Design without discipline. Design Issues, 29 (3), 4-13.

Science of Team Science

For the last two days I’ve been attending the Science of Team Science conference at Northwestern University in Chicago. It is what I can only imagine is the closest thing to the Super Bowl or World Cup of team science (minus the colourful jerseys, rampant commercialism, and hooligans — although that would have made quite an impact as academic conferences go).

The presentations over the first day and a half have illustrated how far we have come in just a few years. In 2008, a similar conference was held near the NIH campus in Bethesda, MD. That event, sponsored by the US National Cancer Institute, was an attempt to raise the profile of team science by highlighting the theories and rationale for why collaboration, networks and multi-investigator applied research might be a good idea. The conference was aimed at sparking interest in the phenomenon of collaborative team research for health and resulted in a special issue of the American Journal of Preventive Medicine highlighting some of the central ideas.

Although many of the same people are attending this conference as were here two years ago, the content and tenor of the conversation is markedly different. The biggest difference is that the idea of team science no longer needs to be sold (at least, to the audience in the room). There is wide agreement among attendees that team science is a good thing for a certain set of problems (particularly wicked ones) and that it will not replace normal science, but rather complement it or fill in gaps that standard research models leave.

There is also much contention. Unlike at other conferences, though, this contention is less about a clash between established bodies of knowledge and more about uncertainty over the direction team science is going and the best routes to get there, wherever “there” is. Stephanie Jo Kent, a communications researcher from UMass, has been live blogging at the event (and encouraging the audience to join in: follow #teamsci10 on Twitter or Stephanie @stephjoke) and wrote a thoughtful summary of the first day on her blog. Here she points to one of the biggest challenges that the emergent field of team science and the conference attendees will need to address: getting beyond “the what” of team science.

She writes:

Because everyone has their own thing that they’re into, whether its research or administration or whatever, we would have to come up with “a meta-thing” as a goal or aim that everyone – or at least a solid cadre of us – could get behind. What if we decided to answer the process question? Instead of focusing on, “What is ‘the what’ of team science?” which takes as its mission connecting the science; we propose an examination of self-reflective case studies in order to identify “what works” and thus be able to explain and train people in the skills and techniques of effective team science.

This issue of training is an important one. My own research with the Research on Academic Research (RoAR) project has found that many scientists working in team science settings don’t know how to do it when they start out. We scientists are rarely trained in collaboration and teamwork, and those who do receive such training rarely get it as part of their scientific education.

It will be interesting to see where things go from here. I suggest following us all on Twitter to find out.

Education for Innovation


Are we creating the kind of innovators who suit the digital economy? Who respond to any opportunity, not just the ones we plan for? There’s a lot of thinking out there that suggests we’re not.

I recently read an article on how to train for innovation in the December 2009 issue of the Harvard Business Review. When most people speak of innovation, it seems as if they do so with the idea that there are some key steps or tricks to being an innovator and that is about it. But what Gina Colarelli O’Connor, Andrew Corbett, and Ron Pierantozzi argue is that breakthrough innovation unfolds in three phases that build on one another, each calling for a different type of innovator depending on where they are in their career. This is an important and interesting idea and a shift from the traditional mindset.

The authors state:

Companies must first understand that breakthrough innovation consists of three phases:

Discovery: Creating or identifying high-impact market opportunities.

Incubation: Experimenting with technology and business concepts to design a viable model for a new business.

Acceleration: Developing a business until it can stand on its own.

To address this, they suggest training people to match these distinct phases:

Each phase lends itself to distinct career paths, as well. The bench scientist, for instance, may eventually want to be involved in policy discussions about emerging technologies and how they may influence the company’s future. The incubator may want to pursue a technical path – managing larger, longer-term projects – or to manage a portfolio of emerging businesses. And the accelerating manager may want to stay with the business as it grows, take on a leadership role.

They go on to add:

Rather than develop those paths, however, many firms assume that an individual will be promoted along with a project as it grows from discovery through to acceleration. In reality, individuals with that breadth of skill sets are extremely rare. In other words, companies have essentially been setting their innovators up to fail.

And fail is what we seem to be doing. It’s an intriguing idea and one that raises the question of whether our schools and training programs are doing the right thing by training people to be just “innovators”.
Certainly, in the realm of digital technology and its use in adapting to a changing climate and accelerating innovation, the view is pessimistic. Some, such as David Johnston, President of the University of Waterloo, believe that we don’t train people in a mindset that allows them to innovate. Johnston told the CBC:

Johnston said the country’s university system must shoulder part of the blame for the lag in Canada’s technological mindset. The schools haven’t done enough to train students to work smarter, he said, which means that few Canadian companies succeed based on innovation. Of Canada’s biggest companies, most are banks, while only BlackBerry maker Research In Motion — also based in Waterloo — has succeeded internationally, mainly because it has focused on innovation.


Perhaps the problem is that we use the term innovation so loosely that graduates fail to recognize where innovations are or how to move them along. Or, as Colarelli O’Connor and colleagues point out, they are trained with the wrong set of skills for the innovation stage they find themselves in.

Scientific Discovery, Innovation, Creativity and How We’re Killing It


The black hole visualization above might be more than just a depiction of something in the outer regions of space; it could be an apt metaphor for what is taking place in our research institutes and universities as far as young scientists are concerned. As a young scientist and innovator(?) (I’ll come back to that term later), this is a deeply personal issue, so keep that in mind as you read on. I am also an educator, responsible for training a new group of scientists, health practitioners, and social innovators in public health (our future discovery agents), and it is in this latter role that I am most upset and passionate about the issue that is befalling scientific research: a systematic strangling of opportunities for young people.

Jonah Lehrer recently explored this topic in his column in the Wall Street Journal and on his science blog (“The Frontal Cortex”), and I’m very glad he did. Lehrer points to the widening gap between those who have funding and those who do not (the rich getting richer) and how this trend is hurting researchers at the very beginning of their careers, the time they are most likely to make breakthrough discoveries.

In 1980, the largest share of grants from the National Institutes of Health (NIH) went to scientists in their late 30s. By 2006 the curve had been shifted sharply to the right, with the highest proportion of grants going to scientists in their late 40s. This shift came largely at the expense of America’s youngest scientists. In 1980, researchers between the ages of 31 and 33 received nearly 10% of all grants; by 2006 they accounted for approximately 1%. And the trend shows no signs of abating: In 2007, the most recent year available, there were more grants to 70-year-old researchers than there were to researchers under the age of 30.

My personal experience in Canada is that this pattern is not much different. As the available grant opportunities decrease or stagnate, the pie stays roughly the same size while the number of people wanting to eat from it grows. And to compound the problem, more researchers are staying longer in the field. As Lehrer notes, if you’re 70 or older you’re more likely to get an NIH grant than someone under the age of 30. What does that say to young scholars?

In order to be more competitive in grant competitions and for jobs, students are considering postdocs and spending longer in training, paying money into an education and deferring potential employment-related income, with the hope that it will pay off. Recent National Science Foundation data show this trend:

New doctorate recipients are increasingly likely to take postdocs, and that is evident in the 2006 SDR data: among all SEH doctorate recipients, 38% had held a postdoc at some point in their careers (table 1). More recent cohorts were more likely than earlier ones to have held a postdoc: 45% of those earning the doctorate within the last 5 years compared with 31% of those who earned the doctorate more than 25 years ago.

TABLE 1. Percentage of SEH doctorate recipients who have held one or more postdocs, by years since doctorate and broad field of doctoral study: 2006.

If that payoff is a stable research position and the ability to start up their own research group, then they are likely to be disappointed, as Lehrer notes:

The age distribution of NIH grants has significant implications for American science. It has become much harder for young scientists to establish their own labs. According to the latest survey from the National Science Foundation, only 26% of scientists hold a tenure-track academic position within six years of receiving their Ph.D.

Jason Hoyt discusses this further, illustrates the trends in grant funding, and asks whether we have too many PhDs in the first place given this trend. Good question. But perhaps a better question is whether we’re killing off innovation, discovery and creativity by stifling the ability of people in their most creative years to do the work in the first place. And are we discouraging the next generation of young scientific leaders by making life for early career researchers so difficult that talented, creative people self-select out of the applicant pool for graduate school and faculty posts?

The issue of tenure mentioned above is not a moot point. To those outside the academy, the concept of tenure surely seems anachronistic in uncertain economic times, and I certainly appreciate that there will be little sympathy for non-tenured scientists and professors among the public. However, it is worth pointing out that the science and innovation done in research has a time horizon that is not the same as in other jobs. A grant takes months to prepare, months to adjudicate (a recent grant I applied for had a deadline of October 15th and the decision will not be rendered for the competition until April), then anything from one to five years to do the research (if you’re lucky enough to have funding last more than a year or two), and then at least a year to publish your findings. This rests on the assumption that you get funded the first time and that your manuscripts get accepted the first time around. Neither of these is a reasonable assumption. Yet grants and publications are the primary measures used to assess success. And if people think researchers get paid too much, consider what 14 years of post-secondary education and training gets you, according to Hoyt:

With a PhD, a postdoc can expect to start, at most, US $42K a year in academia and $52K in industry.

I made $36.5K as a postdoc, and as an assistant professor at a major research university I make less on an hourly basis than all but my part-time data entry clerk (this includes my graduate student research assistants), considering the number of hours I have to work each week to get everything accomplished. My reason for doing this work isn’t money, but at some point, for most young researchers who have a mountain of student loan debt from a decade and a half of accumulated education and opportunity costs, it has to be.

The tenure gap also points to another shadow in the system: the idea that scientists raise their own salary. Thus, scientists and professors are spending an ever-increasing amount of time writing grants to pay themselves so that there is someone to do the research. In the United States, there exist mechanisms to put salaries into grants; in Canada, however, this is a relatively rare occurrence. The assumption is that universities and research institutes pay salaries and funders pay for research, which simply doesn’t hold true. Imagine having most of your creative talent spending a third of their time applying for funding to support them in…writing more grants to support them in writing more grants. When does the innovation happen? And when are scientists supposed to be doing all of that knowledge translation stuff that they’re increasingly expected to do?

Another problem for young researchers (and tied to the tenure or long-term contract issue) is that the value of an innovation is only really seen in hindsight; therefore, any profession based on innovation requires methods of promotion, retention and acknowledgment that at least somewhat fit this horizon (even if that fit is imperfect, particularly in basic sciences where the innovation isn’t always obvious for many years). How can you reasonably judge someone’s contribution based on a short period of time?
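To put that time horizon in perspective, here is a minimal sketch of the arithmetic, assuming rough durations drawn from the grant cycle described above; the specific numbers are illustrative placeholders, not figures from any funder.

```python
# Rough, assumed durations (in months) for a single grant-to-publication cycle.
# These values are illustrative only; they stand in for the ranges described above.

best_case = {
    "prepare the grant": 3,
    "adjudication": 6,        # e.g., an October deadline decided the following April
    "do the research": 12,    # a one-year award
    "publish the findings": 12,
}

# A more typical case, assuming a three-year project instead of a one-year award.
typical_case = {**best_case, "do the research": 36}

for label, phases in [("best case", best_case), ("typical case", typical_case)]:
    total = sum(phases.values())
    print(f"{label}: {total} months (about {total / 12:.1f} years)")

# Even the best case assumes the first application is funded and the first
# manuscript is accepted; each rejection adds months or years to the cycle.
```

Under those assumptions, even the best case runs close to three years from proposal to publication, and a more typical project runs closer to five, which is longer than the window many review and promotion processes use to judge a researcher’s output.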

Here we have a system that espouses the language of innovation without any mechanism to support it or, worse, with an entrenched pattern of behaviour designed to prohibit it. How much sense does that make?
