Archive for the ‘Innovation Strategy’ category

Opportunity Overload

August 26, 2008

Information overload has been with us since the dawn of electronic media. According to McLuhan’s theories (and Robert Logan’s recent enhancements to media theory), when we humans overextend a communications channel, we create a new one, commensurate with the increased volume and complexity of content that our culture generates. When we overwhelmed the capacity of radio and television (and print), the Internet emerged to expand our ability to communicate globally.

So each new media “channel” expands our scope and matches the developing complexity of communication. As we adapt to and learn the new media channel, our cognitive capacity – trained as it was in prior media eras – experiences cognitive infoload.

As the online experience consumes more of our attention, and with it our time, all of us notice the acceleration of overload. And with very little guidance from research, we are left with a range of practical time-management options, from the Pickle Jar to scheduling your email. But none of these address the fact of information overload, which threatens to significantly diminish the value of the web and email, as demonstrated by the problem of too many choices.

Jared Spool once posted (and podcasted) an interview with Barry Schwartz in which they discuss his book and the line of research into “choice overload,” which starts off with the Iyengar and Lepper Jam Study:

“… that showed when you present 30 flavors of jam at a gourmet food store, you get more interest but less purchasing than when you only show six flavors of jam. All of a sudden, it became an issue, or at least a possibility, that adding options could actually decrease the likelihood that people would actually choose any of them. More and more, because of that study, people have actually tried to study it in the wild, in the field, by getting companies to vary the variety that they offer and tracking both purchasing and also satisfaction. So that’s starting to happen, but there are not very many papers that are actually published on that. This whole line of work is only about five years old.”

There may be a common phenomenon underlying choice and information overload. Neither of these surfeits of stuff is problematic unless we’re interested, unless there’s an opportunity. Since information is neutral until deemed interesting, information overload is not problematic until we admit ever-larger boundaries of interest and attention. When we overwhelm short-term memory and task attention, we’re forced to stop and change the focus of attention. The same with choice – I don’t care whether there are 5 jams or 30 unless I really want jam. Otherwise, like the overload of celebrity stories in the public media, the overload is easy to ignore.

Once we evaluate email and user experience with the concept of opportunity overload, the angle of insight shifts from technology itself to the idea of value. While I could ignore 90% or more of my email, I also have extraordinary opportunities presented by way of this communication channel: not only most of my consulting projects, but collaborations, new tools, great ideas to work with, answers to questions I did not think to pose. It’s opportunity “push,” with the Web as opportunity “pull,” a nightmare of opportunity overwhelm if you let it.

As a research issue this interests me because it entails hermeneutics (individually rather than externally interpreted) and economics (as in the cost/value of opportunity). We attend to the extent we are emotionally engaged with the perceived value of the opportunity represented by a choice (a product or a message in an email). But attention is only the initial draw. There are significant cognitive demands in processing the value (what is this worth to me? How cool is that?) and the choice (which one do I want, or is it worth my time to evaluate further?).

To finally make a decision may require additional learning (which one really is better? Do I know enough to choose this opportunity? What are the costs in time and lost business/opportunity?). It may require communication (who should I ask about this? Wouldn’t Nick want to know about this?). Next thing we know, the day is gone!

So nobody except Miles the Marketer seems to be onto opportunity overload. (And Miles means to make you money, and I don’t, so go there if you want marketing opportunities!)


Cognitive impacts of Google’s info hegemony

July 19, 2008

Referring to the prior post, the title was meant to provoke and reprieve the Atlantic article thesis. As with many technological aids to cognitive augmentation, the answer is “both” dumber and smarter.

Perhaps we are all still only in the first few years of a new media behavior, and like “boiling frogs” we cannot see the effects on ourselves yet. Surprisingly, there are no in-depth research studies on Google-think. As someone who has researched and observed information behavior in the search and research domains for over 10 years, I want to consider longitudinal aspects, not just whether Google makes us “feel” smarter or dumber.

I have researchable concerns over the universal casual acceptance of Google’s information hegemony. We are smarter in some ways, for sure – but I have also sensed a rapid dismissal of Carr’s (Atlantic article) thesis, as if it were obvious he’s just making a fuss. There may be ways – ways we don’t have easy awareness of – in which continual Google use makes us dumber.

How do we know what behaviors will be obviated by growing up with a ubiquitous search appliance whose evolution of relevancy reflects popular choices? (Over time, anything popular reverts to the mean, which is not exactly “smart.”) PageRank bases relevancy on (among other things) having the highest number (and weighting) of citing pages to the given page. It displays (by default) only 10 items on the results page, and overwhelmingly people select the top hit in a search. While Google is powerful, the results display is not as helpful for browsing as – for example – the clustered responses of Clusty, or search engines like Scirus used in science research.
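As a rough sketch of that citation-weighting idea (and only that; Google’s production ranking blends many more signals than link structure, and this is my own illustration with an invented link graph), the core PageRank iteration can be written in a few lines of Python:

```python
# Minimal PageRank power-iteration sketch (illustrative only; the real
# ranking pipeline uses far more signals than link counts and weights).

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = set(links) | {t for targets in links.values() for t in targets}
    rank = {p: 1.0 / len(pages) for p in pages}

    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page in pages:
            targets = links.get(page, [])
            if not targets:                      # dangling page: spread its rank evenly
                share = damping * rank[page] / len(pages)
                for p in pages:
                    new_rank[p] += share
            else:                                # otherwise split rank among cited pages
                share = damping * rank[page] / len(targets)
                for t in targets:
                    new_rank[t] += share
        rank = new_rank
    return rank

# A page cited by many well-cited pages floats to the top of the list.
toy_web = {"a": ["c"], "b": ["c"], "c": ["a"], "d": ["c", "a"]}
print(sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]))
```

The point of the sketch is only the feedback loop: popularity reinforces popularity, which is exactly the reversion-to-the-mean worry above.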

Google rides our cultural proclivity toward instant gratification – we get a sufficient response VERY quickly, which makes a compelling argument for rapidly exploring the top hit. How often do we pursue the hits on page 3 or further? Do we know what knowledge we are avoiding in our haste? Why do we think the most-referred-to pages are the most “relevant” to our real needs? This “instant good enough” may lead us to demand the same of other types of services and supposed knowledge.

Kids may then demand this type of easy, superficial access from their teachers. A quick relevant story: the teacher I probably learned the most from in all my years of formal education was Dave Biers, who taught graduate psychology research methods and stats. Rather than laser-print his worksheets clearly, he insisted on using old, blurred mimeographed photocopies. The formulas were barely readable – so you HAD to pay attention in class, where everything was explained and scrawled on the board. This made you attend class, and attend in class. If you didn’t understand, you couldn’t act as if you did. Illegibility was a deliberate learning device.

In a 2005 article in Cognition, Technology and Work I reported on a study at the University of Toronto on information practices in scientific research, including the trend of grad students using Google and PubMed instead of the expensive, dedicated research tools more often used by their faculty, such as SciFinder, Medline, and Web of Science. The earlier use of the more “opaque” search interfaces, now being obsoleted, had at one time trained a generation to think about the terms used in the domain of their research. Opacity is helpful when it reveals opportunities for further learning that you would miss if in a hurry.

This may have also enabled serendipity. Discoveries in science often happen by analogy and serendipitous relationships. Google’s ruthless immediacy and the transparency of the “top” answers bypass some of these learning and suggestion opportunities. Even Google Scholar hides a lot more than it shows. How do we actually “slow down” the process of info foraging so that we can find patterns in a problem domain and not just assume the top hits are best?

Now consider the McLuhan tetrad model of the replacement of an older medium by a newer regime. The tetrad is a model for thinking through trends and impacts of media transformation. It is also a helpful way to map out the impacts of a new medium and to make predictions about its future directions.

So using the tetrad on Google we get:

  • What does the medium enhance?  Information foraging – finding many sufficient, alternative responses to a given question that can be described in simple keywords. Google amplifies our temporal effectiveness – it gives us the ability to respond quickly in time to almost any information need. It enhances our ability to communicate, by giving us access to other people’s points of view for a given topic of interest. It augments our (already-weakened by infoload) memories by allowing us to neglect exact dates, names, references until the point of need.
  • What does the medium make obsolete? Published encyclopedias, and many types of indexes. It obviates the memorizing of factual details, which can now be retrieved quickly when needed. (Exact retrieval is not a typical competency of human cognition). It reduces the importance of directories, compiled resources, catalogs, list services, even editorial compilations such as newspapers.
  • What does the medium retrieve that had been obsolesced earlier? Do we know yet? It may return the ability to create context across domains of learning. It may enable the multi-dimensional thinking that was more common in the 18th and 19th centuries than today. Recent re-readings of Emerson and Thoreau have left me astonished at the breadth of lifeworld of authors of that time. They had a Renaissance-person grasp of culture, news, politics, geography, literature, scientific developments, and the intellectual arguments of their time. Our culture lost much of this in the specialized education created to satisfy the demands of industrialization. I have hope that searching may lead to a broader awareness and access to the multitude of meaningful references that can be positioned into waiting dendrites in our pre-understanding of things.
  • What does the medium flip into when pushed to extremes? Google is flipping into itself. Google has already flipped into the world online library (Print and Books), it has flipped into the world online geosearch (Earth) and navigation (Maps). Images. News. Video. These are not just object types – these are new media with new possibilities. What’s next? Immersive broadband imagery by your preferred channel of perception.

What it does not help us with is version control. I had to rewrite the tetrad from memory after (apparently) clearing the WordPress editor somehow, clicking Save, and then finding the editor empty – why isn’t there a Google Undo yet?

Designing design in non-design organizations

June 3, 2008

Should designers embed with their clients?

Designers have tied themselves closely to their clients since the early days of the Vatican. In design consulting, you must understand your clients’ business to advise effectively. So we have to work closely with clients to understand their users/customers.

We’ve done this since 2001 as a boutique research/design consulting firm, and have noticed that smaller consulting firms have always done this. It’s the larger firms like IDEO that have to formalize a process for customer intimacy – but when you’re already close to your client, you nurture them in many ways outside of the contractual relationship.

The evolving processes of “Design 3.0” have now also turned this imperative toward the organization itself – organizational processes are becoming “designable options.” In ever more projects, we are advising on user experience processes, consulting on overall product design and branding, conducting holistic UX research (end to end), and advising on organizational design and new practices.

Rather than merely extending an organization’s UX capacity, we are designing that capacity, more management consulting than “design delivery.” I stay close to long-term clients and often work as an extended capacity for their internal UX organization. Redesign has partnered with organizations that have no formal UX group, and we’ve developed a model for just-in-time education of product managers, prototypers, and whoever is the closest equivalent to UX in a company. We call this process socialization, which in practice looks like collaborative consulting. This approach also lets a smaller consulting firm like Redesign consult strategically, guiding process change and adapting the new UX processes closely to the client’s strategic intent and product portfolio.

A problem with larger design agencies is they cannot afford to seat their better designers or advisors with clients in a mentoring capacity, and their rate structure won’t easily allow them to give up the time. If we all did a better job of educating the client while working on projects, this would not seem a novel idea but instead a standard practice. We also need to realize that better transition planning (the deliverables handoff from design to development) will reduce the need for mitigating turmoil in the client’s implementation of our design plans.

Making a Difference by Design

May 3, 2008

Like the onerously overused “innovation,” transformation may be getting a bad rap. Both are broad, overstated terms that mean very different things to people, depending on background, experience, industry. Both must be defined in their contexts of use before we can have any serious discussion. The wide range of meanings and uses of transformation should give us pause before going too far with the term in mixed company. But transformation (as in organizational) has been merging closer to design (as in envisioned, creative, structured changemaking and sensemaking).

Time magazine may have just eased our quandary by making Humantific, and its transformation practice, a sort-of household term. People may know what we mean now. In Different by Design, Time reports on New York’s Humantific, and the West Coast’s IDEO and Jump Associates. While we’ve seen tons of press on IDEO in recent years, the three-paragraph exposure of Humantific (with a nice shot of GK and Elizabeth) was refreshing. The brief piece keeps it light; nothing is mentioned about their practice areas or methods (Strategic Co-creation, Visual Sensemaking, Complexity Navigation, Innovation research).

Also see:

NextD.org (Transforming that Sustainability Thing)

Jump Associates

IDEO Transformation by Design

The Hub

Designs of the Time

No post would be complete without advocating my perspective on transformation. In a paper presented at the 2007 INCOSE Symposium I suggested:

The general thrust of transformation efforts aims toward significant organizational changes that institutionalize desired behaviors necessary for long-term business success. While some management thinkers may place the responsibility solely on management to accomplish transformation, in our view successful transformation depends on the collaboration of all stakeholders in the enterprise, at a minimum by adopting the new practices as full participants. This view is supported by Kotter (1995), whose findings show transformation efforts fail to the extent that organizational communication and collaboration fails.

Indeed, that seems to be a suitably complex, interesting design problem.

Real innovators fail, more.

April 29, 2008

I follow the Freakonomics blog in the New York Times online – one of the few that I do follow anymore. (Blogs have become so abundant worldwide that any opinion or commentary is cheap and available. In such an infoloaded ecology, only the relevant, compelling, and well-written rise above the noise. Relevancy and context rule.)

So I’m fascinated by the shifting trends in economics, leading to ecological thinkers such as Steven Levitt and Stephen Dubner opening up the discourse into areas that would risk the “credibility” of more mainstream economists. Freakonomics recently held a Quorum of several collaborating authors (Ashish Arora, John Seely Brown, Seth Godin, Bill Hildebolt, Daphne Kwon, and Mark Turrell) to dialogue on Measuring Innovation. Several of these are truly worth the read, but you’ll have to scroll – a lot. Freakonomics does not break out sections into new posts. (An innovation I would propose: arbitrary links you can add to permalink to a section in a public medium.) So, go to Kwon and Hildebolt:

While we track traditional industry metrics such as number of reviews, breadth of catalog, and quality of information, we’ve added new metrics that help define the goals of consumer word-of-mouth. Defining these new benchmarks helps us select new risk-taking projects that can speed us along our path to success.

How can our experience measuring innovation in the moment (rather than just looking backwards) be generalized for other entrepreneurs and managers?

We’re going to go on record and say that it is all about looking for and then celebrating the unique “failure metrics” in your business:

They list three measures of organizational failure that correspond to an innovative culture. These are all small-scale failures, not the cover-up, highly leveraged kind that brings down the product line. Consider:

1) The rate of failure. More small failures are better.

2) Failing along the right path. Embracing failure, however, brings you dangerously close to failure’s more deadly cousin, flailing.

3) The source of failures. Another measure we use to determine if our company is embracing failures is whether new strategic ideas are coming from all levels of the company.

And of course, the comments are always telling, worth a final scroll down for a scan.

We Tried To Warn You

March 23, 2008

In Boxes and Arrows, March 19

There are many kinds of failure in large, complex organizations – breakdowns occur at every level of interaction, from interpersonal communication to enterprise finance. Some of these failures are everyday and even helpful, allowing us to safely and iteratively learn and improve communications and practices. Other failures – what I call large-scale – result from accumulated bad decisions, organizational defensiveness, and embedded organizational values that prevent people from confronting these issues in real time as they occur.

So while it may be difficult to acknowledge your own personal responsibility for an everyday screw-up, it’s impossible to get in front of the train of massive organizational failure once it’s gained momentum and the whole company is riding it straight over the cliff. There is no accountability for these types of failures, and usually no learning either. Leaders do not often reveal their “integrity moment” for these breakdowns. Similar failures could happen again to the same firm.

I believe we all have a role to play in detecting, anticipating, and confronting the decisions that lead to breakdowns that threaten the organization’s very existence. In fact, the user experience function works closer to the real world of the customer than any other organizational role. We have a unique responsibility to detect and assess the potential for product and strategic failure. We must try to stop the train, even if we are many steps removed from the larger decision making process at the root of these failures.

The Innovator’s Long Nose

January 7, 2008

Bill Buxton, Toronto’s-own design luminary and Microsoft principal scientist, writes up a first thesis on The Long Nose of Innovation in the current Business Week.

My belief is there is a mirror-image of the long tail that is equally important to those wanting to understand the process of innovation. It states that the bulk of innovation behind the latest “wow” moment (multi-touch on the iPhone, for example) is also low-amplitude and takes place over a long period—but well before the “new” idea has become generally known, much less reached the tipping point. It is what I call The Long Nose of Innovation.

Such clarity of concept. The mirror-image of the Long Tail is intriguing here, since the Long Tail is largely about adoption and deep reach across expanded markets. The Long Nose idea suggests we don’t hit the sweet spot overnight. This is certainly true for many technological innovations – I can think of a few startups that might be today’s winners if they had had a long-range plan that allowed for a 10-year pop horizon. (Remember Firefly?) Wouldn’t it be good to predict which classes of systems have long noses, so that we might set our strategic sights toward reality?

This seems similar in diffusion process to the dreaded idea of incremental innovation (which I believe is a powerful competitive force). Incremental innovation has been out of favor in recent years mainly due to its lukewarm appeal in a hypercompetitive, Web-saturated marketplace. You will find few executives willing to stake their jobs on incremental innovation when they can drive and demand “disruptive innovation,” and have it now – or at least within the year. It takes an organization to innovate (as in “bring ideas to market”). Regardless of the soundness of theory, innovation is visible to us from marketplace results, and marketplace dynamics have a lot to do with the uptake of incrementally developed innovations or rapid breakthroughs.

I love the conclusion (not to steal thunder; it’s a short article and you should be reading it instead of my blog about it):

Instead, perhaps we might focus on developing a more balanced approach to innovation—one where at least as much investment and prestige is accorded to those who focus on the process of refinement and augmentation as to those who came up with the initial creation.

To my mind, at least, those who can shorten the nose by 10% to 20% make at least as great a contribution as those who had the initial idea. And if nothing else, long noses are great for sniffing out those great ideas sitting there neglected, just waiting to be exploited.

Perhaps there are several different innovation diffusion curves, depending on the case studies we use to describe their variations and on how we define innovation. Some innovations have significant impact but are infrastructural – so they have long curves and sharp breaks when they finally break through. Others are more market-based, as maybe the mouse was. As one writer said, Doug’s mouse was good enough in the 1960s, but with no PC market, there was no need for a mainframe mouse. (It’s not the innovation’s fault the cycle was so long!) This is a really valuable contribution.
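To make the “different curves” idea a little more concrete, here is a small sketch of my own (not from Buxton’s article) using the classic Bass diffusion model, one standard way of describing adoption curves. The parameter values are invented purely for illustration: a very weak external push stands in for a long-nose-style innovation, a stronger push for a market-led launch.

```python
# Bass diffusion model sketch: different parameter choices give visibly
# different adoption-curve shapes. Values below are illustrative, not fitted data.
import math

def bass_cumulative_adoption(t, p, q):
    """Fraction of the eventual market that has adopted by time t.
    p = coefficient of innovation (external influence),
    q = coefficient of imitation (word of mouth)."""
    e = math.exp(-(p + q) * t)
    return (1.0 - e) / (1.0 + (q / p) * e)

years = range(0, 21)
# Hypothetical "long nose": almost no external push, slow build, late sharp break.
long_nose = [bass_cumulative_adoption(t, p=0.002, q=0.6) for t in years]
# Hypothetical market-led launch: stronger push, faster early uptake.
market_led = [bass_cumulative_adoption(t, p=0.05, q=0.4) for t in years]

for t in years:
    print(f"year {t:2d}   long-nose {long_nose[t]:.2f}   market-led {market_led[t]:.2f}")
```

Whether the long nose is best modeled this way is an open question; the point is only that curve shape depends on the parameters we assume, which is why arguing over “the” diffusion curve may miss the point.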