Archive for the ‘Design ecology’ category

Opportunity Overload

August 26, 2008

Information overload has been with us since the dawn of electronic media. According to McLuhan’s theories (and Robert Logan’s recent enhancements to media theory), when we humans overextend a communications channel, we create a new one, commensurate with the increased volume and complexity of content that our culture generates. When we overwhelmed the capacity of radio and television (and print), the Internet emerged to expand our ability to communicate, globally.

So each new media “channel” expands our scope and matches the developing complexity of communication. As we adapt and learn the new media channel, our cognitive capacity – trained as it was in prior media eras – experiences cognitive infoload.

As the online experience consumes more of our attention and with it our time, all of us notice the acceleration of overload. And with very little guidance from research, we are left with a range of practical time-management options, from the Pickle Jar to scheduling your email. But none of these address the fact of information overload, which threatens to significantly diminish the value of the web and email – as demonstrated by the problem of too many choices.

Jared Spool once posted (and podcasted) an interview with Barry Schwartz where they discuss his book and the line of research into “choice overload,” which starts off with the Iyengar and Lepper Jam Study:

“… that showed when you present 30 flavors of jam at a gourmet food store, you get more interest but less purchasing than when you only show six flavors of jam. All of a sudden, it became an issue, or at least a possibility, that adding options could actually decrease the likelihood that people would actually choose any of them. More and more, because of that study, people have actually tried to study it in the wild, in the field, by getting companies to vary the variety that they offer and tracking both purchasing and also satisfaction. So that’s starting to happen, but there are not very many papers that are actually published on that. This whole line of work is only about five years old.”

There may be a common phenomenon underlying choice and information overload. Neither of these surfeits of stuff is problematic unless we’re interested, unless there’s an opportunity. Since information is neutral until deemed interesting, information overload is not problematic until we admit ever-larger boundaries of interest and attention. When we overwhelm short-term memory and task attention, we’re forced to stop and change the focus of attention. The same with choice – I don’t care whether there are 5 jams or 30 unless I really want jam. Otherwise, like the overload of celebrity stories in the public media, the overload is easy to ignore.

Once we evaluate email and user experience with the concept of opportunity overload, the angle of insight shifts from technology itself to the idea of value. While I could ignore 90% or more of my email, I am also presented with extraordinary opportunities by way of this communication channel: not only most of my consulting projects, but collaborations, new tools, great ideas to work with, answers to questions I did not think to pose. It’s opportunity “push,” with the Web as opportunity “pull” – a nightmare of opportunity overwhelm if you let it.

As a research issue this interests me because it entails hermeneutics (opportunity is individually, not externally, interpreted) and economics (as in the cost/value of opportunity). We attend to the extent we are emotionally engaged with the perceived value of the opportunity represented by a choice (a product or a message in an email). But attention is only the initial draw. Significant cognitive work is then demanded in processing the value (what is this worth to me? how cool is that?) and the choice (which one do I want, or is it worth my time to evaluate further?).

To finally make a decision may require additional learning (which one really is better? do I know enough to choose this opportunity? what are the costs in time and lost business/opportunity?). It may require communication (who should I ask about this? wouldn’t Nick want to know about this?). Next thing we know, the day is gone!

So nobody except Miles the Marketer seems to be onto opportunity overload. (And Miles means to make you money, and I don’t, so go there if you want marketing opportunities!)


Valuing tech vs. valuing learning

August 19, 2008

When will the computer finally recede into the ubiquitous background, as promised by Don Norman a decade ago? Instead, educational reform is grasping at technology as the innovation, bringing technology front and center, as you have pointed out here. But how do we expect students even younger than yours, Sam – such as inner-city high school students – to switch to an online pedagogy and self-educate with discipline?

It is the individual who chooses to self-educate – the technologies are tools, not the stuff of learning itself. I’m not as sanguine about the role of interactive tech per se in the classroom, even though two heavy hitters in innovation (Clay Christensen) and organizational learning (John Seely Brown, blogged here) have recently weighed in with tech-oriented reform promises.

Christensen says “For virtual learning to have this transformative impact, however, it must be implemented in the correct way. The theory of disruptive innovation shows us a way forward.”

A disruptive innovation transforms an industry not by competing against the existing paradigm and serving existing customers, but by targeting those who have no other option and are not being served — people we call non-consumers.

Little by little, disruptive innovations predictably improve. At some point, they become good enough to handle more complicated problems — and then they take over and supplant the old way of doing things.

The key is that instead of simply cramming computers in the back of classrooms as a tool of instruction as we have done in the past, we need to allow computer-based learning to take root in places where the alternative to computer-based learning is no learning at all. Only then will computer-based learning have a true impact in transforming education.

There are a few problems with Clay’s innovation theory as applied to education. I am a big fan of Innovator’s Dilemma, and have written up RPV as serious business theory in “real” articles, not blogs. But my issue with disruptive innovation in education is that the problem is NOT with students, or school systems as such. It is socio-economic, cultural, and systemic – a complex system, not a market of users or consumers. Disruptive innovation owes something to the concept of early adopters predicting the trend. But in education, the early adopters are the self-educators who work around the system. We can pick up and use anything, but that doesn’t mean other students should use the tech tools I did to self-educate. Example: long before the Internets, after exhausting the simple lessons in 5th grade, I would ask to leave the class and sit in the hallway and read the Britannicas.

And which students get to fail while the system tries to “go disruptive” and falls even further behind in the tyranny of state school district measures? A couple of years’ worth of classes before they get it right? Christensen’s innovation theory says large incumbent firms are literally unable to innovate in this way. But has he ever seen charter schools in real “urban” districts? These attempts at innovation lead to outsourcing (like to Sylvan), which does no good, and leaves “no learning behind” for others to bring forward as an innovation.

Take a look at Dayton, Ohio. Patricia started one of the Gates schools (she’s not teaching there, or anywhere in a system, anymore). The program was primarily problem-based, no issue with that. But self-motivated, self-directed learning kids excel at this already, regardless of technology. The charter schools in Dayton are (not to put too fine a point on it) total failures. The Gates program? Mixed – the self-motivated always do well, the others make teachers do twice the work they normally do, which is already a lot more than you can imagine. “It won’t change until society values education,” Patricia says. “It’s so much government cheese.”

Interaction flow or Activity flow?

August 12, 2008

Boxes and Arrows is a great source for the publication of in-depth discussions of ideas and concepts emerging in the user experience community. Originally more of an information architects’ how-to, nuts-and-bolts go-to service, it has grown into a true eJournal with good editorial review and a “real” community of readers who know each other (supra-virtual?). EiC Christina Wodtke and editor Austin Govella have done a great job of encouraging publications beyond the expected scope of the IA readership.

A new, well-cited article by Trevor van Gorp (another sharp alumnus of Calgary’s nForm shop) – Design for Emotion and Flow – brought a raft of comments related to designing for flow and to enhance positive emotional response in an interactive experience. Trevor gives guidelines for design based on his research and the classical model from Csikszentmihalyi (roughly sketched in code after the guidelines below):

[Figure: Flow channel]

1. A clear goal… The user navigates to accomplish a task, like seeking information on a particular topic or surfing for fun. This is an evolving goal, dependent on the options presented to the user and aided by logical information architecture, intuitive navigation, effective wayfinding and clear options for proceeding like information scent, breadcrumbs, meaningful labels, clear page titles, etc.

2. With immediate feedback on the success of attempts to reach that goal… The user receives quick, sensory feedback in the form of a visual shift and/or sound from links, buttons, menus, or other navigation items.

3. Presented as a challenge that you have the skills to handle. Csikszentmihalyi’s definition of flow in the context of user experience was debated: designing flow into the experience, versus designing to optimize the flow as an end in itself.
“The key element of an optimal experience is that it is an end in itself. Even if initially undertaken for other reasons, the activity that consumes us becomes intrinsically rewarding.”
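To make the challenge/skill balance in the figure and in point 3 concrete, here is a minimal sketch of the classical flow-channel model. The 0-to-1 scale, threshold, and tolerance values are illustrative assumptions of mine, not numbers from Csikszentmihalyi or from Trevor’s article.

# A rough, illustrative sketch of the classical flow-channel model pictured
# above: flow occurs when challenge and skill are roughly balanced and both
# are high enough. The 0..1 scale, threshold, and tolerance are my own
# illustrative assumptions.

def flow_state(challenge, skill, tolerance=0.2, threshold=0.3):
    """Classify an experience given challenge and skill on a 0..1 scale."""
    if challenge < threshold and skill < threshold:
        return "apathy"   # neither challenged nor exercising much skill
    if challenge > skill + tolerance:
        return "anxiety"  # challenge outstrips skill
    if skill > challenge + tolerance:
        return "boredom"  # skill outstrips challenge
    return "flow"         # balanced: inside the flow channel

# Example: a task slightly harder than the user's skill stays in the channel;
# a much harder task tips into anxiety.
print(flow_state(0.7, 0.6))  # flow
print(flow_state(0.9, 0.4))  # anxiety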

Andy Polaine, whose doctoral research was also on this topic, described another path toward interactive flow:

“The main point here is that interaction designers can encourage this self-contained activity, the intrinsically rewarding aspect unrelated to future benefit through the design of interactions and interfaces that are rewarding in themselves to use. Interfaces that are satisfying in their own right encourage users to play with them and explore them further, which means they learn them without thinking about learning them.”

My notion of facilitating flow in interactive experience design leans more toward optimizing the experience of a real-world activity or a whole system. In most of my design challenges, for Redesign Research anyway, we are aiming to design a total service to support a professional or intellectual activity of some sort. Consumer or discretionary experiences are the ones that may lend themselves to interaction for the sake of its own enjoyment, or to a gaming-type pursuit of experience.

In professional activity we might consider several questions relevant to optimizing practitioners’ cognitive flow, such as:

How is individual flow affected when multiple players are involved? (For example, you need a critical mass to make a multiplayer game, or Second Life, compelling enough to flow – there are tradeoffs between individual and group experience of flow).

How about the world beyond interactivity, which is where work and play live for most people?

Where is the focal experience of the flow? Where is it experienced?

Is it in the interactive experience or in the activity that the interaction supports?

Consider the design requirements for medical decision making – the flow is happening in the consult room, not in the information display. The physician, nurse, order clerk, pharmacist are part of a complex, continual loop-closing communications feedback system. The attempt to design-in flow states to an interactive experience could be counter-productive to total flow, which requires maintaining context awareness, status updates, attention cues to change in patient state, and providing brief-but-accurate communications at the time and point of need. Yes, flow could be improved. But a UX, designed in relative isolation from the total system, might over-flow the information display and sub-optimize the activity.  Is this situation amenable to design for flow?

Trade-offs. There might also be multiple interactions involved that trade off “more flow,” or enjoyable challenge, in one state against more radical efficiency in another.

Take an eBook reader, for example (a project I just finished). If an eBook vendor designs their platform for the purpose of maximizing reading flow while online, they may include features or navigation that impede the flow state of the researcher, who is attempting to understand a thread of ideas across a number of publications (a common task), or who is maximizing the number of references to an idea by finding all the citations in and across books.

Flow is a good example of a classical concept that has been retrieved for adaptation in a pragmatic design context, and may have a lot to offer practitioners. But we can also see where Flow Theory becomes weak – other perspectives (such as whole-system views) may require us to expand flow theory to accommodate them. Is Flow scalable? Should we keep interpreting Csikszentmihalyi, or should we start to make our own observations about interaction flow and extend the aging theory of flow for other domains?

Cognitive impacts of Google’s info hegemony

July 19, 2008

Referring to the prior post, the title was meant to provoke and to reprise the Atlantic article’s thesis. As with many technological aids to cognitive augmentation, the answer is “both” – dumber and smarter.

Perhaps we are all still only in the first few years of a new media behavior, and like “boiling frogs” we cannot see the effects on ourselves yet. Surprisingly, there are no in-depth research studies on Google-think. As someone who’s researched and observed information behavior in the search and research domains for over 10 years, I want to consider longitudinal aspects, not just whether Google makes us “feel” smarter or dumber.

I have researchable concerns over the universal casual acceptance of Google’s information hegemony.  We are smarter in some ways, for sure – but I have also sensed a rapid dismissal of Carr’s (Atlantic article) thesis, as if it were obvious he’s just making a fuss. There may be ways – ways in which we don’t have easy access to awareness – that continual Google use makes us dumber.

How do we know what behaviors will be obviated by growing up with a ubiquitous search appliance whose evolution of relevancy reflects popular choices? (Over time, anything popular reverts to the mean, which is not exactly “smart.”) PageRank bases relevancy on (among other things) the number and weighting of pages citing the given page. It displays (by default) only 10 items on the results page, and overwhelmingly people select the top hit in a search. While Google is powerful, the results display is not as helpful for browsing as, for example, the clustered responses of Clusty, or search engines like Scirus being used in science research.
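To make that citation-weighting idea concrete, here is a minimal sketch of a PageRank-style iteration. The toy link graph, damping factor, and iteration count are illustrative assumptions for the sketch, not Google’s actual parameters or implementation.

# Minimal PageRank-style iteration: a page's score is (mostly) the weighted
# sum of the scores of pages that cite it, so heavily cited pages rise to
# the top of the results. Illustrative only; not Google's implementation.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with a uniform score
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:             # pass rank along each outgoing link
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# Toy graph: "a" is cited by both "b" and "c", so it ranks highest.
toy_graph = {"a": ["b"], "b": ["a"], "c": ["a"]}
for page, score in sorted(pagerank(toy_graph).items(), key=lambda x: -x[1]):
    print(page, round(score, 3))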

It rides our cultural proclivity toward instant gratification – we get a sufficient response VERY quickly, making a compelling argument to rapidly explore the top hit. How often do we pursue the hits on page 3 or further? Do we know what knowledge we are avoiding in our haste? Why do we think the most-referred to pages are the most “relevant” to our real needs? This “instant good enough” may lead us to demand that value of other types of services and supposed knowledge.

Kids may then demand this type of easy, superficial access from their teachers. A quick relevant story: the teacher I probably learned the most from in all my years of formal education was Dave Biers, graduate psychology research methods and stats. Rather than laser-print his worksheets clearly, he insisted on using old, blurred photocopied mimeos. The formulas were barely readable – so you HAD to pay attention in class, where everything was explained and scrawled on the board. This made you attend class, and attend in class. If you didn’t understand, you couldn’t act as if you did. Illegibility was a deliberate learning device.

In a 2005 article in Cognition, Technology and Work, I reported on a study at the Univ of Toronto on information practices in scientific research, including the trend of grad students using Google and PubMed instead of the expensive, dedicated research tools used more often by their faculty, such as SciFinder, Medline, and Web of Science. The earlier use of the more “opaque” search interfaces, now being obsoleted, had at one time trained a generation to think about the terms used in the domain of their research. Opacity is helpful when it reveals opportunities for further learning that you would miss if in a hurry.

This may have also enabled serendipity. Discoveries in science often happen by analogy and serendipitous relationships. Google’s ruthless immediacy and the transparency of the “top” answers bypass some of these learning and suggestion opportunities. Even Google Scholar hides a lot more than it shows. How do we actually “slow down” the process of info foraging so that we can find patterns in a problem domain and not just assume the top hits are best?

Now consider the McLuhan tetrad model of the replacement of an older medium by a newer regime. The tetrad is a model for thinking through trends and impacts of media transformation. It is also a helpful way to map out the impacts of a new medium and to make predictions of its future directions.

So using the tetrad on Google we get:

  • What does the medium enhance?  Information foraging – finding many sufficient, alternative responses to a given question that can be described in simple keywords. Google amplifies our temporal effectiveness – it gives us the ability to respond quickly in time to almost any information need. It enhances our ability to communicate, by giving us access to other people’s points of view for a given topic of interest. It augments our (already-weakened by infoload) memories by allowing us to neglect exact dates, names, references until the point of need.
  • What does the medium make obsolete? Published encyclopedias, and many types of indexes. It obviates the memorizing of factual details, which can now be retrieved quickly when needed. (Exact retrieval is not a typical competency of human cognition). It reduces the importance of directories, compiled resources, catalogs, list services, even editorial compilations such as newspapers.
  • What does the medium retrieve that had been obsolesced earlier? Do we know yet? It may return the ability to create context across domains of learning. It may enable the multi-dimensional thinking that was more common in the 18th and 19th centuries than today. Recent re-readings of Emerson and Thoreau have left me astonished at the breadth of the lifeworld of authors of that time. They had a Renaissance-person grasp of culture, news, politics, geography, literature, scientific developments, and the intellectual arguments of their time. Our culture lost much of this in the specialized education created to satisfy the demands of industrialization. I have hope that searching may lead to a broader awareness and access to the multitude of meaningful references that can be positioned into waiting dendrites in our pre-understanding of things.
  • What does the medium flip into when pushed to extremes? Google is flipping into itself. Google has already flipped into the world online library (Print and Books), it has flipped into the world online geosearch (Earth) and navigation (Maps). Images. News. Video. These are not just object types – these are new media with new possibilities. What’s next? Immersive broadband imagery by your preferred channel of perception.

What it does not help us with is version control. I had to rewrite the tetrad from memory after (apparently) clearing the WordPress editor somehow and clicking Save, then finding the editor empty. Why isn’t there a Google Undo yet?

The Book is Dead – Long Live The Book!

June 10, 2008

This is a mail art call, one of the ongoing cultural artifacts spawned by Fluxus and Ray Johnson. Even if you don’t contribute, this is worth paying attention to, as cultural observers everywhere (Paul Krugman’s NYTimes op-ed on Friday) have been predicting the end of the book as we know it.

So what do you think? Is the printed book format in danger of becoming a relic from the Gutenberg Galaxy? In the eBooks research I’m currently engaged in, the printed book remains a preferred medium for textbooks, cover-to-cover reading, and texts for personal markup. eBooks are good for many things, but they do not replace the love of paper.

Books are themselves a system of signs, a packaging of signs that, when collected with sufficient other relevant texts, constructs a persistent identity, representations to others, and prompts of past literacies. You can walk into a colleague’s office and know their competencies, interests, specialties, and possible contact points for relationship. (Have you ever seen someone’s book collection in a first-date situation and decided, on sight, this was not ever gonna work? Or, maybe it just would?) Try doing that on the web.

THE LAST BOOK

(A Project by Luis Camnitzer, sponsored by the National Library of Spain)

Open call for collaborations
The Last Book is a project to compile written as well as visual statements in which the authors may leave a legacy for future generations. The premise of the project is that book-based culture is coming to an end. On one hand, new technologies have introduced cultural mutations by transferring information to television and the Internet. On the other, there has been an increasing deterioration in the educational systems (as much in the First World as on the periphery) and a proliferation of religious and anti-intellectual fundamentalisms. The Last Book will serve as a time-capsule and leave a document and testament of our time, as well as a stimulus for a possible reactivation of culture in case of disappearance by negligence, catastrophe or conflagration.

Contributions to this project will be limited to one page and may be e-mailed to lastbook.madrid@gmail.com or mailed to Luis Camnitzer, 124 Susquehanna Ave., Great Neck NY 11021, USA. In case of submission of originals, these will not be returned. The book will be exhibited as an installation at the entrance of the Museum of the National Library of Spain in Madrid at some point of 2008. Pages will be added during the duration of the project, with the intention of an eventual publication of an abridged version selected by Luis Camnitzer, curator of the project. The tentative deadline is October 15, 2008.

This call is open and we hope that it will be resent to as many potential contributors as possible.

Designing design in non-design organizations

June 3, 2008

Should designers embed with their clients?

Designers have tied themselves closely to their clients since the early days of the Vatican. In design consulting, you must understand your clients’ business to advise effectively. So we have to work closely with clients to understand their users/customers.

We’ve done this since 2001 as a boutique research/design consulting firm, and have noticed that smaller consulting firms have always done this. It’s the larger firms like IDEO that have to formalize a process for customer intimacy – but when you’re already close to your client, you nurture them in many ways outside of the contractual relationship.

The evolving processes of “Design 3.0” have now also turned this imperative toward the organization itself – organizational processes are becoming “designable options.”  In ever more projects, we are advising user experience processes, consulting on overall product design and branding, conducting holistic UX research (end to end), and advising on organizational design and new practices.

Rather than merely extending an organization’s UX capacity, we are designing that capacity – more management consulting than “design delivery.” I stay close to long-term clients and often work as an extended capacity for their internal UX organization. Redesign has partnered with organizations that have no formal UX group, and we’ve developed a model for just-in-time education of product managers, prototypers, and the closest equivalent to UX in a company. We call this process socialization, which looks like collaborative consulting in practice. This approach also lets a smaller consulting firm like Redesign consult strategically through process change, adapting the new UX processes closely to the client’s strategic intent and product portfolio.

A problem with larger design agencies is they cannot afford to seat their better designers or advisors with clients in a mentoring capacity, and their rate structure won’t easily allow them to give up the time. If we all did a better job of educating the client while working on projects, this would not seem a novel idea but instead a standard practice. We also need to realize that better transition planning (the deliverables handoff from design to development) will reduce the need for mitigating turmoil in the client’s implementation of our design plans.

Adobe’s CTO on UX Design

May 29, 2008

Knowledge@Wharton recently interviewed Kevin Lynch, Adobe’s AIR apparent CTO, elevated to CTO earlier this year to make Adobe Integrated Runtime (AIR) the next disruptive tech platform. What’s in the secret sauce? Lots of UX, since that’s the first thing Lynch mentions at kickoff time:

Knowledge@Wharton: You were recently given the title of Chief Technology Officer at Adobe. How is that different from your previous role as Chief Software Architect?

Lynch: I’ll be involved more with Adobe overall in terms of our technology direction and the problems we are trying to solve; working across the different business units at Adobe. To some degree, I was already doing this in my previous role with the platform technology [unit at Adobe] because it touches so many of the other things that we do. This is formalizing that more.

In terms of my day-to-day activities, I’m continuing to work with the platform [group] and I’ll also be working with our design group called XD — Experience Design — to pull together our Experience Design and our platform efforts. They are obviously somewhat related — you can see a lot of great design in the Flex framework and in the applications we produce — but there’s more opportunity to build usability and best practices into our frameworks that we are learning from the XD group.

Lynch notes three disruptive technologies they are focused on – web applications, mobile computing, and the ecosystems of social networks (and integration with directory management). But you knew that already. It is a good sign, though, that they are starting to lionize The Experience Design, even if they are overdoing it a bit on their promo pages. (I mean, do you ever see cool B&W shots of the software engineers that build the stuff? No, just designers, or now, as so many titles read, “experience designers.” Yes, but, do they know Human Factors?)

Adobe is still about tools for hot geeks, and not so much end-user applications. But I have to admit, they finally took a big leap forward with the latest Reader, which was improved when some UX researchers noticed that people often select text right away, and made that the default interaction mode when launching a PDF document.