Archive for the ‘Information Ecology’ category

Opportunity Overload

August 26, 2008

Information overload has been with us since the dawn of electronic media. According to McLuhan’s theories (and Robert Logan’s recent enhancements to media theory), when we humans overextend a communications channel, we create a new one – one commensurate with the increased volume and complexity of content that our culture generates. When we overwhelmed the capacity of radio and television (and print), the Internet emerged to expand our ability to communicate globally.

So each new media “channel” expands our scope and matches the developing complexity of communication. As we adapt to and learn the new media channel, our cognitive capacity – trained as it was in prior media eras – experiences cognitive infoload.

As the online experience consumes more of our attention, and with it our time, all of us notice the acceleration of overload. And with very little guidance from research, we are left with a range of practical time-management options, from the Pickle Jar to scheduling your email. But none of these address the fact of information overload itself, which threatens to significantly diminish the value of the web and email – as demonstrated by the problem of too many choices.

Jared Spool once posted (and podcasted) an interview with Barry Schwartz in which they discuss his book and the line of research into “choice overload,” which starts off with the Iyengar and Lepper Jam Study:

“… that showed when you present 30 flavors of jam at a gourmet food store, you get more interest but less purchasing than when you only show six flavors of jam. All of a sudden, it became an issue, or at least a possibility, that adding options could actually decrease the likelihood that people would actually choose any of them. More and more, because of that study, people have actually tried to study it in the wild, in the field, by getting companies to vary the variety that they offer and tracking both purchasing and also satisfaction. So that’s starting to happen, but there are not very many papers that are actually published on that. This whole line of work is only about five years old.”

There may be a common phenomenon underlying choice and information overload. Neither of these surfeits of stuff is problematic unless we’re interested, unless there’s an opportunity. Since information is neutral until deemed interesting, information overload is not problematic until we admit ever-larger boundaries of interest and attention. When we overwhelm short-term memory and task attention, we’re forced to stop and change the focus of attention. The same goes for choice – I don’t care whether there are 5 jams or 30 unless I really want jam. Otherwise, like the overload of celebrity stories in the public media, the overload is easy to ignore.

Once we evaluate email and user experience with the concept of opportunity overload, the angle of insight shifts from technology itself to the idea of value. While I could ignore 90% or more of my email, this communication channel also presents me with extraordinary opportunities: not only most of my consulting projects, but collaborations, new tools, great ideas to work with, answers to questions I did not think to pose. It’s opportunity “push,” with the Web as opportunity “pull” – a nightmare of opportunity overwhelm if you let it.

As a research issue this interests me because it entails hermeneutics (opportunities are interpreted individually, not externally) and economics (the cost/value of opportunity). We attend to the extent we are emotionally engaged with the perceived value of the opportunity represented by a choice (a product, or a message in an email). But attention is only the initial draw. Significant cognitive work is then demanded in processing the value (What is this worth to me? How cool is that?) and the choice (Which one do I want, and is it worth my time to evaluate further?).
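One way to make the economics concrete – my own back-of-the-envelope formalization, offered as an assumption rather than a published model – is an attention-allocation inequality: attend to an opportunity only when its expected value exceeds the cost of evaluating it plus the cost of what gets displaced:

    \text{attend if} \quad p \cdot V \;>\; C_{\text{eval}} + C_{\text{displaced}}

where p is the perceived probability the opportunity pans out, V its value if it does, C_eval the cognitive cost of processing value and choice, and C_displaced the value of whatever activity gets set aside. Opportunity overload, on this reading, is the state where merely estimating p and V for every incoming item costs more than most items are worth.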

Finally making a decision may require additional learning (Which one really is better? Do I know enough to choose this opportunity? What are the costs in time and lost business/opportunity?). It may require communication (Who should I ask about this? Wouldn’t Nick want to know about this?). Next thing we know, the day is gone!

So nobody except Miles the Marketer seems to be onto opportunity overload. (And Miles means to make you money, and I don’t, so go there if you want marketing opportunities!)

Interaction flow or Activity flow?

August 12, 2008

Boxes and Arrows is a great source for the publication of in-depth discussions of ideas and concepts emerging in the user experience community. Originally more of a how-to, nuts-and-bolts service for information architects, it has grown into a true eJournal with good editorial review and a “real” community of readers who know each other (supra-virtual?). EiC Christina Wodtke and editor Austin Govella have done a great job of encouraging publications beyond the expected scope of the IA readership.

A new, well-cited article by Trevor van Gorp (another sharp alumnus of Calgary’s nForm shop) – Design for Emotion and Flow – brought a raft of comments on designing for flow and enhancing positive emotional response in an interactive experience. Trevor gives guidelines for design based on his research and the classical model from Csikszentmihalyi:

Flow channel

1. A clear goal… The user navigates to accomplish a task, like seeking information on a particular topic or surfing for fun. This is an evolving goal, dependent on the options presented to the user and aided by logical information architecture, intuitive navigation, effective wayfinding, and clear options for proceeding – information scent, breadcrumbs, meaningful labels, clear page titles, etc.

2. With immediate feedback on the success of attempts to reach that goal… The user receives quick, sensory feedback in the form of a visual shift and/or sound from links, buttons, menus, or other navigation items.

3. Presented as a challenge that you have the skills to handle… In the comments, Csikszentmihalyi’s definition of flow was debated in the context of user experience: do we design flow into the experience, or design to optimize flow as an end in itself (see the sketch after this list)?
“The key element of an optimal experience is that it is an end in itself. Even if initially undertaken for other reasons, the activity that consumes us becomes intrinsically rewarding.”
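Read as pseudo-logic, the flow channel above amounts to a balance test between perceived challenge and perceived skill. Here is a minimal sketch of that reading in Python – the scales and thresholds are illustrative assumptions of mine, not values from Csikszentmihalyi:

    # Toy classifier for the flow-channel model: flow is predicted when
    # challenge and skill are roughly in balance (and high enough to engage).
    # The 0-10 scales and the band width are illustrative assumptions.

    def flow_state(challenge: float, skill: float, band: float = 1.5) -> str:
        if challenge < 2 and skill < 2:
            return "apathy"        # too little demand to engage at all
        if challenge - skill > band:
            return "anxiety"       # demands outrun ability
        if skill - challenge > band:
            return "boredom"       # ability outruns demands
        return "flow"              # challenge and skill in balance

    print(flow_state(challenge=7, skill=7))   # flow
    print(flow_state(challenge=9, skill=3))   # anxiety
    print(flow_state(challenge=2, skill=8))   # boredom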

Andy Polaine, whose doctoral research was also on this topic, described another path toward interactive flow:

“The main point here is that interaction designers can encourage this self-contained activity, the intrinsically rewarding aspect unrelated to future benefit through the design of interactions and interfaces that are rewarding in themselves to use. Interfaces that are satisfying in their own right encourage users to play with them and explore them further, which means they learn them without thinking about learning them.”

My notion of facilitating flow in interactive experience design leans more toward optimizing the experience of a real-world activity or a whole system. In most of my design challenges (for Redesign Research, anyway), we are aiming to design a total service to support a professional or intellectual activity of some sort. This differs from consumer or discretionary experiences, which may lend themselves to interaction for the sake of its own enjoyment, or to a gaming-type pursuit experience.

In professional activity we might consider several questions relevant to optimizing cognitive flow, such as:

How is individual flow affected when multiple players are involved? (For example, you need a critical mass to make a multiplayer game, or Second Life, compelling enough to flow – there are tradeoffs between individual and group experience of flow).

How about the world beyond interactivity, which is where work and play live for most people?

Where is the focal experience of the flow? Where is it experienced?

Is it in the interactive experience or in the activity that the interaction supports?

Consider the design requirements for medical decision making – the flow is happening in the consult room, not in the information display. The physician, nurse, order clerk, and pharmacist are part of a complex, continual loop-closing communications feedback system. The attempt to design flow states into an interactive experience could be counter-productive to total flow, which requires maintaining context awareness, status updates, attention cues for changes in patient state, and brief-but-accurate communications at the time and point of need. Yes, flow could be improved. But a UX designed in relative isolation from the total system might over-flow the information display and sub-optimize the activity. Is this situation amenable to design for flow?

Trade-offs. There might also be multiple interactions involved that trade off “more flow” or enjoyable challenge in one state against more radical efficiency in another.

Take an eBook reader, for example (a project I just finished). If an eBook vendor designs their platform to maximize reading flow while online, they may include features or navigation that impede the flow state of the researcher, who is attempting to understand a thread of ideas across a number of publications (a common task), or who is maximizing the number of references to an idea by finding all the citations in and across books.

Flow is a good example of a classical concept that has been retrieved for adaptation in a pragmatic design context, and it may have a lot to offer practitioners. But we can also see where flow theory becomes weak – other, whole-system theories may require us to expand flow theory to accommodate them. Is flow scalable? Should we keep interpreting Csikszentmihalyi, or should we start to make our own observations about interaction flow and extend the aging theory of flow for other domains?

Cognitive impacts of Google’s info hegemony

July 19, 2008

Referring to the prior post, the title was meant to provoke and to reprise the Atlantic article’s thesis. As with many technological aids to cognitive augmentation, the answer is “both” – dumber and smarter.

Perhaps we are all still only in the first few years of a new media behavior, and like “boiling frogs” we cannot yet see the effects on ourselves. Surprisingly, there are no in-depth research studies on Google-think. As someone who’s researched and observed information behavior in the search and research domains for over 10 years, I want to consider longitudinal aspects, not just whether Google makes us “feel” smarter or dumber.

I have researchable concerns over the universal casual acceptance of Google’s information hegemony.  We are smarter in some ways, for sure – but I have also sensed a rapid dismissal of Carr’s (Atlantic article) thesis, as if it were obvious he’s just making a fuss. There may be ways – ways in which we don’t have easy access to awareness – that continual Google use makes us dumber.

How do we know what behaviors will be obviated by growing up with a ubiquitous search appliance whose evolution of relevancy reflects popular choices? (Over time, anything popular reverts to the mean, which is not exactly “smart.”) PageRank bases relevancy (among other things) on having the highest number and weighting of citing pages to the given page. It displays (by default) only 10 items on the results page, and people overwhelmingly select the top hit in a search. While Google is powerful, the results display is not as helpful for browsing as, for example, the clustered responses of Clusty, or search engines like Scirus used in science research.
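For the mechanics, here is a minimal sketch of the classic PageRank power iteration as Brin and Page published it in 1998 – a toy illustration, not Google’s production ranking, which layers many more signals on top:

    # Minimal PageRank power iteration (after Brin & Page, 1998).
    # A page's score is the damped sum of its citers' scores, each split
    # evenly across that citer's out-links.

    def pagerank(links, damping=0.85, iterations=50):
        """links: dict mapping each page to the list of pages it links to."""
        pages = set(links) | {t for targets in links.values() for t in targets}
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
            for page, targets in links.items():
                if targets:
                    share = damping * rank[page] / len(targets)
                    for t in targets:
                        new_rank[t] += share
                else:  # dangling page: spread its rank across all pages
                    for p in pages:
                        new_rank[p] += damping * rank[page] / len(pages)
            rank = new_rank
        return rank

    # Toy web: "a" is cited by both "b" and "c", so it floats to the top.
    print(pagerank({"a": ["b"], "b": ["a"], "c": ["a"]}))

Note that relevancy here is purely structural – nothing in the iteration knows whether the popular answer is the smart one, which is exactly the regression-to-the-mean worry above.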

It rides our cultural proclivity toward instant gratification – we get a sufficient response VERY quickly, making a compelling argument to rapidly explore the top hit. How often do we pursue the hits on page 3 or further? Do we know what knowledge we are avoiding in our haste? Why do we think the most-referred-to pages are the most “relevant” to our real needs? This “instant good enough” may lead us to demand the same of other types of services and supposed knowledge.

Kids may then demand this type of easy, superficial access from their teachers. A quick, relevant story: the teacher I probably learned the most from in all my years of formal education was Dave Biers, who taught graduate psychology research methods and stats. Rather than laser-print his worksheets clearly, he insisted on using old, blurred, photocopied mimeos. The formulas were barely readable – so you HAD to pay attention in class, where everything was explained and scrawled on the board. This made you attend class, and attend in class. If you didn’t understand, you couldn’t act as if you did. Illegibility was a deliberate learning device.

In a 2005 article in Cognition, Technology and Work I reported on a study at the University of Toronto on information practices in scientific research. I reported on the trend of grad students using Google and PubMed instead of the expensive, dedicated research tools used more often by their faculty, such as SciFinder, Medline, and Web of Science. The earlier use of the more “opaque” search interfaces, now being obsoleted, had at one time trained a generation to think about the terms used in the domain of their research. Opacity is helpful when it reveals opportunities for further learning that you would miss if in a hurry.

This may have also enabled serendipity. Discoveries in science often happen by analogy and serendipitous relationships. Google’s ruthless immediacy and the transparency of its “top” answers bypass some of these learning and suggestion opportunities. Even Google Scholar hides a lot more than it shows. How do we actually “slow down” the process of info foraging so that we can find patterns in a problem domain and not just assume the top hits are best?
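Information foraging theory (Pirolli and Card, adapting Charnov’s marginal value theorem from behavioral ecology) offers one way to frame that question. The forager is modeled as maximizing the overall rate of information gain,

    R = \frac{G}{T_B + T_W}

where G is the information gained, T_B the time spent between patches (finding sources), and T_W the time spent within patches (reading them). Google collapses T_B toward zero, and the patch model then predicts exactly the behavior at issue: when travel between patches is nearly free, it becomes rational to skim many shallow patches and abandon each one early, rather than dig deeply into any single source.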

Now consider the McLuhan tetrad model of the replacement of an older medium by a newer regime. The tetrad is a model for thinking through trends and impacts of media transformation. It is also a helpful way to map out the impacts of a new medium and to make predictions about its future directions.

So using the tetrad on Google we get:

  • What does the medium enhance?  Information foraging – finding many sufficient, alternative responses to a given question that can be described in simple keywords. Google amplifies our temporal effectiveness – it gives us the ability to respond quickly in time to almost any information need. It enhances our ability to communicate, by giving us access to other people’s points of view for a given topic of interest. It augments our (already-weakened by infoload) memories by allowing us to neglect exact dates, names, references until the point of need.
  • What does the medium make obsolete? Published encyclopedias, and many types of indexes. It obviates the memorizing of factual details, which can now be retrieved quickly when needed. (Exact retrieval is not a typical competency of human cognition). It reduces the importance of directories, compiled resources, catalogs, list services, even editorial compilations such as newspapers.
  • What does the medium retrieve that had been obsolesced earlier? Do we know yet? It may return the ability to create context across domains of learning. It may enable the multi-dimensional thinking that was more common in the 18th and 19th centuries than today. Recent re-readings of Emerson and Thoreau have left me astonished at the breadth of the lifeworld of authors of that time. They had a Renaissance-person grasp of culture, news, politics, geography, literature, scientific developments, and the intellectual arguments of their time. Our culture lost much of this in the specialized education created to satisfy the demands of industrialization. I have hope that searching may lead to a broader awareness of, and access to, the multitude of meaningful references that can be positioned into waiting dendrites in our pre-understanding of things.
  • What does the medium flip into when pushed to extremes? Google is flipping into itself. Google has already flipped into the world’s online library (Print and Books); it has flipped into the world’s online geosearch (Earth) and navigation (Maps). Images. News. Video. These are not just object types – these are new media with new possibilities. What’s next? Immersive broadband imagery by your preferred channel of perception.

What it does not help us with is version control. I had to rewrite the tetrad from memory after (apparently) clearing the WordPress editor somehow, clicking Save, and then finding the editor empty. Why isn’t there a Google Undo yet?

Feeling dumber? Maybe it’s just Google-think.

July 13, 2008

Maybe it’s in the secret sauce? In the last month, I’ve heard several commentaries on the notion that sustained use of Google is affecting our thinking processes. As if Google were the “bad television” of the 21st century, the meme apparently suggests that overuse of Google searching is dumbing us down because of our passive, receptive way of consuming information.

The Atlantic’s recent article Is Google Making Us Stupid? (July/August issue) is the most immediate and critical reading for interested information seekers. Google, Nicholas Carr suggests, has perhaps caused a permanent alteration of our information and reading behaviors, not just searching, but browsing, reading texts, researching, and sensemaking. We (many of us) now skim the surface, jump around from link to link, and cannot attend to an entire article online, let alone an entire book offline (remember, they are still available in printed form). He cites a few examples of Very Smart Persons exhibiting these symptoms. Perhaps he’s right.

My wife Patricia, being an artist, was on the leading edge of this wave. She was concerned that Google was interfering with her imagination, which is the source and font of all wonder for the creative life. She was searching Google in her dreams. And she reports that she finds herself doing similar behaviors, relentlessly surfing and wandering the Net, losing all track of time. But she insists it’s a positive modification of mental life, if it is indeed permanent (she says “it’s the network – you’re able to see all the interconnections of things you never could before, you learn what’s behind everything”). Something like that, anyway. Maybe she’s right – a couple of years ago she was on about Tristram Shandy being the first hypertext novel, and how that really heralded post-modern thinking. So maybe people were trying to think the way Google makes us think as far back as 1759.

And just recently at the ELPUB conference in Toronto, in an offline conversation, John Senders (what, no Wikipedia article?), one of the founders of the field of human factors (from at least 1942), was observing basically the same thing (the “television is bad for you” part) about Google. His observation was, in effect, that Google was changing the way children learn and interact with knowledge. Rather than trial and error, observation, finding out for themselves, etc., young children would (and do) just search and rely on whatever they locate online. His main concern was the eventual (or even current) dumbing down of future generations as they develop intellectually through their chief years of learning by relying on the common information appliance. He wanted to pursue the issue as a social science experiment, which is a good idea. Maybe John is right as well.

Carr’s article cites Larry Page’s statement to the effect that Google is creating a type of Internet AI, that we are all smarter when we tap into the world’s published information whenever we have a question or problem. I cite Carr’s concern that follows, that perhaps “easy access” is not the highest human or social value associated with information seeking.

“Certainly if you had all the world’s information directly attached to your brain, or an artificial brain that was smarter than your brain, you’d be better off.” Last year, Page told a convention of scientists that Google is “really trying to build artificial intelligence and to do it on a large scale.”… Still, their easy assumption that we’d all “be better off” if our brains were supplemented, or even replaced, by an artificial intelligence is unsettling. It suggests a belief that intelligence is the output of a mechanical process, a series of discrete steps that can be isolated, measured, and optimized. In Google’s world, the world we enter when we go online, there’s little place for the fuzziness of contemplation. Ambiguity is not an opening for insight but a bug to be fixed. The human brain is just an outdated computer that needs a faster processor and a bigger hard drive.

Carr also cites the UK’s JISC/CIBER program, a five-year-plus study of online information behavior in UK education and society. I found this provocative publication just in time to cite and interpret it for the current Redesign Research study of eBook user experience at the University of Toronto Libraries. CIBER essentially suggests the Google Generation is trending toward a style of thinking and working characterized by endless skimming, jumping around, and scavenging rather than thinking for oneself.

“It is clear that users are not reading online in the traditional sense; indeed there are signs that new forms of “reading” are emerging as users “power browse” horizontally through titles, contents pages and abstracts going for quick wins. It almost seems that they go online to avoid reading in the traditional sense.”

Maybe they are right as well. What do you think? Are we becoming Borg’d? Do you feel your link to the Matrix yet? Have you read a NOVEL lately? I will post my responses (agreements, disagreements, expansions) in a later post.

The Book is Dead – Long Live The Book!

June 10, 2008

This is a mail art call, one of the ongoing cultural artifacts spawned by Fluxus and Ray Johnson. Even if you don’t contribute, this is worth paying attention to, as cultural observers everywhere (see Paul Krugman’s NYTimes op-ed on Friday) have been predicting the end of the book as we know it.

So what do you think? Is the printed book format in danger of becoming a relic from the Gutenberg Galaxy? In the eBooks research I’m currently engaged in, the printed book remains a preferred medium for textbooks, cover-to-cover reading, and texts for personal markup. eBooks are good for many things, but they do not replace the love of paper.

Books are themselves a system of signs – a packaging of signs that, when collected with sufficient other relevant texts, constructs a persistent identity, representations to others, and prompts of past literacies. You can walk into a colleague’s office and know their competencies, interests, specialties, and possible contact points for a relationship. (Have you ever seen someone’s book collection in a first-date situation and decided, on sight, that this was never gonna work? Or that maybe it just would?) Try doing that on the web.

THE LAST BOOK

(A Project by Luis Camnitzer, sponsored by the National Library of Spain)

Open call for collaborations
The Last Book is a project to compile written as well as visual statements in which the authors may leave a legacy for future generations. The premise of the project is that book-based culture is coming to an end. On one hand, new technologies have introduced cultural mutations by transferring information to television and the Internet. On the other, there has been an increasing deterioration in the educational systems (as much in the First World as on the periphery) and a proliferation of religious and anti-intellectual fundamentalisms. The Last Book will serve as a time-capsule and leave a document and testament of our time, as well as a stimulus for a possible reactivation of culture in case of disappearance by negligence, catastrophe or conflagration.

Contributions to this project will be limited to one page and may be e-mailed to lastbook.madrid@gmail.com or mailed to Luis Camnitzer, 124 Susquehanna Ave., Great Neck NY 11021, USA. In case of submission of originals, these will not be returned. The book will be exhibited as an installation at the entrance of the Museum of the National Library of Spain in Madrid at some point of 2008. Pages will be added during the duration of the project, with the intention of an eventual publication of an abridged version selected by Luis Camnitzer, curator of the project. The tentative deadline is October 15, 2008.

This call is open and we hope that it will be resent to as many potential contributors as possible.

Powerset – Toward semantic search in a closed ecosystem

May 29, 2008

Powerset provides advanced natural language browsing of searched terms and topics in Wikipedia. It’s designed to handle conversational language entries, and the tool is a good start. Try it on a few simple searches (e.g., a name), then throw something abstract at it, like “sensemaking” or “design theory,” and the gaps in Wikipedia show up quickly. Wikipedia does not search across all articles for close matching terms (its search is an article finder, not a browse view). So Powerset fills a real need for knowledge awareness as Wikipedia becomes a popular starting point for Q&A, student-level research, and scanning the current cultural repertoire for memes and conventional wisdom.

The Powerset model makes sense – semantic relevance is achievable in a closed ecosystem where you have some level of editorial control of the content. They also index Freebase, which is much less mature than Wikipedia, so Powerset’s indexing of the two services does not yet offer access into deep knowledge resources.

For reaching deeply into authoritative publications, and for indexing qualified (institutional) servers using the FAST search engine, I like Elsevier’s Scirus. It now looks almost exactly like Google, which was the direction we steered it in 2001 when I advised on the redesign – though in my opinion it has lost some personality on the start page due to its recent facelift. Scirus’ indexing and retrieval are very powerful – the browse experience is much more inviting than Google Scholar’s, and it accesses highly relevant content from multiple artefacts: not just citable articles, but research reports, lecture notes, online presentations, and good stuff shared online by the same authors Google Scholar only cites. Scirus indexes a different closed content ecosystem as well – one based on validity (academic, institutional, verifiable publications) rather than domain (Wikipedia or .edu sites) or authority.

Google Plays Doctor

April 17, 2008

And Microsoft wants your health records as well. The New York Times reports on the NEJM article warning about the entrance of mega-players GOOG and MSFT as purveyors of your private healthcare information. These are not altruistic enterprises – they have to turn a profit on this somehow. So it does make one wonder about their product strategy – will Google flash consumer health ads at you while you review your meds and shots? Will Microsoft create a new Health Passport ID to qualify your access to your medical records on their servers? And who will the early adopters be?

There are so many questions – Have you actually tried to locate and consolidate your medical records? Unless you’re a veteran on VistA, have you noticed they are paper? So, will Google scan them for you as well? What happens to your records if you leave the country or die? What if laws change and you don’t know about it? Can your doctor get to your online records, and will they have to have a separate ID for each of their patients? Why can’t people just put all their records on a flash drive that gets updated at the doctor’s office, keep it physically with them, and keep a backup on the laptop? What’s the real value-add of the big players here?

The NEJM authors consider the trend toward personal health records a positive development for personal responsibility for healthcare:

Despite their warnings, Dr. Mandl and Dr. Kohane are enthusiastic about the potential benefits of Web-based personal health records, including a patient population of better-informed, more personally responsible health consumers.

“In very short order, a few large companies could hold larger patient databases than any clinical research center anywhere,” Dr. Mandl said in an interview.

But the authors see a need for safeguards, suggesting a mixture of federal regulation — perhaps extending HIPAA to online patient record hosts — contract relationships, certification standards and consumer education programs.

Today you’ll see almost 200K Google hits on “personal health record.” The growth in this “space” in one year has been truly amazing. Remember that Microsoft bought Medstory a year ago (and there’s been no news from them since). However, if the US had a national healthcare system (or even statewide systems), a feature of patient and cost management would be your access to personal versions of electronic health records, such as those available on the aforementioned VistA system. Your tax dollars already built VistA, and it is a public domain application. Veterans already have personal eHealth records through My HealtheVet (such a bad brand name, but you get the idea). In this area, government and veterans are at the leading edge, not consumers.

Information services providers might consider the “records aftermarket” rather than records access – helping consumers make sense of their health records. The medical records themselves are used as intra-practice documentation and are not easily readable by patients or family members. Perhaps this is the direction the big players want to go, but there’s a lot of gray area in the reading and interpretation of medical tests, procedures, physician orders and notes, etc. Doing this well is an editorial process – currently, services such as Medstory do a good job of getting you to qualified information, but the level of difficulty of interpretation and sensemaking remains daunting.