Posted: February 21st, 2008 | 1 Comment »
The other day, looking at toys in a kids' store, I ran across this robotic horse, and my attention was instantly drawn to the missing left ear:
Why do I blog this? My interest in the user experience of broken artifacts. This poor robotic pet has lost an important body part. But important for whom? Obviously it would not really change the robot itself (I don’t think there was any noise sensor in there), but what does it mean for the robot’s “user” (I put the word in quotes because it’s difficult to define a stereotypical “user”)? It made me think of the uncanny valley (as defined on Wikipedia: a hypothesis about the emotional response of humans to robots and other non-human entities). How uncanny is uncanny? Would it repel kids? Would they find it curious? What would the discourse around it be?
Is it possible to take advantage of defunct parts of artifacts? Can design take this into account? I was wondering if there could be a sort of long-term design perspective in which you create objects with intended malfunctions (to foster specific user behavior).
Posted: February 20th, 2008 | 2 Comments »
[Last year, I wrote a paper for a workshop at a human-computer interaction conference about the user experience of video games; it briefly presents the work I am doing with game companies. The paper was not accepted, but I thought it would be pertinent to put it online anyway]
Video game companies have now recognized the need to deploy user-centered design and evaluation methods to enhance players’ experiences. This has led them to hire cognitive psychology researchers and human-computer interaction specialists, to develop in-house usability labs, or to subcontract tests and research to companies or academic labs. Although methods have very often been translated directly from classic HCI and usability work, this kind of game experience analysis has started to gain weight through publications. This situation acknowledges the importance of setting up a proper method for user-centered game design, as opposed to the one applied to “productivity applications” or web services. The Microsoft Game User Research Group, for example, has been very productive along that line of research, with detailed methods such as usability tests, Rapid Iterative Testing and Evaluation, or consumer playtests. Usability testing is currently the most common method, given its relevance for identifying interface flaws, as well as factors that lower the fun of play, through behavioral analysis.
That said, most of the methods deployed by the industry seem to rely heavily on quantitative and experimental paradigms inherited from the cognitive sciences tradition in human-computer interaction. Studies are often conducted in corporate laboratory settings in which myriads of players come to visit and spend hours playing new products. Surveys, ratings, logfile analysis, brief interviews (and sometimes experimental studies) are employed to apprehend users’ experiences, and implications for game or level designers are fed back into game development processes.
While these approaches prove to be fruitful (as reported by the aforementioned papers, which describe some case studies), this situation only accounts for a limited portion of what HCI and user-centered design could bring to the table in terms of game user research. Too often, the “almost-clinical” laboratory usability test is deployed without any further thought about how players might experience the product “in the wild”. For example, this kind of study does not take into account how the activity of gaming is organized, or how the physical and social context can be important in supporting playful activities.
What we propose is to step back for a while and consider a complementary approach to gain a more holistic view of how a game product is experienced. To do so, we will describe two examples from our research carried out in partnership with a game studio.
Examples from field studies
Our first example, depicted in Figure 1, shows the console of an informant: a Nintendo DS with a post-it that says “Flea market on Saturday” and an exclamation mark. The player of “Animal Crossing” indeed left this as a reminder that, two days later, there would be a flea market in the digital environment. This is important in the context of that game because it would allow him to sell digital items to non-playable characters.
This post-it is only one example among numerous uses of external resources to complement or help the gameplay. Player-created maps of digital environments, xeroxed and exchanged in schools in the nineties, are another example of such behavior. Magazines, books and digital environment maps are also prominent examples of this phenomenon, which eventually leads to business opportunities. Some video game publishers have indeed started releasing material (books, maps, cards) and trying to connect it to the game design (by enabling secret game challenges through elements disseminated in comics, for example).
Figure 2 shows another example that highlights the social character of play. This group of Japanese kids is participating in the game experience, although only one child is holding a portable console. The picture represented here is only one example of collective play among many that we encountered, in both mobile and fixed settings. They indicate that playing a video game is much more than holding an input controller, since participants (rather than “The Player”) take on different roles, ranging from giving advice, scanning the digital environment to find cues, or discussing previous encounters with NPCs, to controlling the game character.
Another intriguing result from a study about Animal Crossing on the Nintendo DS is that some players share the game and the portable console with others. An adult described how he played with his kid asynchronously: he hides messages and objects in certain places, and his son locates them, displaces them, and eventually hides others. The result is a circular form of gameplay that emerged from the players’ shared practice of a single console.
Although they look very basic and obvious, these three examples correspond to two ways of framing cognition and problem solving: “Distributed Cognition” and “Situated Action”. While the former stresses that cognition is distributed across the objects, individuals, and tools in our environment, Situated Action emphasizes the interrelationship between problem solving and its context of performance, which is mostly social. The important lesson here is that problem solving, such as interacting with a video game, is not confined to the individual but is both influenced and enabled by external factors such as other partners (playing or not, as we have seen) or artifacts such as paper, pens, post-its, guidebooks, etc. Whereas usability testing relates to a more individual model of cognition, Situated Action and Distributed Cognition imply that exploring and describing the context of play is of crucial importance if one wants to fully grasp the user experience of games. Employing ethnographic methodologies, as proposed by these two cognitive science frameworks, can fulfill such a goal by focusing on a qualitative examination of human behavior. It is however important to highlight that investigating how, where and with whom people play is not meant to replace more conventional tests. Rather, one can see it as a complement, a way to understand phenomena such as the discontinuity of gaming or the use of external resources while playing.
One of the reasons why this approach can be valuable is that results drawn from ethnographic research on gaming can help uncover unarticulated opportunities: for example, explicitly requiring the use of external resources, or designing challenges for multiple players, as shown in the Animal Crossing examples we described.
In the end, what this article stressed is that playing video games is a broad experience that can be influenced by many factors worth documenting, and this material is worthwhile for building a more holistic vision of a product.
Davis, J., Steury, K., & Pagulayan, R. (2005). A survey method for assessing perceptions of a game: The consumer playtest in game design. Game Studies: The International Journal of Computer Game Research, 5(1).
Fulton, B. (2002). Beyond Psychological Theory: Getting Data that Improve Games. Game Developer’s Conference 2002 Proceedings, San Jose, CA, March 2002. Available at: http://www.gamasutra.com/gdc2002/features/fulton/fulton_01.htm
Hutchins, E. (1995). Cognition in the Wild. Cambridge, MA: MIT Press.
Medlock, M. C., Wixon, D., Terrano, M., Romero, R., & Fulton, B. (2002). Using the RITE Method to improve products: a definition and a case study. Usability Professionals Association, Orlando, FL, July 2002. Available at: http://download.microsoft.com/download/5/c/c/5cc406a0-0f87-4b94-bf80-dbc707db4fe1/mgsut_MWTRF02.doc.doc
Pagulayan, R. J., Keeker, K., Wixon, D., Romero, R., & Fuller, T. (2002). User-centered design in games. In J. Jacko & A. Sears (Eds.), Handbook for Human-Computer Interaction in Interactive Systems (pp. 883-906). Mahwah, NJ: Lawrence Erlbaum Associates.
Suchman, L. A. (1987). Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge: Cambridge University Press.
[Now it's also interesting to add a short note about WHY the paper was not accepted. The first reviewer was unhappy with the fact that many ethnographies of game-playing have already been published. Although this is entirely true in academia, it's definitely not the case in the industry (where ethnography is seldom employed in playtests). And my mistake may have been to frame the paper from a game industry perspective, using the literature on gaming usability. The second reviewer wanted a more extensive description of a field study and less of the scratch-the-surface approach I adopted. My problem, of course, is that it's always difficult to describe results more deeply because most of the data are confidential... This is why I stayed at a general level]
Posted: February 20th, 2008 | 3 Comments »
Via Michael Keferl @ CScout Japan, this Lap Around Japan Pedometer is a more complex version of a pedometer: the device counts your steps and also maps out your virtual trip around the coast of Japan.
As described by CScout:
“The tiny (14×42×78mm) pedometer counts the total distance around Japan from the starting point you enter. As you make your way through the 18,880 km journey (11,731 miles), you can zoom in and get information about 1,258 local sights, history, and products. Kind of like a Wikipedometer for city walkers. the whole point is to be able to “travel around Japan” by commuting to work, a task that would take you about fifty years at a single kilometer per day. The concept itself is admirable, and is done in collaboration with the Japan Walking Association to encourage exercise.“
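The quote's arithmetic is easy to sanity-check. A minimal sketch, assuming only the figures quoted above (the variable names are mine):

```python
# Sanity check of the pedometer's journey arithmetic (figures from the quote).
JOURNEY_KM = 18_880   # virtual distance around Japan's coastline
KM_PER_DAY = 1.0      # a single kilometre of commuting per day

days = JOURNEY_KM / KM_PER_DAY
years = days / 365.25  # average calendar year, accounting for leap years
print(f"{days:.0f} days, roughly {years:.0f} years")
```

This lands at about 52 years, consistent with the "about fifty years" claim in the quote.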
Why do I blog this? I’ve always found the pedometer curious as an entry point but often limited in its use of the output data. In this case, although the service is really simple (and the output as well), it’s a bit more elaborate. It definitely shows a trend towards more complex visual representations of movement.
Posted: February 19th, 2008 | 1 Comment »
My morning commuting partner Frederic Kaplan finally revealed his latest project, called Wizkid (conducted with his team). In his words:
“Wizkid is a novel kind of computer permitting easy multi-user standing interactions in various contexts of use. The interaction system does not make use of classical input tools like keyboard, mouse or remote control, but features instead a gesture-based augmented reality interaction environment, in conjunction with the optional use of convivial everyday objects like books, cards and other small objects.
Wizkid could be described as a computer display with a camera mounted on top, fixed on a robotic neck. It looks like a computer, but it is a robot that can gaze in particular direction and engage in face-to-face interaction.“
Martino d’Esposito, who took care of the design aspects, defines it as “a computer with which we could communicate in a more natural manner, but which would still not look ‘human’”.
Why do I blog this? I find the project interesting because it shows the convergence between computers/ubiquitous computing and robots. Plus, I quite like the approach Frederic describes: “despite some successful results this kind of natural interaction systems has tended to be used only in the domain of interaction with anthropomorphic or zoomorphic robots and progress in these fields has not impacted more mundane kinds of computer systems“. Furthermore, the interaction modes with the device are very intriguing, especially the “halo” mode (see the description in the interview). From the output point of view, the interesting part is the “body language” used by the Wizkid to express interest, confusion, and pleasure. To some extent, it forces one to ask questions close to the ones I have to address with Wii gestures, except that in the Wizkid case it’s about output gestures (and not input gestures, as with the wiimote/nunchuk).
For those who want to see it, Wizkid is part of MoMA‘s Design and the Elastic Mind exhibit, running from February 24 to May 12, 2008.
Posted: February 19th, 2008 | 1 Comment »
See below three very relevant occurrences of how space is transformed in the 21st century. These 3 examples of “defensive space” (aka “defensible space”: architectural and environmental design used to reduce criminality by increasing fields of observation and ownership) can be found next to where I live in Geneva.
The first and the second consist of covering the ground with concrete instead of the scruffy lawn that was used by drug dealers to hide their stuff. Note that the first move (before pouring concrete around that poor little tree) was to break a mirror there so that drug dealers would cut their hands when trying to get their heroin.
The third one is maybe less conspicuous: two pieces of steel have been put on the ground to prevent people from parking their cars (which nicely complements the yellow signage).
Why do I blog this? Well, although this is, sort of, environmental scanning 2 meters from home, it’s definitely an important collection of signals that attest to spatial changes. What does this mean for urban computing? I guess the next step, when you’re done with concrete, steel and broken mirrors, is to use electrons to prevent people from doing certain things.
Please see also the classical and sad anti-skateboard devices.
Posted: February 18th, 2008 | 1 Comment »
(via) Read in InformationWeek:
“At an 11-store chain of Papa John’s restaurants in north Alabama, location data is being pushed directly to customers. Using an online-tracking system developed by startup TrackMyPizza, customers can watch online as their deliveries move street by street toward their doors. Drivers carry GPS-enabled handsets that feed location data to a TrackMyPizza server. There, the data is coupled with the customer’s phone number, providing location updates every 15 seconds.
Sound like technology overkill, just to know your pizza hasn’t gone astray? Rival Domino’s thinks consumers want more such information about their orders, and it’s doing a national rollout of a Web system that shows buyers when their pizzas have been prepared, cooked, then sent out the door. But it doesn’t offer location once the pizza leaves the store.
At Papa John’s, pizza tracking is delivering business benefits in its first two months by getting more people ordering online–a 100% jump in online ordering since the rollout, says Tom Van Landingham, the franchise operating partner. Online orders save phone-answering time, and Web customers spend about $2 more per order, since they can see the whole menu. About 18% of all delivery customers in the last 60 days have gone on the Web site to track their pizza. Van Landingham expects to begin using the tracking system to improve productivity behind the scenes, by plotting more efficient delivery routes, for instance. The service is only 2 months old, so it still needs to prove it’s more than a novelty. But the chain proved it can be done.“
Why do I blog this? The prospect of people at home riveted to their computers, following the movement of their pizza mapped digitally, makes me giggle. It looks like a weird version of Pacman where you don’t have any control over your little character. Perhaps there is something cultural that I am missing, or maybe it’s the novelty that made people follow their pizza online.
So, at first glance, this looks awkward, and I am really curious to see whether some user experience researchers are already doing work on this kind of service. Beyond people’s motivation to track an artifact that may be in their stomach an hour later, it would be interesting to better understand their expectations about the pizza’s location, the sort of mishaps people fear, or even the reactions they would have if the pizza wandered around instead of taking a straight line to their house. To some extent, this is a PERFECT tool for conducting psychological experiments!
Posted: February 18th, 2008 | No Comments »
Back in the ’20s, when electricity (lighting, appliances) was less common, the American company Westinghouse tried to create early robots. One of the goals was to stimulate demand and interest in their electrical products. At the 1939 World’s Fair they indeed showed a prototype of two curious characters: a tall humanoid robot called Elektro and, above all, a robotic dog named Sparko:
Although much of the emphasis has been put on Elektro (7 feet tall, 300 pounds; it could walk, count, see things with photoelectric-cell eyes, talk using a record player, and smoke cigarettes), it’s Sparko that I found more intriguing. Far less complex, Sparko was more into pet stuff: sitting up, barking, and dog tricks. You can see some video footage here.
Created by Westinghouse engineer Joseph Barnett, these early US robots are very curious. As a matter of fact, Sparko is reported missing: “The biggest challenge remaining to Weeks is finding one of the three robot dogs, all named Sparko, that were built as pets for Elektro. The last confirmed sighting of Sparko was in California in 1957. The dogs were light-followers and legend has it that one of the three dogs was hit by a car and destroyed when it wandered out of an open door at the Westingouse lab.“
Why do I blog this? I don’t know whether these two artifacts were a technological failure (they were rather meant to be marketing demonstrators of electricity and not real products), but they are definitely part of my catalogue of insightful projects. Moreover, the man-dog “couple” as robots is also a very interesting metaphor, which now leads to different product avenues: robotic pets from Sparko to Aibo (or Pleo) on one hand, and robotic humanoids on the other.
Posted: February 15th, 2008 | No Comments »
In the last few weeks, there has been an interesting discussion on the anthrodesign Yahoo! group concerning the “integration of ethnography in R&D”. It basically addressed the link between ethnography and “action” (e.g. implementation) in a client-vendor research relationship, a somewhat controversial issue. The discussion started with how “actionable” ethnography results should be, and the problem with that term (“something that will allow me to do my job based on what you’ve told me”).
There are some very good points there, especially about:
- what ethnography means for R&D (“needs to be there from the beginning to frame the problem AND at the end to inform the marketing“, “moving from finding to insight. Cameras find stuff. People produce insights. Insights are actionable. They give us design principles that guide creativity and can test what we create. (…) All of this informs our research planning“, “Making research ‘actionable,’ to me, means providing specific direction for transforming whatever social context you’ve been studying“, “people need to make choices/decisions, whether they be creative or strategic, and they look to the research to help them do that. This can mean inspiring new choices that they weren’t aware of, or (commonly) deciding between options that they are already aware of but can’t decide or agree on.“)
- the questions to be asked: “how do projects get managed and recommendations get communicated here? Via presentation, text doc, what doc size etc? Which audiences – there are usually multiple. How many versions should I expect to create, over time, for which audiences? Who’s my partner-in-crime internally who’ll deliver the message with me? Should they take the lead in comms, or should I? Which people can I talk with? How can I assess the accuracy of these people’s perceptions and rapidly put together a basic org map and understanding of this org’s dynamics, before I commit to doing a project that’s not positioned for success? “.
- some indeed dislike the word “actionable”, as it forces people to take results and “do something”, whatever that is. The point for one of the discussants is about rumination: “It seems rather unseemly to me to simply take participants’ interactions with me and then ‘do something,’ instead of reflecting, ruminating, and turning back to the participants for validation.”
(I haven’t really put people’s name next to the quote since I was not sure about how public they wanted these statements to be revealed; it’s a mailing list).
Why do I blog this? Great points to keep in mind when working with non-academics. The tension between “actionable” and “rumination” is very intriguing and sometimes difficult to explain to people (aka potential clients).
Posted: February 15th, 2008 | No Comments »
Some notes from Michele Bowman‘s podcast entitled “The Role of Ethnofutures and Environmental Scanning”:
“2 things to keep in mind:
“Any truly useful idea about the futures should first seem to be ridiculous.”: the idea that airplanes would carry people, the fall of the Berlin Wall, paying for bottled water
“not all change is created equal”: the general consensus in the business world is that change happens really fast; in fact this is not the case at all. There are all sorts of degrees of change:
- environmental change, demographic change: takes decades to be felt
- evolutionary change: the rise of women in the workforce… incremental… almost predictable timeframe
- fast and furious change: the cost of sequencing the genetic alphabet, which has dropped
Environmental scanning is about understanding the dynamics of change: where and when, how fast, how slow? It is the collection and interpretation of trends and emerging issues. And it needs to be external: “I don’t know who discovered water but it wasn’t a fish”, because challenges will come from the external environment
Roles of scanning
- as a decision-making capacity
- organizational learning, increasing the sensitivity to change“
Why do I blog this? Curiosity towards different foresight research methods; lots of resonance with the LIFT08 session about this theme.
Posted: February 13th, 2008 | 2 Comments »
Fabien and I finally managed to release a Near Future Laboratory project: Sliding Friction: The Harmonious Jungle of Contemporary Cities, a booklet that assembles photos and annotations we took here and there along our dérive through the many cities we have lived in and visited. Sliding Friction is an attempt to showcase the curious aspects of contemporary urban spaces. Through 15 topics and 4 themes, we focus our lenses on the sparks generated by the many frictions between the ideas, practices and infrastructures that populate cities. We hope to provide some raw food for thought for considering the city of the future. Do we want to mitigate, or even eliminate, these frictions?
You can find it here as a PDF.
It’s edited by Walabab editions, designed by the (utterly fabulous) Bread and Butter, with a preface by Bruce Sterling and a postface by Julian Bleecker.