Spotted in Geneva this morning (very close to the WTO offices), this somewhat cute speed radar dressed in an idealistic cow camouflage. How does this change the user experience of being detected? Is it about visibility?
I already blogged about onlife, the program now called Slife that tracks and helps you visualize traces of your interaction with Mac applications. There is now a “social component” called Slifeshare:
A Slifeshare is an online space where you share your digital life activities such as browsing the web and listening to music with your friends, family or anyone you care about. It is a whole new way of staying in touch, finding out which sites, videos and music are popular with your friends, meeting new people and discovering great new stuff online. Take it for a spin, it’s free, easy to set-up and quite fun.
The “how page” is quite complete and might scare to death anyone puzzled by how technologies have led us to a transparent society (à la Rousseau). Look at the webpage that is created with the Slife information:
Why do I blog this? Slife was already an interesting application in terms of how the history of interaction is shown to the user. This social feature adds another component: using Jyri’s terminology (watch his video, great insights), it takes people’s interaction with various applications as a “social object”. This means that designers assume that sociability will grow out of the interaction patterns (in a similar way to how the sociability of Flickr is based on sharing pictures).
Boundary Functions is a project by Scott Sona Snibbe:
“If you participate in this work, you will see a line as a boundary between you and others, which is usually supposed to be invisible, to identify your territory. The boundary changes according to the position of each individual on the floor, but the rule is that the person at the center must always be the closest to the boundary.
This line-producing program relates to the “Voronoi Diagram” and “Dirichlet Boundary Conditions”, which are used to analyze natural phenomena with mathematical rules: patterns of ethnic settlement, animal dominance, or plant competition in anthropology or geography, the arrangement of atoms in a crystal structure in chemistry, the influence of gravity on stars or star clusters in astronomy, and so on. The boundary that surrounds participants does not exist on their own but changes in a subtle way like conflicts between the individual and society.”
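The Voronoi idea behind the piece is simple to sketch in code: every point on the floor belongs to whichever participant it is closest to, and the drawn line appears wherever ownership changes. A minimal illustration (the participant positions and grid size are made up, and a real installation would of course work continuously rather than on a coarse grid):

```python
def nearest(participants, point):
    """Index of the participant closest to `point` (squared distance)."""
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(range(len(participants)), key=lambda i: dist2(participants[i], point))

def boundary_cells(participants, width, height):
    """Grid cells lying on a Voronoi boundary: a cell whose nearest
    participant differs from a neighbor's sits on the dividing line."""
    owner = [[nearest(participants, (x, y)) for x in range(width)]
             for y in range(height)]
    cells = set()
    for y in range(height):
        for x in range(width):
            for dx, dy in ((1, 0), (0, 1)):
                nx, ny = x + dx, y + dy
                if nx < width and ny < height and owner[y][x] != owner[ny][nx]:
                    cells.add((x, y))
    return cells

# Two hypothetical participants on a 10x8 "floor"
people = [(2, 2), (7, 6)]
edges = boundary_cells(people, 10, 8)
```

Notice that with a single participant the set of boundary cells is empty, which matches the installation: the line only exists between people.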
Why do I blog this? I thought it’s a nice project that exemplifies the spatial aspects of interactive technology.
This is the first blogpost of a series that concerns my thoughts about the topic “Space, cognition, interaction” that I address in my dissertation. This issue has been tackled by various disciplines ranging from environmental psychology to sociology, architecture and, when technology is involved, human-computer interaction. This blogpost series summarizes some important notions and results arising from these fields. In each post I try to describe how this is important to the object of my research: the location awareness of others. Step 1 is about the differentiation between “space” and “place”.
A recurrent discussion concerning spatiality targets the differences between the concepts of “space” and “place”. Harrison and Dourish (1996) indeed advocated for talking about place rather than space. They claim that even though we are located in space, people act in places. This difference opposes space defined as a range of x and y coordinates or latitude/longitude to the naming of places such as “home” or “café”. By building up a history of experiences, space becomes a “place” with a significance and utility; a place affords a certain type of activity because it provides the cues that frame participants’ behavior. For instance, a virtual room labeled as “bar” or “office” will trigger different interactions. In a sense, it is the group’s understanding of how the space should be used that transforms it into a place. Space is turned into place by including the social meanings of action, the cultural norms, as well as the group’s cultural understanding of the objects and the participants located in a given space. However, as Dourish recently claimed, this distinction is currently of particular interest since technologies pervade the spatial environment (Dourish, 2006). This inevitably leads to the intersection of multiple spatialities or the overlay of different “virtual places” in one space.
Thus, location-awareness of others also relates to how people make sense of a specific location: depending on the way the location of others is described, it could lead to different inferences. For example, knowing that a friend is at the “library” (place) frames the possible inferences about what the friend might be doing there.
Additionally, partitioning activities is another social function supported by spatiality (Harrison and Dourish, 1996). For example, in a hospital, corridors are meant to be walked in to go to waiting rooms where people wait before meeting doctors who operate in operating rooms. Research concerning virtual places also claims that a virtual room can define a particular domain of interaction (Benford et al. 1993). Chat rooms, for example, are used to support different tasks in collaborative learning: a room for teleconferences and a room for class meetings (Haynes, 1998). Different tasks correspond to virtual locations: a room for meetings related to a project, office rooms related to brainstorming, public spaces related to shopping and so on. Fitzpatrick et al. (1996) found that structuring the workspace into different areas enables switching between tasks, augments group awareness and provides a sense of place to the users, as in the physical world. Since work partitioning can be supported by space, knowing others’ whereabouts is an efficient way to make inferences about the division of labor in a group. Once we know that a person is in a particular place, we can infer that he or she is doing something (as we saw in the space/place distinction) and how this may contribute to the joint activity.
Benford, S.D., Bullock, A.N., Cook, N.L., Harvey, P., Ingram, R.J., & Lee, O. (1993). From Rooms to Cyberspace: Models of Interaction in Large Virtual Computer Spaces. Interacting With Computers, 5(2), 217-237.
Dourish, P. (2006). Re-Space-ing Place: Place and Space Ten Years On. In Proceedings of CSCW’2006: ACM Conference on Computer-Supported Cooperative Work (pp.299-308), Banff, Alberta.
Fitzpatrick, G., Kaplan, S. M., & Mansfield, T. (1996). Physical Spaces, Virtual Places and Social Worlds: A Study of Work in the Virtual. In Q. Jones, and C. Halverson (Eds.), Proceedings of CSCW’96: ACM Conference on Computer Supported Cooperative Work (pp. 334-343), Boston, MA.
Harrison, S., & Dourish, P. (1996). Re-Place-ing Space: The Roles of Place and Space in Collaborative Systems. In Q. Jones, and C. Halverson (Eds.), Proceedings of CSCW’96: ACM Conference on Computer Supported Cooperative Work (pp. 67-76), Cambridge, MA: ACM Press.
Haynes, C. (1998). Help! There’s a MOO in This Class. In C. Haynes, and J.R. Holmevik (Eds.), High Wired: On the Design, Use, and Theory of Educational Moos (pp. 161-176). Ann Arbor: The University of Michigan Press.
Free creatures: The role of uselessness in the design of artificial pets by Frédéric Kaplan is a very relevant short paper, which postulates that the success of the existing artificial pets relies on the fact that they are useless.
Frédéric starts by explaining the difference between an artificial pet and a robotic application: nobody takes it seriously when an AIBO falls; it’s rather entertaining.
Paradoxically, these creatures are not designed to respect Asimov’s second law of robotics: ‘A robot must obey a human being’s orders’. They are designed to have autonomous goals, to simulate autonomous feelings. (…) One way of showing that the pet is a free creature is to allow it to refuse the order of its owner. In our daily use of language, we tend to attribute intentions to devices that are not doing their job well.
What is very interesting in the paper is that the author states that giving the robot this apparent autonomy is a necessary (but not sufficient) feature for the development of a relationship with its owner(s).
Then comes the uselessness principle:
The creature should always act as if driven by its own goals. However, an additional dynamics should ensure that the behavior of the pet is interesting for its owner. It is not because an artificial creature does not perform a useful task that it cannot be evaluated. Evaluation should be done on the basis of the subjective interest of the users with the pet. This can be measured in a very precise way using the time that the user is actually spending with the pet. (…) be designed as free ‘not functional’ creatures.
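Kaplan’s proposed metric, the time the user actually spends with the pet, is easy to picture as a trivial interaction log. A toy sketch, with invented session data:

```python
from datetime import datetime, timedelta

def engagement_seconds(sessions):
    """Total interaction time across (start, end) session pairs, in seconds.

    Kaplan suggests evaluating a 'useless' pet not by task performance but
    by how long its owner chooses to interact with it.
    """
    return sum((end - start).total_seconds() for start, end in sessions)

# Hypothetical log: two play sessions with an artificial pet
t0 = datetime(2007, 1, 1, 12, 0)
sessions = [
    (t0, t0 + timedelta(minutes=10)),                         # 10 min
    (t0 + timedelta(hours=2), t0 + timedelta(hours=2, minutes=5)),  # 5 min
]
total = engagement_seconds(sessions)  # 900.0 seconds
```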
Why do I blog this? First, because I am digging more and more into human-robot interaction research, since I sense an interesting convergence between robotics and pervasive computing (which may eventually lead to a new category of objects à la Nabaztag). Second, because I am cobbling together some notes for different projects for the Near Future Laboratory (pets, geoware).
POPsci features a very long and insightful interview with Will Wright (designer of The Sims, now working on his next project, Spore). IMO, the article is important because it describes the current trends in the gaming industry. Let’s see some of them below with quotes:
The first trend is certainly the interest in user-generated content. Wright wants to turn players into “Pokemon designers, Neopet designers, or Pixar designers”:
I think Second Life is interesting because they have given the players such huge control over the environment (…) In Spore, the tools are more and more powerful than they were in The Sims, so the next step is, now, how do we take those things and use them to build a narrative
Every time the player makes something in the game – creature, building, vehicle, planet, whatever, it gets sent to our servers automatically, a compressed representation of it. As other players are playing the game we need to populate their game with other creatures around them in the evolution game, other cities around them in the civilization game, other planets and races and aliens in the space game, and those are actually coming from our server and were created by other players. So there’s an infinite variety of NPCs that I can encounter in the game that are continually being made by the other players as they play.
We’re going to have different feedback mechanisms. One of the things we’re going to be doing continually is rating the most popular content, so when you make a creature you’re going to be able to go to what we call the metaverse report and get a sense of what is your creature’s popularity ranking relative to other people’s creatures.
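The “metaverse report” ranking Wright describes could be as simple as ordering creatures by a popularity score. A hypothetical sketch (the creature names and scores are made up; how Spore actually computes popularity is not specified in the interview):

```python
def popularity_rank(scores, creature):
    """1-based rank of `creature` among all creatures, by descending score.

    `scores` maps creature name -> popularity score (e.g. download count).
    """
    ordered = sorted(scores, key=scores.get, reverse=True)
    return ordered.index(creature) + 1

# Invented community scores
scores = {"tripod": 42, "blobfish": 17, "spikeball": 99}
rank = popularity_rank(scores, "tripod")  # 2: second most popular creature
```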
And he recognizes that an economy emerging out of it is inevitable: as in Second Life, it will develop, move onto eBay or other platforms and might lead to “some sort of reward”.
Second, gaming fosters an “augmented sociality” that is based on the content and is achieved not in the game itself but through other channels:
the asynchronous socializing through content, which we’re already seeing in The Sims web community. huge communities form with very well-known people based on the content they’ve made, other people taking that content and telling cool stories with it.
Third, the educational model of using games is now less about directly teaching content/facts and more about making people learn processes. This has been a long discussion in psychology and educational sciences, but there are still some people trying to design games to make kids learn irregular verbs or Napoleon’s battles. Actually, the thing is that video games are less good at declarative learning (content) and better at procedural learning and problem solving. And it’s good to see a game designer such as Will Wright agreeing with that:
I think in a deep way yeah [answering the question “Do you see Spore, or the rest of your games for that matter, as being educational?”] – that’s kind of why I do them. But not in a curriculum-based, ‘I’m going to teach you facts’ kind of way. I think more in terms of deep lessons of things like problem-solving, or just creativity – creativity is a fundamental of education that’s not really taught so much. But giving people tools.
And finally, concerning the future of gaming, Wright addresses the articulation between interactions in the physical environment and digital interactions. In a sense, the question can be rephrased as how to take data generated from real-world interactions and put it back into the game to enrich the playful experience:
One thing that really excites me, that we’re doing just a little bit of in Spore… I described how the computer is kind of looking at what you do and what you buy, and developing this model of the player. I think that’s going to be a fundamental differentiating factor between games and all other forms of media. The games can inherently observe you and build a more and more accurate model of the player on each individual machine, and then do a huge amount of things with that – actually customize the game, its difficulty, the content that it’s pulling down, the goal structures, the stories that are being played out relative to every player.
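The player model Wright hints at can be sketched as a running estimate of skill that steers difficulty. This is only a speculative illustration; the update rule and thresholds are my assumptions, not Spore’s actual mechanics:

```python
def update_skill(skill, won, rate=0.2):
    """Exponential moving average of win/loss outcomes.

    `skill` lives in [0, 1]; 1.0 means the player always wins. `rate`
    controls how quickly recent outcomes dominate the estimate.
    """
    return skill + rate * ((1.0 if won else 0.0) - skill)

def pick_difficulty(skill):
    """Map estimated skill onto a coarse difficulty tier (invented cutoffs)."""
    if skill < 0.35:
        return "easy"
    if skill < 0.7:
        return "normal"
    return "hard"

# A hypothetical run of recent encounter outcomes
skill = 0.5
for won in [True, True, True, False, True]:
    skill = update_skill(skill, won)
difficulty = pick_difficulty(skill)
```

The point of the sketch is the feedback loop: the model observes outcomes, the estimate drifts toward the player’s actual performance, and content selection follows the estimate rather than a fixed curve.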
Why do I blog this? This is a quite good overview of current game trends (and I left aside some other issues). Besides, it’s pretty refreshing to hear them from a game designer and not from observers/researchers who try to shake up the game industry.
This is what the French call a “podotactile”, namely a textured strip which runs along the edge of a metro/tram station platform or even a sidewalk, and which one can feel with the feet. It’s meant to warn people (blind or not) that there is a limit/boundary between a space one is free to walk in and another area that can be dangerous. So the texture affords the limit (Bruno Latour would say that this “non-human” artifact is a way to delegate a function to an object).
This leads to another kind of “touch” feeling: in a sense “podotactility” is about feeling with the feet.
So why is this interesting? I quite like this example because it shows how textures are important and can have affordances (especially in physical space). Would it be possible to use podotactility in innovative ways, beyond signaling danger to people? Yes, of course, but what will happen if it has several affordances? A possible solution would be to use different granularities. Will people then learn these new codes (lots of space between dots = low danger, close dots = high danger)? Certainly food for thought for near-field interactions.
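The granularity idea above could be sketched as a simple mapping from dot spacing to danger level; the thresholds here are pure invention for illustration:

```python
def danger_level(dot_spacing_mm):
    """Closer dots signal higher danger; wide spacing signals low danger.

    The cutoff values are hypothetical: a real coding scheme would need
    standardization and testing with users (blind or not) to be learnable.
    """
    if dot_spacing_mm <= 20:
        return "high"
    if dot_spacing_mm <= 40:
        return "medium"
    return "low"

level_edge = danger_level(15)      # tight dots, e.g. a platform edge
level_curb = danger_level(50)      # sparse dots, e.g. an ordinary curb
```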
And of course, in terms of digital equivalents, there are some projects that propose some rugosity in mouse interactions/force feedback that can be perceived in a similar way (felt by the hand, though).
Another accepted paper for the Common Models and Patterns for Pervasive Computing (CMPPC) workshop at Pervasive 2007. I co-authored this with Fabien Girardin (Barcelona) and Mike Blackstock (Vancouver).
It’s called “Issues from Deploying and Maintaining a Pervasive Game on Multiple Sites” and basically describes how the deployment of the CatchBob! pervasive game has been carried out in two different settings (in Lausanne and Vancouver).
Abstract: In this paper we present the lessons learned from the deployment of a collaborative pervasive game on two different sites. We emphasize the practical aspects of getting a pervasive system deployed without any extra special infrastructure. Based on our experience, we describe the issues providers and administrators must take into consideration to deploy and maintain pervasive environments. In this perspective, we highlight that ubiquitous technologies must be consciously attended to, as they are unevenly distributed and unevenly available.
Reading Kaptelinin and Nardi’s book, I was interested in the chapter entitled “Do we need theory in interaction design?” because it describes why developing and using theory is needed.
The authors essentially summarize the evolution of theories in the field of human-computer interaction (HCI), starting from the “cognitive years” to what they call the “postcognitive” paradigm that appeared following Lucy Suchman’s book Plans and Situated Actions. HCI indeed started as a coupling of cognitive psychology and computer science models that envisioned human cognition as an information-processing system. With Suchman’s work (and the use of the ethnomethodology paradigm), the investigation of new lines of research has been favored with the inclusion of social/organizational factors, CSCW and the importance of context/artifacts in cognition. However, the problem of the ethnomethodological approach was that while it succeeded in bringing detailed/rich/precise depictions of practices and interactions, it led to no generalizable accounts (the essence of a theory).
As a matter of fact, a theory is helpful for four reasons:
1. Theory forms community through shared concepts
2. Theory also helps us make strategic choices about how to proceed
3. To move forward, to know where to invest our energies (…) otherwise we will always be going back to square one of detailed renderings of particular cases. As interesting as the cases might be, we have no way of assessing whether they are typical, whether they are important exceptions to which we should pay particular attention, or if they are corner cases we do not have time for at the moment
4. Theoretical frameworks will facilitate productive cooperation between social scientists and software designers. Not only can such approaches help formulate generalizations related to the social aspects of the use of technology and make them more accessible to designers, they can support reflection on how to bring social scientists and software designers close together.
The criteria needed for such a theory are that it should: (a) be rich enough to capture the most important aspects of the actual use of technology (which is not met by classic cognitive psychology since it does not account for some important phenomena), and (b) be descriptive and generalizable enough to be a practical tool for interaction design. A possible way to meet these criteria is to take theories that model phenomena as complex systems. At this point, I would have been interested in more development of the second criterion (“be descriptive and generalizable enough to be a practical tool for interaction design”) because it’s often the case that designers complain about this. And still, I have to admit that I have a hard time figuring out how a theory (or even a guideline) can meet this criterion.
Then the authors propose that Activity Theory is the perfect candidate for that matter, and the rest of the book describes to what extent this holds true. A final chapter, however, discusses other “postcognitive theories”: Distributed Cognition, Actor-Network Theory and Phenomenology.
Why do I blog this? Because those questions are crucial issues in my research work. Coming from a cognitive science background, it took me a while to understand how inadequate cognitive psychology or experimental psychology were for addressing human-computer interaction problems. That led me to take other paths (such as more bottom-up approaches like ethnography), but I tried not to forget what cognitive sciences could bring to the table.
And maybe the problem here is one of the granularity of theories. There are sub-domains in cognitive sciences that can be of interest for HCI. For example, psycholinguistics offers interesting insights about how people interact with each other and how each other’s intents are mutually inferred (I quote this example because that’s what I addressed in my PhD research). Thus, of course the information-processing model is somewhat passé, but cognitive science is a HUGE field that has sub-areas of interest.
The coming of gestural interactions on mass-market products such as the Wii brings lots of questions about how to design movements, how to express them and how to discuss their relevance. This question is of particular importance in the video game industry, and there is currently lots of discussion about how to create gestural grammars/vocabularies. I’ve attended seminars where people try to describe the movements (both the physical movements and their translation into the virtual counterpart) and there have not been any satisfactory solutions.
Reading a newspaper, I stumbled across an exhibit called “Les écritures du mouvement” (i.e. “The writings of movement”) in Paris that presents the different notation systems used in dancing, and it seems strikingly pertinent for explaining movements. As described on this website about the show, each notation system attests to a peculiar way of perceiving movements, which also depends on the historical, scientific and cultural context of the society in which the system occurs. These systems are used as mnemonic aids but also as a way to train people or even to create. Historically, there have been lots of different systems, such as the ones represented below (left: by Bagouet, right: by Zorn):
Why do I blog this? This sort of notation system seems interesting and pertinent for describing gestural interactions. Might have to dig into this more deeply. Will we see superb game design documentation with pages showing this sort of depiction?