Posted: February 28th, 2006 | No Comments »
This afternoon, I tried to formalize my current approach to analyzing the qualitative data from CatchBob!. The point is to benefit from the users’ in-game annotations and the interviews I conducted after each game (based on a replay of the activity). This leads me to extract various pieces of valuable information about coordination processes in the game.
This is based on Herbert Clark’s framework of coordination (as explained in his book “Using Language”). In this context, coordination is a matter of solving practical “coordination problems” through the exchange of what he calls ‘coordination keys/devices’; that is to say, mutually recognized information that enables teammates to choose the right actions to perform so that the common goal can be reached. Such information allows a group to mutually expect the individual actions that are going to be carried out by the partners. According to Clark, a coordination device is defined not only by its content but also by the way the collaborating persons mutually recognize it. For that matter, Clark differentiates four kinds of coordination devices: conventional procedures (when a convention is set by the participants), explicit agreement (when the participants explicitly acknowledge the information), precedent (when a previous experience allows participants to form expectations about others’ behavior), and manifest (when the environment or the information sent makes the next move apparent among the many moves that could conceivably be chosen).
This framework then leads to the creation of two coding schemes to analyze my data:
- What a participant inferred about his/her partner during the game. This coding scheme is clearly data-driven in the sense that it emerged from the players’ verbalizations (namely those extracted during the self-confrontation phase after the game).
- How a participant inferred this information about his/her partner. This one is theory-driven since I used Herbert Clark’s theory of coordination keys/devices to get clear categories of what happened.
Now, there is another dimension that should be taken into account: TIME. Different coordination keys are used at different moments in CatchBob, so I’m trying to put this together in a global model of spatial coordination. In the end, this model would express which kinds of coordination keys are used to solve certain coordination problems in the context of a mobile collaboration task such as CatchBob. The potential outcome would be to understand whether specific tools can support the coordination process (for instance, would a location-awareness tool be useful at a certain point in the process?).
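To make the two coding schemes plus the time dimension concrete, here is a minimal Python sketch of how a coded corpus could be structured and tallied. The four device categories follow Clark; everything else (the `content` labels, the field names, the fixed-length phases) is my own hypothetical illustration, not the actual CatchBob coding grid:

```python
from dataclasses import dataclass
from enum import Enum

# Clark's four kinds of coordination devices ("Using Language")
class Device(Enum):
    CONVENTION = "conventional procedure"
    AGREEMENT = "explicit agreement"
    PRECEDENT = "precedent"
    MANIFEST = "manifest"

# One coded inference extracted from a player's verbalization.
# The `content` values are placeholders: in the real data they
# emerged bottom-up from the self-confrontation interviews.
@dataclass
class Inference:
    player: str
    content: str      # WHAT was inferred (data-driven scheme)
    device: Device    # HOW it was inferred (theory-driven scheme)
    t: float          # game time in seconds (the TIME dimension)

def devices_by_phase(inferences, phase_length=120.0):
    """Tally which coordination devices occur in each time slice."""
    phases = {}
    for inf in inferences:
        phase = int(inf.t // phase_length)
        phases.setdefault(phase, {}).setdefault(inf.device, 0)
        phases[phase][inf.device] += 1
    return phases

coded = [
    Inference("A", "partner's position", Device.MANIFEST, 30.0),
    Inference("B", "partner's strategy", Device.AGREEMENT, 45.0),
    Inference("A", "partner's next move", Device.PRECEDENT, 150.0),
]
print(devices_by_phase(coded))
```

Such a tally per phase is one way the “which keys at which moments” question could be inspected quantitatively, alongside the qualitative analysis.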
Posted: February 28th, 2006 | No Comments »
(Via social fiction)
While San Francisco is interested in turning dog poo into power, some other folks have designed a robot that does not require batteries or mains electricity to power itself; instead, it generates energy by catching and eating houseflies.
Dr Chris Melhuish and his Bristol-based team hope the robot, called EcoBot II, will one day be sent into zones too dangerous for humans, potentially proving invaluable in military, security and industrial areas. (…) The EcoBot II powers itself in much the same way as animals feed themselves to get their energy, he said. At this stage, EcoBot II is a “proof-of-concept” robot and travels only at roughly 10 centimeters per hour. (…) The EcoBot II uses human sewage as bait to catch the insects. It then digests the flies, before their exoskeletons are turned into electricity, which enables the robot to function.
(Image taken from Der Spiegel)
A few years ago, it was just a project, and now it works…
Posted: February 28th, 2006 | No Comments »
SUCKER is a location-based game in the context of the ’Laboratory for Context-dependant Mobile Communication’ (LaCoMoCo).
The project runs on PDAs, with a positioning system that uses Ekahau, but I cannot parse Danish so… The game’s webpage seems to be down.
Posted: February 28th, 2006 | 1 Comment »
Less sexy than Aibo but still nifty, this autonomous robotic fish seems interesting. Designed by Dan Massie, Mike Kirkland, Jen Manda and Ian Strimaitis.
An autonomous, micro-controlled fish was designed and constructed using sonar to help guide it in swimming. It was predetermined that constructing a mechatronic fish would be a large and demanding project due to the complex shape of a fish body, the unfamiliar territories of sonar sensing, the intricacies of fluid propulsion, and the challenge of keeping submerged electronics dry. However, the team was willing to put in a lot of time and produced an exceptionally successful first prototype by the name of Dongle.
The most important part is about the design and construction of this robotic pet: using soft clay, a tail servo, microcontrollers…
Posted: February 28th, 2006 | 3 Comments »
Just ran across this interesting discussion about weather in video games. The author, Matt Barton (University of South Florida), worked on this topic for a paper called “How’s the Weather?: A Look at Weather and Gaming Environments” (in the book “Playing with Mother Nature: Video Games, Space, and Ecology”).
What are some examples of good and bad use of weather in videogames? I’d really like a list of games that used weather not just as decoration or “atmosphere” but in ways that really affected gameplay. An example off the top of my head was Weather War, where players controlled hail, sleet, lightning, and rain to destroy each other’s castles. Help me out here, please.
1. What are some games you know of that make interesting use of weather?
2. What were the first games to include weather? How did they use it?
3. What are examples of games that turn the weather into a character, or feature bosses and such that manipulate the weather?
Why do I blog this? I won’t enter into the details of the discussion, but the questions bring up some interesting ideas about the connection between game design and in-game weather. The weather is one of the contextual features of an environment.
Posted: February 28th, 2006 | No Comments »
Olofsson, S., Carlsson, V. and Sjolander, J. (2006) The friend locator: supporting visitors at large-scale events, Personal and Ubiquitous Computing, 10: 84–89.
This paper starts from the fact that during large-scale events people tend to lose each other, and that an LBS might support a “friend locator” feature. It describes the findings of an ethnographic field study carried out during a music festival in Sweden. Well, it’s another “where is my buddy?” system, still in its design phase.
Why do I blog this? I am less interested in the system itself than in the possible new applications the authors envision:
We believe that the friend locator also would be usable at other types of large-scale events e.g., at a rally it could be used to see where a specific competitor’s car is located, and the distance remaining until it passes the position of you as a spectator could be measured in order to estimate the time left. When the competitor moves between the different stages again, the GPS could be turned off to prevent a surprise from the fans. At large football tournaments for children, a team could be allowed more freedom if the coach could easily communicate and locate the entire team through a friend locator.
Posted: February 27th, 2006 | 4 Comments »
Today’s reading in the train: “Beyond Blade Runner: Urban Control, the Ecology of Fear” by Mike Davis.
An excerpt I liked:
Perhaps, as William Gibson suggests, 3-dimensional computer interfaces will soon allow post-modern flaneurs (or ‘console cowboys’) to stroll through the luminous geometry of this mnemonic city where data-bases have become ‘blue pyramids’ and ‘cold spiral arms’.
If so, urban cyberspace – as the simulation of the city’s information order – will be experienced as even more segregated, and devoid of true public space, than the traditional built city. Southcentral LA, for instance, is a data and media black hole, without local cable programming or links to major data systems. Just as it became a housing/jobs ghetto in the early twentieth century industrial city, it is now evolving into an electronic ghetto within the emerging information city.
Why do I blog this? What I like there is (1) this idea of embedding virtual data flows in reality (through light/displays, as in this project or this one for example), and (2) the notion of an electronic divide: there are going to be electronic ghettos, data black holes.
This is connected to Usman Haque’s paper about Invisible Topographies quoting Antony Dunne:
Humans have only recently begun contributing to the cacophony with their pagers, medical devices, television broadcasts and mobile phones. This abundant invisible territory, a topography that is altered in shape and intensity by both natural and human-constructed landscapes, has been called “hertzian space” by industrial design theorist Anthony Dunne. He has observed that hertzian space is often ignored by designers saying, in Hertzian Tales, that the “material responses to immaterial electromagnetic fields can lead to new aesthetic possibilities for architecture”.
An example of such an idea is the Tuneable Cities project by Anthony Dunne and Fiona Raby, part of their “Hertzian Tales” (thank you elastico!):
Posted: February 27th, 2006 | 1 Comment »
If you’re into information visualization, the Licentiate thesis of Tobias Skog (Future Applications Lab, Göteborg) is very appealing. It’s called “Ambient Information Visualization” (1.7Mb pdf here) and it deals with various issues regarding informative art, everyday displays as well as their utility and evaluations.
This thesis investigates the concept of ambient information visualization. It has its background in the research fields of ubiquitous computing and information visualization (…) The term ambient information visualization distinguishes an area where these two research fields merge, and can be defined as the use of visual representations of digital data to enhance a physical location. These visualizations are typically displayed using flat-panel displays or projectors and ideally act both as information displays and decorative elements in the interiors where they are placed.
The thesis describes a suite of design examples, where the first ones explicitly address the issue of creating a decorative surface by using the styles of famous artists as inspiration for the appearance of the visualizations, creating so-called informative art. Subsequent designs are developed under the superordinate term ambient information visualization and strive to find generic, inherent properties of peripheral information displays and how these properties come to affect design requirements. As a way of informing the design process, visualizations have continually been tested with users in different environments, including exhibition settings with large amounts of visitors as well as long-term studies of use in office settings with smaller user groups.
The knowledge gained from the design and study of these examples is analyzed and the results highlight issues that are of central importance when designing a visualization. These issues are divided into three categories that concern the information source, the mapping from data to visual structures and the use of the (…)
Among the examples, my favorite is certainly the one using Mondrian’s compositions as inspiration to show information about e-mail traffic:
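To give a feel for the kind of data-to-visual mapping behind such informative art, here is a minimal Python sketch: each mailbox’s message count becomes the width of a colored rectangle in a fixed-width, Mondrian-like composition. The mapping, the canvas size and the color choice are my own guesses for illustration, not Skog’s actual design:

```python
# Mondrian's primary palette, cycled over the rectangles.
MONDRIAN_COLORS = ["red", "yellow", "blue", "white"]
CANVAS_WIDTH = 400  # pixels (hypothetical)

def compose(email_counts):
    """Map message counts to rectangles whose widths share CANVAS_WIDTH."""
    total = sum(email_counts.values())
    rects, x = [], 0
    for i, (mailbox, count) in enumerate(sorted(email_counts.items())):
        # Width proportional to this mailbox's share of the traffic;
        # the last rectangle absorbs any rounding remainder.
        if i == len(email_counts) - 1:
            width = CANVAS_WIDTH - x
        else:
            width = round(CANVAS_WIDTH * count / total)
        rects.append({
            "mailbox": mailbox,
            "x": x,
            "width": width,
            "color": MONDRIAN_COLORS[i % len(MONDRIAN_COLORS)],
        })
        x += width
    return rects

print(compose({"inbox": 30, "work": 10, "family": 20}))
```

The point of the sketch is simply that the visualization is a direct, legible function of the data, while the palette and layout make it pass as a decorative element.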
Posted: February 27th, 2006 | No Comments »
Self-replicating robotics is a curious domain. Unlike self-reconfigurable robotics, the idea is to use an original unit to actively assemble an exact copy of itself from passive components. Greg Chirikjian of Johns Hopkins University created a self-replicating robot capable of driving around a track and assembling four modules into a robot identical to the original.
Prototype 1 is a remote-controlled robot, consisting of seven subsystems: the left motor, right motor, left wheel, right wheel, micro-controller receiver, manipulator wrist, and passive gripper. This particular implementation is not autonomous. We built it to demonstrate that it is mechanically feasible for one robot to produce a copy of itself. The prototype was made of LEGO parts from LEGO Mindstorm kits.
Why do I blog this? Well, would the interactive toys of the future be like that?
Posted: February 27th, 2006 | 2 Comments »
Arminen, I. (2005): Social Functions of Location in Mobile Telephony. Personal and Ubiquitous Computing.
This article addresses a topic close to my PhD research: the importance of location awareness in (mobile) communication. Before studying the importance of location-based services (especially buddy finders or granny locators), the author puts the emphasis on understanding a peculiar practice: discussing one’s location over the phone.
To understand the dynamic nature of location, we have to study the actual communicative practices in which location gains its value. (…) Weilenmann has studied particularly the ways in which location references are used to signal communication difficulties: ‘‘I can’t talk now, I’m in a fitting room’’ (…) Laurier, for his part, has shown how mobile professionals routinely stated their locations on a mobile phone as a part of their mobile usage. Both these studies on actual communicative practices point out how the value of location is embedded in the activity in which the mobile user is engaged.
74 Finnish mobile phone conversations were recorded (…) The material covered both mobile-to-mobile and landline-to-mobile or mobile-to-landline conversations (…) The calls were transcribed and analysed in detail by using conversation analytical (CA) method.
The usage of mobile communication device does not technically require the parties to get to know where the other party is. (…) 62 mobile calls out of 74 involved a sequence in which the mobile party stated her or his location to the other party
As for the context of this question, the author found that:
Location telling during mobile calls takes place in five different activity contexts. In other words, location seems relevant for the parties in mobile interaction during five different types of activities. (…) Location may be an index of interactional availability, a precursor for mutual activity, part of an ongoing activity, or it may bear emergent relevance for the activity or be presented as a social fact. (…) Most location-telling sequences in these data are linked with practical arrangements. People state their location as a precursor for some practical arrangements (…) Location telling is also commonly done as a part of the real-time ongoing activity in which the parties are engaged. (…) Location can also be a mutual real-time co-ordination task, such as seeing each other in the cafeteria to meet there (…) Finally, a kind of location that is also realized during the ongoing activities is a virtual location referring to a web page or other material at hand to be shared with the communicative partner. (…) A not common, but existing, social practice involves location telling due to its social, symbolic qualities [example: the beach, which signifies ‘having fun’]
Now, for the social functions of discussing locations:
Location may be an index of interactional availability, a precursor for mutual activity, part of an ongoing activity, or it may bear emergent relevance for the activity or be presented as a social fact.
- Interactional availability: audio-physical and social features of the proximal location: noise (disco), network availability (train, remote areas), involvement with proximal interaction, intimacy of the situation (toilet, etc.)
- Praxiological:
– spatio-temporal availability: readiness to engage in action (“Are you doing anything special? Can you come to x?”)
– spatio-temporal location of a party vis-à-vis the engaged activity: temporal distance (“half an hour” [by car, by train, on foot, etc.])
– real-time perspicuous location in an ongoing action: visibility (“I’m at x, where are you?”), real-time location (“I just saw a reindeer by the road, beware” [told to the car driving behind])
– instructable location: spatialized requests (“I’m/the accident is at the crossroads of A and B”, etc.)
– proximate praxiological location: micro-coordination of activity (“I’m feeling his pulse”, “the wound stretches from elbow to breast”, etc.)
– virtual location (“I’m on the web page x”)
- Socioemotional: socio-emotional significance of location: biographical relevance (“I’m at the cottage of x/my friend”, “I’m driving the car with x”), cultural significance (“I’m visiting x” [old church, museum, medieval city, etc.]), aesthetic significance (“it’s very scenic here”)
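As a rough illustration of how such a scheme could be operationalized when annotating call transcripts, here is a minimal Python sketch. The five category names follow Arminen’s activity contexts; the keyword rules and the helper are purely hypothetical heuristics of mine, nothing like the conversation-analytic method actually used in the paper:

```python
from enum import Enum

# Arminen's five activity contexts for location telling.
class Context(Enum):
    AVAILABILITY = "index of interactional availability"
    PRECURSOR = "precursor for mutual activity"
    ONGOING = "part of an ongoing activity"
    EMERGENT = "emergent relevance for the activity"
    SOCIAL_FACT = "location as a social fact"

# Illustrative keyword triggers only; real coding requires reading
# the whole sequence, not isolated utterances.
RULES = [
    (("can't talk", "fitting room", "in a meeting"), Context.AVAILABILITY),
    (("can you come", "on my way", "half an hour"), Context.PRECURSOR),
    (("where are you", "i'm at the"), Context.ONGOING),
    (("beware", "just saw"), Context.EMERGENT),
    (("beach", "scenic", "cottage"), Context.SOCIAL_FACT),
]

def code_utterance(utterance):
    """Assign a location-telling utterance to one activity context."""
    u = utterance.lower()
    for keywords, context in RULES:
        if any(k in u for k in keywords):
            return context
    return None

print(code_utterance("I can't talk now, I'm in a fitting room"))
```

Even this toy version makes the first-match ordering visible: an utterance can plausibly fit several contexts, which is exactly why the paper relies on sequential analysis rather than surface cues.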
Why do I blog this? This kind of study is of tremendous relevance to my PhD research since I address the effects of location-awareness on collaboration processes: communication, coordination, division of labor, mutual modeling… What the author describes here is very interesting; it’s one of the few resources about this phenomenon (along with Marc Relieu, Laurier (and there too), plus this one by Weilenmann).
However, the results from our field experiment with CatchBob make me a bit skeptical about the author’s conclusion; when it comes to the implications of this study for LBS, he says “Location awareness that would also indicate the user’s estimated temporal distance from the destination would have a wide applicability for a majority of mobile users. A simple and usable technical solution would immediately meet the end users’ needs”. The reason why I am skeptical is that automating location-awareness can sometimes put the emphasis on a piece of information (others’ location versus others’ availability, intentions…) that might not be relevant at the time. Another problem is the kind of location that should be automated and made relevant for the other parties (place? country? lat/long? …).