Posted: March 31st, 2007 | Comments Off
La Fuga (by Négone) is an intriguing pervasive game in which players have to escape from Mazzinia, a futuristic high-security jail.
“Named La Fuga (The Breakout), the game opened this month at a former bank not far from the Real Madrid Stadium in northern Madrid. The facility can host up to 300 players at a time, each of whom tries to solve quizzes and pass through different obstacles in order to escape. Every player receives a console consisting of a specially designed PDA worn on the wrist. Between the PDA and its wrist strap is a passive RFID tag with a unique ID number used to locate and identify each player during the game.
“The game system activates the quizzes, the doors and the tricks in response to the detection of the tags. This allows the system to keep track of the gaming information of each player and generate each player’s game individually,”
RFID interrogators (readers) placed in doorways and in other areas of the game rooms enable the application to detect a player’s location, and to use that information to drive the gamer’s experience. For example, when the interrogator detects a person in a certain location, the system might display questions on the PDA screen that he or she must answer in order to progress. On the other hand, it might signal doors to open. “
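The reader-driven flow in the excerpt — a tag detected at a doorway triggers a quiz or opens a door, individually per player — could be sketched roughly as follows. This is a guess at the logic only; all names and the game-state structure are my own assumptions, not Négone's actual system.

```python
# Hypothetical sketch of the reader-driven game flow described above: an
# interrogator at a doorway reads a player's tag and the system reacts
# based on that player's individual game state. All names are illustrative.

PLAYERS = {}  # tag_id -> per-player game state

def quiz_for(location):
    # Placeholder: the real system would pick a quiz tied to this door/room.
    return f"Quiz for {location}"

def on_tag_detected(tag_id, reader_location):
    """Called whenever an RFID interrogator detects a player's tag."""
    state = PLAYERS.setdefault(tag_id, {"solved": set(), "location": None})
    state["location"] = reader_location  # track each player individually
    if reader_location in state["solved"]:
        return {"action": "open_door"}   # obstacle already passed
    return {"action": "show_quiz", "quiz": quiz_for(reader_location)}
```

The per-tag state is what lets the system "generate each player's game individually", as the quote puts it.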
(Picture courtesy RFID Journal)
Why do I blog this? yet another one on my list of pervasive games. RFID tags/readers can be interesting in a location-based game; what is intriguing is that location is only discovered at specific points, such as doors, and not across the whole field. That may lead to granularities that can be employed in specific gameplays.
Posted: March 31st, 2007 | Comments Off
An article in the WSJ about cool-hunting that expresses how things work in the “fashion” industry. Some excerpts:
“The role of trend spotters — sometimes also called cool hunters — has grown in importance as the fashion cycle has speeded up. (…) Trend spotters can help mass merchandisers figure out which nascent trends from chic boutiques or even thrift stores might be hot sellers on a wider scale. Street style has become an important source of inspiration for retailers eager to lure shoppers with a taste for “fast fashion”.
“There is the longstanding debate of what influences what. Does the street influence high fashion or does fashion influence the street?”
Equally important to identifying trends is figuring out when they are over. (…) “You can tell when a trend sort of moves on,” he said. “When you start seeing people who shouldn’t be wearing a certain brand or look, that’s when it’s over.”
It’s getting tougher to figure out where to find fashionable folks. (…) St Tropez, rock festivals in Denmark”
Why do I blog this? no, I am not interested in fashion hunting; this is interesting because of elements that can be helpful in foresight or when studying innovation: where to look? what to look at? spotting boundaries in time. However, this does not mean that things in the tech industry work as in fashion; for example, I would question the assumption that the pace has sped up in technology (of course there are lots of new gadgets, but reaching a mature market is often as slow as it ever was).
Posted: March 29th, 2007 | Comments Off
Reading Mr. Heathcote’s post about serendipity, it struck me as interesting that more and more awareness systems are directed towards the future. As Chris puts it, “it’s exciting that there’s services looking at the future – much effort has gone into recording, collecting and remembering”. For example, Dopplr allows people to say where they are going to travel and when (eventually you’re notified whether some contacts will be there too). Similarly, WAYN allows this for the present and the future. Another example is a whereabouts clock named CLoc (slightly similar to the one designed by Microsoft) created at the Interactive Institute in Sweden. This clock is an interactive ambient display artefact that shows the current, past and planned location and activities of each member of a household. There is even a knob that allows one to see past locations (captured through GPS reporting and radio beacon scanning) and the planned locations proposed by the users. For more, see Fahlén, L., Frécon, E., Hansson, P., Avatare Nöu, A., & Söderberg, J. (2006). CLoc – Clock Interface for Location and Presence. ERCIM Workshop “User Interfaces for All”, Bonn, Germany, 27–28 September 2006.
It finally occurred to me that the area of location-based applications is now well differentiated by the time spectrum it covers. What I called mutual location awareness in my dissertation (knowing where other people are located) can relate to the past, the present and the future. Theories in Computer Supported Collaborative Work describe this in terms of synchrony: participants may either be aware synchronously (knowledge about events happening currently) or asynchronously (knowledge about events in the past). The problem is that some asynchronous systems convey elements about both the past and the present; in addition, this variable does not account for knowledge about future events. Therefore, instead of using the synchrony metaphor, let’s use “time span”.
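A minimal sketch of the “time span” idea, assuming a simple tolerance window around “now”; the window size and function names are mine, for illustration only:

```python
# Rough illustration of the "time span" idea: a location report can refer
# to the past, the present or the future, instead of the binary
# synchronous/asynchronous distinction.
from datetime import datetime, timedelta

def time_span(entry_time, now, window=timedelta(minutes=15)):
    """Classify a mutual-location-awareness entry relative to 'now'."""
    if entry_time < now - window:
        return "past"     # asynchronous: where someone was (e.g. logged traces)
    if entry_time > now + window:
        return "future"   # planned: where someone will be (e.g. Dopplr, CLoc)
    return "present"      # synchronous: where someone is right now (e.g. WAYN)
```

The point is that one dimension ("when does the reported location refer to?") replaces the sync/async dichotomy and makes room for future-oriented systems.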
Why do I blog this? sorting out ideas for a paper about mutual location-awareness.
Posted: March 29th, 2007 | Comments Off
Rebecca Allen‘s talk at LDM. Raw and messy
back in the days
“computers are too important to be left to engineers”
so use of computer graphics, take human forms into the computer, in a natural way
examples: the catherine wheel (1982) with choreographer Twyla Tharp, musique non stop with Kraftwerk (1986), mostly music clips
then hired as a “3D visionary” by a game company to shift the way programmers were designing game space (2D sprites versus 3D, assembler versus C++)… then all companies wanted to do Quake-like games, so she moved to UCLA, less interested in shoot’em ups than in virtual worlds.
sense of losing control when you’re working as a designer with virtual worlds, artificial creatures
set up aesthetic rules and let people interact, things emerge
the world has a flavor of a game but there is no winning or losing
emergence: explore the role of human presence in a world of artificial life
At Media Lab Europe:
notion of liminal devices: to explore the boundary between virtual and physical reality and between our inner and outer states of awareness (liminal = in between)
to define subtle, intimate interface paradigm for mobile devices using biosignals and position
to design simultaneous realities that allow us to see and sense more than the world in front of us in ways that enhance rather than overwhelm
- liminal identities: an interactive installation in the form of a wooden box, serves as a portal to the world of mixed reality. Through the holes in the box two people can mix their identities.
- sleight of hands
- body as interface: myophone (to get rid of heads-up display), project with Essilor (embed display in regular eye-glasses)
now: advisor on the “One Laptop per Child” project
Posted: March 29th, 2007 | Comments Off
There’s an intriguing piece in Science news about using the power of online gaming to address big computational challenges such as language translation, refining online search, locating objects in images, etc. The point is to use the time, the energy and the mass of players to solve problems and collect data: “turning playtime to profit”. Moreover, the researchers realize that computers are good at certain things but less at others, hence the idea of tapping into “human brainpower”.
Some examples described on the ACM Technews:
“One example is the ESP Game developed by von Ahn, in which two players come up with words to describe an image, and are awarded points when the words match; in this way, images can be creatively labeled to facilitate easier Web searching. Players are encouraged to choose more creative, less obvious descriptive terms by being restricted from using certain words. Training computers to determine the location of an image of an object is the goal behind Phetch, another game of von Ahn’s in which players search for images that fit certain descriptions in a scavenger hunt scheme. One player or narrator types out a description of an image chosen from a database at random, and then several other players or seekers find the image by using a built-in browser; points are awarded to the narrator every time a search is carried out successfully, while the first seeker to find the image gets points and assumes the role of narrator for the next image. Von Ahn’s latest game, Verbosity, is founded on the concept of building a database of common-sense facts through gameplay. In Verbosity, one player is given a word and presents hints about the word to another player in the form of sentences with blanks where words should go. Von Ahn says all his games have a time limit because he wants participants to play faster and thus generate more data.“
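The ESP Game mechanic quoted above — points only when both players' words match, with certain words off-limits — boils down to a set intersection. A toy sketch (the function and its scoring details are my simplification, not von Ahn's actual implementation):

```python
# Toy sketch of the ESP Game mechanic described above: two players label
# the same image, and a label scores only if both players produced it and
# it is not on the restricted ("taboo") list.

def esp_round(labels_a, labels_b, taboo=()):
    """Return the agreed-upon labels that would earn points this round."""
    matches = (set(labels_a) & set(labels_b)) - set(taboo)
    return sorted(matches)

# e.g. both players typed "dog" and "puppy", but "dog" is restricted,
# so only "puppy" counts and gets attached to the image as a label.
```

Restricting the obvious words is what pushes players toward the "more creative, less obvious descriptive terms" the article mentions.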
Posted: March 28th, 2007 | Comments Off
Trying to expand on what Fabien blogged (the weblog as a way to elaborate our thoughts), I am digging through the internets to find how digital information/traces/logs can be mined and made of interest. The massive number of traces generated automatically (cell phones, wifi laptops) or by the user (synchronously or asynchronously with Flickr pictures) can be used to perform inferences about the spatio-temporal behavior of city inhabitants.
Fabien describes 3 main domains of applicability for the processing and visualization of these massively collected personal logs and traces:
1) Provide urban planners, transport authorities and traffic engineers with data to refine their models of citizens’ spatio-temporal behaviors.
2) Bring new perspectives for decision making and policy building.
3) Raise awareness and affect the decision making of individuals or of a crowd.
(Picture: mapping by Fabien of Flickr images taken in San Francisco between March 11 and March 25, 2007)
Why do I blog this? IMO, beyond a representation of the digital layer, there is another level: how to use the data generated by people (cell phone calls/sms, Flickr pictures at certain locations, LBS patterns, games…) as a sort of “infrastructure” that would enable specific services. For example: is it possible to design a public transport information system relying on this information?
This topic is also addressed in the following paper: Ratti C., Pulselli R. M., Williams S., Frenchman D., 2005,”Mobile Landscapes“, Environment and Planning B – Planning and Design. Some excerpts:
“A possible study would be to use this data to infer information about the ‘character’ of a neighborhood where the antenna is placed. At a simplistic level, districts with base stations showing a prevailing use during working hours are likely to have an office/business nature. Neighborhoods with high evening and early morning cell phone traffic are likely to have a stronger residential character. On the other hand, residential neighborhoods with high cell phone use during business hours may reveal emerging live-work situations.
our hypothesis is that the patterns of cell phone intensity correlate with the intensity of urban activity; revealing them can help monitor important urban dynamics. Critical points in the use of the urban infrastructure can be highlighted, as well as special events. Finally, a long-standing problem can be addressed: that of estimating flows in and out of the city: patterns of daily commuting, weekday versus weekend activities, holiday movements. Real time applications could also have new uses in emergency relief, based on broadcast alerts that would be different from one region to the other.
As the authors say, this seems to be a “new promising line of urban research: Making sense of the unlimited flow of data from the cell phone infrastructure in the urban context is still unexplored territory”. Even though the article focuses on data gathered through cell phones, I think the situation is similar with all the digital information generated by ubicomp systems and social web applications (a la flickr).
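The day-profile inference quoted from the paper can be caricatured in a few lines; the threshold values and category names below are invented for illustration and are not from Ratti et al.:

```python
# A caricature of the inference quoted above: guess a district's character
# from when an antenna's call traffic peaks. Thresholds and categories are
# made-up illustrations; the paper's analysis is far more sophisticated.

def neighborhood_character(work_hours_calls, evening_calls):
    """Classify a base station's neighborhood from aggregate call counts."""
    if work_hours_calls == 0 and evening_calls == 0:
        return "unknown"
    ratio = work_hours_calls / max(evening_calls, 1)
    if ratio > 1.5:
        return "office/business"   # traffic dominated by working hours
    if ratio < 0.67:
        return "residential"       # traffic peaks in evening/early morning
    return "mixed/live-work"       # high use in both periods
```

Even this crude ratio captures the paper's intuition that aggregate call timing is a proxy for a neighborhood's function.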
Posted: March 28th, 2007 | 3 Comments »
Musing on the train this morning with Frederic, we discussed a near-future laboratory topic: offline gaming, which Julian describes more thoroughly here. This is also helpful for the presentation for Mobile Monday I am working on.
Let’s start with Julian’s notes (the near future laboratory method is about knots):
“Can there be “offline gaming” where the screen disappears to the point of it not even being necessary? Where you sort of ambiently know that you’re gaming in the sense that your actions and activities “offline” will register in the game world once you get back to your normal human computer later? Can you still be gaming while you’re doing a run to the market, without being consciously and actively “in” the game while doing the grocery shop? But still, knowing in the back of your mind that, hey, cool! I’ll get my shopping done and probably get a +2 power up!“
This said, it led Frederic and me to think about 2 main axes: the connection to the network (yes, the internets) and the use of the mobile device display as the output. Therefore, we have this simple 2×2 matrix that sets the design space for mobile gaming opportunities:
Strictly speaking, “offline gaming” should only refer to games played off the network, but we started using it for the “no network/no display” square (maybe because “off-the-screen-offline” is not really nice to pronounce). I’ve also put “crossmedia gaming” to represent games that (for instance) can be played on a cell phone and then brought back to the computer, either to benefit from a larger display or from access to the network (or larger bandwidth…); that is the case with V-migo. Instead of using the crossmedia term, one can also say that constant access to the network is hard to achieve, thus even synchronous situations are an alternation of sync/async moments.
Besides, the fact that the squares are empty on the picture above does not mean that nothing has been done in them; however I have to admit that the “offline gaming” square is maybe less crowded.
Now, what would be the way to design offline gaming interactions? Let’s wait a bit to gather some thoughts (but the use of motion is one of the avenues here).
Posted: March 28th, 2007 | 1 Comment »
Yet another tableware project today (it’s funny that this week has been filled with discussions about tables here at the lab): Topoware by Alexandra Deschamps-Sonsino and Karola Torkos. The project “questions the landscape of dining”: is territory an adequate notion during a meal? Could the observation of dining allow one to make assumptions about eating behaviors? What about the way people occupy space?
“By looking at places, maps and especially contour lines, which define a landscape two dimensionally we decided to in turn “outline” the dining experience. This can also be interpreted as “zooming in” from the whole to the single item, from the tablecloth to the placemat down to the utensils.
The lines decorating the tablecloth are mapping the table, defining the space where people sit and interact at the dinner table. The closer to the person’s designated space and area of intense interaction, the darker the lines become. The placemat helps keep the experience of complex dining simple or makes the simple dining experience feel special, each layer defining what comes first and where cutlery and tableware should be placed.
In a playful way the lines reappear on the tableware itself, be it plates, bowls and cups to illustrate, label and determine your dining habits.“
Why do I blog this? I quite like the “With the Topoware collection, you are how you eat” motto. To me it’s a very pertinent way to make invisible (or implicit) phenomena and behaviors explicit, especially in an unexplored field such as dining.
Posted: March 28th, 2007 | Comments Off
The Tangible Table is a new table platform by Manuel Hollert and Daniel Guse:
“Our goal was to build a working prototype of a tangible table-based user interface. In contrast to a simulation, this environment facilitates the evaluation and testing of user interactions. That’s why the visual components on the table surface (such as scales) are quite basic and rough. The principles of interaction and graphical behavior had higher priority.“
The technical implementation is described here, with a description of how they used fiducial markers. Also, check the video.
Posted: March 27th, 2007 | 1 Comment »
In the last issue of BW, there is an article about motion capture and gestural interaction. So, this seems to be the new revolution; it traces the trend back to the VR attempts of the 90s, Nintendo Power Gloves and other stuff. Then an Intel Chief Technology Officer claims that within five years we “could use gesture recognition to get rid of the remote control” and that it will eventually “drive demand for its important new generation of semiconductors, the superprocessors known as teraflop chips, which Intel previewed in February” (I won’t comment on this but… mmhmm… mentioning the superprocessor issue when it comes to human-computer interaction seems not very apropos here). But why would it work this time?
virtual reality 1.0 was a bust. The hype was too loud, computers were too slow, networking was too complicated, and because of motion-sickness issues that were never quite resolved, the whole VR experience was, frankly, somewhat nauseating.
VR 2.0, enhanced by motion capture, is different in many critical ways. Most important, the first batch of applications, such as the Wii, while still primitive, are easy to use, inexpensive, and hard to crash. You don’t get anything close to a fully sense-surround experience, but neither do you feel sick after you put down the wand. The games are simple and intuitive
system enables a presenter to take audiences on a tour of a 3D architectural design or on a fly-through of a model city. And the presenter’s measured theatrics make a big impression. “Everyone’s looking for the new, sexy way to communicate with their employees and their clients. We’re selling their ability to sell,”
Why do I blog this? well, I am not sure VR failed for the reasons mentioned; they were surely part of the problem, but there is still a misunderstanding about interaction in VR and the notion of 3D. There is still this belief that replicating reality in a 3D digital space is a must, and that gestural interfaces are then the solution because they are more natural (given the direct mapping).
Back to gestures, some excerpts that I liked in the BW article though:
“Any company that creates a product used by people needs to understand how the human body moves,”
Aeronautics veterans who hear about this program are sometimes skeptical. “When people cannot touch a prototype, it’s always a hard sell
“It’s early, but such simulations could be one of the most profitable areas in the future,”
“The Wii is helping debug this question about how you move in virtual ways,” says Jaron Lanier. After a year with the Wii, society “will be better educated about the overlap of the virtual and the real world,” he says.