Posted: March 19th, 2009 | Comments Off
The other day at CERN, one of the things that intrigued me most was the discussion around the first webpage. Tim Berners-Lee showed us the HyperMedia Browser/Editor, represented above. What is important there is the notion of a read/write piece of software, which reminds us that the “write” (i.e. “participate in the creation of content”) component was there right from the beginning of the WWW.
As a person interested in how things age, I quite enjoy the crappy sticker on the first webserver (“This machine is a server, do not power it down”).
Posted: March 16th, 2009 | Comments Off
The city of the future was supposed to have a certain kind of texture and rugosity. Seen last week in Paris, in the Front-de-Seine neighborhood, where I encountered an intriguing set of paleo-futurist buildings.
Why do I blog this? … this material goes straight into my futurist architecture collection.
Posted: March 16th, 2009 | Comments Off
A curious article in the NYT about the presence of Battlestar Galactica’s creator at a United Nations meeting:
“Representatives from the Sci Fi Channel approached the United Nations early this year. “They came to us and explained that there were themes common to both the show and the U.N.,” Mr. Brandt said, “and that those themes could be discussed here in a serious manner.”
“The show has been a sort of laboratory for the choices and issues real people in governments are making every day.”
Why do I blog this? A curious hint at the intertwining of fiction and politics.
Posted: March 14th, 2009 | 3 Comments »
Yesterday I attended “World Wide Web@20” at CERN in Geneva, where Web founders celebrated the 20th anniversary of the “Original proposal for a global hypertext project at CERN” (1989). Tim Berners-Lee, Ben Segal, Jean-Francois Groff, Robert Cailliau and others gave a set of talks about the history and the future of the Web.
The afternoon inside the CERN Globe was quite dense and I won’t summarize every talk. I only wanted to write down the set of insights I collected there.
First, about the history:
- The main point of Berners-Lee was that “people just need to agree on a few simple things”, which was quite a challenge in the context where his invention emerged: CERN. At the time, computers and IT in general were a zoo. There were many companies (each with at least one operating system), and network standards differed (each company had its own, as did CERN). TCP/IP was just beginning to emerge from the research environment but was opposed by European PTTs and industry.
- What was interesting in the presentation was the context of CERN itself: computing was really one of the least important things in the political order (after physics and accelerators). Desktops, clusters and networking were also perceived as less important than big mainframes. And of course, TCP/IP was considered “special” and illegal until 1989! To some extent, there was an intriguing paradox: the Web was created at CERN, but from its weakest part and using “underground” resources.
- The Web wasn’t understood at first. The paradigm shift was not grasped: people did not get why they should put their data on this new database because they did not understand what it would lead to.
- The key invention was the URL, something you can write on a piece of paper and give to people.
- Berners-Lee also pointed out the importance of the Lake Geneva area (“looking at the Mont Blanc this morning I realized how much you take it for granted when you live here”) and how the environment was fruitful and important in terms of diversity (researchers coming from all around the World).
- The basic set of things people should agree on was quite simple:
- HTML was used for linking but it ended up being the documentation language of choice
- Universality was the rule and it worked; an important step was CERN not charging royalties.
- It was also interesting to see how the innovation per se was more about integrating ideas and technologies than starting something from scratch. As shown during the demo of the first webserver and by the comments on the first research papers about the Web, it was all a matter of hyperlinks, an existing infrastructure (the Internet), URIs and, of course, the absence of royalties from CERN. People just needed to agree on a few simple things, as TBL repeated.
The “future” part was very much targeted at the W3C’s thinking about the Web’s future. As Berners-Lee mentioned, “the Web is just the tip of the iceberg, new changes are gonna rock the boat even more, there are all kinds of things we’ve never imagined, we still have an agenda (as the W3C)”. The first point he mentioned was the importance for governments and public bodies of releasing public data on the Web and letting people run community initiatives such as collaborative data (à la OpenStreetMap). His second point was about the complexity of the Web today:
“The web is now different from back then, it’s very large and complicated, a big scale-free system that emerged. You need a psychologist to know why people make links, there are different motivations. We need people who can study the use of the web. We call it Web Science, it’s not just science and engineering, the goal is to understand the web and the role of web engineering.”
And eventually, the last part was about the Semantic Web, the main direction Tim Berners-Lee (and the team of colleagues he invited on a panel) wanted to focus on for the end of the afternoon. From a foresight researcher’s standpoint, it was quite intriguing to see that the discussion about the future of the Web was, above all, about this direction. Berners-Lee repeated that the Semantic Web will happen eventually: “when you put an exponential graph on an axis, it can continue during a lot of time, depending on the scale… you never know the tipping point but it will happen”. The “Web of data”, as they called it, was in the plan from the start (“if you want to understand what will happen, go read the first documents, it’s not mystical tracks, it’s full of clever ideas”): we already have links between documents, and links between documents and people, or between people themselves, are currently being addressed. Following this, a series of presentations about different initiatives dealt with:
- grassroots efforts to extend the Web with a data commons by publishing open-license datasets as linked data
- the upcoming web presence of specific items (BBC programs)
- machine-readable webpages about documents
- machine-readable data about ourselves, as Google (Social Graph API), Yahoo! and Yandex in Russia are doing: once you have big databases of this material, you can ask questions you could not ask before. FOAF is an attempt in this direction too.
The last part of the day was about the threats and limits:
- Threats lie more at the infrastructure level (net neutrality) than at the Web level: governments and institutions that want to snoop on and intercept web traffic.
- A challenge is to lower the barrier to developing, deploying and accessing services on such devices, which should also be accessible to people with low literacy, in local languages (most of which are not on the Web).
- One of the aims is to help people find content and services, even in developing countries. It’s also a way to empower people.
- The dreamed-of endpoint would be to “move from a search engine to an answer engine: not docs but answers”, and definitely not a Web “to fly through data in 3D”.
Why do I blog this? The whole afternoon was quite refreshing, as it’s always curious to see a bunch of old friends explaining and arguing about the history of what they created. It somewhat reminded me how the beginning of the Web was really shaped by:
- a certain vision of collaboration (facilitating content sharing between researchers), which can be traced back to Vannevar Bush’s Memex, Ted Nelson’s Xanadu, the promises of the Arpanet/Internet in the 70s (Licklider).
- the importance of openness: one can compare the difference between the Web’s evolution and other systems (Nelson’s work or Gopher). What would have happened if we had a Gopher Web?
- a bunch of people interested in applying existing mechanisms such as hypertext and document formatting techniques (markup languages).
What is perhaps even more intriguing is the extent to which their vision of the future is still grounded in and shaped by their early vision and aims. Their objective, as it was twenty years ago, is still to “help people finding content, documents and services”, the early utopia of the Memex/Arpanet/Internet/Xanadu/the Web. The fact that most of the discussion revolved around the Semantic Web indicates how much weight these three elements carry for the future. Or, how the past frames the discussants’ vision of the future.
Curiously enough, the discussion did not deal with the OTHER paths and usages the Web has taken. Of course they talked briefly about Web 2.0, because this meme is a new instantiation of their early vision, but they did not comment on other issues. An interesting symptom of this was their difficulty in going beyond the “access paradigm”, as if the important thing was to allow “access”, “answers” and linkage between documents (or people). This is not necessarily a critique; I was just impressed by how persistent their original ideas are, to the point that they still shape their vision of the future.
Posted: March 11th, 2009 | 4 Comments »
Street “traps” or elevators are definitely an interesting feature of cities that I have been noticing lately (Paris and Lyon above, Geneva below).
These devices definitely remind me of the pipes in Mario Bros., a sort of tube that allows people to be transferred to some underground secret world:
Beyond this aesthetic concern, they do exemplify the 3D nature of cities. More importantly, these hidden stairs/elevators matter because they reveal the underlying infrastructure of the city, as well as the need to access this infrastructure (sewage, electrical stations, etc.) to fix things. They are a sort of modern, more technological version of manhole covers.
Posted: March 10th, 2009 | 4 Comments »
This picture of kids I encountered in Japan in 2004 playing with a Game Boy exemplifies one of the most intriguing features I observe in gaming: a situation where only one person has a game device and the others participate without it, in their own way. This is a common situation in gaming; one can also observe it with game consoles (where one person plays with the pad and the others help in a less formal way).
In their paper presented at CHI 2008, “Renegade Gaming: Practices Surrounding Social Use of the Nintendo DS Handheld Gaming System”, Christine Szentgyorgyi, Michael Terry & Edward Lank describe an interesting exploration of the social practices related to a mobile game platform. Unlike in my picture above, they investigated players who had their own mobile consoles. Based on a qualitative study, the authors examined how players engage in multiplayer games via ad-hoc wireless networking and how it affects social gaming practices.
In their results, they identify three themes related to the multiplayer gaming practices of the Nintendo DS:
“renegade gaming, or the notion that users reappropriate contexts traditionally hostile to game play; pragmatic and social barriers to the formation of ad-hoc pick-up games, despite a clear desire for multiplayer, collocated gaming; and private gaming spheres, or the observation that the handheld device’s form factor creates individual, privatized gaming contexts within larger social contexts.”
The paper provides informative elements about these themes and also tackles their design implications:
“we focus on two particularly salient design implications suggested by the data, namely better support for ad-hoc, pick-up gaming, and mechanisms to expand the social gaming experience
Mechanisms that allow one to more easily locate other local DS gamers, invite a player to a multiplayer game via the DS itself, join preexisting games, and gracefully exit games would all help address the desire for pick-up games. The implementation of these suggestions is certainly technically feasible for a system such as the DS
To help create a broader social context, the system could provide provisions to externally display game state on a shared display so non-players could observe game action. “
Why do I blog this? Gathering material for a potential project about social gaming practices for a client. This is quite an exciting topic and I am trying to collect some material about it in the context of mobile games. I am quite sure that lessons can be learned from the Nintendo DS and that it would be possible to transfer them to a cell-phone context.
Posted: March 8th, 2009 | Comments Off
An interesting reaction to my talk at Lift09 last week is the one by Tim Leberecht from Frog Design:
“He presented a nonchalant history of product flops (from the picture phone to the smart fridge to location-based services), which were in his judgment all hampered by “over-optimism,” “lack of knowledge,” and “blind faith in the Zeitgeist.” Yet I found his definition of product success flawed as it was obviously based on the principle of mass adoption – a questionable proposition in times of increasingly fragmented audiences and micro-markets. Which new product – besides maybe the iPod and the iPhone – has really gone mainstream in the past ten years? Many of the products and technologies Nova stigmatized as “failures” have found their audience in some form and created significant value both for their inventors and consumers. Yet we simply fail to recognize their success since it occurs in market niches and communities.“
His main comment is that my definition of “failures” (or perhaps my absence of a definition) is flawed because, in his perception, I relied too much on “mass adoption” as an indicator of success. This was not my point, though. Some comments:
- I fully acknowledge the importance of niches, as attested by my slide about the non-existence of the “average human”. This part was about the importance of targeting products and services (as opposed to designing for a non-existent average person extracted from the masses).
- I do agree I should have added a slide to define what a “failure” is and what I meant here. Or perhaps I should have discussed a common misconception about failures: the failure lies in how a certain vision (a “smart fridge”) is turned into a certain product (a particular model of smart fridge); I did not mean that the vision itself failed. It’s indeed true that lots of failed products have resurfaced in other contexts, with new and original usages (the videophone vs. Skype).
- Although video-communication is used a bit (to convey sign language, for example) or in certain cultures, it is above all a failure in lots of markets.
- Perhaps my selection of “failures” (videophone, smart fridge, multi-user LBS) was a bit limited and led him to find it “nonchalant”, but I wanted to point out examples (as opposed to showing matrices and statistics).
- Besides, unlike in my talk (which I wanted to be more synthetic than academic), I can use here the academic trick of quoting references. That said, I was a bit reluctant to quote this sort of material (hence the nonchalant attitude?) because (1) it would take lots of time to discuss the epistemological basis of research papers, and (2) it would sound patronizing in this kind of setting. The literature about failures (and successes) is quite abundant in domains such as management of innovation, marketing or foresight. See for instance Van der Panne, G., C.P. van Beers & A. Kleinknecht, “Success and failure of innovation: A literature review”, which provides an interesting overview of success factors and also points to other papers on this topic. For example, in Asplund, M. & Sandin, R. (1999), “The survival of new products”, the authors describe that “only one out of every five projects ever initiated proves viable”. The literature about foresight is consistent too; Steven Schnaars’ “Megamistakes: Forecasting and the Myths of Rapid Technological Change” discusses how forecasts are only 20% to 25% correct. In this case, what is interesting is that it’s the vision that is described as a failure (not products).
- A mistake I made (not pointed out by Leberecht, but I do think about it anyway) is that my perspective was certainly too Western, and cultural variations matter. For instance, some smart fridges in Japan and Korea have succeeded.
Regarding the final comment:
“Both Nova and Gyger heralded a more pragmatic model of future-oriented thinking. But I’m not sure if I share their skepticism towards grand visions. What if the future has arrived, however – to paraphrase William Gibson – it is so widely distributed (that is, buried in fragmented micro-markets) that we don’t notice it? “
I actually used the same quote from Gibson to indicate that “product failures” are interesting hints for the future, which refers to the example I mentioned: personal video communication has been more successfully adopted on laptops/computers than on videophones and mobile phones.
Why do I blog this? Simply because critiques like Leberecht’s are important, as they force me to refine and sharpen my points. I will try to get back to my pen-and-paper thinking and come up with a deeper definition of “failure”.
Posted: March 6th, 2009 | Comments Off
User inscriptions to go beyond interface confusion.
Posted: March 4th, 2009 | Comments Off
Two quick links about user research and design by Steve Baty that I still have to digest, think about, mix and adapt:
First, Deconstructing Analysis Techniques: different sorts of analysis to apply to user research.
- “Deconstruction: breaking observations down into component pieces. This is the classical definition of analysis.
- Manipulation: re-sorting, rearranging and otherwise moving your research data, without fundamentally changing it. This is used both as a preparatory technique – i.e. as a precursor to some other activity – or as a means of exploring the data as an analytic tool in its own right.
- Transformation: Processing the data to arrive at some new representation of the observations. Unlike manipulation, transformation has the effect of changing the data.
- Summarization: collating similar observations together and treating them collectively. This is a standard technique in many quantitative analysis methods.
- Aggregation: closely related to summarization, this technique draws together data from multiple sources. Such collections typically represent a “higher-level” view made up from the underlying individual data sets. Aggregate data is used frequently in quantitative analysis.
- Generalization: taking specific data from our observations and creating general statements or rules.
- Abstraction: the process of stripping out the particulars – information that relates to a specific example – so that more general characteristics come to the fore.
- Synthesis: The process of drawing together concepts, ideas, objects and other qualitative data in new configurations, or to create something entirely new.“
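A couple of these techniques are easy to see in code. The sketch below (mine, not Baty’s; the field notes, participant IDs and theme names are entirely made up) shows summarization, collating similar observations under a theme, and aggregation, building a higher-level count on top of the individual data points:

```python
from collections import Counter, defaultdict

# Hypothetical field notes from a user-research session: (participant, theme, observation)
notes = [
    ("P1", "navigation", "hesitated on the home screen"),
    ("P2", "navigation", "used the back button repeatedly"),
    ("P1", "search", "typed a full sentence as a query"),
    ("P3", "navigation", "missed the menu icon"),
]

# Summarization: collate similar observations together and treat them collectively
by_theme = defaultdict(list)
for participant, theme, observation in notes:
    by_theme[theme].append(observation)

# Aggregation: a higher-level view drawn from the underlying individual data points
theme_counts = Counter(theme for _, theme, _ in notes)
print(theme_counts.most_common())
```

Trivial, of course, but it makes the distinction concrete: summarization keeps the observations, aggregation replaces them with a coarser view.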
Second, Patterns in UX Research: different types of patterns one can find in user research that can be turned into actionable insights:
- “trends: a trend is the gradual, general progression of data up or down.
- repetitions: a repetition is a series of values that repeat themselves
- cycles: a cycle is a regularly recurring series of data.
- feedback systems: a feedback system is a cycle that gets progressively bigger or smaller because of some influence.
- clusters: a cluster is a concentration of data or objects in one small area.
- gaps: a gap is an area in which there is an absence of data.
- pathways: a pathway is a sequential pattern of data.
- exponential growth: in exponential growth, there is a rapidly increasing rate of growth.
- diminishing returns: when there are diminishing returns, there is a gradually decreasing rate of growth.
- long tails: the Long Tail is a pattern that rises steeply at the start, falls sharply, then levels off over a large range of low values.”
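Some of these patterns can be spotted with very little machinery. As a quick sketch of my own (not from Baty’s article; function names and thresholds are arbitrary), here is a least-squares check for the first pattern (trends) and a crude head/tail ratio for the last one (long tails):

```python
def detect_trend(series):
    """Classify a series as 'up', 'down' or 'flat' from its least-squares slope."""
    n = len(series)
    mean_x = (n - 1) / 2
    mean_y = sum(series) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(series))
    den = sum((x - mean_x) ** 2 for x in range(n))
    slope = num / den
    if abs(slope) < 1e-9:
        return "flat"
    return "up" if slope > 0 else "down"

def head_share(values, head_frac=0.2):
    """Fraction of the total carried by the top head_frac of items, ranked
    in descending order; a value close to 1.0 suggests a long tail."""
    ranked = sorted(values, reverse=True)
    k = max(1, int(len(ranked) * head_frac))
    total = sum(ranked)
    return sum(ranked[:k]) / total if total else 0.0
```

On a made-up page-view distribution such as `[500, 120, 30, 8, 5, 3, 2, 2, 1, 1]`, `head_share` returns roughly 0.92: the top 20% of pages carry most of the traffic, the classic long-tail shape.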
Why do I blog this? Material to rethink my methodologies. It’s too rare to encounter hints, ideas and recommendations about the “analysis” part of user research.
Posted: March 4th, 2009 | Comments Off
“On Futures and Design” by Alex Soojung-Kim Pang had been stuck in my RSS reader for a while. I finally got some time to read it. In this article, Alex examines the role of design in futures research and how both fields can benefit each other.
(Crafted piece of the future at LDM, EPFL in Lausanne)
In the first part of the paper, he shows how research techniques developed by designers (especially those about human/device interactions, i.e. user research) are of interest for technology forecasting. The main issue in forecasting is that people tend to base forecasts exclusively on technology trends. This determinism leads futurists to state that “mobile technologies will turn us into postmodern nomads, wanderers as disconnected from place as we are wired into the electronic hive mind”. This connects to what I described last week in my Lift talk: forecasting is framed within a certain mindset, a “zeitgeist” that only sees certain aspects shaping the future (yesterday nuclear power, today the power of networks, for instance). Alex’s point is that “the relationship between technology – especially information technology – and people is considerably more complicated”. And if one wants to highlight possible future paths, it’s important to “go below broad trends in order to understand how technologies and people interact”. What does this mean more concretely? Simply that it’s important to look at “how technologies are used, how they’re integrated (or not) into products, and what prior concepts or mental models users bring to new devices or products”. This is where designers come into play: their approach to dealing with these issues is fundamental and can be transferred to futures researchers.
The second part of the paper deals with the contribution futures research can make to design. The author takes the example of how “we can create devices that make the long-term consequences of day-to-day actions visible to users”.
And finally, in the last section, Alex points out how “the ability to create devices that give users a feel for the future, for the cumulative long-term impact of small changes, and for the collective impact billions of such choices could have on the world, is the most important and exciting development for futurists since the invention of oracle bone”. He shows that beyond white papers and dense PowerPoint slides, there is a need to change the way futures researchers work and communicate their thinking. Learning to talk about the future through things is a new and exciting direction. This does not mean that futurists should become designers; it is instead a call to renew the tools, methods and collaborations by benefiting from a neighboring field.
Why do I blog this? This short article features an exciting agenda for futures research. What is perhaps controversial in this piece is the claim that futurists can learn from ethnography/user research. This is something I agree with. There are therefore some methodologies to rethink around this issue, related to what Jason Tester defined in his blog post called “the case for human-future interaction”.
Besides, I would be interested in how designers think and write about this, perhaps to exemplify how, as designers, they would expand on the ways futures research can benefit them.
On a different note, I am still looking for papers and references describing futures research as a specific form of “long-term design” (a topic Bill Cockayne addressed in his Lift08 talk last year).