Posted: March 31st, 2008 | 14 Comments »
Today was my last day at the Media and Design Lab and consequently at EPFL. So, I am leaving academia and… here’s the whole story.
For the past 5 years, most of my time was spent there (although other ventures such as simpliquity, LIFT and the near future laboratory also took some of my time), doing a PhD in Human-Computer Interaction and a post-doc in a design/architecture lab working on various projects. So now it’s time to reflect a bit about the next step (as Julian did lately).
So, I am leaving academia and there are different reasons for that. The first thing to say is that I am not sure that I want to play the tricks of the academic game, which are about location (in Europe it is usually “you go to certain US universities for 3-X years and find another position somewhere to eventually try to get back”), publication competition and of course specialty/discipline. I guess it’s here that I was a bit frustrated lately. Being a bit interdisciplinary, I don’t know where to sit now: my original background was in cognitive sciences (psychology), I did a master in Human-Computer Interaction in a psychology department (University of Geneva), and then a PhD in HCI in a Computer Science department (EPFL). My last job (research assistant in a design/architecture lab) also reflects my interest in design research. In addition, over the course of my studies, I have been interested in having conversations with companies/think tanks, which turned me into a consultant and a conference organizer for LIFT. This combination of activities and interests has led me to look at various overlapping domains (user experience, design research, foresight research, cognitive psychology, anthropology and ethnography, human-computer interaction, usability, etc.) and of course different methods, paradigms, authors and points of view.
Looking at these other domains and methods has no doubt changed my practice and modified my interests. The sort of research I was doing five years ago was mostly experimental and quantitative studies to address the psychological implications of technologies. Starting from theoretical models, the point was to test hypotheses (H0 versus Hn), compute inferential statistics, analyse the results, see what they mean with respect to theories and, at the end of the day, reflect on what this means for engineers or design practitioners (the criticized “implications for design“). Very much cognitive psychology-inspired HCI. I don’t really do that anymore and my research practice has changed due to different factors:
- Having worked with video game designers over the last 7 years, I learnt how looking for or building “implications for design” is different from doing cognitive psychology. I felt that their needs were less about building theories and laws about behavior and more about situated accounts of how their technologies or environments were used, understood, appropriated. Learning more about ethnography and qualitative research was very fruitful for that matter. Of course, this does not dismiss more quantitative-inferential research, but it seemed to me that qualitative+quantitative descriptive research was the thing they needed first.
- On the theoretical side, I was also interested in less hardcore cognitive science theories, more alternative accounts of how people make decisions, do things together and use/create artifacts. Theories like Situated Action and Distributed Cognition were interesting for that matter, although I don’t agree with everything there. But they surely changed the way I want to conduct research.
- Having done research in ubicomp, where it’s impossible to carry out studies in a controlled environment (which is the prerequisite for running experiments), I had to go for more qualitative methods.
Thus, having experienced that (and sometimes tried to build some bridges between different methods or theories), I started to doubt where I was sitting and what sort of research tradition I’d like to adopt (top-down inferential cognitive psychology? bottom-up descriptive situated ethnomethodology?). And of course, once you’re doubting… you start raising issues or questions that do not appeal to reviewers or researchers from certain traditions. Eventually, I found it hard to get back to ultra narrow-minded cognitive stuff, and interdisciplinarity is sometimes recommended but not really rewarded. And I became fed up with reading papers in which people don’t know anything about a certain literature/method/idea because it’s “out of their field”.
So… this situation led me to question what I am interested in: circulation of knowledge, innovation, user experience of techniques/technologies, foresight and futures research are relevant keywords in that context. This can be defined by two vectors:
- “User experience research”: I know that does not sound very academic and is rather practitioner-oriented, but I find it covers a lot of the issues I like to investigate through field/user studies. It’s about understanding the implications of certain technologies and how they are appropriated, used, deployed, understood, etc. at a micro/meso level (not the whole-society level), from a descriptive perspective that can inspire design.
- “Foresight”: doing scanning, building scenarios, describing alternative futures based on weak-signal spotting. This is definitely less academic and more about diffusion of innovation, but I am convinced the material gathered in (1) can be useful for this and help define scenarios for the future. In general, foresight research operates at a higher level (more macro, with data coming from sociology), but my point is that all these levels can be combined.
What I am interested in, oftentimes, is the cross-pollination between different worlds: making analogies between different domains and drawing issues/solutions/problems/insights from them to enrich the problem at stake. Mapping the overview, defining the problem space, finding opportunities by using various sources: meeting people, having conversations, reading academic papers, annotating books, conducting user research (from usability tests to ethnographical studies), taking weird pictures or writing about all of this. This is why I am interested in foresight research, since it’s rather about this sort of macro perspective than the narrower POV of scientific research.
What does that mean for what’s next? Simply that I will have, from now on, 2 affiliations: being a consultant/conference organizer at LIFTlab and a researcher at the Near Future Laboratory. In the end, it’s about adopting a “think-tank” stance: smaller, more flexible, less about ivory towers and silos. That said, I will still have one foot in academia through teaching HCI and user experience research in different institutions; and I am still working on academic publications.
We are currently in the process of defining the services we will provide, ranging from strategic reviews of projects, foresight research reports and user studies to organizing workshops (or participating in them), lecturing, teaching and organizing conferences. And my focus will remain on my areas of expertise: urban and mobile computing, networked objects and tangible interfaces. Any interest in collaborating? Need someone like me for a specific gig? Feel free to ping me.
In the meantime, thanks Jef and all the LDM team for this fruitful year!
Posted: March 30th, 2008 | 1 Comment »
Paul Saffo’s talk at the Long Now Foundation (MP3 here) is a very good overview of foresight research heuristics/rules of thumb/methods. Some notes:
- “Hunt for Bin Laden: experts agree, Al Qaeda leader is dead or alive” is a great forecast because it accurately captures the uncertainty of the moment. The biggest mistake is to be more certain than the facts suggest, especially today, at this very uncertain moment in time (where indicators are going in different directions). As Peter Schwartz says: “The difference between a good forecast and reality is… a good forecast has to be believable and internally consistent”
- The job is not about predicting but rather mapping the “cone of uncertainty” on a subject. And, uncertainty means opportunity. It’s a cone shape for commonsensical reasons and because uncertainty expands as you project further into the future. The important thing is to find edges: Where might they happen? There you should look for wild-cards to define the boundaries and science-fiction can be a good candidate for that matter (as well as bad press about the future).
- Change is rarely linear and it is very slow: most big technological changes take 20 years to develop (“new technologies take 20 years to have an overnight success”). This means that you need good backsight, BUT because evolution is slow, you still have time even if you miss an early indicator.
- Look then for early indicators (“prodromes” or “prodroma”: an early symptom or leading indicator), as suggested by William Gibson’s observation that “the future is already here, it’s just not evenly distributed”. Look for indicators and things that don’t fit.
- We tend to over-estimate the speed of short-term adoption and under-estimate the long-term diffusion of the technology (“Never mistake a clear view for a short distance”). In addition, things aren’t accelerating: every society has always complained that things were getting faster, even in the 16th century (“every generation thinks things are accelerating”).
- Look at failures and cherish them (preferably other people’s). Silicon Valley has been built on the ashes of failure. Look also for people who failed in a company and went on to start their own.
- Prove yourself wrong: look for indicators that prove what you say BUT also for weak signals that prove it wrong
- “Be indifferent. Don’t confuse the desired with the likely”
- Know when not to make a forecast
- The problem for forecasters is not being wrong, it’s persuading people to act on forecasts.
Posted: March 28th, 2008 | 1 Comment »
Watching “The Wire – The Complete Second Season” (Ernest Dickerson), there is this interesting moment in Episode 5 (around minutes 47:00 to 47:56) where the dockers explain how technologies often fail. It deals with both radio-wave signals and handheld computers:
“That’s cans, containers, coming off the ship, and others going back on. Now, look at the screen. Every time a can goes on or off, the computer creates a record and puts it in the permanent database.
He was saying the computer makes it hard to steal off the docks. Did our port manager tell you that right now we got 160 boxes missing off the Patapsco terminal alone? Or that last time, we inventoried the truck chassis… We came up 300 light?
No, I suppose not.
That’s management for you.
Not that all of them are stolen.
You can lose a can by accident, no problem. For one thing, these hand-helds use radio waves. With all the equipment and container stacks out there… sometimes waves get knocked down. That happens, a can don’t get entered.
Or, just as easy,
A checker makes the wrong entry. Either ’cause he’s lazy, he’s sloppy, or he’s still shitfaced from the night before. Or, simpler than that, you got fat fingers. So imagine February on the docks. You’re wearing Gortex gloves, trying to punch numbers on that thing.”
Why do I blog this? Always intrigued to find examples of such issues.
Posted: March 27th, 2008 | 1 Comment »
It’s often when reading obscure and never-translated European writers that I find the most intriguing ideas, especially when it comes to foresight and innovation. The book “Les sens de la Technique” by Victor Scardigli is no exception; the title is a sort of pun, since “sens” in French means both “meaning” and “direction”. Thus you can read the title as “The Meaning of Technique” or “Where Technique is Heading”, which reveals the ambivalence of technical innovation. What’s intriguing here is that the author, for once, does not distinguish “techniques” from “technologies”, rather taking techniques as a whole that encompasses vaccines as well as ICTs.
Above all, the book is about the gap between the expectations our societies put into innovation AND the weak consequences of the first changes we can notice. After invention and R&D, an innovation is expected by some (especially the inventors) to diffuse into society and “impact it” (for better or for worse). Different rationales are at stake here: engineers or biologists expect Science to serve Progress and the reciprocal adaptation of human beings and techniques, and hence they measure the “social impact” of their invention. On the other hand, social scientists, often more convinced by the prominence of human causalities, are more skeptical and think that new techniques are only tools people use to modify the course of things according to their own objectives.
The author then addresses how techniques and their usage evolve over time, describing 3 phases in his “diffusion model” using a raft of interesting examples that I won’t detail here:
- Phase 1: The “time of prophecy and fantasy” (enthusiastic or terrifying), where revolutions are predicted and the technique is “inserted socially” (right after invention and R&D). It’s mostly the time of positivists and the moment where imaginary symbols are constituted. The fewer objective facts you have, the more imaginary you get, so irrational thoughts are important here. Prophecies (or the social actors who promote them) attempt to create a connection between 3 elements: the new technical object, human desire, and the expectations/fears of the time. This leads to imaginary representations that you can find in the discourse of companies promoting the innovation, in surveys or in advertising/media messages. For Scardigli, there are of course constant imaginary issues: power over constraints (liberty or slavery), knowledge, fear of death, social justice, social bonds, economic wealth and global solidarity. There is therefore a discourse around the hopes and fears linked to these issues, which are recurring in history. What happens is that fantasy, scientific knowledge and actions are intertwined, and even the weakest signal is turned into an excessive hope or fear. Prophecies become necessities and then self-justifying.
- Phase 2: The “delusion phase”, which suggests how the expected technological revolution does not lead to a social revolution. The positivists’ prominence is obscured by skeptical voices who point to the gap between forecasts and realizations/effects. They also reveal how “techniques” themselves are not sufficient to change “society”. To some extent, observers realize that science only makes progress… in science. It’s of course the time when “users/people” enter the scene and begin employing the technique. These small actors transform, invent new uses, hack or tweak the innovation. This appropriation and reinvention of daily life leads to a third phase.
- Phase 3: The “side-effect phase”: 30 or 40 years later, the real diffusion of the technique is effective and some social, more long-term consequences appear, often different from the ones expected at first (new social forms, new forms of culture or human activities). He cites the example of a sort of bulletin-board system in France in the 80s that was expected to revive suburban communities. What happened is that the technology vanished (the state program was stopped) but it allowed people to gather, meet and create “mediating” organizations that survived. In the end, the collective imaginary of progress from the 1st phase is articulated with the strategy of the actors who promote the innovation. Social change appears as a side-effect of the technical innovation, not because of it. The introduction of the innovation acts as an “analyser” revealing problems, social dynamics, aspirations, needs and, above all, as an alibi for new forms of sociality. And at the end of the road, it’s end-users themselves who give sense to techniques by integrating them into their daily life/culture.
Scardigli also raises the importance of the socio-cultural context of innovation: innovations often fail without it. He exemplifies this with a description of “mediating” persons, social actors who can promote technologies and make people understand how they will be of interest for their purposes/life. In addition, there is of course a compromise between the Ideal of the project and economic/user realism. If what happens in the 3rd phase is different from what was expected in the first one, it’s because big actors (States, companies) are struggling with each other over different visions BUT also because small actors (users!) modify, change, tweak or slow down the unfolding of these innovations.
Finally, in his conclusion, he discusses some lessons about progress and innovation:
- Human beings build their own history, sometimes by designing new techniques but often with other means (e.g. organizational). And it’s not these techniques that will change our social and daily life.
- These innovation efforts are carried out over and over, as a sort of Sisyphean curse, because new techniques have to articulate both Science (which likes to “discover”) and the social demand for a better world. Unfortunately, harmonious encounters between the two are very rare, and needs and innovation scarcely match. Technical inventions are always the fruit of a culture, and inventors, engineers and users all share the will to have a better world, so they try, like Sisyphus.
- Social appropriation is always slower than technical innovation. 5-10 years are needed to go from the fantasy phase to find a niche of users. 10 or 20 years are then needed so that the innovation is entirely appropriated in daily life.
Posted: March 27th, 2008 | 3 Comments »
Being interested in technological failures, I read “Where’s My Jetpack?: A Guide to the Amazing Science Fiction Future that Never Arrived” by Daniel H. Wilson. Some excerpts that I found interesting, related to causes of failures:
Jetpack: “the development of the jetpack effectively ceased the day Wendell Moore passed away, and there are plenty of reasons why. As it turns out, the government frowns on the notion of everyday people equipped with jetpacks and the ensuing midair collisions, air rage, and transformation of drunk drivers into inebriated human torpedoes. Worse yet, jetpacks are nearly useless in military applications – a soldier strapped to a jetpack is a sitting duck”
Moving sidewalk: “a few litigious pedestrians have spoiled it for the rest of us with their skull-cracking falls and attendant lawsuits”
Self-steering cars: “Obstacles abound, but without a broader understanding of the world, a robot car cannot tell the difference between a harmless clump of grass and a farmers’ market. Negative obstacles, such as holes in the ground, are particularly difficult for robot cars to identify. Navigation is also more difficult in cities, where tall buildings and bridges can block crucial GPS signals and soft, delicate targets (called pedestrians) abound.”
Flying car: “Merely providing the vehicles is not enough, however; if everyday people are to use them, scientists must know how to track thousands of these car-planes. And knowing is half of the battle. Collision-deterring navigation systems are key to transforming highways into skyways. Regular people just can’t be trusted”
Hoverboard: “They may be perfect for cruising over flat surfaces like water, ice, or a well-manicured lawn, but they are dangerously inept on city streets”.
Why do I blog this? Currently collecting material about technological failures and failed (micro-)visions of the future for a project.
Posted: March 26th, 2008 | No Comments »
A recent EPFL Technical report I wrote with Fabien Girardin and Pierre Dillenbourg: A Descriptive Framework to Design for Mutual Location-Awareness in Ubiquitous Computing.
“The following paper provides developers, designers and researchers of location-aware applications with a descriptive framework of applications that convey Mutual Location-Awareness. These applications rely on ubiquitous computing systems to inform people on the whereabouts of significant others. The framework describes this as a 3 steps process made of a capturing, retrieval and delivery phase. For each of these phases, it presents the implications for the users in terms of interpretations of the information. Such framework is intended to both set the design space and research questions to be answered in the field of social location-aware applications.“
The paper actually gives an overview of the main issues regarding location-based services, and more specifically multi-user location-aware applications/mobile social software.
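Purely as an illustration of the capture/retrieval/delivery structure (the names and data shapes below are my own, not taken from the report), the three phases could be sketched like this:

```python
# A loose sketch of the three-phase framework for mutual
# location-awareness applications. All names, places and the
# accuracy figures are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class LocationFix:
    user: str
    place: str          # e.g. a named place, a cell id, coordinates
    accuracy_m: float   # positioning error introduced at capture time

def capture(user: str, place: str, accuracy_m: float) -> LocationFix:
    """Phase 1: sense where a user is (GPS, WiFi, cell towers...)."""
    return LocationFix(user, place, accuracy_m)

def retrieve(fixes: list, requester: str) -> list:
    """Phase 2: fetch the whereabouts of the requester's significant others."""
    return [f for f in fixes if f.user != requester]

def deliver(fixes: list) -> list:
    """Phase 3: present the information so users can interpret it."""
    return [f"{f.user} is near {f.place} (+/- {f.accuracy_m:.0f} m)" for f in fixes]

fixes = [capture("alice", "Flon", 30.0), capture("bob", "Ouchy", 150.0)]
print(deliver(retrieve(fixes, "bob")))
```

Each phase is where interpretation issues creep in: the accuracy lost at capture time, the filtering at retrieval, and the wording chosen at delivery all shape what users actually understand about each other's whereabouts.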
Posted: March 26th, 2008 | 1 Comment »
Seen in Verona, Italy last weekend. Different layers of information, some official (regular signage), some more informal (badly written with Tipp-Ex, to state that drugs can be found on the right).
Another way to communicate that information is to use shoes hung on a telephone wire, as seen in the picture taken in San Francisco a few years ago (but this may be more of an urban legend):
Anyway, this is part of an ecosystem of signs in contemporary cities that are more or less perceived or understood by city-dwellers.
Posted: March 25th, 2008 | No Comments »
Reading this piece on ZDNet that I flagged a few months ago, I stumbled across some interesting figures:
- In 2007, Minitel traffic and services generated 100 million euros (shared between the French provider France Telecom and third parties) through 4,000 “services” (sort of the equivalent of websites). In 1996, there were 25,000 services that generated 1 billion euros of revenue.
- 220 million connections in 2007, approximately 20 million per month.
- Traffic dropped by 90% between 1996 and 2006 and by 35% between 2006 and 2007.
- There are still 1 million active Minitel terminals, but people also access Minitel services through their PCs (2 million do, through emulators). In 1996, there were 6.5 million terminals.
- Minitel terminals are still sold and – even better – manufactured by recycling old ones.
- Most of the usage is: looking up addresses/phone numbers via the mythic “3611″, the reverse phone directory, astrology and bets. But the “minitel rose” (sex chats) has vanished.
- Professional services are very important, mostly for logistics marketplaces.
- 25% of the revenue comes from financial services (following stock exchanges, investments, etc.)
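A quick back-of-the-envelope check of these figures (my own arithmetic, assuming the quoted drop percentages apply to connection counts):

```python
# Back-of-the-envelope check of the Minitel figures above.
# My own arithmetic; assumes the -90% and -35% drops apply to connections.

connections_2007 = 220e6          # 220 million connections in 2007
remaining_share = 0.10 * 0.65     # -90% (1996->2006), then -35% (2006->2007)

# Implied traffic back in 1996, working backwards from 2007
connections_1996 = connections_2007 / remaining_share
print(f"Implied 1996 traffic: {connections_1996 / 1e9:.1f} billion connections")
```

In other words, if both percentages are taken at face value, only about 6.5% of the 1996 traffic was left in 2007, which would put 1996 traffic in the billions of connections.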
Why do I blog this? Intrigued by an object of the past that still offers some resistance to progress. The fact that people still use Minitel services through the Web is very interesting: the thing works, people have their habits and keep using the media they have used for a long time.
It’s also important to note that if things/inventions are slow to take off, they are also slow to die! If there’s an S-curve up to a mature market for any invention, the curve is mirrored on the way down as well. A piece in the NY Times the other day addressed this issue concerning mainframe computers, which were expected to disappear ten years ago:
“What are the common traits of survivor technologies? First, it seems, there is a core technology requirement: there must be some enduring advantage in the old technology that is not entirely supplanted by the new. But beyond that, it is the business decisions that matter most: investing to retool the traditional technology, adopting a new business model and nurturing a support network of loyal customers, industry partners and skilled workers. The unfulfilled predictions of demise, experts say, tend to overestimate the importance of pure technical innovation and underestimate the role of business judgment.“
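The mirrored S-curve idea can be sketched with a logistic function (a toy model of my own, not data from the article):

```python
import math

def logistic(t: float, midpoint: float, rate: float) -> float:
    """Share of the market adopted at time t (between 0 and 1)."""
    return 1.0 / (1.0 + math.exp(-rate * (t - midpoint)))

# Adoption ramps up slowly, accelerates, then saturates...
adoption = [logistic(t, midpoint=20, rate=0.4) for t in range(41)]

# ...and the decline of a dying technology can mirror that curve:
# slow at first, then fast, then a long tail of remaining users.
decline = [1.0 - a for a in adoption]

print(f"adoption: t=0 -> {adoption[0]:.2f}, t=20 -> {adoption[20]:.2f}, t=40 -> {adoption[40]:.2f}")
```

The long tail at the end of the decline curve is where Minitel terminals and mainframes sit: a small but persistent share of users that takes decades to fade.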
Posted: March 25th, 2008 | No Comments »
Among my readings during Easter was “Small Things Considered: Why There Is No Perfect Design” by Henry Petroski. The whole book is about design as a compromise in response to constraints, illustrated by stories concerning automobile cup holders, duct tape, WD-40, paper cups/bags and the devices that make them, the invention of single-lever faucets, the redesign of vegetable peelers, and printers. It reads a bit like a Stephen Jay Gould book in the sense that it’s highly descriptive with lots of details. Some chapters are less insightful than others (the one about buying a house was a bit less interesting). And Petroski is an engineer, which gives him a certain perspective on the world.
The conclusion was certainly the part that interested me most, about silos/disciplines:
“Designing and building a piece of technology is more than an application of science. In fact, relying on science alone would make it virtually impossible to design even a modest bridge. What science would be applied? The laws of mechanics tell us that forces must balance if the bridge is to stand. But what forces, and stand how? Unless inventors and engineers, designers all, can first visualize some specific kind of bridge in their mind’s eye, they have nothing to which to apply the laws of science. The creation of a bridge or any other artifact requires, before anything else, something imagined. Whether or not science can be applied to that mental construct is a matter of availability. If there is a body of scientific knowledge that can be applied, then it would be foolish not to exploit it.
In fact, “Science finds – Industry Applies – Man Conforms” will never be more than a catchy motto. The reality is “People Design – Industry Makes – Science Describes.” It is the creative urge that drives the human endeavor of design, which leads to inventions, gadgets, machines, structures, systems, theories, technologies, and sciences. Both science and technology are themselves artifacts of human thought and effort.“
And the last bit about failures:
“Simply put, all technology is as imperfect as its creators, and we can expect that it will always be. As we can, by practice and discipline, improve our own behavior, so we can, by experience and process, improve the behavior of our creations.
As this book has suggested, there are countless examples of technology’s imperfections and limitations, from the simplest to the most complex of made things. By understanding their flaws and the limitations of the design process that created them, we can better appreciate why they are and must be imperfect. All things designed and made have to conform to constraints, have had to involve choice among competing constraints, and thus have had to involve compromise among the choices. By understanding this about the nature of design, we can better negotiate the variety of stairways that we encounter, no matter how idiosyncratic or metaphorical, taking us from one level of technology to another.“
Why do I blog this? Being currently interested in “failure”, possibly for a book/short piece, I am gathering sources like this. What did I learn here? All the evidence gathered in this book is meant to illustrate that “design failures” are not caused by human errors but are a side-effect of the need to make compromises between needs and constraints.
Posted: March 23rd, 2008 | 2 Comments »
A location-based annotation that indicates when water overflowed that street in Paris. An interesting marker of the past (from 1910) that aims at reminding us of a different state of the environment. That’s the sort of Holy Grail for mobile phone service developers… who try to promote a digital equivalent to this. Where are we in 2008 with respect to this sort of system?