“A theory of smart cities” by Colin Harrison and Ian Abbott Donnelly offers an overview of the different theoretical bases for the “Smart Cities” trope. As the authors mention, “the current ad hoc approaches of Smart Cities to the improvement of cities are reminiscent of pre-scientific medicine. They may do good, but we have little detailed understanding of why”.
After a quick introduction in which they describe what is hidden behind this term (use of digital sensors, penetration of networks that allow such sensors and systems to be connected, computing power and new algorithms that allow these flows of information to be analyzed in near “real-time”), they highlight two theoretical approaches:
“One of these is work in scaling laws going back to Zipf, but enormously enriched in recent years by theoreticians such as West and Batty to name but two. (…) This body of work provides evidence that although many behaviours of complex systems are emergent or adaptive, nonetheless there are patterns or consistent behaviour at the level of macro observation.
The second body of work considers cities as complex systems. (…) This approach introduces concepts such as interconnection, feedback, adaptation, and self-organization in order to provide understanding of the almost organic growth, operation, decline, and evolution of cities.”
Why do I blog this? I’m preparing a speech that I’ll deliver at the “Beyond Smart Cities” event in Madrid next week at the BBVA innovation center. My aim is to give a critique of the prediction trope in Smart Cities projects. The aforementioned article offers a relevant starting point for this, even though its perspective is quite partial in terms of academic references. The paper is also interesting for understanding the kind of assumptions IBM makes when addressing these issues (as attested by the partial list of references).