When Caleb Gets A Call

A still from Westworld Season 3 Episode 01

Contributed By: Julian Bleecker

Published On: Friday, January 29, 2021 at 12:43:54 PST

***

A Design Fiction Breakdown

There’s a wonderful Design Fictional moment, done with a kind of subtlety and incisiveness that implicates the residue of a possible episteme in which the Algorithm is the central mechanism of our relationship to the world, and to ourselves.

Rewind to Westworld, Season 3, Episode 01 at around 43:50.

In the scene, our earnest protagonist Caleb gets a call checking in on an unlikely desk job he recently applied for. On the phone is an agent from the company. The conversation is polite and quite empathetic on the part of the agent, who is explaining that, unfortunately, it wasn’t going to work out — seems there wasn’t a good fit. 🤷🏽‍♂️

Westworld Season 3 Ep 1 @ 43m:50s

Sean the Agent: Hi, this is Sean from DCA.

Caleb: Is this about the position?

Sean the Agent: Listen, Caleb. Your application was very strong. Unfortunately, our strategy group hasn’t found anything.

[PAUSE]

Sean the Agent: Caleb, are you still with me?

Caleb: Okay, thank you. Look, is there anything I should be working on to make myself a better candidate?

Sean the Agent: Like I said, your application was strong. We just don’t have anything that would be a great fit for you right now.

Caleb: Sure. But, um, you know, if I’m not a good fit, is there a different shape I can fit myself into?

[LONG PAUSE]

Caleb: Hey, no offense, but are you a human?

Sean the AI: I’m Sean. I can help you with all kinds of resources for DCA. Anything else I can do for you today, Caleb?

Caleb: No, that’s all right. Thanks.


In just a few lines, we get a fairly rich exchange that implies a rather thick set of characteristics of this world sometime in the middle future of 2053 (but don’t get stuck on dates).



What Design Fiction does best is to effervesce more of what Roland Barthes called the ‘residue’ of the reality of the world. Good Design Fiction doesn’t hit you over the head with lots of didactic anchorage — exposition explaining technology, or why some techie-looking nozzle does this or does that — or overdone props like pump-action plasma rifles and overwrought vehicles done up by self-indulgent production designers, or lots of CG lens flare. Good Design Fiction restrains itself and emphasizes a kind of relatable everyday-ness.

The mundane science-fictional drama context here — struggling to get a job rather than heroically shooting up an evil robot encampment or saving the planet from an asteroid strike — lets the implications of a world of AI agents, like “Sean the AI”, force us to imagine the broader context of the world. The moment isn’t made distracting by lots of lasers going pew-pew-pew. This scene could be at a Westfield Mall in the near future, it’s that boring and ordinary, which allows us to zero in on the subtle characteristics of the conversation between a human and a non-human.

We might use this scene to consider the implications evoked here of AI-based natural language agents. In fact, I wouldn’t be surprised if the scene has entered into conversations in and around work going on with Siri, Alexa and whatever else is coming down the pike.

There are four things I found compelling about the scene — four things that tickle my imagination, all expressed in the brief exchange between Caleb and Sean the AI. Each of these implicates some characteristic of this near future world, describing indirectly the character of human — non-human interactions and relationships.

  1. The empathy the HR AI is able to evoke in Caleb — it feels like Caleb doesn’t want the AI to feel bad for rejecting his application.
  2. That Caleb is able to notice that he may in fact be talking to a non-human. He broaches that point with a bit of awkwardness, as if he is also empathetic and doesn’t want to embarrass the AI.
  3. That Sean the AI seems not to get the subtle humor of Caleb’s “is there a different shape I can fit myself into” line and so just stops responding. I can imagine this either as a subtle way of handling an error state — as if the algorithm can’t find a ‘next state’ or whatever (see the sketch just after this list) — or as how a somewhat humorless human might respond, doing the audio equivalent of a couple of blinks and a blank stare.
  4. That Caleb politely ends the call, almost apologizing rather than just hanging up.
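
To make the error-state read in point 3 a little more concrete: the mechanics might be as banal as an intent classifier with no matching state, falling back to a canned self-introduction. Here’s a minimal, entirely speculative sketch in Python; the intents, cue words, and function names are all invented for illustration, not anything the episode specifies.

```python
# A toy sketch of the "error state" read: an agent maps utterances to
# scripted intents and, when nothing matches (say, figurative humor about
# fitting into "a different shape"), falls back to a canned
# self-introduction. All names and cue words here are invented.

SCRIPTED_REPLIES = {
    "ask_feedback": "Like I said, your application was strong. We just don't "
                    "have anything that would be a great fit for you right now.",
    "confirm_rejection": "Unfortunately, our strategy group hasn't found anything.",
}

KNOWN_CUES = {
    "ask_feedback": ("better candidate", "working on", "improve"),
    "confirm_rejection": ("position", "application"),
}

CANNED_FALLBACK = ("I'm Sean. I can help you with all kinds of resources "
                   "for DCA. Anything else I can do for you today, Caleb?")


def classify(utterance: str) -> str | None:
    """Naive keyword matcher standing in for a real intent model."""
    lowered = utterance.lower()
    for intent, cues in KNOWN_CUES.items():
        if any(cue in lowered for cue in cues):
            return intent
    return None  # "a different shape I can fit myself into" lands here


def respond(utterance: str) -> str:
    intent = classify(utterance)
    if intent is None:
        # No 'next state' found: fall back to the canned self-introduction.
        return CANNED_FALLBACK
    return SCRIPTED_REPLIES[intent]


print(respond("Is there a different shape I can fit myself into?"))
```

Run against Caleb’s lines, the feedback question gets a scripted reply and the shape joke drops through to the fallback, which is more or less the beat the scene plays.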

These four aspects of the exchange are at the front of my mind as I begin to excavate what they imply — how they implicate, or are somehow symptomatic of, the larger character of this fictional world. By implicating these aspects of the world, we’re drawn to fill in the gaps with our imagination from the point of view of our Design Fictional mindset, some of which I will do below.

Design Fiction Creates Implications, Not Predictions

How do we interpret the action here and then make some sense out of that action so we gain a deeper understanding of the world that we’re prototyping?

From a Design Fiction perspective, what I see here can break down into two ‘reads’.

The first is expressly within the diegetic moment. By that I mean the actual dialogue of the scene — the exchange between Caleb and Sean the AI.

((A bit further below I’ll explain why I say ‘diegetic’ instead of just saying, you know — dialogue, or script. ‘Diegetic’ has a technical specificity to it that’s helpful as we develop some useful aspects of Design Fiction. It’ll be worth understanding. It adds some precision to the practice, even as it sounds highfalutin. Like any technical language, it helps practitioners communicate precisely and make collective meaning of the world, like when a horticulturist refers to a plant by its technical name rather than ‘that weird squiggly plant over there with the blue thing on top.’ Technical languages are important to getting work done in the world, not just to sounding fancy. They make it so the guy who tightens the screws that hold the wing onto the airplane knows precisely how many Newton-meters of torque to apply, which is better than the alternative of tightening them “a bit more than hand-tight” or “until it stops, then a bit more” or something loosey-goosey like that. I guess they’d be bolts, not screws, so there’s that too. Or maybe rivets. 🤷🏽‍♂️ ))

The second is the expression of the state of the world as we ponder this question: ‘So..what’s going on here? How did we arrive at a state of things such that this kind of experience happens?’

That is to ask: what does this little diegetic prototype of an AI user experience tell us about the relationship between a human (Caleb) and a non-human (the AI)? How does this activate and swirl about in our imagination? What are the good, indifferent, and unintended outcomes of such a system, or such an exchange? What does it make us think about, in social as well as technical terms? By what means, and through what sequence of events both large and newsworthy and small and boring, did we arrive at this moment shown on screen?

A still from Westworld Season 03 Episode 01
A visual explication of an AI in a meaningful conversation with a human

The First Order Read

First, the diegetic moment — our first-order ‘read’ of the scene. What we see is a Design Fictional representation of a kind of AI agent named Sean that, we quickly realize, has somehow been invoked (IFTTT?) to “follow up” with job applicants when some human somewhere clicks a box in a web form, or when some other futuristic triggering event effectively rejects the applicant.
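
To make this first-order read concrete, the plumbing could be nothing more exotic than an event handler: a rejection event comes in, a voice agent gets dispatched. A purely hypothetical sketch in Python, rather than whatever DCA actually runs; every function and field below is invented:

```python
# Speculative sketch of the trigger: a human clicks "reject" in some web
# form, this handler fires, and an AI persona is dispatched to make the
# polite follow-up call. Nothing in the episode specifies this plumbing.

from dataclasses import dataclass


@dataclass
class Applicant:
    name: str
    phone: str


def build_rejection_script(applicant: Applicant) -> str:
    # A polite, empathetic template, the tone DCA apparently optimizes for.
    return (f"Hi, this is Sean from DCA. Listen, {applicant.name}. Your "
            "application was very strong. Unfortunately, our strategy group "
            "hasn't found anything.")


def dispatch_voice_agent(persona: str, to: str, script: str) -> None:
    # Stand-in for whatever telephony/agent service this world runs on.
    print(f"[{persona} -> {to}] {script}")


def on_application_rejected(applicant: Applicant) -> None:
    """IFTTT-style trigger: a checkbox click upstream, Sean calls downstream."""
    dispatch_voice_agent("Sean", applicant.phone,
                         build_rejection_script(applicant))


on_application_rejected(Applicant(name="Caleb", phone="+1-555-0199"))
```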

Or, hold on — perhaps there was an AI listening in on the interview that occurred, by implication, in an earlier scene. Could it be that interviews are monitored by AIs? Or perhaps the interview was with an AI that was in some bouquet of flowers next to a box of Kleenex on a table? You know, you talk to that bouquet AI, which then makes an assessment based on multiple factors, including Caleb’s performance in that room, his “feed”, his database of past everythings, etc. Maybe Caleb was invited to apply because there was a statistical chance that his overall profile might fit with a particular role, but we just need to get him in the flower bouquet room to validate that statistical possibility. Maybe the AI interviewing him knew after four carefully phrased and uttered units of introductory conversation, measuring the tone and sentiment and specific responses, that he wasn’t a fit, but the AI carried on with the interview because, like..humans are to be treated with kid gloves during vulnerable transactions like applying for a job or asking for relationship advice. No need to get them riled up and have them cause a scene, right? It’s far too easy for an algorithm to drive a human to blind raging lunacy, as we know.. Best to say the right things for another 30 minutes to convince them they have a shot at the job even as the algorithm has already flagged them for rejection.
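
The bouquet-AI read is just as easy to caricature in code: a verdict quietly fixed a few exchanges in, while the conversation runs to its polite end. The scoring below is a toy stand-in for whatever sentiment and “feed” analysis this world uses; the four-exchange threshold comes from the speculation above, and nothing here is from the show:

```python
# Toy version of the "bouquet AI": the verdict is fixed after four
# exchanges, but the interview keeps going so the applicant never feels
# the decision land mid-conversation. Entirely invented for illustration.

def score_exchange(utterance: str) -> float:
    """Stand-in for a real sentiment/fit model; returns a score in [0, 1]."""
    positive_cues = ("experience", "team", "built", "shipped")
    hits = sum(cue in utterance.lower() for cue in positive_cues)
    return min(1.0, hits / len(positive_cues))


def run_interview(answers: list[str], threshold: float = 0.5) -> bool:
    scores: list[float] = []
    verdict = None
    for i, answer in enumerate(answers, start=1):
        scores.append(score_exchange(answer))
        if verdict is None and i == 4:
            # Decision made here, four "units" of conversation in...
            verdict = (sum(scores) / len(scores)) >= threshold
        # ...but the carefully phrased questions keep coming regardless,
        # kid gloves all the way to the polite goodbye.
    return bool(verdict)  # False if the interview never reached four exchanges
```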

We don’t know what triggered this rejection, but such things as this are implied and left to our imagination to consider, which is the fun part of the Design Fiction mindset.

The Second Order Read

Next, the second-order read is to imagine the kind of world in which this human to non-human conversation actually happens — probably zillions of times a day, we might imagine. When we put our Design Fiction mindset hat on, we’re moving well beyond the narrative. We want to imagine big and see what we discover in our perambulations into the implied but unspecified realm of the possible, intriguing, unexpected and, importantly, unintended outcomes that are evoked by these nine lines of dialogue.

To start, consider the various instrumentalities and algorithms and other technological things that would have obtained to make this kind of human-non-human conversation technically possible. The exchange here seems just barely a nudge or two richer than what we have nowadays with Alexa or Siri or whatever pronoun we’re meant to use for the Google thing. This kind of exchange is almost certainly happening in “the lab” at GAAMF. For sure. I don’t actually know for sure because I work in a converted garage behind my house, but, you know..it’s happening for sure.

Let’s imagine, thoroughly, the specifics of the HR things going on here. Whose algorithm is behind all of this? That’s a good spark of a Design Fiction style conversation.

Has Salesforce® bought some blockchain-based machine learning artificial human resources intelligence start-up that came along to disrupt the old way of doing things with internet-scale capabilities, and now tied that into a “360 Suite” of sales and human operations “Solutions”?

Has ML and AI and all of that become so effectively normalized as a component of the everyday software ‘stack’ that not having it as part of your ‘solution’ would be as weird as a company not having a website today? What is a world where all that stuff is hygiene, just routine to where you don’t think twice about using an AI to promote your wares, evaluate and interview prospects, and talk to the algorithmically represented personalities of deceased friends and family (as Caleb does in this episode)? Or a world where AI agents are dispatched, created, and shared with the same gusto, fervor, whimsy and recklessness as TikTok posts? “Hey, check out this AI I just made! Set it loose in your cooking environment — it makes an awesome paella with extruded shrimp-like material just like my mom used to!” Is this a world where “influencers” peddle homeopathic anti-aging AIs for crypto the way they influence us to buy wool beanies and matcha tea today?
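
One way to feel that ‘hygiene’ point is to imagine the deployment manifest. If an AI agent were as unremarkable as a database, it might be declared the same way. This is a thought experiment in Python-as-config; none of these service names or images come from the show:

```python
# A make-believe service manifest for a world where an AI agent is as
# routine a dependency as a web server or a database. All invented.

STACK = {
    "web":         {"image": "storefront:2053.1"},
    "db":          {"image": "postgres:61"},
    "hr-agent":    {"image": "dca/sean:latest"},        # rejects applicants, kindly
    "sales-agent": {"image": "acme/360-suite:latest"},  # promotes the wares
}

for name, service in STACK.items():
    print(f"starting {name} from {service['image']}")
```

Nobody in that world would read the two agent entries twice, any more than we read “db” twice today.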

Do you get an AI with your social media account as part of the bargain, as normal and unspoken and assumed as a phone that comes with a camera and a car that comes with doors? (Anyone remember when intermittent windshield wipers for your car were a luxury extra? Or when phones were just telephones, not global network endpoints with cameras?) Like — you don’t even ask if your social media account comes with an AI any more than you ask if your phone has a camera — you ask how to enable technical Farsi on your AI and make sure that it has the latest user engagement engine from Gamestop Labs.

What larger social apparatus and collective desires would want a world such as this — or want to avoid living in it? Can you imagine a world in which AIs are legal to operate only in specific social arenas — like doing operational tasks such as screening prospective employees? Perhaps some small resistant social order, one that happens to love matcha tea, but only the kind grown outside, not printed in vats, has arisen whereby AIs “in the wild” run up against social norms — like smoking indoors, or the open carrying of a handgun.

It’s productive and generative to imagine that AIs just out and about are seen as too mechanistic, as something that indicates a lack of desire for the authentic ‘spirit’ of thinking for oneself rather than through an algorithm, of doing things without the assistance of an algorithm, like parallel parking “by hand.” This insurgent social formation looks down upon using algorithms that optimize what you say in a conversation with friends, or that dull you by deciding for you what to order for dinner. This might be a kind of social movement where a certain sense of pleasure and accomplishment is celebrated for obtaining mastery over a craft, like talking to a normal, old-fashioned human guided by the evolutionary impulses wired into the old-fashioned meat-based algorithm 🧠.

Anyway.

What we’re doing here, with our Design Fiction mindset, is this: we’re using the unfolding of this moment of drama to prototype, in a cinematic context, through visual storytelling, a world with AI agents doing such things as rejecting job applicants. But there is all of this other material that gets cooked out of our breakdown and analysis — all of which was generated by allowing my Design Fiction mind to imagine the contingent circumstances that would make this moment come to pass in some fictional near future world.

Why Design Fiction?

Sometimes I get to this point and I wonder — why did I imagine all of these things? What’s the point of spending time in this hypothetical and imaginary world that doesn’t even exist, and that isn’t even a direct, diegetic component of what is on the screen? Who cares if I imagined Salesforce® getting into blockchain, ML and AI to expand their offering?

I’ll give you three reasons. There may be more.

  1. This kind of future world-wondering and considering is like tossing medicine balls and hauling kettlebells for the imagination. It’s useful and generative to try on new, unanticipated future possibilities that deviate away from expected outcomes, or the more normal trajectories of Moore’s Law futures or doom-and-apocalypse futures. And thinking of alternatives to the pre-canned futures we’re fed is fucking hard. It really is. Generally, if you ask someone to think of an alternative future, you end up with lots of “What about..” statements that trail off in embarrassing mumbles and shrugs. It can be like watching someone trying earnestly to do one single dead-hang pull-up and completely failing, legs kicking and pumping and flailing to absolutely no effect. Beet-red face. Doubled over, apologizing. Making excuses about a heavy lunch. It’s embarrassing to watch. It really is. We think we have a handle on thinking about the future, but there’s really not much evidence to indicate such. We should have a primary school curriculum for Active Imagining, Levels I, II, III.

  2. Design Fiction allows us opportunities to prototype things in a way that is generative of discovery, fresh insights, new understandings and even new aspects of an existing idea that wouldn’t have been revealed otherwise. It is very much like doing the work of building a software prototype of an idea. In the work of doing that construction, you learn more about what it is you want to create and likely have more than a few ideas generated along the way. We never really know precisely what we want when we’re in that early discovery and exploration phase of an idea — so we try things out. Design Fiction is a way of trying things out, only the material is diegetic props and prototypes rather than Xcode and a Raspberry Pi, or some such. Without getting too deep into the definitional fisticuffs of what is or is not Design Fiction, I’ll say this at the risk of contradicting myself in the near future: the diegetic prototype, which is the root of a Design Fiction, is any object through which some archetype of a conceit — a scene in a television show, an unboxing video, a fictional quick start guide, an advertisement, a blister pack of some weird ingestible nutritional supplement, etc. — leads to thoughtful conversations/imaginings around the implications of possible near future worlds. (In the Westworld scene I’d argue that the earbud thing serves as the object-based manifestation that is then enhanced by those nine lines of in-world dialogue. This is sufficient for me to go down the kind of Design Fiction rabbit hole I described above.)

  3. Humans are notoriously bad at planning for the near future, let alone imagining future outcomes that are other than what they desire. Design Fiction is a bit of an antidote to our often stilted, stammering inability to imagine change. Holding multiple possibilities in your head simultaneously with a possibly incongruent present is hard work, sort of like how cross-training is hard work. So, you know — cross-train your imagination more.

The Diegetic Prototype

I promised I’d sort out this ‘diegetic’ thing. It’s fairly simple and a little bit technical. Diegesis refers to the things that happen in the moment of a film or other kind of dramatic action, like a play or opera or whatever. It refers to things in the narrative that are within the world the film itself is representing.

Extradiegetic or non-diegetic refers to things that are not in the moment of the film, that perhaps we, as the audience, are privy to but no one “in” the film is. The classic example of the extradiegetic is the soundtrack of a movie, which we hear as the audience but which the actors on screen do not. Unless they do, which is an occasional trick and a bit of a nod to the ‘fourth wall’ by clever and sometimes overly clever filmmakers. If memory serves, Quentin Tarantino is known to do this to typical Tarantinoesque effect: we the audience hear in full fidelity a pop song from the 1970s, which is contemporary with the setting of the film. As the audience, we understand this to be part of the soundtrack. That is, until it seamlessly transitions into a slightly crackly, low-fidelity audio track that’s now clearly coming from the crappy radio in the car driven by a character in the film. At that point, the extradiegetic sound has become part of the diegesis — because the actor is whistling and snapping their fingers to the song that used to be just the soundtrack, which they wouldn’t do if they couldn’t hear it.

So what’s this diegetic prototype thing have to do with Design Fiction?

The term was introduced in this context originally by David Kirby, who teaches at Cal Poly in San Luis Obispo. He wrote a paper that describes diegetic prototypes precisely:

This paper focuses specifically on the production process in order to show how entertainment producers construct cinematic scenarios with an eye towards generating real-world funding opportunities and the ability to construct real-life prototypes. I introduce the term ‘diegetic prototypes’ to account for the ways in which cinematic depictions of future technologies demonstrate to large public audiences a technology’s need, viability and benevolence. Entertainment producers create diegetic prototypes by influencing dialogue, plot rationalizations, character interactions and narrative structure. These technologies only exist in the fictional world – what film scholars call the diegesis – but they exist as fully functioning objects in that world. The essay builds upon previous work on the notion of prototypes as ‘performative artefacts’. The performative aspects of prototypes are especially evident in diegetic prototypes because a film’s narrative structure contextualizes technologies within the social sphere. Technological objects in cinema are at once both completely artificial – all aspects of their depiction are controlled in production – and normalized within the text as practical objects that function properly and which people actually use as everyday objects. - The Future Is Now: Diegetic Prototypes and the Role of Popular Films in Generating Real-World Technological Development by David Kirby

Effectively, Kirby is saying that diegetic prototypes are constructions of scenarios meant to enroll an audience into thinking about a world in which such scenarios might come to pass. Design Fiction leverages Kirby’s insights and work heavily, both as he specifically describes them in his paper (and his book, “Lab Coats in Hollywood”) and through extensions to these ideas that we’ve developed over the years, with experience and practice under our collective belt.

I have a podcast episode in the final bits of editing where David Kirby and I talk this all through. It’ll drop soon — so you should subscribe!

A still from Spike Jonze's film 'Her'
Same but different future prototype

In Closing

I’ll leave you with this moment from Spike Jonze’s “Her”, which I had in my last newsletter and which triggered me to go back to that Westworld scene.

You’ll undoubtedly be familiar with the film. The relationship represented between Theodore (Joaquin Phoenix) and Samantha (voiced by Scarlett Johansson) is similarly fraught and rich, as is the one between Caleb and the deceased war buddy / friend Caleb has “subscribed” to talk to – presumably entombed in some kind of conversational AI algorithmic representation of his dead buddy’s personality.

In Westworld we’re led to believe that Caleb comes to some kind of richer understanding of himself before finally letting his friend “go” so he can live more for his future than pine for the past when his buddy was alive.

Is there a more deliberate kind of “Her” type OS that is like a talking algorithmic “cure” for various ailments of the psyche? I mean, not literally the drama of “Her”, where you discover that everyone falls in love with the same algorithm/personality — that’s Hollywood stuff. But where you have an AI that comes to know you in a particular way that helps you say out loud the kinds of things that will unlock whatever is preventing you from achieving your full self? And maybe is easier on your liver than Prozac or whatever people take nowadays.

I wonder about a possible world with better-for-you algorithms, constructed to achieve a more meaningful set of outcomes than just crappy “user engagement” or more time on screen. I don’t know what this world looks like, but the idea of effective, non-corrosive, non-coercive AIs that help us, make us happier, more functional, with a better understanding of our inner psyches could be worth some Design Fiction discovery and exploration.

Okay. Well — there you go. Until the next thoughts, be well and get to those creativity kettlebells!