Contributed By: Julian Bleecker
Published On: Mar 2, 2024, 09:25:58 PST
Updated On: Mar 4, 2024, 08:48:53 PST
This post is a few things, including an exercise in having an idea and rapidly prototyping it, doing so with an LLM, and then remembering a similar project from 20 years ago in which I drew together the idioms of psychogeography, location-based sensing, and poetry.
All of it leans on the facility of having some LLM or another help translate little daydreams into materialized representations.
This is what the Poem Operating System robot I made with OpenAI and ElevenLabs has been creating 👇🏽
Someone somewhere in the NFL Discord, during a recent Office Hours perhaps, mentioned a world in which an Operating System was somehow a Poem-based OS.
“What is that?”, I wondered.
And that was that, until I wondered again while listening to 🐝BeeBot, the latest study and exploration from Hopscotch Research Lab investigating how to bring the soul back into the internet.
So here I am, walking Chewy the Dog, and I see a flow of interconnected APIs incanting a Poem out loud.
And I’m stuck on Matt’s Poem/1 project, which I eagerly backed because, well, I couldn’t think of a reason not to. I mean, I could, but I ignored that because not everything has to have utility, although the first thing I’ll do is open it up and figure out how to keep that USB cable from sticking out the side and making my eyeballs hurt.¹

So now I’m back in NFL Global HQ, sitting at Workstation 1A, and I’m seeing something: It’s OpenAI —> ElevenLabs —> Spoken Poems. That’s it. Nothing crazy. It’s an exercise. But I had this spark of an idea and I wanted to see if it would ignite a little fire.
I started tapping.
[[Pssst! PoemOS and projects like this are inspired by the awesome conversations and spirit of shared collaborative and coordinated creativity that comes directly out of the Near Future Laboratory Discord. If you’re interested in being a part of this community, then join us 👇🏽]]
Some of this musing and wondering out loud came out of these awesomely meandering biweekly calls with dens, and there’s been a thread where we wander around his continuing experiment with place-based audio interactions. It was maybe..well, let’s see — it says 2020, when I was alpha testing a little bot he called MarsBot that was being worked on at Foursquare Labs. It was the simplest little thing: if you had AirPods in, it would duck into your audio stream and tell you something related to where you were.
[[Parenthetically, it never ceases to crack me up that I went to watch the whole Crowley Clan on Family Feud win the something-or-another round. We were prepped by some PA to come bolting out of the stands and go crazy. I was genuinely amped. I remember Dennis’ dad saying to me, ‘who the hell are you?!’ while we were all hooping it up. Fun dinner that evening, I tell you what.]]
Anyway.
The concept of audio design for location-based things is super playful and super cool and gets lots of good discussion going on our little biweekly standing call. I’ve been trying out a new version called 🐝BeeBot. It just kind of sits there. You don’t really pay attention to it and can easily forget it’s even there until, while listening to, like..The AI Breakdown podcast, 🐝BeeBot ducks in and tells me something about where I am. (I happened to be sitting on the beach at, you know — Venice Beach — and it said something about some feature of the beach behind me. Early days, and super evocative as you wonder what one would do with such a baseline interaction paradigm.)
So now I’m sitting on the beach, wondering about lightweight forms of audio design that aren’t overburdened podcasts. Little things that would just appear in your ear. And perhaps with spatial audio integrated into them. (I’ve got some VST plugins I’ll use for the Near Future Laboratory Podcast that can move audio around in 3-space, so that’s kind of interesting..)
And then I’m thinking back to my WiFiKu project from, like..2004, back in New York City. I proposed a commission to Christina Ray over at Glowlab for the annual PsyGeoConflux event she ran. (Check out those links — and thank you to Rhizome for maintaining these important waypoints.)
I wanted to do a project where I’d walk around neighborhoods in Manhattan wearing a backpack with some hardware that basically harvested WiFi SSIDs (just the names of the WiFi access points it found) and cobbled them together into Haiku.
Right? You with me? Makes absolutely no sense! But to the psychogeographer, it’s baseline psygeocartographic mapping.
This was effectively wardriving, only I was walking around with a funny backpack or portable computer rig. (Remember — 2004. No smartphones, and PDAs probably could be rigged to wardrive/walk, but like…it’s not really about the instrument.)
I’d gather these lists in data files and then I’d have to manually make a flimsy file of where I found what, like this file 👇🏽 from in and around lower Manhattan / LES. (ps I’d just get rid of the tons of Linksys and Netgear APs that people didn’t bother renaming.)
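For flavor: the assembly was dead simple, mechanical stuff. A naive reconstruction might look like the sketch below. This is not the original code (the syllable counter is a crude vowel-group heuristic, and the SSIDs are invented), but it captures the spirit of turning a harvested list into a best-effort 5-7-5.

```python
import re

# Default SSIDs people never bothered renaming -- toss these first.
DEFAULT_SSIDS = {"linksys", "netgear", "default", "dlink"}


def syllables(word):
    """Crude syllable estimate: count runs of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))


def ssid_syllables(ssid):
    """Total estimated syllables across the alphabetic chunks of an SSID."""
    return sum(syllables(chunk) for chunk in re.findall(r"[A-Za-z]+", ssid))


def wifiku(ssids):
    """Greedily pack harvested SSIDs into a best-effort 5-7-5 haiku."""
    pool = [s for s in ssids if s.lower() not in DEFAULT_SSIDS]
    lines = []
    for target in (5, 7, 5):
        line, count = [], 0
        for ssid in list(pool):
            n = ssid_syllables(ssid)
            if count + n <= target:
                line.append(ssid)
                count += n
                pool.remove(ssid)
            if count == target:
                break
        lines.append(" ".join(line))  # may come up short; it's poetry, not math
    return "\n".join(lines)


print(wifiku(["linksys", "Apt 4 Sunshine", "BoweryLoft", "netgear",
              "NoisyPigeon", "LES Hideout", "freebird", "Orchard St"]))
```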
So I guess, thinking about Poem/1 and such — somehow the idiom of Poetry has been with me for, well — that’s 20 years, eh? I think that’s partially because you can get away with nonsense, or things that are not immediately valuable for their utility. Like — people don’t wardrive for the sake of making poetry; they usually do it to find open access points and then exploit them for nefarious who-knows-what-but-you-might-guess. If someone wardrove and said they were a War Driving Poet, you might at one level be, like..oh, okay. But…why?
So now here I am, wandering into wondering about a Poem OS, so I asked ChatGPT to assume I had an OpenAI API key (which I do) and an ElevenLabs API key (which I do) and gave it some specifications.
How about a Python program that interfaces with the OpenAI API and with the ElevenLabs API that generates a short pithy poem and then generates audio of that poem read by an ElevenLabs voice. Assume I have API keys for both of these services.
I’d like to be able to provide the specific prompt to generate the poem in a configuration file as an array of possible prompts for each day of the week, and assume one poem is generated per day.
I got this back amongst some other stuff. It didn’t quite work, but it was enough of a simple scaffolding that I could then start to fix and refine it.
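For a sense of the shape of that scaffolding, here’s a minimal sketch, reconstructed against the raw REST endpoints rather than lifted from the repo (which is linked below). The voice ID and model name are placeholders, and a prompts.json keyed by weekday name is an assumption that matches the spec above:

```python
import json
import os
import random
from datetime import datetime

import requests

# Assumed/placeholder bits: a prompts.json keyed by weekday name, a voice ID
# from your ElevenLabs voice library, and whichever OpenAI model you prefer.
OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]
ELEVENLABS_API_KEY = os.environ["ELEVENLABS_API_KEY"]
VOICE_ID = "YOUR_VOICE_ID"  # hypothetical -- pick a real one from your account


def todays_prompt(config_path="prompts.json"):
    """Pick one of today's candidate prompts from the config file."""
    with open(config_path) as f:
        prompts = json.load(f)
    weekday = datetime.now().strftime("%A")  # e.g. "Thursday"
    return random.choice(prompts[weekday])


def generate_poem(prompt):
    """Ask OpenAI's chat completions endpoint for a short pithy poem."""
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {OPENAI_API_KEY}"},
        json={
            "model": "gpt-4",  # assumed model name; use whatever you like
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


def speak_poem(poem, out_path="poem.mp3"):
    """Send the poem to ElevenLabs text-to-speech and save the mp3."""
    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
        headers={"xi-api-key": ELEVENLABS_API_KEY},
        json={"text": poem, "model_id": "eleven_monolingual_v1"},
        timeout=120,
    )
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)


if __name__ == "__main__":
    poem = generate_poem(todays_prompt())
    print(poem)
    speak_poem(poem)
```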
Less than an hour or so of tweaking the initial response from ChatGPT — which was a great start but not quite there, and had some of the API interfacing entirely wrong — and I started effectively evolving, reconsidering, and refining what I wanted this PoemOS thing to do. I’m thinking — okay, a different kind of Poem for each day of the week, just focusing on the day of the week.
And now I’m thinking..what about a version that listens to action in the Near Future Laboratory Discord community and makes a poem about that, or the kind of low-hanging-fruit edition that does a News Poem based on what happened that day or the day before — or a weekly summary as a Poem. I bet that’d be kinda weird?
So now I’m imagining what I would do with an mp3 file of something/some voice reading a poem that something else made.
But, sitting there in the studio and listening to that voice reading a machine-generated Poem, it didn’t feel done or complete.
So I drop that audio file into Descript, which is one hammer in the toolbox I use to produce the Near Future Laboratory Podcast. Now I have a transcript and can use Descript’s very awkward side tool to create one of those audiograms that basically strums out the libretto/text/transcript, or whatever you’d call it, for the generated audio. So now I have a poem, being read, with the words playing out in a video.
And now I’m taking this video and I drop it into Davinci Resolve Studio Edition, and stir in some graphics and some background video I shot the other day, and now I have a little video poem.
So — this was just a couple-hour Thursday evening experiment, the day after a pretty focused and intense day at Chapman University, when I wanted to do something where no one was necessarily expecting anything.
A study of an artifact from a future in which you get a Poem popped up into your audioscape.
Next steps? Well — there are so many adjacent possibilities that I’ll reserve new ideas for another one of those kinds of wandering days looking for different kinds of possible futures.
Here’s the Github repo, if you’re curious: https://github.com/NearFutureLaboratory/poem_os
And this is the evolution, me banging away on the keyboard to imagine in code and wander around half-baked ideas about what I was doing, about 45 minutes after what ChatGPT initially suggested 👇🏽
And this is an example of the JSON file containing weekday prompts. I go in here and adjust these routinely. I don’t want this to be some 100% automated robot. That’s somehow less interesting than it being more of a wind-up toy that you have to, you know — wind up, clean, adjust, set on a path rather than some daemon that just runs in the background.
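Something along these lines, hypothetically (the prompts here are made up; the real file is in the repo). Each weekday keys an array of candidate prompts, per the spec above:

```json
{
  "Monday": [
    "A short pithy poem about Mondays and reluctant beginnings.",
    "A haiku about the week starting whether you like it or not."
  ],
  "Tuesday": ["A short pithy poem about Tuesdays, the week's overlooked middle child."],
  "Wednesday": ["A short pithy poem about Wednesdays and being halfway to somewhere."],
  "Thursday": ["A short pithy poem about Thursdays and almost-there anticipation."],
  "Friday": ["A short pithy poem about Fridays and letting the week go."],
  "Saturday": ["A short pithy poem about Saturdays and unscheduled time."],
  "Sunday": ["A short pithy poem about Sundays, slow mornings, and winding the week back up."]
}
```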
1. That is not a quibble or a snarky remark. I’ve built commercial hardware before (and I’m not talking about with a huge team, but rather on a bench in my backyard studio, which is somewhat how I imagine Matt’s doing this Poem/1 thing), so I’m entirely empathetic to the challenges, trade-offs, anxiety, and all of that. Truly. If I put on my dilettante’s spectacles, the cable completely throws off the lines and means this would not fit anywhere on my shelf without trailing a cable off alongside the plants and photo frames, so I’m a bit baffled. Maybe it’s just for charging and you can remove it? Maybe you can hoard a bunch of poems for the week and don’t have to leave it connected? I haven’t dug into it, but we’ll see when mine gets delivered.