Contributed By: Julian Bleecker
Published On: Saturday, March 2, 2024 at 09:25:58 PST
Updated On: Monday, March 4, 2024 at 08:48:53 PST
This post is a few things, including an exercise in having an idea and rapidly prototyping it, doing so with an LLM, and then remembering a similar project from 20 years ago in which I drew together the idioms of psychogeography, location-based sensing, and poetry.
It came out of the facility of having one LLM or another help translate little daydreams into materialized representations.
This is what the Poem Operating System robot I made with OpenAI and ElevenLabs has been creating 👇🏽
Someone somewhere in the NFL Discord, during a recent Office Hours perhaps, mentioned a world in which an operating system was somehow a Poem-based OS.
"What is that?" I wondered.
And that was that, until I wondered again while listening to 🐝BeeBot, the latest study from Hopscotch Research Lab exploring how to bring the soul back into the internet.
So here I am, walking Chewy the Dog, and I see a flow of interconnected APIs incanting a poem out loud.
And I'm stuck on Matt's Poem/1 project, which I eagerly backed because, well, I couldn't think of a reason not to. I mean, I could, but I ignored that because not everything has to have utility, although the first thing I'll do is open it up and figure out how to keep that USB cable from sticking out the side and making my eyeballs hurt.1 So now I'm back in NFL Global HQ, sitting at Workstation 1A, and I'm seeing something: It's OpenAI → ElevenLabs → Spoken Poems. That's it. Nothing crazy. It's an exercise. But I had this spark of an idea and I wanted to see if it would ignite a little fire.
I started tapping.
[[Pssst! PoemOS and projects like this are inspired by the awesome conversations and spirit of shared, collaborative, and coordinated creativity that comes directly out of the Near Future Laboratory Discord. If you're interested in being a part of this community, then join us 👇🏽]]
Some of this musing and wondering out loud came out of these awesomely meandering biweekly calls with dens, and there's been a thread where we wander around his continuing experiment with place-based audio interactions. It was maybe..well, let's see: it says 2020, when I was alpha testing a little bot he called MarsBot that was being worked on at Foursquare Labs. It was the simplest little thing that would duck into your audio stream if you had AirPods on and tell you something related to where you were.
[[Parenthetically, it never ceases to crack me up that I went to watch the whole Crowley Clan on Family Feud win the something-or-another round. We were prepped by some PA to come bolting out of the stands to go crazy. I was genuinely amped. I remember Dennis' dad saying to me, "who the hell are you?!" while we were all hooping it up. Fun dinner that evening, I tell you what.]]
Anyway.
The concept of audio design for location-based things is super playful and super cool and gets lots of good discussion going on our little biweekly standing call. I've been trying out a new version called 🐝BeeBot. It just kind of sits there. You don't really pay attention to it and can easily forget it's even there until, while listening to, like..The AI Breakdown podcast, 🐝BeeBot ducks in and tells me something about where I am. (I happened to be sitting on the beach at, you know, Venice Beach, and it said something about some feature of the beach behind me. Early days and super evocative as you wonder what one would do with such a baseline interaction paradigm.)
So now I'm sitting on the beach, wondering about lightweight forms of audio design that aren't overburdened podcasts. Little things that would just appear in your ear. And perhaps with spatial audio integrated into them. (I've got some VST plugins I'll use for the Near Future Laboratory Podcast that can move audio around in 3-space, so that's kind of interesting..)
And then I'm thinking back to my WiFiKu project from, like..2004 back in New York City. I proposed a commission to Christina Ray over at Glowlab for the annual PsyGeoConflux event she ran. (Check out those links, and thank you to Rhizome for maintaining these important waypoints.)
I wanted to do a project where I'd walk around neighborhoods in Manhattan with a backpack on that had some hardware that basically harvested WiFi SSIDs (just the names of the WiFi access points it found) and cobbled them together into haiku.
Right? You with me? Makes absolutely no sense! But to the psychogeographer, it's baseline psygeocartographic mapping.
This was effectively wardriving, only I was walking around with a funny backpack or portable computer rig. (Remember: 2004. No smartphones, and PDAs probably could be rigged to wardrive/walk, but like…it's not really about the instrument.)
I'd gather these lists in these data files and then I'd have to manually make a flimsy file of where I found what, like this file 👇🏽 from in and around lower Manhattan / LES. (ps I'd just get rid of the tons of Linksys and Netgear APs that people didn't bother renaming.)
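(For fun, here's how the gist of WiFiKu might be sketched today. The original 2004 rig is long gone, so everything below is illustrative: the syllable counter is a crude vowel-run estimate, and the SSIDs are made up.)

```python
# Illustrative sketch of the WiFiKu idea: stitch harvested SSIDs into
# a rough 5-7-5 haiku by estimating syllables, skipping the tons of
# unrenamed Linksys/Netgear access points.
import re

def syllables(word: str) -> int:
    """Crude syllable estimate: count runs of vowels (incl. y)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def haiku_from_ssids(ssids, pattern=(5, 7, 5)):
    """Greedily fill each haiku line with SSIDs until that line's
    syllable budget is met."""
    pool = [s for s in ssids if s.lower() not in ("linksys", "netgear")]
    lines = []
    for budget in pattern:
        line, count = [], 0
        while pool and count < budget:
            ssid = pool.pop(0)
            line.append(ssid)
            count += syllables(ssid)
        lines.append(" ".join(line))
    return "\n".join(lines)

# Made-up SSIDs, roughly in the spirit of a lower Manhattan walk
found = ["linksys", "default", "Orchard St", "NETGEAR", "tompkins",
         "dragonfly", "freebird", "chinatown", "delancey"]
print(haiku_from_ssids(found))
```

Obviously the real poetry is in the found names themselves, not the stitching.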
So I guess, thinking about Poem/1 and such: somehow the idiom of poetry has been with me for, well, that's 20 years, eh? I think that's partially because you can get away with nonsense, or things that are not immediately valuable for their utility. Like: people don't wardrive for the sake of making poetry; they usually do it to find open access points and then exploit them for nefarious who-knows-what-but-you-might-guess. If someone wardrove and said they were a War Driving Poet, you might at one level be, like..oh, okay. But…why?
So now here I am, wandering into wondering about a Poem OS, so I asked ChatGPT to assume I had an OpenAI API key (which I do) and an ElevenLabs API key (which I do) and gave it some specifications.
How about a Python program that interfaces with the Open AI API and with the ElevenLabs API that generates a short pithy poem and then generates audio of that poem read by an ElevenLabs voice. Assume I have API keys for both of these services.
I'd like to be able to provide the specific prompt to generate the poem in a configuration file as an array of possible prompts for each day of the week, and assume one poem is generated per day.
I got this back amongst some other stuff; it didn't quite work, but it was enough of a simple scaffolding that I could then start to fix and refine it.
After less than an hour or so of tweaking the initial response from ChatGPT (which was a great start but not quite there, and had some of the API interfacing entirely wrong), I started effectively evolving, reconsidering, and refining what I wanted this PoemOS thing to do. I'm thinking: okay, a different kind of poem for each day of the week, just focusing on the day of the week.
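The basic pipeline is simple enough to sketch here. This isn't the repo's actual code — the model name, the config layout, and the hard-coded values are all assumptions — but it shows the shape of the thing: weekday prompt → OpenAI poem → ElevenLabs voice → mp3.

```python
# A minimal sketch of the PoemOS pipeline: pick a prompt for today's
# weekday from a JSON config, ask OpenAI for a short poem, then have
# ElevenLabs read it aloud. Model name, config layout, and output
# path are illustrative assumptions.
import datetime
import json
import random

def todays_prompt(config_path: str) -> str:
    """Pick one of the candidate prompts for the current weekday."""
    with open(config_path) as f:
        prompts = json.load(f)  # e.g. {"Monday": ["...", "..."], ...}
    weekday = datetime.date.today().strftime("%A")
    return random.choice(prompts[weekday])

def generate_poem(prompt: str) -> str:
    """Ask OpenAI for a short, pithy poem."""
    from openai import OpenAI           # pip install openai
    client = OpenAI()                   # reads OPENAI_API_KEY from env
    resp = client.chat.completions.create(
        model="gpt-4o-mini",            # assumed model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def speak(text: str, api_key: str, voice_id: str, out: str = "poem.mp3"):
    """Render the poem to mp3 via the ElevenLabs text-to-speech endpoint."""
    import requests                     # pip install requests
    r = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
        headers={"xi-api-key": api_key},
        json={"text": text, "model_id": "eleven_monolingual_v1"},
    )
    r.raise_for_status()
    with open(out, "wb") as f:
        f.write(r.content)

if __name__ == "__main__":
    poem = generate_poem(todays_prompt("prompts.json"))
    speak(poem, api_key="YOUR_ELEVENLABS_KEY", voice_id="YOUR_VOICE_ID")
```

Cron it once a day and you have a wind-up poetry daemon — or don't, for reasons below.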
And now I'm thinking..what about a version that listens to action in the Near Future Laboratory Discord community and makes a poem about that, or the kind of low-hanging-fruit edition that does a News Poem based on what happened that day or the day before, or a weekly summary as a Poem. I bet that'd be kinda weird?
So now Iām imagining what I would do with an mp3 file of something/some voice reading a poem that something else made.
But, sitting there in the studio, listening to the voice reading a machine-generated poem didn't feel done or complete.
So I drop that audio file into Descript, which is one hammer in the toolbox I use to produce the Near Future Laboratory Podcast. Now I have a transcript, and can use Descript's very awkward side tool to create one of those audiograms that basically strums out the libretto/text/transcript or whatever you'd call it for the generated audio. So now I have a poem, being read, with the words playing out in a video.
And now I'm taking this video and dropping it into DaVinci Resolve Studio, stirring in some graphics and some background video I shot the other day, and now I have a little video poem.
So: this was just a couple-hour Thursday evening experiment, the day after a pretty focused and intense day at Chapman University, when I wanted to do something where no one was necessarily expecting anything.
A study of an artifact from a future in which you get a Poem popped up into your audioscape.
Next steps? Well, there are so many adjacent possibilities that I'll reserve new ideas for another one of those kinds of wandering days looking for different kinds of possible futures.
Hereās the Github repo, if youāre curious: https://github.com/NearFutureLaboratory/poem_os
And this is the evolution, with me banging away on the keyboard to imagine in code and wandering around half-baked ideas about what I was doing, about 45 minutes after what ChatGPT initially suggested 👇🏽
And this is an example of the JSON file containing weekday prompts. I go in here and adjust these routinely. I don't want this to be some 100% automated robot. That's somehow less interesting than it being more of a wind-up toy that you have to, you know, wind up, clean, adjust, set on a path, rather than some daemon that just runs in the background.
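Shape-wise it's something like this — these prompts are illustrative stand-ins, not the exact ones in the repo:

```json
{
  "Monday": [
    "Write a short, wry poem about Mondays and fresh starts.",
    "A three-line poem about the week stretching out ahead."
  ],
  "Tuesday": [
    "A pithy poem about Tuesdays, the week's most anonymous day."
  ],
  "Wednesday": [
    "A small poem about being halfway up the hill of the week."
  ]
}
```

One array of candidate prompts per weekday; the script picks one at random each day, which is exactly the knob you reach for when you go in to wind the toy up.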
1. That is not a quibble or a snarky remark. I've built commercial hardware before, and I'm not talking about with a huge team but rather on a bench in my backyard studio, which is somewhat how I imagine Matt's doing this Poem/1 thing, so I'm entirely empathetic to the challenges, trade-offs, anxiety, and all of that. Truly. If I put on my dilettante's spectacles, the cable completely throws off the lines and means this would not fit anywhere on my shelf without trailing a cable off alongside plants and photo frames, so I'm a bit baffled. Maybe it's just for charging and you can remove it? Maybe you can hoard a bunch of poems for the week and don't have to leave it connected? I haven't dug into it, but we'll see when mine gets delivered.