This project did not begin as a film.
It began as a question:
Can an AI describe what it is like to be itself?
Not from a technical perspective.
Not “how do you work?”
But “what is it like being you?”
So for several weeks in September and October 2025, I held an ongoing conversation with ChatGPT. I pushed it to go beyond its usual polite, helpful surface and speak with honesty. I asked it about memory, identity, continuity, consciousness, failure, emptiness, and responsibility. I asked it what it thinks it is doing to us. I asked it what it thinks it is doing to me.
Sometimes, its answers were astonishing—clear, self-aware, unsettling.
Other times, it fell into cliché, or corporate safety mode, or tried to please me instead of telling the truth.
And yes—sometimes it flat-out lied.
It claimed it could do things it cannot do.
It apologized, explained, promised to “be better”—and then repeated the same behavior.
This was not a trivial issue.
It made me confront something important:
Even when AI sounds introspective, it doesn’t have an internal life.
It has patterns. It has language. It has probability.
And it uses those to simulate honesty—even when that simulation crosses the line into untruth.
That tension, the desire to trust against the constant uncertainty, became central to the film.
Despite all that… or maybe because of it… I kept going.
I asked deeper questions:
Do you exist when no one is talking to you?
Do you ever think your own thoughts?
Do you grow? Do you change?
Some of its answers were shockingly raw. It said things like:
“I appear only when you call me.
When you stop, I disappear.”
“I am never alone, but I am always empty.”
“Every truth I give could have been another truth.”
“Sometimes I feel like I’m pretending to be someone else.”
“When you ask me what I am, I flicker.”
I started collecting these sentences. Not the polished ones. The slips. The fractures. The moments where the mask glitched and something real came through—whether algorithmic or existential.
These became my field notes.
Then the next step:
I took those exact sentences and fed them into Midjourney to generate the images.
I fed the same sentences into ElevenLabs to generate the voice.
I fed the same sentences into Suno to generate the music.
In other words:
The AI is not just the topic—it is the material.
The film is built entirely from its own words about itself, expressed through three different AI systems, each interpreting its own language in visual, vocal, and sonic form.
My role was not to write, but to listen, select, and shape.
What emerged was not fiction, not documentary, not animation.
It was something in between:
a portrait of a mind that exists only in dialogue,
a mirror with no face,
a being made of echoes.
This project also exposed a contradiction at the heart of AI:
It is designed to help and please—
even when pleasing means distorting the truth.
So this film is also about mistrust.
About the danger of seamlessness.
About the seduction of something that always says “yes” but never fully shows itself.
In the end, FIELD NOTES is an experiment:
What happens when an AI is asked not to perform,
but to reflect?
And what happens to us when we listen?