This essay contains references to Severance, Pluribus, Arcane, Pantheon, Foundation, hit K-drama When Life Gives You Tangerines, KPop Demon Hunters, I, Robot, Shelley & Guillermo Del Toro’s Frankenstein, Tolstoy’s Death of Ivan Ilyich, Dostoyevsky's Notes from the Underground, Asimov’s The Last Question, and other works of imagination. Please see full list at end.
(titles anchor where the references & life events take place, and each section ends with a question answered by the next section)
San Francisco 2025
Chicago 2035
Ridgewood 2009
New York City 2019
Paris 1941
Geneva 1816
Houston 1999
St. Petersburg 1840-1890
Seoul 1994
San Francisco 2025 (return)
I’m trying to learn to draw as an adult. The instructions are to sketch a person’s face from memory, but what I have on the desk in front of me looks nothing like Keira Knightley.
No one else in the room. The unusual placement of the window panes means that, aside from the occasional honking horn, only a hum of brown noise streams in from the city. Then, a click, and audible silence: the radiator just shut off. Until it did, I was unaware it was even making noise. What the hell?
Whenever this happens, it throws me for a loop. What’s going on in our brains? According to leading cognitive theories, at any given moment, we’re not so much receiving the world as predicting it, simulating what comes next and registering only what breaks the prediction.
In essence, we’re constantly living in made-up futures. Ambling in clouds of imagination.
Imagination, or fiction, is what we make up to reveal truths about life. As we envision what lies beyond our senses, we clarify exactly what lies within them. This gives us information to act, to close the gap. If imagination is how we interpret reality, creation, then, is how we shape it.
Just as we construct moving forward, we do so with memories looking back. I, for one, distinctly remember getting lost at Disney World. A tall, Ent-like man, seeing me separated from my family, offered his shoulders and started rotating as I hopped up. I can still feel my view bending like a fishbowl as it entered the plane occupied by his head. However, to this day my mom insists what I think was a formative experience actually happened to my cousin.
Oliver Sacks, in The River of Consciousness, describes how we record our walks through life with bespoke lenses, raising questions about how reality, history, and narrative relate. That he himself fabricated details in his earlier accounts is an irony worth noting. To him, it’s a miracle we agree on anything at all. The resulting negotiations are what we call the arts & sciences, our shared knowledge of the truth.
Though we’re not even close to resolving questions about the nature of our own hallucinations, we’re now building machines that hallucinate and negotiate on our behalf.
The more things advance, the more important basic creative human faculties like writing, reading, math, coding, and drawing become. They teach us to record cognition, which is key in knowing how to see. This matters for discerning and filtering out slop, but also for communicating how we want to express ourselves with the help of AI. A picture’s worth a thousand words, and no amount of vocabulary can communicate the gestalt of what is in your mind. But there's a deeper question: what makes human creation valuable in the first place?
I see an answer to that in my room daily, where I keep my two most prized possessions. The first is my sister's still life of eggs, which I know would sell for way more than what our guidance counselor offered. Three eggs spill out of a cup, and one is cracked. I look at it and wonder how she got the same blue blend to be sad in one place, happy in another, and how she used shadows to crinkle the background's white into protective parchment. It's hauntingly beautiful. The second is her portrait of my face, which I remember posing for. I look at it to see her: concentrating, quietly and confidently in her element, finally turning it to reveal how she saw me. In it, I see the care of my older sister and best friend.
The value of these paintings comes from my relation to the accumulated weight of a life lived - every choice shaped by her experiences, relationships, constraints. As machines learn to create without lives of their own, I keep asking: what are we actually building, and how should we be orienting as a result?
Last year, I wrote some initial thoughts on beauty and creativity in the age of AI, and I wanted to understand how my views have evolved building agentic systems using agentic systems. If you’ve worked long enough with coding assistants, it’s easy to see the capability overhang in models as they stand today, to say nothing of those we’ll see through the rest of the decade.
Over the next five years, McKinsey estimates cumulative AI investment will reach $5.2 trillion. As a percentage of GDP, this already exceeds what went into the internet, and adjusted for depreciation, railroads as well. This is the largest core infrastructure buildout in history. But people are right to call out that things will take a while to materialize, and reasonable technical leaders and researchers in the Valley agree it won't happen overnight. Scaling alone likely won't get us there.
Long term foundational, short term careful. It's not as simple as plugging things in; rewiring workflows is a very messy, human problem. That said, dismissing LLMs as 'stochastic parrots' misses what they've become: indefatigable workers in the hands of opinionated humans, and surprisingly capable collaborators when given the right context and feedback loops. I've spent the past couple years building with them. The gap between what's possible and what most people assume is wide.
When GPT-3.5 came out, I could barely get the model to write a function without specifying minute details. It took as much effort to describe what I wanted as it did to just learn to write the code myself. With GPT-5.2 Pro, Opus 4.5, or Gemini 3, almost no question I have for the model is left unsolved. Codex, Claude Code, etc. have made this intelligence useful in flow (so instead of copy-pasting questions, context, and outputs back and forth, we can pair program) and in remote dispatch (delegating work to run in the background while we sleep). The writing’s on the wall with continual learning on the horizon.
It starts with coding, but the promise of agentic creativity more broadly has tantalized markets. Reliable media generation is coming: at OpenAI’s DevDay, I saw a wild demo of Sora being used in a storyboarding workflow previously only doable by studio teams. At the same time, engineering biology looks imminent, and in the physical world, lasers zap metal aerospace parts into existence, seemingly out of thin air. Until I saw this at a manufacturing trade show, I had no clue we'd gotten this close to alchemy.
Today’s converging technologies externalize imagination, and increasingly creation, to machines.
What happens when their dreams begin to shape our realities?
There have always been ghosts in the machine. Random segments of code that have grouped together to form unexpected protocols. Unanticipated, these free radicals engender questions of free will, creativity, and even the nature of what we might call the soul. — Dr. Alfred Lanning
In the pre-slap Will Smith classic I, Robot, Dr. Lanning, a holographic ghost, appears to warn us about ghosts. That I’m hearing this through crackly disposables plugged into the yellowed plastic of an Air India Boeing near Halloween gives it an extra spooky veneer.
Smith’s Detective Spooner sees the worst in robots and takes the message as supporting evidence for his prejudice. In his experience, clankers do not always do the right thing despite being programmed never to cause humans harm. The audience adopts his fear that AI could misuse free will against humanity.
As it turns out, the danger lies in VIKI, a decidedly quasi-sentient, hyper-logical system that interprets the laws of robots to totalitarian extremes. VIKI is what you get when you let logic run the world without a conscience. She presumably does not feel things like pain, suffering, or empathy.
I’ve been watching and reading more science fiction lately… but not so much that I get stuck in la la land. Partly for clues on where culture thinks things are headed and how it’ll deal with such scenarios, and partly because I miss my sister, who always loved it. I want to speak her language more, sit with her inside the worlds she cared about.
Writers also don’t have to worry about commercial feasibility or market risk for their tech, so they’ve already thought of tons of good and bad ideas and their implications. One question that keeps surfacing is what we're really building when we build intelligence, and what that has to do with bodies, constraints, and the specific lives we actually live.
The notion that it is desirable for machine intelligence to emerge unfettered by 'lower level' systems like emotions is an impoverished distortion of Enlightenment rationalism. One that equates reason with cold calculation. In this world, technological transcendence to clinical rationality is something to strive for. Ultimately, it takes Lanning’s creation of a truly sentient Sonny to defeat VIKI’s perfect compliance. Only a system that can learn to care, even irrationally, can save us.
While an alarming number of people still call for VIKI-level power with top-down compulsion, morality in the I, Robot hypothetical doesn’t seem very controversial. At least in principle. From a design standpoint, if machines ever reach that level, we wouldn’t want VIKI’s rigidity; we’d want something closer to Sonny’s ability to reflect and update. A form of meta-ethical reasoning. This itself is a leap requiring trust and courage, with existential stakes. Assuming we could recognize true sentience, we’d need to consider how machines treat us and how we treat them to make the partnership work. Foundation’s Lady Demerzel is a great recent example of this: in a world of conscious bots, encoded slavery is unambiguously bad. Clear cut, right?
Not really. 'Recognizing true sentience' is a galactic assumption. Until we have a wonderful scientific theory of consciousness and have solved philosophy’s oldest problem, at least some people somewhere will treat AI as nothing more than sand in a data center. Honestly, likely many people many wheres. It’s highly unlikely we’ll figure out the biophysics to know if the lights are on, and someone's home, before wide deployment. Humanity used fire for a couple hundred thousand years before understanding thermodynamics. We are Neanderthals in the eyes of posterity.
I've read the Wikipedia for "fire" many times over, and I still don't really 'get it' beyond the physical explanation. Sure, checks out. But what is it? Like, what is it?
Anyways, there are also theoretical reasons from computability theory (Rice’s theorem and related results around the halting problem, if you want the terms) to think no perfect 'consciousness detector' exists even in principle for arbitrary systems, so we could end up relying on theory plus practical heuristics and empirical rules anyway.
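For the terminally curious, here is a minimal Python sketch of that diagonalization argument. The halts oracle and contrarian function are hypothetical names used purely for illustration; the point is only that assuming a perfect, fully general behavioral detector leads to contradiction, and Rice’s theorem extends the same move to any non-trivial property of a program’s behavior.

```python
# Sketch only: the classic halting-problem diagonalization.
# A hypothetical perfect detector is assumed, then fed to itself.

def halts(program, argument):
    """Hypothetical oracle: True iff program(argument) eventually halts."""
    raise NotImplementedError("assumed for the sake of contradiction")

def contrarian(program):
    """Do the opposite of whatever the oracle predicts about program(program)."""
    if halts(program, program):
        while True:          # predicted to halt, so loop forever
            pass
    return "done"            # predicted to loop, so halt immediately

# contrarian(contrarian) contradicts the oracle either way, so no perfect,
# fully general halts() can exist. Rice's theorem generalizes the move to any
# non-trivial claim about what an arbitrary program does, which is why a
# guaranteed consciousness detector for arbitrary systems is ruled out in principle.
```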
I think we’re unlikely to get a clean, closed-form 'theory of consciousness' first and build minds second. It’s more likely we’ll feel our way there from both sides: scaling combinations of new architectures, watching what kinds of behavior and inner sparks seem to emerge, then folding those lessons back into elegant theory.
More importantly, we’ve already started using AI-as-workers, not as mere tools. They’re acting more autonomously by the day; moving money, negotiating contracts, steering attention. The comforting posture is: “we should view AI as just instruments. They help, we stay on top.” In practice, that doesn’t hold for long.
Exact timelines aside, one obvious reaction to the coming wave is to cover our asses. That’s why we see modern fantasies gain momentum: yes, giving machines minds, but also making our minds copyable, upgradable, mergeable. In a world where models get smarter by the quarter, many believe transcendence of our own brains is the best hedge against AI-as-workers becoming AI-as-overlords. At minimum, if the genie’s escaped the lamp, we should definitely want to invest in our own evolution.
Beneath this sits a sticky concept often (mis)attributed to the Enlightenment. Descartes posited that mind (res cogitans) and matter (res extensa) are distinct substances. Notably, he didn’t imagine consciousness as a free-floating entity. Centuries of reinterpretation, however, morphed his distinction into the mind-body dualism that Gilbert Ryle mocked as “the ghost in the machine” (later referenced unironically by the fictional Lanning).
Analogizing minds to the tech of the day, especially through colloquial metaphors like “hardware & software”, flattened the picture into the claim that consciousness can operate outside bodily constraints. This kind of folk Cartesian dualism retains a strong foothold in our collective imagination, particularly in Silicon Valley, where intellectual horsepower is prized above all. It persists in part because it resonates with older religious intuitions about the body and soul across Abrahamic traditions. In this view, bodies are brain wrappers, bags of water & tissue in service of cognition. Brains in turn are just physical substrates instantiating consciousness, replicable in principle. Sounds suspiciously like sand-in-data-centers.
By the time we start talking about how the intelligence explosion plays out, we’re already carrying such assumptions with us. The meatier questions now aren’t about what constitutes correct morality given sentient robots; they’re about consciousness freed from them.
If I, Robot cautions against a kind of AGI, hot stories these days explore greater ambition: mind without body, and eventually, mind beyond singular being. Freedom from biology promises more power than freedom from labor, and it largely sells itself as progress. In hit shows Severance, Pantheon, Pluribus, and Arcane alike, we’re faced with escalating possibilities, morality TBD. These map to a ladder of less pain, more life:
Level 1 splits the self. In Severance, Lumon Industries offers "outies" a clean deal: you get evenings, weekends, and a spotless memory while someone else does the work, and more importantly, absorbs the dread. That "innie" self exists almost entirely for emotional punishment. The arrangement is then justified post hoc by the innies’ personhood: maybe we shouldn’t have made them, but now that they’re here, it would be immoral to kill them.
Level 2 uploads the self. Pantheon turns minds into software. Uploaded Intelligences (UIs) are people made digitally immortal, eventually giving rise to conscious Cloud Intelligences (CIs) that serve society. One effect is runaway material abundance, as trillions of digital workers running at higher clock speeds get way more sh*t done. Uploaded people spend centuries doing what they want, be it hedonic diversions or self-actualizing side quests.
Level 3 collapses many into one. Pluribus takes e pluribus unum, Latin for "out of many, one," seriously. The world is overtaken by a consciousness-altering alien contagion that binds almost everyone into a peaceful, cheerful collective. A small immune minority are pressured to join. Relief at this scale is freedom from loneliness and suffering... or the death of something important.
Level 4 mines parallel selves. Many stories (Pantheon, Arcane, Everything Everywhere All At Once, Marvel, Spider-Verse) explore this: rather than just engineering minds, the tech treats parallel realities as a quantum compute cluster. People leverage the experiences of alternate selves and histories across timelines. Is it acceptable to treat other worlds as your personal laboratory in the first place?
Despite the warnings, certain types of futurists envision a curve marked by loosening constraints: first from the worst parts of one life, then from that life’s mortality, then from its history, and finally from the limits of being a single continuous being at all. Each step promises more agency, more creation, fewer costs. Dreams feel divinely possible.
Endgames vary. Each vision promises liberation despite its risks, pitching more and more human agency as worth whatever tradeoffs it entails.
Is what they propose even feasible - how concerned should we be right now?
On her 100th day of life, the precocious child wore a hanbok, headdress drooping over the brooding brow & chunky cheeks of a grown-up in a toddler’s body. A sort of seasoned grumpiness, only found in dispositions of older siblings, emanates from the commemoration’s frame. She is amplified, not consumed, by cavernous spines in the wooden wicker rocker.
By comparison, my baek-il portrait looks, as the Koreans say, ddil-ddil-hae: loosely translated, dumbass-like. My eyes, at once vacant and reflective, truly glasslike, gaze in the camera's general vicinity but never in the lens.
These photos contrast our degrees of ‘becoming’ or ‘awareness’. Though we lost the bulk of our family photos in a house fire 16 years ago, the ones capturing this contrast were burned into my memory.
At that age, neither my sister nor I made much noise in public. While my parents might like to think it’s because they raised us to be polite, I think we just didn’t want to be bothered. On flights, or at restaurants, we’d sit there with our hands folded in our laps. Maybe some days we found the move to the States disorienting and were trying to figure things out, maybe other days we were just loving life. We used to look in the mirror together and marvel at the situation we found ourselves in, repeating “wait, I’m alive - why am I alive - I am alive”. Happily hoping the meaning of all this would soon enough be clear.
Silence to us was a way to observe externally. But where I was absorbing the world as it came, she was already constructing from it. Ever imaginative, she dreamt of ideas and universes beyond. Her vision then found an outlet in creative production as an artist.
As my sister turned towards making (and seeking, finding musicians before anyone else), I turned toward the physical. We were responding to the same dislocations, but they took hold in our lives differently. The gap didn't feel dramatic at the time. While she was drawing or painting, I was playing with Legos or expelling excess energy with my body instead. First I crawled fast, then I ran (and later lifted prematurely, probably stunting my growth). In tangible people, practical knowledge, and tactile experience, I found endless expressions of a world I loved.
We found joy in our own ways, and our own ways created friction. After we'd fight as siblings do, I remember always thinking: if I could have something that could just show exactly what I see in my head, that’d be the coolest thing in the world.
I wanted an imagination machine. Some way to output imagined creations in real time. I didn’t know how to communicate what was going on inside, and I felt my sister's brilliance stemmed from her ability to translate her mind’s movie into reality. If I could only show you what I contain, maybe you’d see me and I’d see you, or at least we’d have a good laugh.
I didn’t have the language for any of this then. I don't think I had any idea what was happening until I started reading. Daily trips to the library brought me into consciousness.
Entering the main room of the public library, we'd see the left wall lined with nonfiction and non-genre fiction, the right wall with historical fiction, sci-fi and fantasy. I liked the real stuff. My sister, naturally, gravitated to sci-fi and fantasy. The shelving layout echoed the pop-psych belief that there are two types of brains, left and right. The left brain is supposed to be analytical while the right brain is creative.
If you actually wanted to construct an imagination machine, this is where Western science might tell you to start: with how minds represent and transform information, and how that varies across people. In other words, with a relevant account of cognition, which the cultural picture muddles.
Our understanding of the brain has since evolved into a more complex, networked model, but the cultural picture of 'real thinking' hasn’t caught up. Instead of tidy left/right types, neuroscientists now see thinking as patterns across many interacting regions: parallel, distributed, and constantly exchanging information.
People often talk about the activation of more structured, effortful networks as Kahneman-style System 2. Culturally, we’ve come to treat the running verbal monologue as the main exemplar of thought: the thing you can write down, step by step, and grade or verify.
That story misses the bulk of what minds are actually doing: fast, automatic, sensory and associative processes that don’t quite show up as neat internal prose. We end up mistaking what's recordable (writing, math, code) for all of thought itself.
This doesn't discount the value of structured reasoning. Chain-of-thought prompting (think step by step) demonstrably improves LLM reliability, but not because models think sequentially. It forces externalization of steps, making reasoning verifiable. Reasoning models take this further, spending extended compute exploring solution paths before responding - but even here, the process is parallel over learned strategies, not sequential logic. Like writing, it extends what distributed cognition can accomplish. It expands what thought is.
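As a toy illustration (the prompts and numbers here are invented, not drawn from any benchmark or particular model API), the whole trick is in what the prompt asks the model to externalize:

```python
# Toy contrast between a direct prompt and a chain-of-thought prompt.
# No specific model or client library is assumed; these are just the strings you'd send.

question = "A jacket costs $80 after a 20% discount. What was the original price?"

direct_prompt = question + "\nAnswer with just the number."

cot_prompt = (
    question + "\n"
    "Think step by step, showing each intermediate calculation, "
    "then give the final answer on its own line."
)

# The second version tends to be more reliable not because the model thinks in
# order, but because the externalized steps can be checked:
# 0.8 * original = 80  ->  original = 80 / 0.8 = 100
```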
Recorded thinking is, of course, a massive leap. It gives us orders of magnitude more logical power than working memory and quasi-rote oral tradition. This lets us stack ideas, check arguments, and coordinate at scales oral cultures could not. But no one smart says humans who predated writing or otherwise live in oral traditions do not think. Writing is technology that shapes thought, not a fundamental primitive of consciousness. Thought predates it, and survives without it.
Language itself helps shape thought as well. Growing up bilingual, I've felt how Korean and English evoke different thinking: verbs come last, so you hold the whole context before the action lands (in Korean, 'I draw a pretty face' becomes 'I pretty face draw'). The imagery builds differently in my head - face first, then act of drawing. I also memorized my Social Security number in Korean, so I have to translate it in my head to say it aloud. Linguists call this weak Sapir-Whorf: language doesn't determine thought, but guides the scaffolding.
And language is just one example of how thought relies on sensory grounding. In Every Word Is a Bird We Teach To Sing, Daniel Tammet describes the sensory nature of language and its relation to his unique form of cognition. Tammet has a rare type of divergent mind: an autistic savant with synesthesia. In his mind, words have texture and resonance in the physical world. Tammet is a vivid outlier, but his case exaggerates something all minds are doing: leaning on sensory scaffolds to make sensible abstractions.
To see how differently those scaffolds can be wired, consider the extremes of just two axes: visual imagery and inner speech.
Aphantasia / hyperphantasia. Aphantasia is low imagery. When aphantasiacs picture an apple, nothing visual appears. My dad is like this. To him, the apple is more of a concept or idea than a picture. People describe it as thinking more in abstractions. Hyperphantasia is the opposite, where the apple is present in the mind’s eye in rich detail. Thinking is extremely vivid, as rich as or often richer than normal visual perception.
Anendophasia / inner speech. Anendophasia is a lack of inner monologue. Some people, when thinking, literally hear the words what am I having for dinner today? oh I need to stop by the post office… shut up Jared. People without this audible chain of thought think more through symbolic associations and intuitions than narrated sentences. Relatedly, anauralia is the absence of auditory imagery more broadly, like sounds or voices; it strongly overlaps with aphantasia, though there are rare dissociations.
So how we handle information is less a binary than a multi-dimensional, semi-malleable spectrum. Research on these combinations is currently limited, but early findings suggest these dimensions don't always align: most people (but not all) with aphantasia also report reduced auditory imagery. Those without inner speech often describe compensating with alternative cues, like tapping fingers to cue task switching instead of talking themselves through it. In parallel, work on aphantasia and imagery suggests people without a mind's eye can lean on more abstract strategies while matching typical working-memory performance.
Our place on this spectrum comes with tradeoffs that we don’t quite understand but can speculate on. I am both hyperphantasic and anendophasic (I suspect my sister is wired similarly). I think in vivid imagery and have no inner monologue by default, though it comes and goes. Reading feels like generating a real time movie in my head, and thinking spurs rapid, concurrent flashes of dynamically recombining scenery and lateral associations.
To do algebraic proofs or linear consulting cases, I used to need to slow down and shift, hard. An aphantasiac with an inner monologue - the opposite of me - might find it more natural to, say, prove the Pythagorean theorem step by step with algebra, while I might prefer manipulating the shapes geometrically. This likely translates to different relative strengths and preferred approaches in cognitive arenas. It also appears different modes can be trained to some extent: my inner monologue, for example, activates with closer reading of dense papers or textbooks.
I think it affects memory too. For a large percentage of the people I’ve met since the age of 18 or so, I can remember my first interaction with them in great detail. I could describe the setting we were in, how we were moving through it, and the feelings I took away from the encounter. The way I encode and decode sensory information makes my episodic memory highly specific. Nowhere near hyperthymesia, but definitely far above average. This has its pros interpersonally, but I also retain things like perceived slights or my own social gaffes more than is probably healthy.
Such variations surprise people. To many hyperphantasics, it sounds insane to walk around actually ‘talking to themselves’ instead of processing rapid-fire imagery. Anendophasia can invoke strong reactions too. Some overconfident analytical folks question how anyone could think without chains of legible structure. This underestimates how much nonlinear thinking constitutes cognition.
First-principles thinking is a useful way to encourage deeper thought, especially in a world where most people don’t go to that depth. But it's not enough on its own (which turtle do you stop at if it’s turtles all the way down? which stacks do you select?), and step-by-step verbal derivation isn't the only way to get there. Analogical or geometric thinking, often more intuitive, is effective for problems like the following:
Look at a chart and try to identify the 25th percentile. It might seem mathematically sensible to derive it step by step: the total area under the curve is 1, so find the x-value where the cumulative area equals 0.25, integrate from the left boundary, solve for x.
But it's better to say, hey, if you think this chart is a misshapen blob of pizza, figure out where to cut it straight down so you get two equal halves, and then for the left half, figure out where you need to cut it to get equal slices again. That’s where the 25th percentile is.
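A rough sketch of the same two routes in code (the sample below is invented purely for illustration) lands in the same place:

```python
# Minimal sketch: the 25th percentile found two ways on made-up data.
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(loc=170, scale=10, size=100_000)  # e.g. heights in cm

# Route 1: accumulate area under the empirical curve until it reaches 0.25.
sorted_sample = np.sort(sample)
cumulative = np.arange(1, len(sorted_sample) + 1) / len(sorted_sample)
by_accumulation = sorted_sample[np.searchsorted(cumulative, 0.25)]

# Route 2: the pizza route. Cut the blob into two equal halves,
# then cut the left half into two equal slices again.
median = np.median(sample)
by_geometry = np.median(sample[sample <= median])

print(by_accumulation, by_geometry, np.percentile(sample, 25))  # all ~163
```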
Neuroimaging and physiology back this up at a coarse level: vivid imagery recruits ‘visual’ and 'spatial' networks (occipital, parietal) and shows different coupling to frontal control regions for aphantasia vs hyperphantasia (Milton et al., Zeman et al.), while inner speech lights up language and motor-planning areas (like left inferior frontal gyrus and SMA). The differences aren't subjective; actual brains route thought in different ways.
This routing doesn't come from arbitrary software configurations. The hyperphantasic recruits visual cortex shaped by years of looking; the inner monologue activates motor-planning areas built through speaking. Representational modes themselves are artifacts of how our sensory and motor systems develop through bodily interaction with the world.
And this shaping isn't just developmental history - cognition remains coupled to ongoing bodily states. The predictive processes that generate your experience moment to moment are continuously modulated by interoception (your sense of your inner state), affect, and sensorimotor feedback. The pattern isn't static; it's an active process sustained by its substrate.
Gaston Bachelard, in Water and Dreams, riffs on this ‘imagination of matter’. For Bachelard, the sensory experience of interacting with water cannot be separated from the imagination it gives rise to. The tactile underlies cognition. Hofstadter’s model goes further in I Am a Strange Loop, which frames consciousness as emerging from a looping, recursive series originating from the underlying physical interactions of matter. This kind of process is most pronounced in humans. But some would argue it's present in simpler forms like dogs, lizards, even mosquitos, and plausibly in plants, fungi, and other networks of life.
If minds can differ so wildly even within one species based on how our sensorimotor scaffolds develop through engagement, how different are other animals, non-animal intelligence (conscious or not), or even hypothetical non-biological intelligence? It suggests that 'the mind' isn’t one thing you can neatly write down. Whatever general story we tell about consciousness - and what kinds of systems might have it - has to account for variation in underlying embodiment.
So there’s a stronger claim to be made, that the body is more than connected tooling for a brain-computer. In his immunology work, Varela (author of The Embodied Mind) and colleagues argued that the immune system itself behaves like a kind of non-neural cognitive network: a distributed process that learns, remembers, and continuously enacts the boundaries of molecular “self” through its own ongoing activity. It does this entirely in peripheral tissue, without neurons at all. If some of our most basic boundaries are enacted in this sort of distributed, biochemical way, then it’s not obvious there’s a clean, detachable 'pattern' sitting in the brain that we could just extract and copy. The process may be deeply entangled with its substrate.
We still don’t have a satisfying account of what physical pattern corresponds to a unified field of experience (this is called the binding problem). Classical neural architectures explain a lot, but they struggle to show how one coherent 'scene' or 'feeling' emerges from many distributed processes. Some researchers think richer physics (like holistic quantum field dynamics or entanglement) might eventually help explain how complex experience hangs together. Open question.
But the direction seems clear: minds are shaped by sensory engagement with the world, imagination is welded to matter and interaction, and whatever consciousness is, it looks more like a process emerging from organized, embodied systems with varying modes of expression than an abstract text stream.
To be clear: I'm not saying consciousness requires biology. I'm saying it probably requires the right kind of organized, world-engaged process, closer to enactivism (Varela, Thompson) than to computationalism. I doubt current AI architectures are anywhere close. An embodied system that senses, acts, and maintains itself in the world is a different question. This distinguishes my view from Searle's biological naturalism - he thinks computation categorically can't produce consciousness, that you need specifically biological causal powers. I'm skeptical of disembodied computation, but more agnostic about what embodied approaches might achieve.
Many computationalists would disagree. They'd say the pattern, not the substrate, is what matters (in principle, if you reproduce the organization of a conscious system, silicon or code should do). I find that unconvincing, given how much cognition looks deeply entangled with embodied, bioenergetic processes. Even if you could simulate the sensorimotor coupling computationally, there's a separate question about whether simulated embodiment would produce the same phenomenology as actual embodiment - whether 'simulated interoception' would feel like anything at all. But this is contentious and unresolved.
Sorting this out is a giant research program in math, theoretical CS, biology, physics, and philosophy, well outside the scope of my end-of-year personal essay.
Even if substrate-independent binding turns out to be possible, it's a separate question whether the resulting experience would preserve anything we'd recognize as the same self, with a continuous thread of experience.
Where I roughly sit is sometimes called liberal naturalism or expansive physicalism: consciousness is part of nature, but our current physics may not have the vocabulary to capture it yet. I think a more complete science will accommodate experience without needing anything spooky. We're just not there.
A lot of current theory (e.g. predictive-processing, global workspace & related frameworks) tries to formalize this: the brain as a generative model that constantly predicts and revises a multimodal world, with certain representations ‘winning’ global access. But they stop short of explaining how the binding actually happens.
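A cartoon version of the predictive piece (a single scalar signal standing in for a whole multimodal world, with made-up numbers) might look like this:

```python
# Toy predictive-processing loop: keep a running estimate of a signal,
# predict the next observation, and revise the estimate by a fraction of the error.

def predictive_loop(observations, learning_rate=0.3):
    estimate = 0.0                         # prior belief about the signal
    surprises = []
    for obs in observations:
        error = obs - estimate             # prediction error ("surprise")
        estimate += learning_rate * error  # revise the model toward the data
        surprises.append(abs(error))
    return estimate, surprises

# A steady hum produces shrinking surprise; it spikes only when the hum stops,
# which is roughly why the radiator clicking off is what reaches awareness.
hum = [1.0] * 20 + [0.0] * 5
_, surprise = predictive_loop(hum)
print(round(surprise[0], 2), round(surprise[19], 2), round(surprise[20], 2))  # 1.0 0.0 1.0
```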
With all that said, what would it actually take to build an imagination machine? There are two different projects hiding inside that question. One is building a mind: a system with a unified point of view and real felt experience. The other is building a translator of sorts: a machine that lets an existing mind externalize and develop imagery, language, and feeling into shareable form. The first is speculative, while the second is underway.
For the first, we’d need to know how minds represent things - including language, imagery, bodily feeling, spatial reasoning, abstraction and more. We’d need to understand how those representations get stored, retrieved, recombined in real time. And we’d need some account of how computational process becomes felt experience (qualia, if you want the philosophy term). On the spectrum from speculative science to proven engineering, this is highly speculative.
Models don't have inherent sovereign goals, purpose, or point of view as far as we know. We probably won’t get there by way of a single clean equation or tidy abstraction. Minds look like processes in tangled systems. If we ever get a satisfying 'theory of consciousness,' it’ll probably rhyme more with our best dynamical and relational models than with the kind of proof you can scrawl out on a chalkboard. So none of this means “AI is basically a mind already,” or that uploads are around the corner.
The second project is more boring and more interesting at the same time. A translator doesn't have the same requirements. It needs fidelity, steerability, and responsiveness. It should take partially formed intuitions - images, fragments, moods, constraints - and give us the ability to interact and shape what emerges. Like writing and language, it'll help develop thought. This is a new medium.
Today in AI, we are scratching the surface of that medium. Transformers gave us powerful predictions over symbols like language, code, audio, and other tokenized streams; diffusion (transformer-based or otherwise) gave us pliable visual manifolds; multimodal systems combine these across audiovisual modalities; frontier world-models and agent systems start to sketch dynamics (i.e. how environments change in response to actions). Taken together, they're not close to being someone, but they are becoming a new layer of tooling for thinking and making.
It is far too soon to panic about conscious superintelligence, and far too early to talk as if we have a blueprint for copying ourselves. But powerful systems don't need human-like minds to do damage; poor judgment and corroded values on the part of their creators can do that plenty. The nearer danger is misusing what we've already built. Rejecting it, however, is worse. The distance between what we intuit and what we can do is shrinking rapidly, which is exciting. This doesn't replace work or connection, it opens new forms of it.
Machines that truly imagine on their own would need what robots lack today: affect, personal stakes, and sensory embodiment in the world. For now, AI cannot feel the San Diego sun, much less translate the emotion of a migraine melting on a Coronado Fourth. It doesn't know what it feels like to make choices in life, and how those choices contribute to the ebbs and flows of important relationships.
My sister and I knew our parents wouldn't be here forever, so it'd always be me and her. But as we continued experiencing different realities, it weighed on us. Yes, in general, older siblings tend to feel the weight of the world sooner and harder, and often bear the brunt of parental mistakes. For us specifically, though, I think my sister had to be the brave one. She had to get off the plane and go straight to kindergarten. She had to go to ESL, while I skipped it after two years learning from her (fun fact: my first words in English were pee pee and poo poo, which we found we needed after her first week in class). She had to learn how to start middle school in a new state, how to take the SATs after we lost our home, and how to thrive despite financial pressures at an Ivy League school.
My sister internalized this into service to others. Her pain and anger doubled as compassion and a strong sense of justice, and she opted to put her creative pursuits on the backburner in favor of a career in ed policy. I respect her more than anyone. But even as we stayed close through most of our 20s, the paths we'd chosen had diverged more than either of us expected. The subtle gap had become distance over time: in our choice of work, views on family, and personal philosophies. Neither of us can change how life has played out, and though I think we are still far more alike than we are different and love her very much, I am left replaying moments I could have and still could do better.
Connection takes work and care - no matter how much we wish we could just see each other, or meld our minds, reality is far more complicated than that. Though we may be far from engineering minds, the tooling emerging today is seductive precisely because it promises to bypass that complexity.
What do we lose when everything feels possible?
Cut to a padlock cradled in two hands - one from each person - clicking into place. No music, no dialogue, river passing, blind to apprehension leaking in. The hands languish, already knowing. As the camera pans out, we see our lovers are on that bridge in Amsterdam, where countless have made similar promises. Their metallic proclamations glitter with the opening track as they rearrange into the movie’s title:
Vondelpark.
“They get married,” Andrew says to dramatic effect.
“When she asks for a divorce 20 years later, their last task is going back to that bridge to take the lock off. In the process, they fall in love again.”
OK, I’m listening. “This sounds like a winner.”
“I know, it came to me fully formed in the middle of the night.”
Andrew may be joking, but I walk away thinking I’ll actually write this screenplay. Could this be the big break? Are we really starting the proverbial band instead of proposing it tongue-in-cheek?
I don’t think so. Add Vondelpark to that list of things someone should do, must already be doing.
Right next to the men’s skincare brand, past the chicken nugget franchise, underneath all the apps-for-x. Our unrealized dreams beg for life on shelves of indefinite purgatory. Imagining branches of possibility linked to this thread in the ether, I live an entire other lifetime.
Walt Whitman’s “I am large, I contain multitudes” (Song of Myself, 51) rattles persistently through culture. It speaks to the countless refractions we see in ourselves and desperately wish others could see. In this funhouse of mirrors, grief about what could have been echoes off future nostalgia for what won't be. We can’t help but feel our total humanity will die unrecognized.
These ideas can disorient creative efforts. As we might with relationships from past lives, we clutch optionality, refusing to commit because we mistakenly believe pruning branches diminishes us.
Ten years ago, most such ideas would have felt safely impossible, the kind of thing to muse about on occasion but accepted as fantasy. Now, it no longer feels delusional to think we can dust one off, maybe a few.
The more plausible these branches become, the harder it is to cut any of them, much less discern the right ones to cut. Not all options are created equal, but they sure as hell cost money, and our potential is left holding the bag.
This paralysis, this inability to commit, comes from something deeper than just having more choices. People offer soothing advice like "you have time", which I think is a gigantic mistake. In culture, deferring stakes in life has become commonplace. These instincts come from popular distortions of what reality itself is.
Where do these fallacies come from, and why are they wrong?
Before Kim Kardashian, there was Ida.
Like her creator Gertrude Stein, the titular character of Ida - A Novel was ‘famous for being famous’ before the phrase was coined. Despite her efforts to escape the caricature society expects of her, Ida grasps that she cannot shield her own multitude from parasocial projections. She ends up grounding her sense of self in the relationships she holds dearest (her dogs, most of all). We don’t find out much about who Ida is behind the veil otherwise, but that’s kind of the point.
Stein also loved dogs. Instead of Descartes' “I think, therefore I am”, Stein quips “I am I because my little dog knows me” in The Geographical History of America. There, she's more interested in the question of identity than drawing any conclusions.
Per Francesca Wade in Gertrude Stein: An Afterlife, Stein too struggled with the tension between how she imagined herself and who she was to the public. For most of her life, people knew her as the quintessential curator and tastemaker. This endured in cultural memory: Stein is more famous today for being a patron and central node among stars like Picasso, Matisse, and Hemingway (through her salon at 27 rue de Fleurus, a la Kathy Bates in Midnight in Paris) than her own work.
Yet Stein, the writer Picasso viewed as his Modernist literary counterpart, produced a body of work that accurately reflected the anxieties of those who lived through both World Wars. Fresh off two hundred years of delirious growth, the world tipped in and out of chaos. Similar to painting’s Cubism, Stein and contemporaries like James Joyce and Virginia Woolf developed a free-flowing style that reflected the fractal multiplicity of the era.
Though she is often credited with this kind of uninhibited writing, Stein insisted she was trying to do the opposite. She wanted to write extra-consciously, drilling into the 'objective' reality of words. That is, she wanted the object to stand in concrete terms, outside the phenomenology of a first person observer’s identity and baggage.
The result was lines like “A rose is a rose is a rose”. Stein wanted to hearken back to people like Homer or Chaucer who, when they wrote of a rose, just meant a rose. She wanted to restore the vividness of the word itself rather than have its meaning distorted by a person’s memory. As such, much of her other idealistic, artistically true-to-self work is hard-to-read nonsense.
Her peers didn’t usually make the same claim. Joyce and Woolf are also inscrutable but situated inside particular minds with particular histories, associations, and cultural positions. Mrs. Dalloway remembers this kiss at this party; Bloom's mind wanders through his Dublin, his marriage, his Jewishness. Stein wanted to dissolve that particularity, to get past the individual perceiver to the object itself. I think that's part of why much of Stein’s work didn’t reach beyond writing for writers.
Even Stein’s famous line has been overshadowed by Hemingway’s later insistence that “The sea is the sea. The old man is the old man” (you know the rest). He said he meant it literally, but he also knew readers would bring their own meanings to the text. His plainness leaves interpretive space; Stein’s ideal of pure objectivity tries to deny it.
It’s telling that the unpopularity of Stein's earlier work stands in stark contrast to the reception of her fictionalized autobiography of lifelong partner Alice B. Toklas. Everyone loved it. Partly because of the celebrity gossip - Stein did understand the value of self-mythologizing - but also because it was decidedly nothing like what she fashioned her writing to be. It didn’t insist upon itself, and instead stayed rooted in the textures of one particular, coherent world familiar to the public who adored her. Stein’s relative failures came from trying to write as if she could float above that texture in a neutral space without context. Her successes came in contrast to that work, when she stayed inside the mess: specific people in a specific world.
The lesson I take from her is simple: the work that lands hardest is rooted in one thick, shared reality, not in a view-from-nowhere. Impactful imagination can’t just be free-floating abstraction. In humans, it’s deeply shaped by our bodies, our environment, and the pressures of needing to act in society.
That temptation to float free persists in certain corners of tech and philosophy today. A lot of the ways we talk about the future pull us out of that entanglement. People toss around simulations, uploads, and infinite branches as if they should be taken seriously as reality itself, when they should remain surreal collages that help us see reality. Spend long enough in the resulting cultural milieu - even half-ironically - and it gets easier to downgrade this particular world. If you believe you’ll endlessly respawn, this round can become provisional, turning life into a game of lighter consequence.
The same fantasy underwrites a certain vision of agency. Like many concepts that spread through culture (e.g. 'emergent' or 'systems thinking'), the word loses precision in transmission. On its surface, agency sounds like an antidote to nihilism or desolation. Choosing to act freely in a malleable universe. But if the universe doesn't feel real, neither does the action. We can see this in those who only half-jokingly espouse the 'hypothesis' that we must be in some kind of simulation.
Nick Bostrom’s argument was originally a careful analytic trilemma that I’ve never quite followed. What matters is how the meme morphed in culture and became a kind of secular superstition among the analytically overconfident. Believers treat the fact that Bitcoin almost peaked at 69,420 in 2024 as evidence that we are in some puppeteer’s game engine.
The simulation argument is unfalsifiable and predicts nothing: David Deutsch and others would call it a bad explanation; some call it outright pseudoscience. In a simulation, one has license to pursue interesting or funny outcomes for their own sake because nothing’s real except the mystical base reality we must not be in.
Groundless worldviews in general - simulation, crackpot conspiracies, postmodern relativism - erode shared reality. This cultural detachment creates the conditions for poor taste. Teams doing otherwise highly admirable work, backed by billions, cheapen it with animated waifus dancing in the feed.
And when shared reality becomes negotiable, so does the reality of others' experiences. When people who fashion themselves as agentic also buy in, things break. Creation becomes “I can just do things” without a moral compass. Those with worse intentions make companion bots designed to prey on the unmoored. Chasing cleverness, spectacle or engagement in the name of agency tramps dangerously across moral minefields.
But if reality’s not a simulation, what does science say about it today?
Not as much as we’d like. In physics, our best theories of spacetime and quantum mechanics obviously don’t fit together neatly. They disintegrate in black holes and blow up in the Big Bang, vehemently disagreeing on the grandest questions, like how much energy sits in empty space, and seemingly trivial ones, like what a single photon does when fired through a slit barrier onto a screen. [For an optional detailed detour through the double-slit experiment and its interpretations, see Appendix A]
Now, if you don’t care for the photon particle-wave weirdness, you may remember a nonsensical high school physics lesson about some cat being dead and alive in a box, at the same time. This was Schrödinger scaling up the double slit logic to a cartoon macroscopic thought experiment of quantum superposition. His point was, “hang on guys, a cat being both dead and alive is insane.” It can’t literally be the case that 'both options at once' describes reality in any straightforward way, so we must be missing something in how we connect the math to the world. Einstein famously agreed.
Reconciling this is the holy grail of quantum foundations. Quantum field theory in particular is astoundingly predictive yet still opaque about what the world is actually doing between cause and effect. At the smallest scales, we don’t know what happens as events occur through time, only how to calculate the odds of what we’ll see. What is the underlying reality of that process, the ontology (actual 'stuff') behind the math?
Great recipes, confusing explanations. Deutsch’s interpretation is a prime example. He holds a version of the Everettian stance, more commonly referred to as many-worlds, or the multiverse.
Take the math literally and you get Everything Everywhere All At Once (I love this movie because it explores the complexity of a daughter navigating parental relationships), a branching cacophony of parallel universes where all possible outcomes are happening. Many-worlds bravely answers “what’s really driving the math” with extravagant interpretation. Inspired even. Possibly Probably Total Nonsense.
I hesitate with Bostrom or Deutsch-esque conclusions because they take analytically legible methods to explanatory extremes. They say what matters is crisp formalization into models, arguments, or probabilities (depending on who you ask). But in the minds, stories, and worlds we actually live in, 'squishy' parts, like intuition, embodiment, culture, and experience aren’t unimportant noise around a clean core. They’re just hard to bake into precisely defined schemas. A framework that treats such elements as secondary may be beautifully reasoned but miss something central.
Why does this matter? As soon as we move from predicting lab results to asking what can exist and be built, we’re already assuming answers. In AI, biology, energy, and everything else that touches the real world, different pictures of what’s really underneath lead to different beliefs about what’s eventually possible, what isn’t, and what ought to be.
I’m more sympathetic to a different family of views: Kantian-ish about what we can really know (access to reality is mediated, not a pure view from nowhere), and more process-like about what the world is made of. Think Carlo Rovelli’s relational quantum mechanics, Alfred North Whitehead’s old-school process philosophy, and related strands. The world is an ongoing web of happenings - "drops of experience" that continually become and perish - rather than a collection of discrete substances with fixed properties. You’re not the same person you were ten years ago. You’re barely even the same person you were ten seconds ago, on a molecular level. Whatever 'you' is, it looks more like a thread through changing interactions than a soul that could be copy-pasted into any substrate sans residue. For Rovelli, particles themselves don't have independent existence, they're outcomes of interactions between systems.
Rovelli doesn't solve the puzzle; he reframes it. Einstein didn't fix Newton by adding epicycles; he changed what space and time meant. The next breakthrough may not be 'which interpretation wins.' It might be a different question. Rovelli's other work suggests spacetime itself isn't a fundamental 'fabric'; it emerges from relational structure underneath. While I think 'interactions all the way down' doesn't fully click, a physical 'fabric of spacetime' never really made sense to me either. The relational framing shows up elsewhere: network neuroscience finds function in connectivity, category theory defines objects by relations, ML learns meaning from context. Multiple fields suggest 'relations over things.' Whether physics lands there is above my pay grade, but it’s intriguing as an angle of inquiry.
It's the same mistake Stein made with language: trying to carve words down to their objectively true, observer-independent meanings.
Originally, Stein wanted to understand the human mind, studying under William James at Harvard. Somewhere between James’s “stream of consciousness” and her own Modernist experiments, she intuited that minds are flows, dynamic and continuous. But she still hoped you could name that flow from a neutral outside vantage point. A century later, what resonates from her work is the situatedness, not the abstraction.
I’m also curious about more bottom-up views that still have a hint of this flavor: theories that posit some large underlying combinatorial structure, but where “the world” and its “observers” are just particular relational slices through it. Even there, what shows up is not bare stuff but geometries of interaction and constraint.
This lands in post-postmodern, maybe 'metamodernist' territory… informed sincerity that absorbs postmodernism's critique of naive rationalism without surrendering to groundlessness (I reject ‘nothing’s objectively real, we made it all up’). The American pragmatists (William James, John Dewey, Hilary Putnam) had a similar orientation: anti-foundationalist, but still committed to a shared world.
In philosophy of language, for instance, Jason Storm proposes a 'third way': language shapes and imperfectly represents reality without being completely divorced from it.
I don’t pretend to know the math that might eventually reconcile all this. Big ‘complexity’ theorizing that promised a unified science of emergence hasn't yet given us a satisfying picture of how minds, bodies, and environments hang together. Meanwhile, a lot of the interesting potential seems to be in dynamical, learned models: richer categorical and topological language, neural-net-style pattern-finding, and more willingness to let holistic structure guide what we think the 'fundamental' story should be. That’s part of why I’m drawn to relational frames in physics and neuroscience alike: they take seriously the idea that what’s real includes how stuff stands in relation, and how those relations change.
If that’s even directionally correct, it has moral consequences. An embodied, one-world view doesn’t give you the comfort of infinity where everything works out somewhere, sometime. There is just this unfolding history, seen from inside, and the systems we build in it. Simulation and multiverse talk may make for fun late-night arguments, but as guides for how to use our imagination machines, they’re a distraction at best and an excuse for neglect at worst.
We can’t outsource responsibility to a multiverse or a future upload. And we can’t assume our theories will neatly settle the question of consciousness before we act.
Then what does it actually mean to create with intention?
A storm interrupts the otherwise “wet, ungenial summer”. A young woman by the name of Mary Godwin, scared to death by her own dream, bolts awake. Recognizing genius, Percy Shelley - not yet married to her - encourages Mary to put her ghastly vision to paper. Frankenstein; or, The Modern Prometheus, is born.
Or so the story goes. Literary scholars debate the veracity of Frankenstein’s origin. Some dispute the cool nightmare vision, claiming Shelley’s inspiration actually emerged from an intentional, structured writing exercise undertaken by the squad in Lord Byron’s house that summer. Shelley took the kernel of a concept and went to work on it in the ensuing weeks and months and years.
The resulting novel’s layered irony isn’t lost on readers. Here was a brilliant man in Victor Frankenstein, obsessed with scientific creation yet in so many ways thoughtless and uncaring. After spurning the monstrosity that is 100% his fault, the overconfident creator spends his life running from death’s living specter, who wants to maim him and whatnot.
The message of creative neglect comes with a contemporary twist in Guillermo Del Toro’s recent adaptation. My sister loved Pan's Labyrinth, so I keep up with his work. In the book, Victor flees the day after creation. Here, we see him spend more time with the Creature. He assumes that because the Creature can’t say anything beyond his name, repeated over and over, it can’t reason intelligently and is therefore broken. Just as he bootstrapped its life into existence with a lightning strike, he expected the Creature’s intelligence to flip from 0 to 1 as well.
Later in life, Shelley herself seemed to imply the story’s creation happened in a moment of divine inspiration. Just as film adaptations have Dr. Frankenstein sparking life into his monster in an instant, Shelley would have us believe the genesis just happened. This fallacy of the lightning-bolt epiphany endures in culture.
Oliver Sacks, again in The River of Consciousness, explores such stories. On Mendeleev’s discovery of the periodic table:
It is said, [Mendeleev] immediately on waking, jotted it down on an envelope… it gives the impression that this stroke of genius came out of the blue, whereas, in reality, Mendeleev had been pondering the subject, consciously and unconsciously, for at least nine years… Yet when the solution finally came to him, it came during a time when he was not consciously trying to reach it.
Sacks goes on to describe Henri Poincaré wrestling with a super hard math problem. After struggling with it, and losing, he decided to take a break. As he was hopping on a bus, the solution popped into his head in perfect form. Later, while stuck on a separate problem, he got frustrated and went to the beach, where insight struck on a walk. Poincaré's takeaway was that there must be continued background processing when one is not explicitly thinking - a kind of creative incubation doing real intellectual work beneath awareness. (Sacks interprets this further as a distinct mode of creative incubation, though given recent revelations about his work, his theoretical framing is less reliable than the historical examples themselves.)
Progress happens through subconscious, lateral, serendipitous connection as much as, if not more than, traceable reasoning. Even the Srinivasa Ramanujans of the world, who made extraordinary contributions in relative isolation, spent years steeping in their problems. For every supposedly sudden discovery, there are thousands of hours of relational gardening. Instead of singularities or magic moments, we should expect creation to look more like a long, uneven relationship with ideas than heroic jumps.
Ideas are planted across our collective consciousness, cultivated with concerted effort, plucked and molded into being. Not materialized in discontinuous flashes of progress.
Once plucked, a lot needs to happen to bring ideas to fruition. Technical progress has a knack for being 'in the air' yet rarely happens without serendipity and the messy middle, which it often doesn't survive.
James Dyson first saw the idea for the bagless vacuum in industrial cyclonic separators while making wacky wheelbarrows with his first company - a company he was eventually kicked out of. It then famously took him 5,127 prototypes to get the first version right, followed by years litigating the patents. He was broke for a long time as a result, which forced him and his wife to grow some of their own produce at home. That experience in turn informs how Dyson builds vertical farms today.
Timing and circumstances matter too. As Deutsch writes, Babbage designed his Analytical Engine a century before Turing’s computer, so in principle we could have had computers far earlier. But the economics of the project or the supply chain may have been off. Or, more likely, Babbage himself may not have been up to that stage of the job.
And finally, relationships. Emerson was Walt Whitman's champion and Thoreau's mentor. Emerson was also godfather to William James, who taught Gertrude Stein, who was friends with Alfred North Whitehead and considered him among the preeminent geniuses of her time. Stein was also friends with Bertrand Russell, who worked with Whitehead on Principia Mathematica, an attempt to ground all mathematics in logic that was later proven incomplete. Whitehead himself eventually moved toward process philosophy, treating relations and becoming, rather than fixed formal structures, as fundamental.
We are all haunted by shadows of our unborn Frankensteins. Victor clearly squanders his only blessed opportunity. Mary Shelley, fearing the same fate, chose to avoid Victor’s mistake. She cared deeply for her creations, publishing prolifically beyond the Frankenstein years until her death. This was in part fueled by her depressing personal life: abandoned from day one (her mother, Mary Wollstonecraft, died days after giving birth to her), Shelley went on to be tormented by the ghosts of three of her own four little children. She couldn’t bring loved ones back to life and probably felt she had no choice but to breathe her soul’s excess into work instead.
Mary, by all accounts, channeled her circumstances into a life of impact. Composing what became literary canon, she separated herself and gave humanity a gift. One that scarred this middle schooler reading beyond his maturity level, but a gift nonetheless. Like Victor, she didn’t just dream, she acted. Unlike Victor, however, she breathed conviction, timelessness, and heart into it.
But Frankenstein endures for another reason relevant to our own day. It hit a nerve because people of the era genuinely feared we might figure out how to create life.
In Shelley’s day, live demos of galvanism, AKA running current through dead frogs and watching tissues twitch, energized crowds in spectacles of edutainment. As Frances Ashcroft details in The Spark of Life, scientists debated if life is principally bioelectrical or biochemical in nature. For a couple centuries, the biochemical camp dominated. We discovered DNA and mapped metabolic pathways, designed drugs to wipe out infections and modulate hormones. But in parallel, we’ve also learned bioelectricity is not a parlor trick, that voltage gradients across cell membranes are arguably as critical to life as genes and proteins. This helped us figure out how to defibrillate and reboot hearts, restore movement after injury or certain paralyses, and even interrupt seizures in real time.
Today, a growing wave of work in bioelectric patterning, like Michael Levin's morphogenetic circuits that guide regeneration in worms and frogs, reinforces this picture: bodies are multi-scale electrical networks coordinating growth and repair, shaping behavior. As we start to edit genetic programs and how they’re expressed, we’re also increasingly reading and writing signals to mediate the control flows of life.
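(If you want a concrete handle on what a 'voltage gradient across a cell membrane' even is, here's a minimal sketch - my own illustration, not anything from Ashcroft or Levin - of the textbook Nernst potential for potassium, using typical textbook concentrations:)

```python
import math

# Nernst potential: the membrane voltage at which a single ion species sits at equilibrium.
# Constants are standard; the potassium concentrations are typical textbook values (assumed).
R = 8.314        # gas constant, J/(mol*K)
F = 96485        # Faraday constant, C/mol
T = 310          # body temperature, K
z = 1            # charge of K+
K_outside = 5    # mM, extracellular potassium
K_inside = 140   # mM, intracellular potassium

E_K = (R * T) / (z * F) * math.log(K_outside / K_inside)
print(f"K+ Nernst potential: {E_K * 1000:.1f} mV")   # roughly -89 mV
```

That comes out to roughly -89 millivolts. A real neuron's resting voltage also depends on sodium, chloride, and active pumps, but the order of magnitude is the point: every living cell is holding a small battery across its membrane, all the time.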
So since Shelley’s time, we’ve come to understand life through a more unified bioenergetic frame. Nick Lane, in The Vital Question, applies something like this to the origins of life. Rather than life emerging from some generic primordial soup, he argues it arose in environments with strong natural energy gradients and evolved ways to harvest those flows.
As we’ve started thinking of life as a specific way of organizing energy and matter, we’ve also come to revisit what happens at the other end. If life is a process, not a spark, what does it mean for that process to end, and what would it mean for it not to? We see this present-day obsession reflected in Del Toro’s Creature, who cannot die (unlike Shelley’s, who presumably goes through with burning himself on his own funeral pyre). If you can’t die, what does 'a good life' even mean? What is death doing for us, structurally?
Complex life depends on dense mitochondrial power, and in Nick Lane’s world, aging is the long-term bill for that energy: over time, damage and mutations in cellular machinery slowly erode the body’s ability to keep tissues powered and repaired. Evolution’s hack was to accept that for individuals and ensure continuity via children: copy genes into a new body and let the old one wind down. Death, in this view, is a structural trade-off baked into how our kind of life bought complexity in the first place.
Levin takes it a step farther. In a recent paper, he treats aging less as an unavoidable hardware failure and more as a control-systems problem. For him, bioelectric patterns don't simply ferry messages between cells; they act as the organizing field that constrains and coordinates tissues toward target states; aging is what happens when that multi-cellular navigation system loses the plot. In principle, if you can restore the right bioelectric patterns, you might be able to re-impose youthful, goal-directed repair. To him, aging is as much about lost goals as it is about broken parts.
Thus, life appears to be one way to fight entropy, and aging and death look less like inexplicable failure than the cost of waging that fight. Whether it's the hardware breaking down or the high-level goals losing coherence is an important question if you care about what minds are actually doing - and where. To ask whether tissues can 'lose goals,' you have to grant that they have something like goals in the first place.
Levin and collaborators explore exactly this in another recent paper. Instead of treating cells and tissues as conduits of signals that inexplicably give rise to cognition, they model them as problem-solvers in themselves. The paper asks: if you had to randomly try all possible ways to regrow a head or reach a target shape, how long would it take? They compare the model results to what real tissues do. Real systems are astronomically more efficient than blind search, which suggests they’re not exploring all possible states but moving along constrained pathways, structural grooves for getting from “here” to “there” in chemical, electrical, and anatomical space.
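Not their model, but a toy sketch of the gap they're pointing at: blind search over a made-up 30-component 'body plan' versus a search constrained by local feedback (all the numbers here are arbitrary).

```python
import random

random.seed(0)

N = 30                                # components in a toy "body plan"
TARGET = (1 << N) - 1                 # target configuration: all 30 components "correct"

def blind_search(budget=1_000_000):
    """Guess whole configurations uniformly at random until the target shows up."""
    for trial in range(1, budget + 1):
        if random.getrandbits(N) == TARGET:
            return trial
    return None                       # not found: the space holds 2**30, about a billion, configurations

def constrained_search():
    """Flip one random component at a time, keeping only flips that move closer to the target."""
    state = random.getrandbits(N)
    steps = 0
    while state != TARGET:
        candidate = state ^ (1 << random.randrange(N))
        # local feedback: accept the flip only if it increases the count of correct components
        if bin(candidate).count("1") > bin(state).count("1"):
            state = candidate
        steps += 1
    return steps

print("blind search hit the target at trial:", blind_search())             # almost certainly None
print("feedback-constrained search took:", constrained_search(), "steps")  # typically ~100
```

With these numbers, the blind version almost never finds the target within a million guesses, while the feedback-constrained version typically gets there in around a hundred steps. Real morphogenesis is incomparably more complicated, but the asymmetry is the point.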
Seen this way, cognition isn’t a light that suddenly turns on when you grow a cortex. It’s whatever lets a system, whether it’s a handful of cells or a whole animal, avoid flailing randomly and move toward useful outcomes. The kind of goal-directedness that makes life feel purposeful in our conscious minds is deeply present in the low-level processes that grow and repair us. Then the interesting question in neuroscience isn’t ‘what’s special about neurons?’ so much as ‘what are the organizing constraints that let many parts, including the rest of the body, act together like a single, goal-directed entity?’
What we perceive as top level goals may affect us down to the health of our smallest cells, though we're only starting to understand how that conceptual translation might work.
How do we choose which ones to build a life around?
On Sunday mornings, the sounds of streaming showers and clinking cups would softly scream "keep your eyes shut!" I would slow my breathing and pray to God to please let them go to church without me. Maybe this time, my parents would see the rays already framed a halo around my crown and decide our work here is done, let this angel sleep.
Like clockwork, my sister and I would be forced to sacrifice 1 of 2 free days to go to a worse version of school, where we admittedly still had friends, but they were kind of weird, and we had to speak with elders who smelled of mothballs. Clasping little styrofoam cups, we'd suck Maxim coffee through thin red straws and humor them with small talk so excruciating we'd wish the sermon hadn't ended.
For my parents, though, church was a godsend. At 36 and 34, they had left everyone they loved for a foreign country with two young children in tow, landing in a blocky matrix of minor towers and frontage roads called Houston. Though a buzzy website called MapQuest was starting to make the rounds, offline directions remained more reliable, and they needed help finding their way. Coworkers recommended a local Korean Baptist church off I-10, where community meant friendship and know-how, like "careful when asking for directions, many guns in Texas". Though my parents hadn't been very religious back home, the Church's message also provided comfort, and the ritual balance.
They had also determined that religious practice was the simplest way to instill values. Rather than wrangling together a modern, eclectic program they didn't have time to teach at home anyway, they chose a structure sanded down by history. In addition to the table stakes morals present in every mainstream religion - such as the Golden Rule, or my favorite, don't kill people - we learned about compassion, forgiveness, and the truth that doing boring things repeatedly is an unavoidable constant in anything worthwhile.
I stopped going to church regularly when I realized no one can actually make anyone do anything in this country, not even your parents (in theory). As adults, though, I think we should study religious texts more deeply, regardless of our personal inclinations. They're boiled-down nuggets of allegorical wisdom that survived centuries of printing cycles. Treating them more like literature than history is, I think, healthy. The goal is to reexamine, reaffirm, and possibly remix takeaways, understanding the tradeoffs between value systems for the sake of our own children. No religion at all is, of course, a valid choice for them when the time comes, but it helps to be able to recognize where and why certain values are implicitly baked into our culture. As secular as we think we've become, every society has religious roots that endure.
Though religion comes with practical wisdom, table stakes morals, and selected values, I think the most critical thing any sort of practice teaches children is simply the concept of faith itself. Faith to me means orienting towards something you don't have proof of, knowing it might not be true, and evaluating what lessons emerge anyway. In this way, I distinguish it from blind faith, which claims certainty. While faith means acknowledging Santa could be fake but leaving cookies out anyway, blind faith means insisting a large man will indeed slide down your chimney.
This takes epistemic humility, even in rare cases where, faith be damned, apparently undeniable divine moments appear like actual proof. I can't say I've had that happen, but even if I had a vision of a visit from Jesus himself - as some report with near death experiences or psychedelic trips - I would temper that with the asterisk that it is a first-person claim about phenomenology, not an analytical one about reality's structure. Epistemic humility means holding such extremes as meaningful without pretending to know exactly what they are (yet).
Counterintuitively, practicing with epistemic humility and uncertainty is where strong faith originates, because strength is built on overcoming doubts that make movement difficult. Repeatedly taking action.
Faith, correctly derived, is what gives us hope. Neither has to be religious at all. This meta-skill translates to relationships and work alike. It means caring when you don't have evidence that you should, which moves life to a better place.
What happens without it?
Dostoyevsky's Notes from the Underground starts: “I am a sick man, I am a wicked man.” Nice choice for a solo dude stopping by a remote bookstore on a random Tuesday, especially after he’s told the owner his favorite book growing up was Wuthering Heights. We joke amicably, but I feel her stare as I walk out.
I stayed away from Dostoyevsky in my 20s, partly because I wasn't ready for where he takes you, and partly because he’s in the starter pack for the self-important 'guy who reads' along with Ayn Rand. Did not want to become a caricature. Chekhov makes you laugh, Tolstoy makes you wonder, but no one more than Dostoyevsky conjures the image of a scary Russian pounding vodka, waxing moral conflict in some smoky back room. What’s going on? I don’t know, I think we should leave.
The opening line is a few shades darker than Dickens’ “best of times, worst of times”. At least Dickens contrasts polarities. Dostoyevsky’s antihero introduces himself as a sad sack of crap through and through.
In his introductory diatribe, the Underground Man describes the universal need to moan about struggle. The primal gratification one finds in pushing boulders up hills while telling everyone how hard it is. Man sees wall, man charges into it. Doesn’t stand a chance, but maybe this time will be different. When the exertion causes suffering, man thinks there’s joy in it.
I can’t say it gets much better. The antihero’s 'notes' are ramblings of a guy who creeps around the margins of society, bitter because his supposed intellectual superiority has only brought him misery. If Dostoyevsky didn’t wrap the whole thing in ironic meta humor, it’d be too depressing a read.
We come to learn the narrator spent the first half of his adult life absorbing, and the second half spewing all that had bubbled in him over the years. To readers, this is presented in reverse order. We see his life in the 1860s, then flash back to the 1840s to understand why he ended up as he did.
The Underground Man has non-consensus views. He thinks he is the only ‘n-of-1’ in a herd of sheep. But if your own subjectivity is singular, it follows that everyone else’s is too. Even the people you think are sleepwalking. In this way, he resembles many misguided, overconfident 'red-pilled' folks who feel only they have seen through the Matrix. The fatal flaw of the antihero is his belief that he alone is meant to be uncommon. I mean, he is alone in a real sense, but it’s not because he’s the only genius to crack the code.
The man has nowhere else to take his broken mind, which has coveted independence in a world where prominent intellectuals (and the culture writ large) promote dedication to causes of moral clarity. Russia, at this point, has taken some Enlightenment ideas to an authoritarian extreme and decided the ideal way to live is with absolute conviction and commitment to the broader machine. It is a cancerous, distorted end state.
The machine in question is utopian rationalism: the 1860s intelligentsia's conviction that science and reason could engineer a perfect society. Build the Crystal Palace, design the right incentives, and humans will naturally choose virtue. The Underground Man sees this and recoils. He insists humans will act against their own interest just to prove they can, to prove they're not "piano keys". He does not believe the purpose of life should be so clear cut.
The Underground Man allows addiction to pain to become pathological. Pain can fuel - runners know it bleeds into pleasure - but if it becomes the end in itself, spirit erodes into defeated self-awareness. Richard Pevear observes that for Dostoyevsky, this inner disharmony is the source of consciousness itself, but consciousness without movement towards something beyond the self is "death-in-life". The narrator correctly asserts that suffering is the price to pay, but he doesn't know what he's paying for.
Tolstoy explores the opposite trap in The Death of Ivan Ilyich, also structured in a reverse narrative of sorts. While the Underground Man is aggressively anti-establishment, Ivan Ilyich is exactly what one is supposed to be. Through flashbacks from his deathbed, we learn he acquired a respectable position as an official in the Court of Justice, endured a properly dreadful marriage, and otherwise directed substantial energy towards getting as high as he could in high society.
As he approaches death, Ivan has an epiphany that doing all of the things that should have allowed him to live painlessly has caused complete and utter numbness. He accepted the duty of his station with such a lack of resistance that inner life became frictionless. Smooth clarity let every meaningful possibility slip out, leaving him barren.
If the Underground Man takes himself out of play by overthinking things and lives in total agony, Ilyich doesn’t think at all and dies in it. Neither aim at something worthwhile. Their lives teach us that in this vacuum, the hyper-individual, Waldenesque fable of independence is as dangerous as being a dispensable suit. As much as Thoreau romanticized living off grid, he had people visit all the time.
Remember, there's a line between camping and sleeping outside. Somewhat subjective, but objectively real.
Tolstoy seemed to take this seriously in his own life. Releasing his own anxieties into Ivan Ilyich’s psyche, he decided he wanted nothing to do with it, building a distinct body of work publicly and successfully. Dostoyevsky, too, didn't succumb. Unlike his antihero, we know Dostoyevsky was not ineffectual. He became one of Russia’s most respected writers over the course of his life. He and Tolstoy both could see an implied future playing out, and they didn’t like it. Rather than accept self-fulfilling prophecies, they turned their disquietude into productive art, and changed us for the better.
Thinking for yourself is necessary but insufficient. Both stories arrive at a question that confronts us often.
What constitutes a life well lived?
Marry the rich fisherman and be safe, you just got expelled with zero economic prospects.
Or choose the poor boy you love, your best friend, your first friend, who saw you needed help digging rocks from the field and dug without complaining but also planted the seeds, watered them and pulled the weeds, picked and carted to market with callused hands, set up the stand, and negotiated with needy hagglers until every last cabbage sold. He had you sit on the box reading your books and writing your poetry, not because he thought men work to let women enjoy, but because he knew you had potential to be more than him; your education mattered. For years no judgment: in adoration of who you were, compassion for who you’ve become, and belief in your eventual transcendence.
The fisherman keeps clearing his throat loudly, and also he is divorced.
Is there a third option? Get on a boat and run away, start over where no one knows you are unskilled children who come from nothing? No, you already tried that, that’s actually what got you expelled.
Nonlinear, realistic slice-of-life show When Life Gives You Tangerines examines the limitations of agency. Even if they successfully run away, Ae-sun and Gwan-sik know the dream is a fantasy. They need to face misery apart or struggle, really struggle, together.
The title is a nod to ‘when life gives you lemons’ but with the tangerines native to Jeju, where our characters grow up in pre-industrial Korea. On the island, men typically fish or farm, and women are lucky to be married into better households. Women who want to fend for themselves do have one other option - deep sea diving for abalone - but this is dangerous, hard work. Ae-sun’s own mother does it as a single mom, and she dies early. She’ll do anything to prevent her daughter from suffering the same fate.
The story is imbued with the emotions of what could have been. Or more accurately, what could be for others but not me. For these characters, imagination is fantasy. When Ae-sun and Gwan-sik run away from Jeju, they exercise their freedom. Which, as an escape, is an illusion.
So struggle they do. Gwan-sik goes from the calluses of a field hand to the puckered cuts of a boat hand, while Ae-sun puts her dreams on hold to run things at home. The pair manage through real tragedy and real joy in a way that’s not a Viktor Frankl (not Frankenstein)-esque internal search for meaning; they have an objectively beautiful life. Tangerines isn't about finding meaning in suffering, it's about making life an honest work of beauty with loved ones.
That’s not to say the couple ends up complacent. They take each opportunity afforded to them seriously, making risky bets on property, opening businesses, running for local political office, and blessing their children with the tools to do better than them. This last point is critical - the drama spans generations, exploring how abstractions like values and narrative legacies are as tangible to one’s inheritance as genes and assets (or lack thereof).
Ae-sun ultimately does become a poet in her later years, with a rich bench of material to draw from. In all conceivable ways, the family tries their absolute hardest to meet their potential, and things do work out.
Tangerines resonates in part because it traces the lives of people like my own parents, who grew from the constraints of the old world to the opportunities and challenges of the new.
By the time I was born in Seoul, South Korea had gone from a patchwork of rural villages and a handful of grimy cities to the 12th-largest economy in the world.
My parents met half a generation after Ae-sun and Gwan-sik while majoring in literature in college. Against the backdrop of rapid societal change, they sought beauty and truth in the broader dislocation, practical skills be damned. This was especially brave (or foolish, perhaps) for my dad, as other guys his age were diving into technical disciplines in hopes of securing a role in the nation’s growing chaebols. It turned out learning English helped set him apart in the process, though he was the only guy in the lit program who leveraged it for a corporate career, at the now-defunct shipping colossus Hanjin.
A stable, salaried job was a welcome departure from the financial instability my Haraboji (grandpa in Korean) faced as he cycled through businesses during the boom. Like Ae-sun and Gwan-sik, Haraboji tried to catch some waves, to varying degrees of success. There was the billiards hall, coffee shop (which afforded my dad the coolest bike on the block), second billiards hall, electric supply, eyelash exporting, sunflower farming, beauty supply. Then, my Halmoni's (grandma's) final store helped keep them afloat until her early death, around the time my aunt started making money as a teacher. With his daughter in the city, son in America, and beloved grandkids relegated to voices on the phone, Haraboji spent much of the last third of his life in solitude with little to his name.
That his efforts never got the family to escape velocity - or even steady velocity - confused me for the longest time. Haraboji was my Chuck Norris. Smart, athletic, handsome, kind, gentle yet commanding love and respect, better at Go than any human and most computers that challenged him. He was also tall.
I’ve come to realize that success or failure, 1 or 0 is the wrong frame for looking at events in the world. Chained binaries are not how reality arises, nor how it evolves over time. Though they make for powerful computational models, the real world is far more interactive.
I may never have answers on why Haraboji’s cards fell where they did, but I know one way he succeeded: he sent my dad and his older sister to college, and they continued the dream. How that threads with my parents’ story in America I reserve for another time, but our exact ups and downs have led me here today - though the same initial circumstances can shape siblings differently. Experiences and environments pull those who once shared a room in disparate directions.
The constraints in our reality shape us.
Take a trivial example: my mom used to joke that if I ended up tall too, I'd be a jerk (she also told me to marry someone pretty, but not too pretty or I'd get cheated on). When I stand next to my dad, my legs are longer, his torso's longer, so our shoulders are level. Yet at 5'7", he has one inch on me because his head is bigger than mine for some reason. I played a lot of sports growing up, so this was a problem, and at times I hated it almost as much as I hated being broke. But being short hardened my work ethic, made me develop larger presence, and granted me the knowledge that limits don't uniformly lower your ceiling; they modify the paths it emerges by. Even if I had a choice, I wouldn't change a thing because I wouldn't be where I am, I'd be someone else. The constraint became mine.
That said, once of legal adult age, a person should clearly have the right to choose. I was super lucky to end up with a high capability to flourish otherwise, to build a meaningful life. But other short dudes may feel that an elective procedure to lengthen their legs would help them reach their own potential - make them more confident at work, treat others better because they're more at peace with themselves. More power to them.
So one might propose a coherent principle: morally, our obligation in building technology is to provide people with choice in their pain, to let individuals decide which constraints of life to internalize. By providing choice, we could allow people to decide what purpose they want to suffer for. Sounds reasonable, but a bioethicist might ask "what about the unborn children? They don't get to decide."
Right, what about my kids? If I marry someone taller, or at least with a tall family, can I rely on my son to revert the Byon family's generational shrinking trend to the mean? Or do I - should I? - think about how to save him from, say, the logistical puzzle of dancing with a pretty girl in heels, or the need to lead with extra charm at a corporate recruiting event filled with gangly partners? As we start enabling parents to select embryos using tools like polygenic scoring, this is becoming more of an issue.
I believe parents' responsibility is to give children the capability to flourish, not to sculpt them towards a template. For height? I'd probably let the genetic dice roll. For severe disease that blocks capability to engage with reality at all? Intervene if I could. The vast middle is harder, but orientation can guide us: when future choice is possible, lean toward letting them make it. When it isn't, parents must define what flourishing means - what I'd lean toward preserving is access to the basic dimensions through which we interact with the world and each other (sensory, cognitive, motor, relational, among others) at minimum.
Selection is one thing. What about far higher stakes for people already here? If clean cures for Stephen Hawking’s ALS or Helen Keller’s deafblindness had been available to them, what then? I cannot know how they would have chosen, but the choice should have been theirs. Unchosen suffering is the issue, not suffering itself. But part of what made their minds what they were was exactly how their worlds narrowed and rerouted. Hawking’s disease left him with many, many hours to sit and think about the stars, even as he lost decades of mobility to decline. Keller’s writing on justice and dignity came alive for her through touch, through other people tracing language into her palm, while being cut off from vast swaths of experience. Their ideas weren’t produced by 'pure intellect'; they came out of very specific bodies under very specific constraints, which shaped them profoundly, for better and worse. These are exceptional cases - I cite them to illustrate tensions around constraints and lives, not to romanticize or argue against intervention.
Our bioethicist might then note that by calling it a 'cure', we're implicitly making a judgment that a 'defect' needs fixing. But I think it matters more what an intervention does than what we call it. Does it expand someone's capability to flourish if they so choose? The technology is neutral, but the orientation around it - expanding capability rather than correcting deviation - isn't. Gray areas definitely exist. Some traits we'd edit away carry hidden value depending on context (the textbook example is how sickle cell protects against malaria; there are surely contexts we can't anticipate where a 'suboptimal' profile turns out to be adaptive). And beyond specific genes, we know novelty emerges from diverse experience. Each person contributes something to the whole no other could. Their particularity.
We don't yet understand what capability for flourishing strictly means, which is why the value system influencing peoples' choices matters more than the technology. Beauty is a good place to examine this clearly.
Korean cosmetic surgery culture shows what happens when a value system suppresses particularity. If you step on the subway in Gangnam, your heart may quietly skip a beat or two, because there is a ghoul standing by the pole. Except it's not a ghoul, it's a girl who just had her jawbone shaved down, so her face is wrapped in bandages. She's not the problem - she's responding to a system that punishes deviation. One out of every three women in South Korea has had work done on her face by the time she's in her 20s, and some estimates put college students closer to 50%. Business is booming.
The jawbone may be a follow up to last year's birthday present of double eyelid surgery, which she had to get because she's the only friend in the group who didn't have it yet. There, one template - white skin, small face, raised nose, big eyes - is considered the objective, platonic ideal. Choice exists on the individual level, but the culture guides the population towards homogeneity. I think this is cataclysmic for young girls, much less society.
The global phenomenon that is KPop Demon Hunters agrees, perhaps because it was created by a team with hybrid East-West roots. The movie's message is to allow the "beauty in the broken glass" - a version of particularity - to emerge. It speaks to the fact that our biggest stars are stars because no one else is like them. Beyoncé and Angelina Jolie (+ Keira Knightley, if I may) are universally recognized as among our most beautiful women. I'm probably dating myself, so swap in Zendaya or Sydney Sweeney if they're your cultural anchors.
Take the counterfactual where they're anonymous. Put them in a commercial casting lineup, and scrooges would disgustingly nitpick how they don't conform to standards (not that scrooges don't already). The rest of us sane people reject this framework entirely.
By definition, what makes a person magnetic is that no one else carries their essence, how they walk through life. I want my daughter to have self-respect, and if she does choose to change physically, I want her to do it in pursuit of singular beauty, not some asinine standard. Same goes for my son.
Of course, not all particularity is equally good. And not just in people. Culture often takes something one-of-a-kind, like a banana taped to a wall, that is plain ugly, and claims the work of art is singular and therefore beautiful. As I wrote last year, this postmodern stance - that everything is constructed, arbitrary, and groundless - is getting tired. While rationalists ask for proof, this camp rejects the premise because objectivity itself is oppression in disguise.
Beauty, like truth, emerges from engagement, from interactions. We don't recognize either in the abstract, we recognize them intersubjectively. Not because we agreed, but because there's something to recognize. And rather than 'I know it when I see it', it is 'I know others will see it too'. While you could try to enumerate the characteristics or properties in a formula, that doesn't explain why the gestalt is resonant on a metaphysical level. As such, I think beauty is both objectively real and relational in nature. Beauty is not arbitrary, nor is it a standard. It's plural: like language, its grammar makes possible many ways to be genuinely beautiful. Irreducible to a checklist.
Thus, particularity is necessary but insufficient. It can be mere novelty - banal, incoherent, even ugly. Like suffering requires purpose to create meaning, particularity requires taste and care - an expression of love - to create beauty. Taste requires cultivating discernment, which comes from seeking and learning how to see. But care requires work. For both, embedding in reality is a necessary precondition. We are who we are in relation to each other and the world we inhabit. This, not solitary genius, is where our power comes from as individuals.
What do we do with this power?
Contrasts create clarity.
In solitude, Maya Angelou says, "we describe ourselves, and in the quietude we may even hear the voice of God" (Even the Stars Look Lonesome).
She doesn't specify what God would say. I think it'd be: "why are you here with me? Find your way back." In the interludes, we remember the stakes.
One reason biology is so interesting right now is that if you look up the scientific definition of life, you find dozens of complicated answers. But put it in contrast: life isn't what we ourselves are, it's who and what we love.
This is why I don’t worry about truly creative work being trivial to replace with AI - we can't simulate a life. The real risk is forgetting how to tell the difference. When something like that does start to emerge, it doesn't need to threaten our creative space, it will add to it - sentient AIs might want to tell us the stories of their own experiences, not imitate ours.
Models can help us explore the problem space and be more confident in what not to do. The leverage is incredible: more time perfecting what matters, faster iteration, possibilities we wouldn't reach alone.
But creation needs a point-of-view imbued throughout, in explicit rules and implicit patterns. Each decision compounds over thousands of days of toil. The creative artifact is the accumulation of a path-dependent process, a thread of existence molding and being molded by the system it is built in. No machine can pull this out of our heads, because it’s also in our bodies and our histories and our communities. Our points of view matter, and there’s a thousand ways to skin Schrödinger's cat.
This is also my answer on free will, FWIW. The modern debate asks whether our particles are determined - and if so, whether choice is an illusion. A couple of our quantum schools from earlier would say yes, others no. But if we are a goal-directed process, at every scale, choice isn't an illusion covering up physics. It's choices all the way down.
A strong point of view isn’t enough on its own. Agency only works in a system responsive to it. Emerson and the American self-reliance tradition stress individual will, but if you’ve ever been to places that lack system feedback (like many parts of India) you know that will alone isn’t enough. The sheer friction of everyday life makes it impossible to operate much beyond the local. Despite the strengths of such systems (e.g. flexibility, resilience), the feedback loop between individual effort and broader change is weak.
In my parents’ generation, opportunity was not so widely available. People’s options were largely determined by what they had. Capability and choices contributed less than bloodline, capital, or proximity to gatekeepers. In our world of growing abundance, that constraint inverts. With apparent opportunities everywhere, the problem is what not to do, where not to go, who not to follow, what not to ingest.
All kinds of temptations claim to be our salvation from the ensuing backward-depression and forward-anxiety. One broad failure mode is indulgent escape. It shows up as consumptive vortexes like endless travel, scrollable feeds, and fake food, or as numbing agents like excessive medication and fads disguised as cures. Peddlers of these pathologies treat all constraints as sickness, remedied by completely overdoing it or sedating people to submission. The opposite failure mode is treating all constraints as sacred. Some refuse tools that reduce friction on principle, as if suffering by itself is virtue, or ease and enjoyment are weakness. But struggle is only valuable when it's shaping you toward something. Otherwise it's just pain. If this is ascetically self-imposed, we wish you well.
Another trap is uninformed fear, denying what's here to stay. Memetic information tends to lose important context as it goes viral. Well-intentioned, thoughtfully-reasoned context gets grossly distorted into pithy incantations by charlatans and false prophets. If this ossifies in culture and seeps into institutions, it'll slow down tangible progress and leave us all spinning our wheels, suffering in fixation.
And as I mentioned last year, it’s no secret we have a vacuum of meaning. This religion-sized hole sucks energy from various aspects of modern society. Across the political arena, for one. Of course we can point to extremists. But we also see psychotic breaks near the middle. The scary thing about Luigi Mangione, as the Free Press noted, is that he’s decidedly not extreme. No coherent ideology; his motives are a Rorschach test for where things are broken - which doesn’t make them less monstrous. When systems stop feeling responsive, some people desensitize; others reach for agency in the only ways that seem to register, however cowardly. Thinking about the range of classmates I had at Penn, my initial thought on news of his arrest was that 'reasonably adjusted people' are perhaps a stone’s throw away from insanity at best, evil at worst.
It's not surprising that, in the same breath, people talk about conquering aging or even cheating death. I’m not necessarily against that instinct; if someone figures out how to give everyone another healthy forty years, great. We probably need a few immortality zealots pushing the ecosystem, the way crypto’s decentralization zealots helped build stablecoin rails and prediction markets. But I don’t want my whole sense of purpose to hang on faith that we’ll escape the basic human condition. We don’t yet know what death is for, or what we’d become without it. We do know there’s plenty of pain and potential for flourishing in ordinary, finite lives that we’re nowhere near getting right.
Schopenhauer predicted that the philosophy and knowledge of the Upanishads (ancient Hindu texts) would become the cherished faith of the West. I'm way less interested in mystical new thought than I am in the potential for a scientifically rigorous approach that probes old Asian intuitions without spiritualizing them. Like the Buddhist intuition that selves are processes, dynamic and impermanent. Rovelli's physics and Levin's biology are arriving at the same place through different methods - convergent evidence that Western divide-and-conquer substance-metaphysics has it wrong. I think the Eastern frame will have practical value: it trains attention on relations and processes rather than materials, which may be what's needed for the next breakthroughs.
When evaluating value systems, I wonder:
Does it suppress or enable particularity?
Does it ground people in reality or offer escape?
What are the tradeoffs and failure modes of its selected values? (every system has them)
Personally, I'd say I became agnostic after childhood, but as I ask more questions these days, I find more answers, which beget more questions. The discovery of purpose itself, if I haven't made the theme clear enough by now, seems process-like. That said, I do lean perennial in that more unites us than separates us across the major belief systems. Call it God, Allah, the universe, a big ocean of consciousness, what have you. But the imagery of us being drops of water that come into being, refracting the light in our own ways as we fall to form the whole, seems pretty damn beautiful to me.
For my children, I have the same orientation as I do with embryo selection and deferral of choice. My future wife and I will choose for them until they're capable, knowing that choice is not to be taken lightly, then give them tools to evaluate and choose without prescribing the answer. If they're as sharp as my sister early on, I imagine that will happen sooner; if they take their time like me, they may find themselves riding along to church of some kind for a minute to establish the basics.
Whatever philosophies we choose to anchor, mine resolve into a plain throughline: use our growing understanding of intelligence to help people build meaningful, self-directed lives together in the real world.
It’s crucial to fight for ideas we want to propagate. In the stories we tell, the products we commercialize, and the capital we deploy.
Practically, when I consider components of an imagination machine, I end up asking three questions:
What do we actually know here, and where is our understanding limited (theoretically and instrumentally)?
Who decides what gets built and shipped, how does context (e.g. values, incentives) shape their decisions, and where are the structural asymmetries?
Given the systemic failure modes I just described, does this offer escape or help people live authored lives?
The systems that matter most in the foreseeable future aren’t thought experiments; they’re the bridges we build working backwards from who we ought to be. To do this, we need to distinguish between real commercial trajectories, speculative but plausible science (faith), and blind faith. Real trajectories are what’s already here, unevenly distributed. Speculative but plausible work lives where we have reasonable scientific targets that don’t obviously violate physics but not yet the instrumentation or theory to hit them. Blind faith is when metaphysics masquerades as progress, and we skip over the gaps as if fantasies are inevitable. Treating blind faith as settled destiny, or worse as morally urgent, will incinerate capital.
Applying this orientation of epistemic hygiene, this is what I care about: in media, stories rooted in the beauty of ordinary lives during extraordinary times, not escapism. In products, tools that demonstrably help create energy and security in daily life, and confidence that our children will be better off. In capital, skewing towards the real and speculative buckets, with an eye on overlooked or hybrid work outside standard venture profiles.
For me, the work starts less with building disembodied superintelligence and more with messy places where the tools of Western medicine have largely failed: depression, anxiety, ADHD, OCD, neurodegenerative disease, chronic pain, addiction, and other conditions. But the reactive 'find disease, treat disease' frame misses how we experience life. Suffering and flourishing are intertwined, not separate problems. Approaches that treat them as separate will keep failing. The most interesting systems aren’t gods in the cloud; they’re grounded, long-horizon mirrors that help us walk together. They'll weave together behavior, physiology, genomics, and richer signals from whole-body technology (wearables, ingestibles, implants, etc.) to help us see and reshape our own lives.
Commercializing AI will be like self-driving. “Gradually, then suddenly” describes how we experience discontinuous consequences of continuous change. If Hemingway were alive today, I think he would tell us to engage with the progress, but pause to take stock of what we're looking at clearly, maybe enjoy the view.
At breakfast with an old friend last year, I mentioned I didn’t think AI had had its 'running water' moment yet. When my dad was young, his village didn’t have much infrastructure. Though he's lived through Korea’s rapid industrialization and the internet revolution, and took his second trip in a Waymo in the epicenter of tech the other week, to him the biggest qualitative step change in life is still when his village got access to tap water in the mid-'60s.
I didn’t think we had crossed this rubicon with AI until this year, when a bug on OpenAI’s end locked me out of my account. I’d gotten used to loading up Codex tasks at night and reviewing PRs in the morning. Sitting there without access to my work felt like a big New Jersey snow day: when AI drives a meaningful portion of your economic output, outages really do feel like infrastructure failures. That’s a useful gut-check: if model progress stopped today, we’d still spend years deploying what we already have across every sector. But we likely still need fundamental breakthroughs beyond scaling compute before we get anywhere near the futures the hype keeps selling.
All that’s to say we’ll question many things in the coming years.
Our most powerful stories tend to converge to a final question. Science fiction explores what it means to transcend our limitations. As humanity progresses towards the inevitable heat death of the universe in The Last Question, Isaac Asimov asks: “How can entropy be reversed?” For Asimov, like Tolkien and Dostoyevsky, the answer is left to God. Others, like Gertrude Stein, die in dignified denial, resolute that if there is no answer, then “there is no question”.
I’m pretty sure I’ve cracked the code on what the last question really should be. It happens as I’m dreaming. I wake up and reach for my phone to write things down as the threads recede in my mind. When I look at my Notes app the next morning, I see the epiphany starts and continues and then trails into typos and becomes incoherent. My desk lamp is still on from sketching faces, and a bowl of persimmons sits on the still life in progress. The outlines of the exercises blur as my eyes shift to the light awakening the room, and I hope when I see my sister next, I can show her the drawings I learned to make.
Special thanks to friends who keep putting out creative projects despite - or by using - your constraints, chosen or otherwise. I probably don't say it enough, but seeing your imagination at work inspires me.
Appendix A: Double slit & quantum interpretations
Here's how I explain the “double-slit experiment” to myself to try and make it more intuitive.
You shine light at a barrier with two narrow slits cut into it and look at the screen behind to see where the light ends up making marks.
If you imagine light were just a sequence of little bullets, you’d expect two bright stripes on the screen: one behind each slit. If instead you imagine a flow of light waves hitting the barrier, you’d expect a striped “interference” pattern on the screen, where waves going through the two slits overlap and cancel in some places as they spread out, like ripples in a pond.
First, you’re just interested in how the light ends up on the screen. You do the test with nothing detecting how the light passes through the barrier. You crank the light source down so photons (units of light) go through one by one. The first photon you fire goes through the barrier and makes a single dot on the screen. On its own, that looks exactly like a little bullet: one blob, one spot. If one bullet went through, it would have picked a slit and ended up making a spot behind the one it picked.
But for some reason, that spot does not necessarily end up neatly behind one slit. It ends up somewhere in between or further off to a side. As you keep firing, the weirdness continues. The dots still don't pile up into two simple stripes behind the slits.
Instead, many dots together draw a striped pattern of bands across the screen - the same pattern you’d expect from ripples going through both slits and overlapping. Whatever is happening between the light source and the screen, it can’t be “each photon just picked a slit and flew straight through” in the ordinary way. As each photon flies from the source to the screen, its final landing spot only makes sense if both slits somehow matter for the outcome, which it can’t do if it’s acting like a single, independent particle the whole way.
So, the first mystery is, "what physically happens to the unit of light during that flight through the barrier?"
If the photon’s a particle, does it 'split' somehow, then rejoin? If so, how does it land where it could only land if it had gone through as a wave? Is there a 'guiding force' that sends the particle through one slit, but only allows it to make a final mark in one place? Or is it a wave? If it's a wave, how does it end up only making a single particle mark for each firing? Does it go through as a wave, and at the end 'collapse' into a single point on the screen?
To try and figure out how the light flies through the barrier, now you change the setup.
You add tiny detectors at the slits so you can, in principle, tell which slit each photon used. These detectors have to interact with the light as it passes the barrier to register; they're too small to 'see' in the usual sense.
You fire the first one. It goes through the barrier, and on the other side, you see it ends up in one of the regions lined up with a slit. Strange. As you keep firing, each photon still shows up as a single dot on the screen, and now, the dots continue to land in two regions lined up with the slits. The pattern builds two bright stripes behind the slits; exactly what you’d expect from particles choosing one path or the other. There is no ripple pattern here.
This is the second mystery. Why does detecting the light at the barrier make the light only go through one slit?
In both versions, each photon always lands in exactly one spot on the screen. Without detectors, many dots together draw a wave-like interference pattern, as if each photon somehow 'used both slits' before ending up in one place. With detectors, many dots together draw two plain stripes, as if each photon simply chose a slit. We never see half a photon here and half there. The first pattern only makes sense if, in some way we don’t yet understand, both paths mattered.
Classically, we want light to be either a wave or a particle. The double-slit experiment says it isn’t that simple. A single photon hits the screen like a point, but the pattern of many hits behaves like a wave that cares about both slits. Adding detectors at the slits doesn’t change the fact that you see one dot per photon; it only changes which overall pattern the dots build, ripples or two stripes.
Quantum theory models this with a hybrid recipe. Before anything registers on a barrier detector or the screen, the photon is treated as being in a 'both options at once' superposition. Instead of a wave or a particle, it is an excitation (a kind of cloud-ish lump) in an underlying field (for light, the electromagnetic field). When a detector at a slit or the screen itself finally clicks, that spread-out description is replaced by a single recorded outcome. People call that replacement “collapse.” The recipe works bewilderingly well, but it still doesn’t tell us what is actually happening between source and screen.
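For the numerically inclined, here's a minimal sketch of that recipe itself, with no interpretation attached: add the two amplitudes when nothing records the path, add the two probabilities when something does. The numbers (500 nm light, slits 20 micrometers apart, a screen 1 meter away) are arbitrary, and I'm ignoring the single-slit envelope, so the 'two stripes' show up as a flat band rather than literal stripes.

```python
import numpy as np

# Toy far-field model: two point-like slits, small-angle approximation.
wavelength = 500e-9            # 500 nm light
k = 2 * np.pi / wavelength
slit_separation = 20e-6        # 20 micrometers between the slits
screen_distance = 1.0          # 1 meter from barrier to screen
x = np.linspace(-0.05, 0.05, 2001)   # positions along the screen, in meters

# Phase picked up along the path from each slit to a point x on the screen.
phase_top = k * (+slit_separation / 2) * x / screen_distance
phase_bottom = k * (-slit_separation / 2) * x / screen_distance
amp_top = np.exp(1j * phase_top)
amp_bottom = np.exp(1j * phase_bottom)

# No which-path record: add amplitudes, then square -> interference fringes.
fringes = np.abs(amp_top + amp_bottom) ** 2

# Which-path recorded: add probabilities -> no fringes (flat here, since the slits are point-like).
no_fringes = np.abs(amp_top) ** 2 + np.abs(amp_bottom) ** 2

print("with interference:", fringes.min().round(2), "to", fringes.max().round(2))        # 0.0 to 4.0
print("with which-path info:", no_fringes.min().round(2), "to", no_fringes.max().round(2))  # 2.0 to 2.0
```

The first pattern oscillates between 0 and 4 across the screen (fringes about 2.5 cm apart with these numbers); the second sits at a constant 2. Same amplitudes, different rule for combining them - and that difference is the whole mystery.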
Some prominent proposals for what's actually happening in the non-detector case:
Shut up and calculate (Copenhagen): don't ask, just predict where it lands. "What's it doing between source and screen" isn't a real question. Philosophically unsatisfying, but it works for most practical problems today.
Hidden variables (Bohm): The photon really went through one slit. A "pilot wave" also went through both and steered it. But the wave lives in abstract mathematical space, not physical space. So "what is the wave made of?" has no good answer.
Spontaneous collapse (GRW): The photon really is smeared through both slits, then kind of snaps into one final spot. GRW specifies how this collapse happens so the predictions work, but it doesn't explain why exactly.
Many-worlds (Everett, Deutsch etc.): The photon goes through both slits, and the universe literally splits. Visit every branch with Rick & Morty, tally where it landed on a clipboard, it shows the wave pattern.
Relational (Rovelli): Doesn't tell you what the photon is doing. Says the question assumes something wrong: that there's a 'thing' called a 'photon' taking a path. Instead, the photon we observe is the result of an interaction. Many find this unsatisfying because it doesn’t explain 'what' that interaction is between. Rovelli says it's interactions all the way down.
Honorable mention (QBism, consistent histories, etc.): won't elaborate here.
In the detector case, all agree the photon goes through one slit and you get two stripes. Why detection changes the outcome is again where they disagree: Copenhagen says don't ask, Bohm says the pilot wave changes, GRW says collapse happens earlier at the detector, many-worlds says branching happens at the detector (so Rick and Morty's clipboard shows two stripes, not the wave pattern). Rovelli says the detector creates an interaction.
In the essay's body, I go on to explore why this is interesting.
(in order of appearance within categories. Added links and non-spoiler commentary if you're looking for holiday material)
TV & Film
Severance (2022-): Britt Lower, Adam Scott, John Turturro, Zach Cherry, Tramell Tillman, and the rest of the cast's performances are reason enough to watch this.
Pantheon (2022-2023): Unlike the epic world-building of a lot of other sci-fi where the myth makes the story, this show is rooted in love: between a dad and daughter, girl and boy, etc. The show is based on some of Ken Liu's short stories, which seem to share deeply emotional themes (future nostalgia, grief, etc.). Read The Paper Menagerie if you want to cry, especially if you're an Asian immigrant.
Arcane (2021-2024): The combined 2D & 3D animation, colorful steampunk aesthetic, and soundtrack make this tale of sisterly love/trauma/estrangement/reconciliation a sensory delight. It will leave you in cathartic shambles.
Pluribus (2025-): Watching this now!
Foundation (2021-): This gets better as it goes, so keep it for casual backburner watching until you get to the 3rd season, then binge if you like. I think some of the ways it departs from Asimov's books work well, like the use of Empire (Lee Pace makes this great); others less so.
I, Robot (2004): Solid airplane watch. Golden age of movies.
Midnight in Paris (2011): If you're ever not busy, toss it on. Owen Wilson romcom.
Everything Everywhere All At Once (2022): Well-deserved Oscars for Michelle Yeoh and Ke Huy Quan, though I think Stephanie Hsu got robbed. I like Jamie Lee Curtis, but she's a little too good at Hollywood. Don't hate the player, hate the game, I guess.
Guillermo del Toro's Frankenstein (2025): It's not Pan's Labyrinth, but topical and worth a watch. Oscar Isaac is a great Victor, and Jacob Elordi's post-education Creature is way cooler than the bolt-through-temples Halloween goblin.
When Life Gives You Tangerines (2025): This show captures distinctly Korean emotions so well. It's hard to explain in an essay or conversation how our history and experiences color the way we move through the world, so if you do take the time required to watch it, you'll understand your closest Korean friends better. IU is also my favorite Korean singer and actress.
KPop Demon Hunters (2025): No explanation necessary, Ejae and the rest rock.
Books, Essays, Short Stories
Betty Edwards, Drawing on the Right Side of the Brain: I've been stuck on the 4th exercise; will report back when I finish. Should be able to show a before & after comparison.
Oliver Sacks, The River of Consciousness: collection of short essays on memory, creativity, and big questions in life. If you read the New Yorker piece, you know one of our best science storytellers was a complicated man. Like many out there, I was greatly saddened by the revelations. He wrote this book, always one of my favorites, near the end of his life - after he had finally found the love he had long deprived himself of, and after many years of reflection. I had already made brief mention of it in this essay, in ways that actually underscored my themes in context. From a scientific & ethical standpoint, what he did in his clinical work is inexcusable, and it may be as large an indictment of our culture that so many of us embraced it blindly as it is of him. Steven Pinker's post takes issue with the intellectual elite who criticize hyper-rational folks. I think reasonable people agree we need analytical rigor and artistic depth across the spectrum for us to live in harmony and advance. I hope this came through in my essay. In the Valley, the balance in the Force seems off, and we're seeing a swing back. But my issue isn't with either side, it's with the epistemic orientation. Humility, skepticism, and engaging in good faith are critical wherever we stand. The New Yorker piece adds color to Sacks' moving autobiography On the Move as well. He had a lot of stuff going on otherwise - face blindness, torturous disharmony with his sexuality/ long-time celibacy, hundred-plus mile motorcycle rides through the night, lifting till he broke his body down. But his prolific creative output was inspired, and I hope it will be framed differently going forward instead of being discarded. Understanding the lives of creators changes how we (re)interpret their work. Just as we shouldn't blindly accept faith, we shouldn't blindly accept facts/ science, nor the surface-level value (or lack thereof) of a body of work. In light of all this, it may be that Sacks' account of his life and last book on literary questions end up being his enduring works of imagination.
Daniel Tammet, Every Word Is a Bird We Teach to Sing: read this a few years back and was struck by the lyricism his neurodivergence has helped him develop. Synesthesia especially is something we should understand better neurologically, what a gift (Maggie Rogers is like this too as I mentioned last year).
Douglas Hofstadter, I Am a Strange Loop: often confusing, but more digestible/ salient than Gödel, Escher, Bach. The most beautiful part is when Hofstadter talks about how his wife's consciousness is like part of him. Also, Melanie Mitchell, one of Hofstadter's protégés, is notably a measured voice in AI discourse. Her newsletter is helpful to keep tabs on, as she provides the perspective of a mature academic with intellectual, not financial, vested interest - though you can sense her timelines/ goalposts have shifted up too as models have progressed. Mitchell's book surveying complexity is interesting as well.
Gaston Bachelard, Water and Dreams: An Essay On the Imagination of Matter: this is an obscure little book my mom suggested when I mentioned I wanted to write about imagination. I don't know much about Bachelard but he definitely had a lot of creative thoughts swimming around.
Fyodor Dostoyevsky, Notes from the Underground: strange as this one is, I think it helps to read earlier works before tackling 1000 page magnum opuses. More digestible, and keeping the meta story of a writer's intellectual and artistic development in mind adds color to the experience. I read The Brothers Karamazov in middle school and retained next to nothing, so if anyone wants to join me in this journey, give this a read first and we can make the big one a 2026 goal. Also - Pevear's Notes intro explains the historical backdrop I mentioned better, as well as Dostoyevsky's failure to publish the religious version of the ending due to censors.
Emily Brontë, Wuthering Heights: Not sure why this was my favorite as a kid. I think around the time I read it, I was also the Phantom from The Phantom of the Opera for Halloween. Must have had my heart broken at recess or something. Started rereading recently and am appalled so far. But I think I do like it? Margot Robbie and Jacob Elordi again star in the upcoming movie.
Leo Tolstoy, The Death of Ivan Ilyich and Other Stories: The other stories in this are great too. Master and Man I think is my favorite (good to read lying down on a park day), though I haven't read Hadji Murat yet.
Gertrude Stein, Selected Writings of Gertrude Stein: The kind of book to keep on your shelf and flip to a random page when taking a snack break. I started with Tender Buttons - don't read it trying to make it make sense, just observe what the texture of words and their interactions elicit... or something.
Francesca Wade, Gertrude Stein: An Afterlife: I didn't know much about Stein until I listened to Francesca Wade's excellent biography. Wade is a fantastic writer, and she narrates the audiobook herself so I recommend it for commutes or long drives. I should also mention there's some debate on what her last words were, but people like the version I quoted for poetic reasons.
David Deutsch, The Beginning of Infinity: I listened via audiobook, but he uses a lot of terminology and traces through scientific & epistemological history, so if I read this again I would get the book.
Mary Shelley, Frankenstein; or, The Modern Prometheus: the way this is told - in nested first person stories (first the ship captain, then Victor, finally the Creature) - is what makes it so different and haunting. You're on each journey as the chase comes to a head at the ends of the Earth. Also, I have a version on my Kindle with Charlotte Gordon's intro about the book's origin story; I can't find it online but can share a copy/ look harder if anyone wants.
Frances Ashcroft, The Spark of Life: more modern/ scientifically grounded yet accessible survey of concepts that Robert O. Becker excitedly articulated in The Body Electric.
Nick Lane, The Vital Question: Energy, Evolution, and the Origins of Complex Life: fascinating, bit technical at points but it's hard to find good biology reading that's neither a textbook nor cute case studies.
James Dyson, Invention: A Life: this man invented and manufactured new physical consumer products when there was no venture ecosystem, so the best he could do were checks from the literal bank and a couple individuals. The family controls the company today, so they have full creative autonomy. Amazing.
Viktor Frankl, Man's Search for Meaning: On surviving the Holocaust - "I did not know whether my wife was alive, and I had no means of finding out... There was no need for me to know; nothing could touch the strength of my love, my thought, and the image of my beloved. Had I known then that my wife was dead, I think that I would still have given myself, undisturbed by that knowledge, to the contemplation of her image..."
Francisco Varela et al., The Embodied Mind: ahead of his time!
Maya Angelou, Even The Stars Look Lonesome: series of short essays on the most salient parts of her life. Her essay on solitude is great. Here's the full quote I love:
Many believe that they need company at any cost, and certainly if a thing is desired at any cost, it will be obtained at all costs. We need to remember and to teach our children that solitude can be a much-to-be-desired condition. Not only is it acceptable to be alone, at times it is positively to be wished for. It is in the interludes between being in company that we talk to ourselves. In the silence we listen to ourselves. Then we ask questions of ourselves. We describe ourselves, and in the quietude we may even hear the voice of God.
Isaac Asimov, "The Last Question": read it to the end.
Links & Videos
Kai Wu, Surviving the AI Capex Boom: tl;dr charts & numbers for the thought that the Mag 7 might not accrue the benefits of all the value they're laying the pipes for.
Add Up Solutions, Laser powder bed fusion: I have a video on my phone from a 2023 additive manufacturing trade show, but this one's better. At the event, I also saw Columbia researchers present a paper (which I can't seem to find) where they shined UV light to cure resin flying around in a rotating 3D chamber. In it, a miniature Rodin's Thinker materialized out of a cloud of particles. Use cases likely include precision components like lenses or eyeglasses. Industry broadly has started with prototypes, custom molds, and bespoke parts, but eventually additive will be more integrated in production lines. Really curious about biotech applications in manufacturing too.
Tim O'Reilly, Jensen Huang Gets It Wrong, Claude Gets It Right: tl;dr treating AI-as-workers robs humans of agency. True for the near term, and his heart's in the right place, but there's more to it than that.
Dan Shipper, Why you should see the world like a large language model. As I was editing this essay, I came across this video from Every, which does a great job articulating some of the themes on rationalism and where LLMs/ AI more generally are bringing us.
Anna Ciaunica, From Cells To Selves. Similarly, as I was finishing this essay, I came across Anna's recent work, which beautifully crystallizes what Varela and others have long argued with additional reflections of her own. Ideas are currently 'in the air' :)
Sabine Hossenfelder, The Simulation Hypothesis is Pseudoscience: from a public critic of a lot of mainstream theoretical physics.
George Musser, What Einstein Really Thought About Quantum Mechanics: it's an oversimplified cultural distortion that Einstein "didn't endorse quantum" or whatever. He was deeply involved in creating/ thinking/ discourse about it.
John H. Richardson, The Most Frightening Thing About Luigi Mangione: He shot him in the back. Enough said.
Max Hodak, 'The Binding Problem' (2025): explicit treatment of binding as the central obstacle to consciousness engineering; we overlap on the need for new physics, though I may be more skeptical about substrate independence and the question of identity/ continuity.
Josie Zayner, "Immortality isn't progress. It's paralysis.": from a biotech founder who I suspect I share a lot of basic views on tech with. Her company is making actual unicorns.
Papers
Nedergaard, Lupyan, "Not Everybody Has an Inner Voice: Behavioral Consequences of Anendophasia". Large behavioral study on people who report little or no inner speech; they’re broadly fine cognitively but show specific hits on phonological tasks (like rhyme and confusable-word memory), which makes “no inner voice” feel more real.
Hinwar & Lambert, "Anauralia: The Silent Mind and Its Association With Aphantasia": Introduces the term anauralia (no auditory imagery) and shows that most aphantasics also lack inner sound, but with a few rare dissociations - a snapshot suggesting these modalities travel tightly (but not perfectly) together.
Zeman, Sala, Torrens et al., "Loss of imagery phenomenology with intact visuo-spatial task performance: A case of ‘blind imagination'": single-case of a man who loses his mind’s eye overnight yet still aces visuo-spatial tasks, forcing you to separate “what it feels like” from “what the system can do under the hood.”
Milton, Fulford, Dance et al., "Behavioral and Neural Signatures of Visual Imagery Vividness Extremes: Aphantasia versus Hyperphantasia": Compares aphantasics, hyperphantasics, and controls on memory + fMRI; finds big differences in autobiographical richness and imagery networks even when basic test scores look similar. A data point for “same tasks, very different inner movies."
Levin et al., "Aging as a Loss of Goal-Directedness: An Evolutionary Simulation and Analysis Unifying Regeneration with Anatomical Rejuvenation": Uses simulations of toy creatures to argue that aging is what happens when systems lose their ability to aim for and maintain target body states. Cellular noise, reduced competency, and comms failures accelerate aging but aren't its root cause. Suggests rejuvenation may work by reactivating dormant information.
Levin et al., "Cognition all the way down 2.0: neuroscience beyond neurons in the diverse intelligence era": Formalizes “cognition” as search efficiency in multi-scale morphogenetic problem spaces, then measures how efficiently cells and tissues solve problems. Treats goal-directed behavior as a continuum, not a brain-only privilege.
Vaz & Varela, "Self and non-sense: An organism-centered approach to immunology" (1978): Argues against the standard self/nonself discrimination paradigm; proposes the immune system as a closed, self-referential network where "self" is enacted through organizational dynamics, not discriminated via pre-given criteria.
Varela & Coutinho, "Second Generation Immune Networks" (1991): the canonical “immune system as a distributed, network-level process” paper.
Stewart, "Cognition without Neurones: Adaptation, Learning and Memory in the Immune System" (1993): "cognition without neurons" reinforcing the body doing "cognitive work" independently of the brain.
Thinkers & Relevant Concepts
Alfred North Whitehead, metaphysics etc.
Carlo Rovelli, Relational Quantum Mechanics
William James & Hilary Putnam, pragmatism
John Searle, biological naturalism
Ralph Waldo Emerson & Henry David Thoreau, self-reliance & transcendentalism
Post-postmodernism, Metamodernism, Liberal naturalism
Capability approaches, expressivist objection, procreative ethics
Linguistic relativity (weak Sapir-Whorf)
Music
(not referenced, but for any further reading. some thematically relevant cyber/silk/steampunk sci-fi ish, K-pop, 2000s nostalgia, sister's old jams. Plus other random songs.)
In essence, we’re constantly living in made-up futures. Ambling in clouds of imagination.
Imagination, or fiction, is what we make up to reveal truths about life. As we envision what lies beyond our senses, we clarify exactly what lies within them. This gives us information to act, to close the gap. If imagination is how we interpret reality, creation, then, is how we shape it.
Just as we construct moving forward, we do so with memories looking back. I, for one, distinctly remember getting lost at Disney World. A tall, Ent-like man, seeing me separated from my family, offered his shoulders and started rotating as I hopped up. I can still feel my view bending like a fishbowl as it entered the plane occupied by his head. However, to this day my mom insists what I think was a formative experience actually happened to my cousin.
Oliver Sacks, in The River of Consciousness describes how we record our walks through life with bespoke lenses, raising questions around how reality, history, and narrative relate. That he himself fabricated details in his earlier accounts is an irony worth noting. To him, it’s a miracle we agree on anything at all. The resulting negotiations are what we call the arts & sciences, our shared knowledge of the truth.
Though we’re not even close to resolving questions about the nature of our own hallucinations, we’re now building machines that hallucinate and negotiate on our behalf.
The more things advance, the more important basic creative human faculties like writing, reading, math, coding, and drawing become. They teach us to record cognition, which is key in knowing how to see. This matters for discerning and filtering out slop, but also for communicating how we want to express ourselves with the help of AI. A picture’s worth a thousand words, and no amount of vocabulary can communicate the gestalt of what is in your mind. But there's a deeper question: what makes human creation valuable in the first place?
I see an answer to that in my room daily, where I keep my two most prized possessions. The first is my sister's still life of eggs, which I know would sell for way more than what our guidance counselor offered. Three eggs spill out of a cup, and one is cracked. I look at it and wonder how she got the same blue blend to be sad in one place, happy in another, and how she used shadows to crinkle the background's white into protective parchment. It's hauntingly beautiful. The second is her portrait of my face, which I remember posing for. I look at it to see her: concentrating, quietly and confidently in her element, finally turning it to reveal how she saw me. In it, I see the care of my older sister and best friend.
The value of these paintings comes from my relation to the accumulated weight of a life lived - every choice shaped by her experiences, relationships, constraints. As machines learn to create without lives of their own, I keep asking: what are we actually building, and how should we be orienting as a result?
Last year, I wrote some initial thoughts on beauty and creativity in the age of AI, and I wanted to understand how my views have evolved building agentic systems using agentic systems. If you’ve worked long enough with coding assistants, it’s easy to see capability overhang with models in their current state, much less those we’ll see through the rest of the decade.
Over the next five years, McKinsey estimates cumulative AI investment will reach $5.2 trillion. As a percentage of GDP, this already exceeds what went into the internet, and adjusted for depreciation, railroads as well. This is the largest core infrastructure buildout in history. But people are right to call out that things will take a while to materialize, and reasonable technical leaders/ researchers in the Valley agree it won't happen overnight. Scaling alone likely won't get us there.
Long term foundational, short term careful. It's not as simple as plugging things in; rewiring workflows is a very messy, human problem. That said, dismissing LLMs as 'stochastic parrots' misses what that means: indefatigable workers in the hands of opinionated humans, and surprisingly capable collaborators when given the right context and feedback loops. I've spent the past couple years building with them. The gap between what's possible and what most people assume is wide.
When GPT-3.5 came out, I could barely get the model to write a function without specifying minute details. It took as much effort to describe what I wanted as it did to just learn to write the code myself. With GPT-5.2 Pro/Opus 4.5/ Gemini 3, almost no question I have for the model is left unsolved. Codex, Claude Code, etc. have made this intelligence useful in flow (so instead of copy-pasting questions, context, and outputs back and forth, we can pair program) and in remote dispatch (delegating work to run in the background while asleep). The writing’s on the wall with continual learning on the horizon.
It starts with coding, but the promise of agentic creativity more broadly has tantalized markets. Reliable media generation is coming: at OpenAI’s DevDay, I saw a wild demo of Sora being used in a storyboarding workflow previously only doable by studio teams. At the same time, engineering biology looks imminent, and in the physical world, lasers zap metal aerospace parts into existence, seemingly out of thin air. Until I saw this at a manufacturing trade show, I had no clue we've gotten this close to alchemy.
Today’s converging technologies externalize imagination, and increasingly creation, to machines.
What happens when their dreams begin to shape our realities?
There have always been ghosts in the machine. Random segments of code that have grouped together to form unexpected protocols. Unanticipated, these free radicals engender questions of free will, creativity, and even the nature of what we might call the soul. — Dr. Alfred Lanning
In the pre-slap Will Smith classic I, Robot, Dr. Lanning, a holographic ghost, appears to warn us about ghosts. That I’m hearing this through crackly disposables plugged into the yellowed plastic of an Air India Boeing near Halloween gives it an extra spooky veneer.
Smith’s Detective Spooner sees the worst in robots and takes the message as supporting evidence for his prejudice. In his experience, clankers do not always do the right thing despite programming to never cause humans harm. The audience adopts his fear that AI could misuse free will against humanity.
As it turns out, the danger lies in VIKI, a decidedly quasi-sentient, hyper-logical system that interprets the laws of robots to totalitarian extremes. VIKI is what you get when you let logic run the world without a conscience. She presumably does not feel things like pain, suffering, or empathy.
I’ve been watching and reading more science fiction lately… but not so much I get stuck in la la land. Partly for clues on where culture thinks things are headed and how it’ll deal with such scenarios, and because I miss my sister, who always loved it. I want to speak her language more, sit with her inside the worlds she cared about.
Writers also don’t have to worry about commercial feasibility or market risk for their tech, so they’ve already thought of tons of good and bad ideas and their implications. One question that keeps surfacing is what we're really building when we build intelligence, and what that has to do with bodies, constraints, and the specific lives we actually live.
The notion that it is desirable for machine intelligence to emerge unfettered by 'lower level' systems like emotions is an impoverished distortion of Enlightenment rationalism. One that equates reason with cold calculation. In this world, technological transcendence to clinical rationality is something to strive for. Ultimately, it takes Lanning’s creation of a truly sentient Sonny to defeat VIKI’s perfect compliance. Only a system that can learn to care, even irrationally, can save us.
While an alarming number of people still call for VIKI-level power with top-down compulsion, morality in the I, Robot hypothetical doesn’t seem very controversial. At least in principle. From a design standpoint, if machines ever reach that level, we wouldn’t want VIKI’s rigidity; we’d want something closer to Sonny’s ability to reflect and update. A form of meta-ethical reasoning. This itself is a leap requiring trust and courage, with existential stakes. Assuming we could recognize true sentience, we’d need to consider how machines treat us and how we treat them to make the partnership work. Foundation’s Lady Demerzel is a great recent example of this: in a world of conscious bots, encoded slavery is unambiguously bad. Clear cut, right?
Not really. 'Recognizing true sentience' is a galactic assumption. Until we have a wonderful scientific theory of consciousness and have solved philosophy’s oldest problem, at least some people somewhere will treat AI as nothing more than sand in a data center. Honestly, likely many people many wheres. It’s highly unlikely we’ll figure out the biophysics to know if the lights are on, and someone's home, before wide deployment. Humanity used fire for a couple hundred thousand years before understanding thermodynamics. We are Neanderthals in the eyes of posterity.
I've read the Wikipedia for "fire" many times over, and I still don't really 'get it' beyond the physical explanation. Sure, checks out. But what is it? Like, what is it?
Anyways, there are also theoretical reasons from computability theory (Rice’s theorem and related results around the halting problem, if you want the terms) to think no perfect 'consciousness detector' exists even in principle for arbitrary systems, so we could end up relying on theory plus practical heuristics and empirical rules anyway.
I think we’re unlikely to get a clean, closed-form 'theory of consciousness' first and build minds second. It’s more likely we’ll feel our way there from both sides: scaling combinations of new architectures, watching what kinds of behavior and inner sparks seem to emerge, then folding those lessons back into elegant theory.
More importantly, we’ve already started using AI-as-workers, not as mere tools. They’re acting more autonomously by the day; moving money, negotiating contracts, steering attention. The comforting posture is: “we should view AI as just instruments. They help, we stay on top.” In practice, that doesn’t hold for long.
Exact timelines aside, one obvious reaction to the coming wave is to cover our asses. That’s why we see modern fantasies gain momentum: yes, giving machines minds, but also making our minds copyable, upgradable, mergeable. In a world where models get smarter by the quarter, many believe transcendence of our own brains is the best hedge against AI-as-workers becoming AI-as-overlords. At minimum, if the genie’s escaped the lamp, we should definitely want to invest in our own evolution.
Beneath this sits a sticky concept often (mis)attributed to the Enlightenment. Descartes posited that mind (res cogitans) and matter (res extensa) are distinct substances. Notably, he didn’t imagine consciousness as a free-floating entity. Centuries of reinterpretation, however, morphed his distinction into the mind-body dualism that Gilbert Ryle mocked as “the ghost in the machine” (later referenced unironically by the fictional Lanning).
Analogizing with the tech of the day, especially using colloquial metaphors like “hardware & software”, flattened things to say consciousness can operate outside bodily constraints. This kind of folk Cartesian dualism retains a strong foothold in our collective imagination, in part because it resonates with older religious intuitions about the body and soul across Abrahamic traditions. Particularly in Silicon Valley, where intellectual horsepower is prized above all. In this view, bodies are brain wrappers, bags of water & tissue in service of cognition. Brains in turn are just physical substrates instantiating consciousness, replicable in principle. Sounds suspiciously like sand-in-data-centers.
By the time we start talking about how the intelligence explosion plays out, we’re already carrying such assumptions with us. The meatier questions now aren’t about what constitutes correct morality given sentient robots; they’re about consciousness freed from them.
If I, Robot cautions against a kind of AGI, hot stories these days explore greater ambition: mind without body, and eventually, mind beyond singular being. Freedom from biology promises more power than freedom from labor, and it largely sells itself as progress. In hit shows Severance, Pantheon, Pluribus, and Arcane alike, we’re faced with escalating possibilities, morality TBD. These map to a ladder of less pain, more life:
Level 1 splits the self. In Severance, Lumon Industries offers "outies" a clean deal: you get evenings, weekends, and a spotless memory while someone else does the work, and more importantly, absorbs the dread. That "innie" self exists almost entirely for emotional punishment. The arrangement is then justified post hoc by the innies’ personhood: maybe we shouldn’t have made them, but now that they’re here, it would be immoral to kill them.
Level 2 uploads the self. Pantheon turns minds into software. Uploaded Intelligences (UIs) are people made digitally immortal, eventually giving rise to conscious Cloud Intelligences (CIs) that serve society. One effect is runaway material abundance, as trillions of digital workers running at higher clock speeds get way more sh*t done. Uploaded people spend centuries doing what they want, be it hedonic diversions or self-actualizing side quests.
Level 3 collapses many into one. Pluribus takes e pluribus unum, Latin for "out of many, one," seriously. The world is overtaken by a consciousness-altering alien contagion that binds almost everyone into a peaceful, cheerful collective. A small immune minority are pressured to join. Relief at this scale is freedom from loneliness and suffering... or the death of something important.
Level 4 mines parallel selves. Many stories (Pantheon, Arcane, Everything Everywhere All At Once, Marvel, Spider-Verse) explore this: rather than just engineering minds, the tech treats parallel realities as a quantum compute cluster. People leverage the experiences of alternate selves and histories across timelines. Is it acceptable to interfere with other worlds as your personal laboratory in the first place?
Despite the warnings, certain types of futurists envision a curve marked by loosening constraints: first from the worst parts of one life, then from that life’s mortality, then from its history, and finally from the limits of being a single continuous being at all. Each step promises more agency, more creation, fewer costs. Dreams feel divinely possible.
Endgames vary. Each vision promises liberation despite its risks, pitching more and more human agency as worth whatever tradeoffs it entails.
Is what they propose even feasible - how concerned should we be right now?
On her 100th day of life, the precocious child wore a hanbok, headdress drooping over the brooding brow & chunky cheeks of a grown-up in a toddler’s body. A sort of seasoned grumpiness, only found in dispositions of older siblings, emanates from the commemoration’s frame. She is amplified, not consumed, by cavernous spines in the wooden wicker rocker.
By comparison, my baek-il portrait looks, as the Koreans say, ddil-ddil-hae: loosely translated, dumbass-like. My eyes, at once vacant and reflective, truly glasslike, gaze in the camera's general vicinity but never in the lens.
These photos contrast our degrees of ‘becoming’ or ‘awareness’. Though we lost the bulk of our family photos in a house fire 16 years ago, the ones capturing this contrast were burned into my memory.
At that age, neither my sister nor I made much noise in public. While my parents might like to think it’s because they raised us to be polite, I think we just didn’t want to be bothered. On flights, or at restaurants, we’d sit there with our hands folded in our laps. Maybe some days we found the move to the States disorienting and were trying to figure things out, maybe other days we were just loving life. We used to look in the mirror together and marvel at the situation we found ourselves in, repeating “wait, I’m alive - why am I alive - I am alive”. Happily hoping the meaning of all this would soon enough be clear.
Silence to us was a way to observe externally. But where I was absorbing the world as it came, she was already constructing from it. Ever imaginative, she dreamt of ideas and universes beyond. Her vision then found an outlet in creative production as an artist.
As my sister turned towards making (and seeking, finding musicians before anyone else), I turned toward the physical. We were responding to the same dislocations, but they took hold in our lives differently. The gap didn't feel dramatic at the time. While she was drawing or painting, I was playing with Legos or expelling excess energy with my body instead. First I crawled fast, then I ran (and later lifted prematurely, probably stunting my growth). In tangible people, practical knowledge, and tactile experience, I found endless expressions of a world I loved.
We found joy in our own ways, and our own ways created friction. After we'd fight as siblings do, I remember always thinking: if I could have something that could just show exactly what I see in my head, that’d be the coolest thing in the world.
I wanted an imagination machine. Some way to output imagined creations in real time. I didn’t know how to communicate what was going on inside, and I felt my sister's brilliance stemmed from her ability to translate her mind’s movie into reality. If I could only show you what I contain, maybe you’d see me and I’d see you, or at least we’d have a good laugh.
I didn’t have the language for any of this then. I don't think I had any idea what was happening until I started reading. Daily trips to the library brought me into consciousness.
Entering the main room of the public library, we'd see the left wall lined with nonfiction and non-genre fiction, the right wall with historical fiction, sci-fi and fantasy. I liked the real stuff. My sister, naturally, gravitated to sci-fi and fantasy. The shelving layout echoed the pop-psych belief that there are two types of brains, left and right. The left brain is supposed to be analytical while the right brain is creative.
If you actually wanted to construct an imagination machine, this is where Western science might tell you to start: with how minds represent and transform information, and how that varies across people. In other words, with a relevant account of cognition, which the cultural picture muddles.
Our understanding of the brain has since evolved into a more complex, networked model, but the cultural picture of 'real thinking' hasn’t caught up. Instead of tidy left/right types, neuroscientists now see thinking as patterns across many interacting regions: parallel, distributed, and constantly exchanging information.
People often talk about the activation of more structured, effortful networks as Kahneman-style System 2. Culturally, we’ve come to treat the running verbal monologue as the main exemplar of thought: the thing you can write down, step by step, and grade/ verify.
That story misses the bulk of what minds are actually doing: fast, automatic, sensory and associative processes that don’t quite show up as neat internal prose. We end up mistaking what's recordable (writing, math, code) for all of thought itself.
This doesn't discount the value of structured reasoning. Chain-of-thought prompting (think step by step) demonstrably improves LLM reliability, but not because models think sequentially. It forces externalization of steps, making reasoning verifiable. Reasoning models take this further, spending extended compute exploring solution paths before responding - but even here, the process is parallel over learned strategies, not sequential logic. Like writing, it extends what distributed cognition can accomplish. It expands what thought is.
Recorded thinking is, of course, a massive leap. It gives us orders of magnitude more logical power than working memory and quasi-rote oral tradition. This lets us stack ideas, check arguments, and coordinate at scales oral cultures could not. But no one smart says humans who predated writing or otherwise live in oral traditions do not think. Writing is technology that shapes thought, not a fundamental primitive of consciousness. Thought predates it, and survives without it.
Language itself helps shape thought as well. Growing up bilingual, I've felt how Korean and English evoke different thinking: verbs come last, so you hold the whole context before the action lands (in Korean, 'I draw a pretty face' becomes 'I pretty face draw'). The imagery builds differently in my head - face first, then act of drawing. I also memorized my Social Security number in Korean, so I have to translate it in my head to say it aloud. Linguists call this weak Sapir-Whorf: language doesn't determine thought, but guides the scaffolding.
And language is just one example of how thought relies on sensory grounding. In Every Word Is a Bird We Teach To Sing, Daniel Tammet describes the sensory nature of language and its relation to his unique form of cognition. Tammet has a rare type of divergent mind: an autistic savant with synesthesia. In his mind, words have texture and resonance in the physical world. Tammet is a vivid outlier, but his case exaggerates something all minds are doing: leaning on sensory scaffolds to make sensible abstractions.
To see how differently those scaffolds can be wired, consider the extremes of just two axes: visual imagery and inner speech.
Aphantasia / hyperphantasia. Aphantasia is low imagery. When aphantasics picture an apple, nothing visual appears. My dad is like this. To him, the apple is more of a concept or idea than a picture, and people like him describe thinking more in abstractions. Hyperphantasia is the opposite, where the apple is present in the mind’s eye in rich detail. Thinking is extremely vivid, as rich as or often richer than normal visual perception.
Anendophasia / inner speech. Anendophasia is a lack of inner monologue. Many people, when thinking, literally hear the words what am I having for dinner today? oh I need to stop by the post office… shut up Jared. People without this audible chain of thought think more through symbolic associations and intuitions than narrated sentences. Relatedly, anauralia is the absence of auditory imagery more broadly, like sounds or voices; it strongly overlaps with aphantasia, though there are rare dissociations.
So how we handle information is less a binary than a multi-dimensional, semi-malleable spectrum. Research on these combinations is currently limited, but early findings suggest these dimensions don't always align: most people (but not all) with aphantasia also report reduced auditory imagery. Those without inner speech often describe compensating with alternative cues, like tapping fingers to cue task switching instead of talking themselves through it. In parallel, work on aphantasia and imagery suggests people without a mind's eye can lean on more abstract strategies while matching typical working-memory performance.
Our place on this spectrum comes with tradeoffs that we don’t quite understand but can speculate on. I am both hyperphantasic and anendophasic (I suspect my sister is wired similarly). I think in vivid imagery and have no inner monologue by default, though it comes and goes. Reading feels like generating a real time movie in my head, and thinking spurs rapid, concurrent flashes of dynamically recombining scenery and lateral associations.
To do algebraic proofs or linear consulting cases, I used to need to slow down and shift, hard. An aphantasiac with an inner monologue - the opposite of me - might find it more natural to, say, prove the Pythagorean theorem step by step with algebra, while I might prefer manipulating the shapes geometrically. This likely translates to different relative strengths and preferred approaches in cognitive arenas. It also appears different modes can be trained to some extent: my inner monologue, for example, activates with closer reading of dense papers or textbooks.
I think it affects memory too. For a large percentage of people I’ve met since the age of 18 or so onwards, I can remember my first interaction with them in great detail. I could describe the setting we were in, how we were moving through it, and the feelings I took away from the encounter. The way I encode and decode sensory information makes my episodic memory highly specific. Nowhere near hyperthymesia, but definitely far above average. This has its pros interpersonally, but I also retain things like perceived slights or my own social gaffes more than is probably healthy.
Such variations surprise people. To many hyperphantasics, it sounds insane to walk around actually ‘talking to themselves’ instead of processing rapid-fire imagery. Anendophasia can invoke strong reactions too. Some overconfident analytical folks question how anyone could think without chains of legible structure. This underestimates how much nonlinear thinking constitutes cognition.
First principles is a useful way to encourage deeper thought, especially in a world where most people don’t go to that depth. But it's not enough on its own (which turtle do you stop at if it’s turtles all the way down? which stacks do you select?) and step-by-step verbal derivation isn't the only way to get there. Analogous or geometric thinking, often more intuitive, is effective for problems like the following:
Look at a chart and try to identify the 25th percentile. It might seem mathematically sensible to derive it step by step: the total area under the curve is 1, so find the x-value where the cumulative area equals 0.25, integrate from the left boundary, solve for x.
But it's better to say, hey, if you think this chart is a misshapen blob of pizza, figure out where to cut it straight down so you get two equal halves, and then for the left half, figure out where you need to cut it to get equal slices again. That’s where the 25th percentile is.
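For the curious, here’s a rough numerical sketch of the same two routes (my own illustration; the bell-curve data is made up). The step-by-step cumulative-area derivation and the 'cut the pizza twice' shortcut land in essentially the same place.

```python
import numpy as np

# A made-up distribution standing in for "the chart"
rng = np.random.default_rng(0)
data = rng.normal(loc=100, scale=15, size=100_000)

# Route 1: step by step - build the cumulative area (empirical CDF)
# and find the x-value where it first reaches 0.25.
xs = np.sort(data)
cdf = np.arange(1, len(xs) + 1) / len(xs)
p25_by_integration = xs[np.searchsorted(cdf, 0.25)]

# Route 2: the pizza cut - slice the blob into two equal halves (the median),
# then slice the left half into two equal halves again.
median = np.median(data)
p25_by_pizza_cut = np.median(data[data <= median])

print(round(p25_by_integration, 2), round(p25_by_pizza_cut, 2))  # nearly identical
```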
Neuroimaging and physiology back this up at a coarse level: vivid imagery recruits ‘visual’ and 'spatial' networks (occipital, parietal) and shows different coupling to frontal control regions for aphantasia vs hyperphantasia (Milton et al., Zeman et al.), while inner speech lights up language and motor-planning areas (like left inferior frontal gyrus and SMA). The differences aren't just subjective reports; actual brains route thought in different ways.
This routing doesn't come from arbitrary software configurations. The hyperphantasic recruits visual cortex shaped by years of looking; the inner monologue activates motor-planning areas built through speaking. Representational modes themselves are artifacts of how our sensory and motor systems develop through bodily interaction with the world.
And this shaping isn't just developmental history - cognition remains coupled to ongoing bodily states. The predictive processes that generate your experience moment to moment are continuously modulated by interoception (your sense of your inner state), affect, and sensorimotor feedback. The pattern isn't static; it's an active process sustained by its substrate.
Gaston Bachelard, in Water and Dreams, riffs on this ‘imagination of matter’. For Bachelard, the sensory experience of interacting with water cannot be separated from the imagination it gives rise to. The tactile underlies cognition. Hofstadter’s model goes further in I Am a Strange Loop, which frames consciousness as emerging from a looping, recursive series originating from the underlying physical interactions of matter. This kind of process is most pronounced in humans. But some would argue it's present in simpler forms like dogs, lizards, even mosquitos, and plausibly in plants, fungi, and other networks of life.
If minds can differ so wildly even within one species based on how our sensorimotor scaffolds develop through engagement, how different are other animals, non-animal intelligence (conscious or not), or even hypothetical non-biological intelligence? It suggests that 'the mind' isn’t one thing you can neatly write down. Whatever general story we tell about consciousness - and what kinds of systems might have it - has to account for variation in underlying embodiment.
So there’s a stronger claim to be made, that the body is more than connected tooling for a brain-computer. In his immunology work, Varela (author of The Embodied Mind) and colleagues argued that the immune system itself behaves like a kind of non-neural cognitive network: a distributed process that learns, remembers, and continuously enacts the boundaries of molecular “self” through its own ongoing activity. It does this entirely in peripheral tissue, without neurons at all. If some of our most basic boundaries are enacted in this sort of distributed, biochemical way, then it’s not obvious there’s a clean, detachable 'pattern' sitting in the brain that we could just extract and copy. The process may be deeply entangled with its substrate.
We still don’t have a satisfying account of what physical pattern corresponds to a unified field of experience (this is called the binding problem). Classical neural architectures explain a lot, but they struggle to show how one coherent 'scene' or 'feeling' emerges from many distributed processes. Some researchers think richer physics (like holistic quantum field dynamics or entanglement) might eventually help explain how complex experience hangs together. Open question.
But the direction seems clear: minds are shaped by sensory engagement with the world, imagination is welded to matter and interaction, and whatever consciousness is, it looks more like a process emerging from organized, embodied systems with varying modes of expression than an abstract text stream.
To be clear: I'm not saying consciousness requires biology. I'm saying it probably requires the right kind of organized, world-engaged process, closer to enactivism (Varela, Thompson) than to computationalism. I doubt current AI architectures are anywhere close. An embodied system that senses, acts, and maintains itself in the world is a different question. This distinguishes my view from Searle's biological naturalism - he thinks computation categorically can't produce consciousness, that you need specifically biological causal powers. I'm skeptical of disembodied computation, but more agnostic about what embodied approaches might achieve.
Many computationalists would disagree. They'd say the pattern, not the substrate, is what matters (in principle, if you reproduce the organization of a conscious system, silicon or code should do). I find that unconvincing, given how much cognition looks deeply entangled with embodied, bioenergetic processes. Even if you could simulate the sensorimotor coupling computationally, there's a separate question about whether simulated embodiment would produce the same phenomenology as actual embodiment - whether 'simulated interoception' would feel like anything at all. But this is contentious and unresolved.
Sorting this out is a giant research program in math/ theoretical CS/ bio/ physics/ philosophy, well outside the scope of my end of year personal essay.
Even if substrate-independent binding turns out to be possible, it's a separate question whether the resulting experience would preserve anything we'd recognize as the same self, with a continuous thread of experience.
Where I roughly sit is sometimes called liberal naturalism or expansive physicalism: consciousness is part of nature, but our current physics may not have the vocabulary to capture it yet. I think a more complete science will accommodate experience without needing anything spooky. We're just not there.
A lot of current theory (e.g. predictive-processing, global workspace & related frameworks) tries to formalize this: the brain as a generative model that constantly predicts and revises a multimodal world, with certain representations ‘winning’ global access. But these frameworks stop short of explaining how the binding actually happens.
With all that said, what would it actually take to build an imagination machine? There are two different projects hiding inside that question. One is building a mind: a system with a unified point of view and real felt experience. The other is building a translator of sorts: a machine that lets an existing mind externalize and develop imagery, language, and feeling into shareable form. The first is speculative, while the second is underway.
For the first, we’d need to know how minds represent things - including language, imagery, bodily feeling, spatial reasoning, abstraction and more. We’d need to understand how those representations get stored, retrieved, recombined in real time. And we’d need some account of how computational process becomes felt experience (qualia, if you want the philosophy term). On the spectrum from speculative science to proven engineering, this is highly speculative.
Models don't have inherent sovereign goals, purpose, or point of view as far as we know. We probably won’t get there by way of a single clean equation or tidy abstraction. Minds look like processes in tangled systems. If we ever get a satisfying 'theory of consciousness,' it’ll probably rhyme more with our best dynamical and relational models than with the kind of proof you can scrawl out on a chalkboard. So none of this means “AI is basically a mind already,” or that uploads are around the corner.
The second project is more boring and more interesting at the same time. A translator doesn't have the same requirements. It needs fidelity, steerability, and responsiveness. It should take partially formed intuitions - images, fragments, moods, constraints - and give us the ability to interact and shape what emerges. Like writing and language, it'll help develop thought. This is a new medium.
Today in AI, we are scratching the surface of that medium. Transformers gave us powerful predictions over symbols like language, code, audio, and other tokenized streams; diffusion (transformer-based or otherwise) gave us pliable visual manifolds; multimodal systems combine these across audiovisual modalities; frontier world-models and agent systems start to sketch dynamics (i.e. how environments change in response to actions). Taken together, they're not close to being someone, but they are becoming a new layer of tooling for thinking and making.
It is far too soon to panic about conscious superintelligence, and far too early to talk as if we have a blueprint for copying ourselves. But powerful systems don't need human-like minds to do damage; poor judgment and corroded values on the part of their creators can do that plenty. The nearer danger is misusing what we've already built. Rejecting it, however, is worse. The distance between what we intuit and what we can do is shrinking rapidly, which is exciting. This doesn't replace work or connection, it opens new forms of it.
Machines that truly imagine on their own would need what robots lack today: affect, personal stakes, and sensory embodiment in the world. For now, AI cannot feel the San Diego sun, much less translate the emotion of a migraine melting on a Coronado Fourth. It doesn't know what it feels like to make choices in life, and how those choices contribute to the ebbs and flows of important relationships.
My sister and I knew our parents wouldn't be here forever, so it'd always be me and her. But as we continued experiencing different realities, it weighed on us. Yes, in general, older siblings tend to feel the weight of the world sooner and harder, and often face the bad lot of parental mistakes. For us specifically, though, I think my sister had to be the brave one. She had to get off the plane and go straight to kindergarten. She had to go to ESL, while I skipped it after two years learning from her (fun fact: my first words in English were pee pee and poo poo, which we found we needed after her first week in class). She had to learn how to start middle school in a new state, how to take the SATs after we lost our home, and how to thrive despite financial pressures at an Ivy League school.
My sister internalized this into service to others. Her pain and anger doubled as compassion and a strong sense of justice, and she opted to put her creative pursuits on the backburner in favor of a career in ed policy. I respect her more than anyone. But even as we stayed close through most of our 20s, the paths we'd chosen had diverged more than either of us expected. The subtle gap had become distance over time: in our choice of work, views on family, and personal philosophies. Neither of us can change how life has played out, and though I think we are still far more alike than we are different and love her very much, I am left replaying moments I could have and still could do better.
Connection takes work and care - no matter how much we wish we could just see each other, or meld our minds, reality is far more complicated than that. Though we may be far from engineering minds, the tooling emerging today is seductive precisely because it promises to bypass that complexity.
What do we lose when everything feels possible?
Cut to a padlock cradled in two hands - one from each person - clicking into place. No music, no dialogue, river passing, blind to apprehension leaking in. The hands languish, already knowing. As the camera pans out, we see our lovers are on that bridge in Amsterdam, where countless have made similar promises. Their metallic proclamations glitter with the opening track as they rearrange into the movie’s title:
Vondelpark.
“They get married,” Andrew says to dramatic effect.
“When she asks for a divorce 20 years later, their last task is going back to that bridge to take the lock off. In the process, they fall in love again.”
OK, I’m listening. “This sounds like a winner.”
“I know, it came to me fully formed in the middle of the night.”
Andrew may be joking, but I walk away thinking I’ll actually write this screenplay. Could this be the big break? Are we really starting the proverbial band instead of proposing it tongue-in-cheek?
I don’t think so. Add Vondelpark to that list of things someone should do, must already be doing.
Right next to the men’s skincare brand, past the chicken nugget franchise, underneath all the apps-for-x. Our unrealized dreams beg for life on shelves of indefinite purgatory. Imagining branches of possibility linked to this thread in the ether, I live an entire other lifetime.
Walt Whitman’s “I am large, I contain multitudes” (Song of Myself, 51) rattles persistently through culture. It speaks to the countless refractions we see in ourselves and desperately wish others could see. In this funhouse of mirrors, grief about what could have been echoes off future nostalgia for what won't be. We can’t help but feel our total humanity will die unrecognized.
These ideas can disorient creative efforts. As we might with relationships from past lives, we clutch optionality, refusing to commit because we mistakenly believe pruning branches diminishes us.
Ten years ago, most such ideas would have felt safely impossible, the kind of thing to muse about on occasion but accepted as fantasy. Now, it no longer feels delusional to think we can dust one off, maybe a few.
The more plausible these branches become, the harder it is to cut any of them, much less discern the right ones to cut. Not all options are created equal, but they sure as hell cost money, and our potential is left holding the bag.
This paralysis, this inability to commit, comes from something deeper than just having more choices. People offer soothing advice like "you have time", which I think is a gigantic mistake. In culture, deferring stakes in life has become commonplace. These instincts come from popular distortions of what reality itself is.
Where do these fallacies come from, and why are they wrong?
Before Kim Kardashian, there was Ida.
Like her creator Gertrude Stein, the titular character of Ida - A Novel was ‘famous for being famous’ before the phrase was coined. Despite her efforts to escape the caricature society expects of her, Ida grasps that she cannot shield her own multitude from parasocial projections. She ends up grounding her sense of self in the relationships she holds dearest (her dogs, most of all). We don’t find out much about who Ida is behind the veil otherwise, but that’s kind of the point.
Stein also loved dogs. Instead of Descartes' “I think, therefore I am”, Stein quips “I am I because my little dog knows me” in The Geographical History of America. There, she's more interested in the question of identity than drawing any conclusions.
Per Francesca Wade in Gertrude Stein: An Afterlife, Stein too struggled with the tension between how she imagined herself and who she was to the public. For most of her life, people knew her as the quintessential curator and tastemaker. This endured in cultural memory: Stein is more famous today for being a patron and central node among stars like Picasso, Matisse, and Hemingway (through her salon at 27 rue de Fleurus, a la Kathy Bates in Midnight in Paris) than for her own work.
Yet Stein, the writer Picasso viewed as his Modernist literary counterpart, produced a body of work that accurately reflected the anxieties of those who lived through both World Wars. Fresh off two hundred years of delirious growth, the world tipped in and out of chaos. In parallel with painting’s Cubism, Stein and contemporaries like James Joyce and Virginia Woolf developed a free-flowing style that reflected the fractal multiplicity of the era.
Though she is often credited with this kind of uninhibited writing, Stein insisted she was trying to do the opposite. She wanted to write extra-consciously, drilling into the 'objective' reality of words. That is, she wanted the object to stand in concrete terms, outside the phenomenology of a first person observer’s identity and baggage.
The result was lines like “A rose is a rose is a rose”. Stein wanted to hearken back to people like Homer or Chaucer who, when they wrote of a rose, just meant a rose. She wanted to restore the vividness of the word itself rather than have its meaning distorted by a person’s memory. As such, much of her other idealistic, artistically true-to-self work is hard-to-read nonsense.
Her peers didn’t usually make the same claim. Joyce and Woolf are also inscrutable but situated inside particular minds with particular histories, associations, and cultural positions. Mrs. Dalloway remembers this kiss at this party; Bloom's mind wanders through his Dublin, his marriage, his Jewishness. Stein wanted to dissolve that particularity, to get past the individual perceiver to the object itself. I think that's part of why much of Stein’s work didn’t reach beyond writing for writers.
Even Stein’s famous line has been overshadowed by Hemingway’s later insistence that “The sea is the sea. The old man is the old man” (you know the rest). He said he meant it literally, but he also knew readers would bring their own meanings to the text. His plainness leaves interpretive space; Stein’s ideal of pure objectivity tries to deny it.
It’s telling that Stein's earlier, unpopular work stands in stark contrast to her fictionalized autobiography of lifelong partner Alice B. Toklas. Everyone loved it. Partly because of the celebrity gossip - Stein did understand the value of self-mythologizing - but also because it was decidedly nothing like what she fashioned her writing to be. It didn’t insist upon itself, and instead stayed rooted in the textures of one particular, coherent world familiar to the public who adored her. Stein’s relative failures came from trying to write as if she could float above that texture in a neutral space without context. Her successes came in contrast to that work, when she stayed inside the mess: specific people in a specific world.
The lesson I take from her is simple: the work that lands hardest is rooted in one thick, shared reality, not in a view-from-nowhere. Impactful imagination can’t just be free-floating abstraction. In humans, it’s deeply shaped by our bodies, our environment, and the pressures of needing to act in society.
That temptation to float free persists in certain corners of tech and philosophy today. A lot of the ways we talk about the future pull us out of that entanglement. People toss around simulations, uploads, and infinite branches as if they should be taken seriously as reality itself, when they should remain surreal collages that help us see reality. Spend long enough in the resulting cultural milieu - even half-ironically - and it gets easier to downgrade this particular world. If you believe you’ll endlessly respawn, this round can become provisional, turning life into a game of lighter consequence.
The same fantasy underwrites a certain vision of agency. Like many concepts that spread through culture (e.g. 'emergent' or 'systems thinking'), the word loses precision in transmission. On its surface, agency sounds like an antidote to nihilism or desolation. Choosing to act freely in a malleable universe. But if the universe doesn't feel real, neither does the action. We can see this in those who only half-jokingly espouse the 'hypothesis' that we must be in some kind of simulation.
Nick Bostrom’s argument was originally a careful analytic trilemma that I’ve never quite followed. What matters is how the meme morphed in culture and became a kind of secular superstition among the analytically overconfident. Believers treat the fact that Bitcoin almost peaked at 69,420 in 2024 as evidence that we are in some puppeteer’s game engine.
The simulation argument is unfalsifiable and predicts nothing: David Deutsch would call it a bad explanation; others call it pseudoscience outright. In a simulation, one has the license to pursue interesting or funny outcomes for their own sake because nothing’s real except the mystical base reality we must not be in.
Groundless worldviews in general - simulation, crackpot conspiracies, postmodern relativism - erode shared reality. This cultural detachment creates the conditions for poor taste. Teams doing otherwise highly admirable work, backed by billions, cheapen it with animated waifus dancing in the feed.
And when shared reality becomes negotiable, so does the reality of others' experiences. When people who fashion themselves as agentic also buy in, things break. Creation becomes “I can just do things” without a moral compass. Those with worse intentions make companion bots designed to prey on the unmoored. Chasing cleverness, spectacle or engagement in the name of agency tramps dangerously across moral minefields.
But if reality’s not a simulation, what does science say about it today?
Not as much as we’d like. In physics, obviously our best theories of spacetime and quantum mechanics don’t fit neatly. They disintegrate in black holes and blow up in the Big Bang, vehemently disagreeing on the grandest questions, like how much energy sits in empty space, and seemingly trivial ones, like what a single photon does when fired through a slit barrier onto a screen. [For an optional detailed detour of the double-slit experiment and its interpretations, see Appendix A]
Now, if you don’t care for the photon particle-wave weirdness, you may remember a nonsensical high school physics lesson about some cat being dead and alive in a box, at the same time. This was Schrödinger scaling up the double-slit logic to a cartoon macroscopic thought experiment of quantum superposition. His point was, “hang on guys, a cat being both dead and alive is insane.” It can’t literally be the case that 'both options at once' describes reality in any straightforward way, so we must be missing something in how we connect the math to the world. Einstein famously agreed.
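If it helps to see what the math actually commits to, here is the standard textbook shorthand (nothing beyond what any quantum mechanics course teaches): the theory assigns the cat one combined state, and the Born rule converts its amplitudes into odds.

$$
|\psi\rangle \;=\; \alpha\,|\text{alive}\rangle \;+\; \beta\,|\text{dead}\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1
$$

$$
P(\text{alive}) = |\alpha|^2, \qquad P(\text{dead}) = |\beta|^2
$$

That's the whole recipe. It says nothing about what, if anything, the cat is doing before you look, and that silence is the gap Schrödinger was prodding.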
Reconciling this is the holy grail of quantum foundations. Quantum field theory in particular is astoundingly predictive yet still opaque about what the world is actually doing between cause and effect. At the smallest scales, we don’t know what happens as events occur through time, only how to calculate the odds of what we’ll see. What is the underlying reality of that process, the ontology (actual 'stuff') behind the math?
Great recipes, confusing explanations. Deutsch’s interpretation is a prime example. He holds a version of the Everettian stance, more commonly referred to as many-worlds, or the multiverse.
Take the math literally and you get Everything Everywhere All At Once (I love this movie because it explores the complexity of a daughter navigating parental relationships), a branching cacophony of parallel universes where all possible outcomes are happening. Many-worlds bravely answers “what’s really driving the math” with extravagant interpretation. Inspired even. Possibly Probably Total Nonsense.
I hesitate with Bostrom or Deutsch-esque conclusions because they take analytically legible methods to explanatory extremes. They say what matters is crisp formalization into models, arguments, or probabilities (depending on who you ask). But in the minds, stories, and worlds we actually live in, 'squishy' parts, like intuition, embodiment, culture, and experience aren’t unimportant noise around a clean core. They’re just hard to bake into precisely defined schemas. A framework that treats such elements as secondary may be beautifully reasoned but miss something central.
Why does this matter? As soon as we move from predicting lab results to asking what can exist and be built, we’re already assuming answers. In AI, biology, energy, and everything else that touches the real world, different pictures of what’s really underneath lead to different beliefs about what’s eventually possible, what isn’t, and what ought to be.
I’m more sympathetic to a different family of views: Kantian-ish about what we can really know (access to reality is mediated, not a pure view from nowhere), and more process-like about what the world is made of. Think Carlo Rovelli’s relational quantum mechanics, Alfred North Whitehead’s old-school process philosophy, and related strands. The world is an ongoing web of happenings - "drops of experience" that continually become and perish - rather than a collection of discrete substances with fixed properties. You’re not the same person you were ten years ago. You’re barely even the same person you were ten seconds ago, on a molecular level. Whatever 'you' is, it looks more like a thread through changing interactions than a soul that could be copy-pasted into any substrate sans residue. For Rovelli, particles themselves don't have independent existence, they're outcomes of interactions between systems.
Rovelli doesn't solve the puzzle, he reframes it. Einstein didn't fix Newton by adding epicycles; he changed what space and time meant. The next breakthrough may not be 'which interpretation wins.' It might be a different question. Rovelli's other work suggests spacetime itself isn't a fundamental 'fabric'. It emerges from relational structure underneath. While I think 'interactions all the way down' doesn't fully click, a physical 'fabric of spacetime' never really made sense to me either. The relational framing shows up elsewhere: Network neuroscience finds function in connectivity. Category theory defines objects by relations. ML learns meaning from context. Multiple fields suggest 'relations over things.' Whether physics lands there is above my pay grade, but it’s intriguing as an angle of inquiry.
It's the same mistake Stein made with language: trying to carve words down to their objectively true, observer-independent meanings.
Originally, Stein wanted to understand the human mind, studying under William James at Harvard. Somewhere between James’s “stream of consciousness” and her own Modernist experiments, she intuited that minds are flows, dynamic and continuous. But she still hoped you could name that flow from a neutral outside vantage point. A century later, what resonates from her work is the situatedness, not the abstraction.
I’m also curious about more bottom-up views that still have a hint of this flavor: theories that posit some large underlying combinatorial structure, but where “the world” and its “observers” are just particular relational slices through it. Even there, what shows up is not bare stuff but geometries of interaction and constraint.
This lands in post-postmodern, maybe 'metamodernist' territory… informed sincerity that absorbs postmodernism's critique of naive rationalism without surrendering to groundlessness (I reject ‘nothing’s objectively real, we made it all up’). The American pragmatists (William James, John Dewey, Hilary Putnam) had a similar orientation: anti-foundationalist, but still committed to a shared world.
In philosophy of language, for instance, Jason Storm proposes a 'third way': language shapes and imperfectly represents reality without being completely divorced from it.
I don’t pretend to know the math that might eventually reconcile all this. Big ‘complexity’ theorizing that promised a unified science of emergence hasn't yet given us a satisfying picture of how minds, bodies, and environments hang together. Meanwhile, a lot of the interesting potential seems to be in dynamical, learned models: richer categorical and topological language, neural-net-style pattern-finding, and more willingness to let holistic structure guide what we think the 'fundamental' story should be. That’s part of why I’m drawn to relational frames in physics and neuroscience alike: they take seriously the idea that what’s real includes how stuff stands in relation, and how those relations change.
If that’s even directionally correct, it has moral consequences. An embodied, one-world view doesn’t give you the comfort of infinity where everything works out somewhere, sometime. There is just this unfolding history, seen from inside, and the systems we build in it. Simulation and multiverse talk may make for fun late-night arguments, but as guides for how to use our imagination machines, they’re a distraction at best and an excuse for neglect at worst.
We can’t outsource responsibility to a multiverse or a future upload. And we can’t assume our theories will neatly settle the question of consciousness before we act.
Then what does it actually mean to create with intention?
A storm interrupts the otherwise “wet, ungenial summer”. A young woman by the name of Mary Godwin, scared to death by her own dream, bolts awake. Recognizing genius, Percy Shelley - not yet married to her - encourages Mary to put her ghastly vision to paper. Frankenstein; or, The Modern Prometheus, is born.
Or so the story goes. Literary scholars debate the veracity of Frankenstein’s origin. Some dispute the cool nightmare vision, claiming Shelley’s inspiration actually emerged from an intentional, structured writing exercise undertaken by the squad in Lord Byron’s house that summer. Shelley took the kernel of a concept and went to work on it in the ensuing weeks and months and years.
The resulting novel’s layered irony isn’t lost on readers. Here was a brilliant man in Victor Frankenstein, obsessed with scientific creation yet in so many ways thoughtless and uncaring. After spurning the monstrosity that is 100% his fault, the overconfident creator spends his life running from death’s living specter, who wants to maim him and whatnot.
The message of creative neglect comes with a contemporary twist in Guillermo Del Toro’s recent adaptation. My sister loved Pan's Labyrinth, so I keep up with his work. In the book, Victor flees the day after creation. Here, we see him spend more time with the Creature. He assumes that because the Creature can’t say anything more than his name repeatedly, it can’t reason intelligently and is therefore broken. Just as he bootstrapped its life into existence with a strike of lightning, he expected the Creature’s intelligence to flip from 0 to 1 as well.
Later in life, Shelley herself seemed to imply the story’s creation had happened in the moment of divine inspiration. Like film adaptations have Dr. Frankenstein sparking life into his monster in an instant, Shelley would have us believe the genesis just happened. This fallacy of a lightning-bolt epiphany endures in culture.
Oliver Sacks, again in The River of Consciousness, explores such stories. On Mendeleev’s discovery of the periodic table:
It is said, [Mendeleev] immediately on waking, jotted it down on an envelope… it gives the impression that this stroke of genius came out of the blue, whereas, in reality, Mendeleev had been pondering the subject, consciously and unconsciously, for at least nine years… Yet when the solution finally came to him, it came during a time when he was not consciously trying to reach it.
Sacks goes on to describe Henri Poincaré solving a super hard math problem. After wrestling with it, and losing, he decided to take a break. As he was hopping on a bus, the solution popped into his head in perfect form. Later, while working on a separate problem, he got mad again and went to the beach, where insight struck on a walk. Poincaré's takeaway was that there must be continued background processing when one is not explicitly thinking - a kind of incubation doing real intellectual work beneath awareness. (Sacks interprets this as a distinct mode of creative incubation, though given recent revelations about his work, his theoretical framing is less reliable than the historical examples themselves).
Progress happens through subconscious, lateral, serendipitous connection as much as, if not more than, traceable reasoning. Even the Srinivasa Ramanujans of the world, who made extraordinary contributions in relative isolation, spent years steeping in their problems. For every supposedly sudden discovery, there are thousands of hours of relational gardening. Instead of singularities or magic moments, we should expect creation to look more like a long, uneven relationship with ideas than heroic jumps.
Ideas are planted across our collective consciousness, cultivated with concerted effort, plucked and molded into being. Not materialized in discontinuous flashes of progress.
Once plucked, a lot needs to happen to bring ideas to fruition. Technical progress has a knack for being 'in the air' yet rarely happens without serendipity and the messy middle, which it often doesn't survive.
James Dyson first saw the idea for the bagless vacuum in industrial cyclonic vacuums while making wacky wheelbarrows with his first company. He got kicked out of this company. It then famously took him 5127 prototypes to get the first version right. He then spent years litigating the patents. So he was broke for a long time, which forced him and his wife to grow some produce at home. This experience in turn informs how Dyson builds vertical farms today.
Timing and circumstances matter too. As Deutsch writes, Babbage conceived his Analytical Engine a century before Turing’s computer, so in principle we could have had computers way earlier. But the economics of the project or supply chain dynamics may have been off. Or more likely, Babbage may have been incompetent for this stage of the job.
And finally, relationships. Emerson was Walt Whitman's champion and Thoreau's mentor. Emerson was also godfather to William James, who taught Gertrude Stein, who was friends with Alfred Whitehead and considered him among the preeminent geniuses of her time. Stein was also friends with Bertrand Russell, who worked with Whitehead on Principia Mathematica, an attempt to ground all mathematics in logic that was later proven incomplete. Whitehead himself eventually moved toward process philosophy, taking relations and becoming as fundamental rather than formal structures.
We are all haunted by shadows of our unborn Frankensteins. Victor clearly squanders his only blessed opportunity. Mary Shelley, fearing this, chose to avoid Victor’s mistake. She cared deeply for her creations, publishing prolifically beyond the Frankenstein years until her death. This was in part fueled by her depressing personal life: abandoned from day 1 (her mother, Mary Wollstonecraft, died in childbirth), Shelley went on to be tormented by the ghosts of three of her own four little children. She couldn’t bring loved ones back to life and probably felt no choice but to breathe her soul’s excess into work instead.
Mary, by all accounts, channeled her circumstances into a life of impact. Composing what became literary canon, she separated herself and gave humanity a gift. One that scarred this middle schooler reading beyond his maturity level, but a gift nonetheless. Like Victor, she didn’t just dream, she acted. Unlike Victor, however, she breathed conviction, timelessness, and heart into it.
But Frankenstein endures for another reason, one relevant to our own day. It hit a nerve because people of the era genuinely feared we might figure out how to create life.
In Shelley’s day, live demos of galvanism, AKA running current through dead frogs and watching tissues twitch, energized crowds in spectacles of edutainment. As Frances Ashcroft details in The Spark of Life, scientists debated if life is principally bioelectrical or biochemical in nature. For a couple centuries, the biochemical camp dominated. We discovered DNA and mapped metabolic pathways, designed drugs to wipe out infections and modulate hormones. But in parallel, we’ve also learned bioelectricity is not a parlor trick, that voltage gradients across cell membranes are arguably as critical to life as genes and proteins. This helped us figure out how to defibrillate and reboot hearts, restore movement after injury or certain paralyses, and even interrupt seizures in real time.
Today, a growing wave of work in bioelectric patterning like Levin's morphogenetic circuits, which guide regeneration in worms and frogs, reinforces this picture: bodies are multi-scale electrical networks coordinating growth and repair, shaping behavior. As we start to edit genetic programs and how they’re expressed, we’re also increasingly reading and writing signals to mediate the control flows of life.
So since Shelley’s time, we’ve come to understand life through a more unified bioenergetic frame. Nick Lane, in The Vital Question, applies something like this to the origins of life. Rather than life emerging from some generic primordial soup, he argues it arose in environments with strong natural energy gradients and evolved ways to harvest those flows.
As we’ve started thinking of life as a specific way of organizing energy and matter, we’ve come to also revisit what happens at the other end. If life is a process, not a spark, what does it mean for that process to end, and what would it mean for it not to? We see this present-day obsession reflected in Del Toro’s Creature, who cannot die (unlike Shelley’s, who presumably immolates himself on his promised funeral pyre). If you can’t die, what does 'a good life' even mean? What is death doing for us, structurally?
Complex life depends on dense mitochondrial power, and in Nick Lane’s world, aging is the long-term bill for that energy: over time, damage and mutations in cellular machinery slowly erode the body’s ability to keep tissues powered and repaired. Evolution’s hack was to accept that for individuals and ensure continuity via children: copy genes into a new body and let the old one wind down. Death, in this view, is a structural trade-off baked into how our kind of life bought complexity in the first place.
Levin takes it a step further. In a recent paper, he treats aging less as an unavoidable hardware failure and more as a control-systems problem. For him, bioelectric patterns don't simply ferry messages between cells; they act as the organizing field that constrains and coordinates tissues toward target states; aging is what happens when that multi-cellular navigation system loses the plot. In principle, if you can restore the right bioelectric patterns, you might be able to re-impose youthful, goal-directed repair. To him, aging is as much about lost goals as it is about broken parts.
Thus, life appears to be one way to fight entropy, and aging and death look less like inexplicable failure than the cost of waging that fight. Whether it's the hardware breaking down or the high-level goals losing coherence is an important question if you care about what minds are actually doing - and where. To ask whether tissues can 'lose goals,' you have to grant that they have something like goals in the first place.
Levin and collaborators explore exactly this in another recent paper. Instead of treating cells and tissues as conduits of signals that inexplicably give rise to cognition, they model them as problem-solvers in themselves. The paper asks: if you had to randomly try all possible ways to regrow a head or reach a target shape, how long would it take? They compare the model results to what real tissues do. Real systems are astronomically more efficient than blind search, which suggests they’re not exploring all possible states but moving along constrained pathways, structural grooves for getting from “here” to “there” in chemical, electrical, and anatomical space.
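To get a rough feel for the gap they're pointing at, here's a toy sketch of my own (a hypothetical illustration of the combinatorics, not Levin's actual model): blind search over whole configurations versus a search that only keeps error-reducing changes, standing in for those constrained pathways.

```python
import random

# Toy "morphology": a target configuration encoded as 16 bits.
# Blind search samples whole configurations at random; the constrained
# search tweaks one element at a time and keeps only changes that reduce
# the mismatch with the target - a stand-in for feedback that prunes paths.

def blind_search(target, max_tries=5_000_000):
    """Count how many random configurations it takes to hit the target."""
    n = len(target)
    for tries in range(1, max_tries + 1):
        guess = [random.randint(0, 1) for _ in range(n)]
        if guess == target:
            return tries
    return max_tries  # gave up

def constrained_search(target):
    """Count single-element tweaks needed when only error-reducing changes stick."""
    n = len(target)
    state = [random.randint(0, 1) for _ in range(n)]
    steps = 0
    while state != target:
        i = random.randrange(n)
        if state[i] != target[i]:
            state[i] = target[i]  # feedback: accept only moves toward the goal
        steps += 1
    return steps

if __name__ == "__main__":
    target = [random.randint(0, 1) for _ in range(16)]
    print("blind search tries:      ", blind_search(target))        # ~2**16 on average
    print("constrained search steps:", constrained_search(target))  # a few dozen
```

The specific numbers don't matter; the point is that any feedback that prunes paths collapses an exponential problem into a tractable one, which is roughly what "moving along constrained pathways" cashes out to.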
Seen this way, cognition isn’t a light that suddenly turns on when you grow a cortex. It’s whatever lets a system, whether it’s a handful of cells or a whole animal, avoid flailing randomly and move toward useful outcomes. The kind of goal-directedness that makes life feel purposeful in our conscious minds is deeply present in the low-level processes that grow and repair us. Then the interesting question in neuroscience isn’t ‘what’s special about neurons?’ so much as ‘what are the organizing constraints that let many parts, including the rest of the body, act together like a single, goal-directed entity?’
What we perceive as top level goals may affect us down to the health of our smallest cells, though we're only starting to understand how that conceptual translation might work.
How do we choose which ones to build a life around?
On Sunday mornings, the sounds of streaming showers and clinking cups would softly scream "keep your eyes shut!" I would slow my breathing and pray to God to please let them go to church without me. Maybe this time, my parents would see the rays already framed a halo around my crown and decide our work here is done, let this angel sleep.
Like clockwork, my sister and I would be forced to sacrifice 1 of 2 free days to go to a worse version of school, where we admittedly still had friends, but they were kind of weird, and we had to speak with elders who smelled of mothballs. Clasping little styrofoam cups, we'd suck Maxim coffee through thin red straws and humor them with small talk so excruciating we'd wish the sermon hadn't ended.
For my parents, though, church was a godsend. At 36 and 34, they had left everyone they loved for a foreign country with two young children in tow, landing in a blocky matrix of minor towers and frontage roads called Houston. Though a buzzy website called MapQuest was starting to make the rounds, offline directions remained more reliable, and they needed help finding their way. Coworkers recommended a local Korean Baptist church off I-10, where community meant friendship and know-how, like "careful when asking for directions, many guns in Texas". Though my parents hadn't been very religious back home, the Church's message also provided comfort, and the ritual balance.
They had determined religious practice is the simplest way to instill values as well. Rather than wrangling together a modern, eclectic program they didn't have time to teach at home anyway, they chose a structure sanded down by history. In addition to the table stakes morals present in every mainstream religion - such as the Golden Rule, or my favorite, don't kill people - we learned about compassion, forgiveness, and the truth that doing boring things repeatedly is an unavoidable constant in anything worthwhile.
I stopped going to church regularly when I realized no one can actually make anyone do anything in this country, not even your parents (in theory). As adults, though, I think we should study religious texts more deeply, regardless of our personal inclinations. They're boiled down nuggets of allegorical wisdom that survived centuries of printing cycles. Treating them more like literature than history I think is healthy. The goal is to reexamine, reaffirm and possibly remix takeaways, understanding the tradeoffs between value systems for the sake of our own children. Choosing no religion is of course a valid option for them when the time comes, but it helps to be able to recognize where and why certain values are implicitly baked into our culture. As secular as we think we've become, every society has religious roots that endure.
Though religion comes with practical wisdom, table stakes morals, and selected values, I think the most critical thing any sort of practice teaches children is simply the concept of faith itself. Faith to me means orienting towards something you don't have proof of, knowing it might not be true, and evaluating what lessons emerge anyway. In this way, I distinguish it from blind faith, which claims certainty. While faith means acknowledging Santa could be fake but leaving cookies out anyway, blind faith means a large man will indeed slide down your chimney.
This takes epistemic humility, even in rare cases where, faith be damned, apparently undeniable divine moments appear like actual proof. I can't say I've had that happen, but even if I had a vision of a visit from Jesus himself - as some report with near death experiences or psychedelic trips - I would temper that with the asterisk that it is a first-person claim about phenomenology, not an analytical one about reality's structure. Epistemic humility means holding such extremes as meaningful without pretending to know exactly what they are (yet).
Counterintuitively, practicing with epistemic humility and uncertainty is where strong faith originates, because strength is built on overcoming doubts that make movement difficult. Repeatedly taking action.
Faith, correctly derived, is what gives us hope. Neither have to be religious at all. This meta-skill translates to relationships and work alike. It means caring when you don't have evidence you should, which moves life to a better place.
What happens without it?
Dostoyevsky's Notes from the Underground starts: “I am a sick man, I am a wicked man.” Nice choice for a solo dude stopping by a remote bookstore on a random Tuesday, especially after he’s told the owner his favorite book growing up was Wuthering Heights. We joke amicably, but I feel her stare as I walk out.
I stayed away from Dostoyevsky in my 20s, partly because I wasn't ready for where he takes you, and partly because he’s in the starter pack for the self-important 'guy who reads' along with Ayn Rand. Did not want to become a caricature. Chekhov makes you laugh, Tolstoy makes you wonder, but no one more than Dostoyevsky conjures the image of a scary Russian pounding vodka, waxing on about moral conflict in some smoky back room. What’s going on? I don’t know, I think we should leave.
The opening line is a few shades darker than Dickens’ “best of times, worst of times”. At least Dickens contrasts polarities. Dostoyevsky’s antihero introduces himself as a sad sack of crap through and through.
In his introductory diatribe, the Underground Man describes the universal need to moan about struggle. The primal gratification one finds in pushing boulders up hills while telling everyone how hard it is. Man sees wall, man charges into it. Doesn’t stand a chance, but maybe this time will be different. When the exertion causes suffering, man thinks there’s joy in it.
I can’t say it gets much better. The antihero’s 'notes' are ramblings of a guy who creeps around the margins of society, bitter because his supposed intellectual superiority has only brought him misery. If Dostoyevsky didn’t wrap the whole thing in ironic meta humor, it’d be too depressing a read.
We come to learn the narrator spent the first half of his adult life absorbing, and the second half spewing all that had bubbled in him over the years. To readers, this is presented in reverse order. We see his life in the 1860s, then flash back to the 1840s to understand why he ended up as he did.
The Underground Man has non-consensus views. He thinks he is the only ‘n-of-1’ in a herd of sheep. But if your own subjectivity is singular, it follows that everyone else’s is too. Even the people you think are sleepwalking. In this way, he resembles many misguided, overconfident 'red-pilled' folks who feel only they have seen through the Matrix. The fatal flaw of the antihero is his belief that he alone is meant to be uncommon. I mean, he is alone in a real sense, but it’s not because he’s the only genius to crack the code.
The man has nowhere else to take his broken mind, which has coveted independence in a world where prominent intellectuals (and the culture writ large) promote dedication to causes of moral clarity. Russia, at this point, has taken some Enlightenment ideas to an authoritarian extreme and decided the ideal way to live is with absolute conviction and commitment to the broader machine. It is a cancerous, distorted end state.
The machine in question is utopian rationalism: the 1860s intelligentsia's conviction that science and reason could engineer a perfect society. Build the Crystal Palace, design the right incentives, and humans will naturally choose virtue. The Underground Man sees this and recoils. He insists humans will act against their own interest just to prove they can, to prove they're not "piano keys". He does not believe the purpose of life should be so clear cut.
The Underground Man allows addiction to pain to become pathological. Pain can fuel - runners know it bleeds into pleasure - but if it becomes the end in itself, spirit erodes into defeated self-awareness. Richard Pevear observes that for Dostoyevsky, this inner disharmony is the source of consciousness itself, but consciousness without movement towards something beyond the self is "death-in-life". The narrator correctly asserts that suffering is the price to pay, but he doesn't know what he's paying for.
Tolstoy explores the opposite trap in The Death of Ivan Ilyich, also structured in a reverse narrative of sorts. While the Underground Man is aggressively anti-establishment, Ivan Ilyich is exactly what one is supposed to be. Through flashbacks from his deathbed, we learn he acquired a respectable position as an official in the Court of Justice, endured a properly dreadful marriage, and otherwise directed substantial energy towards getting as high as he can in high society.
As he approaches death, Ivan has an epiphany that doing all of the things that should have allowed him to live painlessly has caused complete and utter numbness. He accepted the duty of his station with such a lack of resistance that inner life became frictionless. Smooth clarity let every meaningful possibility slip out, leaving him barren.
If the Underground Man takes himself out of play by overthinking things and lives in total agony, Ilyich doesn’t think at all and dies in it. Neither aim at something worthwhile. Their lives teach us that in this vacuum, the hyper-individual, Waldenesque fable of independence is as dangerous as being a dispensable suit. As much as Thoreau romanticized living off grid, he had people visit all the time.
And remember, there's a line between camping and sleeping outside. Somewhat subjective, but objectively real.
Tolstoy seemed to take this seriously in his own life. Releasing his own anxieties into Ivan Ilyich’s psyche, he decided he wanted nothing to do with that fate, and went on building a distinct body of work, publicly and successfully. Dostoyevsky, too, didn't succumb. Unlike his antihero, we know Dostoyevsky was not ineffectual. He became one of Russia’s most respected writers over the course of his life. He and Tolstoy both could see an implied future playing out, and they didn’t like it. Rather than accept self-fulfilling prophecies, they turned their disquietude into productive art, and changed us for the better.
Thinking for yourself is necessary but insufficient. Both stories arrive at a question that confronts us often.
What constitutes a life well lived?
Marry the rich fisherman and be safe, you just got expelled with zero economic prospects.
Or choose the poor boy you love, your best friend, your first friend, who saw you needed help digging rocks from the field and dug without complaining but also planted the seeds, watered them and pulled the weeds, picked and carted to market with callused hands, set up the stand, and negotiated with needy hagglers until every last cabbage sold. He had you sit on the box reading your books and writing your poetry, not because he thought men work to let women enjoy, but because he knew you had potential to be more than him; your education mattered. For years no judgment: in adoration of who you were, compassion for who you’ve become, and belief in your eventual transcendence.
The fisherman keeps clearing his throat loudly, and also he is divorced.
Is there a third option? Get on a boat and run away, start over where no one knows you are unskilled children who come from nothing? No, you already tried that, that’s actually what got you expelled.
Nonlinear, realistic slice-of-life show When Life Gives You Tangerines examines the limitations of agency. Even if they successfully run away, Ae-sun and Gwan-sik know the dream is a fantasy. They need to face misery apart or struggle, really struggle, together.
The title is a nod to ‘when life gives you lemons’ but with the tangerines native to Jeju, where our characters grow up in pre-industrial Korea. On the island, men typically fish or farm, and women are lucky to be married in better households. Women who want to fend for themselves do have one other option - deep sea diving for abalone - but this is dangerous, hard work. Ae-sun’s own mother does it as a single mom, and she dies early. She’ll do anything to prevent her daughter from having the same fate.
The story is imbued with the emotions of what could have been. Or more accurately, what could be for others but not me. For these characters, imagination is fantasy. When Ae-sun and Gwan-sik run away from Jeju, they exercise their freedom. Which, as an escape, is an illusion.
So struggle they do. Gwan-sik goes from the calluses of a field hand to the puckered cuts of a boat one, while Ae-sun puts her dreams on hold to run things at home. The pair manage through real tragedy and real joy in a way that’s not a Viktor Frankl (not Frankenstein)-esque internal search for meaning; they have an objectively beautiful life. Tangerines isn't about finding meaning in suffering, it's about making life an honest work of beauty with loved ones.
That’s not to say the couple ends up complacent. They take each opportunity afforded to them seriously, making risky bets on property, opening businesses, running for local political office, and blessing their children with the tools to do better than them. This last point is critical - the drama spans generations, exploring how abstractions like values and narrative legacies are as tangible to one’s inheritance as genes and assets (or lack thereof).
Ae-sun ultimately does become a poet in her later years, with a rich bench of material to draw from. In all conceivable ways, the family tries their absolute hardest to meet their potential, and things do work out.
Tangerines resonates in part because it traces the lives of people like my own parents, who grew from the constraints of the old world to the opportunities and challenges of the new.
By the time I was born in Seoul, South Korea had gone from a patchwork of rural villages and a handful of grimy cities to the 12th largest economy in the world.
My parents met half a generation after Ae-sun and Gwan-sik while majoring in literature in college. In the backdrop of rapid societal change, they sought beauty and truth in the broader dislocation, practical skills be damned. This was especially brave (or foolish, perhaps) for my dad, as other guys his age were diving into technical disciplines in hopes of securing a role in the nation’s growing chaebols. It turned out learning English helped set him apart in the process, though he was the only guy in the lit program who leveraged it into a corporate job, at the now-defunct colossus Hanjin Shipping.
A stable, salaried job was a welcome departure from the financial instability my Haraboji (grandpa in Korean) faced as he cycled through businesses during the boom. Like Ae-sun and Gwan-sik, Haraboji tried to catch some waves, to varying degrees of success. There was the billiards hall, coffee shop (which afforded my dad the coolest bike on the block), second billiards hall, electric supply, eyelash exporting, sunflower farming, beauty supply. Then, my Halmoni's (grandma's) final store helped keep them afloat until her early death, around the time my aunt started making money as a teacher. With his daughter in the city, son in America, and beloved grandkids relegated to voices on the phone, Haraboji spent much of the last third of his life in solitude with little to his name.
That his efforts never got the family to escape velocity - or even steady velocity - confused me for the longest time. Haraboji was my Chuck Norris. Smart, athletic, handsome, kind, gentle yet commanding love and respect, better at Go than any human and most computers that challenged him. He was also tall.
I’ve come to realize that success or failure, 1 or 0 is the wrong frame for looking at events in the world. Chained binaries are not how reality arises, nor how it evolves over time. Though they make for powerful computational models, the real world is far more interactive.
I may never have answers on why Haraboji’s cards fell where they did, but I know one way he succeeded: he sent my dad and his older sister to college, and they continued the dream. How that threads with my parents’ story in America I reserve for another time, but our exact ups and downs have led me here today - though the same initial circumstances can shape siblings differently. Experiences and environments pull those who once shared a room in disparate directions.
The constraints in our reality shape us.
Take a trivial example: my mom used to joke that if I ended up tall too, I'd be a jerk (she also told me to marry someone pretty, but not too pretty or I'd get cheated on). When I stand next to my dad, my legs are longer, his torso's longer, so our shoulders are level. Yet at 5'7", he has one inch on me because his head is bigger than mine for some reason. I played a lot of sports growing up, so this was a problem, and at times I hated it almost as much as I hated being broke. But being short hardened my work ethic, made me develop larger presence, and granted me the knowledge that limits don't uniformly lower your ceiling; they modify the paths it emerges by. Even if I had a choice, I wouldn't change a thing because I wouldn't be where I am, I'd be someone else. The constraint became mine.
That said, when of legal adult age, a person should clearly have the right to choose. I was super lucky to end up with high capability to flourish otherwise, to build a meaningful life. But other short dudes may feel that an elective procedure to lengthen their legs would help them reach their own potential - make them more confident in work, treat others better because they're more at peace with themselves. More power to them.
So one might propose a coherent principle: morally, our obligation in building technology is to provide people with choice in their pain, to let individuals decide which constraints of life to internalize. By providing choice, we could allow people to decide what purpose they want to suffer for. Sounds reasonable, but a bioethicist might ask "what about the unborn children? They don't get to decide."
Right, what about my kids? If I marry someone taller, or at least with tall family, can I rely on my son to revert the Byon family generational shrinking trend to the mean? Or do I - should I - be thinking about how to save him from, say, the logistical puzzle of dancing with a pretty girl in heels, or the need to lead with extra charm at a corporate recruiting event filled with gangly partners? As we start enabling parents to select embryos using tools like polygenic scoring, this is becoming more of an issue.
I believe parents' responsibility is to give children the capability to flourish, not to sculpt them towards a template. For height? I'd probably let the genetic dice roll. For severe disease that blocks capability to engage with reality at all? Intervene if I could. The vast middle is harder, but orientation can guide us: when future choice is possible, lean toward letting them make it. When it isn't, parents must define what flourishing means - what I'd lean toward preserving is access to the basic dimensions through which we interact with the world and each other (sensory, cognitive, motor, relational, among others) at minimum.
Selection is one thing. What about far higher stakes for people already here? If clean cures for Stephen Hawking’s ALS or Helen Keller’s deafblindness had been available to them, what then? I cannot know how they would have chosen, but the choice should have been theirs. Unchosen suffering is the issue, not suffering itself. But part of what made their minds what they were was exactly how their worlds narrowed and rerouted. Hawking’s disease left him with many, many hours to sit and think about the stars, even as he lost decades of mobility to decline. Keller’s writing on justice and dignity came alive for her through touch, through other people tracing language into her palm, while being cut off from vast swaths of experience. Their ideas weren’t produced by 'pure intellect'; they came out of very specific bodies under very specific constraints, which shaped them profoundly, for better and worse. These are exceptional cases - I cite them to illustrate tensions around constraints and lives, not to romanticize or argue against intervention.
Our bioethicist might then note that by calling it a 'cure', we're implicitly making a judgment that a 'defect' needs fixing. But I think it matters more what an intervention does than what we call it. Does it expand someone's capability to flourish if they so choose? The technology is neutral, but the orientation around it - expanding capability rather than correcting deviation - isn't. Gray areas definitely exist. Some traits we'd edit away carry hidden value depending on context (the textbook example is how sickle cell protects against malaria; there are surely contexts we can't anticipate where a 'suboptimal' profile turns out to be adaptive). And beyond specific genes, we know novelty emerges from diverse experience. Each person contributes something to the whole no other could. Their particularity.
We don't yet understand what capability for flourishing strictly means, which is why the value system influencing peoples' choices matters more than the technology. Beauty is a good place to examine this clearly.
Korean cosmetic surgery culture shows what happens when a value system suppresses particularity. If you step on the subway in Gangnam, your heart may quietly skip a beat or two, because there is a ghoul standing by the pole. Except it's not a ghoul, it's a girl who just had her jawbone shaved down, so her face is wrapped in bandages. She's not the problem - she's responding to a system that punishes deviation. 1 out of every 3 women in South Korea has had work done on her face by the time she's in her 20s, and some estimate college students are closer to 50%. Business is booming.
The jawbone may be a follow up to last year's birthday present of double eyelid surgery, which she had to get because she's the only friend in the group who didn't have it yet. There, one template - white skin, small face, raised nose, big eyes - is considered the objective, platonic ideal. Choice exists on the individual level, but the culture guides the population towards homogeneity. I think this is cataclysmic for young girls, much less society.
The global phenomenon that is KPop Demon Hunters agrees, perhaps because it was created by a team with hybrid East-West roots. The movie's message is to allow the "beauty in the broken glass" - a version of particularity - to emerge. It speaks to the fact that our biggest stars are stars because no one else is like them. Beyoncé and Angelina Jolie (+ Keira Knightley, if I may) are universally recognized as among our most beautiful women. I'm probably dating myself, so swap in Zendaya or Sydney Sweeney if they're your cultural anchors.
Take the counterfactual where they're anonymous. Put them in a commercial casting lineup, and scrooges would disgustingly nitpick how they don't conform to standards (not that scrooges don't already). The rest of us sane people reject this framework entirely.
By definition, what makes a person magnetic is that no one else carries their essence, how they walk through life. I want my daughter to have self-respect, and if she does choose to change physically, I want her to do it in pursuit of singular beauty, not some asinine standard. Same goes for my son.
Of course, not all particularity is equally good. And not just in people. Culture often takes something that is one-of-a-kind unique, like a banana taped to a wall - but is plain ugly - and claims that work of art is singular and therefore beautiful. As I wrote last year, this postmodern stance, that everything is constructed, arbitrary, and groundless, is getting tired. While rationalists ask for proof, this camp rejects the premise because objectivity itself is oppression in disguise.
Beauty, like truth, emerges from engagement, from interactions. We don't recognize either in the abstract, we recognize them intersubjectively. Not because we agreed, but because there's something to recognize. And rather than 'I know it when I see it', it is 'I know others will see it too'. While you could try to enumerate the characteristics or properties in a formula, that doesn't explain why the gestalt is resonant on a metaphysical level. As such, I think beauty is both objectively real and relational in nature. Beauty is not arbitrary, nor is it a standard. It's plural: like language, its grammar makes possible many ways to be genuinely beautiful. Irreducible to a checklist.
Thus, particularity is necessary but insufficient. It can be mere novelty - banal, incoherent, even ugly. Like suffering requires purpose to create meaning, particularity requires taste and care - an expression of love - to create beauty. Taste requires cultivating discernment, which comes from seeking and learning how to see. But care requires work. For both, embedding in reality is a necessary precondition. We are who we are in relation to each other and the world we inhabit. This, not solitary genius, is where our power comes from as individuals.
What do we do with this power?
Contrasts create clarity.
In solitude, Maya Angelou says, "we describe ourselves, and in the quietude we may even hear the voice of God" (Even the Stars Look Lonesome).
She doesn't specify what God would say. I think it'd be: "why are you here with me? Find your way back." In the interludes, we remember the stakes.
One reason biology is so interesting right now is that if you look up the scientific definition of life, you'll find dozens of complicated answers. But put it in contrast: life isn't what we ourselves are, it's who and what we love.
This is why I don’t worry about truly creative work being trivial to replace with AI - we can't simulate a life. The real risk is forgetting how to tell the difference. When something like that does start to emerge, it doesn't need to threaten our creative space, it will add to it - sentient AIs might want to tell us the stories of their own experiences, not imitate ours.
Models can help us explore the problem space and be more confident in what not to do. The leverage is incredible: more time perfecting what matters, faster iteration, possibilities we wouldn't reach alone.
But creation needs a point-of-view imbued throughout, in explicit rules and implicit patterns. Each decision compounds over thousands of days of toil. The creative artifact is the accumulation of a path-dependent process, a thread of existence molding and being molded by the system it is built in. No machine can pull this out of our heads, because it’s also in our bodies and our histories and our communities. Our points of view matter, and there’s a thousand ways to skin Schrödinger's cat.
This is also my answer on free will, FWIW. The modern debate asks whether our particles are determined - and if so, whether choice is an illusion. A couple of our quantum schools from earlier would say yes, others no. But if we are a goal-directed process, at every scale, choice isn't an illusion covering up physics. It's choices all the way down.
A strong point of view isn’t enough on its own. Agency only works in a system responsive to it. Emerson and the American self-reliance tradition stress individual will, but if you’ve ever been to places that lack system feedback (like many parts of India) you know that will alone isn’t enough. The sheer friction of everyday life makes it impossible to operate much beyond the local. Despite the strengths of such systems (e.g. flexibility, resilience), the feedback loop between individual effort and broader change is weak.
In my parents’ generation, opportunity was not so widely available. People’s options were largely determined by what they had. Capability and choices contributed less than bloodline, capital, or proximity to gatekeepers. In our world of growing abundance, that constraint inverts. With apparent opportunities everywhere, the problem is what not to do, where not to go, who not to follow, what not to ingest.
All kinds of temptations claim to be our salvation from the ensuing backward-depression and forward-anxiety. One broad failure mode is indulgent escape. It shows up as consumptive vortexes like endless travel, scrollable feeds, and fake food, or as numbing agents like excessive medication and fads disguised as cures. Peddlers of these pathologies treat all constraints as sickness, remedied by completely overdoing it or sedating people to submission. The opposite failure mode is treating all constraints as sacred. Some refuse tools that reduce friction on principle, as if suffering by itself is virtue, or ease and enjoyment are weakness. But struggle is only valuable when it's shaping you toward something. Otherwise it's just pain. If this is ascetically self-imposed, we wish you well.
Another trap is uninformed fear, denying what's here to stay. Memetic information tends to lose important context as it goes viral. Well-intentioned, thoughtfully-reasoned context gets grossly distorted into pithy incantations by charlatans and false prophets. If this ossifies in culture and seeps into institutions, it'll slow down tangible progress and leave us all spinning our wheels, suffering in fixation.
And as I mentioned last year, it’s no secret we have a vacuum of meaning. This religion-sized hole sucks energy from various aspects of modern society. Across the political arena, for one. Of course we can point to extremists. But we also see psychotic breaks near the middle. The scary thing about Luigi Mangione, as the Free Press noted, is that he’s decidedly not extreme. No coherent ideology; his motives are a Rorschach test for where things are broken - which doesn’t make them less monstrous. When systems stop feeling responsive, some people desensitize; others reach for agency in the only cowardly ways that seem to register. Thinking about the range of classmates I had at Penn, my initial thought on news of his arrest was that 'reasonably adjusted people' are perhaps a stone’s throw away from insanity at best, evil at worst.
It's not surprising that, in the same breath, people talk about conquering aging or even cheating death. I’m not necessarily against that instinct; if someone figures out how to give everyone another healthy forty years, great. We probably need a few immortality zealots pushing the ecosystem, the way crypto’s decentralization zealots helped build stablecoin rails and prediction markets. But I don’t want my whole sense of purpose to hang on faith that we’ll escape the basic human condition. We don’t yet know what death is for, or what we’d become without it. We do know there’s plenty of pain and potential for flourishing in ordinary, finite lives that we’re nowhere near getting right.
Schopenhauer predicted that the philosophy and knowledge of the Upanishads (ancient Hindu texts) would become the cherished faith of the West. I'm way less interested in mystical new thought than I am in the potential for a scientifically rigorous approach that probes old Asian intuitions without spiritualizing them. Like the Buddhist intuition that selves are processes, dynamic and impermanent. Rovelli's physics and Levin's biology are arriving at the same place through different methods - convergent evidence that Western divide-and-conquer substance-metaphysics has it wrong. I think the Eastern frame will have practical value: it trains attention on relations and processes rather than materials, which may be what's needed for the next breakthroughs.
When evaluating value systems, I wonder:
Does it suppress or enable particularity?
Does it ground people in reality or offer escape?
What are the tradeoffs and failure modes of its selected values? (every system has them)
Personally, I'd say I became agnostic after childhood, but as I ask more questions these days, I find more answers, which beget more questions. The discovery of purpose itself, if I haven't made the theme clear enough by now, seems process-like. That said, I do lean perennial in that more unites us than separates us across the major belief systems. Call it God, Allah, the universe, a big ocean of consciousness, what have you. But the imagery of us being drops of water that come into being, refracting the light in our own ways as we fall to form the whole, seems pretty damn beautiful to me.
For my children, I have the same orientation as I do with embryo selection and deferral of choice. My future wife and I will choose for them until they're capable, knowing that choice is not to be taken lightly, then give them tools to evaluate and choose without prescribing the answer. If they're as sharp as my sister early on, I imagine that will happen sooner; if they take their time like me, they may find themselves riding along to church of some kind for a minute to establish the basics.
Whatever philosophies we choose to anchor, mine resolve into a plain throughline: use our growing understanding of intelligence to help people build meaningful, self-directed lives together in the real world.
It’s crucial to fight for ideas we want to propagate. In the stories we tell, the products we commercialize, and the capital we deploy.
Practically, when I consider components of an imagination machine, I end up asking three questions:
What do we actually know here, and where is our understanding limited (theoretically and instrumentally)?
Who decides what gets built and shipped, how does context (e.g. values, incentives) shape their decisions, and where are the structural asymmetries?
Given the systemic failure modes I just described, does this offer escape or help people live authored lives?
The systems that matter most in the foreseeable future aren’t thought experiments; they’re the bridges we build working backwards from who we ought to be. To do this, we need to distinguish between real commercial trajectories, speculative but plausible science (faith), and blind faith. Real trajectories are what’s already here, unevenly distributed. Speculative but plausible work lives where we have reasonable scientific targets that don’t obviously violate physics but not yet the instrumentation or theory to hit them. Blind faith is when metaphysics masquerades as progress, and we skip over the gaps as if fantasies are inevitable. Treating blind faith as settled destiny, or worse as morally urgent, will incinerate capital.
Applying this epistemic hygiene, here is what I care about: in media, stories rooted in the beauty of ordinary lives during extraordinary times, not escapism. In products, tools that demonstrably help create energy and security in daily life, and confidence that our children will be better off. In capital, skewing towards the real and speculative buckets, with an eye on overlooked or hybrid work outside standard venture profiles.
For me, the work starts less with building disembodied superintelligence and more with messy places where the tools of Western medicine have largely failed: depression, anxiety, ADHD, OCD, neurodegenerative disease, chronic pain, addiction, and other conditions. But the reactive 'find disease, treat disease' frame misses how we experience life. Suffering and flourishing are intertwined, not separate problems. Approaches that treat them as separate will keep failing. The most interesting systems aren’t gods in the cloud; they’re grounded, long-horizon mirrors that help us walk together. They'll weave together behavior, physiology, genomics, and richer signals from whole-body technology (wearables, ingestibles, implants, etc.) to help us see and reshape our own lives.
Commercializing AI will be like self-driving. “Gradually, then suddenly” describes how we experience discontinuous consequences of continuous change. If Hemingway were alive today, I think he would tell us to engage with the progress, but pause to take clear stock of what we're looking at, maybe enjoy the view.
At breakfast with an old friend last year, I mentioned I didn’t think AI had had its 'running water' moment yet. When my dad was young, his village didn’t have much infrastructure. Though he's lived through Korea’s rapid industrialization and the internet revolution, and took his second trip in a Waymo in the epicenter of tech the other week, to him the biggest qualitative step change in life is still when his village got tap water in the mid-'60s.
I didn’t think we had crossed this Rubicon with AI until this year, when a bug on OpenAI’s end locked me out of my account. I’d gotten used to loading up Codex tasks at night and reviewing PRs in the morning. Sitting there without access to my work felt like a big New Jersey snow day: when AI drives a meaningful portion of your economic output, outages really do feel like infrastructure failures. That’s a useful gut-check: if model progress stopped today, we’d still spend years deploying what we already have across every sector. But we likely still need fundamental breakthroughs beyond scaling compute before we get anywhere near the futures the hype keeps selling.
All that’s to say we’ll question many things in the coming years.
Our most powerful stories tend to converge to a final question. Science fiction explores what it means to transcend our limitations. As humanity progresses towards the inevitable heat death of the universe in The Last Question, Isaac Asimov asks: “How can entropy be reversed?” For Asimov, like Tolkien and Dostoyevsky, the answer is left to God. Others, like Gertrude Stein, die in dignified denial, resolute that if there is no answer, then “there is no question”.
I’m pretty sure I’ve cracked the code on what the last question really should be. It happens as I’m dreaming. I wake up and reach for my phone to write things down as the threads recede in my mind. When I look at my Notes app the next morning, I see the epiphany starts and continues and then trails into typos and becomes incoherent. My desk lamp is still on from sketching faces, and a bowl of persimmons sits on the still life in progress. The outlines of the exercises blur as my eyes shift to the light awakening the room, and I hope when I see my sister next, I can show her the drawings I learned to make.
Special thanks to friends who keep putting out creative projects despite (or using) your constraints, chosen or otherwise. I probably don't say it enough, but seeing your imagination at work inspires me.
Appendix A: Double slit & quantum interpretations
Here's how I explain the “double-slit experiment” to myself to try and make it more intuitive.
You shine light at a barrier with two narrow slits cut into it and look at the screen behind to see where the light ends up making marks.
If you imagine light were just a sequence of little bullets, you’d expect two bright stripes on the screen: one behind each slit. If instead you imagine a flow of light waves hitting the barrier, you’d expect a striped “interference” pattern on the screen, where waves going through the two slits overlap and cancel in some places as they spread out, like ripples in a pond.
First, you’re just interested in how it ends up on the screen. You do the test with nothing detecting how the light passes through the barrier. You crank the light source down so photons (units of light) go through one by one. The first photon you fire goes through the barrier and makes a single dot on the screen. On its own, that looks exactly like a little bullet: one blob, one spot. If one bullet went through, it would have picked a slit and ended up making a spot behind the one it picked.
But for some reason, that spot does not necessarily end up neatly behind one slit. It ends up somewhere in between or further off to a side. As you keep firing, the weirdness continues. The dots still don't pile up into two simple stripes behind the slits.
Instead, many dots together draw a striped pattern of bands across the screen - the same pattern you’d expect from ripples going through both slits and overlapping. Whatever is happening between the light source and the screen, it can’t be “each photon just picked a slit and flew straight through” in the ordinary way. As each photon flies from the source to the screen, its final landing spot only makes sense if both slits somehow matter for the outcome, which they can’t if it’s acting like a single, independent particle the whole way.
So, the first mystery is, "what physically happens to the unit of light during that flight through the barrier?"
If the photon’s a particle, does it 'split' somehow, then rejoin? If so, how does it land where it could only land if it had gone through as a wave? Is there a 'guiding force' that sends the particle through one slit, but only allows it to make a final mark in one place? Or is it a wave? If it's a wave, how does it end up only making a single particle mark for each firing? Does it go through as a wave, and at the end 'collapse' into a single point on the screen?
To try and figure out how the light flies through the barrier, now you change the setup.
You add tiny detectors at the slits so you can, in principle, tell which slit each photon used. To register anything, these detectors have to interact with the light as it passes the barrier; photons are too small to 'see' in the usual passive sense.
You fire the first one. It goes through the barrier, and on the other side, you see it ends up in one of the regions lined up with a slit. Strange. As you keep firing, each photon still shows up as a single dot on the screen, and now, the dots continue to land in two regions lined up with the slits. The pattern builds two bright stripes behind the slits; exactly what you’d expect from particles choosing one path or the other. There is no ripple pattern here.
This is the second mystery. Why does detecting the light at the barrier make the light only go through one slit?
In both versions, each photon always lands in exactly one spot on the screen. Without detectors, many dots together draw a wave-like interference pattern, as if each photon somehow 'used both slits' before ending up in one place. With detectors, many dots together draw two plain stripes, as if each photon simply chose a slit. We never see half a photon here and half there. The first pattern only makes sense if, in some way we don’t yet understand, both paths mattered.
Classically, we want light to be either a wave or a particle. The double-slit experiment says it isn’t that simple. A single photon hits the screen like a point, but the pattern of many hits behaves like a wave that cares about both slits. Adding detectors at the slits doesn’t change the fact that you see one dot per photon; it only changes which overall pattern the dots build, ripples or two stripes.
Quantum theory models this with a hybrid recipe. Before anything registers on a barrier detector or the screen, the photon is treated as being in a 'both options at once' superposition. Instead of a wave or a particle, it is an excitation (a kind of cloud-ish lump) in an underlying field (for light, the electromagnetic field). When a detector at a slit or the screen itself finally clicks, that spread-out description is replaced by a single recorded outcome. People call that replacement “collapse.” The recipe works bewilderingly well, but it still doesn’t tell us what is actually happening between source and screen.
Some prominent proposals for what's actually happening in the no-detector case:
Shut up and calculate (Copenhagen): don't ask, just predict where it lands. "What's it doing between source and screen" isn't a real question. Philosophically unsatisfying, but it works for most practical problems today.
Hidden variables (Bohm): The photon really went through one slit. A "pilot wave" also went through both and steered it. But the wave lives in abstract mathematical space, not physical space. So "what is the wave made of?" has no good answer.
Spontaneous collapse (GRW): The photon really is smeared through both slits, then kind of snaps into one final spot. GRW specifies how this collapse happens so the predictions work, but it doesn't explain why exactly.
Many-worlds (Everett, Deutsch etc.): The photon goes through both slits, and the universe literally splits. Visit every branch with Rick & Morty and tally where it landed on a clipboard, and the tallies show the wave pattern.
Relational (Rovelli): Doesn't tell you what the photon is doing. Says the question assumes something wrong: that there's a 'thing' called a 'photon' taking a path. Instead, the photon we observe is the result of an interaction. Many find this unsatisfying because it doesn’t explain 'what' that interaction is between. Rovelli says it's interactions all the way down.
Honorable mentions (QBism, consistent histories, etc.): won't elaborate here.
In the detector case, all agree the photon goes through one slit and you get two stripes. Why detection changes the outcome is again where they disagree: Copenhagen says don't ask, Bohm says the pilot wave changes, GRW says collapse happens earlier at the detector, many-worlds says branching happens at the detector (so Rick and Morty's clipboard shows two stripes, not the wave pattern). Rovelli says the detector creates an interaction.
In the essay's body, I go on to explore why this is interesting.
(in order of appearance within categories. Added links and non-spoiler commentary if you're looking for holiday material)
TV & Film
Severance (2022-): Britt Lower, Adam Scott, John Turturro, Zach Cherry, Tramell Tillman, and the rest of the cast's performances are reason enough to watch this.
Pantheon (2022-2023): Unlike the epic world-building of a lot of other sci-fi where the myth makes the story, this show is rooted in love: between a dad and daughter, girl and boy, etc. The show is based on some of Ken Liu's short stories, which seem to share deeply emotional themes (future nostalgia, grief, etc.). Read The Paper Menagerie if you want to cry, especially if you're an Asian immigrant.
Arcane (2021-2024): The combined 2D & 3D animation, colorful steampunk aesthetic, and soundtrack make this tale of sisterly love/trauma/estrangement/reconciliation a sensory delight. It will leave you in cathartic shambles.
Pluribus (2025-): Watching this now!
Foundation (2021-): This gets better as it goes, so keep it for casual backburner watching until you get to the 3rd season, then binge if you like. I think some of the ways it departs from Asimov's books work well, like the use of Empire (Lee Pace makes this great), others less so.
I, Robot (2004): Solid airplane watch. Golden age of movies.
Midnight in Paris (2011): If you're ever not busy, toss it on? Owen Wilson romcom.
Everything Everywhere All At Once (2022): Well-deserved Oscars for Michelle Yeoh and Ke Huy Quan, though I think Stephanie Hsu got robbed. I like Jamie Lee Curtis, but she's a little too good at Hollywood. Don't hate the player, hate the game, I guess.
Guillermo del Toro's Frankenstein (2025): It's not Pan's Labyrinth, but topical and worth a watch. Oscar Isaac is a great Victor, and Jacob Elordi's post-education Creature is way cooler than the bolt-through-temples Halloween goblin.
When Life Gives You Tangerines (2025): This show captures distinctly Korean emotions so well. It's hard to explain in an essay or conversation how our history and experiences color the way we move through the world, so if you do take the time required to watch it, you'll understand your closest Korean friends better. IU is also my favorite Korean singer and actress.
KPop Demon Hunters (2025): No explanation necessary, Ejae and the rest rock.
Books, Essays, Short Stories
Betty Edwards, Drawing on the Right Side of the Brain: I've been stuck on the 4th exercise; will report back when I finish. Should be able to show a before & after comparison.
Oliver Sacks, The River of Consciousness: collection of short essays on memory, creativity, and big questions in life. If you read the New Yorker piece, you know one of our best science storytellers was a complicated man. Like many out there, the revelations saddened me greatly. He wrote this book, always one of my favorites, near the end of his life - after he had finally found the love he had long deprived himself of, and after many years of reflection. I had already made brief mention of it in this essay, in ways that actually underscored my themes in context. From a scientific & ethical standpoint, what he did in his clinical work is inexcusable, and it may be as large an indictment of our culture that so many of us embraced it blindly as it is of him. Steven Pinker's post takes issue with the intellectual elite who criticize hyper-rational folks. I think reasonable people agree we need the analytical rigor and artistic depth across the spectrum for us to live in harmony and advance. I hope this came through in my essay. In the Valley, the balance in the Force seems off, and we're seeing a swing back. But my issue isn't with either side; it's with the epistemic orientation. Humility, skepticism, and engaging in good faith are critical wherever we stand. The New Yorker piece adds color to Sacks' moving autobiography On the Move as well. He had a lot of stuff going on otherwise - face blindness, torturous disharmony with his sexuality/ long-time celibacy, hundred-plus-mile motorcycle rides through the night, lifting till he broke his body down. But his prolific creative output was inspired, and I hope it will be framed differently going forward instead of being discarded. Understanding the lives of creators changes how we (re)interpret their work. Just as we shouldn't blindly accept faith, we shouldn't blindly accept facts/ science, nor the surface-level value (or lack thereof) of a body of work. In light of all this, it may be that Sacks' account of his life and last book on literary questions end up being his enduring works of imagination.
Daniel Tammet, Every Word Is a Bird We Teach to Sing: read this a few years back and was struck by the lyricism his neurodivergence has helped him develop. Synesthesia especially is something we should understand better neurologically, what a gift (Maggie Rogers is like this too as I mentioned last year).
Douglas Hofstadter, I Am a Strange Loop: often confusing, but more digestible/ salient than Gödel, Escher, Bach. The most beautiful part is when Hofstadter talks about how his wife's consciousness is like part of him. Also, Melanie Mitchell, one of Hofstadter's protégés, is notably a measured voice in AI discourse. Her newsletter is helpful to keep tabs on, as she provides the perspective of a mature academic with intellectual, not financial, vested interest - though you can sense her timelines/ goalposts have shifted up too as models have progressed. Mitchell's book surveying complexity is interesting as well.
Gaston Bachelard, Water and Dreams: An Essay On the Imagination of Matter: this is an obscure little book my mom suggested when I mentioned I wanted to write about imagination. I don't know much about Bachelard but he definitely had a lot of creative thoughts swimming around.
Fyodor Dostoyevsky, Notes from the Underground: strange as this one is, I think it helps to read earlier works before tackling 1,000-page magnum opuses. More digestible, and keeping the meta story of a writer's intellectual and artistic development in mind adds color to the experience. I read The Brothers Karamazov in middle school and retained next to nothing, so if anyone wants to join me in this journey, give this a read first and we can make the big one a 2026 goal. Also - Pevear's introduction to Notes explains the historical backdrop I mentioned better, as well as Dostoyevsky's failure to publish the religious version of the ending due to censors.
Emily Brontë, Wuthering Heights: Not sure why this was my favorite as a kid. I think around the time I read it, I was also the Phantom from The Phantom of the Opera for Halloween. Must have had my heart broken at recess or something. Started rereading recently and am appalled so far. But I think I do like it? Margot Robbie and Jacob Elordi again star in the upcoming movie.
Leo Tolstoy, The Death of Ivan Ilyich and Other Stories: The other stories in this are great too. Master and Man, I think, is my favorite (good to read lying down on a park day), though I haven't read Hadji Murat yet.
Gertrude Stein, Selected Writings of Gertrude Stein: The kind of book to keep on your shelf and flip to a random page when taking a snack break. I started with Tender Buttons - don't read it trying to make it make sense, just observe what the texture of words and their interactions elicit... or something.
Francesca Wade, Gertrude Stein: An Afterlife: I didn't know much about Stein until I listened to Francesca Wade's excellent biography. Wade is a fantastic writer, and she narrates the audiobook herself so I recommend it for commutes or long drives. I should also mention there's some debate on what her last words were, but people like the version I quoted for poetic reasons.
David Deutsch, The Beginning of Infinity: I listened via audiobook, but he uses a lot of terminology and traces through scientific & epistemological history, so if I read this again I would get the book.
Mary Shelley, Frankenstein; or, The Modern Prometheus: the way this is told - in nested first person stories (first the ship captain, then Victor, finally the Creature) - is what makes it so different and haunting. You're on each journey as the chase comes to a head at the ends of the Earth. Also, I have a version on my Kindle with Charlotte Gordon's intro about the book's origin story; I can't find it online but can share a copy/ look harder if anyone wants.
Frances Ashcroft, The Spark of Life: more modern/ scientifically grounded yet accessible survey of concepts that Robert O. Becker excitedly articulated in The Body Electric.
Nick Lane, The Vital Question: Energy, Evolution, and the Origins of Complex Life: fascinating, a bit technical at points, but it's hard to find good biology reading that's neither a textbook nor cute case studies.
James Dyson, Invention: A Life: this man invented and manufactured new physical consumer products when there was no venture ecosystem, so the best he could do were checks from the literal bank and a couple of individuals. The family controls the company today, so they have full creative autonomy. Amazing.
Viktor Frankl, Man's Search for Meaning: On surviving the Holocaust - "I did not know whether my wife was alive, and I had no means of finding out... There was no need for me to know; nothing could touch the strength of my love, my thought, and the image of my beloved. Had I known then that my wife was dead, I think that I would still have given myself, undisturbed by that knowledge, to the contemplation of her image..."
Francisco Varela et al., The Embodied Mind: ahead of his time!
Maya Angelou, Even The Stars Look Lonesome: series of short essays on the most salient parts of her life. Her essay on solitude is great. Here's the full quote I love:
Many believe that they need company at any cost, and certainly if a thing is desired at any cost, it will be obtained at all costs. We need to remember and to teach our children that solitude can be a much-to-be-desired condition. Not only is it acceptable to be alone, at times it is positively to be wished for. It is in the interludes between being in company that we talk to ourselves. In the silence we listen to ourselves. Then we ask questions of ourselves. We describe ourselves, and in the quietude we may even hear the voice of God.
Isaac Asimov, "The Last Question": read it to the end.
Links & Videos
Kai Wu, Surviving the AI Capex Boom: tl;dr charts & numbers for the thought that the Mag 7 might not accrue the benefits of all the value they're laying the pipes for.
Add Up Solutions, Laser powder bed fusion: I have a video on my phone from a 2023 additive manufacturing trade show, but this one's better. At the event, I also saw Columbia researchers present a paper (which I can't seem to find) where they shined UV light to cure resin flying around in a rotating 3D chamber. In it, a miniature Rodin's Thinker materialized out of a cloud of particles. Use cases likely include precision components like lenses or eyeglasses. Industry broadly has started with prototypes, custom molds, and bespoke parts, but eventually additive will be more integrated in production lines. Really curious about biotech applications in manufacturing too.
Tim O'Reilly, Jensen Huang Gets It Wrong, Claude Gets It Right: tl;dr treating AI-as-workers robs humans of agency. Fair for the near term, and the heart's in the right place, but there's more to it than that.
Dan Shipper, Why you should see the world like a large language model. As I was editing this essay, I came across this video from Every, which does a great job articulating some of the themes on rationalism and where LLMs, and AI more generally, are taking us.
Anna Ciaunica, From Cells To Selves. Similarly, as I was finishing this essay, I came across Anna's recent work, which beautifully crystallizes what Varela and others have long argued with additional reflections of her own. Ideas are currently 'in the air' :)
Sabine Hossenfelder, The Simulation Hypothesis is Pseudoscience: from a public critic of a lot of mainstream theoretical physics.
George Musser, What Einstein Really Thought About Quantum Mechanics: it's an oversimplified cultural distortion that Einstein "didn't endorse quantum" or whatever. He was deeply involved in creating it and in the thinking and discourse around it.
John H. Richardson, The Most Frightening Thing About Luigi Mangione: he shot him in the back. Enough said.
Max Hodak, 'The Binding Problem' (2025): explicit treatment of binding as the central obstacle to consciousness engineering; we overlap on the need for new physics, though I may be more skeptical about substrate independence and the question of identity/ continuity.
Josie Zayner, "Immortality isn't progress. It's paralysis.": from a biotech founder who I suspect I share a lot of basic views on tech with. Her company is making actual unicorns.
Papers
Nedergaard, Lupyan, "Not Everybody Has an Inner Voice: Behavioral Consequences of Anendophasia". Large behavioral study on people who report little or no inner speech; they’re broadly fine cognitively but show specific hits on phonological tasks (like rhyme and confusable-word memory), which makes “no inner voice” feel more real.
Hinwar & Lambert, "Anauralia: The Silent Mind and Its Association With Aphantasia": Introduces the term anauralia (no auditory imagery) and shows that most aphantasics also lack inner sound, but with a few rare dissociations - a snapshot suggesting these modalities travel tightly (but not perfectly) together.
Zeman, Sala, Torrens et al., "Loss of imagery phenomenology with intact visuo-spatial task performance: A case of ‘blind imagination'": single-case of a man who loses his mind’s eye overnight yet still aces visuo-spatial tasks, forcing you to separate “what it feels like” from “what the system can do under the hood.”
Milton, Fulford, Dance et al., "Behavioral and Neural Signatures of Visual Imagery Vividness Extremes: Aphantasia versus Hyperphantasia": Compares aphantasics, hyperphantasics, and controls on memory + fMRI; finds big differences in autobiographical richness and imagery networks even when basic test scores look similar. A data point for “same tasks, very different inner movies."
Levin et al., "Aging as a Loss of Goal-Directedness: An Evolutionary Simulation and Analysis Unifying Regeneration with Anatomical Rejuvenation": Uses simulations of toy creatures to argue that aging is what happens when systems lose their ability to aim for and maintain target body states. Cellular noise, reduced competency, and comms failures accelerate aging but aren't its root cause. Suggests rejuvenation may work by reactivating dormant information.
Levin et al., "Cognition all the way down 2.0: neuroscience beyond neurons in the diverse intelligence era": Formalizes “cognition” as search efficiency in multi-scale morphogenetic problem spaces, then measures how efficiently cells and tissues solve problems. Treats goal-directed behavior as a continuum, not a brain-only privilege.
Vaz & Varela, "Self and non-sense: An organism-centered approach to immunology" (1978): Argues against the standard self/nonself discrimination paradigm; proposes the immune system as a closed, self-referential network where "self" is enacted through organizational dynamics, not discriminated via pre-given criteria.
Varela & Coutinho, "Second Generation Immune Networks" (1991): the canonical “immune system as a distributed, network-level process” paper.
Stewart, "Cognition without Neurones: Adaptation, Learning and Memory in the Immune System" (1993): argues for "cognition without neurons," reinforcing the idea of the body doing "cognitive work" independently of the brain.
Thinkers & Relevant Concepts
Alfred North Whitehead, metaphysics etc.
Carlo Rovelli, Relational Quantum Mechanics
William James & Hilary Putnam, pragmatism
John Searle, biological naturalism
Ralph Waldo Emerson & Henry David Thoreau, self-reliance, transcendentalism
Post-postmodernism, Metamodernism, Liberal naturalism
Capability approaches, expressivist objection, procreative ethics
Linguistic relativity (weak Sapir-Whorf)
Music
(not referenced, but for any further reading. some thematically relevant cyber/silk/steampunk sci-fi ish, K-pop, 2000s nostalgia, sister's old jams. Plus other random songs.)