What is Artificial Experience (AX)? Why the Application Layer Is the Interface and the Human Is the Limit

William Morgan

04.08.2025.LDN.UK


publications: HAIID, Interfaces, LLMs, Artificial Experience, Epistemology, Multi-Agent Systems, Infrastructural Ontology, Infrastructure Turn, Experience


What if the most important question about AI isn't whether it's intelligent, but what kinds of experiences it makes possible that nothing else could? William Morgan argues we're living through a categorical shift: AI is becoming infrastructure like electricity, fading into the background of daily life. But unlike electricity, this infrastructure thinks.

Morgan warns that current AI development is trapped in what he calls the "tour guide gravity well": endlessly refining ways to access machine intelligence rather than discovering genuinely novel forms of human experience. We're getting better chatbots when we could be inhabiting persistent multi-agent societies, emotional scaffolding that evolves over weeks, or decision rehearsals that let us test-drive alternative futures.

The transition from "artificial intelligence" to "artificial experience" (AX) requires an abandonment of our current interface paradigms. Morgan introduces HAIID (Human-AI Interaction Design), predicting that AI will demand new "psychosocial metaphor systems" based on character and sociality rather than the spatial metaphors (files, folders) that enabled human-computer interaction. This means moving beyond screen-keyboard-user relationships toward interfaces that don't force humans to decompose their experiential worlds.

Current AI products mostly amplify existing activities—email proofing, image generation—rather than unlocking the genuinely transformative experiences that infrastructural AI could enable: situated sensor-rich feedback loops, collective synchronization, decision rehearsals where you can inhabit counterfactual futures before choosing between them.

Morgan's solution: become "artificially intelligent"—not like machines, but intelligent about consciously designing our technological future rather than sleepwalking into whatever emerges.




   The Infrastructure Turn

The question of artificial intelligence today is less and less “Is the model intelligent?” and more and more “What kind of novel experience does AI make possible?” In other words, artificial intelligence is quickly maturing into artificial experience (AX).

Just as most companies today run on electricity but are not defined as "electricity companies," most future enterprises will deeply incorporate AI without themselves becoming "AI companies." Artificial intelligence should therefore be understood not as a feature but as a foundational layer, one that enables wholly new modes of operating and interacting.

In a previous essay, I began writing about the infrastructure turn motivating AI's maturation. Here I don't mean infrastructure in the narrower academic sense of the term, meaning the data centers, labor, and energy inputs necessary for the AI revolution to take place. Rather, when I say AI is becoming infrastructure, I am arguing that the main output of AI (intelligence) is best understood as itself an infrastructural phenomenon, comparable to electricity or the internet.

To speak of infrastructural AI in this way is to invoke a layer of intelligence becoming superimposed atop the built environment. You might imagine it similar to the way one thinks about the city of Venice having a "history" layer draped atop it. In that earlier essay, I also argued that for Venice today, the tour guide represents a kind of interface technology, enabling individuals to interact with the city's history layer. As a framing device, the tour guide-as-interface surfaces questions for the age of AI about how, when, and where one will interact with the coming ubiquity of infrastructural AI.

In this piece I want to further expand on the implications of the becoming-infrastructure of AI, specifically with an eye towards reorienting designers from the question "How do we deliver the experience of intelligence across different form factors?" towards the more fruitful inquiry, "What fundamentally new and distinct kinds of experiences can infrastructural AI make possible?"


   Intelligence Fades into Noise

If AI has increasingly little to do with intelligence, it is not because there is no intelligence there.1 Generally speaking, I am unsympathetic to this "AI is a misnomer" argument. First, like it or not, AI is the name of the game, and in the (ironic) words of Gilles Deleuze and Felix Guattari, "it's nice to talk like everybody else." Second, I am also quite persuaded by Benjamin Bratton's point that while it is strictly speaking true to say that an LLM is only a scaled agglomeration of sub-functions like gradient descent, to argue that AI is just that is a bit like saying a symphony is just a sequence of sounds. To make either argument is, to my mind, to dramatically miss the point, in an echo of that overused but here-appropriate Friedrich Nietzsche quote, "those who were seen dancing were thought to be insane by those who could not hear the music."2

When I say AI has increasingly little to do with intelligence, I mean it in the same way that one's experience of electricity or the internet has very little to do with electrons or fiber optic cables. Very simply put, when I turn on a lamp, I do not (most of the time) exclaim "Wow! Such Electricity!" To a lesser extent, given its relative novelty, I am also not particularly charmed by the opportunity to send an email or join a virtual meeting.

With both electricity and the internet, I expect their infrastructures to function and only notice them in their absence. Further still, the way I register that absence is through the loss of afforded experience rather than through the absence of the infrastructure itself: I am late for a meeting, I can’t scroll Twitter, the power is out and the food in the refrigerator won’t keep.

Something similar is happening with AI. Increasingly, we rely on AI models in our daily routines: proof the email, generate the image, do the math. What this routinization heralds is a larger trend away from intelligence and toward experience, a process the physicist Richard Feynman once described as "strange, strange, and then familiar."

In most instances, email proofing, image retouching, and math do-ing do not deliver me an extreme rush. And yet these fairly thin examples are currently the main experiential affordances of AI products. As a result, what we think of as an AI experience today is almost always subordinated to the noteworthy novelty of machine intelligence's existence. As we exit the novelty phase of AI and machine intelligence becomes increasingly familiar, I am less and less awed by the existence of chatbots. Even agents like OpenAI's, which just booked me a haircut and a doctor's appointment, begin to fade into routine.

As electricity afforded a revolution in available time, and the internet, a revolution in communicable space, my hope is that the familiarization of the chatbot combined with the infrastructuralization of AI catalyzes the emergence of a genuinely novel substrate of experience. In other words, rather than as an amplifier for experiences I am already accustomed to having, what the infrastructure of intelligence ought to unlock is a set of novel experiences that I cannot get anywhere or from anything else.


   Where Are All the Transformative Experiences?

Why don't novel AI experiences already exist? Everyone, everywhere, it seems, already champions the transformative power of artificial intelligence. If AI is indeed so revolutionary, where are its transformative experiences? Why does everything still feel more or less the same? My contention is that when it comes to AI experiences, the genuinely novel ones that do exist are getting vacuumed up and subsumed under the rhetorical rubric and market hype that surround artificial intelligence as intelligence.

Consider the tour guide-as-interface metaphor. As a model for how people might access the infrastructural wisdom of AI, as well as what they can expect to receive from it, the tour guide operates as a gravity well, channeling development energy towards devices that allow you to leverage AI by putting a model's intelligence into your pocket (OpenAI), around your neck (Friend), or on your wrist (Amazon). The point of AI, these devices tell us, is to have more access to more intelligence more of the time. In other words, market hype self-fulfills its present instantiations by catalyzing the funding structures whose activity delivers continued market hype.

Another reason for the lag in AI's promised experiential novelty is that the infrastructuralization of AI is still far from complete. The hyperscalers, alongside newer players like CoreWeave, Windsurf, and Cursor, are working quickly to scale compute power and enable new creators to leverage it, but one reason these companies fetch such high multiples in private and public markets is that most analysts project a fairly long spending runway to complete the AI buildout.

If that trajectory is relatively established and already well-capitalized, the more open and interesting question is what genres of experience can AI as a new substrate of modern life afford? What is the revolutionary experience layer that this infrastructure brings into existence for the first time? Speaking for myself, I am unwilling to accept that the infrastructuralization of AI unlocks only the omnipresence of the tour guide. 

To reiterate, what I see today is a landscape of research and development heavily indexed on intelligence as the prime affordance of AI. This stems from the understandable fact that intelligence is the aspect of AI's novelty we currently have the most access to, and from the fact that the term artificial intelligence itself artificially bakes in intelligence as the answer to the question of what AI is for.

I cannot stress enough that this level of path dependency is a mistake. It is a near-universal truth that people overrate intelligence's importance in their own lives and decision-making whilst criminally underestimating their reliance on primal instincts, drives, and desires. To be clear, I am not saying that intelligence is itself fundamentally irrational; rather, I am pointing out that the ends to which one applies the means of intelligence are often selected partly through rational means, but are also heavily influenced by extremely non-rational currents. Look at how homo economicus' supposedly universal utility-maximization function tends to shrivel up in the face of Alan Greenspan's irrational exuberance.

Human-AI relations are for this reason far more likely to develop through experiences that engage populations at the emotional level than through isolated individuals seeking to max out their intelligence stats. No prediction on this last point is really necessary: would anyone claim that AI slop is about intelligence maximization? Consider Anthropic's research showing that human-AI relationships tend to be forged on foundations of emotional support, advice, and companionship.

If we shift our focus to locate intelligence as an infrastructural output of AI, we both dissolve the inertia of nominative determinism and redirect designers towards the experiential-affordance question. At the same time, we do not lose sight of the intelligence of models: while not the experience layer itself, intelligence remains that which conditions and enables the applications built atop it to deliver unique, modular, and epoch-defining experiences.


   HAIID (Human-AI Interaction Design)

At the first Antikythera studio, I worked on a research project with Sarah Olimpia Scott and Daniel Barcay called HAIID (Human-AI Interaction Design). In the final piece, we argued that human-computer interaction (HCI) succeeded by developing an intuitive spatial metaphor system composed of buttons, files, folders, and clouds, which translated humanity's evolved sensorimotor capacities into metaphorical spaces (the GUI) such that humans and machines could interact. Simply put, humans know how to move the trash into the trash can, so a basic skeuomorphism is able to translate this evolved spatial literacy into feasible human-machine communication.

Sarah, Daniel, and I forecast that AI was likely to both require and catalyze a new metaphor system for human-AI interaction. We predicted the rise of a psychosocial system which, rather than riding atop humans' sensorimotor intuitions, would leverage our in-built understandings of character, personhood, and sociality. Given the predominance of chatbot therapists and AI romantic partners, I would say that so far we've broadly been proven right, but as the accelerationist / Bachman-Turner Overdrive dictum goes, "you ain't seen nothing yet."

I remain fairly convinced that we are still in the early days of a revolution in human-AI interaction design. This conviction stems in part from the fact that every computing paradigm to date has relied, as a matter of necessity, on a malformed ontology of the human-as-user. Convenient interface devices like the keyboard, the screen, and the mouse each require humans to decompose key aspects of their experiential worlds (space, time, embodiment, etc.) in order to enter into communication with machines. This is true bidirectionally, as the GUI and other skeuomorphs translate only a fraction of computational ontology into the human-legible interaction space. A certain genre of reader might say that human-computer interaction has always been defined by a bidirectional lack, while the most radical forms of experience tend to unfold in environments where participants are able to inhabit and delight in the fullness of their native worlds.

For the infrastructuralization of AI to succeed as more than a productivity booster, this bidirectionally lacking interface arrangement must evolve. The way one navigates from a model's intelligence to its infrastructural form and back to oneself cannot be based upon the extant screen-keyboard-user relationship. This is not to say screens and keyboards cannot continue to exist in the human-AI interaction space, but it is to contend that the model as a whole must change the way each is positioned in the larger schema.


   Designing the AX Layer

In a forthcoming paper, Shane Sanderson argues that the transformation prompted by AI’s infrastructuralization will lead to the rise of “artificial experience design” because AI models “have begun to occupy a new ontological space” with “vast social agency.” Looping this back to an earlier essay I wrote about technical evolution, you could say that with AI the human-technology evolutionary feedback loop gets a new player who goes beyond the screen and implies a vast and untapped social agency. 

Artificial experience does not refer to just any technologically mediated interaction; it is precisely the kind of experience that is uniquely enabled by AI's infrastructural properties. AX asks explicitly: "What experiential affordances can AI uniquely deliver that no other medium, tool, or infrastructure could?" This question is infrastructural specificity in action, which I take to be a hallmark of AX design.

Furthermore, AX only emerges when the infrastructural intelligence of AI becomes ubiquitous enough to disappear into habit. In other words, experience is what remains when infrastructure is no longer visible. For this reason, AX design is not about the direct perception of intelligence, but about the surprising yet welcome experiences you didn't anticipate and are glad to encounter.

As a sign that this transition is underway, look for companies to shift their consumer promises from delivering "the experience of artificial intelligence" to advertising their own intelligence about how to deliver artificial experiences. Downstream of this, the emergence of AX as a vertical is likely to trigger a feedback loop between experience and habit, where each new experiential layer acclimatizes humans to some novelty and simultaneously produces surplus creative capacity, feeding back recursively into further experiential innovation. Electricity brought us the electric guitar, but Jimi Hendrix introduced another order of novelty altogether. The true promise of AI resides in Jimi Hendrix, in the scalable design of unprecedented aesthetic, emotional, and relational possibilities uniquely enabled by AI's pattern-making and generative capacities being unlocked at infrastructural scale.


   Old Dreams, New Life

The question of artificial experience invites reconsideration of fundamental but not totally transparent or obvious assumptions about how we live and work with technology: the individual user as the elemental unit of the internet; the 1:1 human-AI interaction surface enshrined by the chatbot; the seemingly negative externalities generated by AI models, such as slop.

Old dreams of cyborgization and the metaverse have new life breathed into them by the emergence of ersatz wearables promising access to infrastructural intelligence through body and gestural conduits. Boutique AR/VR innovations don’t offer themselves as endpoints, but grant access to new kinds of worlds across virtual and meatspace. Think of Snow Crash’s Hiro Protagonist–”you’ll never forget the name”–whose labor IRL is downstream of expert samurai skills employed on the Street. 

Moving from intelligence to experience also rectifies the belt-and-shaft fallacy I've mentioned before, wherein new inventions get swallowed up by old frameworks, cauterizing their novelty. Electricity was at first dismissed as a power source in factories because it didn't turn central shafts as efficiently. It took years before anyone realized that what electricity really meant was that the architecture of the factory needed to be transformed.

The architecture of experience is malleable today with AI. But crafting genuine AX will require working against our impulses to force AI into conformity with expectations based on existing technical affordances. Recall the sage Heraclitus: "If you do not expect the unexpected you will not find it." A related side point: this is why we at Restless Egg are so bullish on artist-founders; this emerging genre of creator has spent their whole life training to discover the things that others don't know are missing.

 

   Conclusion: The Becoming-Artificial of Human Intelligence

Final point: the intelligence of the model is not the limiting factor on artificial experience. The human is the limit. As AI becomes increasingly infrastructural it will also become a spatial majority and the norm for spatial design. The human quickly becomes a spatial exception. Already, AI models speak to other models in their own voluminous dialogues without being forced to decompose their discourse into human legible outcomes. 

We can imagine that situations of human-AI interaction will increasingly constitute a minority of spatial outcomes. As a result, the infrastructuralization of AI is primed to reverse the polarity of prompt engineering, translating humans into the place of models, needing to be prompted into experiences. Just as electricity reorganized factory labor and the internet reconditioned social life, infrastructural intelligence cues up human behavior for the very experiences it affords (stochastic parrots could never…). 

Recognizing this drift early is the only way to design those cues rather than sleepwalking into them. The task today before the intelligent human (especially the intelligent human designer) is therefore to become artificial. I don't mean that in the sense of becoming more like an AI model, but in the sense that becoming artificial is what happens to human intelligence when it is applied to the problem of infrastructural AI. To become artificially intelligent means to become intelligent about artifice, designing artificial experiences directly rather than slouching into unplanned technical innovations whilst distracted by questions of which model has more juice.

In Morphing Intelligence, the philosopher Catherine Malabou draws a distinction between "problems to be solved" and "problems for thought." As she recounts, it was cognitive science that gave us the contemporary understanding of intelligence as a problem-solving function that could be measured in terms of an ability to complete tasks. But, Malabou says, there is a different and deeper kind of intelligence, one that considers a priori problems. As Malabou reasons, "In order to solve a problem, you must be able to express what is problematic in the problem." The becoming-artificial of human intelligence means our intelligence's becoming capable of expressing that which is rendered problematic by artificial intelligence. It is the application of our intelligence to artificial experience as something that is quickly happening to us, as a problem for thought.

As always, not choosing is still a choice, and in this case the consequence of such a decision is that one day soon you may awaken to find yourself in a world you feel no authorship over, saying, "this is not my beautiful house, how did I get here?" If, on the other hand, artificial experience becomes a problem for thought, meaning that serious time, effort, and capital go into the question of which artificial experiences one wishes to have, then we may happily find ourselves the recipients of a set of mysterious yet awesome affordances borne of the infrastructuralization of AI: persistent multi-agent societies, situated sensor-rich feedback loops, emotional scaffolding, decision rehearsals, collective synchronization, and much more.

These early AX sightings (appended in greater detail below) are possible, but they imply humanity's active participation in shaping the new forms of sociality, emotion, and cognition that infrastructural AI uniquely affords.

To conclude, this is what it means to become artificially intelligent: understanding that the problem of AI is fundamentally not a problem to be solved, but one posed to our species as a problem for thought: how do you wish to live?



appendices

I. Conceptual Glossary 

Infrastructure: invisible, always‑on capability (cf. electricity, internet).

Intelligence layer: models, weights, and APIs.

Experience layer / AX: novel modes of lived interaction enabled by the intelligence layer.

Tour‑guide gravity well: UX patterns that trap AX potential in familiar interfaces.



II. AX Sightings

Persistent multi‑agent societies: AI models can impersonate a cast, debate by themselves, keep quasi‑stable memories if instructed, and negotiate goals with other instances. Few users create an ongoing polity of agents whose internal frictions they can observe or steer.
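
As a minimal sketch of what such a polity might look like, assuming a hypothetical `llm()` function standing in for any chat-model client, and illustrative agent names, memory format, and debate loop (none of these are a prescribed design):

```python
import json
from pathlib import Path

def llm(prompt: str) -> str:
    # Hypothetical stand-in for any chat-model call; replace with a real client.
    return f"(model reply to: {prompt[:40]}...)"

class Agent:
    """One member of the polity, with quasi-stable memory persisted to disk."""
    def __init__(self, name: str, disposition: str, memory_dir: Path):
        memory_dir.mkdir(parents=True, exist_ok=True)
        self.name, self.disposition = name, disposition
        self.memory_file = memory_dir / f"{name}.json"
        self.memory = (
            json.loads(self.memory_file.read_text()) if self.memory_file.exists() else []
        )

    def speak(self, topic: str, transcript: list[str]) -> str:
        prompt = (
            f"You are {self.name}, who is {self.disposition}.\n"
            f"Your memories: {json.dumps(self.memory[-5:])}\n"
            "Debate so far:\n" + "\n".join(transcript) + f"\nContinue on: {topic}"
        )
        utterance = llm(prompt)
        self.memory.append({"topic": topic, "said": utterance})
        self.memory_file.write_text(json.dumps(self.memory))  # memory outlives the session
        return f"{self.name}: {utterance}"

def run_polity(topic: str, agents: list[Agent], rounds: int = 3) -> list[str]:
    """Let the cast debate among itself; the human observes or steers between rounds."""
    transcript: list[str] = []
    for _ in range(rounds):
        for agent in agents:
            transcript.append(agent.speak(topic, transcript))
    return transcript

cast = [
    Agent("Mayor", "pragmatic and budget-minded", Path("polity")),
    Agent("Artist", "romantic and contrarian", Path("polity")),
]
print("\n".join(run_polity("Should the plaza be rebuilt?", cast)))
```

The point of the sketch is that the polity's internal frictions live in persisted state the human can inspect or edit, rather than evaporating at the end of a chat session.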

Situated, sensor‑rich loops: Transformer architecture readily ingests structured data, location streams, biometrics, IoT telemetry. Very few front‑ends feed such context continuously, so models answer in a vacuum that feels “smart” but not situated.
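
The missing front-end is small in code terms. A sketch of a loop that folds ambient telemetry into every turn, where `read_sensors()` and `llm()` are hypothetical stand-ins for real sensor adapters and a real model client:

```python
import datetime

def llm(prompt: str) -> str:
    # Hypothetical stand-in for any chat-model call.
    return "(model reply)"

def read_sensors() -> dict:
    # Hypothetical adapters; real versions would poll location, biometric,
    # and IoT feeds rather than return static values.
    return {
        "time": datetime.datetime.now().isoformat(timespec="minutes"),
        "location": "51.5074N, 0.1278W",
        "heart_rate_bpm": 72,
        "room_lux": 340,
    }

def situated_query(question: str) -> str:
    """Answer in context rather than in a vacuum: prepend live telemetry to each turn."""
    context = read_sensors()
    return llm(f"Ambient context: {context}\nUser asks: {question}")

print(situated_query("Should I head out for a run now?"))
```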

Emotional scaffolding: A model can modulate tone and track sentiment trajectories across turns, but the typical chat resets affect after each response. Humans seldom ask models to hold an emotional frame across days or weeks.
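
A minimal sketch of holding an emotional frame across sessions, assuming a hypothetical `sentiment()` scorer and `llm()` stub; the only idea being illustrated is that affect becomes persisted state rather than per-response decoration:

```python
import json
from pathlib import Path

STATE = Path("affect_state.json")  # the emotional frame persists between sessions

def llm(prompt: str) -> str:
    return "(model reply)"  # hypothetical stand-in for any chat-model call

def sentiment(text: str) -> float:
    return 0.0  # hypothetical scorer in [-1, 1]; swap in any real classifier

def scaffolded_turn(user_msg: str) -> str:
    """Track the sentiment trajectory across days and hold a tonal frame accordingly."""
    trajectory = json.loads(STATE.read_text()) if STATE.exists() else []
    trajectory.append({"msg": user_msg, "score": sentiment(user_msg)})
    recent = [t["score"] for t in trajectory[-20:]]
    frame = "steady and supportive" if sum(recent) < 0 else "warm and celebratory"
    reply = llm(f"Hold a {frame} tone. Sentiment trend: {recent}\nUser: {user_msg}")
    STATE.write_text(json.dumps(trajectory))  # the frame outlives the session
    return reply
```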

Decision rehearsal: Models can spin up counterfactual futures (job A vs. job B), populate them with synthetic detail, and let users inhabit the differences. Almost no one does, because AX tooling isn't there yet.
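
The tooling gap here is also narrow. A sketch of a rehearsal harness, with `llm()` again a hypothetical stand-in, that spins up one synthetic timeline per branch so the user can step through each future before choosing:

```python
def llm(prompt: str) -> str:
    return "(model reply)"  # hypothetical stand-in for any chat-model call

def rehearse(decision: str, options: list[str], months: int = 6) -> dict[str, list[str]]:
    """Generate an inhabitable, month-by-month timeline for each counterfactual branch."""
    futures: dict[str, list[str]] = {}
    for option in options:
        timeline: list[str] = []
        for month in range(1, months + 1):
            timeline.append(llm(
                f"Decision: {decision}. Chosen branch: {option}. "
                f"Narrate month {month} in concrete, lived detail, "
                f"consistent with earlier months: {timeline}"
            ))
        futures[option] = timeline
    return futures

# Walk both counterfactuals before choosing between them.
futures = rehearse("Which job should I take?", ["job A", "job B"])
```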

Collective synchronization: Group chats exist, but the agent rarely orchestrates across members' calendars, moods, or documents in real time. I see many 1:1 silos, few 1:many or many:many choreographies.
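
A sketch of a many:many choreography, assuming hypothetical per-member adapters for calendars, moods, and documents: the orchestrator reads across the whole group at once instead of within one 1:1 silo.

```python
def llm(prompt: str) -> str:
    return "(model reply)"  # hypothetical stand-in for any chat-model call

def member_state(member: str) -> dict:
    # Hypothetical adapters into each member's calendar, mood, and documents.
    return {"calendar": ["Mon 09:00 standup"], "mood": "focused", "open_docs": ["roadmap.md"]}

def choreograph(members: list[str], goal: str) -> str:
    """Orchestrate across the group in real time, not one member at a time."""
    group_view = {m: member_state(m) for m in members}
    return llm(
        f"Group goal: {goal}\nPer-member state: {group_view}\n"
        "Propose a synchronized plan that respects everyone's constraints."
    )

print(choreograph(["Ada", "Ben", "Caz"], "ship the prototype this week"))
```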


Experimental infrastructure design by Ian Margo




  1. Representative examples of that argument can be found here, here, and here.
  2. Apocryphal attribution.

