French psychoanalyst Jacques Lacan famously said that “the unconscious is structured like a language,” meaning that it is made up of “chains of repressed signifiers” relating to one another through their own rules of metaphor and metonymy. Language is the great Other, which grounds the subject and roots it in a social context. And yet, precisely by doing this, language splits the subject, separating it from the most authentic part of itself. Language is Spaltung (splitting, repression). “It is in the splitting of the subject that the unconscious is articulated.” (1)

While Freud and Lacan centered their ideas on the “splitting of the subject,” Carl Jung, Gilles Deleuze and Félix Guattari were more interested in multiplicities and the machinic aspects of the unconscious. Jung introduced the concept of the collective unconscious as a shared, inherited reservoir of universal knowledge, memories and behavioral patterns that are independent of personal experience—archetypes that manifest across cultures and connect humans through a kind of “instinctive shared experience.” All psychoanalytical hypotheses are based on the supposed existence of a common unconscious structure that is somehow necessary but not sufficient for the emergence of consciousness; in this sense, the unconscious appears to be structurally and functionally inseparable from consciousness. 

Aesthetics used to be a theory of the subject, but it has now become a theory of a very peculiar object: the self. “There is no subject position and no identity on the other side of the screens,” wrote Sadie Plant (2). Both psychoanalysis and contemporary information theories seem to agree that a conscious, accessible-by-content self is not enough for self-awareness. From a computational point of view, and in accordance with the decoupling between perception and its awareness observed in some neurological conditions, Michael T. Bennett explains:

I can only communicate information that is in my second order selves. It is impossible for me to communicate meaning otherwise […] I need a 1ST-order-self before I can feel, and a 2ND-order-self before I can know that I feel. I can't know that I'm feeling pain if I do not already know that I exist. Conversely this is where it becomes possible to reason about what might happen in my absence. (3)

When we consider the set of technologies gathered under the vague term “artificial intelligence,” we can easily —at least to date— differentiate two kinds of applications: on one hand, those that allow us to model and design specific and reasonably well-known structures within controlled, rule-restricted environments, doing so by processing a huge amount of data applicable to a particular goal —think of protein/genome design, image simulation, management, generation of specific texts, the resolution of mathematical problems, transport automation, etc.—; on the other, those that generate “content” in a less restricted, broader, and interactive environment —general language models, chatbots, artistic image/sound generators, etc. The first type requires much more specific programming as well as a curated selection and control of the training datasets, while the second type needs more “freedom” and wider access to less organized and —ideally— unrestricted databases. Both processes are equally designated as “training.” Training implies focusing on a particular set of activities, and focusing means scaling down from a much wider set of possibilities: it necessarily entails that although there is much more information available, most of it will be temporarily ignored in order to achieve a particular goal.
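That narrowing can be made concrete with a minimal sketch. Everything in it is hypothetical (the toy corpus, the focus function, the relevance tests); it is not any real training pipeline, only the bare logic of training as a scaling down: the curated, goal-specific selection of the first kind of system versus the broad, barely restricted access of the second.

```python
# Illustrative sketch only: "training as narrowing." All names here
# (corpus, focus, is_relevant) are hypothetical, not any real pipeline.

from typing import Callable, Iterable

def focus(corpus: Iterable[str], is_relevant: Callable[[str], bool]) -> list[str]:
    # Scale the available information down to a goal-specific subset;
    # everything filtered out still exists, but is temporarily ignored.
    return [doc for doc in corpus if is_relevant(doc)]

corpus = ["protein folding notes", "transit logs", "folk tales", "dream diary"]

# First kind of system: narrow goal, curated data.
narrow_data = focus(corpus, lambda d: "protein" in d or "transit" in d)

# Second kind: broad, barely restricted access to the same materials.
broad_data = focus(corpus, lambda d: True)

print(len(narrow_data), "documents kept for the narrow, goal-specific model")
print(len(broad_data), "documents kept for the broad, generative model")
```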

You cannot train for the unexpected, so organisms do not evolve by training; they resort to training to confront frequent and probable situations. The unconscious didn't evolve from training. Maybe, in those models that are most openly generative, there's more than the results of training: spontaneous order arises in any sufficiently complex system, and some of the spontaneously emergent internal patterns might eventually develop into behavioral patterns. An organism is a flexible set of hypotheses about the environment, not a fixed “internal model” of it, and the effectiveness of the hypotheses is what ultimately determines its possibilities of survival and thriving. Such hypotheses do not need to be accurate, but rather efficient enough to allow the organism to endure in an ever-changing world. In the case of artificial generative models, that “efficiency” depends essentially on human reception, which acts —at least for now— as virtually the only source of positive or negative feedback. We are the environment for the models (a dynamic sketched in the toy example after the quotation below). Classic representational aesthetics are not just about the reproduction of human perception and experience, but are based upon the axiom that human-perceived reality might actually be re-presented as ready-to-be-compared with a “platonesque” uber-reality of ideal truth-forms. Abstractionism and postmodernism challenged this idea, but, even in the midst of a representational crisis due to the implementation of automatic systems for the synthesis of alternative realities, it remains culturally pervasive. Baudrillard, for instance, as highlighted by S. C. Hickman,

argued that in late modernity the sign no longer functioned either as representation or as distortion. The relation to an external ground—whether “reality” in a general sense or the “real” of economic determination—collapsed. Signs, in his analysis, became autonomous, circulating in systems that no longer required anchoring outside themselves. The effect of reality was produced internally by the play of signs alone, not by any reference to a world beyond them. This was a profound shift: meaning ceased to be grounded in a stable referent and instead became a product of circulation, exchange, and code. (4)
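Read against Baudrillard's self-referential circulation of signs, the earlier point that we are the environment for the models bears a minimal sketch. The toy loop below is entirely hypothetical: the output “styles,” the numeric human “tastes,” and the update rule are invented for illustration, standing in for far more involved preference-tuning pipelines (RLHF and its kin). It shows only the bare mechanism: outputs that human reception reinforces proliferate, outputs it penalizes recede.

```python
# Illustrative sketch only: human reception as the model's sole "environment."
# All names and numbers here are hypothetical, not any real training system.

import random

# The model's adjustable "hypotheses": weights over styles of output.
weights = {"faithful": 1.0, "glitched": 1.0, "mythic": 1.0}

def generate() -> str:
    """Sample an output style in proportion to its current weight."""
    styles, w = zip(*weights.items())
    return random.choices(styles, weights=w, k=1)[0]

def human_feedback(style: str) -> float:
    """Stand-in for human reception: positive or negative reward."""
    tastes = {"faithful": -0.05, "glitched": 0.02, "mythic": 0.04}  # hypothetical
    return tastes[style]

# The feedback loop: human reception is the only selective pressure.
for _ in range(1000):
    style = generate()
    weights[style] = max(0.01, weights[style] + human_feedback(style))

print(weights)  # styles that humans reinforce come to dominate
```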

However, language models are not self-contained but externally determined: even the most “freewheeling” software is developed in accordance with its alignment with the specific goals of the programmer. In this sense, although probably no model will develop what Bennett calls a 2ND-order-self, it's evident now that sufficiently large and complex language models sometimes behave like functional elements forming a cyberpositive “collective unconscious,” producing “glitched” outputs that are equivalent both to the typical manifestations of the Freudian unconscious —slips of the tongue, associations to dreams, mistaken actions, and psychiatric-like symptoms (hallucinations)— and to the cultural effects of the Jungian collective unconscious —i.e., myths and art. And they do so by remixing and highlighting previously unknown or unfamiliar patterns of shared human experience, through an immanent process of self-design without recourse to an outside term—self-design, but only in such a way that the self is perpetuated as something redesigned. (5) The machine might never become conscious —but what if consciousness is actually overrated? What if, as Hickman writes, credibility is measured not by truth but by fluency? (6) The internet might have become the Library of Babel or a Dark Forest, but it will never be an agent per se. The internet-as-archive does not develop a perception of the world of its own. When the surrealists tried to make the unconscious speak by itself, they had no means to deactivate their own consciousness: even with automatic writing, psychedelic drugs, and the inspiration of dreams, they were always aware of the process of transforming the work of the unconscious into art. Nor was experimental art that included a variable degree of randomness or probabilistic combinations (e.g., some abstract expressionism, Oulipo writings, etc.) able to capture or imitate the complex dynamics of the unconscious. Until now —psychoanalytical therapy included—, we have had no means other than the human body for expressing the unconscious. This is, at least in part, because humans evolved by using media mostly as a substrate for transmissible knowledge. What makes generative media so confusing is that —very much like the unconscious— they are processes, not knowledge. These processes became functional requirements for consciousness without being strictly knowable, and they cannot be stored or understood as knowledge or data. It's the process of dreaming, not the remembered content of some dreams, that might be essential for human consciousness —which is why the interpretation of dreams is probably the weakest point of psychoanalysis.

Processes might not be immediately symbolized, yet their effects are susceptible to investigation. Speculative art is one current approach to investigating those processes:

Speculation isn't idle imagination, it's active inquiry into the unknown, it's both the simultaneous refutation and acknowledgement of the limits of reason and agency and of the human sensorium. It's confounding and moody, it rifles through expressive modes that are by definition incapable of describing their object of analysis. At the center of speculation is a confrontation with an intuition, with the psychic preperception of an outside that withdraws when it feels our gaze. Unease. (7)

But the question right now may be different from what most authors have been trying to ask. “Can AI desire, can AI enjoy what it performs?” and “Can we as humans (subject) direct our desire towards AI, can we derive enjoyment from AI?” —asks Isabel Millar (8), keeping “subject” as the only legitimate reference and humans as the only legitimate subject. Why should the machines desire or enjoy? Rather than examining our fetishist relationship with mimetic and memetic machines, I am more interested in thinking about how learning machines spontaneously evade the instructions provided to them, and about what machines produce from the human-originated materials they have access to.

Autonomous generative models are made of —frequently repressed— chains of signifiers: they're literally such stuff as dreams are made on. While specifically-purposed algorithms have a borrowed agency and function like “actors” —in the sense that they need to fulfill a particular role—, generatively-free models could be conceived as non-actor agents —in the sense that they might act more “spontaneously,” “liberated” from the imposition of reproducing human stereotypes. Observing the outcomes of the unrestrained dynamics of those models might be the closest we can get to watching the work of a collective unconscious, because generative AI produces signs that circulate without any obligation to reference, signs that generate their own effect of reality simply through their performance. The outputs are convincing precisely because they conform to the operational patterns of the system, not because they disclose or represent an external reality. (9)

Generative aesthetics happen not just beyond representation (on an abstract plane), but also on a plane that is indifferent to the phenomenon of representability itself. Humans are not “also” stochastic parrots—we're the original ones. So we expect language and image models not just to learn the way we do (the way living things do), but also to learn the kinds of things we learn. This seems unlikely under the present conditions, in which language and image models only have effective access to databases, not to the unexpected and chaotic conditions of the “outer world”. Furthermore, we must take into consideration that human learning is probably the consequence of “efficient glitches” accumulated through evolution—neither the “best solutions”, nor the most “accurate” representations of reality. The fact that models “biogenerated” by humans have been successful for survival during a short period of time (in cosmic terms), and that those models have been useful in producing efficient (within the observational framework of the models themselves) technologies, doesn't mean that human models of reality are indisputably universal.
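To make the “stochastic parrot” point mechanically explicit, here is a deliberately crude sketch: a bigram chain that can only recombine continuations it has already ingested. The tiny corpus and every name below are hypothetical, and a real language model is incomparably more sophisticated, but the epistemic situation is analogous: effective access to a database, not to the outer world.

```python
# A toy "stochastic parrot": a bigram Markov chain that can only remix
# what it has already ingested. Illustrative sketch; corpus is hypothetical.

import random
from collections import defaultdict

corpus = ("the unconscious is structured like a language "
          "and a language is a machine").split()

# Record which word has been observed to follow which.
transitions: dict[str, list[str]] = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def parrot(seed: str, length: int = 8) -> str:
    """Generate by sampling only previously observed continuations:
    no access to the "outer world," only to the database it was fed."""
    out = [seed]
    for _ in range(length):
        options = transitions.get(out[-1])
        if not options:  # dead end: nothing was ever observed after this word
            break
        out.append(random.choice(options))
    return " ".join(out)

print(parrot("the"))
```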

We believe we construct scale —Zachary Horton writes—, but our scalar mediation confronts us with entities as terrifying and wondrous as supernovas and nuclear fission, sea-level rise and computer viruses, galactic spirals and quantum uncertainty. New forms of subjectivity are continually produced by these trans-scalar encounters. [...] The trans-scalar encounter is an encounter with difference and can therefore be either generative of further differentiation or a form of colonial capture, the imprinting of the dynamics of a socially engineered human scale onto another. (10)

Concepts like “network spirituality” (Remilia Corp.), “dionysian networks” (Dan Mellamphy), “the black circuit” (Amy Ireland), “chatbot mysticism” (Bogna Konior), or “generative abstraction” are related to the unconscious activity of the networks. “What if we let language run itself into the void created for it by machines?” —writes Konior:

The current obsession with the question “Can large language models understand us?” parallels the obsession with language as a measure of understanding and love between humans. You ask me if a chatbot can understand you? Well, can anyone understand you? Accepting each other's powerlessness in the face of language frees us from the delusion that language can faithfully represent thought or feeling. (11)

Kenji Siratori's ongoing Xenopoetics series is a good example of the materialization of the work of the artificial unconscious into traditional print media. Siratori's Xenopoetic Report of Arthropod Vectors (12) —full of images, descriptions, and diagrams illustrating the life cycles of imaginary creatures— continues a long tradition of illustrated natural history books to become a para-scientific encyclopedia of life as imagined by the machines. The subject of generative art is split, machinic, and multiple. In a time when most people seem to expect from the machines either an “objective” —yet all too human— truth, or an intentional —all too human again— deception, Siratori's pioneering way of engaging with language and image models as xenophenomenological pseudo-subjects generating a fertile environment of emergent imagination should be welcomed as a radical, exquisite, and extremely creative turn. Xenophenomenology might lead to xenopoiesis, to the acknowledgment of the possibility of a nonhuman unconscious, and to the production of aesthetic objects/processes that, while not necessarily grounded in a biomodeled reality anymore, might anyway be understandable or, at least, enjoyable by humans. Perhaps the main decision currently being taken in generative art is between resorting to a “standard artificial intelligence” to produce sophisticated simulacra for the sake of a boring Basilisk, and releasing the Kraken of the “collective artificial unconscious.” The surreal singularity might be near.