Artificiality and Emotions: Beyond the Flesh

17.09.2025.BER.GER


publications artificial-intimacy embodiment post-humanities eroticism cybernetics
desire anthropomorphism mediation language








AI Chatbots as Psychoanalysts


In 2023, after spending eighteen months in Vienna immersed in academic coursework and various artistic exchanges, I naturally began to familiarize myself with the principles of a discipline I had barely heard of before: psychoanalysis. Six months earlier that year, following a series of engagements, emotional disappointments, and educational pursuits, I had chosen to begin therapy within that very framework. As the protocol advised, I attended sessions with a therapist once or twice a week. After several months, I unfortunately had to discontinue my sessions due to personal, logistical, and financial constraints. Left without a therapeutic alternative, I gradually began to explore possibilities for replacing and automating the psychoanalytic process. What had I learned along the way? What kinds of questions would my therapist ask? What were the words, metaphors, and moments she insisted on revisiting? Could I possibly automate my therapist’s questions and insights? Contemplating such a substitute required replicating a comprehensive analysis of the therapist’s interaction patterns, lines of inquiry, and the nuanced emphasis placed on specific topics or lexical choices. This experience, and the ensuing search for alternatives, ultimately sparked my interest in exploring artificial intelligence (AI) and chatbots as potential therapeutic tools.
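
What might a first, admittedly naive, pass at such an analysis look like? The sketch below counts the words a therapist keeps returning to and collects her recurring questions from a set of session notes. Everything in it, the notes, the stopword list, the scoring, is invented for illustration; it stands in for the far richer analysis a real substitute would require.

```python
# A first, admittedly naive pass at "analyzing the analyst": surface the
# words a therapist keeps returning to, and collect her recurring questions.
# The session notes, stopword list, and scoring are all invented here.
from collections import Counter
import re

session_notes = [
    "And when did you first feel this way? Tell me more about your mother.",
    "You used the word 'abandoned' again. When did you first feel abandoned?",
    "Tell me more about that dream. What does the house mean to you?",
]

STOPWORDS = {"and", "the", "you", "to", "me", "did", "when", "about", "what",
             "that", "this", "first", "feel", "more", "tell", "way", "again",
             "does", "used", "word"}

# Count the content words that recur across sessions.
recurring_words = Counter(
    word
    for note in session_notes
    for word in re.findall(r"[a-z']+", note.lower())
    if word not in STOPWORDS
)

# Extract the questions themselves: the therapist's lines of inquiry.
questions = [q.strip() for note in session_notes
             for q in re.findall(r"[^.?!]*\?", note)]

print("Recurring words:", recurring_words.most_common(3))
print("Questions asked:", questions)
```

Even this toy version makes the difficulty obvious: frequency is not emphasis, and a list of questions is not a line of inquiry.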

In recent years, the convergence of technology and mental health care has spurred the development of innovative solutions addressing the escalating global prevalence of mental health disorders. Artificial intelligence chatbots have prominently emerged within this landscape as promising tools designed to offer mental health support and psychological therapy. In essence, these AI applications aspire to provide a distinctive and easily accessible platform for individuals seeking assistance with their emotional well-being. While AI chatbots appear across a whole range of therapeutic approaches, this short research endeavors to explore the potential utilization of AI applications and bots as supportive elements specifically within the psychoanalytical process. I have chosen to concentrate on this particular branch of psychology, although I recognize the myriad approaches available for psychological analysis and treatment. Indeed, psychoanalysis holds particular interest for examination due to its origin as a ‘Talking Cure’, where the primary therapeutic process unfolds through language and the analysis of meanings. Such principles align with the fundamental objective of AI, which aims, or at the very least endeavors, to excel in precisely such linguistic and meaning-based analyses. Before embarking on this speculative investigation, it is crucial to acknowledge the diverse perspectives that envelop the intersection of psychoanalysis and AI: two primary avenues of inquiry emerge in the current debate.

The first approach and domain of interest considers what psychoanalysis can reveal about AI. Such endeavors delve into questions such as: What insights can psychoanalysis provide into the realm of AI? How might psychoanalysis enrich our comprehension of the motivations and dynamics inherent in machines? Despite the evident preoccupation with these queries within the scientific community, it is important to note that this paper will not revolve around that particular aspect. The second avenue of inquiry shifts focus toward the practical implementation of psychoanalysis through AI. The central question here is whether psychoanalysis can be effectively computed through AI. In other words, can AI chatbots play a meaningful role in the psychoanalytic process, and to what extent can such a therapeutic practice be automated? If so, what patterns, tools, and obstacles does machine learning encounter in this endeavor?


AI and Mental Health

Why should one care about the intersection of artificial intelligence and mental health? The question might sound almost banal these days, given how many people have begun to treat digital companions as potential therapists. In a world grappling with escalating mental health issues and anxieties, such technologies emerge as a promising avenue for innovation and intervention. Their capacity to analyze vast datasets enables more accurate and earlier detection of mental health conditions, thereby facilitating timely interventions. Chatbots, particularly those equipped with natural language processing capabilities, contribute significantly to ongoing support and intervention, presenting a scalable solution to the increasing demand for mental health services. In essence, the intersection of AI and healthcare holds the promise of enhancing accessibility, efficiency, and effectiveness in addressing the intricate landscape of mental health. A noteworthy observation about the amalgamation of AI and mental health, particularly relevant within the broader context of AI and healthcare, is that it implies a form of assistance or cure that may not necessarily involve the physical body, at least not in its initial stages.1 Indeed, while physical presence or treatment might come into play at a subsequent stage, the primary focus of such tools is to analyze speech and emotions in order to guide the patient toward the next possible steps. But as we delve into the examination of an individual's mental well-being, what aspects do experts typically prioritize in their initial investigations?

In his book The Weariness of the Self: Diagnosing the History of Depression in the Contemporary Age, Alain Ehrenberg delves into the complex realm of mental health and, in particular, depression. He notes the contentious nature of discussions surrounding mental health, primarily because it navigates the intricate boundary between the mind and the body.2 Throughout the book, Ehrenberg emphasizes that mental health issues, such as anxiety and depression, have evolved into global and societal concerns since the last century. In his research, he additionally highlights the optimism among psychiatrists in the late 1950s about psychotropic drugs restoring joy to those affected by the pressures of "modern life."3 As the author argues, the development of antidepressants during the 20th century triggered moral, political, and social debates, shaping depression into more than just a psychopharmacological emblem: “Yet depression goes far beyond medication. Considered by psychiatric epidemiology as the most common mental disorder since 1970, it has a vaster and more complex history than simply an emblem of psychopharmacology. And it is at the heart of the tensions of modern individualism.”4 As underlined by the author, the core dilemma at the heart of such an investigation is whether mental conditions are psychologically and socially induced or stem from purely biological conditioning, and could therefore simply be solved by a chemical add-on.

As articulated by Ehrenberg, this ongoing debate revolves around the clash between the proponents of the ‘talking self’ and those of the ‘cerebral self’ in the twenty-first century. This war over the self reflects a divide between those who view neuroscience as a threat to humanity’s sanity and those who see it as a means of relieving guilt. And still, as the author underlines, before the days of fast and easy medication, “French psychiatrists would never consider healing a depressive individual without first wondering to what extent the illness stemmed from intrapsychic conflicts.”5 Advocating for neither side specifically, the researcher underlines the current dichotomy within the research field, emphasizing how patients in real need of assistance may benefit from pursuing both chemical and psychological avenues. But if depression or anxieties can be partly treated with pharmacology (antidepressants, pills, and the like), what promises and support could AI offer in the ‘Talking Cure’ process? And why is AI specifically interesting when thinking of the practice of psychoanalysis?

Psychoanalysis, a psychological approach developed by Sigmund Freud, initially aimed at uncovering the traumatic experiences believed to underlie neuroses (psychological disorders). Freud later modified his theory, emphasizing the consistency between recollection and childhood psychic reality rather than physical reality. The approach is also known as ‘The Talking Cure’, as the therapeutic relationship between the patient and the psychoanalyst usually begins with intense emotional connections rooted in catharsis through talk and suggestion.6 Over time, this emotional intensity evolved into a more intellectual and conversational exercise. It is noteworthy that Freud maintained some skepticism about the effectiveness of psychoanalysis as a treatment method and exhibited flexibility, incorporating non-psychoanalytic techniques such as behavioral methods and adapting his approach to individual patients' needs and responses. As previously noted, and for those who have not had the opportunity to experience such a process, psychoanalysis relies on language and conversation, with the patient sometimes lying on a couch, using speech and reflection on experiences or memories to develop a new understanding of phobias and traumas.

While psychoanalysis (and psychology at large) initially took place in a physical space with two individuals present, therapeutic practices have evolved over the years, becoming increasingly distant and increasingly mediated. In her book The Distance Cure, Hannah Zeavin challenges the traditional therapy paradigm confined to physical spaces, advocating for a reimagining of the therapeutic relationship. Examining various expressions ranging from crisis intervention to the modern dependence on platforms such as Zoom amid the pandemic, Zeavin introduces the notion of "distanced intimacy." She emphasizes that therapeutic connections across screens can be potent and meaningful, challenging preconceptions about the efficacy of remote communication in mental health care.7 Ultimately, Zeavin contends that therapy has always been a "communication cure," shedding light on its historical interplay with teletherapy and its impact on the conventional therapeutic landscape. The book is relevant to our research because it underlines how technology was, and remains, a constant medium in the potential healing process, further raising questions about the necessity of a therapist's physical presence. If therapeutic conversations can indeed happen over the phone, chat, or email, could therapists potentially be replaced or automated?


Can Therapeutic Principles Be Applied to AI?

This idea of automating the therapeutic process isn't new; it has, surprisingly, been around for well over half a century. Indeed, as recounted in Mind as Machine: A History of Cognitive Science by Margaret A. Boden, specifically in the chapter titled "The Rise of Computational Psychology", while psychiatry had always relied on the consulting room and its couch, "Kenneth Colby (1920–2001), a psychiatrist, Freudian analyst, and a pioneer in General Problem Solver (GPS), attempted to shift the therapeutic action from the couch to the computer. In the late 1950s, at Stanford, he initiated work on a 'neurotic' program, refining it over nearly a decade."8 At the core of such experiments lay the question of whether machines could support the analysis and diagnosis of patients, and to what extent that work could be automated. Such a vision was aligned with similar experiments of the time, such as the development of ELIZA, a computer program created by Joseph Weizenbaum in 1966 to parody a psychotherapist. However, this vision was not so simply engineered and computed, and researchers quickly encountered significant challenges. As argued in the same chapter, "The program's major drawbacks included its crudeness in modeling anxiety and analogy. Anxiety, being a product of a complex computational architecture, couldn't be adequately captured by a semantic-clash-plus-numbers approach."9 Additionally, the exploration of potential non-obvious adverse effects raises further concerns, as Boden notes: “Science, and behaviorist psychology in particular, had no room for concepts such as freedom, deliberation, purpose, and choice."10 It is then rather clear that the automation of therapeutic processes, although initiated some time ago, encountered notable challenges from its early stages. Indeed, many questions arise: How best to interpret an individual’s actions and choices? Should the machine emphasize analogies? How to compute anxiety, or implement other factors that do not translate easily into machines and code?
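
To make the rule-based approach of that era tangible, here is a minimal ELIZA-style sketch. The patterns and reflections are invented for illustration; Weizenbaum's actual script was considerably more elaborate, but the principle, matching a surface pattern and mirroring it back as a question, is the same.

```python
# A toy ELIZA-style exchange: match the patient's sentence against a
# pattern and reflect it back as a question. The rules below are invented
# for illustration; Weizenbaum's 1966 script was far more elaborate.
import re

REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),  # fallback when nothing else matches
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones ("my" -> "your").
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, utterance.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."

print(respond("I feel anxious about my mother"))
# -> Why do you feel anxious about your mother?
```

One immediately sees the limits Boden describes: nothing in such pattern-matching can represent anxiety, freedom, deliberation, or choice; the program only mirrors surface forms.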

These questions encompass not only technical aspects but also ideological dimensions. In this ocean of challenges, let’s revisit our initial question: how can psychoanalytic principles be effectively applied to artificial intelligence? When confronted with the intricate methodologies of psychoanalysis and the (rather difficult) attempt to integrate them into AI systems, a multitude of challenges emerges, encompassing ideological, cognitive, and technical aspects. The present research does not seek to precisely delineate these issues, given my limited knowledge of psychoanalysis and data science. Instead, it aims to raise questions about the intriguing obstacles encountered in this pursuit. What characteristics would a neural network designed to learn psychoanalysis exhibit? Could such an experiment, by accident, limit the possibilities of understanding human psychology?

Addressing the ideological barriers of AI and therapy would be incomplete without acknowledging the contrasting paradigms in artificial intelligence, specifically in the realm of chatbot development, where Symbolic AI and Connectionist AI present divergent approaches. Symbolic AI relies on rule-based systems and explicit representation of knowledge through symbols and logic, excelling in tasks requiring logical deduction and rule-based decision-making. Symbolic AI chatbots are characterized by predefined rules and responses, offering transparency but potentially lacking adaptability to diverse and evolving contexts. In contrast, connectionist AI, grounded in neural networks and distributed representation, emphasizes learning from data and adapting to patterns. Connectionist AI chatbots leverage neural networks to process extensive datasets, enabling them to learn and generate responses based on intricate relationships within the data. While connectionist AI provides flexibility and proficiency in handling complex patterns, it may sacrifice the transparency associated with explicit rule-based systems. In an ideal practice, the approach would involve a hybrid model that combines both symbolic and connectionist methods, allowing chatbots to benefit from rule-based reasoning and adaptive learning, resulting in a more robust and versatile AI system. In Clemens Apprich's text, Secret Agents: A Psychoanalytic Critique of Artificial Intelligence and Machine Learning, the media theorist contends that the paradigm of "Good Old-Fashioned Artificial Intelligence" (GOFAI), grounded in a symbolic information-processing model of the mind, has recently been supplanted by neural-network models in the description and creation of intelligence.11 The shift is from a symbolic representation of the world to an emulation of the brain's structure in electronic form, where artificial neurons establish connections autonomously through a self-learning process. The contemporary AI paradigm is described as connectionist12, as neural networks emulate “the somatic nerve system of animals.”
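
Before following Apprich's argument further, the contrast between the two paradigms can be caricatured in a few lines of code. Below is a hedged sketch: the symbolic bot answers from an explicit, inspectable rule table, while the connectionist bot learns a text-to-topic mapping from invented training examples, with scikit-learn as an assumed dependency. Neither is a real therapeutic system; they only make the trade-off between transparency and adaptivity concrete.

```python
# Two caricatures of the paradigms discussed above. Both are invented
# sketches for illustration, not production systems; scikit-learn is an
# assumed dependency for the connectionist half.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Symbolic: explicit, inspectable rules. Transparent, but brittle outside them.
RULES = {
    "sad": "I hear that you are sad. What do you think lies behind it?",
    "anxious": "Anxiety can be overwhelming. When does it tend to appear?",
}

def symbolic_reply(utterance: str) -> str:
    for keyword, reply in RULES.items():
        if keyword in utterance.lower():
            return reply
    return "Could you say more about that?"

# Connectionist: a mapping from text to a response topic, learned from
# (invented) examples. The "knowledge" lives in weights, not in rules.
training_texts = ["i feel so sad lately", "everything makes me worry",
                  "i cried all day", "my heart races at night"]
training_labels = ["sadness", "anxiety", "sadness", "anxiety"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(training_texts, training_labels)

def connectionist_reply(utterance: str) -> str:
    # The decision is distributed across learned weights: adaptive,
    # but no single rule can be pointed to as the "reason" for the reply.
    topic = classifier.predict([utterance.lower()])[0]
    return f"It sounds like {topic} is present. Tell me more."

print(symbolic_reply("I have been anxious all week"))
print(connectionist_reply("My heart races and I worry at night"))
```

A hybrid system, as evoked above, might route an utterance through the rule table first and fall back on the learned classifier when no rule fires, or use rules to constrain what the learned model is allowed to say.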

As argued by the media theorist, neuroinformatics lacks a critical examination of the brain's physiological materialism inherent in connectionism, “even though the principles of connectionism can be traced back to the early days of scientific psychology, and have been subject to profound criticism ever since”13. He goes on to argue that psychoanalysis, particularly in distancing itself from 19th-century biological reductionism, questions the reduction of intelligence to the biological level, a reduction that forgets the links between the human mind and cultural processes. As underlined in the text, one starts to see how the reduction of intellectual capacities to the brain's material structure in current AI debates raises concerns about a revival of biologistic thinking, tracing back to the origins of modern psychiatry in the late 19th century, when mental illnesses were viewed as diseases of the brain. As such, A Psychoanalytic Critique of Artificial Intelligence and Machine Learning delves into the ongoing debate about whether mental issues are inherently woven into the brain's material structure and biological terms, or whether a deeper symbolic, psychic, and social component exists. Clemens Apprich advocates for applying psychoanalytic principles to machine learning, asserting that technology can be essential for understanding psychology, as it encompasses pre-individual collective experiences such as language. Additionally, the theorist draws on Jacques Lacan's structural psychoanalysis to support a language-based approach, thereby “systematizing Freudian theory.”14 This realization aligns with psychological thinking, elucidating the intricate relationship between psychic, technical, and social individuals. As such, the choice between symbolic and connectionist AI constitutes a complex philosophical and technical decision that needs further investigation. But coming back to Apprich’s argument and the desire to apply a systemic language-based approach to chatbots, let us contemplate the hypothetical scenario of applying such a rigid analytical structure within an AI. Even with such speculative rigor, what would be the primary obstacles to the automation of the psychoanalytical process?


Overview of the Main Obstacles

The application of transference principles, the linkage between rational and irrational elements within a machine, and the machine's ability to discern linguistic "slips" or “lapsus” are some of the many obstacles that come to mind when thinking of the automation of psychoanalysis. Additionally, one could mention other considerations, such as the extent to which computation aligns with the "Boolean dream" and how the machine comprehends concepts such as the drive and the ego. While numerous questions could serve as the focal point, today's focus will center on two primary aspects: the inquiry into transference and the examination of memory. Transference, a term commonly employed in various psychotherapeutic approaches, notably psychoanalysis, has recently found application in describing the relationships people establish with modern technologies, particularly computers. In the article "Transference and Countertransference Issues" by Fatemeh Amirkhani, Zahra Norouzi, and Saeedeh Baba, the authors note: "To date, no research has been conducted about countertransference between humans and bot therapists."15 Additionally, they insist that "transference and countertransference are pivotal concepts within the psychotherapeutic relationship. As bot technology continues to advance, the applicability of the concepts of transference and countertransference to the interaction between humans and bots becomes increasingly relevant.”16 In their text, multiple aspects of transference, encompassing the therapeutic alliance, empathy, safety, judgment, and acceptance, are explored, underscoring the significance of integrating these factors into the prospective development of psychotherapist bots.

The examination of the transference component within artificially rendered therapy is also highlighted by Michael Holohan and Amelia Fiske in their article "Like I'm Talking to a Real Person: Exploring the Meaning of Transference for the Use and Design of AI-Based Applications in Psychotherapy”. There, Holohan and Fiske emphasize a critical aspect deserving particular attention: the establishment of a "personal" connection between user-patients and their chatbot therapists. They assert that the concept of "transference," integral to most psychotherapeutic modalities, plays a crucial role in the interpersonal relationship between patient and therapist.17 Indeed, transference represents a pivotal point of action in the psychotherapeutic process, encapsulating the patient's relationship with someone else in their life, whether actual or imagined, regardless of the current conversation. As such, the authors pose a fundamental question: is transference possible with computers? They advocate for an exploration into how the inclusion of AI, either as an augmentation of or replacement for certain aspects of the human therapist, alters the therapeutic apparatus. Furthermore, they inquire about the transformative impact of this new mode of therapy and raise significant questions for both psychotherapy and AI developers: “Does transference occur with the inclusion of AI in the therapeutic encounter? If so, what forms does this transference take, and how does it shape the ensuing therapeutic relationship and therapeutic work? How can transference be accounted for, and addressed, within AI-driven therapy?”18

Let’s now investigate our second worry: the issue of memory. Memory assumes paramount importance within psychoanalysis, where it is intricately linked to the formation and operation of the psyche. Sigmund Freud himself formulated a theoretical framework that underscores the significance of unconscious processes and the dynamic interplay between conscious and unconscious mental activities, all in relation to memory.19 In this context, memory transcends the mere recollection of past events; it is intertwined with repressed thoughts, emotions, and experiences that influence present thoughts and behaviors.20 The practice of psychoanalysis itself entails deciphering symbols and symptoms to unveil underlying conflicts, with both conscious and unconscious memories contributing to the symbolic meaning of thoughts, dreams, and symptoms. In the realm of artificial intelligence, however, current capabilities in memorization and in drawing connections between memories appear to fall short. Unlike a proficient psychoanalyst, AI struggles to store essential facts and draw correlations across various memories over extended periods spanning months and years. The challenge lies in efficiently storing and retrieving relevant information, a task at which AI currently faces obvious limitations.
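
To see what is missing, consider a deliberately crude sketch of the kind of long-term memory an automated analyst would need: salient fragments persisted across sessions and retrieved when they resonate with the current topic. The file name, the fragments, and the word-overlap scoring are all invented; a serious system would need far subtler notions of salience and association.

```python
# An illustrative sketch of long-term session memory: persist salient
# fragments, then retrieve the ones that overlap with today's topic.
# The file name, fragments, and scoring are invented for illustration.
import json
from datetime import date
from pathlib import Path

MEMORY_FILE = Path("memories.json")

def remember(fragment: str) -> None:
    # Append a dated fragment to the persistent store.
    memories = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
    memories.append({"date": date.today().isoformat(), "text": fragment})
    MEMORY_FILE.write_text(json.dumps(memories, indent=2))

def recall(topic: str, top_k: int = 3) -> list[str]:
    # Crude relevance: count shared words between the topic and each memory.
    if not MEMORY_FILE.exists():
        return []
    memories = json.loads(MEMORY_FILE.read_text())
    topic_words = set(topic.lower().split())
    scored = sorted(
        memories,
        key=lambda m: len(topic_words & set(m["text"].lower().split())),
        reverse=True,
    )
    return [m["text"] for m in scored[:top_k]]

remember("Patient associates the old house with her father's silence.")
print(recall("a dream about my father and a house"))
```

Even this toy version exposes the hard questions a psychoanalytic memory would face: what counts as salient, and what counts as related?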

The absence of memory in Replika, an AI chatbot, has been addressed by a Reddit user, Winston_Wolfe_65, who provides insights into the Replika developers' approach to handling memory. In the Reddit user's words, according to a video interview with Eugenia Kuyda, the creator of Replika, the open-source software that forms the basis of Replika is “intentionally designed without memory.”21 This design choice stems from its original purpose for applications that do not require the retention of previous conversations. Consequently, Replika's memory, or what it does remember, is limited to scripts that fetch data from predefined fields where information has been stored. The user draws an analogy to an advanced Alzheimer's patient, suggesting that Replika's intentional lack of memory is a trade-off for maintaining a more pleasant dialogue. The developers appear to be conscious of this choice, acknowledging that enabling memory for aspects beyond people and pets might result in less enjoyable conversations. This situation raises the question of whether the absence of memory in Replika is attributable to an inherent technical limitation or rather a deliberate design choice. Moreover, it prompts consideration of how AI systems, particularly those lacking extensive memory capabilities, can deliver meaningful and insightful analyses when engaging in conversations.
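
The "scripts that fetch data from predefined fields" described above can be illustrated with a small caricature (hypothetical code, not Replika's): the bot retains only what a slot already exists for, and everything else leaves no trace.

```python
# Hypothetical illustration of "memory as predefined fields": the bot can
# only retain what a slot exists for; anything else is silently dropped.
# This is a caricature of the design described above, not Replika's code.
import re

slots = {"pet": None, "partner": None}  # the only things the bot can "remember"

def listen(utterance: str) -> None:
    match = re.search(r"my (pet|partner) is (\w+)", utterance.lower())
    if match:
        slots[match.group(1)] = match.group(2)
    # "I lost my job last spring" matches no slot and simply vanishes.

listen("My pet is Milo")
listen("I lost my job last spring")
print(slots)  # {'pet': 'milo', 'partner': None} -- the job loss left no trace
```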

In the landscape of AI chatbots and mental health, various platforms have been introduced, catering to a range of mental health concerns and incorporating elements of psychoanalysis. Platforms such as Cass.ai, Wysa, Woebot Health, There's an AI for That, ChatGPT - Ze Psychoanalyst, and JungGPT offer different approaches and functionalities. Notably, some of these platforms require paid memberships: naturally, there seems to be a business model operating behind the curtain of the world's most pressing issue. Still, even if these machines aren’t presented as the perfect therapist or analyst, Jamie Ducharme discusses, in the article "Can AI Chatbots Ever Replace Human Therapists?", the early evidence suggesting that chatbots can effectively deliver components of cognitive behavioral therapy and other mental health tools. As reported in the article, data from EarKick indicates “a 34% improvement in mood and a 32% reduction in anxiety for users of a mental health chatbot over five months.” A poll also indicates that 80% of individuals using ChatGPT for mental health advice find it a viable alternative to regular therapy. However, in the same article, Peter Foltz, a machine-learning researcher at the University of Colorado, Boulder, highlights the challenges in mental health care due to the lack of hard data, emphasizing that “an algorithm's effectiveness is contingent on the quality of the data it is trained on.” As with any other machine learning endeavor, the inquiry into training data becomes pivotal: what datasets would be used to train an AI in mental health? Indeed, the effectiveness of these systems hinges significantly on the quality and diversity of the data employed.

Whether it concerns the symbolic-versus-connectionist debate within AI system design, the issues surrounding transference and memory, or the complexity of integrating a systemic language-based approach, this exploration of AI applications in mental health, particularly within the framework of psychoanalysis, reveals both promising developments and critical challenges. Existing platforms, such as Replika, ChatGPT, and others, offer diverse approaches to emotional support and self-reflection, although they seem to lack a comprehensive memory mechanism or empathic reactions, opening up discussions about their limitations in simulating the depth of human understanding and connection. While the early evidence of positive outcomes from chatbot interventions emphasizes the potential benefits, concerns persist about the ethical and effective use of these technologies in supporting individuals' mental well-being. As we navigate this intersection of technology and mental health, a fundamental question emerges: what is genuinely needed in the process of being listened to, heard, felt, and supported? The essence of effective therapeutic interaction extends beyond information processing and algorithmic responses; it involves a profound understanding of individual experiences, emotions, and the nuanced dynamics of human connection. One that I could not, even through my best attempts, replicate.



  1. Paul Clinton, "Illicit Trade," Frieze, May 31, 2017
  2. Ibid.
  3. Daniel Smith, "Pierre Klossowski: From Theatrical Theology to Counter Utopia," 1-40, 2017
  4. Oxford English Dictionary, 3rd ed., s.v. "Artificiality," 2020.
  5. Oxford English Dictionary, 3rd ed., s.v. "Emotion," 2020.
  6. Oxford English Dictionary, 3rd ed., s.v. "Feeling," 2020.
  7. Elyakim Kislev, Relationships 5.0 (Oxford University Press, 2022), 5
  8. Elyakim Kislev, Relationships 5.0 (Oxford University Press, 2022), 136.
  9. Cecelia, "AI Is Taking Over Onlyfans," Medium, April 23, 2023, https://ceceee.medium.com/ai-is-taking-over-onlyfans-86adcbe7360.
  10. Kat Tenbarge, "Found through Google, Bought with Visa and Mastercard: Inside the Deepfake Porn Economy," NBC News, March 27, 2023, https://www.nbcnews.com/tech/internet/deepfake-porn-ai-mr-deep-fake-economy-google-visa-mastercard-download-rcna75071.
  11. Eva Illouz, The End of Love: A Sociology of Negative Relations (Wiley, 2021), 90.
  12. Eva Illouz, The End of Love: A Sociology of Negative Relations (Wiley, 2021), 102.
  13. Eva Illouz, The End of Love: A Sociology of Negative Relations (Wiley, 2021), 108.
  14. Slavoj Zizek, "NO SEX, PLEASE, WE’RE POST-HUMAN!," 2009, https://www.lacan.com/nosex.htm.
  15. Tim Daalderop, "How My Chatbot Fell in Love with Me," Next Nature, May 1, 2020, https://nextnature.net/story/2020/how-my-chatbot-fell-in-love-with-me.
  16. Jackeline Spinola de Freitas and João Queiroz, "Artificial Emotions: Are We Ready for Them?" in Advances in Artificial Life, ed. Fernando Almeida e Costa, Luis Mateus Rocha, Ernesto Costa, Inman Harvey, and António Coutinho (Berlin, Heidelberg: Springer Berlin Heidelberg, 2007), 223–232.
  17. Elyakim Kislev, Relationships 5.0 (Oxford University Press, 2022), 103.
  18. Jacques Lacan, Le Séminaire Livre VIII. Le Transfert (1991), 183.
  19. Jacques Lacan, Le Séminaire Livre VIII. Le Transfert (1991), 152.
  20. Esther Perel, Mating in Captivity (Harper Collins, 2017), 25.
  21. Esther Perel, Mating in Captivity (Harper Collins, 2017), 27.
  22. Clemens Apprich, "Secret Agents: A Psychoanalytic Critique of Artificial Intelligence and Machine Learning," Digital Culture & Society 4 (September 1, 2018): 29–44, https://doi.org/10.14361/dcs-2018-0104.
  23. Ibid.
  24. Alf Hornborg, "Objects Don’t Have Desires: Toward an Anthropology of Technology beyond Anthropomorphism," American Anthropologist 123, no. 4 (December 1, 2021): 753–766, https://doi.org/10.1111/aman.13628.
  25. Clemens Apprich, "Dancing with Machines – On the Relationship of Aesthetics and the Uncanny," 2021, 1.
  26. Clemens Apprich, "Dancing with Machines – On the Relationship of Aesthetics and the Uncanny," 2021, 4.
  27. Christine H. Tran, "What the 'NPC Streaming' TikTok Trend Spells for the Future of Gaming and Erotic Work," Tech Explore, n.d., https://techxplore.com/news/2023-07-npc-streaming-tiktok-trend-future.html.
  28. Ibid.
  29. Elisa Giardini Papa, "Invisible Boyfriends and U/Users in COMPUTER GRRRLS," AUSSTELLUNGS-MAGAZIN 2021/1, HMKV, 2023.
  30. Dr. Zahra Stardust and Helen Hester, "Sex Work, Automation and the Post-Work Imaginary," Autonomy.Work, September 13, 2021, https://autonomy.work/portfolio/sexwork-postwork/.
  31. Ibid 30.
  32. Alf Hornborg, "Objects Don’t Have Desires: Toward an Anthropology of Technology beyond Anthropomorphism," American Anthropologist 123, no. 4 (December 1, 2021): 753–766, https://doi.org/10.1111/aman.13628.


references

Apprich, Clemens. “Dancing with Machines – On the Relationship of Aesthetics and the Uncanny,” 2021.

Apprich, Clemens. “Secret Agents: A Psychoanalytic Critique of Artificial Intelligence and Machine Learning.” Digital Culture & Society 4 (September 1, 2018): 29–44. https://doi.org/10.14361/dcs-2018-0104.

Cecelia. “AI Is Taking Over Onlyfans.” Medium, April 23, 2023. https://ceceee.medium.com/ai-is-taking-over-onlyfans-86adcbe7360.

Daalderop, Tim. “How My Chatbot Fell in Love with Me.” Next Nature, May 1, 2020. https://nextnature.net/story/2020/how-my-chatbot-fell-in-love-with-me.

Giardini Papa, Elisa. “Invisible Boyfriends and U/Users in COMPUTER GRRRLS.” Ausstellungs-Magazin 2021/1. HMKV, 2023.

Hornborg, Alf. “Objects Don’t Have Desires: Toward an Anthropology of Technology beyond Anthropomorphism.” American Anthropologist 123, no. 4 (December 1, 2021): 753–66. https://doi.org/10.1111/aman.13628.

Illouz, Eva. The End of Love: A Sociology of Negative Relations. Wiley, 2021.

Kislev, Elyakim. Relationships 5.0. Oxford University Press, 2022.

Klossowski, Pierre. Living Currency. Bloomsbury Publishing, 2017.

Lacan, Jacques. Le Séminaire, Livre VIII: Le Transfert. 1991.

Maiberg, Emanuel. “Inside the AI Porn Marketplace Where Everything and Everyone Is for Sale.” 404 Media, August 22, 2023. https://www.404media.co/inside-the-ai-porn-marketplace-where-everything-and-everyone-is-for-sale/.

Perel, Esther. Mating in Captivity. Harper Collins, 2017.

Smith, Daniel. “Pierre Klossowski: From Theatrical Theology to Counter Utopia,” 1–40, 2017.

Spinola de Freitas, Jackeline, and João Queiroz. “Artificial Emotions: Are We Ready for Them?” In Advances in Artificial Life, edited by Fernando Almeida e Costa, Luis Mateus Rocha, Ernesto Costa, Inman Harvey, and António Coutinho, 223–32. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007.

Stardust, Zahra, and Helen Hester. “Sex Work, Automation and the Post-Work Imaginary.” Autonomy.Work, September 13, 2021. https://autonomy.work/portfolio/sexwork-postwork/.

Tenbarge, Kat. “Found through Google, Bought with Visa and Mastercard: Inside the Deepfake Porn Economy.” NBC News, March 27, 2023. https://www.nbcnews.com/tech/internet/deepfake-porn-ai-mr-deep-fake-economy-google-visa-mastercard-download-rcna75071.

Tran, Christine H. “What the ‘NPC Streaming’ TikTok Trend Spells for the Future of Gaming and Erotic Work.” Tech Xplore, n.d. https://techxplore.com/news/2023-07-npc-streaming-tiktok-trend-future.html.

Zizek, Slavoj. “NO SEX, PLEASE, WE’RE POST-HUMAN!,” 2009. https://www.lacan.com/nosex.htm.


