Creative Liberation or Digital Colonization? A Critical Study of AI-Generated Art

David Lazăr

07.10.2025.LND.UK

art, representation, abstraction, phenomenology, cybernetics, AI-generated art, digital culture



In recent years, artificial intelligence has become increasingly embedded in the arts and creative industries. Major companies have launched tools that can generate images, music, and poetry with striking ease. Artists are actively exploring these technologies, while institutions eagerly seek out and promote AI-based artworks. From exhibitions and galleries to online platforms, AI-generated art is rapidly gaining visibility.

Promoted as a democratizing force, generative AI art is often praised for lowering the barriers to creative expression. But its appeal extends beyond access: it is framed as a radical innovation—a new medium, or even a co-creator—that challenges traditional boundaries of authorship, intention, and aesthetic production. Many artists and technologists embrace a romantic notion that AI does not replace the artist but expands their toolbox, sparking new forms of hybrid expression. This enthusiasm is often tied to a deeper cultural fascination with posthuman creativity: the idea that meaningful, even beautiful, works can emerge from systems that are not fully human. These narratives carry powerful symbolic weight, especially in an era captivated by disruption.

Yet in practice, many artists and institutions adopt these tools without critically reflecting on their ethical, ecological, political, or military implications. The technologies are celebrated for their speed and scale—enabling rapid image generation or the simulation of complexity—but often at the cost of thoughtful engagement. In many cases, creative agency is subtly transferred from the artist to the machine: aesthetics are automated, and responsibility is outsourced. Moreover, the same generative systems used for visual experimentation also function as technologies of surveillance, prediction, and control—raising urgent questions about the ideological functions these tools serve beyond the gallery space.

Some of the promises are indeed real—AI tools do support productivity—but is this what we want from art? Should artistic practice follow the accelerationist trajectory of pop culture and platform capitalism? And most critically: does AI-generated art truly disrupt entrenched systems of power—or does it, paradoxically, reinforce them under the guise of innovation?

This essay argues that while AI-generated art is marketed as a tool of creative liberation, it often reinforces existing power structures. By concealing authorship, appropriating data without consent, relying on environmentally costly infrastructures, and aligning with systems of surveillance and control, AI art risks replicating the very inequalities it claims to subvert.

   The Myth of Democratisation

AI-generated art is often described as a democratising medium—not only because it lowers barriers to artistic production, but because it symbolises a shift in who, or what, can create. Whether framed as a tool, a co-creator, or a new aesthetic intelligence, AI is celebrated for redistributing creativity beyond traditional gatekeepers. This vision aligns with broader cultural ideals of innovation, decentralisation, and posthuman possibility.

However, beneath this narrative of openness lies a tightly controlled infrastructure. The most powerful AI models are developed and governed by private tech companies that decide how they function, what data they are trained on, and who can access them. Far from being neutral tools, these systems operate as proprietary black boxes—embedding the commercial interests, aesthetic values, and political blind spots of those who design them.

While AI art is showcased in leading institutions and promoted as progressive or disruptive, these platforms often favour highly produced, visually polished works that reinforce dominant cultural norms. The technologies behind them rely on massive datasets, computing power, and institutional backing—resources rarely accessible to marginalised or independent creators. As a result, the same hierarchical dynamics persist, just rebranded through the language of inclusion.

As Elizabeth Seger and colleagues observe, claims of democratising AI often amount to little more than broad accessibility, while access to underlying infrastructure, ownership, and profit remains concentrated, turning “democratisation” into a branding strategy rather than a structural shift.


   Invisible Labor and the Question of Authorship

AI-generated images, texts, and sounds may appear novel or spontaneous, but they are built upon vast archives of human labor. The models used by artists today—whether in image generation, language processing, or audio synthesis—are trained on large datasets scraped from the wider internet, often without the knowledge, consent, or attribution of the original creators. These datasets contain artworks, photographs, essays, and voices produced by thousands of people, most of whom remain unnamed and uncompensated. The surface-level innovation of AI art conceals a deeply extractive foundation.

For artists who integrate AI into their practice—not simply as a prompt machine, but as a tool in a longer creative pipeline—the question of authorship becomes more nuanced. These practitioners are actively involved in selecting, curating, and shaping their outputs. Yet their authorship is no longer singular; it becomes entangled with the model, its training data, and the assumptions embedded in its code. In this framework, the artist shifts from sole creator to editor, director, or even interpreter—raising critical questions about agency, control, and artistic responsibility.

Even reflective artists may unknowingly participate in systems built on unacknowledged labor. When cultural memory, personal archives, or community narratives are processed through tools trained on unauthorized data, the risk of appropriation increases. The line between expression and extraction blurs—especially when working with marginalised histories or identities.

If authorship is distributed, then so too is accountability. Without critical reflection on the infrastructures and histories that make AI art possible, creative expression risks becoming an aestheticised simulation—powerful on the surface, but detached from the people, contexts, and labor it relies on.


   Can Machines Create—Or Are We Caught in a Conceptual Trap?

The growing belief that machines can “create” or “collaborate” in artistic processes invites an important philosophical question: do these words make sense when applied to machines? Ludwig Wittgenstein, writing in the Blue Book, argued that questions like “Can a machine think?” may not be real questions at all, but rather expressions of conceptual confusion. He suggested that the sentence “a machine thinks” is like the question “Has the number 3 a colour?”: it sounds meaningful but actually isn’t.

This argument is taken further by Giorgi Vachnadze in Christian Eschatology of Artificial Intelligence: Pastoral Technologies of Cybernetic Flesh (2024), where he draws on Wittgenstein and Stuart Shanker to challenge the assumptions behind machine intelligence. As Vachnadze explains, machines can follow instructions and produce patterned outputs, but they cannot justify, interpret, or understand their actions. They lack the embedded, social, and reflective context that makes rule-following a meaningful human practice.

In the realm of AI-generated art, this matters. If we treat AI outputs as autonomous creations, we risk confusing automation with authorship. We begin to accept imitation as intention and pattern as creativity. In doing so, we not only obscure the human labor and data behind these systems—we also reinforce a dangerously shallow and depersonalised idea of what it means to create.


   Aesthetic Ideologies and Institutional Power

AI-generated art often produces images that are strikingly coherent, polished, and stylistically hybrid. These works emerge not from human intention but from algorithmic pattern recognition, remixing vast visual datasets to create what Lev Manovich calls a “post-style visuality”—a blend of historical aesthetics stripped of original context or authorship. The model becomes an “aesthetic observer,” trained not to innovate but to reproduce what it has already seen, drawing from the statistical center of visual culture. What results is not disruption, but recombination: compelling, but often culturally conservative.

Although presented as innovative, AI-generated aesthetics tend to reinforce dominant cultural values. According to Wesley Goatley, machine vision is inherently ideological: it is built to sort, identify, and render visible in ways that align with systems of control and classification. When art is generated through these systems, it risks adopting their logics—privileging legibility, emotional neutrality, and formal clarity over ambiguity, dissent, or cultural specificity. In institutional contexts, AI art that conforms to these aesthetics is more likely to be exhibited, as it fits within spectacle-driven environments that prioritise novelty without critique.

The aesthetics most celebrated in AI art are those that align with existing institutional tastes—slick, immersive, and apolitical. Rather than subverting artistic canons, AI art often sustains them, benefiting those already positioned within systems of access, technical expertise, and cultural capital.


   Creativity, Originality, and Cultural Recycling

Julian Rosefeldt’s Manifesto starkly reminds us, borrowing Jim Jarmusch’s well-known rule: “Nothing is original. Steal from anywhere that resonates with inspiration.” Artists have always drawn from earlier work—reinterpreting, remixing, rebelling. So, is AI different? Or is it merely accelerating a human creative process—making it much more visible?

Mark Fisher described hauntology as the process by which our present is haunted by lost futures and past styles that never quite materialised, leading to a culture stuck recycling its ghosts. AI art seems to mirror that condition: blending styles, reanimating aesthetic fragments, making visible the endless loop of cultural memory. If humans are already artists of memory and repetition, perhaps AI is just externalising this process.

Acknowledging that humans always borrow doesn’t let AI off the hook. Machines automate this impulse, smoothing differences and erasing the nuances of negotiation, credit, or transformation. When past material is repackaged without reflection or context, AI amplifies structural issues—culture becomes a machine of repetition, not a site of dialogue.


   Environmental and Technological Costs

While AI-generated art is often framed as frictionless and ephemeral, its production relies on material infrastructures that are anything but. Refik Anadol’s Unsupervised—an immersive generative artwork commissioned by MoMA—ran continuously on a high-powered GPU for months. As the e-flux critique notes, the work required a dedicated NVIDIA A100 graphics card, which draws roughly 400 watts of power even while idling. This is not just aesthetic spectacle—it is a fossil-fueled performance.
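To make that figure concrete (a back-of-envelope estimate of my own, not a number from the e-flux piece): a card drawing 400 watts around the clock consumes roughly 400 W × 24 h × 30 days ≈ 288 kWh per month, on the order of a third of an average US household’s monthly electricity use, and that is before counting the far larger energy cost of training the model in the first place.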

Wesley Goatley challenges the premise that any AI art tool can be separated from extractive and militarised infrastructures. In his words, “there is no ethical use of any large-scale generative AI model” because all are trained in data centers powered by environmentally destructive energy regimes, developed by corporations deeply enmeshed in surveillance and defense contracts. The seductive front-end of AI art—slick visuals and interactive interfaces—conceals these back-end realities.

When artists and institutions champion AI’s creative potential without acknowledging these material entanglements, they reinforce a dangerous illusion: that creativity can be clean, weightless, and unaccountable. But there is no such thing as dematerialised art—especially not when computation is involved. Art that ignores its carbon cost risks aestheticising harm.


   Surveillance, Interaction, and the Logic of Control

Many of the same technologies powering AI art—voice recognition, image classification, predictive modeling—are also deployed in surveillance, policing, and autonomous military systems. Acknowledging this makes it increasingly difficult to view interactive artworks as innocent experiments. Some AI systems are, quite literally, delegated the power to make lethal decisions. To engage with these technologies in the context of artistic play, without recognising their entanglement with systems of harm, is to bracket off crucial political realities.

In his short essay Postscript on the Societies of Control, Gilles Deleuze argues that we are transitioning from Foucault’s disciplinary society—governed by enclosed institutions like schools and prisons—to a society of control, where power is distributed through continuous modulation and real-time feedback loops. Cybernetic systems track, normalise, and shape behaviour not by constraint, but by freedom—by making us feel interactive, visible, engaged. AI artworks, especially those involving avatars or conversational systems, often mirror this logic.

Voice-driven installations, reactive visuals, or "empathetic" AI interfaces blur the line between user and subject. Participation becomes a form of normalisation: the interface is aesthetic, but the architecture is extractive. Behind the interactivity lies a system trained to profile, predict, and adapt—not just to create, but to control.

When do artistic engagements with AI cross into the territory of data mining, consentless profiling, or soft surveillance? And if the medium already belongs to a control society, can we still speak of subversion—or only of aesthetic camouflage?

   
   Conclusion

Writing this essay as a computational artist has been difficult. Over the past seven months, I’ve been deeply immersed in the topic—participating in residencies at major cultural institutions and thinking carefully about how AI intersects with artistic practice. This essay is not meant to reject AI in the arts, nor to judge those who use it. Rather, it reflects the many questions that have emerged during this time: questions about authorship, ethics, power, and the futures we are helping to build.

AI will inevitably become more embedded in cultural life. Its novelty and accessibility will continue to attract artists—and understandably so. But I believe there’s value in pausing. In asking what it means to use these tools responsibly. In reflecting on whether we treat them merely as creative instruments or begin attributing agency to them, speaking of them as “dreaming,” imagining, or deciding.

This technology is not neutral. It already plays a role in systems that can hire, fire, grade, surveil—or even kill. That makes the stakes different from past technological shifts in art. I hope this essay offers a space for reflection—about how we frame our engagement with AI, and where we might stand within it.


Special thanks to Dan McQuillan





  1. Corral Design. (2023). AI, harm, and hypocrisy. https://www.corralldesign.com/writing/ai-harm-hypocrisy
  2. Deleuze, G. (1992). Postscript on the societies of control. October, 59, 3–7.
  3. e-flux. (2023). Refik Anadol’s “Unsupervised”. https://www.e-flux.com/criticism/527236/refik-anadol-s-unsupervised
  4. Fisher, M. (2014). Ghosts of my life: Writings on depression, hauntology and lost futures. Zero Books.
  5. Goatley, W. (2023). AI art and rejecting power. Substack. https://wesleygoatley.substack.com/p/8-ai-art-and-rejecting-power
  6. Manovich, L. (2018). AI aesthetics. Strelka Press. http://manovich.net/index.php/projects/ai-aesthetics
  7. Rosefeldt, J. (2015). Manifesto [Film installation].
  8. Seger, E., Ovadya, A., Garfinkel, B., Siddarth, D., & Dafoe, A. (2022). Democratising AI: Multiple meanings, goals, and methods. Centre for the Governance of AI. https://governance.ai/papers/democratising-ai
  9. Vachnadze, G. (2024). Christian eschatology of artificial intelligence: Pastoral technologies of cybernetic flesh.
  10. Wittgenstein, L. (1958). The blue and brown books: Preliminary studies for the “Philosophical Investigations”. Harper & Row.