Fango & Large Lore Models

dmstfctn Alasdair Milne Eva Jäger
08.09.2025.LND.UK
OP: --.12.2025.LND.UK

In the IRL world – or ‘groundworld’1 – consensus narratives are fracturing. The Trad Web (the entrenched social networks of Web2) in particular is ineffective at delivering protocols for both ground truthing on the one hand and healthy contestation on the other. Commentary on once verifiable events has become adversarial, with events themselves (transactions, exchanges, altercations) often difficult to verify. Here, the most fundamental record keeping becomes centralized through ‘captured’ top-down infrastructure that resembles a surveillance architecture. The capacity to verify and contest becomes attached to hierarchy, access, and control.

While Autonomous Worlds2 are in their nascency, and susceptible to becoming siloed on-chain gameworlds3 for niche communities, they possess a wider potential as generative infrastructure for a new kind of commons – a public resource with a tangible benefit.4 In this article, we submit that Autonomous Worlds produce decentralized narratives5 as their essential public resource and we explore the potential for Autonomous Worlds to act as a commons geared to bringing this resource and its public value into the groundworld.

Autonomous Worlds can offer the ingredients and the world-weaving potential to become what we term an ‘autonomous knowledge commons’, afforded by their involuntary relationship of collective infrastructure ownership.6 Here we take the core features of an autonomous knowledge commons to be noncentralized yet formalized access both to the means of narrative creation, linked to a permanent record of events, and to the right to redeploy, or reseed, the results elsewhere. Building on gubsheep’s proposal that crypto-native games operate as sandboxes or ‘microcosms of the integrated digital worlds of the future’,7 we suggest that building an autonomous knowledge commons might offer a way to export procedures or knowledge in the form of narrative back into the groundworld. With this in mind, an autonomous knowledge commons could be a space to experiment with new ways of building both a consensus reality that has been lost in the groundworld and shared narratives drawn from the emergent, permanent canon afforded by on-chain games.

What, then, is required to bolster the propensity of Autonomous Worlds to become an autonomous knowledge commons? Here, we will propose a new ‘modding’ superstructure: a ‘Large Lore Model’ (LLoreM) plug-in that facilitates collective and decentralized narrative-building and explicitly links to the on-chain record. The LLoreM offers an interface that maintains a link to interobjective realities whilst communicating with the intersubjective groundworld. In other words, a LLoreM enables a commonly built consensus reality that is collectively generated and governed.8 The LLoreM could be prototypical scaffolding for stewarding the shift from a worlding wilderness to a mission-driven commons.


   Lore Generation

Having established that, given the right infrastructure, Autonomous Worlds have the potential to become an autonomous knowledge commons, we now look to define lore and ‘lore generation’ in concrete terms, using a procedural spoken-language game, I Went To the Shop, as an example. From there, we will consider how to hardcode this process of lore generation into a viable tool (LLoreM) that combines the value of player lore generation with the unique affordances of on-chain games, examining the implications of such a tool in comparison to legacy Web2 lore-recording.

In the I Went To the Shop memory game, commonly played by kids to pass the time on road trips, we find a temporary World created through a shared narrative of a trip to the shop. The rules of the game require that players repeat the phrase: ‘Today I went to the shop and I bought…’, appending it each time with a new purchased item. The first item added must begin with the first letter of the alphabet and each subsequent item must begin with the next letter. So after three rounds the phrase might be: ‘Today I went to the shop and I bought an apple, bubble gum, and a cold beer.’ Gameplay emerges through the balance between adding items that one player can remember and items that will throw off other players’ memories. Players maintain engagement and entertainment by listing uncommon, surreal, or offensive items. Even while the narrative produced is held in common, its texture is contingent on the particular circumstances of individual players: in-jokes, shared language (for ‘Z’ US English reasonably allows one to buy a zucchini, whilst in UK English you might have to buy a zebra), and – in the case of bored kids in a car – a desire to get a reaction from the adult driving might all be factors that play into these choices.
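
To make the procedure concrete, the game’s accumulation rule could be sketched roughly as follows; the types, function names, and item strings are purely illustrative, not drawn from any existing library or from the LLoreM design itself:

```typescript
// Minimal sketch of the "I Went To the Shop" accumulation rule described above.
type Round = { player: string; item: string };

// The n-th round must introduce an item beginning with the n-th letter of the alphabet.
function requiredLetter(round: number): string {
  return String.fromCharCode("a".charCodeAt(0) + round);
}

function addItem(rounds: Round[], player: string, item: string): Round[] {
  const expected = requiredLetter(rounds.length);
  if (!item.toLowerCase().startsWith(expected)) {
    throw new Error(`Item must begin with '${expected}'`);
  }
  return [...rounds, { player, item }];
}

function recite(rounds: Round[]): string {
  return `Today I went to the shop and I bought ${rounds.map((r) => r.item).join(", ")}.`;
}

// Three rounds of play; the accumulated list is the game's ephemeral 'ledger'.
let game: Round[] = [];
game = addItem(game, "player1", "apple");
game = addItem(game, "player2", "bubble gum");
game = addItem(game, "player3", "cold beer");
console.log(recite(game)); // "Today I went to the shop and I bought apple, bubble gum, cold beer."
```

Note that nothing in this sketch records who said what, or whether anyone disputes the recitation: the list lives only in the players’ memories, which is precisely the gap addressed below.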

We take lore here to be the decentralized, accrued narrative. Lore generation is the general concept used to describe procedures (within the World of a game or otherwise) that produce this decentralized narrative. A reified tool built for this purpose could be called a lore generator. Lore generation is important because it acts as a means of creating knowledge-claims in a suspended context, where they could later be contested. It gamifies the production of narrative, framing this production as something common and mutually constructed between players, like an amusing list of commodities bought at a shop.

But at the end of the I Went To the Shop game, there is no record of the ‘ledger’ of items: it is lost, along with the shared experience of the players and the emergent narrative. Further, even if some player records the ledger (as a trustworthy elder figure), or an on-chain version collects the list of items introduced via its Digital Physics into its canon (creating a hard diegetic boundary through digital consensus), there is no mutually agreed protocol for recording and verifying the emergent narrative. In order to transform this narrative into lore, we need a system – and interface – that is collectively agreed upon. We will now turn to a prototypical LLoreM that could be used to produce a dual ledger capable of recording and verifying a lore-claim alongside an immutable record.


   Large Lore Models (LLoreM)

A LLoreM is a tool we conceive of as a collective writing plug-in that attaches to a ‘host’ on-chain game, allowing it to be used as what Moving Castles calls a ‘narrative engine’.9 It is by definition a lore generator, but with additional design specifications conceived to respond to the epistemic problems of consensus-building, outlined at the outset, that plague the groundworld. Accompanying the game’s ledger, in which players’ actions are recorded through transactions on-chain, is a ‘para-ledger’. The para-ledger becomes a contestable mythology of the core on-chain gameplay events.
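
As a rough illustration of this dual structure, a para-ledger entry might be sketched as below; every name here (BlockRef, LoreEntry, the field labels) is a hypothetical stand-in rather than the API of any existing on-chain game or of the LLoreM itself:

```typescript
// Illustrative sketch of a para-ledger entry: human-written lore anchored to an
// immutable block of the host game's chain. All names are hypothetical.

type BlockRef = {
  blockNumber: number; // position in the chain's fixed ordering
  blockHash: string;   // immutable identifier of the anchored block
};

type LoreEntry = {
  author: string;        // player or observer writing the lore
  anchor: BlockRef;      // the on-chain event(s) this narrative refers to
  text: string;          // nonprocedural, player-written narrative
  contestedBy: string[]; // the lore stays contestable; the anchored block does not
};

// The para-ledger accompanies, but never replaces, the game's on-chain ledger.
type ParaLedger = LoreEntry[];
```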

Writing in the para-ledger must correspond to a block on the chain. The resultant lore, however, is written by players and is in itself nonprocedural. The function of this design is to maximize the affordances of the blockchain’s immutability while preserving the inherent mutability of human narrative (the narrative’s contestability). The human component combines the value of subjective testimony with the incontestable actions recorded on each block. As such, the para-ledger operates as an interface through which players and observers can access and make use of the blockchain ledger, offering players an opportunity to collate or reflect upon their actions while providing observers with context beyond a list of actions. The para-ledger, of course, remains contestable: this is a core part of the design and reflects a need for intersubjective consensus in all attempts to produce collective knowledge. But any contestation at this stage, even against elder testimony, is made against the backdrop of groundtruthed canon blocks.
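
Continuing the hypothetical types from the previous sketch, the two constraints described here – that lore must correspond to a block, and that the narrative itself remains open to contestation – might look something like this:

```typescript
// Sketch: lore is only appended when its anchor resolves to a real block on the
// host chain; contestation targets the human narrative, never the anchored block.

function appendLore(
  ledger: ParaLedger,
  entry: LoreEntry,
  blockExists: (ref: BlockRef) => boolean // e.g. a lookup against the host chain
): ParaLedger {
  if (!blockExists(entry.anchor)) {
    throw new Error("Lore must correspond to a block on the chain");
  }
  return [...ledger, entry];
}

function contest(entry: LoreEntry, challenger: string): LoreEntry {
  return { ...entry, contestedBy: [...entry.contestedBy, challenger] };
}
```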

There are further speculative possibilities for interoperability here, too: if multiple on-chain games operate on a shared blockchain ledger, and players use a single identity to play them, the LLoreM can triangulate between the different games by locating the players’ activities across the ledger, thus producing an ‘interdimensional’ narrative of the players as they move between Autonomous Worlds. Inversely, a lore iteration can be replayed in another game, stacking the para-ledger lore to correspond with the blockchain and producing iterative parafictions.10
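
This ‘interdimensional’ reading could be sketched, again only illustratively and building on the hypothetical types above, as a query over lore entries that share a chain and a player identity (GameId and playerId are assumptions introduced here for the example):

```typescript
// Sketch: with a shared chain and a single player identity, lore entries written in
// several host games can be ordered into one cross-World narrative.

type GameId = string;

type CrossGameLore = LoreEntry & { game: GameId; playerId: string };

function interdimensionalNarrative(
  entries: CrossGameLore[],
  playerId: string
): CrossGameLore[] {
  return entries
    .filter((e) => e.playerId === playerId)
    .sort((a, b) => a.anchor.blockNumber - b.anchor.blockNumber); // the chain's fixed ordering spans the games
}
```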

   Community Loremaxxing

Given the stakes of narrative control we outlined at the outset, it is worth reflecting on the potential for blockchain to provide a fixed ‘ordering’ for a historical record. Narrative itself is an affordance that has been lost in the Trad Web epoch, as Lev Manovich suggests of ‘rewriting’ in Web2: ‘It is as easy to add new elements to the end of a list as it is to insert them anywhere in it. All this further contributes to the antinarrative logic of the Web. If new elements are being added over time, the result is a collection, not a story.’11 This offers a sense of why achieving a consensus reality might be hard while such a system remains dominant. By enabling packets of narrative to be anchored to the fixed-order events of an Autonomous World’s concrete, decentralized ledger, a LLoreM makes the events sense-able and transparent, and thus reenables narration as a slow-burning premodern craft that is resistant to singular ownership by designated heroes or monarchs.

Through the specific affordances of on-chain World weaving, the LLoreM offers an infrastructure for Autonomous Worlds to become autonomous knowledge commons. This collective lore generation system offers a procedural alternative to an increasing reliance on GPT-pilled automated myth-generation. Whereas GPT models also draw upon a collective form of writing (vast datasets of human text), the centralized and automated technical intervention they apply weighs far more heavily on the result, and as such fails to preserve community control over written outputs in most instances. Instead, the LLoreM seeks a way of engaging the blockchain that uses it efficiently while aiming towards its most distinctive affordances; simultaneously, it is tuned to maximize the opportunities to tap into the creative potential of human collectivity: loremaxxing.












