The Stochastic Bricoleur: What Lévi-Strauss Can Teach Us About Building Software with LLMs
The other day I watched an LLM agent build a working API server. I gave it a short description of what I needed — a handful of endpoints, a database schema, authentication. Within minutes, the code was running. Tests passed. The thing worked.
And yet I couldn’t shake a particular feeling. The implementation was competent but slightly alien. It chose a middleware pattern I wouldn’t have reached for. It structured the error handling in a way that was functional but not quite mine. When I traced the logic, I couldn’t find a single clear reason why it had assembled things in that specific order. It wasn’t wrong. It was just… found, rather than designed.
I’ve been trying to name that feeling for a while. Recently, I think I found the word. It comes not from computer science but from a French anthropologist who died in 2009 at the age of 100. The word is bricolage.
In 1962, Claude Lévi-Strauss published La Pensée sauvage — translated into English as The Savage Mind. In it, he drew a distinction that has haunted intellectual life ever since: the distinction between the engineer and the bricoleur.
The engineer, in Lévi-Strauss’s telling, subordinates each task to the availability of raw materials and tools conceived and procured specifically for the project at hand. If the right materials do not exist, the engineer creates or acquires them. The process moves from goals to means, and the means are purpose-built.
The bricoleur is different. The bricoleur works with “whatever is at hand” — a finite, heterogeneous collection of tools and materials left over from previous projects. These elements were not designed for the current task. They carry traces of their prior uses. The bricoleur’s universe of instruments is closed: the rules of the game are always to make do with what is already there. The bricoleur’s skill lies in recombining these elements into something that works, even though the result cannot be fully derived from any single intention. Lévi-Strauss compared the process to a kaleidoscope: a closed set of glass pieces that, when shaken, produces endlessly different patterns — but never a pattern that wasn’t latent in the set from the beginning.
Lévi-Strauss introduced this concept to describe how mythological thought operates. Myths, he argued, do not invent new concepts from scratch. They take existing cultural elements — animals, kinship roles, natural phenomena — and rearrange them to address fundamental contradictions. The result is a kind of intellectual bricolage: functional, often brilliant, but operating within a universe of pre-existing fragments rather than creating from nothing.
I want to suggest that this is among the most apt descriptions available for what happens when an LLM generates code.
The training corpus is the raw material from which the closed set is distilled. The learned weights are the pre-constrained elements — statistical traces of patterns encountered during training, compressed into parameters. The output is not a recombination of the corpus itself, but a reconstruction from these traces: shaped by what was in the training data, but not a direct reassembly of it. When an LLM writes a function, it is not engineering in Lévi-Strauss’s sense — it is not subordinating the task to purpose-built means. It is reaching into a vast but finite landscape of learned patterns, selecting fragments that statistically fit the current context, and assembling them into something that works. The fragments carry echoes of their origins: coding conventions absorbed from thousands of repositories, idioms from languages the model was trained on, architectural patterns internalized without attribution. The logic of the output is real, but it belongs to the statistical landscape, not to a designer.
A necessary caveat before we go further. This essay borrows Lévi-Strauss’s operational concept — bricolage as a description of how things get made — but does not adopt his structuralist metaphysics wholesale. His broader claims — that all human cultures share universal deep structures, that synchronic analysis has priority over historical explanation — have been rightly challenged by post-structuralist, postcolonial, and Marxist anthropologists. We do not need to believe in invariant mental structures to observe that LLMs recombine pre-existing fragments within a closed set. The analogy works at the level of process, not ontology. We are borrowing Lévi-Strauss’s lens, not his worldview.
Once you see LLM-assisted development as bricolage, the structural parallels multiply. An important distinction is necessary here: these parallels apply most directly to the bare model — the weights and architecture before any external augmentation. A bare LLM, without tools or retrieval, operates within a closed set in a way that closely mirrors Lévi-Strauss’s bricoleur. Once you add RAG, tool use, or execution feedback, the system begins to break out of that closure — but the base layer remains bricolage, and understanding it as such is what makes the augmentation strategies legible. More on this later.
Consider what Lévi-Strauss said about the bricoleur’s repertoire: it is finite, it is heterogeneous, and its contents bear no necessary relation to the current project. This is a fitting description of a training corpus. The data that an LLM learned from was not curated for your particular problem. It is the accumulated residue of millions of prior constructions — Stack Overflow answers, GitHub repositories, blog posts, documentation — gathered without regard for coherence or purpose. The model’s “knowledge” is not knowledge in any intentional sense; it is a stock of statistical traces whose presence is contingent on what happened to be available when the corpus was assembled.
The elements carry traces of their prior uses. When an LLM generates a React component, the patterns it produces are not freshly derived from the semantics of your application. They are echoes of patterns that appeared in thousands of other applications, for thousands of other purposes. The model cannot fully separate a pattern from the context in which it was learned. This is why LLM-generated code sometimes feels subtly off — not wrong, but inflected by intentions that are not yours. The traces of prior use are baked in.
There is no single authoritative origin. The bricoleur’s output, Lévi-Strauss observed, cannot be traced to a single source. It is a blend, an assemblage. LLM outputs share this quality. A generated function is a probabilistic composite shaped by thousands of examples, and it is rarely reducible to a single traceable origin. Of course, the human who wrote the prompt, selected the output, and edited the result has a real claim to authorship — this is not a “death of the author” argument. But the raw material the model works with resists attribution to any single source, and its assembly follows statistical logic rather than individual intention. This is why questions of attribution in AI-generated code are so vexed — not because the human contributed nothing, but because the output is difficult to reduce to a single author or origin.
And the human using the LLM? Also a bricoleur. When you steer an agent — adjusting prompts, splicing outputs, routing through tool integrations, working around limitations — you are not engineering in any classical sense. You are tinkering. You are rearranging the outputs of a system you do not fully control, combining them with your own knowledge and constraints, producing something that works through iteration rather than derivation. The entire workflow, from model to user, is bricolage all the way down.
Jacques Derrida, critiquing Lévi-Strauss in 1966, made an observation that now reads like prophecy. In his reading, the engineer is a myth produced by the bricoleur — a fiction of pure, self-originating design that no one actually inhabits. There is no pure engineer, Derrida argued. Everyone borrows concepts from an inherited repertoire. The notion that anyone constructs a system entirely from first principles is itself a fiction — a comforting story that bricoleurs tell about an idealized figure who does not exist. In the LLM era, Derrida’s point has become impossible to ignore. The myth of the engineer-programmer — the developer who writes every line from pure logic, who designs before implementing, who fully understands the system they create — was never quite the whole story, even before LLMs. Anyone who has shipped production software knows that improvisation, copy-paste, and pragmatic compromise have always been part of the process. LLMs have simply made the bricolage impossible to ignore.
The idea of connecting programming to bricolage is not new. And broader discussions linking LLMs to recombination or remix culture certainly exist. What I have not found, however, is a sustained argument that treats LLM-agent software development specifically through the lens of Lévi-Strauss’s bricolage — with its particular emphasis on the closed set, the trace of prior use, and the kaleidoscope problem. That is the connection this essay attempts to make. The intellectual genealogy is worth tracing, both to acknowledge prior work and to locate what remains underexplored.
In 1990, Sherry Turkle and Seymour Papert published “Epistemological Pluralism and the Revaluation of the Concrete,” in which they applied Lévi-Strauss’s concept directly to programming. They identified a “bricoleur style” of coding: bottom-up rather than top-down, iterative rather than planned, proceeding by sculpting and rearranging rather than by specification. In their account, the bricoleur programmer works more like a painter than an architect — stepping back after each change to observe the effect before deciding what to do next. This was prescient work, but it was about human programming styles — about how certain people prefer to code. It did not address the question of what happens when the code is generated by a machine whose entire mode of operation is bricolage.
In 2021, Emily Bender and colleagues published “On the Dangers of Stochastic Parrots,” in which they argued that LLMs stitch together linguistic forms from training data based on probabilistic patterns, without engaging with meaning in the way humans do. Read this description carefully and compare it to Lévi-Strauss’s account of bricolage — the recombination of pre-existing elements according to structural rules, without transparent reference to underlying intentions. The two descriptions are structurally similar. But Bender’s paper never mentions Lévi-Strauss, and the “stochastic parrot” metaphor frames the phenomenon primarily as a deficiency — a failure to achieve true understanding. The bricolage frame offers something different: not a diagnosis of failure, but a recognition of a distinct mode of creation. Whether LLMs possess some form of internal representation or semantic structure is an active area of research; what matters for our purposes is that their outputs are assembled from pre-existing patterns in a way that Lévi-Strauss would have immediately recognized.
Earlier still, the biologist François Jacob applied Lévi-Strauss’s concept to evolution itself. In his 1977 paper “Evolution and Tinkering,” Jacob argued that natural selection does not design organisms from scratch. It repurposes existing structures for new functions. Feathers evolved for thermal insulation and were later co-opted for flight. The vertebrate jaw was assembled from bones that once served as gill supports in fish. Evolution is not engineering; it is tinkering — or, in Lévi-Strauss’s term, bricolage. The analogy to LLMs is direct: the model repurposes code patterns that were “evolved” for entirely different purposes, applying them to problems their original authors never imagined.
In the research I have been able to survey, I have not found a sustained reading of LLM-agent development through the specific apparatus of Lévi-Strauss’s bricolage — the closed set, the kaleidoscope, the transformation theory of Mythologiques. The pieces are all in the literature; they have simply not been assembled. This essay is an attempt at that assembly — which is, of course, itself an act of bricolage.
Lévi-Strauss did not stop at bricolage. In his four-volume masterwork Mythologiques (1964–1971), he tracked how myths transform as they travel across cultures — from the southern tip of South America northward through the continent. A myth about the origin of fire becomes, in a neighboring culture, a myth about the origin of cooking. The narrative elements shift, but the underlying structure of oppositions — raw and cooked, nature and culture — persists through transformation. The elements are finite. The transformations are systematic. The variants are related not by descent from a common ancestor but by structural correspondence.
This is what LLMs do with code. Given the same prompt, different models — or the same model at different temperatures — produce variants. These variants are structurally related but locally adapted. They are not copies and they are not independent inventions. They are transformations of a shared repertoire, shaped by the statistical landscape of the training data. The corpus is the myth cycle. Each generation is a transformation, not a creation.
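The transformation-within-a-closed-set idea can be made concrete with a toy sampler. This is not a real LLM, just an illustrative sketch: a fixed, weighted "repertoire" stands in for the patterns learned during training, and temperature rescales how that repertoire is explored. The names (`REPERTOIRE`, `sample_pattern`) are invented for the example.

```python
import math
import random

# Toy stand-in for the closed set: a fixed repertoire of learned patterns
# with fixed weights. Nothing outside this dict can ever be produced.
REPERTOIRE = {"middleware": 8.0, "decorator": 4.0, "callback": 2.0, "monad": 1.0}

def sample_pattern(temperature: float, rng: random.Random) -> str:
    """Sample one element, with temperature rescaling the fixed weights.

    Temperature changes *how* the closed set is explored, never *what*
    is in it: every possible output is already latent in REPERTOIRE.
    """
    logits = {k: math.log(w) / temperature for k, w in REPERTOIRE.items()}
    z = sum(math.exp(v) for v in logits.values())
    probs = {k: math.exp(v) / z for k, v in logits.items()}
    r, acc = rng.random(), 0.0
    for k, p in probs.items():
        acc += p
        if r < acc:
            return k
    return k  # numerical edge case: return the last element

rng = random.Random(0)
cold = {sample_pattern(0.2, rng) for _ in range(50)}  # conservative variants
hot = {sample_pattern(2.0, rng) for _ in range(50)}   # adventurous variants
assert cold | hot <= set(REPERTOIRE)  # never an element outside the set
```

Low temperature collapses toward the dominant pattern; high temperature spreads across the whole set. Either way, the final assertion holds by construction: the kaleidoscope shakes differently, but the glass pieces are the same.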
But Lévi-Strauss’s critics identified a fundamental problem with the kaleidoscope metaphor: structural closure. The kaleidoscope can produce endlessly different patterns, but it can never introduce a genuinely new element. The glass pieces are fixed. The system is closed. For innovation that transcends recombination, you need something from outside the set. This criticism maps onto a baseline limitation of LLMs: a bare model, without tools, cannot reason beyond its training distribution. It can recombine brilliantly, but it cannot invent what was never in the corpus. In practice, of course, modern LLM-based systems are rarely bare — RAG, tool use, execution feedback, and external memory all breach the closure. The kaleidoscope metaphor is most useful not as a final description of what LLM systems are, but as a description of the default they must be actively designed to escape.
Here a fair objection must be addressed. If bricolage is defined broadly enough, any generative process qualifies, and the concept risks being unfalsifiable. This essay does not claim to have discovered a scientific law. The value of the bricolage frame is not predictive but prescriptive — it is a design heuristic. If you accept that an LLM operates within a closed set, specific architectural consequences follow. These consequences are concrete, testable, and immediately useful. The framework earns its keep not by being provably true but by generating better design decisions than the alternative framing of LLMs as “imperfect engineers.” The proof is in the architecture you build.
So how do you escape the kaleidoscope? Three strategies, each rooted in a different theoretical response to structuralism’s limits.
The first is to introduce external elements. The anthropologist Tim Ingold criticized Lévi-Strauss for treating the elements of bricolage as stable and fixed. In reality, Ingold argued, creative materials are always in flux — they emerge, transform, and decay. In the LLM context, this translates to connecting the model to live data: retrieval-augmented generation, web search, tool use, API integrations. When you wire an LLM to an MCP server that queries a live database, you are injecting elements that were not in the training corpus. The glass pieces of the kaleidoscope are no longer fixed. The set is no longer fully closed. This is not a minor architectural detail — it is the primary mechanism by which LLM-based systems transcend their training distribution.
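The mechanism can be sketched in a few lines. The retrieval function and document store below are assumptions invented for illustration (a production system would use embeddings or an MCP server), but the shape is the real one: live, post-training data spliced into the prompt before generation.

```python
# Sketch of the "introduce external elements" strategy: naive
# retrieval-augmented generation over a live document store.

def retrieve(query: str, store: dict[str, str], k: int = 2) -> list[str]:
    """Rank documents by keyword overlap with the query and return the top k.

    The store can change after training, so the element set the model
    works with is no longer closed.
    """
    words = set(query.lower().split())
    scored = sorted(store.values(),
                    key=lambda doc: len(words & set(doc.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, store: dict[str, str]) -> str:
    context = "\n".join(f"- {doc}" for doc in retrieve(query, store))
    return f"Context (live, post-training):\n{context}\n\nTask: {query}"

# A document written *after* the model's training cutoff: a new glass piece.
store = {"v2": "The orders endpoint moved to /api/v2/orders in last week's release."}
prompt = build_prompt("write a client for the orders endpoint", store)
assert "/api/v2/orders" in prompt  # external element now inside the context
```

The model call itself is unchanged; what changes is that the context window now contains an element that was never in the corpus.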
The second strategy is to create feedback loops. Jacob’s evolutionary tinkering offers the model here. In biological evolution, the raw material is pre-existing structures, but those structures are transformed through contact with the environment. Selection pressure reshapes the repertoire over time. In LLM agent workflows, this corresponds to the generate-test-error-regenerate cycle. The agent writes code, the runtime environment tests it, errors feed back as new input, and the agent revises. Each iteration is not merely a recombination of the original corpus — it is a recombination informed by real-world feedback that was not part of the training data. Emergent properties arise that no single training example contained. This is how evolution produces novelty from tinkering, and it is how agent loops produce solutions that exceed mere pattern matching.
The third is to collide multiple closed systems. Derrida, in his critique of Lévi-Strauss, proposed that meaning arises not from structure alone but from play (jeu) — the endless displacement and substitution of elements within and between systems. In practice, this means multi-model orchestration: routing different parts of a task to different models, each trained on different data with different biases and blind spots. The interface between two closed systems produces patterns that neither system contains alone. This is not a theoretical nicety — it is a practical technique that developers are already using, often without realizing it has a name.
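A minimal sketch of the collision, under stated assumptions: both model functions below are stubs standing in for API calls to differently trained models, each returning its own fixed set of suggestions. What matters is the set algebra at the interface.

```python
# Sketch of "colliding closed systems": route one task to two models with
# different blind spots and read the structure of their disagreement.

def model_a(task: str) -> set[str]:
    # Stub: a model steeped in web backends suggests only those patterns.
    return {"rate-limiting", "input-validation", "csrf-token"}

def model_b(task: str) -> set[str]:
    # Stub: a model steeped in systems code has a different closed set.
    return {"input-validation", "bounds-checking", "constant-time-compare"}

def orchestrate(task: str) -> dict[str, set[str]]:
    a, b = model_a(task), model_b(task)
    return {
        "agreed": a & b,    # high-confidence overlap of the two sets
        "union": a | b,     # coverage neither closed system has alone
        "disputed": a ^ b,  # the interface: candidates for human review
    }

plan = orchestrate("harden the login endpoint")
assert plan["agreed"] == {"input-validation"}
assert len(plan["union"]) > len(model_a(""))  # broader than either alone
```

The "disputed" set is where Derrida's play lives: neither model can produce it alone, because it only exists as a relation between the two repertoires.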
The deeper lesson is that the architecture around the LLM matters more than the LLM itself. The model is a kaleidoscope. What you build around it — the data pipelines, the feedback loops, the tool integrations, the orchestration layer — determines whether the output stays within the closed set or transcends it. Bricolage is not a deficiency to overcome. It is a mode of creation to design for.
Now let us return to where we began — with names. What follows is interpretation, not corporate history. I have no evidence that anyone at Anthropic was thinking about Lévi-Strauss when they named their products. But the resonances are there, and they are worth reading — not as proof of intent, but as a symptom of convergence.
Anthropic’s AI is called Claude. The origin of the name has never been officially explained in detail; it is often read as a nod to Claude Shannon, the founder of information theory. Whether or not that reading is correct, it is a productive one. Shannon’s central insight was that information is a statistical property of signals, independent of their semantic content. Meaning, for Shannon, is not what a message says but how it is structured relative to alternatives. LLMs operate on a fundamentally similar substrate — statistical structure over tokens — whatever internal representations they may or may not develop along the way.
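Shannon's insight fits in two lines of code: the information in a signal is a property of its probability distribution, not of what the symbols mean. Four symbols carry two bits when equiprobable and almost nothing when one symbol dominates, regardless of what any of them say.

```python
import math

def entropy_bits(probs: list[float]) -> float:
    """Shannon entropy of a distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Same four symbols; only the statistics differ.
uniform = entropy_bits([0.25, 0.25, 0.25, 0.25])  # maximally surprising
skewed = entropy_bits([0.97, 0.01, 0.01, 0.01])   # almost predictable
assert abs(uniform - 2.0) < 1e-9 and skewed < 0.3
```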
But the company is called Anthropic. The word shares its Greek root — anthropos, human — with anthropology, the discipline that Claude Lévi-Strauss transformed. The anthropic principle in cosmology holds that the universe appears designed because we observe it from within: the observer is constitutive of the observation. Applied to LLMs, this principle illuminates why their outputs seem intelligent — we, the observers, project meaning onto statistically structured text. The appearance of understanding is, at least in part, a property of the reader, not the writer.
And then there is Constitutional AI, the training technique that defines Anthropic’s approach to alignment. The model is trained to adhere to a set of principles — a constitution — that shapes its behavior from the inside. Not external rules imposed by a censor, but internal structural constraints that generate appropriate behavior across novel situations. Lévi-Strauss spent his entire career demonstrating a strikingly similar mechanism: how structural constraints — the incest taboo, the rules of myth transformation, the grammar of kinship — generate meaning and order without a designer, without a central authority, without anyone deciding what the system should produce. One can read Constitutional AI as structural anthropology applied to neural networks. Whether its creators would accept that reading is another matter.
But the naming goes deeper still. Consider the model tiers: Opus, Sonnet, Haiku. I am not claiming that Anthropic chose these names with Lévi-Strauss in mind. What I am claiming is that the names, once chosen, are legible through his framework in a way that illuminates something real about what the models do.
Lévi-Strauss explicitly structured Mythologiques as a musical composition. The first volume opens with an “Ouverture” and proceeds through “Theme and Variations,” “Fugue,” and “Cantata.” He argued that myth and music are isomorphic — both generate meaning through the simultaneous reading of a horizontal axis (melody, narrative) and a vertical axis (harmony, structural correspondences). Opus — a musical work number — can be read in this lineage: the largest model as the most elaborated composition.
Sonnet. Fourteen lines. Fixed rhyme scheme. A volta at the turn. The sonnet is a form defined entirely by structural constraint. It does not produce meaning despite its rules but because of them. This is bricolage in its purest form: a closed formal system that generates infinite expression through the recombination of finite elements. It is also Constitutional AI in miniature — behavior shaped by internal rules, not external policing.
Haiku. Five-seven-five. Seventeen syllables. Among the most extreme formal compressions in world poetry: maximum meaning from minimum structure. And it is a non-Western form. One can read its presence alongside the European sonnet and the Western classical opus as resonating — whether intentionally or not — with the universalism that Lévi-Strauss spent his career defending: the claim that structural creativity is not the monopoly of any single tradition.
The scalar hierarchy — Opus (large), Sonnet (medium), Haiku (small) — is suggestive. In structural anthropology, the same transformational rules operate at every scale. A myth cycle spanning an entire continent and a single village folktale are governed by the same structural logic. Whether or not Anthropic intends the analogy, the architecture does embody it: the same constitutional principles shape behavior across model sizes.
All three forms share a single property: finite constraint is the precondition for creative generation. An opus without form is noise. A sonnet without rules is free verse. A haiku without compression is just a sentence. Constraint is not the enemy of creation — it is its engine.
Whether Anthropic’s naming was deliberate or intuitive or simply coincidental, the reading holds. Every layer of the product taxonomy — company, model, tiers — can be read as encoding a thesis that Lévi-Strauss, Shannon, and Jacob each arrived at from different directions: structure without intention can produce something that functions as if it were designed. Evolution says yes. Myth says yes. Information theory says yes. Anthropic is betting its company on the same answer — with the addendum that the right constraints make all the difference.
I am not arguing that Anthropic is secretly a structuralist enterprise. I am arguing that the problem they are working on — how to make a system behave well through internal constraints rather than external supervision — is the same problem Lévi-Strauss identified in myths, Jacob identified in evolution, and Shannon formalized in communication. The convergence is in the problem, not necessarily in the intent. Either way, it is hard to look at a company called Anthropic, making a model called Claude, trained by Constitutional AI, and tiered as Opus, Sonnet, and Haiku, and not hear echoes of La Pensée sauvage — even if those echoes were never intended.
There is a question at the bottom of all this, and it is not a theoretical one.
The engineering mindset says: specify, design, implement, verify. The bricoleur mindset says: try, adapt, recombine, iterate. For most of the history of software, we pretended the first was what we did, even as the second was what actually happened. LLM agents have made the pretense untenable. Bricolage is now the dominant mode of software creation whether we acknowledge it or not.
The question, then, is not whether to be a bricoleur. You already are. The question is whether you will be a reflective bricoleur — one who understands the structural closure of their tools, designs feedback loops that break it, builds architectures that multiply the interfaces between closed systems, and knows exactly when to reach outside the kaleidoscope for something the glass pieces cannot provide.
One of Lévi-Strauss’s key observations was that mythical thought totalizes — it takes whatever fragments are at hand and assembles them into a coherent whole. The LLM does the same thing. The difference — the only difference that matters — is what you build around it.
References:
Lévi-Strauss, C. (1962). La Pensée sauvage. Paris: Plon. [English: The Savage Mind, 1966. Chapter 1: "The Science of the Concrete."]
Lévi-Strauss, C. (1964–1971). Mythologiques I–IV. Paris: Plon.
Derrida, J. (1967). "Structure, Sign, and Play in the Discourse of the Human Sciences." In Writing and Difference.
Jacob, F. (1977). "Evolution and Tinkering." Science, 196(4295), 1161–1166.
Turkle, S. & Papert, S. (1990). "Epistemological Pluralism and the Revaluation of the Concrete." Signs, 16(1).
Bender, E. M. et al. (2021). "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" Proceedings of FAccT '21.
Baker, T. & Nelson, R. E. (2005). "Creating Something from Nothing: Resource Construction through Entrepreneurial Bricolage." Administrative Science Quarterly, 50(3).
Anthropic. (2026). "Claude's New Constitution." anthropic.com