Dispatch for the week ending 16 March 2025.

Mission elapsed time 20,739 days.

Good afternoon, beloved -

For some time now, I've been mulling a follow-on project to Lifehouse that I think of as "the Lifehouse Cookbook." This would be an offering in the grand tradition of Papanek and Isaacs and Alexander: a compilation of "recipes" for all of the complicated provisions any Lifehouse organized along the lines I laid out in the book would need to be able to call upon, from aquaponic growing racks to the basic makings of a free community healthcare clinic. Think of it as a highly customized pattern language for the creation of democratically-managed, neighborhood-scale relief and recovery hubs.

To be clear, I never intended to write most of these recipes myself, so much as collate them from the very large corpus of relevant information that's already floating around out there — my original idea was to gather all the material on these topics that's either lapsed into the public domain, or was made available to begin with under some license that permits this sort of transformative use. I'd then put a goodly bit of effort behind an editorial process, meant specifically to strip the content of all the paranoid framing and affect that so often burdens the more prepperish sort of material, before rewriting it for consistency and accessibility. The final stage would involve organizing a team of named, qualified individuals to field-test and validate each recipe, and then apply a version-control system so you'd know you had the latest specification. You could then print up the whole Cookbook, zine-style, and circulate it for free, or prebundle PDFs of the recipe files and distribute them on a hardened thumb drive like this one.

This is obviously not a trivial undertaking. Given the research, collation, editorial and validation effort required to release even a single minimally reliable recipe — in other words, not this kind of thing — I always imagined that producing a Cookbook would involve paid staff, working fulltime for a few months at the very least, just to get a viable Version 1.0 out the door.

And this may be where my age and prejudice tell. I told a friend about my idea for the Cookbook and, well, he didn't quite roll his eyes. But he did say: look, it's 2025. Why on Earth are you taking on all that work, when you can just train up an LLM with the material you collate? That way, anybody who wants to organize a Lifehouse can just ask for help with whatever challenge they happen to be facing, and the LLM can kick them out a tailored response.

I get that plenty of people are already relying on knowledge-management strategies just like this, but I think in this context the LLM idea is pretty clearly a nonstarter, if for no other reason than it eliminates the validation and vetting process I'd had in mind. Given the ease and frequency with which LLMs are known to hallucinate, using them to convey life-critical information strikes me as a boundlessly irresponsible thing to do. I mean, just imagine someone in the teeth of a crisis situation, taking ChatGPT's advice on the proper way to connect DC and AC electrical systems, or sterilize medical instruments. Given everything we know about LLMs, the risk of being offered instructions that seem plausible enough, but are subtly, lethally wrong, is far too great to accept.

I think you know my resistance is founded in more than that, though. Nobody burns hotter with the flame of anti-“AI” loathing than I — nobody. My feelings on the subject lean sharply toward hashtag-butlerianjihadnow. Everything I've ever experienced of them tells me that LLMs and GAN image-generation tools are morally, practically and aesthetically shoddy engines for the mass, nonconsensual extraction of value from uncompensated human labor (including my own!), built on ecocidal draughts of energy-intensive "compute," and I can't presently see anything changing my mind about any of that.

I especially can't see being convinced of the value of "AI" by any of the usual herbs hyping crypto and VR-based "Web3" and all the other similarly inimical technologies of our time — their opinions carry zero weight with me. The trouble for me comes when someone I respect, and whose opinion I do value, makes a strong, sustained case for sensitive use of these tools. Because as much as I despise the way we've arrived at "AI," I also don't like to be religious about things. I like to think that I'm the kind of person who retains the ability to change my mind about my beliefs — even strongly-held ones — when the evidence base underlying them shifts. And it's certainly at least possible that there have been recent developments with LLMs and GANs that alter the equation. So lately I've been feeling like maybe it's time to put my resistance to the test.

The situation is much the same as the one with DAOs that I wrote about a few dispatches back: I remain fiercely skeptical, but if there is indeed a path toward the convivial, comradely, broadly fructifying use of these neural network-based tools, I'd certainly want to know what it is. So when my friend's scenario for production of a Lifehouse Cookbook felt like it offered a reasonable test case, I halfway took him up on the suggestion.

I did this: I fed an LLM a series of highly specified requests. I asked it to prepare for me natural-language, easy-to-understand and nontechnical instructions for three goals: how to manage a productive, deliberative neighborhood assembly, among participants of diverse backgrounds, outlooks and levels of experience; how to install and operate a neighborhood-scale solar- and wind-powered electricity generation and storage microgrid, capable of operating in both "island" and connected modes; and how to build a community-scale water filtration and purification system, proof against microbial, fungal and chemical contaminants, that can be built with readily-available, household or low-cost commodity components. These were all provisions I imagined just about any Lifehouse being able to make immediate use of.

And it delivered, complete with schematics. The written instructions for the two more technical recipes are, so far as I am able to determine off the back of a half-hour's online research, valid (and in the case of the microgrid, compatible with all relevant UK laws and regulations to 1 January 2025, as specified). Both sets of instructions were couched in language that someone like me, with no specialist knowledge or prior experience, could parse easily enough. Though I'd be more than a little uncomfortable relying on them without having first double-checked them against real subject-matter experts, you could definitely use these recipes to at least get started, and I have no doubt that the bill of materials/parts list, sourcing recommendations and cost estimates I asked for would help you even if you'd never before taken on any such challenge.

The provisions it laid out for moderating a neighborhood assembly, further, were not a million miles away from things I've seen in specialist articles about nonviolent communication or Quaker meeting process; if you tried running a meeting with them, I can't see you doing any worse than if you were working from known, trusted material like "Anarchic Agreements," the Seeds for Change guide on effective meetings, or the old IWW standby, Rusty's Rules of Order. Of course, these materials were almost certainly in the LLM's training corpus, so this shouldn't come as a huge surprise...but it's a data point.

If the written instructions struck me as competent, though, my reservations kicked in with redoubled force when it came to the illustrations meant to accompany them. Consider this ostensible diagram of a community microgrid:

This still suffers from all the signature blunders we associate with AI-generated imagery. It resembles a schematic, rather than actually being one — you try to follow this scribble as a wiring plan and it's liable to get you killed, or burn down your entire block, or both, and we should probably count ourselves fortunate that there's no way you could do so even if you wanted to. What look like captions and callouts are impossible to parse, mere stand-ins for anything meaning-bearing. I am intrigued, though, by the considerable resemblance this image bears to that other longstanding touchstone of mine, Clifford Harper’s Vision 4: Autonomous Terrace; it's almost as if the vector function of "interest in urban microgrids" folds in that aesthetic.

Now consider this schematic drawing of a multistage water-purification rig mounted in a custom, Ken Isaacs-style spaceframe, along with a just-about photorealistic rendering of the built system up and running:

Just as with the microgrid illustration, here the GAN also kinda nailed the aesthetic. This setup looks very much indeed like something you'd find instructions for in 1973's Nomadic Furniture, or actually built up in an environment like raumlaborberlin's wonderful Floating University.

The issue is that, once again, you couldn't actually use the image as a guide to the production of anything that might work, let alone do so safely. Put to one side any suspicion that whatever corpus the GAN drew upon was heavily stocked with M.C. Escher woodcuts, with all the mildly lysergic distortions and impossible spatial relations that implies. Labeling phases of this system "Filtertion," "Distifftion" and "Chemichar" gets you worse than nowhere. Can you safely assume that those refer to filtration and disinfection stages? Even if you could, are they illustrated in anything like the correct sequence? (And just in case you've generously assumed that it might be domain-specific technical terminology you simply weren't familiar with, no, "chemichar" is not a thing).

It isn't simply the visual artifacts that give these away as "AI" productions. It's that inherent tendency of LLMs to deploy lexemes only for their observed frequency distribution, not for their semantic value, and to seem authoritative while doing so. This is still a wildly overconfident gesture at something rather than anything resembling the substance of that thing, in other words, along with a quite dangerous incapacity to tell the difference. (Interestingly, when I explicitly asked the LLM to kick me out plans for a vertical-axis wind turbine, rather than the conventional windmill style, I only got one on the third try, accompanied by a pseudo-sheepish apology.)

So no, you could not use these images in isolation to build working instances of the things they're intended to represent, while using them alongside accurate written instructions seems likely to result in perplexity, frustration and some unavoidable measure of risk. And again, the reason why this is so isn't even a question of safety, but of basic functioning. The general notion of a working system is there, kind of — but its parts are neither accounted for accurately, nor connected to one another in any way that would produce the desired effect. What we have here is not so much a plan as, well, a concept of a plan.

What did I learn from this exercise? And most importantly, did it do anything to change my fundamental, bonedeep resistance to the idea of working with these tools?

The written instructions for both technical and social projects seemed superficially sound, to the limits of my own ability to validate them. Though I would not prefer it to guides written by individuals or groups whose experience I value and whose values I recognize, I would feel comfortable trying to run an assembly on the protocol the LLM offered me, if I had nothing else to rely upon. Conversely, I wouldn't feel safe relying on the recipes for the microgrid or the water-purification setup without first running them by someone with domain knowledge and experience — though, both impressively and dangerously, they certainly struck me as a nonspecialist as being credible enough. If you wanted to produce a Lifehouse Cookbook or anything along those lines with current LLM technology, then, you might be able to produce a crude first pass, but you'd still need qualified humans in the loop who could speak knowledgeably to the wisdom of following the counsel you were being offered.

The imagery was significantly more problematic, especially considering the importance of thoughtful illustration in helping most of us understand complicated multistage instructions with any technical or spatial aspects to them at all. There's a reason why IKEA lavishes such care on its instruction graphics — and in that context the worst blunder it's possible to make involves mounting the shelves on your Billy all wonky, not saddling everyone on your block with a nasty case of giardiasis. At the present stage of their evolution, anyway, it seems to me that GANs are suited only to the evocation of an idea, not at all to the creation of technical drawings meant to guide the execution of it. (It's possible that the limitations I ran into are associated with the free versions of the tools I used, and wouldn't confront users of the premium tiers, in which case I expect to hear about it from one of you.)

To be explicit: I surrender none of my reservations about the sidelining of compensated human labor in the production of these representations, the massive theft of that labor that production is predicated upon, or the awful cost in dirty energy. I would never publish these specifications. I would never rely upon them as anything but the merest starting point for further, conventionally manual, effort- and labor-intensive research. I retain just about all of the doubts and pinpoint loathings for LLMs and GANs I started with. But especially where the initial collation of written instructions is concerned, it’s clearly getting harder to build arguments against their use that my gut tells me most people would find compelling.

Where does this leave my project? I would like to produce and release a v1.0 Lifehouse Cookbook by this time next year, and in fact I warmly solicit your thoughts as to what recipes a useful first pass at any such thing might include. But I can't responsibly produce one to even a minimal standard using the present generation of "AI" tools...and even if I could, I'd feel filthy about doing so.

Other parties will have no such compunction, of course, and I expect to see much more of the epistemic pollution we're already experiencing, which obviously makes the project of validating information even more challenging. In fact I'm just not sure how any human effort holds its own in an environment where an ever-greater proportion of cultural production consists of zero-effort automated slop. But that's a problem for a different day. For now, while I'll continue to test this proposition every few months, and will report back to you if anything changes, I haven't yet seen anything that might justify the use of "AI" techniques in my own domain — and that strikes me as useful information to have.


§ Right now I'm still reading H is for Hawk, a little bit out loud with N every night, which is helping us work through some of the grief we're experiencing; listening to a lot of old folk and gospel and union songs, of the sort that gesture toward some possibility of peace with justice; and watching Adolescence: a little didactic, a tad too consciously Acted, but a flying kick to the chops (and a real blackpill into the bargain, just so you know).

That's about it for now. As always, I hope that you are well, that you have your feet planted solidly on the good Earth, and that you are taking care of those who need you, taking care of yourself, and letting others take care of you. You have

all my faith,

ag

ldn