Special Assistant to the RoboBricklayer
What does AI take-off do to buildings? part 1 of many.
A colleague of mine insulted me this week. He said that I change my mind about the probability of AI doom based on whatever I happen to have read last. For example, if I have just skimmed a Zvi post standing in line for a coffee, then I will come into the office harrumphing about having only five more years left, and will say things like ‘but let’s make these five years the best they can possibly be’. On the other hand, if I listen to someone like Jack Clark, then I won’t do quite as much harrumphing, and I might even start saying nice things about the character of AI safety researchers and the power of international co-ordination to create nice things for people.
Because the point of accusations is not to listen to them, but to force oneself to think about why one is justified in continuing as one is, I riposted that far from being weathervaney and spineless my behaviour was in fact a reasoned response to my information environment. This environment has a number of characteristics which make it distinctive. First, it contains people with wildly different predictions about what will happen but with unsettlingly similar preoccupations, values, and backgrounds. The doomers in my feed who drop five-or-six-figure-length posts have gone to the same schools, worked in the same companies, and sometimes dated the same people as the Panglossian Eternal Growth mongers who post hockey-stick graphs. This is not simply the truism that ‘reasonable people can disagree’; it is evidence that the same people can disagree. A second feature of this information environment is that there is no particular correlation between having credentials, with the insider privilege they might bestow, and having any success in getting things right. In fact, everyone — with one or two notable exceptions — has so far been wrong about pretty much everything.
What’s the best way to process the information that I’m reading, then? For other topics, when I am exposed to a varied diet of opinions and bits of evidence, I notice that my opinions start off quite wild and ephemeral and then settle down once I know my way around the relevant data. From that point on they shift, but only in a seismic way: not at all, then all at once.
When thinking about AI timelines my opinions do not work like this. It’s not the case that the more I read, the more evidence I have for my predictions, and the more confident I become in these predictions. In part, this is a function of the nature of the question. Projecting AI timelines is notoriously difficult because it is impossible to understand the nature of the reflexivity involved. What I mean by this is that for other predictions about human effort — in things like science or economics — you have some level of confidence about the magnitude of the productive labour involved, as well as the magnitude of the human intelligence being applied. Obviously this doesn’t apply in the case of AI. The smallest of gradations in the level of AI take-off and self-improvement will, over a one- or two-year period, lead to a qualitative difference in outcome. If we get agents which have good research taste and can accelerate the rate of frontier-lab experimental output and algorithmic progress by 10x, then we are in one world. If we get agents which have good research taste, can accelerate algorithmic progress, and can also do research into chip design and improve the quality of hardware, we are in a different world.
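To make that concrete, here is a toy compounding sketch. The multipliers are mine and purely illustrative, not drawn from any actual forecast; the point is only how small gradations in a self-improvement rate diverge into qualitatively different worlds within two years:

```python
# Toy illustration with made-up numbers: three nearby monthly
# self-improvement multipliers, compounded over 24 monthly "generations".
for monthly_gain in (1.00, 1.05, 1.10):
    capability = 1.0
    for _ in range(24):
        capability *= monthly_gain  # each generation builds on the last
    print(f"{monthly_gain:.2f}/month -> {capability:.1f}x after two years")

# 1.00/month -> 1.0x; 1.05/month -> 3.2x; 1.10/month -> 9.8x.
# Nearby inputs, non-nearby worlds.
```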
The point is that we will either get some kind of exponential growth, or we won’t. There is a caesura between a forecastable world and an unforecastable one. If there is in fact a binary like this, taking the mean position between a number of positions is meaningless. You can’t ‘factor’ a full take-off worldview into a bottlenecked one. When my colleague accused me of flip-flopping on my opinions, my response should have been something like this: flip-flopping is the only thing you can do when thinking about a binary.
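Here is a minimal sketch of why the mean fails. The numbers are hypothetical stand-ins, not anyone’s actual forecast: split a panel of forecasters 50/50 between a bottlenecked world and a take-off world and average them.

```python
import statistics

# Hypothetical stand-in numbers for annual growth in some capability metric.
bottlenecked = [1.05] * 5   # five forecasters expecting ~5% growth
take_off     = [10.0] * 5   # five forecasters expecting a 10x jump

mean = statistics.mean(bottlenecked + take_off)
print(f"mean forecast: {mean:.2f}x")  # ~5.5x

# Neither camp believes in a 5.5x world. With a bimodal question, the
# average describes an outcome that has near-zero probability under
# either view; it is a prediction about a world that never happens.
```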
It could be said that the actual downside of flip-flopping is not that it is intellectually bankrupt, but that it puts a lot of strain on my mind. Each time I revise my overarching views on whether there will be take-off, I have to make changes to a number of downstream opinions, like rebasing a long-lived feature branch on an extremely busy master. However, for me this is not in fact a downside. I find it rewarding, and instead of fighting it in the domain of AI progress to try to get some kind of ‘slow thought’, I have found it’s best to embrace it. Each time I see something convincing, I run the full pipeline of logical entailment. Switch the context, turn the temperature up. Read AI2027, then search on Homestra for huts on Norwegian islands. Try to build a high-volume processing system with Claude Code, then log into Trading212 to dump Meta stock.
This is the phenomenology of life pre-take-off. The identity of an ‘online’ guy becomes nearly irrelevant, because being online is now compulsory. I have become deliberately and calculatedly unmindful. I avoid the present moment in favour of multiple distant moments. I maximise my brain’s information throughput. Reading this, it probably sounds somewhere on a spectrum from cringe to dystopian, and I’m sure it is, but once you’re all in…
That said, right now I am trying to do the opposite. I am sitting in a pub in Notting Hill as I write this. There is no wifi here, and I am not going to hotspot from my phone because the battery is so low, which means I have to write this without an internet connection. It’s a clocks-back situation, ‘kind of empty in the way it sees everything’. The pub is called the Uxbridge Arms, even though I would guess we are ten miles from Uxbridge. Later I am going to the Coronet Theatre to see a play by Jon Fosse, Norway’s second-best author. It is sunny outside, but it is in fact too sunny, and the sun was washing out my screen when I sat on the benches on the street, so now I am inside, but I can still feel the suncream on my face.
On the pew beside me a guy is watching a Fifa streamer on his phone and has just shuffled his weight to one side to deposit an enormous fart payload into the space. He looked around to check if anyone had noticed. A few minutes ago a woman interrupted me to say that she was a clown. I said excuse me and she said she works as a clown, and that she has a podcast about clowns. It is quite a nice coincidence that the one time I try to write something in public so many things happen to me.
The pub’s coat of arms claims that it has been around since 1869, which in my low-res timeline of the 19th century is during the Disraeli-Gladstone back-and-forths. One of the important things about being in London is that you are very often in buildings which are older than the political order in which you live. This pub was built in an empire that administered a quarter of the earth’s land area. The order of things is now very much changed. The better you build your buildings, the better they ‘do’ what they are meant to do, the more they convey the history of themselves without telling that history. This pub is so good at being a pub that no one ever thinks of it as a monument.
This building, a residue of an old order in the present day, is a good example of why thinking about AI take-off over the last two years has pushed me towards my current set of interests in the built environment. That feeling of constancy is obviously a comfort when faced with the reflexivity and ambiguity of AI thinking. The land and the things built upon it will remain fairly constant across a large range of extreme AI timelines, and are perhaps only altered by the most radical and unhinged of them. From talking and learning and thinking, it is clear that I and many others in the industry are highly sceptical of the idea that in the next five years construction will be eaten by AI in the same way that software, finance, and writing will be.
For example, an enormous cost is exerted by the fact that it is challenging to comply with building regulations. You need your building to be accessible, so you have to iterate through many floorplans to find one which has enough clearance in various entranceways. You need it to be fire safe, so you need to make sure your fire exits are in particular places. But then you find out your fire escape has been placed on top of an important part of the HVAC system, and so you need to go to the HVAC guy, or maybe that’s you as well, and you need to redesign where that pipe goes, and the changes ripple back through the model with all sorts of hard-to-foresee consequences. There is great progress AI could make in solving these kinds of ‘skill issues’. But even once these are solved, there is still a co-ordination and probability problem in this section of the industry. Very often, when you design a building, you know that some things are definitely compliant, and you know that there are some things which a particular building officer might plausibly flag as non-compliant. Codes are up for interpretation, and even though you know there is a risk you might get flagged, you think it is worth submitting a particular proposal because that proposal is in the end simply your ideal way of doing things. Often in such cases you might be proven right, but just as often you might be proven wrong, having failed to predict what was in the mind of this building officer. This is the ‘two-body problem’ of regulation: ultimately codes, like laws, are up for interpretation, and so you can never be sure you will get it ‘right’ first time. This dynamic is not going to change until we have a situation where the laws are interpreted and administered by an AI which you as a developer or architect can also access and run against your building, removing the information asymmetry in the regulatory game altogether. This still seems very far away. It is true that the Singaporean government (of course) is developing a tool called CORENET that lets them do automated checking of building compliance, but it is unclear how unambiguous its rulings will be, and how far the humans will be out of the loop.
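To see the asymmetry in miniature, here is a toy sketch. Every name in it is hypothetical; it is not a real tool or API. The developer can only check a design against their own reading of the code, while approval depends on the officer’s reading of the ambiguous clauses.

```python
import random

# Clauses whose wording admits more than one reasonable reading.
AMBIGUOUS = {"stair_width", "escape_route_distance"}

def developer_reading(clause: str) -> bool:
    """The developer resolves every ambiguity in their own favour."""
    return True

def officer_reading(clause: str, rng: random.Random) -> bool:
    """The officer's reading is unknowable in advance; model it as a coin flip."""
    return clause not in AMBIGUOUS or rng.random() < 0.5

design = ["stair_width", "escape_route_distance", "door_clearance"]

# The design always passes the developer's own check...
assert all(developer_reading(clause) for clause in design)

# ...but what gets flagged varies with the officer's interpretation.
for submission in range(3):
    rng = random.Random(submission)
    flagged = [c for c in design if not officer_reading(c, rng)]
    print(f"submission {submission}: flagged {flagged}")

# If the regulator's model were a shared, runnable artifact, you could
# call officer_reading() yourself before submitting, and the two-body
# problem would collapse into ordinary constraint checking.
```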
One day there might be regulation encoded in a model which runs building compliance. There might be some new framework for legal liability that allows it to make decisions. There might also be robo-bricklayers and plasterers that can put up a house in a day. All this is far away. There are things in the built environment which can and will be done better, but that is not the same as there being reflexive self-improvement, which means there is a fundamental layer of predictability to this domain: it can be reasoned about, and it is less manic as an information space. Even though, as I said, I like the frantic skimming between ideas about AI take-off, when it comes to having a career and directing my effort in the world it seems more sensible to find an adequate object, one which might respond to the effort put in.
It’s important too. I won’t bother rehearsing everything you already know about why it’s so necessary and so hard to get cities, housing, and land ownership right. AI does make it more urgent, too. At some point, building nuclear energy and data centres will become a national security priority for the UK. If and when this does happen, it might be the case that things like residential development, building safety, and urban beauty will fall down the list of priorities. If as an industry we can make systems which can reliably deliver cheaper, safer, lovelier buildings in an ongoing way, there will be less risk that those goods fall by the wayside when things get spicy. At some point, it might get harder to build the civic built environment we want, not easier.
Yet this all sounds too much like forecasting. I need to remember what I began this post with: that I cannot know the reflexivity involved, so I cannot know the future.
Thank you for reading. Please email me to talk about the built environment or AI, or both. Subscribe for more.
www.crossthatbridge.ai



