Web Project Studios

Field notes

Property listings can't be AI written. They can be AI assembled.

11 May 2026

Tags: property-business · ai-workflow · listings

I reviewed forty-three listings from three different agencies last month. All three had started using AI to write copy. Thirty-one of the listings contained the phrase "an abundance of natural light." Seventeen described the kitchen as "perfect for entertaining." The addresses were different. Everything else was converging.

This is not an AI problem; it is a workflow problem wearing an AI hat.

The agencies in question had done something reasonable: they gave their negotiators a ChatGPT prompt and told them to write listings faster. What they got was faster production of identical output. Volume without differentiation. The Rightmove feed was full. The brand was quietly dissolving.

There is a better approach, and it does not require abandoning AI. It requires changing what AI is asked to do.

When people say "AI-written listings," they usually mean one of two things. Either a negotiator pastes some notes into a chat interface and hits generate, or an automation fires when a new property is added to the CRM and produces copy without human input. Both patterns share the same flaw: the AI is being asked to invent.

Invent the tone. Invent the emphasis. Invent which features matter. Invent the sentence that opens the listing.

Invention at scale produces sameness. The model has no loyalty to your brand. It has no memory of the last forty listings you published. It has no idea that your agency has spent six years positioning itself around period properties in conservation areas, and that "open-plan living" is not actually a phrase you use.

I wrote about this pattern in more depth in the brand erosion problem with AI property listings. The short version: generation is the wrong job for AI in a listings workflow. The model is not the problem. The job description is.

Assembly starts from a different premise. The AI does not write the listing. It renders it.

The inputs are verified facts: room dimensions, council tax band, EPC rating, tenure, lease length if applicable, and confirmed features from a structured inspection checklist. The template is yours: your sentence patterns, your preferred opening construction, your word choices, your rules about what never appears in copy (no "deceptively spacious," no "viewing essential," no "an abundance of natural light").

The AI's job is to take those verified inputs and fill the template without drifting. It does not decide what to emphasise. That decision was made upstream, in the inspection checklist, in the brief, in the template itself.

The output is verifiable. If the listing says the property has a south-facing garden, that fact exists in the input record. If it says the EPC rating is C, that came from the certificate, not from inference. Nothing was invented.

This matters for Rightmove and Zoopla feed compliance as much as it matters for brand. Both platforms have data field requirements that assume accuracy: floor area, council tax band, tenure, new build status. An AI that generates freely can hallucinate any of these. An assembly workflow cannot, because those fields are populated from verified source data before the AI touches them.

The mechanism is a structured fact record that sits between inspection and publication. Every field that appears in the listing must have a corresponding entry in the record. The AI prompt references the record, not the negotiator's memory.

Here is the pattern I use with agencies:

{
  "property_id": "WPS-2026-0412",
  "verified_facts": {
    "address": "14 Marlowe Street, Canterbury, CT1 2AB",
    "property_type": "Mid-terrace Victorian",
    "bedrooms": 3,
    "bathrooms": 1,
    "reception_rooms": 2,
    "tenure": "Freehold",
    "council_tax_band": "D",
    "epc_rating": "D",
    "floor_area_sqft": 1104,
    "garden_orientation": "South-facing rear",
    "parking": "On-street permit zone",
    "heating": "Gas central heating, combination boiler (2023)",
    "notable_features": [
      "Original Victorian cornicing retained in reception rooms",
      "Refitted kitchen (2024) with integrated appliances",
      "Loft room with Velux windows, currently used as home office"
    ],
    "restrictions": [],
    "lease_remaining_years": null
  },
  "brand_constraints": {
    "tone": "Considered. Specific. No superlatives.",
    "prohibited_phrases": [
      "abundance of natural light",
      "deceptively spacious",
      "viewing essential",
      "perfect for entertaining",
      "briefly"
    ],
    "opening_construction": "Lead with property type and defining feature. No rhetorical questions."
  },
  "template_version": "residential-v4",
  "verified_by": "J. Okafor",
  "verified_date": "2026-04-29",
  "feed_targets": ["rightmove", "zoopla", "agency_website"]
}

The AI prompt receives this record and is instructed to render copy using only the fields present. It does not fill gaps. It flags them. If garden_orientation is null, the workflow returns a validation error rather than inventing a direction.

That last part is the one most agencies skip. They build the schema but leave the AI free to handle missing data gracefully. "Gracefully" means fabrication. The workflow should treat missing verified facts as a blocker, not an invitation to improvise.
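The missing-fact gate can be sketched in a few lines. This is illustrative, not a definitive schema: the field names match the example record above, but which fields count as blocking is a choice each agency makes for itself.

```python
# Minimal sketch of the missing-fact gate described above.
# REQUIRED_FACTS is an assumption for illustration, not a fixed standard.

REQUIRED_FACTS = [
    "address", "property_type", "bedrooms", "bathrooms",
    "tenure", "council_tax_band", "epc_rating", "floor_area_sqft",
]

def validate_record(record: dict) -> list[str]:
    """Return a list of blocking errors; an empty list means the record may proceed."""
    facts = record.get("verified_facts", {})
    errors = [f"missing verified fact: {f}" for f in REQUIRED_FACTS
              if facts.get(f) in (None, "")]
    # A leasehold record must carry a remaining lease length.
    if facts.get("tenure") == "Leasehold" and facts.get("lease_remaining_years") is None:
        errors.append("leasehold record without lease_remaining_years")
    # Every record needs a named verifier before it enters the workflow.
    if not record.get("verified_by"):
        errors.append("no named verifier on record")
    return errors
```

The point is that a non-empty error list stops the pipeline; nothing downstream is invited to improvise around the gap.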

Both platforms ingest structured data. The listing you see on Rightmove is rendered from fields: price, bedrooms, bathrooms, description text, images, floor plan, EPC certificate. The description text is one field among many, and it sits alongside data that must be accurate or the listing gets flagged.

If your AI generates the description freely but the structured fields are populated from your CRM, you already have a split: verified data in the fields, unverified prose in the description. The assembly approach closes that split. The description is derived from the same verified record as the structured fields.
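One way to enforce that derivation is to build the render prompt mechanically from the record, so the model only ever sees verified fields and the brand constraints block. A minimal sketch, assuming the record shape shown earlier; the prompt wording itself is illustrative:

```python
import json

def build_render_prompt(record: dict) -> str:
    """Assemble a render-only prompt from the verified fact record.
    The model never sees free-form negotiator notes, only verified
    fields and the brand_constraints block."""
    facts = record["verified_facts"]
    constraints = record["brand_constraints"]
    return "\n".join([
        "Render a property listing description using ONLY the facts below.",
        "Do not add, infer, or embellish any detail not present in the facts.",
        f"Tone: {constraints['tone']}",
        f"Opening: {constraints['opening_construction']}",
        "Never use these phrases: " + "; ".join(constraints["prohibited_phrases"]),
        "Facts:",
        json.dumps(facts, indent=2),
    ])
```

Because the description and the structured feed fields are both populated from the same record, there is no second, unverified source of claims.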

This also makes auditing straightforward. If a prospective buyer later disputes a claim made in the listing description, you have a fact record with a named verifier and a date. You know exactly where every claim originated. That is not a compliance luxury. Under the Consumer Protection from Unfair Trading Regulations, material misinformation in a listing is an offence. An assembly workflow gives you a paper trail. A generation workflow gives you plausible deniability and not much else.

If you are already using AI for listings and the output is sounding repetitive, the fix is not a better prompt. It is a better input structure.

  1. Build a verified fact checklist for your inspection process. Every field that will appear in the listing must appear on the checklist. Room dimensions, orientations, certifications, tenure, confirmed features.
  2. Define your prohibited phrases and your tone rules. Write them down. They become the brand_constraints block in your schema.
  3. Identify who verifies the record before it enters the workflow. Name and date on every record. This is the gate.
  4. Build or adapt a prompt that references the schema and treats missing fields as errors. Test it against ten historic listings and compare the output to what you would have written.
  5. Run the Rightmove and Zoopla field mapping alongside the schema. Every mandatory feed field should have a corresponding verified fact entry.
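Step 5 can be automated as a simple gap check between each feed's mandatory fields and the verified record. The feed field names below are stand-ins for illustration, not the platforms' actual schema identifiers; you would substitute the real mappings from their feed documentation.

```python
# Illustrative mapping from feed-mandatory fields to verified-fact keys.
# Field names here are hypothetical placeholders, not real feed identifiers.
FEED_FIELD_MAP = {
    "rightmove": {
        "price": "asking_price",
        "bedrooms": "bedrooms",
        "property_type": "property_type",
        "tenure": "tenure",
        "council_tax_band": "council_tax_band",
    },
    "zoopla": {
        "price": "asking_price",
        "num_bedrooms": "bedrooms",
        "tenure": "tenure",
    },
}

def feed_gaps(record: dict, target: str) -> list[str]:
    """Return the feed fields with no corresponding verified fact entry."""
    facts = record["verified_facts"]
    return [feed_field
            for feed_field, fact_key in FEED_FIELD_MAP[target].items()
            if facts.get(fact_key) in (None, "")]
```

Run this per feed target before publication; any non-empty result means the record is not yet complete enough to push.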

The whole thing can be built in a week if the inspection checklist already exists in some form. The harder part is usually the brand constraints: most agencies have not written down what they do not want their listings to say. That conversation is worth having before you automate anything.

If you want to see how this connects to the broader question of AI pilots that stall before they produce anything useful, the post on why most AI pilots fail covers the pattern in detail. The listings problem is a specific instance of a general one: AI given a vague job and no structured input produces vague output at speed.

The assembly approach is not a workaround for bad AI. It is good workflow design that happens to use AI at the render step. The interesting work is upstream. It always is.

If you want a structured review of how your current listings workflow handles input, verification, and brand constraints, the AI Workflow Audit is where that conversation starts.