Cities are increasingly using AI for planning analysis. But how do you ensure that AI assistance improves decision-making without obscuring accountability? This guide introduces two frameworks from Anthropic research that planning departments can use to develop organizational AI fluency and establish governance standards.
The Problem: Professional-Looking AI Output Reduces Critical Scrutiny
Anthropic's AI Fluency Index research tracked 9,830 conversations and identified a troubling pattern: when AI generates polished outputs, users become less likely to critically evaluate them.
Specifically:
- Fact-checking decreases by 3.7 percentage points when outputs appear finished
- Critical questioning drops by 3.1 percentage points
- Users invest more effort in upfront guidance yet paradoxically become less skeptical of the results
For urban planners, this is a serious governance risk. Professional appearance does not equal accuracy.
What is AI Fluency?
AI fluency is the ability to collaborate effectively with AI systems. The research identified 24 observable behaviors organized around three core competencies:
Directive Behaviors (Setting Clear Parameters)
Give the AI clear, specific instructions. Instead of "Analyze the Downtown West proposal," say: "Analyze the Downtown West proposal focusing specifically on: (1) displacement risk for households earning below 80% AMI, (2) alignment with our Affordable Housing Preservation Strategy, (3) comparison against our 2024 parking standards."
Iterative Engagement (Refining Through Conversation)
Iteration is the single strongest predictor of AI fluency. Get an initial output, then refine it with follow-up questions. Suppose the AI produces a fiscal impact analysis. You refine: "How does that compare to our 2023 housing policy? Can you break out property tax versus sales tax?" Then further: "The school impact doesn't match our LCAP projections. Show me your assumptions."
In the research, conversations with iteration showed an average of 2.67 additional fluency behaviors.
Critical Evaluation (Maintaining Healthy Skepticism)
Don't assume polished outputs are accurate. Question the reasoning. Verify facts. Identify gaps. Test assumptions. Look for errors.
Key point: The more polished something looks, the more skeptical you should be.
Framework 1: AI Governance for Planning Departments
Planning departments should establish clear policies on AI use:
USE AI FOR:
- Literature review and synthesis
- Data compilation and visualization
- Report drafting and formatting (with human review)
- Policy comparison across jurisdictions
- Initial impact analysis (always verify)
- Converting documents to plain language for accessibility
DON'T USE AI FOR:
- Final policy decisions
- Legally binding determinations without attorney review
- Confidential project information
- Decisions affecting specific developers without legal counsel
- Analysis where you can't verify the underlying data
- Tasks where you can't explain the AI's reasoning to the public
Framework 2: The Five-Element AI Diligence Statement
Whenever your planning department uses AI in public-facing work, disclose it with these five elements:
Element 1: Specific Tasks
Example: "AI assisted with synthesizing data from five housing studies, drafting initial policy language for the TOD ordinance, and generating comparison matrices of parking requirements across peer cities."
Element 2: Tool Identification
Example: "Claude (by Anthropic), used for document analysis and policy language drafting."
Element 3: Human Review Process
Document what you verified: "The planning staff reviewed all housing projections against regional forecasts (we did not use AI projections; we substituted RPA forecasts), verified transit connectivity claims against our GIS analysis, and cross-checked school impact calculations against our LCAP model."
Element 4: Where You Made Changes
"We identified and corrected the following items: The AI initially recommended single-family zoning (we changed to mixed-use per our Station Area Plan). The AI projected 30% affordable housing (we updated to 40% per Council policy)."
Element 5: Your Accountability Statement
"I have reviewed this analysis and am confident in its accuracy and appropriateness. The planning department remains fully responsible for these findings and recommendations."
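Departments that generate many disclosures may want a reusable template so no element gets dropped. A minimal sketch in Python (the class, field names, and sample text are illustrative, not part of the framework itself):

```python
from dataclasses import dataclass


@dataclass
class AIDiligenceStatement:
    """Five-element AI disclosure for public-facing planning work."""
    specific_tasks: str       # Element 1: what the AI actually did
    tool_identification: str  # Element 2: which tool, and by whom
    human_review: str         # Element 3: what staff verified, and against what
    changes_made: str         # Element 4: where staff corrected the AI
    accountability: str       # Element 5: who signs off and remains responsible

    def render(self) -> str:
        # Assemble the five labeled elements into one disclosure block,
        # separated by blank lines.
        sections = [
            ("Specific Tasks", self.specific_tasks),
            ("Tool Identification", self.tool_identification),
            ("Human Review Process", self.human_review),
            ("Where We Made Changes", self.changes_made),
            ("Accountability Statement", self.accountability),
        ]
        return "\n\n".join(f"{label}: {text}" for label, text in sections)


statement = AIDiligenceStatement(
    specific_tasks="AI assisted with synthesizing data from five housing studies.",
    tool_identification="Claude (by Anthropic), used for document analysis.",
    human_review="Staff verified all projections against regional forecasts.",
    changes_made="We updated the affordable housing share per Council policy.",
    accountability="The planning department remains fully responsible.",
)
print(statement.render())
```

Because every field is required, a disclosure missing any of the five elements fails at construction time rather than slipping into a published report.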
Implementation Checklist
This Week:
- Brief your planning director on these frameworks
- Identify one analysis project currently underway
- Apply the three fluency competencies to that project
This Month:
- Develop your first AI disclosure template using the five-element framework
- Identify which tasks your department wants to use AI for
- Identify what's off-limits
This Quarter:
- Formalize an AI use policy
- Train your staff on directive behaviors and critical evaluation
- Establish fact-checking protocols
- Share your AI governance standards with elected officials
Key Takeaways
Iteration is the strongest predictor of effective AI use. Don't accept first drafts. Refine. Engage.
The polished output paradox is real. Professional-looking AI analysis reduces critical scrutiny. Question harder when things look finished.
Transparency builds trust. A clear five-element disclosure demonstrates thoughtfulness and enhances credibility.
You remain accountable. AI is a tool. It doesn't change who signs off or who answers to the public.
Governance comes first. Establish clear policies before deploying AI widely.
Resources
- Anthropic AI Fluency Index: www.anthropic.com/research/AI-fluency-index
- Discussion Guide: claude.com/resources/tutorials/a-discussion-guide-for-the-ai-fluency-index
- AI Diligence Statement Tutorial: claude.com/resources/tutorials/writing-an-ai-diligence-statement