Strategy · 12 min read
Can Residents Actually Use Your City Website? A UX Walkthrough Framework for Mid-Sized Municipalities

A repeatable method for city web teams: pick 5 top resident tasks, define success criteria, run simple hallway tests, and log friction points before they become complaints.

Excelle Escalada
Digital Experience Architect · Sep 8, 2025

The resident who gave up after seven clicks

She needed to register her daughter for a spring swimming lesson. She knew the city ran the program. The website had a "Recreation" section. Seven clicks later, she was looking at a PDF of a program guide from two seasons ago, a phone number that rang busy, and a login portal for a third-party booking system that didn't recognize her existing city account.

She drove to the community centre and registered in person.

That story isn't unusual. I've heard versions of it from municipal communications directors, front-desk staff, and residents themselves in every UX audit I've done on mid-sized Ontario city websites. The common thread is never malice or laziness. It's that the teams managing these sites are too close to the content to see the friction. They know where everything lives. Their residents don't.

The solution isn't a full redesign. It's a structured method for finding where your website actually breaks down, using real tasks, real people, and about two hours of your time.

What a UX walkthrough is (and isn't)

A UX walkthrough is not a usability lab study. You don't need eye-tracking software, a one-way mirror, or a research budget. What I'm describing here is closer to what UX professionals call a hallway test: a lightweight, structured observation of someone attempting a real task on your site, with you watching and taking notes.

It is not a survey. Surveys ask people what they think. Walkthroughs show you what they actually do. Those are very different things.

It is not an analytics review. Analytics tell you where people drop off. Walkthroughs tell you why.

Done well, a two-hour session with five people will surface more actionable insight than a month of bounce-rate analysis.

The framework in five steps

Here's the repeatable method I use when auditing municipal sites. You can run the full cycle in a week with no outside help.

Step 1: Pick your five resident tasks

Every municipal website exists to help residents do things. Not to showcase the organizational chart. Not to archive every bylaw ever passed. To help residents do things.

Start by identifying the five tasks that generate the most resident contact. Check your:

  • Help desk and call centre logs: What do people phone in to ask? The top five call drivers almost always represent your worst UX failures.
  • Search analytics on your site: What are residents searching for internally? High search volume on a topic means the navigation isn't surfacing it fast enough.
  • Front desk staff: Ask the people at reception what questions they answer ten times a day. They know.

    Five tasks that consistently top the list for Ontario mid-sized municipalities:

    | Task | Why it's high-stakes |
    |---|---|
    | Pay a property tax bill or set up pre-authorized payment | Financial, deadline-sensitive, high anxiety if unclear |
    | Register for a recreation program (swimming, skating, camps) | Time-sensitive (spots fill fast), involves child safety info, often uses third-party booking |
    | Report a problem (pothole, missed garbage pickup, streetlight) | Citizens expect acknowledgment; unclear process erodes trust |
    | Apply for or renew a permit (building, parking, business licence) | Complex forms, document requirements, payment steps |
    | Find public meeting dates and agenda documents | Civic engagement entry point; often buried in committee structures |

    Pick the five that match your community. Write each one as a task scenario: a real, specific situation a resident might be in.

    Task scenario format:

    Task: [Short label]
    Scenario: [One or two sentences describing who the resident is and what they need to do]
    Starting URL: [The page you ask them to begin on, usually the homepage]
    Success condition: [The specific outcome that means they succeeded]
    Time limit: [Suggested: 5 minutes per task]

    Filled-in example:

    Task: Register for a recreation program
    Scenario: It's January and you want to sign your 8-year-old up for Saturday
    swimming lessons starting in March. Find a lesson that works for your schedule
    and register online.
    Starting URL: https://yourcity.ca
    Success condition: Reaches the confirmation screen or a clear next step for
    completing registration
    Time limit: 5 minutes

    Write all five scenarios before you recruit anyone. This keeps your sessions consistent across participants.
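    If your team keeps scenarios in a spreadsheet or script rather than on paper, the template maps directly onto a small data structure. A minimal Python sketch (the class and field names are my own, not part of the method):

```python
from dataclasses import dataclass

@dataclass
class TaskScenario:
    """One task scenario, mirroring the template above."""
    task: str               # short label
    scenario: str           # who the resident is and what they need to do
    starting_url: str       # the page they begin on, usually the homepage
    success_condition: str  # observable outcome that counts as success
    time_limit_min: int = 5 # suggested limit per task

# The recreation example from this article, as structured data
swim = TaskScenario(
    task="Register for a recreation program",
    scenario=("It's January and you want to sign your 8-year-old up for "
              "Saturday swimming lessons starting in March."),
    starting_url="https://yourcity.ca",
    success_condition=("Reaches the confirmation screen or a clear next "
                       "step for completing registration"),
)
print(swim.task, "-", swim.time_limit_min, "min limit")
# → Register for a recreation program - 5 min limit
```

    Keeping all five scenarios in one place like this makes it easy to print identical handouts for every participant.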


    Step 2: Define success criteria

    "Did it feel easy?" is not a success criterion. Feelings are hard to compare across participants and impossible to act on. You need observable, binary outcomes.

    For each task, define:

    1. Success (unaided): The participant completed the task correctly without any help from you.

    2. Success (aided): The participant completed the task but needed you to clarify something or unblock them once.

    3. Failure: The participant could not complete the task within the time limit, gave up, or arrived at a wrong answer.

    You also want to capture time-on-task (how long it actually took) and the friction count (how many times they hesitated, backtracked, or expressed confusion).

    Success criteria template:

    Task: [Label]
    Success (unaided):   [ ] Yes  [ ] No
    Success (aided):     [ ] Yes  [ ] No  — What was the prompt?
    Failure:             [ ] Yes  [ ] No  — Where did they stop?
    
    Time on task: _____ min _____ sec
    Friction count: _____ (each hesitation, backtrack, or "hmm" = 1)
    Exit URL: [Where they were when the task ended]
    Verbal quote: [Most telling thing they said, verbatim]

    Friction count sounds subjective but works well in practice. You're not judging the person: you're counting moments where the interface caused confusion. Even a small number of participants reveals patterns quickly. If three out of five people pause at the same navigation item, that's your finding.
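    Once sessions are done, the three outcome categories and friction counts roll up into per-task rates with simple arithmetic. A Python sketch, with invented sample data for a single task:

```python
from collections import Counter

# One record per participant for a single task. Outcome values match the
# template: "unaided", "aided", or "failure". Sample data is invented.
results = [
    {"participant": 1, "outcome": "unaided", "friction": 1},
    {"participant": 2, "outcome": "aided",   "friction": 4},
    {"participant": 3, "outcome": "failure", "friction": 6},
    {"participant": 4, "outcome": "unaided", "friction": 2},
    {"participant": 5, "outcome": "aided",   "friction": 3},
]

outcomes = Counter(r["outcome"] for r in results)
n = len(results)
unaided_rate = outcomes["unaided"] / n
total_rate = (outcomes["unaided"] + outcomes["aided"]) / n
avg_friction = sum(r["friction"] for r in results) / n

print(f"Unaided: {unaided_rate:.0%}  Total success: {total_rate:.0%}  "
      f"Avg friction: {avg_friction:.1f}")
# → Unaided: 40%  Total success: 80%  Avg friction: 3.2
```

    The unaided and total success rates are the two numbers that later feed the executive summary.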


    Step 3: Recruit your participants

    You do not need a formal recruitment process. You need five people who are not web or communications professionals and who are not already expert users of your city website.

    Good sources:

  • People in the waiting area at a city service location (with permission)
  • Staff from departments with no web responsibility (parks maintenance, bylaw, finance)
  • Residents recruited through social media or a community newsletter
  • Volunteers at a community event
  • Friends and family (imperfect but better than nothing)

    Five participants will not give you statistical significance. That's fine. You're not running a scientific study. You're finding obvious, recurring problems. Five people is enough to surface the issues that affect everyone: usability research consistently shows that five participants uncover around 85% of major usability problems. You can always run a second round later.

    What to tell participants (script excerpt):

    "We're testing our website today, not testing you. There are no right or wrong
    answers. If something is confusing, that tells us something important about the
    site, not about you.
    
    I'm going to give you a short scenario and ask you to try to complete a task as
    you normally would. Please think aloud as you go: tell me what you're looking
    for, what you're seeing, and what you're thinking. I won't answer questions or
    give you hints during the task, but I'll be taking notes.
    
    Do you have any questions before we start?"

    Print this. Read it the same way to every participant. Consistency matters.


    Step 4: Run the sessions

    Each session takes 30-45 minutes for five tasks. You need:

  • A laptop or desktop showing your live website (use the actual site, not a staging environment)
  • A printed copy of your five task scenarios (hand one at a time to the participant)
  • Your friction log (a notes template, covered in Step 5)
  • A timer
  • A recorder if the participant consents (optional but helpful for reviewing verbatim quotes later)

    During the session:

    Start the timer when the participant begins reading the scenario. Watch. Do not help. Do not react visibly when they struggle (this takes practice). Let silence sit. Some participants go quiet when they're thinking: that's fine. Only intervene if they've been completely stuck for more than two minutes and are visibly distressed.

    Note everything:

  • Every URL they land on
  • Every time they use the search bar instead of navigating
  • Every time they go back to the homepage and start over
  • Every time they say "I'd just Google it" or "I'd call instead"
  • Every time they click something and look surprised by where it goes
  • Every verbatim quote that captures confusion or relief

    "I'd just call" is one of the most important things a participant can say. It means your website failed to replace a phone call, which is the main thing municipal websites exist to do.

    What a session friction log looks like:

    Participant: #__ (no names needed)
    Date: ___________
    Device: Desktop / Laptop / Mobile
    
    TASK 1: [Label]
    Started: ____:____ Ended: ____:____  Total: ____
    Path taken: home > ___ > ___ > ___ > ___
    Friction moments:
      - [Timestamp] [What happened / what they said]
      - [Timestamp] [What happened / what they said]
    Outcome: Unaided success / Aided success / Failure
    Exit URL: ___________
    Key quote: "___________"
    
    [Repeat for Tasks 2-5]
    
    Overall notes: ___________

    Run all five participants through the same tasks in the same order. After the last session, you'll have a stack of five completed logs.


    Step 5: Log and prioritize friction points

    After your sessions, transfer all friction moments into a single consolidated findings table. Group by task, then by location on the site where the friction occurred.

    Friction findings table:

    | Task | Location | Friction description | # of participants affected | Severity |
    |---|---|---|---|---|
    | Recreation registration | Homepage nav | "Recreation" label not found; participants scanned "City Services", "Parks", "About" first | 4/5 | High |
    | Pay property taxes | Tax payment page | CTA button says "Proceed to PAD" — acronym unfamiliar, 3 participants hesitated | 3/5 | Medium |
    | Report a problem | Online form | No confirmation message after form submission; participants unsure if it sent | 5/5 | High |
    | Find public meetings | Agendas page | Meeting type dropdown has 14 options; participants unfamiliar with committee names | 4/5 | High |
    | Permit application | Document checklist | Requirements listed in PDF only, no HTML summary on the page | 3/5 | Medium |
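    The "# of participants affected" column falls out of the grouping step mechanically: collect every friction moment, key it by task and location, and count distinct participants. A sketch with invented moments:

```python
from collections import defaultdict

# Friction moments across all sessions: (participant, task, location, note).
# The data here is invented for illustration.
moments = [
    (1, "Recreation registration", "Homepage nav", "scanned City Services first"),
    (3, "Recreation registration", "Homepage nav", "couldn't find Recreation"),
    (4, "Recreation registration", "Homepage nav", "used search bar instead"),
    (2, "Report a problem", "Online form", "unsure whether the form sent"),
]

# Group by (task, location) and count distinct participants affected
affected = defaultdict(set)
for participant, task, location, _note in moments:
    affected[(task, location)].add(participant)

for (task, location), people in sorted(affected.items()):
    print(f"{task} @ {location}: {len(people)}/5 participants")
```

    A set (rather than a plain count) matters here: one participant hitting the same snag twice still counts as one participant affected.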

    Severity scoring:

  • High: Blocks task completion or causes most participants to give up. Fix immediately.
  • Medium: Slows participants down or causes confusion but they eventually succeed. Fix in next sprint.
  • Low: Minor annoyance, participant comments on it but completes the task easily. Add to backlog.

    Anything affecting 3 or more participants is a systemic issue, not an individual quirk. Prioritize those first.
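    That prioritization rule can be expressed as a plain sort: High before Medium before Low, and within a severity band, more participants affected first. A sketch using invented findings:

```python
SEVERITY_RANK = {"High": 0, "Medium": 1, "Low": 2}

# (description, participants_affected, severity); invented examples
findings = [
    ("PAD acronym unfamiliar on tax page", 3, "Medium"),
    ("No confirmation after report form", 5, "High"),
    ("Recreation label not found in nav", 4, "High"),
    ("Permit checklist is PDF-only", 3, "Medium"),
]

# Fix order: worst severity first, then widest reach
findings.sort(key=lambda f: (SEVERITY_RANK[f[2]], -f[1]))

for desc, n_affected, severity in findings:
    flag = "  [systemic]" if n_affected >= 3 else ""
    print(f"{severity:<6} {n_affected}/5  {desc}{flag}")
```

    The resulting order is exactly the order your top-three fixes should appear in the one-page summary.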


    What to do with your findings

    A findings report is only useful if it moves people to act. Here's a format that works in municipal contexts, where you often need to convince non-technical stakeholders.

    The one-page summary format

    City Website UX Walkthrough: Executive Summary
    Date: ___________
    Participants: 5
    Tasks tested: 5
    
    Overall task completion rate: __% (unaided) / __% (total)
    
    TOP 3 PRIORITY FIXES:
    
    1. [Problem] — [Where] — [Recommended fix] — [Estimated effort: hours]
    2. [Problem] — [Where] — [Recommended fix] — [Estimated effort: hours]
    3. [Problem] — [Where] — [Recommended fix] — [Estimated effort: hours]
    
    QUICK WINS (can be done in under 1 hour each):
    - [Fix]
    - [Fix]
    - [Fix]
    
    LONGER-TERM RECOMMENDATIONS:
    - [Recommendation]
    - [Recommendation]

    Include two or three verbatim participant quotes. Real words from real residents land harder than any data point. "I'd just Google it and hope for the best" in a slide deck gets budget conversations moving.

    Who to share it with

    Share the one-pager with:

  • Your web or communications director (needs the summary and top fixes)
  • Department heads whose content generated friction (they need to see their specific findings, not the whole report)
  • Your IT or development team (they need the detail log for implementation)

    Do not send the full session logs to senior leadership. That level of detail is for your implementation team. Executive audiences need the three things they can authorize and the expected outcome.

    How often to repeat the cycle

    Run a full five-participant walkthrough:

  • Before any major navigation or homepage redesign
  • After a significant content restructure
  • Once per year as a standard digital service review

    Run a shorter two-task spot check:

  • Any time you launch a new online service or form
  • After a spike in phone calls or complaint emails about a specific topic

    The goal isn't to run the test once and declare victory. It's to build a habit of finding out what residents actually experience, on a schedule, before the phone starts ringing.

    A note on remote walkthroughs

    If your municipality can't gather participants in person, remote sessions work. Tools like Zoom with screen sharing, or dedicated user testing platforms, let you observe someone's screen while they talk through tasks over video.

    The trade-off: you lose body language and some of the organic "thinking aloud" quality. The gain: you can recruit participants who represent a wider range of your actual residents, including those who might not visit a city building.

    For remote sessions, the same scripts and templates apply. The only difference is your note-taking location moves from a paper log to a shared document, and you should ask participants to share their full screen, not just a browser tab.

    A realistic expectation

    Your first walkthrough will be humbling. That's the point. The goal isn't to validate how hard your team has worked on the site (though they have). It's to find the gap between what you intended and what residents experience.

    The good news is that municipal UX problems tend to cluster around a small number of root causes: navigation labels that use internal terminology, search that doesn't surface high-traffic content, forms that provide no confirmation, and PDF-heavy pages that block mobile users. Fix those patterns once, and you fix dozens of pages at the same time.

    Start with one walkthrough. Five participants, five tasks, two hours. That's enough to justify your first three fixes.


    Want a facilitated UX walkthrough for your municipal website? Get in touch for a full resident task audit with a prioritized findings report your team can act on immediately.
