What I learned about paying attention to details in community listening. From an AI product team.

Walkway over the sand on an Evanston beach on the shores of Lake Michigan.

My former teaching partner and friend Teresa Torres is off on a new venture – the Just Now Possible podcast, where Teresa takes a deep dive with folks who use AI as part of the digital products they design and build.

A recent episode focused on ZenCity, which provides an AI-supported technology platform to help local governments do better by improving how they understand and apply community voices. Teresa texted me when she first finished recording the episode, excited about what she had heard and nudging me to watch for its release because I might be geeked about it. As usual, she was right. [Listen to the podcast episode here. It gets technical after the first 10 minutes, but is still approachable if you know just a wee bit about technology.]

I count myself now as a student of community listening, so I had a lot of thoughts about what was covered in the episode. But the highlight might be something like this: We can all learn from the discipline that ZenCity data scientists and product builders bring to solving community listening problems.

You can’t help but be inspired by the ZenCity team’s deep understanding of the real challenges of inclusive community listening, and by the attention they pay to the details of designing their technology to be in service of addressing those challenges.

Let me pause here for a small rant on AI. In other contexts – media and content creation (including deepfakes), human-AI relationship cases (friendship, therapy), and some education uses, for example – AI scares the shit out of me because of unintended (or maybe intended?) societal consequences. There are also the deeply problematic issues of environmental impact, and AI technology companies being run by and enriching an already too-powerful class of assholes. None of these things are contributing positively to the tenuous state of our current society.

So I don’t want this post to be seen as cheerleading for the technology. It can do cool and beneficial things. But on a scale of 1-10 where 10 = AI-is-the-best-thing-ever and 1 = we’re fucked, I’m probably a 3. At best.

What I really want to highlight in this post is the discipline in designing a tool to solve real problems.

Community listening is a real problem and we use many tools to address it: Events, surveys, technology, processes and practices, etc. Are we really sweating the details across these things?

Let me explain what I mean.

During the past year I have attended a number of events designed to gather citizen input and ideas. Some are run by political leaders, some by city staff, some by consultants. But they all follow a general pattern:

  • Citizens are invited to an event.
  • The event uses a design crafted by the meeting organizers.
  • Citizen participants engage in designed activities to elicit input (data, if you will).
  • Citizen input is assessed and synthesized by the meeting organizers. (This is a human black box, because this step is never transparent.)
  • Synthesis results in insights for the meeting organizers.
  • Insights are used to aid in making decisions by local officials.

Events vary widely in which specific citizen voices are engaged and contribute to data creation, and in how robust the data is. Consider the difference between a town hall, for example, where a limited number of citizen speakers get to share thoughts, and an event which allows all participants to engage in thoughtful small-group dialogue and structured data input.

But even highly-participatory designs have their gaps. I’ve written several posts about my experiences and how event designs may fall short of the purported purpose of hearing what the community has to say. See:

  • How do you know you are making progress toward inclusive community engagement?
  • Thinking about: Town halls and missed opportunities to engage the engaged
  • The limitation of relying on a single style of community meetings
  • “If this were in place, what would it do for you?”

The ZenCity team sees the general pattern of citizen input and community listening as similar to what I describe above, except that the listening sources and activities are much broader than just events. Think: All types of surveys, data from city services (e.g., 311 calls), social media and news outlet stories, the kinds of events I’ve attended, etc.

  • Citizens engage in activities which elicit input (data).
  • Citizen data is made available to be analyzed and synthesized (by city staff).
  • Synthesis results in insights for city staff.
  • Insights are used to aid in making decisions by local officials.

Not much different from what I’ve seen locally.

But there is a big difference in the level of attention paid to all the details necessary to do three things really well:

  • Allow a conversation between the city staff and all the data. (What do you want to know?)
  • Provide meaningful insights to city staff. (What do we need to know now, for this specific decision?)
  • Ensure trust in the results. (How do we know this is true?)

To do those three things you really need to pay deep attention to the details along the whole process, while simultaneously keeping your eye on the desired result of making good decisions based on high-quality, active citizen listening. For example: Citizen surveys are easy to craft using widely available tools. And they are easy to craft poorly. The ZenCity folks look at that method of listening and work to improve it, so the data gathered is more useful. Same goes for live, in-person events. How might the data gathered from those events be improved so that the entire listening system is fed with good quality stuff?

The team is also very aware of AI issues around hallucination – making up answers that are not based in reality. So they pay deep attention to how they maintain trust in how their system synthesizes data, draws insights, and responds to ad hoc queries.
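The podcast doesn’t detail how ZenCity builds this trust, but one common pattern for it is traceability: every synthesized claim carries citations back to the source records it was drawn from, so staff can drill down and verify it. Here’s a minimal hypothetical sketch of that idea (the names `SourceRecord`, `Insight`, and `verify_insight` are my own illustration, not ZenCity’s actual system):

```python
from dataclasses import dataclass, field

@dataclass
class SourceRecord:
    """A single piece of community input (survey response, 311 call, etc.)."""
    record_id: str
    channel: str   # e.g. "survey", "311", "town_hall"
    text: str

@dataclass
class Insight:
    """A synthesized claim that must cite the records it is based on."""
    claim: str
    citations: list = field(default_factory=list)  # list of record_ids

def verify_insight(insight: Insight, records_by_id: dict) -> bool:
    """An insight is trustworthy only if every citation resolves to real data."""
    if not insight.citations:
        return False  # an uncited claim is a potential hallucination
    return all(cid in records_by_id for cid in insight.citations)

records = {
    "r1": SourceRecord("r1", "311", "Pothole on Main St near the library."),
    "r2": SourceRecord("r2", "survey", "Road conditions are my top concern."),
}

grounded = Insight("Road maintenance is a recurring resident concern.", ["r1", "r2"])
ungrounded = Insight("Residents want a new stadium.")  # no citations at all

print(verify_insight(grounded, records))    # True
print(verify_insight(ungrounded, records))  # False
```

The design choice here is the important part: the system refuses to treat an uncited claim as an insight at all, which is exactly the kind of detail-level discipline the episode describes.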

Finally, they pay attention to the nuances of moments where city staff are making decisions. What someone might need to support part of the budgeting process is different than what someone might need to prepare for a city council meeting. How might you create something useful to meet each moment?

What struck me is that all of these issues exist whether or not AI is involved in the process. You want good quality data coming into your decision-making process. You want insights and synthesis to emerge from real data that you can trust (and whose credibility you can prove). And context matters: Who you are and what kind of decision you are making impacts what you need to know, at that moment.

Are folks spending ZenCity-team-like energy and attention on the details of their non-AI-driven practices? Maybe. But I have not yet seen any evidence of it.

What I see are a lot of human black boxes throughout the process. Community listening events that miss voices, or are more focused on gathering data most useful to the project team rather than really just listening to understand the community. Consultants and staff synthesizing community listening data using practices that are not transparent. Insights drawn from the data without transparent deep drill-down connections between the insights and data. And folks making decisions with a hodgepodge of decision-making resources.

This is where I see something important about the discipline in designing a tool to solve real problems. Across the board, no matter what tools we use or can afford to use, are we paying attention to the details and our desired outcomes with the same rigor as the ZenCity team? I don’t think so.

I also recognize the focus here is how local government staff and leaders make decisions, and what tools and resources they have at their disposal. But in this focus, it’s the city professionals who hold the power and all the resources. Citizens exist more in the role of consumers (of city services and programs). We of course want those services to be good, and fair and equitable and within the means of our community to support.

I’ve been really paying attention to how to rebalance this power dynamic (See What are the citizen collaboration opportunities?, for example). My three elements of community listening get at it:

  • How a group facilitates listening to the community members it serves.
    • Key question: How do we use the resources we have to invite the community to share their experiences and know-how?
  • How a group helps community members make sense of what it is learning.
    • Key question: How do we use the resources we have to help synthesize data, stories and experiences with community members?
  • How a group facilitates co-designing solutions with the community.
    • Key question: How do we use the resources we have to share power with community members in designing solutions and distributing their benefits?

We think we engage with the community to do all three steps. But it’s largely surface-level performance. For example, the predominant way I see folks synthesizing data with community members is asking a slice of the community to rate or rank or comment on the synthesis completed by city staff or consultants.

Where’s the power in exploring the data and discovering what it means in that scenario?

Not with the community members.

Which leaves me with a thought. What would be really cool is to have local community groups with robust resources for making sense of their slice of the community, supporting their own decisions to allocate resources and to design their own solutions.

Who is working on ZenCity for the people?


The photographs which accompany these posts are taken by me, and show different settings and views of Evanston (where I live). It is a visual reminder that this is the most important setting for belonging and contributing to community: our neighborhoods, our cities.
