I originally wrote about this question in “At what point does your spidey-sense tell you that you have the right amount of community input?” The question is coming back again in a slightly different context. But it is the same fundamental challenge.
How do you assess progress toward a vision of inclusive community engagement?
The new context involves a draft community engagement policy for the city of Evanston. It came about through my association with Environmental Justice Evanston (EJE), which advocates on behalf of citizens whose experience and voices are not historically (or currently) often considered on issues affecting their health and welfare. I was asked to weigh in on the draft policy as part of EJE’s advocacy efforts.
I am by no means an expert (or even novice) in city government policy. But as someone who’s done a lot of work in how organizations learn and adapt, I picked up a gap in the practices outlined in the policy.
The policy outlines how city staff 1) decide when to engage the community and 2) when they do, ensure they employ good community engagement practices. The gap: I can easily imagine myself following every step and practice outlined in the policy and still not moving the needle on more inclusive community engagement. No structure in the policy really heightens the focus on doing more, differently, to engage under-represented voices.
How might we resolve that issue?
You can certainly define some metric to monitor and measure progress against an inclusiveness goal, and add that to the policy. You could require an annual report on progress, reviewable by the city council or its Equity and Empowerment Commission. I could see that working. But I could also see it just leading to more performative activity.
That has led me to be interested in exploring mechanisms that would motivate learning and self-correcting behaviors across staff departments.
Why? Avoiding performative bullshit. But there are also two known challenges that jump out at me and which are familiar to my organizational learning and knowledge-sharing eye.
First, folks whose voices are not often part of engagement activities face a wide variety of barriers to participation. They may not know about an engagement activity. They may not be able to take the time to attend scheduled events due to work, family, and general life demands. They face language barriers. They don’t trust the city government. They may not believe they know how to provide input, or that their input has value.
I’m sure there are many more if we spent time talking to those folks. And these barriers are complex. They intersect and involve a lot of subtle challenges (“I got the communication and wanted to go and learn more, but my work shift changed and, besides, I really don’t know as much as other folks about this issue.”)
What this means is community conditions are complex, and constantly changing. To make progress against some goal to improve inclusivity will require continuous learning. What you learn in some new engagement may change the way you think about future engagements in subtle ways. What works today may not work next year, or next month. The best mindset is to assume only that you can get better, and you may never truly be “done.”
Second, I see how organizational silos can get in the way of how community engagement is experienced by folks, how the city makes sense of citizen input, and how it keeps an eye on whose voices are included.
Let’s say that the Parks and Recreation department does an exemplary job in gathering community input – especially from voices not typically engaged – across several decisions and projects. They listen deeply and learn a lot about the real, lived experiences of folks and their desires related to individual and family health, open spaces and environment, arts and music as part of quality-of-life, getting to-and-from park locations across the city, community and family event opportunities, etc.
What they learn is undoubtedly also valuable to many other departments: Health, environment, public safety, transportation, etc. Each of those departments is also gathering community input in their respective spaces, perhaps discovering insights about the community that are of value to Parks and Recreation (and to the other departments).
It’s a familiar story: Folks working in silos and potentially not sharing knowledge. In this case I can identify several opportunities that are missed:
- Combining insights about experiences in core aspects of community life (i.e., transportation, health, housing, etc.)
- Discovering and highlighting patterns among community segments.
- Discovering insights about the intersection of issues affecting well-being (employment, housing, health, etc.)
- Creating a city-level view of engagement of community segments (high opportunity to engage vs. low opportunity to engage).
- Identifying successful practices to engage voices that are challenging to include.
I’m sure there are even more opportunities that may be missed by not sharing knowledge. The point is: The draft policy itself does not include a mechanism to motivate knowledge sharing. Perhaps the city staff already have practices or a process in place to do this. But if so, it’s not referenced as an element of the draft policy.
What is referenced is the central, hierarchical role of the city manager’s office. The draft policy sets up a procedure where the city manager’s office receives, authorizes and guides requests from departments to do community engagement. I’m making an assumption here, but it looks like the default for accountability within the city government is to embed it within the power structure of the hierarchy, where the city manager operates as CEO.
So where does this all lead?
I’m really sensitive to designing mechanisms that fit the context of the environment in which the mechanism must operate. In this case: A hierarchical system where cross-functional knowledge sharing may not be the norm, and where political stakeholders (city council, commissions) are a factor in setting policy goals and holding staff accountable to meeting them.
But if I had a free hand to step back and set some design elements to include in the draft policy, it would be something like this:
- Require the city manager’s office, in consultation with departments, to set metrics that answer the question: How do we know we are making progress toward inclusive community engagement? The metrics must include some way that community members tell us we are making progress.
- Require an annual review of progress against the metrics with the Equity and Empowerment Commission and city council. The review should be organized around the way the community experiences the city and its services, rather than city departments. A starting point may be the broad areas outlined in Envision Evanston 2045, the draft long-term plan for the city.
- Neighborhoods and places
- Community systems (includes public safety)
- Getting around
- Environment
- Parks, recreation and open spaces
- Housing
- Health and well-being
- Economic development
- Community building, arts and culture, placemaking
- Preservation
- Require – as part of the annual review – one exemplary success case story.
Setting your own metrics is the first step in establishing a learning cycle. And requiring one or more metrics that are based on what the community actually says about inclusivity keeps the challenge in focus. The staff may lean on internal experience and expertise or external expertise to set the metrics. But no matter where you start, you are making assumptions. Assumptions get tested when you start doing things to impact the metrics. And that’s where the learning happens.
An annual review that is not organized by department is intended to inspire more cross-functional data analysis and collaboration. The list of topics for the annual report should be developed with that outcome in mind. “Getting around” is a great example: It is a label that is clear, meaningful to the community, and incorporates many types of experiences (public transportation, driving, walking, bikes, scooters, etc.) and settings (getting to shopping, entertainment, parks, healthcare, etc.). And there is no “getting around” department.
Finally – a single exemplary success case story is intended to inspire looking at what works and understanding why. To get to one story, you need to evaluate several. That evaluation will be an insightful exercise. Where did we learn the most, and why?
Maybe these elements have value. I am myself making a lot of assumptions, which need to be tested.
But where I am most confident, based on my own experience, is in ensuring that the big question guides the way the solutions evolve.
How do we know we are making progress toward inclusive community engagement?
The photographs which accompany these posts are taken by me, and show different settings and views of Evanston (where I live). It is a visual reminder that this is the most important setting for belonging and contributing to community: our neighborhoods, our cities.