Research has shown that high-capacity coalitions are more successful in effecting community change. While a number of coalition assessment tools have been developed, documentation is scarce regarding how they are implemented, how the results are used, and whether they are predictive of coalition success in collaborative community change efforts. Developed for a health promotion initiative of a major health foundation, this tool is designed to assess coalition progress in seven key areas across twelve community coalitions over the course of a three-year initiative.
On May 22, 2013, Veena Pankaj, Kat Athanasiades, Ann Emery, and Johanna Morariu gave a presentation titled "Assessing the Capacity of Community Coalitions to Advocate for Change." The panel was hosted by the Advocacy Planning and Evaluation Program (APEP) at the Aspen Institute in Washington, DC.
The session focused on a coalition assessment tool designed by Innovation Network to assess changes in coalition capacity over time. Presenters shared lessons learned from the first year of the initiative about developing and deploying the assessment tool, as well as what such tools can--and can't--tell you about a coalition's capacity to conduct community change work. They also showed how information collected through the assessment can be reported back to the coalitions using data visualization techniques.
Assessing the Capacity of Community Coalitions to Advocate for Change
1. Veena Pankaj
Kat Athanasiades
Ann Emery
Johanna Morariu
Assessing the Capacity of Community Coalitions to Advocate for Change
Advocacy Evaluation Breakfast Series
Hosted by the Aspen Institute
Washington, DC
May 22, 2013
2. Veena Pankaj
Director vpankaj@innonet.org
Kat Athanasiades
Associate kathanasiades@innonet.org
Ann Emery
Associate aemery@innonet.org
Johanna Morariu
Director jmorariu@innonet.org
www.innonet.org | @InnoNet_Eval | #CATeval
About Us
11. Seven categories of CAT
Development & Implementation
Basic Functioning and Structure
Ability to Cultivate and Develop Champions
Coalition Leadership
Ability to Develop Allies and Partnerships
Reputation and Visibility
Ability to Learn from the Community
Sustainability
12. Vetting process
Kansas Health Foundation staff
GEO Place-Based Community of Practice
Kansas Advisory Committee
Healthy Communities Initiative Technical Assistance team
Coalition members
16. Overall Scores
Reporting to Stakeholders
[Bar chart: overall assessment score by community]
Community 6: 29%
Community 4: 40%
Community 9: 55%
Community 8: 64%
Community 2: 66%
Community 11: 68%
Community 7: 69%
Community 12: 70%
Community 3: 77%
Community 5: 79%
Community 10: 80%
Community 1: 81%
19. Reporting to Stakeholders
[Bar chart: scores by category, as rated by community members]
Basic Functioning and Structure: 84%
Allies and Partnerships: 77%
Champions: 75%
Reputation and Visibility: 72%
Learn from Community: 70%
Coalition Leadership: 69%
Sustainability: 52%
Overall: 70%
20. Reporting to Stakeholders
[Bar chart: scores by category, as rated by TA providers]
Basic Functioning and Structure: 53%
Allies and Partnerships: 55%
Champions: 54%
Reputation and Visibility: 51%
Learn from Community: 38%
Coalition Leadership: 47%
Sustainability: 35%
Overall: 47%
25. This survey is too long for busy people.
—Coalition member
Strengths & Challenges
26. I hadn’t thought about us collecting feedback on whether the community is satisfied with our work because we are so busy doing the work… Very interesting question.
—Coalition member
27. The initial mission, vision, and values are defined but under development.
—Coalition member
28. Our monetary resources are limited to the planning grant monies.
—Coalition member
29. These questions… are good questions for us to have as a guide for our work.
—Coalition member
30. Realistically, many of these points cannot be accomplished within a year, but it is a goal to strive towards.
—Coalition member
31. Strengths
a. Integrate multiple perspectives
b. Show change over time
c. Contextual data about policy changes
d. Aggregate and community-level data
e. Tool & evaluation = intervention
36. Challenges
a. Ratings are opinions
b. Self-awareness low at first
c. Language
d. Focus changes over time
42. Assessing the Capacity of Community Coalitions to Advocate for Change
Veena Pankaj
Director vpankaj@innonet.org
Kat Athanasiades
Associate kathanasiades@innonet.org
Ann Emery
Associate aemery@innonet.org
Johanna Morariu
Director jmorariu@innonet.org
Editor's notes
VP
VP
VP
Let’s connect, both during the session and afterwards! If you’re tweeting about this session, please use the hashtag #CATeval, which is short for Coalition Assessment Tool evaluation. You can also tweet to Innovation Network at @Innonet_Eval. A few of us are on Twitter too - @KatAthanasiades, @AnnKEmery, and @J_Morariu.
VP
VP
VP
VP
VP
KA
KA
KA
KA
KA
AE
AE
AE
KA
AE
AE
AE
AE
AE
AE
There were two factors that were statistically significantly correlated with the overall coalition assessment scores: the length of time the community member had spent in the coalition, and the number of members in each coalition. In other words, communities where members had been in the coalition for longer tended to score higher, and communities with more people in their coalition tended to score higher.
So, there’s something special going on in coalitions with lots of members who have been part of the coalition for a long time.
(Possible audience participation question if attention is lacking: What do you think is going on there? Is this what you expected to see? Or do these findings surprise you?)
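The kind of correlation check described in this note can be sketched with a small Pearson computation. The member counts and scores below are hypothetical illustrations only, not the initiative's actual data:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: members per coalition and overall CAT score (%).
members = [8, 12, 15, 20, 25, 30]
scores = [29, 40, 55, 64, 70, 81]

# A value near +1 would indicate that larger coalitions tend to score higher.
print(round(pearson_r(members, scores), 2))
```

A real analysis would also test whether the coefficient is statistically significant (for example with `scipy.stats.pearsonr`, which returns a p-value alongside r), since twelve coalitions is a small sample.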
AE
Next, we’re going to talk about the strengths and challenges of developing and administering the coalition assessment tool. The “strengths” section could easily be named “what’s awesome about the CAT” because each of us here on the panel had aspects of the project that really interested us.
But, before we tell you about some of our favorite parts of this tool, we’re going to share some respondent perspectives. As Kat explained, the coalition assessment tool had 7 sections. When we administered the survey on Zoomerang, we added a big text box after each section so that respondents could share additional comments. Here are a few of those open-ended comments. They added useful contextual detail and helped us and the Kansas Health Foundation understand the scores a little more.
AE
Well, just like all surveys, you can imagine that some of the comments were like this… A coalition member wrote, “This survey is too long for busy people.”
AE
When asked about their coalition’s ability to learn from the community, a coalition member noted, “I hadn’t thought about us collecting feedback on whether the community is satisfied with our work because we are so busy doing the work, the learning, the data collection, and getting the word out, to think about whether THEY are satisfied. Very interesting question.”
AE
Several of the coalition members reminded us that this first year is meant to be a planning year, and therefore their community could not have fully accomplished many of the items outlined in the rubric. For example, a coalition member commented, “The initial mission, vision, and values are defined but under development.”
AE
Several of the coalition members reminded us that their coalition has limited money, and limited time, to fully accomplish the tasks outlined in the rubric. For example, a coalition member wrote, “Our monetary resources are currently limited to the planning grant monies.”
AE
A coalition member wrote, “These questions… are good questions for us to have as a guide for our work.”
AE
A coalition member wrote, “Realistically, many of these points cannot be accomplished within a year, but it is a goal to strive towards.”
AE
And here are the strengths from our perspectives as the evaluators on the project.