We recently wrote about how Collective Intelligence tools can be applied to complex policy areas. This blog is the first in a series of guest posts from policy teams sharing their experiences of working with our Collective Intelligence Lab. We hear from Letitia Holden of the Civil Service’s Policy Profession Unit about using collective intelligence to better understand policymakers across government.
In summer 2022 the Civil Service’s Policy Profession Unit (PPU) worked with the Collective Intelligence Lab (CILab) to host an online collective intelligence debate. We were keen to try this new method of engaging with people as a way to gain an understanding of life in the Policy Profession, our strengths, and where we need to do more. This blog shares personal reflections on working with the Collective Intelligence Lab, and what we learnt.
Background
For this debate, we used Pol.is, an open source software tool that uses machine learning to analyse voting patterns. Within Pol.is, all participants are anonymous. A participant logs in and sees a single statement, which they can vote to agree, disagree or pass. They cannot reply directly to the statement, but they can propose their own alternative statement, or a statement on the broader theme. Other participants will then vote on their contribution, without knowing the author.
Based on the contributions of the 198 participants, we were able to analyse common themes – and draw out areas of consensus and disagreement.
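The idea of grouping participants by their voting patterns, and then looking for statements that every group leans towards, can be sketched with a small clustering example. Note this is purely illustrative: the vote matrix, the k-means seeding and the consensus rule below are assumptions for the sake of the sketch, not the actual Pol.is implementation.

```python
import numpy as np

# Toy vote matrix: one row per participant, one column per statement.
# +1 = agree, -1 = disagree, 0 = pass (the three Pol.is vote options).
votes = np.array([
    [ 1,  1, -1, -1,  1],
    [ 1,  1, -1,  0,  1],
    [-1, -1,  1,  1,  1],
    [-1,  0,  1,  1,  0],
    [ 1,  1, -1, -1,  1],
])

def kmeans(X, k, init_idx, iters=20):
    """Basic k-means: group rows (participants) with similar votes."""
    centres = X[list(init_idx)].astype(float)
    for _ in range(iters):
        # Assign each participant to the nearest cluster centre.
        dists = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Move each centre to the mean vote of its members.
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centres[j] = members.mean(axis=0)
    return labels

# Centres are seeded with two dissimilar participants here purely to
# keep this toy example deterministic.
groups = kmeans(votes, k=2, init_idx=(0, 2))

# A statement is a candidate area of consensus if, on average,
# every opinion group leans towards agreeing with it.
consensus = [c for c in range(votes.shape[1])
             if all(votes[groups == g, c].mean() > 0 for g in set(groups))]

print(groups)     # participants 0, 1 and 4 cluster together; 2 and 3 together
print(consensus)  # only the final statement is agreed across both groups
```

In this toy data, two opinion groups emerge from the first four statements, while the fifth statement attracts broad agreement from both groups, which is the kind of cross-group consensus the debate analysis surfaced.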
So, how did we do it?
Planning the collective intelligence debate
We started by having a series of meetings with colleagues in the CILab, to understand what a collective intelligence debate is, agree our respective roles in the process, and where we would focus our debate.
My team decided that our aims would be to understand the views of policy professionals in the Civil Service about:
- their sense of identity
- being part of policy communities
- their learning needs
From here, we produced a set of ‘seed statements’ – which were used to set the broad themes of the debate, to initiate responses from participants, and to guide them in submitting their own statements.
This was an iterative process. The CILab helped us by providing feedback on what made for an effective seed statement, and gave constructive challenge about what we hoped to find out by putting specific statements forward. Their experience of running previous debates helped us avoid common pitfalls, and to think through what we were really trying to find out.
Holding the collective intelligence debate
For each of the eleven days the debate was live, a panel of three staff from my team and one member of CILab moderated and released new statements that had been proposed by participants. We were given a set of criteria, and independently reviewed all new statements from participants – noting whether we thought statements should be approved, declined, or amended. During a daily moderation meeting, we would then discuss our views as a panel to reach a final decision.
This was an interesting exercise. The panel brought different backgrounds and experiences to moderating: for example, it included members who were very new to the Civil Service, others who had joined from the private sector, and longstanding civil servants. Discussing the statements being put forward by participants in real time was valuable. It prompted team discussions about which statements resonated with us and which surprised us, the role of the Policy Profession in responding to the different needs of policy makers, and what our priorities should be.
New topics emerged during the debate from participants. An example was a discussion around management responsibility, where participants agreed that policy professionals should be able to progress in seniority without necessarily taking on management responsibilities. This was in the context of statements seeking recognition of policymaking expertise and experience, and was something we hadn’t discussed in previous engagement with policy professionals. This demonstrates the value of giving participants a platform like Pol.is to talk anonymously about topics which matter to them.
After the collective intelligence debate
The collective intelligence debate helped us understand the profession better. CILab's tools allowed us to identify voting patterns amongst discrete groups of participants, drawing out themes and highlighting areas of agreement and disagreement. Matching voting patterns to anonymised demographic data helped us understand how representative our sample was, and to find any demographic trends in voting patterns.
This is already informing our work as a team, and provides a really useful source of evidence for conversations with our Senior Responsible Officers (SROs) about where we should focus as a team – and what policy makers across the Civil Service want to see from our Profession. We are considering what further engagement we can do, in order to learn even more from our participants about the views they shared with us.
Overall, the debate has been really useful in providing a platform where we can anonymously hear from our ‘customers’, in their own words. This will be a powerful source of evidence to use as we take our work forward.
This post was amended on 20 December 2022 - the first image and caption were replaced from a mind-map to an illustration