(Should) All systems go?
Is the desire to maximise impact through embedded systems cramping our communications creativity? Rosalind Goodrich discusses whether communicators need to relax and be more experimental and entrepreneurial.
I recently attended an interesting session with David Nassar, communications vice-president at the Brookings Institution in Washington, United States. He was talking about digital communications in a post-truth age, but it was as much an insight into what Brookings is doing on the digital front.
Brookings has a strong international reputation for its digital communications, and what particularly struck me was his approach to research communications – entrepreneurial is the best way I can describe it.
David seemed keen not to waste time trying to get communications systems set up across the institute; instead he preferred to work with 'scholars' who were prepared to run with a creative idea – apparently achieving impressive, if more one-off, impact.
Making high finance fun
This was refreshing, and the content produced was innovative and fun. A researcher illustrating inequality and social mobility in the US using Lego, for example, communicated a lot of substantive information in a speedy and compelling way.
He also demonstrated a video game – the Fiscal Ship – about the difficulty of balancing US budgets and the policy regime that must be in place to do that, and got the player to choose the policies they believed would work. This made high finance fun – no mean achievement!
It got me thinking about whether this approach would work here. Are we too hung up at IIED, for example, on getting systems embedded across the organisation so that every piece of content has the best chance of adding to the impact we are seeking?
Should we instead be pursuing the communications champions among researchers; those who are willing to try (and budget for) something a bit experimental that is creative, likely to be a big win in terms of engagement, but not always guaranteed to work?
And if we did this, no longer spending time trying to convince the reluctant communicators, would our donors support the approach?
Speaking at the WonkComms event (WonkComms is a community for communicators who work for think tanks and research organisations), David Nassar said that he felt the role of the communications team is not to create impact itself, but rather to create opportunities for researchers – it is their responsibility to get the impact.
I'm not sure I really see the difference, especially in an organisation where collaboration is a core value: at IIED, the Communications Group works with researchers and partners as much as it can. It's a rare case when a piece of research is handed over and we work with it independently.
We take it as read (because there are systems in place to make sure – there you go, I knew embedding them would be useful) that the research piece in itself has the potential for impact. If we think it doesn't for communications reasons – because it's written inaccessibly, it doesn't have a clear audience in mind, and so on – then we'll say so.
I also think that where we have to be accountable to donors – and where they are publicly funded, to taxpayers as well – we have no choice but to balance communications work that is routinely good with more high-risk, stand-out content.
That does not mean, however, that this routine work should be dull. We should always be testing and developing what we do while, at the same time, evaluating the content to see whether it achieves its objectives. (Interestingly, David admitted that this was a weak spot at Brookings, but something that he had plans to address.)
Playing the long game
Evaluation requires resource, of course, and in the Communications Group at IIED we've planned in time to do regular evaluation. That means setting up bespoke monitoring and evaluation frameworks for every 'campaign' or key piece of content, alongside monitoring the performance of our routine work and developing our knowledge of priority audiences.
We've started to write up reflections on what we've achieved by producing impact and learning notes for internal use. These are extremely useful for providing evidence of what works (and what doesn't) when we plan similar activities.
They may focus on an event or a particular content type: for instance, we are about to evaluate our longreads – in-depth pieces of content that we only started producing about 18 months ago.
We've also put together six case studies around particular types of content and activity, based on our recent work, and we're sharing these with our communications peers (this blog is an example).
So back to whether we should be more entrepreneurial. Yes, I think we could do more to find research communication champions and persuade them to test creative content, thus paving the way for colleagues less ready to take the risk.
While we need to be accountable, we also need to seize opportunities, and use creativity to get across complex messages and maximise the potential for engaging the people that count – even if that's only on a small scale at first.
But I don't think we can do this in an uninformed way, which is why we need the monitoring and evaluation as well. That helps us to learn and adapt, and to be experimental from a sound knowledge base.
Do we want more Lego videos? Definitely! But alongside systems for finding out what works, and what doesn't.