
31 March 2016

Science Museum event, ‘Drones: What’s next?’

— Posted at 4:30pm

By Anna Perman, Sciencewise Flexible Tools Manager

From the moment attendees arrived at the Science Museum event, ‘Drones: What’s next?’, cartoonists were asking them about their opinions and creating drawings to represent what they heard. Rather than sitting passively listening to talks, they were asked by facilitators, ‘What do you think of the drone demonstration?’, ‘What are your concerns?’ Representatives from the Department for Transport were sitting at tables with them, hearing the conversations and asking and answering questions.

It was a shock for participants. They were expecting a more typical science event, where they listen to experts. They weren’t expecting the experts to listen to them.

The event came about through a partnership between Sciencewise, the Science Museum and the Department for Transport. Sciencewise knows that every member of the public can play a role in science policy in a range of ways, and our research has shown that the best dialogues use a mix of methods to get input from a range of people.

Because of this, Sciencewise has spent 2015-16 looking into ways of getting public input which can be used alongside, or as a precursor to, deliberative dialogue or other methods for seeking public views. ‘Drones: What’s next?’ was a pilot event to understand whether one-off public workshops could be a way of delivering useful information to policymakers, while also raising awareness of key issues and the visibility of the department’s involvement.

There were two key differences between this type of event and a full-scale deliberative dialogue: participants were reached by open invitation rather than recruited as a ‘representative sample’ and paid for their time; and this was a shorter, one-off interaction, rather than the two or more sessions usual in a deliberative dialogue. There were also similarities in intention and structure: technical specialists and policymakers from the Department for Transport attended to provide information and listen to the debate, and participants broke into small discussion groups which debated the issues around drones and fed back to their facilitator, who recorded their views.

The independent evaluators found that most of the participants thought this sort of event was a good way to get public input into government policy, and quite a few thought it would inform that policy. The evaluators also found that the main value of the event for policymakers was the chance to interact with ‘grassroots users’ – a middle ground between traditional stakeholders and specially recruited public participants with no specific interest in or previous knowledge of the topic.

So we’re excited to be able to add this type of public event to the suite of techniques available to policymakers for input from the public. All techniques improve with use, and we hope that future iterations of events like this will improve practice.

Sciencewise recognises that processes designed to inform and influence public policy and decision-making – including public dialogue – need to be rigorous and impartial, relevant, accessible, legal and ethical, and that all such processes need to be assessed against agreed standards. At the most basic level, rigour and impartiality require quality assurance of these processes to guarantee the quality of the outputs[1].


A new edition of the Sciencewise Quality in Public Dialogue Framework, published in March 2016, is designed to provide an improved approach to a quality assurance process for public dialogue.


The Framework has been developed on the basis of learning from Sciencewise project evaluations over recent years. This new edition of the Framework takes into account experience of using the framework since the launch of the initial working paper published in March 2015. It also builds on new input from a range of academics, government departments and practitioners.


The Framework provides a set of questions on the context, scope and design, delivery, impact and evaluation of public dialogue practice, designed to stimulate thinking and open up design options. It is not intended to be prescriptive, limiting or bureaucratic, but to provide ways of addressing the basic questions that are very often asked of public dialogue, including:


How many participants or locations are 'enough'?

Should the role of scientists and other specialists involved in dialogue events primarily be to provide information, or should they also be participants in the dialogue?

What makes a dialogue 'deliberative', and how much time needs to be given to providing information to participants compared with time for discussion?

To what extent should dialogue processes include non-deliberative techniques such as polling, and attempt quantitative analysis of what is inherently a qualitative process (e.g. measures of scale to demonstrate strength of feeling)?

What forms of analysis and reporting are appropriate, and what role do participants have in reporting dialogue results (e.g. reports based on agreements reached collectively among or with participants)?

What counts as a sufficiently robust process to enable decision makers to know how and when to use dialogue results with confidence in decision making alongside other forms of evidence?


We hope the Framework will be of use as an initial briefing on what public dialogue involves, as a checklist for those designing and delivering public dialogue – and for those who want to test the robustness of a dialogue project at all stages of planning, design, delivery and evaluation.

[1] Government Social Researchers (GSR) Code.

