Numbers for policy: Practical problems in quantification
The course introduces the practice and ethics of quantification, offered as an antidote to careless uses of numbers in both academia and society.

Main content
It shows the pitfalls to be avoided and offers, with examples, tools and recipes for the reasonable use of quantitative methods. The course is aimed at practitioners, post-docs and PhD students with an interest in the use of evidence for policy.
The key objective is to promote skills in processing and appraising quantitative information in the context of impact assessment studies. Uncertainty appraisal and uncertainty communication are key topics. While this is not a course on impact assessment, it gives impact assessors an extra gear: the set of skills needed to tell defensible from indefensible assessments and risk analyses, enabling participants to spot bogus, implausible or irrelevant quantifications. Examples will be discussed ranging from epidemiology to criminology, from pharmacology to psychology, and from big data to the unethical use of algorithms. Elements of the sociology of quantification will also be part of the teaching. Technical material will be presented on statistical procedures and malpractices (p-hacking, HARKing) and how to address them.
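As a minimal illustration of the p-hacking mechanism mentioned above, the simulation sketch below shows how running many tests on pure noise and reporting only the "best" one inflates the false-positive rate far above the nominal 5%. The sample sizes, test counts and threshold are illustrative assumptions, not course material:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Each "study" compares two groups drawn from the same null distribution,
# but the analyst runs 20 such tests and reports only the smallest p-value.
n_studies, n_tests, n_obs = 1_000, 20, 30
false_positives = 0
for _ in range(n_studies):
    best_p = min(
        stats.ttest_ind(rng.normal(size=n_obs), rng.normal(size=n_obs)).pvalue
        for _ in range(n_tests)
    )
    false_positives += best_p < 0.05

# Theory: with 20 independent null tests, P(min p < .05) = 1 - 0.95**20 ~ 0.64
print(f"'Significant' results from pure noise: {false_positives / n_studies:.0%}")
```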
The course includes:
- Pedigrees for quantification, such as NUSAP (see the sketch after this list)
- Sensitivity auditing and the ethics of quantification
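The NUSAP scheme qualifies a number with its Unit, Spread, Assessment and Pedigree. Purely as a sketch of how such a record might be represented in code, the example below stores pedigree scores on the customary 0-4 scale and summarises them; the quantity, criteria and scores are hypothetical, though the criteria names echo common pedigree matrices:

```python
from dataclasses import dataclass, field

@dataclass
class NusapRecord:
    """A quantity with NUSAP qualifiers: Numeral, Unit, Spread, Assessment, Pedigree."""
    numeral: float
    unit: str
    spread: str                                    # e.g. a +/- range or a factor
    assessment: str                                # qualitative reliability judgement
    pedigree: dict = field(default_factory=dict)   # criterion -> score, 0 (weak) to 4 (strong)

    def pedigree_strength(self) -> float:
        """Average pedigree score normalised to [0, 1]; higher means a stronger knowledge base."""
        scores = self.pedigree.values()
        return sum(scores) / (4 * len(scores))

# Hypothetical example: an emission estimate feeding a policy model
estimate = NusapRecord(
    numeral=20.0,
    unit="kt/yr",
    spread="+/- 30%",
    assessment="optimistic",
    pedigree={"proxy": 3, "empirical basis": 2, "method": 3, "validation": 1},
)
print(f"Pedigree strength: {estimate.pedigree_strength():.2f}")  # 0.56
```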
The course will also include elements of technical sensitivity analysis.
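To give a flavour of what technical sensitivity analysis involves, the sketch below estimates first-order Sobol indices, i.e. the share of output variance attributable to each input on its own, for the Ishigami function, a standard benchmark in this literature. The use of plain NumPy and a Saltelli-style pick-freeze estimator are illustrative choices, not the course's prescribed toolset:

```python
import numpy as np

rng = np.random.default_rng(42)

def ishigami(x, a=7.0, b=0.1):
    # Ishigami function: a standard test case for sensitivity analysis
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2 + b * x[:, 2] ** 4 * np.sin(x[:, 0])

n, d = 100_000, 3
# Two independent input samples, uniform on [-pi, pi] in each dimension
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
f_A, f_B = ishigami(A), ishigami(B)
var_y = np.var(np.concatenate([f_A, f_B]))

for i in range(d):
    AB = A.copy()
    AB[:, i] = B[:, i]  # "pick-freeze": replace only column i
    # First-order Sobol index, Saltelli (2010) estimator;
    # analytical values are about 0.31, 0.44 and 0.00
    S_i = np.mean(f_B * (ishigami(AB) - f_A)) / var_y
    print(f"S_{i + 1} ~= {S_i:.2f}")
```

Note that S_3 comes out near zero even though the third input does influence the output: it acts only through interaction with the first input, a distinction that total-order indices would capture.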
Programme
Day one, Monday 18, Morning
9.30 – 11.00 Lesson 1: Critical appraisal of quantitative assessments: theories and tools, Andrea Saltelli (part 1)
11.00 – 11.30 Coffee break
11.30 – 13.00 Lesson 2: Uncertainty and quality assurance in science for policy, Jeroen van der Sluijs (part 1)
13.00 – 14.00 Lunch
Day one, Monday 18, Afternoon
14.00 – 16.00 Practicum with Jeroen van der Sluijs
Day two, Tuesday 19, Morning
9.30 – 11.00 Lesson 3: Critical appraisal of quantitative assessments: theories and tools, Andrea Saltelli (part 2)
11.00 – 11.30 Coffee break
11.30 – 13.00 Lesson 4: Uncertainty and quality assurance in science for policy, Jeroen van der Sluijs (part 2)
13.00 – 14.00 Lunch
Day two, Tuesday 19, Afternoon
14.00 – 16.00 Practicum with Andrea Saltelli
Day three, Wednesday 20, Morning
9.30 – 11.30 Hands-on practicum on sensitivity analysis with Samuele Lo Piano
11.30 – 12.00 Coffee break
12.00 – 13.30 Practicum on cases from participants
13.30 – 14.30 Lunch
Day three, Wednesday 20, Afternoon
14.30 – 16.00 General discussion and lessons learned
Support/reading material
Critical appraisal of quantitative assessments: theories and tools, by Andrea Saltelli
This part of the course focuses on the quality of mathematical and statistical modelling, and on statistical indicators as quantified evidence. These tools are of paramount importance for policy but also among the most prone to abuse and misuse, as witnessed in the ongoing reproducibility crisis. It is important to realize that, even when using statistical methods, we make normative choices, and that all model-based evidence is conditional. Uncovering these conditionalities, both in terms of plain or technical assumptions and of frames or metaphors, is a guide to both producing and reading quantified knowledge. We critically re-examine the power and role of existing models and indicators to inform policy under conditions of uncertainty. We suggest tools for the appraisal of uncertainty, such as uncertainty and sensitivity analysis, sensitivity auditing and quantitative storytelling, and provide examples of application.
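As one concrete form of uncertainty appraisal, the sketch below propagates assumed input distributions through a toy cost model by Monte Carlo sampling and reports the output spread instead of a single point estimate. The model, distributions and parameter values are all hypothetical, chosen only to illustrate the technique:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: policy cost = unit_cost * population_affected * uptake_rate.
# Rather than a single "best estimate", sample each uncertain input
# from an (entirely hypothetical) distribution and propagate.
n = 100_000
unit_cost = rng.lognormal(mean=np.log(50), sigma=0.3, size=n)  # EUR per person
population = rng.normal(1e6, 1e5, size=n)                      # persons affected
uptake = rng.beta(8, 2, size=n)                                # fraction in [0, 1]

cost = unit_cost * population * uptake
p5, p50, p95 = np.percentile(cost, [5, 50, 95])
print(f"Median cost: {p50:,.0f} EUR; 90% interval: [{p5:,.0f}, {p95:,.0f}]")
```

Reporting the interval rather than the median alone is the point of the exercise: the spread is part of the evidence.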
Reading material:
- Saltelli, A., and Funtowicz, S. (2014) ‘What do I make of your latinorum? Sensitivity auditing of mathematical modelling’, International Journal of Foresight and Innovation Policy, 9(2/3/4), 213–234.
- Saltelli, A., and Giampietro, M. (2017) ‘What is wrong with evidence based policy, and how can it be improved?’, Futures, 91, 62–71.
Uncertainty and quality assurance in science for policy, by Jeroen van der Sluijs
Science–governance interfaces are characterized by scientific controversies that employ different forms of evidence and that stem from the uncertainty and plurality typical of the scientific enterprise. They are also closely interwoven with conflicting interests, values, stakes, and practices of evidence appraisal in institutions. These societal conflicts co-shape the ways in which evidence is produced, communicated and used, and how uncertainty is dealt with, while institutional settings and regulatory frameworks co-define whose evidence counts (e.g. in risk analysis) and under which conditions. The lecture will discuss a novel suite of analytical tools to map deep uncertainty, conflicts of interest, institutional practices, and their interactions.