
Provocation forthcoming in Being Profiled: Cogitas Ergo Sum. Edited by Bayamlioglu, Baraliuc, Janssens & Hildebrandt. Based on Questioning Mathematics: Algorithms and Open Texture.


Talk to be given at Data, Security, Values: Vocations and Visions of Data Analysis. Peace Research Institute Oslo (PRIO).

Abstract. With the development of a critical research agenda on contemporary data practices, we gradually build the tools needed to overcome the uncertainty, the lack of clarity, and the impact of misleading narratives concerning the epistemology of data science. Without such a reflection, we cannot understand the kind of knowledge data analysis produces. More importantly, we then also lack the ability to evaluate specific knowledge-claims as well as more general affirmations of the epistemic superiority (smarter, more objective, ...) of the knowledge, decisions, or insights that data analysis produces. This is why it is important to recognise that data is never just data (e.g. Gitelman 2013, Kitchin 2014), and that the development of algorithms (like any advanced scientific or engineering practice) cannot fully be understood in terms of a well-defined internal logic.

The starting point of this contribution is that we should ask similar questions about mathematics: We need to understand how mathematics contributes to the scientific respectability and authority of data science. To do so, we cannot limit our attention to mathematics as a body of mathematical truths or mathematical techniques. Instead, we should focus on mathematical thought and on beliefs about the nature of mathematical thought. I propose to develop this critical inquiry through a dedicated consideration of how mathematical values shape data science.


Talk given at Logic and Metaphysics in the Modern Era. Joint conference of the Université Libre de Bruxelles and the Vrije Universiteit Brussel.

Talk given at the VIIe Congrès de la Société de Philosophie des Sciences. Nantes, France.

Abstract. This paper is concerned with the “problem of visualisation,” and more precisely with the logical and epistemological dimensions of the use and design of information visualisations.

In this paper I reflect on the discrepancy between, on the one hand, philosophical perspectives on visualisation and, on the other hand, the views and assumptions on which visualisation scientists rely when they theorise about visualisation or develop new visualisation-tools. I propose a three-part characterisation of the relevant discrepancy as the starting-point for a more thorough exchange between the disciplinary perspectives under consideration. This exchange is meant to support the visualisation-sciences in their quest for better theoretical foundations (Purchase et al. 2008, Chen et al. 2017), and to entice philosophers of science to reconsider their preferred ways of understanding what visualisations are meant to accomplish and which practical obstacles a visualisation-scientist tries to overcome, especially in the context of data-intensive science. The proposed three-part characterisation is based on three contrasts, namely:

1. The philosophical and the technical problem: What is it vs. how do we make it?

2. The epistemological and the computational problem: How do we use a visualisation correctly vs. how do we use and construct a visualisation efficiently?

3. The semantical and the syntactical problem: How does a visual artefact represent (a system) vs. how does a visual artefact encode (a data-object)?

These three pairs form the core of my exposition, and I will use them to further characterise the problem of visualisation as two separate inference-problems: the object-level problem of correctly and efficiently using a visual artefact, and the meta-level problem of correctly and efficiently constructing a visual artefact.

Talk given at the Ninth Workshop on the Philosophy of Information.



The Ninth Workshop on the Philosophy of Information is held at the Royal Flemish Academy of Belgium for Science and the Arts. This workshop is a contact-forum of the Academy, organised with the additional support of the DSh VUB and the Centre for Logic and Philosophy of Science.

The Workshop theme is Information Visualisation.

Workshop Website

Talk given at 10 Years of ‘Profiling the European Citizen’: Slow Science Seminar 12-13 June 2018, Brussels City Campus.

Abstract. Contemporary data practices, whether we call them data science or AI, statistics or algorithms, are widely perceived to be game changers. They change what is at stake epistemologically as well as ethically. This especially applies to decision-making processes that infer new insights from data, use these insights to decide on the most beneficial action, and refer to the data and inference process to justify the chosen course of action.

Developing a critical epistemology that helps us address these challenges is a non-trivial task. There is a lack of clarity regarding the epistemological norms we should adhere to. Purely formal evaluations of decisions under uncertainty can, for instance, be hard to assess outside of the formalism they rely on. In addition, there is substantial uncertainty with regard to the applicable norms, because scientific norms may appear to be in flux (new paradigms, new epistemologies, ...). Finally, dealing with this uncertainty and lack of clarity is further complicated by promises of unprecedented progress and opportunities that invite us to imagine a data-revolution with many guaranteed benefits, but few risks.

As part of this broader epistemological exercise, I want to focus on a small, but largely disregarded and, in my view, misunderstood fragment of the problem at hand: the question of the role of mathematics, and of how some widely shared beliefs about the nature of mathematical knowledge contribute to the scientific respectability of contemporary data practices.

Workshop from 26 through 29 March 2018 at the Lorentz Centre organised with Lora Aroyo, Kaspar Beelen, Davide Ceolin, and Vladi Finotto.