Talks

Talk to be given at Data, Security, Values: Vocations and Visions of Data Analysis. Peace Research Institute Oslo (PRIO).

Abstract. With the development of a critical research agenda on contemporary data practices, we gradually build the tools needed to overcome the uncertainty, the lack of clarity, and the impact of misleading narratives concerning the epistemology of data science. Without such a reflection, we cannot understand the kind of knowledge data analysis produces. More importantly, we also lack the ability to evaluate specific knowledge-claims as well as more general affirmations of the epistemic superiority (smarter, more objective, ...) of the knowledge, decisions, or insights that data analysis produces. This is why it is important to recognise that data is never just data (e.g. Gitelman 2013, Kitchin 2014), and that the development of algorithms (like any advanced scientific or engineering practice) cannot fully be understood in terms of a well-defined internal logic.

The starting point of this contribution is that we should start asking similar questions about mathematics: we need to understand how mathematics contributes to the scientific respectability and authority of data science. To do so, we cannot limit our attention to mathematics as a body of mathematical truths or techniques. Instead, we should focus on mathematical thought and on beliefs about the nature of mathematical thought. I propose to develop this critical inquiry through a dedicated consideration of how mathematical values shape data science.

Download

Talk given at Logic and Metaphysics in the Modern Era. Joint conference of the Université Libre de Bruxelles and the Vrije Universiteit Brussel.

Talk given at the VIIe Congrès de la Société de Philosophie des Sciences. Nantes, France.

Abstract. This paper is concerned with the “problem of visualisation,” and more precisely with the logical and epistemological dimensions of the use and design of information visualisations.

In this paper I reflect on the discrepancy between, on the one hand, philosophical perspectives on visualisation and, on the other hand, the views and assumptions on which visualisation scientists rely when they theorise about visualisation or develop new visualisation-tools. I propose a three-part characterisation of the relevant discrepancy, which serves as the starting-point for a more thorough exchange between the disciplinary perspectives under consideration: an exchange that is meant to support the visualisation-sciences in their quest for better theoretical foundations (Purchase et al. 2008, Chen et al. 2017), and to entice philosophers of science to reconsider their preferred ways of understanding what visualisations are meant to accomplish and which practical obstacles a visualisation-scientist tries to overcome, especially in the context of data-intensive science. The proposed three-part characterisation is based on three contrasts, namely:

1. The philosophical and the technical problem: What is it vs how do we make it?

2. The epistemological and the computational problem: How do we use a visualisation correctly vs how do we use and construct a visualisation efficiently?

3. The semantical and the syntactical problem: How does a visual artefact represent (a system) vs how does a visual artefact encode (a data-object)?

These three pairs form the core of my exposition, and I will use them to further characterise the problem of visualisation as two separate inference-problems: the object-level problem of correctly and efficiently using a visual artefact, and the meta-level problem of correctly and efficiently constructing a visual artefact.

Talk given at the Ninth Workshop on the Philosophy of Information.

Abstract

Slides

Talk given at 10 Years of ‘Profiling the European Citizen’: Slow Science Seminar 12-13 June 2018, Brussels City Campus.

Abstract Contemporary data practices, whether we call them data science or AI, statistics or algorithms, are widely perceived to be game changers. They change what is at stake epistemologically as well as ethically. This especially applies to decision-making processes that infer new insights from data, use these insights to decide on the most beneficial action, and refer to the data and inference process to justify the chosen course of action.

Developing a critical epistemology that helps us address these challenges is a non-trivial task. There is a lack of clarity regarding the epistemological norms we should adhere to. Purely formal evaluations of decisions under uncertainty can, for instance, be hard to assess outside of the formalism they rely on. In addition, there is substantial uncertainty with regard to the applicable norms because scientific norms may appear to be in flux (new paradigms, new epistemologies, ...). Finally, dealing with this uncertainty and lack of clarity is further complicated by promises of unprecedented progress and opportunities that invite us to imagine a data-revolution with many guaranteed benefits, but few risks.

As part of this broader epistemological exercise, I want to focus on a small, but largely disregarded—in my view misunderstood—fragment of the problem at hand: The question of the role of mathematics, and the question of how some widely shared beliefs about the nature of mathematical knowledge contribute to the scientific respectability of contemporary data practices.

Talk given at EPSA, Exeter.

Extended Abstract

Talk given at the LogiCIC Workshop 2016, Amsterdam,

at Group Knowledge and Mathematical Collaboration 2017, Oxford, and

at Ampliative Reasoning in the Sciences 2017, Ghent.

Abstract The problem that motivates this paper is the following: Given a data-set with records of interactions from collaborative science online, which background-theory should be adopted to study these digital traces if one’s goal is to explain whether and how the collaboration was epistemically successful? I will approach this question on the basis of a specific case-study, namely the Polymath-projects initiated in 2009 by Cambridge mathematician and Fields Medalist Timothy Gowers (see e.g. Allo et al. 2013). These are collaborative projects dedicated to specific research-level mathematical questions (finding a proof for a certain result). The centre of activity of these collaborations is the interaction in discussion-threads on various weblogs, and the discussions in question are in principle open to anyone.

Extended abstract (LogiCIC-version)

Presentation (Oxford-version)

Presentation (Ghent-version)

Talk given at Situations, Information, and Semantic Content, Munich.

Abstract The background of this talk is the development of an informational conception of logic that is based on the methodology of the philosophy of information, and in particular on the thesis that information is always assessed at a given level of abstraction. Here, I wish to specifically explore the similarities and dissimilarities between an informational perspective on logic based on situation semantics, and my own approach.

Extended Abstract

Talk given at Culture and politics of data visualisation in Sheffield.

Abstract Understanding information visualisation as a reasoning and communication tool requires not only a systematic understanding of its intended functioning (i.e. correct reasoning and reliable communication), but also a better insight into its potential failures. The latter aspect can help us recognise how information-visualisations can be used to mislead (the critical perspective also present in argumentation theory), but it can also lead to a deeper understanding of the trade-offs that have to be negotiated when designing a visualisation (the design perspective).

The upshot of this talk is to take a closer look at the latter aspect by developing an account of fallacies in information-visualisation, focusing on the following common techniques in visualisation:

  • Informational shortcuts (information-hiding, fudging distinction, exploiting imprecision) as a means to jump to conclusions,
  • Data-transformations like re-ordering, clustering or compressing information as a means to discover or reveal patterns in the data,
  • And ask how we can distinguish epistemically beneficial from epistemically detrimental uses of these techniques.

To conclude, the issue of visual fallacies will be reconsidered against the background of contemporary visual analytics and its emphasis on obtaining actionable insight.
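
To make the second of these techniques concrete, here is a minimal sketch (in Python; the synthetic data, file-free set-up and parameters are invented for illustration and do not come from the talk) of how re-ordering the rows and columns of a similarity matrix with an off-the-shelf hierarchical clustering can make a block pattern visible that the original ordering hides. Whether such a transformation discovers a pattern or merely manufactures one is precisely the kind of question the contrast between beneficial and detrimental uses is meant to address.

# Minimal sketch: re-ordering a similarity matrix via hierarchical clustering
# so that latent block structure becomes visible. All data is synthetic.
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list

rng = np.random.default_rng(0)

# Synthetic data: two hidden groups of ten items each, shuffled so that the
# group structure is invisible in the original ordering.
labels = rng.permutation([0] * 10 + [1] * 10)
similarity = (labels[:, None] == labels[None, :]).astype(float)
similarity += rng.normal(0.0, 0.1, size=similarity.shape)

# Seriation: cluster the rows and use the dendrogram's leaf order as a
# permutation of both rows and columns.
order = leaves_list(linkage(similarity, method="average"))
reordered = similarity[np.ix_(order, order)]

# In the reordered matrix the top-left block now contains a single group,
# so its average similarity is visibly higher than in the original layout.
print("top-left block, original ordering:", round(similarity[:10, :10].mean(), 2))
print("top-left block, reordered matrix: ", round(reordered[:10, :10].mean(), 2))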

Talk given at International Association for Computing and Philosophy – Annual Meeting 2016 (University of Ferrara).

Abstract Within the philosophy of mathematical practice, the collaborative Polymath-projects initiated by Timothy Gowers have since their start in 2009 attracted a lot of attention (Van Bendegem 2011, Pease and Martin 2012, Stefaneas and Vandoulakis 2012). As this field is concerned with how mathematics is done, evaluated, and applied, this interest should hardly surprise us. The research-activity within Polymath was carried out publicly on several blogs, and thus led to the creation of a large repository of interactive mathematical practice in action: a treasure of information ready to be explored. In addition, the main players in this project (the Fields medallists Timothy Gowers and Terence Tao) continuously reflected on the enterprise, and provided additional insight into the nature of mathematical research and large-scale collaboration (Gowers 2010: §2). This led, among other things, to the claim that the online collective problem solving that underpins the Polymath-projects consists of “mathematical research in a new way” (Gowers and Nielsen 2009: 879).

In previous work (Allo et al. 2013a;b) we relied on formal models of interaction, mainly from the dynamic epistemic logic tradition, to develop a theoretical basis for the analysis of such collaborative practices, with the explicit intent to use logic to understand the practice instead of using logic as part of the foundations of mathematics. In particular, we argued that focusing on available announcement-types (public vs private announcements) leads to a finer typology of scientific communities than the models used in for instance Zollman (2007; 2013), and is better suited to model collaborative enterprises.

The present paper further develops this work by supplementing it, on the one hand, with a computational/empirical study of all the Polymath-projects (the latest project was initiated in February 2016) that allows us to apply methods from social network analysis to the totality of interactions that took place within 11 Polymath projects and 4 Mini-Polymath projects, and, on the other hand, with insights drawn from Dunin-Keplicz and Verbrugge’s (2010) work on logics for teamwork. The upshot is to integrate data-driven and logic-driven (a priori) methods for the study of scientific collaboration in general, and ICT-mediated massive collaboration in mathematics in particular.
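
As an illustration of the data-driven side of this programme, the sketch below (in Python) shows how a log of blog comments could be turned into a directed interaction network and summarised with standard social-network measures. The file name and column names ("author", "reply_to") are assumptions made for the example; the actual Polymath data set and the analyses reported in the paper are not reproduced here.

# Minimal sketch: from a (hypothetical) comment log to a directed interaction
# network, using networkx for elementary social network analysis.
import csv
import networkx as nx

def build_interaction_graph(path):
    """Read a CSV of comments and add one weighted edge per reply."""
    g = nx.DiGraph()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            author, target = row["author"], row["reply_to"]
            if not target:  # top-level comments create no interaction edge
                continue
            weight = g.get_edge_data(author, target, default={}).get("weight", 0)
            g.add_edge(author, target, weight=weight + 1)
    return g

if __name__ == "__main__":
    g = build_interaction_graph("polymath_comments.csv")  # hypothetical file
    print(g.number_of_nodes(), "participants,", g.number_of_edges(), "interaction pairs")
    # Degree centrality as a crude indicator of who holds the discussion together.
    for name, score in sorted(nx.degree_centrality(g).items(), key=lambda kv: -kv[1])[:5]:
        print(f"{name}: {score:.3f}")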

Slides

Talk given at Models and Simulations 7 (University of Barcelona).

Abstract

Talk given at the Artificial Intelligence and Contemporary Society: The Role of Information conference (XXI Jornadas de Filosofía y Metodología actual de la Ciencia — Jornadas Sobre Inteligencia Artificial Y Sociedad Contemporánea: El Cometido De La Información) at the University of A Coruña.

Abstract The upshot of this paper is to develop and refine the suggestion that logical systems are conceptual artefacts, and hence the result of a design-process. I develop this idea within the confines of the philosophy of information, and against the combined background of a Carnapian philosophy (as perceived through its current revival) and Herbert Simon’s ideas on the sciences of the artificial.

The proposed constructionist account is developed as follows. I start by highlighting the basic ideas behind a constructionist epistemology and a constructionist philosophical methodology, and then use these insights to identify how a constructionist attitude is already at play in how logicians develop novel formal systems. This modest constructionism is then turned into a more radical form through the proposal, backed by the method of abstraction, that the common counterexample-dynamics in philosophical theorising should be replaced by a refinement-dynamics together with a clear statement of requirements, and through the application of this idea to the development of logical systems. I conclude with some general remarks on logic as a semantic artefact and as an interface.

Talk given at a symposium on the meaning of logical connectives (organised by Luis Estrada-González) at CLMPS in Helsinki.

Abstract The goal of this contribution is to take a few steps back, and put in perspective our reasons for trying to avoid meaning-variance as a means to, first, save the possibility of genuine rivalry between different logics, and, second, safeguard the very idea of logical revision. One reason for this re-examination is that if we understand better why meaning-invariance across logics matters, we will also have a better idea of which kind of answer is satisfactory. Indeed, the hope could be that we can also delineate which types of counter-objections can summarily be dismissed once a good answer to the Quinean challenge has been given.

As part of the proposed inquiry, three complementary perspectives will be adopted. First, we will reconsider the stances of Carnap and Kreisel with respect to formal and informal rigour; second, we will take some lessons from the distinction between data and phenomena (as used in the context of conceptual modelling by Löwe and Müller); finally, we shall revisit the problem of meaning (in-)variance in informational conceptions of logic, and particularly in view of the inverse relationship between logical discrimination and deductive strength.

Talk to be given at the Workshop on the Philosophy of Non-Classical Logics (UNILOG 2015), and at the Logic Colloquium 2015.

Abstract The problem of accounting for acceptable uses of classically valid but paraconsistently invalid arguments is a recurrent theme in the history of paraconsistent logics. In particular, the invalidity of the disjunctive syllogism (DS) and modus ponens (MP) in, for instance, the logic of paradox LP, has attracted much attention.

In a number of recent publications, Jc Beall has explicitly defended the rejection of these inference-forms, and has suggested that their acceptable uses cannot be warranted on purely logical grounds [1], [2]. Some uses of DS and MP can lead us from truth to falsehood in the presence of contradictions, and are therefore not generally or infallibly applicable [3].
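
A standard three-valued illustration (a textbook example, not one drawn from Beall’s papers) makes the failure concrete: in LP, with values t, b, f and with t and b designated, take a valuation where v(A) = b and v(B) = f. Then v(¬A) = b and v(A ∨ B) = b, so both premises of DS are designated while the conclusion B is not; and since v(A ⊃ B) = v(¬A ∨ B) = b, the same valuation also invalidates MP. A contradictory premise thus suffices to carry us from (at least) truth to plain falsehood.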

Not much can be objected to this view: if one accepts LP, then MP and DS can only be conditionally reintroduced by either

  1. opting for Beall’s multiple-conclusion presentation of LP (LP+), which only gives us A, A ⊃ B ⊢LP+ B, A ∧ ¬A and ¬A, A ∨ B ⊢LP+ B, A ∧ ¬A, or
  2. treating MP and DS as default rules.

The latter strategy was initiated by inconsistency-adaptive logics [4], [5], and implemented for the logic LP under the name Minimally inconsistent LP, or MiLP [6].

The gap between these two options is not as wide as it may seem: the restricted versions of MP and DS that are valid in LP are the motor of the default classicality of MiLP. The only difference is that the restricted versions only give us logical options (Beall speaks of ’strict choice validities’), whereas default classicality presupposes a preference among these options (unless shown otherwise, we must assume that contradictions are false).

A cursory look at the debate between Beall and Priest [3], [7] may suggest that not much can be added to their disagreement. However, if we focus on the contrast between the mere choices of LP+ and the ordering of these choices in MiLP, we can tap into the formal and conceptual resources of modal epistemic and doxastic logic to provide a deeper analysis [8]. We can thus develop the following analogy:

LP+  is motivated by the view that logical consequence is a strict conditional modality, and is therefore knowledge-like. Using a slightly more general terminology: all logical information is hard information.

MiLP is motivated by the acceptance of forms of logical consequence that are variable conditional modalities, and are therefore belief-like. Using the same more general terminology: some logical information is soft information.

This presentation still gives the upper-hand to Beall’s stance (shouldn’t logical consequences be necessary consequences?), but only barely so. The upshot of this talk is to motivate the views that (i) the soft information that underlies the functioning of MiLP can be seen as a global as well as formal property of a logical space, and is therefore more logical than we may initially expect, and that (ii) adding a preference among logical options can be seen as a legitimate and perhaps even desirable step in a process of logical revision.

References

[1] J. Beall, "Free of Detachment: Logic, Rationality, and Gluts", Noûs, 2013.

[2] J. Beall, "Strict-Choice Validities: A Note on a Familiar Pluralism", Erkenntnis, 79(2): 301-307, 2014.

[3] J. Beall, "Why Priest’s reassurance is not reassuring", Analysis, 72(3): 517-525, 2012.

[4] D. Batens, "Dynamic Dialectical Logics", in Paraconsistent Logic: Essays on the Inconsistent, G. Priest, R. Routley and J. Norman (eds.), München/Hamden/Wien: Philosophia Verlag, 1989, 187-217.

[5] D. Batens, "Inconsistency-adaptive logics", in Logic at Work: Essays Dedicated to the Memory of Helena Rasiowa, E. Orlowska (ed.), Heidelberg/New York: Springer, 1999, 445-472.

[6] G. Priest, "Minimally inconsistent LP", Studia Logica, 50(2): 321-331, 1991.

[7] G. Priest, "The sun may not, indeed, rise tomorrow: a reply to Beall", Analysis, 72(4): 739-741, 2012.

[8] J. van Benthem, Logical Dynamics of Information and Interaction, Cambridge University Press, 2011.

Talk to be given at ICPI 2015 (IS4IS-Summit)

Abstract My goal in this talk is to further develop the informational conception of logic proposed in [1] by motivating and exploring a methodology for logical practices (using, developing and thinking about logic) that is inspired by the methodology of the philosophy of information, with particular emphasis on its constructionist metaphilosophy [2]. Against this background, I’m interested in the following phenomenon: If a formalisation-process leads to the refinement of one or more concepts we are interested in (either because we are explicitly formalising them, or because we use them to talk about the concepts we are actually formalising), this often leads to a “splitting of notions”. In that case, the careless use of the original notions in combination with their refinements often leads to fallacies of equivocation. As suggested in [3], the development of a design-perspective on logic is meant to show that this phenomenon is a reason to abandon the original concepts, and not a reason to cast doubt on the proposed refinement. As a corollary, constructionism in logic contributes to the motivation of a pluralist perspective on logical practices.

References

[1] P. Allo and E. Mares, "Informational Semantics as a Third Alternative?", Erkenntnis, 77(2): 167-185, 2012.

[2] L. Floridi, "A defence of constructionism: philosophy as conceptual engineering", Metaphilosophy, 42(3): 282-304, 2011.

[3] P. Allo, "Synonymy and intra-theoretical pluralism", Australasian Journal of Philosophy, 93(1): 77-91, 2015.

Talk given at the Fourth World Congress on Universal Logic (Rio, Brazil).

Published in the Australasian Journal of Philosophy, 93(1): 77-91.

Abstract The upshot of this paper is to use the formal notion of synonymy to scrutinize the case for intra-theoretical pluralism defended in Hjortland's "Logical pluralism, meaning-variance, and verbal disputes".

Download (preprint)

http://dx.doi.org/10.1080/00048402.2014.930498

Talk and poster given at LOFT 2014 (Bergen, Norway).

(joint work with Jean Paul Van Bendegem and Bart Van Kerkhove)

Talk given at the Workshop on Tableau-systems (Brussels), and at the ILIAS-seminar (Luxembourg).

Abstract The traditional idea that logic is specially relevant for reasoning stems from the fact that logic is often conceived as an absolute normative constraint on what we should and should not believe (a synchronic constraint) and as an infallible guide for what we should (or may) come to believe in view of what we already believe (a diachronic constraint). This view is threatened by the existence of rational failures of deductive cogency: belief-states that do not conform to what logic would require, but that are nevertheless more rational than any revised belief-state that would be deductively cogent.

The suggestion that belief-states that are not deductively cogent can still be rational depends itself on the view that logical norms like consistency or deductive closure can sometimes be overruled by extra-logical norms. The latter clearly poses a problem for views that grant logic a special role in reasoning. (The underlying idea is that the special role of logic is inconsistent with the presumed defeasible character of logical norms.)

There are many ways of coping with these insights.

1. One can just accept the conclusion that the received view about logic is wrong,

2. one can deny the existence of rational failures of deductive cogency, or

3. one can revise what we mean by logic and/or how we understand its role in reasoning.

I'm only interested in the last type of response.

In particular, I'd like to focus on strategies that (a) rely on the use of non-classical logics, (b) claim that logic can be used to formalise defeasible reasoning forms, and (c) propose a logical model of belief and belief-revision. While these three strategies each reduce the gap between logic and reasoning, and even share some of their formal resources, a unified philosophical account of such proposals is still missing.

My aim in this talk is relatively modest. I only want to develop a minimal model that integrates the crucial features of sub-classical logics, with models of belief that rely on defeasible reasoning and allow for belief-revision. The upshot is to distill an account of the special role of logic in reasoning that is consistent with our best formal (logical) models of belief.

The proposed account combines a modal reconstruction of adaptive consequence relations (Allo, 2013b) with a suggestion to adopt the finer distinctions of different types of group-knowledge (and belief) to model single-agent knowledge (and belief) (Allo, 2013a) and a formal model of belief-merge through communication (Baltag & Smets, 2013).

Published in Aberdein, A. & I. Dove (eds.), The Argument of Mathematics. Dordrecht, Springer: 339–60.

Presented at CLPS13 Conference on Logic and Philosophy of Science (Ghent).

Abstract. Because the conclusion of a correct proof follows by necessity from its premises, and is thus independent of the mathematician’s beliefs about that conclusion, understanding how different pieces of mathematical knowledge can be distributed within a larger community is rarely considered an issue in the epistemology of mathematical proofs. In the present paper, we set out to question the received view expressed by the previous sentence. To that end, we study a prime example of collaborative mathematics, namely the Polymath Project, and propose a simple formal model based on epistemic logics to bring out some of the core features of this case-study.

Download (preprint)

http://dx.doi.org/10.1007/978-94-007-6534-4_17

Talk given at the Conference on the Foundations of Logical Consequence (Arché Centre for Logic, Language, Metaphysics and Epistemology, University of Saint Andrews).

Published in Erkenntnis. 77(2): 167-85.

(joint work with Edwin Mares)

Abstract. The prima facie case for considering “informational semantics” as an alternative explication of the notion of logical consequence alongside the model-theoretical and the proof-theoretical one is easily summarised. Where the model-theory is standardly associated with a defence of classical logic, and proof-theory with a defence of intuitionist logic, informational semantics seems to be wedded to relevant and other substructural logics. As such, if the CL, IL, RL trio is a representative chunk of a broader range of logical options, informational semantics surely has its place. Yet, it is even easier to dismiss the suggestion that informational semantics provides an apparently missing third conception of logical consequence. After all, isn't it just a variant of the usual interpretation of the Routley-Meyer relational semantics rather than a genuine alternative to a model-theoretic account? Or worse, isn't it a mere metaphor? In the present paper, we want to consider a more subtle answer to the question of whether informational semantics is a real alternative to the two more traditional contenders.

http://dx.doi.org/10.1007/s10670-011-9356-1

Invited talk given at the Special Session of CiE on Open Problems in the Philosophy of Information. Cambridge, UK.

Published in S.B. Cooper, A. Dawar, and B. Löwe (Eds.), CiE 2012, Lecture Notes in Computer Science, Vol. 7318:17–28.

http://dx.doi.org/10.1007/978-3-642-30870-3_3

Abstract. Informational conceptions of logic are barely novel. We find them in the work of John Corcoran, in several papers on substructural and constructive logics by Heinrich Wansing, and in the interpretation of the Routley-Meyer semantics for relevant logics in terms of Barwise and Perry's theory of situations.

Allo & Mares [2] present an informational account of logical consequence that is based on the content-nonexpansion platitude, but that also relies on a double inversion of the standard direction of explanation (information doesn't depend on a prior notion of meaning, but is used to naturalize meaning, and informational content is not defined relative to a pre-existing logical space, but that space is constructed relative to the level of abstraction at which information is assessed).

In this paper I focus directly on one of the main ideas introduced in that paper, namely the contrast between logical discrimination and deductive strength, and use this contrast to (1) illustrate a number of open problems for an informational conception of logical consequence, (2) review its connection with the dynamic turn in logic, and (3) situate it relative to the research agenda of the philosophy of information.

Talk given at the Fourth Workshop in the Philosophy of Information. University of Hertfordshire.

Abstract. Standard refinements of epistemic and doxastic logics that avoid the problems of logical and deductive omniscience cannot easily be generalised to default reasoning. This is even more so when defeasible reasoning is understood as tentative reasoning, an understanding that is inspired by the dynamic proofs of adaptive logic. In the present paper we extend the preference models for adaptive consequence with a set of open worlds to account for this type of inferential dynamics. In doing so, we argue that unlike for mere deductive reasoning, tentative inference cannot be modelled without such open worlds. We use this fact to highlight some features of the informational conception of logic.

Talk given at the Ninth International Tbilisi Symposium on Language, Logic and Computation.

Abstract. Adaptive logics are logics for defeasible inference that are characterised by, on the one hand, a formula preferential semantics, and, on the other hand, a dynamic proof-theory (Batens 2007). Because adaptive logics rely on a truly dynamic perspective on logical inference, one would expect that a comparison and integration of adaptive logics with other dynamic logics should be a fruitful enterprise. It does, for instance, make sense to define a class of Kripke-style models that allow us to reformulate the adaptive consequence relation with the standard tools of modal logic (Allo 2011a), or to develop a dynamic doxastic logic where the relation between an agent’s knowledge (or firm beliefs) and defeasible beliefs is governed by an adaptive logic (Allo 2011b). In either case, we get a better idea of how adaptive logics are related to modal logics, but we still miss out on one crucial aspect of adaptive logics: their dynamic proof-theory. The main aim of this paper is to fill this gap.

download

Presented at (Anti-)Realisms, Logic and Metaphysics (Nancy, France)

Published in The Realism-Antirealism Debate in the Age of Alternative Logics (Logic, Epistemology and the Unity of Science) edited by Shahid Rahman, Mathieu Marion and Giuseppe Primiero, 2011: 1-23.

Abstract. This paper’s aim is to put the notion of ambiguous connectives, as explored in Paoli (2003, 2005), in an informational perspective. That is, starting from the notions of informational content and logical pluralism, we ask what it means for a disjunction, i.e. a message of the form “φ or ψ”, to be informative. The bottom line of this paper is that being a pluralist about informational content can even be defended against those who hold a realist conception of semantic information.

doi: 10.1007/978-94-007-1923-1_1

Talk given at the Dynamics in Logic Workshop (Brussels)

Abstract. Adaptive logics provide a general framework for all kinds of defeasible reasoning in terms of, on the one hand, a preferential semantics, and, on the other hand, a dynamic proof-theory. In a previous paper I described a class of Kripke-models that allowed for the reformulation of the consequence relation of adaptive logic in a modal logic. I also claimed that the modal reconstruction would facilitate the comparison and the interaction with other formalisms like preference logics, conditional logics, and the conditional doxastic models from Baltag & Smets [2008]. In the present paper I substantiate this claim. This is done by (a) describing the class of adaptive preference models, which are based on an abnormality-ordering of the states in a model and thus allow for the reconstruction of the consequence relation of adaptive logics; (b) introducing a modification of that approach that is even closer to the plausibility orderings used to formalise belief and conditional belief; and (c) comparing a number of distinctive features of the different approaches, and showing how we can combine the techniques of both approaches. In particular, I shall comment on how abnormality-orderings can be used in doxastic logics, and how awareness can be used to fine-tune the modal reconstruction of the adaptive consequence-relation.

handouts

Talk given at the EEN-meeting, Lund.

Abstract. Harman’s view that logic isn’t specially relevant for reasoning (Harman [1986]) is best viewed as a multi-faceted objection to the received view that the laws of logic are (or provide) general as well as infallible rules for reasoning, rather than as a focused attack on this view.

By focusing on so-called rational failures of deductive cogency, and individuating four different attitudes towards them, I show how we can resist Harman's conclusion. On one account, a failure of deductive cogency is rational whenever Γ entails φ, but believing φ conditional on Γ is itself irrational. Namely, φ cannot be accepted while (a) any revision of Γ that does not entail φ is arbitrary and therefore irrational as well (cf. the paradox of the preface), (b) there is no immediate way to revise Γ, or (c) revising Γ is too costly. The four different attitudes are simple revisionism, sophisticated revisionism, basic scepticism, and critical scepticism.

The following theses are defended:

1. The simple revisionist and the basic sceptic make the same mistake. They assume that because the rules of (classical) logic have no exceptions, norms based on this logic should also be exceptionless.

2. Taking the role of logic in reasoning seriously commits us either to sophisticated revisionism or to critical scepticism.

3. The sophisticated revisionist and the critical sceptic do not need to disagree about the appropriate formalism to model norms for reasoning. They only disagree on how the formalism should be understood.

Talk given at the Séminaire interuniversitaire de Logique et Ontologie, Namur.

Abstract. The prima facie case for considering "informational semantics" as an alternative explication of the notion of logical consequence alongside the model-theoretical and the proof-theoretical one is easily summarised. Where the model-theory is standardly associated with a defence of classical logic (CL), and proof-theory with a defence of intuitionist logic (IL), informational semantics seems to be wedded to relevant and other substructural logics (RL). As such, if the CL, IL, RL trio is a representative chunk of a broader range of logical options, informational semantics surely has its place. Yet, it is even easier to dismiss the suggestion that informational semantics provides an apparently missing third conception of logical consequence. After all, isn't it just a variant of the usual interpretation of the Routley-Meyer relational semantics rather than a genuine alternative to a model-theoretic account? Or worse, isn't it a mere metaphor? In the present paper, we want to consider a more subtle answer to the question of whether informational semantics is a real alternative for the two more traditional contenders. Our discussion undoubtedly leaves many questions unanswered. We mainly try to give the reader an idea of why informational semantics is a genuine and attractive alternative. To that end, we sketch two complementary pictures of the informational approach to logical consequence: a traditional model-theoretic one, and a more abstract one based on the inverse relation between logical discrimination and deductive strength.

Slides

Talk given at the Logic, Reasoning and Rationality Congress (Ghent).

Abstract. Adaptive logics have evolved from systems for handling inconsistent premises to a unifying framework for all kinds of defeasible reasoning—with the standard format (Batens [2007]) as one of its major strengths. Modal logics have gone through a similar evolution. They were originally conceived as an analysis of alethic modalities, but have now become the privileged language to reason about all kinds of relational structures. One field where modal logics have been used as a unifying framework is in the analysis of what Makinson [1993] describes as the different “faces of minimality” in defeasible inference, conditional logic, and belief revision. Modal translations of so-called minimality semantics are found in Boutilier [1990], and more recently in van Benthem et al. [2006]. Given the hypothesis that the standard format of adaptive logic is sufficiently general to incorporate most (if not all) forms of defeasible inference (Batens [forthcoming: Chapt. 1]), it is natural to ask whether the adaptive consequence relation can also be formulated in a modal language. The main reason why a similar reconstruction is possible is that adaptive logics are obtained by (i) ordering models in a certain way, and (ii) using that ordering to select a subset of all models of the premises to obtain a stronger consequence relation.
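
Rendered schematically (a generic formulation of the selection idea, not a statement of the standard format itself): if Ab(M) is the set of abnormalities verified by a model M, then on a minimal-abnormality reading Γ ⊨ φ holds exactly when φ is true in every model M of Γ for which no other model M′ of Γ satisfies Ab(M′) ⊊ Ab(M); the selected models are those whose abnormal part cannot be shrunk without ceasing to be a model of the premises.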

Talk given at the Formal Epistemology Workshop 2010 (Konstanz, Germany).

Abstract. In this paper I give a more refined account of deductive closure and positive introspection by using the expressive resources of logics for different types of group knowledge.

download

Presented at CAP in Europe 2009 (Barcelona, Spain)

Abstract. The main aim of this paper is to lay the foundation for a broader meta-theoretical reflection on the practice of the formal modeling of cognitive states and actions. Two examples, one from basic epistemic logic, the other from dynamic epistemic logic, are used to illustrate some well-known challenges. These are further evaluated by means of two oppositions: the contrast between abstraction and idealization, and the difference between a 'properties of the agent' reading and a 'properties of the model' reading. To conclude, some methodological insights inherited from the philosophy of information are proposed as a fruitful way of understanding the formal modeling of cognitive states and actions.

Talk given at the VAF Conference 2009 (Tilburg).

Abstract. When reformulated as a modal logic for conditional belief, the main properties of adaptive logics can be captured as properties of the resulting modal operators. The purpose of the present paper is to give a broadly epistemic interpretation to these modalities, and use these to bring out the distinctive epistemic character of adaptive logics. Concretely, I want to do three things: (a) give a brief description of a modal logic for conditional belief based on the semantics of adaptive consequence; (b) investigate the role of logical and epistemic or doxastic possibilities in this logic; and (c) show how these modalities can be used to elucidate the relevance of logic for deductive reasoning.

Talk given at the Foundations of Logical Consequence Workshop I: Proof-Theoretic vs Model-Theoretic Semantics (Arché Centre for Logic, Language, Metaphysics and Epistemology, University of Saint Andrews).

Talk given at the Fourth World Congress of Paraconsistency (Melbourne)

Abstract. Non-dialetheic proponents of paraconsistency have often appealed to ambiguity to explain away the apparent acceptance of true contradictions in their paraconsistent approach to logical consequence. This can be done by referring to an ambiguity at the level of the logical or the non-logical vocabulary. The kind of ambiguity I’m interested in relates the validity of explosion to the ambiguity of the classical connectives (see e.g. Read (1981)).

While this is all fairly well known, it is generally not remarked that classical logic offers only one of two intuitively plausible ambiguous readings of the logical connectives, namely a reading which takes an ambiguous connective to exhibit the deductive features of the extensional and intensional connectives. Another option, however, takes each ambiguous connective that plays a role in an argument to exhibit (in a non-deterministic way) the deductive features of either an intensional or an extensional connective.

download

Talk given at Logics for Dynamics of Information and Preferences Working sessions (ILLC, Amsterdam).

Abstract. Adaptive logics (a family of nonmonotonic logics introduced by Batens, and further developed by his co-workers) are often suggestively described as "logics which adapt themselves to the specific premise-sets they are applied to." Therefore, their functioning is fleshed out in terms of a dynamic proof-theory which allows for defeasible inference-steps. Notwithstanding the fact that this is an accurate description of what adaptive logics do, this is not always the best way to introduce them. The obvious alternative is to explain some of the basic insights of adaptive logics in terms of its preferential semantics. Admittedly, this is not the adaptive logician's preferred starting point (for it is all about the dynamic proof-theory), but from a present-day semantical perspective on logical dynamics it is undoubtedly the most familiar one.

In this presentation I want to do two things. First, and most importantly, to reformulate the model-theory of adaptive logics in a modal-epistemic framework. This move requires us to interpret the preferential semantics relative to a box-operator with a contextually restricted range. Secondly, and largely as an illustration of the former, to describe how this framework can be applied to elucidate how information loss due to equivocal communication could be reduced.

Invited tutorial given at the ILCLI International Workshop on Logic and Philosophy of Knowledge, Communication and Action

Abstract. The tutorial connects two notions of information: the inverse relationship principle which relates informational content to the exclusion of possibilities, and information-structures based on a partial ordering on states of information. Jointly, these allow the formulation of several distinct precise notions of content-individuation.
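
In its simplest possible-worlds rendering (a schematic formulation offered here for orientation, not necessarily the one used in the tutorial), the inverse relationship principle can be put as follows: if ⟦A⟧ is the set of states compatible with A, then the informational content of A can be identified with the excluded states, Cont(A) = S \ ⟦A⟧, and A is at least as informative as B whenever ⟦A⟧ ⊆ ⟦B⟧, that is, whenever A excludes at least as many possibilities as B.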

download

Presented at the First Workshop on the Philosophy of Information and Logic (Oxford, UK)

Abstract. Cognitive states like knowledge and belief, as well as cognitive commodities like evidence, justification, or proof play a central role in our epistemological theories. Being attentive to the way such states and commodities interact in these theories is particularly important. This is even more so if, besides knowledge, we also want to reason about how data and information improve our overall epistemic position. This is mainly due to the fact that being informed is itself ambiguous between the predominantly syntactic relation of holding a piece of data that qualifies as genuine information and the largely semantic relation of being in a state which satisfies certain conditions. In this paper we argue that getting the relation between states and commodities “right” is a first prerequisite for the choice of bridge axioms in a combined logic of data and information with theoretical virtues similar to the existing combined logics of knowledge and belief. To start with, we formalise the intuitively valid principles that “being informed involves holding data,” and “being informed involves holding a piece of information.” Subsequently, we check how these necessary conditions for being informed constrain the set of plausible bridge axioms, and then outline a generic combined system. To conclude, a number of broader methodological considerations are introduced and related to the specificity of introducing informational considerations into the practice of formal modelling.
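
Written schematically (the operators Ia, Da and Ha are illustrative notation only, not the system defined in the paper), the two necessary conditions take the shape of bridge principles such as Ia φ → Da φ (being informed that φ involves holding the relevant data) and Ia φ → Ha φ (being informed that φ involves holding φ as a piece of information); the question pursued in the paper is which further axioms connecting such operators can be adopted without collapsing the distinctions between data, information, and being informed.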

Presented at North American Conference on Philosophy and Computing (Loyola University, Chicago, US)

Abstract. The present paper expands upon the previously defended thesis of informational pluralism. This is the view that the content conveyed by a message is a function of the level of abstraction at which the relevant communication is modelled. Specifically, it focuses on the problem of how content and presumed content should be evaluated in settings where the communication is equivocal.

The formal approach is a defeasible account of perceived content. Its functioning is studied informally in terms of the relevant levels of abstraction and the relation of simulation between those levels, and formally characterised in terms of infomorphisms between classifications.

Presented at CAP in Europe 2007 (Twente, The Netherlands)

Published in Waelbers, Briggle & Brey (eds.), Current Issues in Computing and Philosophy, IOS Press.

Abstract. One of the basic principles of the general definition of information is its rejection of dataless information, which is reflected in its endorsement of an ontological neutrality. In general, this principle states that “there can be no information without physical implementation” (Floridi (2005)). Though this is standardly considered a commonsensical assumption, many questions arise with regard to its generalised application. In this paper a combined logic for data and information is elaborated, and specifically used to investigate the consequences of restricted and unrestricted data-implementation-principles.

download

Poster presented at the Formal Epistemology Workshop (CMU, Pittsburgh, US)

Abstract. One of the central aims of the philosophy of information is the formulation of an epistemological theory that is based on information. On this account—and unlike Dretske’s seminal proposal—knowledge should no longer be analysed in terms of beliefs, but directly in terms of the non-doxastic factive attitude of ‘being informed’. A distinctive feature of this project is its simultaneous investigation of information as a commodity, and of the statal conditions that are necessary and sufficient for being in a state wherein one is informed. While research on the former aspect has essentially been concerned with the veridical nature of semantic information, the research on the latter has, among other things, led to the formulation of an epistemic logic for ‘being informed’.

After a brief elaboration on the discrepancy between reductive analyses of information as a commodity (information as veridical and meaningful well-formed data) and the alleged primeness of the statal condition for being informed, we propose a formal analysis of a small class of necessary conditions for being informed. The formulation of these conditions elaborates on previous work on the semantics for the modal logic for ‘being informed’, and uses the preferential models of adaptive logic.

download

Presented at CAP in Europe 2006 Conference (Trondheim, Norway)

Abstract. Holding on to the view that Logical Orthodoxy is at best a fallible guide for the formalisation of the concept of semantic information, the inclusion of any logical principle within a logic of information should be the object of closer scrutiny. By investigating the possibility of being informed of a (true) contradiction, this paper adopts the opposite strategy.

Following this unusual method, it is subsequently argued that paraconsistency alone is not enough to motivate the acceptance of some contradictions as genuine information; that accepting contradictory but not veridical information is a rather trivial position; and that only a few motivations for dialetheic (i.e. true contradictory) information stand up to the standards of a theory of semantic information.

Presented at the 27th Nederlands-Vlaamse Filosofiedag, Rotterdam.

download

Presented at CAP in Europe 2005 Conference (Västerås, Sweden)

Published in Computing, Philosophy, and Cognitive Science, G. Dodig-Crnkovic and S. Stuart (eds.), Cambridge Scholars Press.

Abstract. By introducing the notion of logical pluralism, it can be concluded that up to now theories of semantic information have - at least implicitly - relied on logical monism, the view that there is one true logic. Adopting an unbiased attitude in the philosophy of information, we ought to ask whether logical pluralism could entail informational pluralism. The basic insights from logical pluralism and their implications for a theory of semantic information should therefore be explored.

First, it is shown that (i) the general definition of semantic information as meaningful well-formed data does not favour any logical system, (ii) there are nevertheless good reasons to prefer a given logic above some others, and (iii) preferring a given logic does not contradict logical pluralism.

A genuine informational pluralism is then outlined by arguing that for every true logic the logical pluralist accepts, a corresponding notion of semantic information arises. Relying on connections between these logics, it can be concluded that different logics yield complementary formalisations of information and informational content. The resulting framework can be considered as a more versatile approach to information than its monist counterparts.

download

Presented at Second International Workshop on Philosophy and Informatics (Kaiserslautern)

Published in: WM2005: Professional Knowledge Management Experiences and Visions, edited by Klaus-Dieter Althoff, Andreas Dengel, Ralph Bergmann, Markus Nick and Thomas Roth-Berghofer, 579-86. Kaiserslautern: DFKI GmbH, 2005.

Also available in CEUR Online Proceedings Vol. 130

Abstract. The core aim of this paper is to provide an overview of the benefits of a formal approach to information as being informative. It is argued that handling information-like objects can be seen as more fundamental than the notion of information itself. Starting from theories of semantic information, it is shown that these leave being informative out of the picture by choosing a logical framework which is essentially classical. Based on arguments in favour of logical pluralism, a formal approach to information handling inspired by non-classical logics is outlined.

download

Presented at 1st World Conference and School on Universal Logic (Montreux)

Abstract. Through their development, adaptive logics (see [1]) have often been devised as modal adaptive logics. That is, a modal logic L strengthened with the provisional application of a rule which is not in L itself (e.g. ◇A ⇒ □A). Logical systems using such an approach include inconsistency-adaptive logics based on Jaskowski’s non-adjunctive approach [6], and logics for compatibility [2]. While non-modal adaptive logics generally succeed in providing a natural reconstruction of reasoning, proof-formats for modal adaptive logics lack the same intuitiveness. Basically, the drawbacks of the proof-formats stem from adaptive logics’ reliance on a purely syntactic use of modal logics, thus leaving some natural (semantic) insights in modal languages aside. Compared to other adaptive logics (essentially the original inconsistency-adaptive logic ACLuN1), part of the appealing naturalness of dynamic proofs is lost (partly because the rules are defined indirectly with respect to the existence of a Hilbert-style proof). The main purpose of this paper is to provide a labelled proof-format for modal adaptive logics which does not suffer from the mentioned drawbacks.

Presented at Thought Experiments Rethought Congress (Ghent, Belgium)

Abstract. In this paper I try to give an alternative account of what (Floridi, 2003) describes as the two approaches to the Philosophy of Information (henceforth PI), and more precisely as the move from an analytical to a constructionist approach within PI. Whereas he tackles the problem from the standpoint of the historical evolution of PI (see: Floridi, 2002) - more generally relying on the notion of a pragmatic turn within contemporary analytical philosophy - I present a rather different approach, which is based on an interpretation of science fiction as a thought experiment.

Presented at CAP in Europe 2004 Conference (Pavia, Italy)

Published in Computing, Philosophy, and Cognition. L. Magnani and R. Dossena (eds.). London, College Publications: 313—327.

Abstract. The core aim of this paper is to focus on the dynamics of real proofs by introducing the block-semantics from (Batens, 1995) as a dynamical counterpart for classical semantics. We first look briefly at its original formulation - with respect to natural deduction proofs - and then extend its use to tableau-proofs. This yields a perspective on proof-dynamics that (i) explains proofs as a series of steps providing us with an insight into the premises, and (ii) reveals an informational dynamics in proofs unknown to most dynamical logical systems. As the latter remark especially applies to Amsterdam-style dynamic epistemic logic, we consider a weak modal epistemic logic and combine it with dynamic modal operators expressing the informational proof-dynamics (as a natural companion for the informational dynamics due to new information known from dynamic epistemic logic).

The motivation for this approach is twofold. In a general way it is considered as (a first step in) the reconstruction of the proof-dynamics known from adaptive logics (revealed by its block-formulation) within a modal framework (i.e. using a relational structure); in a more restricted way it aims at the explicit application of some results on omniscience formulated in Batens' paper on block-semantics.

download

Presented at VlaPoLo9 Workshop (Ghent, Belgium)

Presented at VlaPoLo7 Workshop (Brussels, Belgium)

Abstract. Problem solving in the sciences often forces us to rely on a pragmatic notion of truth. An adaptive logic interpreting scientific theories as pragmatically possible was already given in (Meheus, 2002). The logic presented in this paper refers to a complementary view on pragmatic truth: not with respect to theories but with respect to single statements, facts or data. To this end we rely on Nicholas Rescher's concept of presumptive truth and the connected cognitive action called (presumptive) taking presented in (Rescher, 2001), and present an adaptive logic modelling the local acceptance and rejection of a statement.