**Provocation forthcoming** in *Being Profiled. Cogitas Ergo Sum*. Edited by Bayamlioglu, Baraliuc, Janssens & Hildebrandt. Based on Questioning Mathematics: Algorithms and Open Texture.

# Archive

**Talk to be given at** Data, Security, Values: Vocations and Visions of Data Analysis. Peace Research Institute Oslo (PRIO).

**Abstract.** With the development of a critical research agenda on contemporary data practices we gradually build the tools that are needed to overcome the uncertainty, lack of clarity, and impact of misleading narratives concerning the epistemology of data science. Without such a reflection, we cannot understand the kind of knowledge data analysis produces. More importantly, we then also lack the ability to evaluate specific knowledge-claims as well as more general affirmations of the epistemic superiority (smarter, more objective, ...) of the knowledge, decisions, or insights that data analysis produces. This is why it is important to recognise that data is never just data (e.g. Gitelman 2013, Kitchin 2014), or that the development of algorithms (as any advanced scientific or engineering practice) cannot fully be understood in terms of a well-defined internal logic.

The starting point of this contribution is that we should start asking similar questions about mathematics: We need to understand how mathematics contributes to scientific respectability and authority of data science. To do so, we cannot limit our attention to mathematics as a body of mathematical truths or mathematical techniques. Instead, we should focus on mathematical thought and beliefs about the nature of mathematical thought. I propose to develop this critical inquiry through a dedicated consideration of how mathematical values shape data science.

**Talk given at** Logic and Metaphysics in the Modern Era. Joint conference of the Université Libre de Bruxelles and the Vrije Universiteit Brussel.

**Talk given at** the VIIe Congrès de la Société de Philosophie des Sciences. Nantes, France.

**Abstract.** This paper is concerned with the “problem of visualisation,” and more precisely with the logical and epistemological dimensions of the use and design of information visualisations.

In this paper I reflect on the discrepancy between, on the one hand, philosophical perspectives on visualisation and, on the other hand, the views and assumptions on which visualisation scientists rely when they theorise about visualisation or develop new visualisation-tools. I propose a three-part characterisation of the relevant discrepancy. This is the starting-point for a more thorough exchange between the disciplinary perspectives under consideration: an exchange meant to support the visualisation-sciences in their quest for better theoretical foundations (Purchase et al. 2008, Chen et al. 2017), and to entice philosophers of science to reconsider their preferred ways of understanding what visualisations are meant to accomplish and which practical obstacles a visualisation-scientist tries to overcome, especially in the context of data-intensive science. The proposed three-part characterisation is based on three contrasts, namely:

1. The philosophical and the technical problem: What is it vs how do we make it?

2. The epistemological and the computational problem: How do we use a visualisation correctly vs how do we use and construct a visualisation efficiently?

3. The semantical and the syntactical problem: How does a visual artefact represent (a system) vs how does a visual artefact encode (a data-object)?

These three pairs form the core of my exposition, and I will use them to further characterise the problem of visualisation as two separate inference-problems: the object-level problem of correctly and efficiently using a visual artefact, and the meta-level problem of correctly and efficiently constructing a visual artefact.

The Ninth Workshop on the Philosophy of Information is held at the Royal Flemish Academy of Belgium for Science and the Arts. This workshop is a contact-forum of the Academy organised with the additional support of the DSh VUB and the Centre for Logic and Philosophy of Science.

The Workshop theme is *Information Visualisation.*

**Talk** given at 10 Years of ‘Profiling the European Citizen’: Slow Science Seminar 12-13 June 2018, Brussels City Campus.

**Abstract** Contemporary data practices, whether we call them data science or AI, statistics or algorithms, are widely perceived to be game changers. They change what is at stake epistemologically as well as ethically. This especially applies to decision-making processes that infer new insights from data, use these insights to decide on the most beneficial action, and refer to the data and inference process to justify the chosen course of action.

Developing a critical epistemology that helps us address these challenges is a non-trivial task. There is a lack of clarity regarding the epistemological norms we should adhere to. Purely formal evaluations of decisions under uncertainty can, for instance, be hard to assess outside of the formalism they rely on. In addition, there is substantial uncertainty with regard to the applicable norms because scientific norms may appear to be in flux (new paradigms, new epistemologies, ...). Finally, dealing with this uncertainty and lack of clarity is further complicated by promises of unprecedented progress and opportunities that invite us to imagine a data-revolution with many guaranteed benefits, but few risks.

As part of this broader epistemological exercise, I want to focus on a small, but largely disregarded—in my view misunderstood—fragment of the problem at hand: The question of the role of mathematics, and the question of how some widely shared beliefs about the nature of mathematical knowledge contribute to the scientific respectability of contemporary data practices.

**Workshop** from 26 through 29 March 2018 at the Lorentz Centre organised with Lora Aroyo, Kaspar Beelen, Davide Ceolin, and Vladi Finotto.

**Paper** to be published in Can Baskent & Thomas M. Ferguson (eds.) *Graham Priest on Dialetheism and Paraconsistency*, Outstanding Contributions in Logic, Springer.

**Abstract** We present a multi-conclusion natural deduction calculus characterizing the dynamic reasoning typical of Adaptive Logics. The resulting system AdaptiveND is sound and complete with respect to the propositional fragment of adaptive logics based on CLuN. This appears to be the first tree-format presentation of the standard linear dynamic proof system typical of Adaptive Logics. It offers the advantage of full transparency in the formulation of locally derivable rules, a connection between restricted inference rules and their adaptive counterpart, and the formulation of abnormalities as a subtype of well-formed formulas. These features of the proposed calculus allow us to clarify the relation between defeasible and multiple-conclusion approaches to classical recapture.

**Talk** given at EPSA, Exeter.

An extended version (SharedIt-link) of this blog-post has now appeared as a commentary in the journal Philosophy & Technology, 30:541–545.

A few months ago I was invited to enter into a dialogue with Brussels-based artist Rossella Biscotti for the occasion of the exhibition of her installation “Other” from 2015 at the Contour Biennale in Mechelen (Belgium). In this work, she uses the Jacquard weaving technique to visualise Belgian census data, and engages in an exploration of data-subjects that are categorised as ‘other’ within this data-set. The resulting installation consists of 4 large carpets that display data on various minority-groups and rest-categories of the Brussels population.

My role in this collaboration was to contribute formal or mathematical insights on how rest-categories like other or none of the above could be understood. In this short piece I reflect on this collaboration. I first discuss how artistic research like Biscotti’s can contribute to the critical evaluation of contemporary data-practices, and then elaborate on how logico-mathematical insights can become part of such inquiries.

Biscotti’s 10×10 installation, the precursor of Other, was originally designed and produced to be exhibited at Haus Esters in Krefeld (Germany), a modernist villa designed by Ludwig Mies van der Rohe for the silk-manufacturer Josef Esters, and integrates multiple modernist ideals in a single work of art. In this work, Biscotti explores how institutional structures are imposed on individuals by combining features of automated mechanical manufacturing with conceptual and technological aspects of how large data-sets are collected and processed. She focuses in particular on how categories are used to create an overarching structure, and relates this to the punch-cards used to implement such structures within industrial and administrative processes that became increasingly automated in the early 20th century: the Jacquard loom, an early 19th-century device that automated the weaving of complex patterns, and the Hollerith tabulator. By showing the resulting work in Haus Esters, it becomes part of a more encompassing modernist narrative exemplified by Mies van der Rohe’s architecture.

For the exhibition of this installation at Contour, Biscotti’s team wished to extend their research with a more rigorous expression of the logic behind uses of rest-categories like other, and capture this logic in a single formal expression. This led us to a brief excursion into the meaning of the labels we use to designate such rest-categories, and suggested that we should interpret these labels as semantically empty labels that share certain features with the sentinel values that data-scientists now use to signal that certain data are missing. By asking how such empty labels interact with the generation (and ensuing reification) of categories, for instance when data are aggregated, we came to an interpretation of rest-categories as sets of data-subjects whose members should not, due to the lack of positive evidence of their similarity, be subsumed under a single kind or profile.

The recurrent attention to minority-groups and rest-categories, as well as the value accorded to automated and/or mechanical processes, naturally places Biscotti’s work within the scope of current debates on large-scale data-processing and the data-revolution. Mechanical objectivity and data-shadows are, for instance, current topics of interest within the scholarly community that tries to understand and assess the ethical, legal, and social implications of the data-revolution. And yet, the artistic research that led to 10×10 and Other deliberately only investigates historical computational technologies like the punch-card, and remains focused on the functioning of categories in census-data, which is itself a very traditional form of large-scale data-collection and organisation. It is, therefore, not immediately clear how Biscotti’s work, which (unlike the work shown at last year’s Big Bang Data at Somerset House in London) remains silent on matters like Big Data and machine learning, can contribute to our understanding of what we now see as the most salient features of the data-revolution.

What I’d like to suggest is that taking early manifestations of automated data-processing as an object of study can help us open up new ways of questioning data-centric forms of knowledge-production, for instance by making us aware of practices that have become too familiar to attract critical assessment. Punch-cards and tabulators are, in that sense, similar to pre-cinematic devices: basic mechanical devices we study to understand the technologies that enable contemporary artistic and documentary practices (cinema) or, in our case, novel epistemic practices. Such a study (re)directs our attention to the technological changes that make epistemic practices possible, or even just conceivable. It becomes a genealogical project, with the potential to identify the technical and conceptual changes we need to be aware of to understand contemporary practices, by exposing us again to the historical building blocks of our current practices.

Biscotti’s work helps us, at the same time, avoid certain distractions. It can encourage us to look underneath the reigning rhetoric on Big Data, the mythical abilities that are often attributed to machine learning and artificial intelligence, and perhaps even the most rudimentary principles of inferential statistics. It invites us to take a few steps back, into what we think of as known territory, and draws our attention to the practices and assumptions that make data-driven inquiry and decision-making possible: recording, organising, and processing through counting, categorisation, and automated calculation. Because artistic research like Biscotti’s is situated at the periphery of current scholarly debates, it isn’t bound by a given research-agenda: it can reinvestigate familiar and often widely trusted practices, and ask elementary questions anew from a contemporary (artistic) perspective. This includes questions that may have lost their immediate relevance because they no longer drive our scientific or scholarly curiosity, but also questions that are not aligned with the dominant themes of ongoing debates concerning privacy, fairness, transparency, or responsibility.

What then can a logico-mathematical approach contribute to artistic research concerned with the classification practices on which census-data are built? Two things at least. First, it can help make the idea of a “logic of classification” more explicit, and develop its implications in purely abstract terms (for instance without associating rest-categories with forms of exclusion). As such, it can reorient our critical attention from how classification-structures affect specific data-subjects in concrete settings to how classification-rules create abstract entities like the profiles or categories that become the primary entities we reason about or use to make decisions. Second, it can be used to explore alternative approaches; in this specific case, different ways of conceptualising how rest-categories should be used in the construction of categories of (in certain respects) similar data-subjects.

In relation to the focus on “other”, I specifically contrasted two different ways in which the membership of a rest-category could be conceptualised. The basic principle that underlies both is that data-subjects belong to the same category (or fall under the same profile) if and only if for all the relevant data-dimensions we have attributed them the same values (or values within the same range). In this way, we can construct categories of, say, all the children of ages between 6 and 10 that have at least one sibling. Similarly, we seem to be able to construct the category of all the data-subjects categorised as “other” in the data-dimension “household position,” and this even if the actual household-roles of the presumed members of this category do not have anything in common apart from the fact that they do not conform to any of the roles privileged by the designers of the census, and that their place or role within a household probably isn’t very common (as in the case of “other nationalities”). Treating such rest-categories as bona fide categories makes sense if we think of labels like “other” or “none of the above” as semantically significant labels; labels that provide sufficient ground for identification because they indicate that we have sufficient evidence to identify the data-subjects that were so-labelled.

If, however, we think of such labels as a mere indication of the absence of any information, this strategy quickly becomes questionable. In the context of the household positions mentioned above, being categorised as “other” results from negative answers to 4 consecutive yes/no-questions, and need not carry any positive information. At least for some rest-categories it thus makes more sense to treat the labels we use to denote these categories along the same lines as the sentinel values that are customarily used to signal missing data, like 9999 or the NaN (not a number) numeric data-type described by the IEEE 754 floating-point standard. Let us stipulate that two data-subjects fall under the same profile or belong to the same category if and only if, first, there is no information that indicates that they are different in a relevant respect (a potentially vacuous sense of being similar), and, in addition, there is also positive evidence that they are similar in the relevant respects. By the second requirement, the label “other” then no longer leads to the creation of a category of others. Because explicit sentinel values like NaN have the property of not being equal to themselves (the expression NaN==NaN will typically evaluate to False), this requirement for positive information can be simulated by using such values to denote rest-categories.
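The self-inequality of NaN that this simulation relies on can be checked directly. A minimal sketch in Python (any language with IEEE 754 floats behaves the same way):

```python
import math

# Per IEEE 754, NaN compares unequal to everything, including itself.
nan = float("nan")
print(nan == nan)        # False

# Identifying a NaN therefore requires an explicit test:
print(math.isnan(nan))   # True

# An ordinary value used as a sentinel is, by contrast, self-identical,
# so records sharing it would be grouped into one category.
print(10 == 10)          # True
```

This is exactly the asymmetry exploited above: equality-based grouping silently treats an ordinary sentinel as positive evidence of similarity, while a NaN-style sentinel never does.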

Using a randomly generated data-set similar to the data used by Biscotti, the difference between the two approaches can easily be visualised. In the figures below the sizes of categories are displayed as bubbles; the figure on the left uses the number 10 to denote “other” (and 10==10 evaluates to True), whereas the figure on the right uses NaN.
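The contrast behind those two figures can be reproduced in a few lines of Python. The records and codes below are illustrative assumptions (not Biscotti’s actual data-set), with 10 standing in for “other”:

```python
from collections import Counter

# Hypothetical household-position codes; 10 is assumed to mean "other".
records = [1, 2, 2, 3, 10, 10, 10]

# With an ordinary sentinel, 10 == 10 holds, so the "others" collapse
# into a single bona fide category.
with_sentinel = Counter(records)

# Replacing the sentinel by NaN: since NaN != NaN, no "other" record
# matches any existing category, and each ends up in its own singleton.
nan_records = [float("nan") if r == 10 else r for r in records]
categories = []                      # list of lists of equal-valued records
for r in nan_records:
    for cat in categories:
        if r == cat[0]:              # always False when both are NaN
            cat.append(r)
            break
    else:
        categories.append([r])

print(len(with_sentinel))   # 4 categories, one of which is "other"
print(len(categories))      # 6 categories: three singleton "others"
```

The same seven records thus yield either one compact “other” category or a periphery of mutually distinct singletons, depending solely on the semantics of the label.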

Here, we immediately see that the presence of data-subjects labelled as “other” leads to the creation of a large periphery of mutually different (because unknown) data-subjects whenever the label used to denote rest-categories indicates the absence of information. This gives us a minimal sense of how the meaning we assign to the labels we use to denote categories interacts with the process of creating categories or profiles, and with the subsequent use of these categories as an ontology for describing a given subject-matter.

**Panel-session** at SPT2017: The Grammar of Things. 20th conference of the Society for Philosophy and Technology

June 14-17, 2017 – Darmstadt, Germany.

The upshot of this panel-discussion is to bring together different perspectives on the epistemic and societal role of mathematics in its relation to data-science and the data-revolution. It is based on the assumption that only a realistic picture of mathematics, as emphasised within the philosophy of mathematical practices, can reliably inform such an inquiry. The latter presupposes a better understanding of the role of applied mathematics in the sciences, an appreciation of the diverse ways in which statistical theory can inform the development of data-processes (Gelman & Hennig 2015), and a critical outlook on the societal status of mathematics. Such a realistic picture of mathematics serves two purposes. It should inform an analysis of what it means to “trust in numbers” (Rieder & Simon 2016) or help us identify clear cases of “mathwashing” (Benenson 2016), but it should just as much clarify the critical role of mathematics and explain how certain epistemic virtues of mathematics can play a decisive role in exposing epistemic failures and poor practices in data-science.

Organiser: Patrick Allo (Oxford).

Participants: Karen François (Brussels), Christian Hennig (UCL), Johannes Lenhard (Bielefeld), and Jean Paul Van Bendegem (Brussels).

**Talk given** at the LogiCIC Workshop 2016, Amsterdam

at Group Knowledge and Mathematical Collaboration 2017, Oxford, and

at Ampliative Reasoning in the Sciences 2017, Ghent.

**Abstract** The problem that motivates this paper is the following: Given a data-set with records of interactions from collaborative science online, which background-theory should be adopted to study these digital traces if one’s goal is to explain whether and how the collaboration was epistemically successful? I will approach this question on the basis of a specific case-study, namely the Polymath-projects initiated in 2009 by Cambridge mathematician and Fields Medalist Timothy Gowers (see e.g. Allo et al. 2013). These are collaborative projects dedicated to specific research-level mathematical questions (finding a proof for a certain result). The centre of activity of these collaborations is the interaction in discussion-threads on various weblogs, and the discussions in question are in principle open to anyone.

Extended abstract (LogiCIC-version)

Presentation (Oxford-version)

Presentation (Ghent-version)

**Published** **in** Minds and Machines (Online First)

**Abstract** This paper develops and refines the suggestion that logical systems are conceptual artefacts that are the outcome of a design-process by exploring how a constructionist epistemology and meta-philosophy can be integrated within the philosophy of logic.

doi: 10.1007/s11023-017-9430-9 (Open Access)

**Published in** the Journal of Logic and Computation — Logic and Philosophy of Information Corner (Advance Article).

**Abstract** In this paper I use the distinction between hard and soft information from the dynamic epistemic logic tradition to extend prior work on informational conceptions of logic to include non-monotonic consequence-relations. In particular, I defend the claim that at least some non-monotonic logics can be understood on the basis of soft or “belief-like” logical information, and thereby question the orthodox view that all logical information is hard, “knowledge-like”, information.

**Paper published in** Big Data & Society, Dec 2016.

authored by Brent Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, and Luciano Floridi.

**Abstract** In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.

doi:10.1177/2053951716679679 (open access)

**Talk given at** Situations, Information, and Semantic Content, Munich.

**Abstract** The background of this talk is the development of an informational conception of logic that is based on the methodology of the philosophy of information, and in particular on the thesis that information is always assessed at a given level of abstraction. Here, I wish to specifically explore the similarities and dissimilarities between an informational perspective on logic based on situation semantics, and my own approach.

**Talk given at** Culture and politics of data visualisation in Sheffield.

**Abstract** Understanding information visualisation as a reasoning and communication tool doesn’t only require a systematic understanding of its intended functioning (i.e. correct reasoning and reliable communication), but also a better insight into its potential failures. The latter can help us recognise how information-visualisations can be used to mislead (the critical perspective also present in argumentation theory), but it can also lead to a deeper understanding of the trade-offs that have to be negotiated when designing a visualisation (the design perspective).

The upshot of this talk is to take a closer look at the latter aspect by developing an account of fallacies in information-visualisation, focusing on the following common techniques in visualisation:

- Informational shortcuts (information-hiding, fudging distinctions, exploiting imprecision) as a means to jump to conclusions,
- Data-transformations like re-ordering, clustering, or compressing information as a means to discover or reveal patterns in the data,

and asking how we can distinguish epistemically beneficial from epistemically detrimental uses of these techniques.

To conclude, the issue of visual fallacies will be reconsidered against the background of contemporary visual analytics and its emphasis on obtaining actionable insight.

**Published in** The Routledge Handbook on Philosophy of Information (Floridi, ed.).

**Abstract** The combination of logic and information is popular as well as controversial. It is, in fact, not even clear what their juxtaposition, for instance in the title of this chapter, should mean, and indeed different authors have given different interpretations of what a or the logic of information might be. Throughout this chapter, I will embrace the plurality of ways in which logic and information can be related and try to individuate a number of fruitful lines of research. In doing so, I want to explain why we should care about the combination, where the controversy comes from, and how certain common themes emerge in different settings.

Download (uncorrected proofs)

**Talk given** at International Association for Computing and Philosophy – Annual Meeting 2016 (University of Ferrara).

**Abstract** Within the philosophy of mathematical practice, the collaborative Polymath-projects initiated by Timothy Gowers have since their start in 2009 attracted a lot of attention (Van Bendegem 2011, Pease and Martin 2012, Stefaneas and Vandoulakis 2012). As this field is concerned with how mathematics is done, evaluated, and applied, this interest should hardly surprise us. The research-activity within Polymath was carried out publicly on several blogs, and thus led to the creation of a large repository of interactive mathematical practice in action: a treasure of information ready to be explored. In addition, the main players in this project (the Fields medallists Timothy Gowers and Terence Tao) continuously reflected on the enterprise, and provided additional insight into the nature of mathematical research and large-scale collaboration (Gowers 2010: §2). This led, amongst others, to the claim that the online collective problem solving that underpins the Polymath-projects consists of “mathematical research in a new way” (Gowers and Nielsen 2009: 879).

In previous work (Allo et al. 2013a,b) we relied on formal models of interaction, mainly from the dynamic epistemic logic tradition, to develop a theoretical basis for the analysis of such collaborative practices, with the explicit intent to use logic to understand the practice instead of using logic as part of the foundations of mathematics. In particular, we argued that focusing on available announcement-types (public vs private announcements) leads to a finer typology of scientific communities than the models used in, for instance, Zollman (2007; 2013), and is better suited to model collaborative enterprises.

The present paper further develops this work by supplementing it, on the one hand, with a computational/empirical study of all the Polymath-projects (the latest project was initiated in February 2016) that allows us to apply methods from social network analysis to the totality of interactions that took place within 11 Polymath projects and 4 Mini-Polymath projects, and, on the other hand, with insights drawn from Dunin-Keplicz and Verbrugge’s (2010) work on logics for teamwork. The upshot is to integrate data-driven and logic-driven (a priori) methods for the study of scientific collaboration in general, and ICT-mediated massive collaboration in mathematics in particular.

**Talk given** at Models and Simulations 7 (University of Barcelona).

**Talk given at** the Artificial Intelligence and Contemporary Society: The Role of Information conference (XXI Jornadas de Filosofía y Metodología actual de la Ciencia — Jornadas Sobre Inteligencia Artificial Y Sociedad Contemporánea: El Cometido De La Información) at the University of A Coruña.

**Abstract** The upshot of this paper is to develop and refine the suggestion that logical systems are conceptual artefacts, and hence the result of a design-process. I develop this idea within the confines of the philosophy of information, and against the combined background of a Carnapian philosophy (as perceived through its current revival) and Herbert Simon’s ideas on the sciences of the artificial.

The proposed constructionist account is developed as follows. I start by highlighting the basic ideas behind a constructionist epistemology and a constructionist philosophical methodology, and then use these insights to identify how a constructionist attitude is already at play in how logicians develop novel formal systems. This modest constructionism is then turned into a more radical form through the proposal, backed by the use of the method of abstraction, that the common counterexample-dynamics in philosophical theorising should be replaced by a refinement-dynamics and a clear statement of requirements, and the application of this idea to the development of logical systems. I conclude with some general remarks on logic as a semantic artefact and as an interface.

**Published in** Theoria, **82(1)**: 3-31.

This is a descendant of the conference presentation with the same title.

**Abstract** The traditional connection between logic and reasoning has been under pressure ever since Gilbert Harman attacked the received view that logic yields norms for what we should believe. In this paper I first place Harman’s challenge in the broader context of the dialectic between logical revisionists like Bob Meyer and sceptics about the role of logic in reasoning like Harman. I then develop a formal model based on contemporary epistemic and doxastic logic in which the relation between logic and norms for belief can be captured.

Download (preprint)

On October 1st I joined the Oxford Internet Institute.

Here’s a summary of the project I’ll be working on:

Information visualisation is an essential tool in data-science, but the lack of a theoretical foundation currently prevents visualisation science from making substantial progress and developing solutions for the epistemological challenges posed by Big Data.

Starting from the current state of the art in formal logic and the philosophy of information, the prospects of a new foundation for information visualisation are explored. This should lead to a model of the information-lifecycle in visualisation that sheds light on trade-offs in design decisions, gives a unified account of reasoning and communication with visualisations, and explains why and how information-visualisation allows us to climb the Data-Information-Knowledge hierarchy.

Given the epistemic challenges in science and in policy-decisions, substantial attention is also devoted to what can go wrong with the use of information-visualisation, which requires the development of an account of mis- and disinformation, and of fallacious reasoning based on computer-generated representations of data.

**Talk given at** a symposium on the meaning of logical connectives (organised by Luis Estrada-González) at CLMPS in Helsinki.

**Abstract** The goal of this contribution is to take a few steps back, and put in perspective our reasons for trying to avoid meaning-variance as a means to, first, save the possibility of genuine rivalry between different logics, and, second, safeguard the very idea of logical revision. One reason for this re-examination is that if we understand better why meaning-*in*variance across logics matters, we will also have a better idea of which kind of answer is satisfactory. Indeed, the hope could be that we can also delineate which types of counter-objections can summarily be dismissed once a good answer to the Quinean challenge has been given.

As part of the proposed inquiry, three complementary perspectives will be adopted. First, we will reconsider the stances of Carnap and Kreisel with respect to formal and informal rigour; second, we will take some lessons from the distinction between data and phenomena (as used in the context of conceptual modelling by Löwe and Müller); finally, we shall revisit the problem of meaning (in-)variance in informational conceptions of logic, and particularly in view of the inverse relationship between logical discrimination and deductive strength.

**Talk to be given at** the Workshop on the Philosophy of Non-Classical Logics (UNILOG 2015), and at the Logic Colloquium 2015.

**Abstract** The problem of accounting for acceptable uses of classically valid but paraconsistently invalid arguments is a recurrent theme in the history of paraconsistent logics. In particular, the invalidity of the disjunctive syllogism (DS) and modus ponens (MP) in, for instance, the logic of paradox LP, has attracted much attention.

In a number of recent publications, Jc Beall has explicitly defended the rejection of these inference-forms, and has suggested that their acceptable uses cannot be warranted on purely logical grounds [1], [2]. Some uses of DS and MP can lead us from truth to falsehood in the presence of contradictions, and are therefore not generally or infallibly applicable [3].

Little can be objected to this view: if one accepts LP, then MP and DS can only be conditionally reintroduced by either

- opting for Beall’s multiple-conclusion presentation of LP (LP+), which only gives us A, A ⊃ B ⊢LP+ B, A ∧ ¬A and ¬A, A ∨ B ⊢LP+ B, A ∧ ¬A, or
- treating MP and DS as default rules.

The latter strategy was initiated by inconsistency adaptive logics [4], [5], and implemented for the logic LP under the name Minimally inconsistent LP or MiLP [6].

The gap between these two options is not as wide as it may seem: the restricted versions of MP and DS that are valid in LP are the motor of the default classicality of MiLP. The only difference is that the restricted versions only give us logical options (Beall speaks of ‘strict choice validities’), whereas default classicality presupposes a preference among these options (unless shown otherwise, we must assume that contradictions are false).

A cursory look at the debate between Beall and Priest [3], [7] may suggest that not much can be added to their disagreement. However, if we focus on the contrast between the mere choices of LP+ and the ordering of these choices in MiLP, we can tap into the formal and conceptual resources of modal epistemic and doxastic logic to provide a deeper analysis [8]. We can thus develop the following analogy:

LP+ is motivated by the view that logical consequence is a strict conditional modality, and is therefore knowledge-like. Using a slightly more general terminology: all logical information is hard information.

MiLP is motivated by the acceptance of forms of logical consequence that are variable conditional modalities, and are therefore belief-like. Using the same more general terminology: some logical information is soft information.

This presentation still gives the upper hand to Beall’s stance (shouldn’t logical consequences be necessary consequences?), but only barely so. The upshot of this talk is to motivate the views that (i) the soft information that underlies the functioning of MiLP can be seen as a global as well as a formal property of a logical space, and is therefore more logical than we might initially expect, and that (ii) adding a preference among logical options can be seen as a legitimate and perhaps even desirable step in a process of logical revision.
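The failure of MP and DS in LP can be checked mechanically. The sketch below is a standard textbook encoding of LP (not drawn from the cited papers): truth values are 1 (true), 0.5 (glut, i.e. both true and false), and 0 (false), with 1 and 0.5 designated; the variable names are invented for the illustration.

```python
# A minimal sketch of Priest's LP: values 1 (true), 0.5 (glut), 0 (false);
# designated values (those preserved by consequence) are 1 and 0.5.
V = [1, 0.5, 0]

def neg(a): return 1 - a
def disj(a, b): return max(a, b)
def impl(a, b): return max(1 - a, b)  # material conditional: not-A or B
def designated(a): return a >= 0.5

# Counterexamples to modus ponens: A and A > B designated, B not designated.
mp_fails = [(a, b) for a in V for b in V
            if designated(a) and designated(impl(a, b)) and not designated(b)]

# Counterexamples to disjunctive syllogism: not-A and A v B designated, B not.
ds_fails = [(a, b) for a in V for b in V
            if designated(neg(a)) and designated(disj(a, b)) and not designated(b)]

print(mp_fails)  # [(0.5, 0)]
print(ds_fails)  # [(0.5, 0)]
```

The exhaustive search finds exactly one counterexample to each rule, and in both cases the antecedent takes the glutty value 0.5: the classically valid forms break down only in the presence of a contradiction, which is what the extra disjunct A ∧ ¬A in the LP+ formulations above registers.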

References

[1] J. Beall, “Free of Detachment: Logic, Rationality, and Gluts”, Noûs, 2013.

[2] J. Beall, “Strict-Choice Validities: A Note on a Familiar Pluralism”, Erkenntnis, 79(2): 301–307, 2014.

[3] J. Beall, “Why Priest’s reassurance is not reassuring”, Analysis, 72(3): 517–525, 2012.

[4] D. Batens, “Dynamic Dialectical Logics”, in G. Priest, R. Routley and J. Norman (eds.), Paraconsistent Logic: Essays on the Inconsistent. München/Hamden/Wien: Philosophia Verlag, 1989: 187–217.

[5] D. Batens, “Inconsistency-adaptive logics”, in E. Orlowska (ed.), Logic at Work: Essays Dedicated to the Memory of Helena Rasiowa. Heidelberg/New York: Springer, 1999: 445–472.

[6] G. Priest, “Minimally inconsistent LP”, Studia Logica, 50(2): 321, 1991.

[7] G. Priest, “The sun may not, indeed, rise tomorrow: a reply to Beall”, Analysis, 72(4): 739–741, 2012.

[8] J. van Benthem, Logical Dynamics of Information and Interaction. Cambridge University Press, 2011.

**Talk to be given at** ICPI 2015 (IS4IS-Summit)

**Abstract** My goal in this talk is to further develop the informational conception of logic proposed in [1] by motivating and exploring a methodology for logical practices (using, developing, and thinking about logic) that is inspired by the methodology of the philosophy of information, with particular emphasis on its constructionist metaphilosophy [2]. Against this background, I’m interested in the following phenomenon: if a formalisation-process leads to the refinement of one or more concepts we are interested in (either because we are explicitly formalising them, or because we use them to talk about the concepts we are actually formalising), this often leads to a “splitting of notions”. In that case, the careless use of the original notions in combination with their refinements often leads to fallacies of equivocation. As suggested in [3], the development of a design-perspective on logic is meant to show that this phenomenon is a reason to abandon the original concepts, and not a reason to cast doubt on the proposed refinement. As a corollary, constructionism in logic contributes to the motivation of a pluralist perspective on logical practices.

**References**

[1] P. Allo and E. Mares, “Informational Semantics as a Third Alternative?”, Erkenntnis, 77(2): 167–185, 2012.

[2] L. Floridi, “A defence of constructionism: philosophy as conceptual engineering”, Metaphilosophy, 42(3): 282–304, 2011.

[3] P. Allo, “Synonymy and intra-theoretical pluralism”, Australasian Journal of Philosophy, 93(1): 77–91, 2015.

**Talk given at** the Fourth World Congress on Universal Logic (Rio, Brazil).

**Published in** the Australasian Journal of Philosophy, **93(1)**: 77-91.

**Abstract** The upshot of this paper is to use the formal notion of synonymy to scrutinize the case for intra-theoretical pluralism defended in Hjortland's "Logical pluralism, meaning-variance, and verbal disputes".

Download (preprint)

The Seventh Workshop on the Philosophy of Information is organised by Phyllis Illari (Science and Technology Studies, UCL) and Giuseppe Primiero (Computer Science, Middlesex University).

**Theme:** Conceptual challenges of data in science and technology.

**Place:** University College London.

**Time:** 30-31 March 2015.

**Call for abstracts:** closes on 23 January 2015.

With thanks to the School of Science & Technology at Middlesex University, the British Society for the Philosophy of Science, and the Department of Science and Technology Studies, UCL for financial support.

Workshop at the Fifth World Congress on Universal Logic

25-30 June 2015

University of Istanbul

**Keynote**

In Search for a Conceptual Logic of Information (Luciano Floridi)

**Contributed papers**

- A quantitative-informational approach to logical consequence (Marcos Alves, Itala D'Ottaviano)
- Up the hill: on the notion of information in logics based on the four-valued bilattice (Carolina Blasio)
- Logic informed (Justin Bledin)
- Types of informational pluralism (Neil Coleman)
- Towards a more realistic theory of semantic information (Marcello D'Agostino & Luciano Floridi)
- Depth-bounded Probability Logic: A preliminary investigation (Marcello D'Agostino, Tommaso Flaminio, Hykel Hosni)
- Procedural theory of analytic information (Marie Duzi)

Organisers

The workshop is hosted by Universal Logic 2015 and organised in collaboration with the Society for the Philosophy of Information.

The workshop chairs are Patrick Allo and Giuseppe Primiero.

A selection of papers from the Fourth Workshop on the Philosophy of Information edited by Patrick Allo and Luciano Floridi has been published in Minds and Machines, 24(3).

- “A Taxonomy of Errors for Information Systems”, Giuseppe Primiero
- “The Logic of Knowledge and the Flow of Information”, Simon D'Alfonso
- “From Interface to Correspondence: Recovering Classical Representations in a Pragmatic Theory of Semantic Information”, Orlin Vakarelov
- “Smooth Yet Discrete: Modeling Both Non-transitivity and the Smoothness of Graded Categories With Discrete Classification Rules”, Bert Baumgaertner

See also the announcement by the editor of the journal.

**Talk and poster** given at LOFT 2014 (Bergen, Norway).

(joint work with Jean Paul Van Bendegem and Bart Van Kerkhove)

Festschrift for Jean Paul van Bendegem on the occasion of his 60th birthday

*Patrick Allo and Bart van Kerkhove, eds*

Commenting on scientific and cultural developments through various media in his native region of Flanders, Jean Paul Van Bendegem (b. 1953) has been a relatively well-known philosopher for many years now. Although the entirety of his work is based on the very same humanistic principles, his ventures in logic and in the philosophy of mathematics have received less attention in these public fora.

For this Festschrift on the occasion of his sixtieth birthday, some of Van Bendegem’s most important intellectual associates in these areas (colleagues and friends, often both) address topics that have been central to their intellectual exchanges with him. More often than not, these are connected in some way or another to Van Bendegem’s notoriously staunch finitism and his focus on the human aspect of mathematics, the sciences, and indeed, even logic.

Special issue of Logique et Analyse. (overview)

Edited by Giuseppe Primiero and Patrick Allo.

**Contributions**

“Taking Stock: Arguments for the Veridicality Thesis” (Hilmi Demir)

“Information versus Knowledge in Confirmation Theory” (Darrell P. Rowbottom)

“Perception and Testimony as Data Providers” (Luciano Floridi)

“Event Mappings for Comparing Formal Frameworks for Narratives” (Bernhard Fisseni and Benedikt Löwe)

**Talk given at** the Workshop on Tableau-systems (Brussels), and at the ILIAS-seminar (Luxembourg).

**Abstract** The traditional idea that logic is specially relevant for reasoning stems from the fact that logic is often conceived as an absolute normative constraint on what we should and should not believe (a synchronic constraint) and as an infallible guide for what we should (or may) come to believe in view of what we already believe (a diachronic constraint). This view is threatened by the existence of rational failures of deductive cogency: belief-states that do not conform to what logic would require, but that are nevertheless more rational than any revised belief-state that would be deductively cogent.

The suggestion that belief-states that are not deductively cogent can still be rational depends itself on the view that logical norms like consistency or deductive closure can sometimes be overruled by extra-logical norms. The latter clearly poses a problem for views that grant logic a special role in reasoning. (The underlying idea is that the special role of logic is inconsistent with the presumed defeasible character of logical norms.)

There are many ways of coping with these insights.

1. One can just accept the conclusion that the received view about logic is wrong,

2. one can deny the existence of rational failures of deductive cogency, or

3. one can revise what we mean by logic and/or how we understand its role in reasoning.

I'm only interested in the last type of response.

In particular, I'd like to focus on strategies that (a) rely on the use of non-classical logics, (b) claim that logic can be used to formalise defeasible reasoning forms, and (c) propose a logical model of belief and belief-revision. While each of these three strategies reduces the gap between logic and reasoning, and all three share some of their formal resources, a unified philosophical account of such proposals is still missing.

My aim in this talk is relatively modest. I only want to develop a minimal model that integrates the crucial features of sub-classical logics, with models of belief that rely on defeasible reasoning and allow for belief-revision. The upshot is to distill an account of the special role of logic in reasoning that is consistent with our best formal (logical) models of belief.

The proposed account combines a modal reconstruction of adaptive consequence relations (Allo, 2013b) with a suggestion to adopt the finer distinctions of different types of group-knowledge (and belief) to model single-agent knowledge (and belief) (Allo, 2013a) and a formal model of belief-merge through communication (Baltag & Smets, 2013).

**Published in** Minds and Machines **24(1)**: 71-83.

Symposium on Luciano Floridi's “The Philosophy of Information” edited by Anthony Beavers.

**Abstract.** Floridi’s chapter on relevant information bridges the analysis of “being informed” with the analysis of knowledge as “relevant information that is accounted for” by analysing subjective or epistemic relevance in terms of the questions that an agent might ask in certain circumstances. In this paper, I scrutinise this analysis, identify a number of problems with it, and finally propose an improvement. By way of epilogue, I offer some more general remarks on the relation between (bounded) rationality, the need to ask the right questions, and the ability to ask the right questions.

Download (preprint)

**Published in** Studia Logica **101(5)**: 933-58.

**Abstract.** Modal logics have in the past been used as a unifying framework for the minimality semantics used in defeasible inference, conditional logic, and belief revision. The main aim of the present paper is to add adaptive logics, a general framework for a wide range of defeasible reasoning forms developed by Diderik Batens and his co-workers, to the growing list of formalisms that can be studied with the tools and methods of contemporary modal logic. By characterising the class of adaptive preference models, this aim is achieved at the level of the model-theory. By proposing formulae that express the consequence relation of adaptive logic in the object-language, the same aim is also partially achieved at the syntactical level.

Download (preprint)

**Published in** Aberdein, A. & I. Dove (eds.), The Argument of Mathematics. Dordrecht, Springer: 339–60.

**Presented at **CLPS13 Conference on Logic and Philosophy of Science (Ghent).

**Abstract.** Because the conclusion of a correct proof follows by necessity from its premises, and is thus independent of the mathematician’s beliefs about that conclusion, understanding how different pieces of mathematical knowledge can be distributed within a larger community is rarely considered an issue in the epistemology of mathematical proofs. In the present paper, we set out to question the received view expressed by the previous sentence. To that end, we study a prime example of collaborative mathematics, namely the Polymath Project, and propose a simple formal model based on epistemic logics to bring out some of the core features of this case-study.
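The kind of epistemic model alluded to can be illustrated with a toy semantics: finitely many worlds, one indistinguishability partition per agent, and group notions such as distributed knowledge evaluated on the intersection of the agents' partition cells. The sketch below is a generic textbook-style illustration, not the model of the paper; the worlds, agents, and the proposition q are invented for the example.

```python
# Toy epistemic model over three worlds; the proposition q holds only at world 0.
q = {0: True, 1: False, 2: False}

# Each agent's partition: worlds in the same cell are indistinguishable to that agent.
partitions = {"a": [{0, 1}, {2}], "b": [{0, 2}, {1}]}

def cell(agent, w):
    # The cell of the agent's partition containing world w.
    return next(c for c in partitions[agent] if w in c)

def knows(agent, w):
    # K_a q at w: q is true in every world the agent considers possible.
    return all(q[v] for v in cell(agent, w))

def distributed(agents, w):
    # D_G q at w: q is true throughout the intersection of the agents' cells,
    # i.e. what the group would know if it pooled its information.
    shared = set.intersection(*(cell(a, w) for a in agents))
    return all(q[v] for v in shared)

print(knows("a", 0), knows("b", 0))  # False False
print(distributed(["a", "b"], 0))    # True
```

At world 0 neither agent knows q on their own, yet q is distributed knowledge of the pair: exactly the situation in which a piece of mathematical knowledge is held by a collaborating community without being held by any single member.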

Download (preprint)

Paper based on The Dynamics of Adaptive Proofs: A Modal Perspective

**Published in** G. Bezhanishvili, S. Loebner, V. Marra & F. Richter (Eds.), TbiLLC 2011, Lecture Notes in Computer Science, Vol. 7758: 155–65.

**Abstract** Standard refinements of epistemic and doxastic logics that avoid the problems of logical and deductive omniscience cannot easily be generalised to default reasoning. This is even more so when defeasible reasoning is understood as tentative reasoning; an understanding that is inspired by the dynamic proofs of adaptive logic. In the present paper we extend the abnormality (preference) models for adaptive consequence with a set of open worlds to account for this type of inferential dynamics. In doing so, we argue that unlike for mere deductive reasoning, tentative inference cannot be modelled without such open worlds.

Following a 10-year period of formal and informal collaboration between several researchers, the establishment of the Society for the Philosophy of Information (SPI) inaugurates the next phase in the development of the philosophy of information as an independent and self-sustained philosophical field.

The Society was founded during the fourth workshop on the philosophy of information held at the University of Hertfordshire in May 2012, and is now ready to open its membership to anyone interested in the philosophy of information while promoting its scientific and educational activities.

Prior collaborations, including part of the work done at the Oxford-based IEG research group, several editorial projects, and a highly successful workshop-series, will find a new home in this society. In addition to this legacy, several new activities will be launched and led by some of the current members of the society.

Concretely, the SPI:

- brings together scholars in the area harnessing the multidisciplinary and international nature of the Philosophy of Information;

- organises workshops, seminars, conferences and other similar activities to explore the philosophical issues concerning the concept of information and its cognate notions;

- publishes teaching material for undergraduate and graduate courses on the Philosophy of Information;

- maintains a state-of-the-art collection of bibliographic resources;

- fosters editorial projects and funding proposals.

In this way, the SPI offers learning and research instruments to undergraduate and graduate students, while promoting the academic network and activities of junior and senior academics whose work focuses on the Philosophy of Information.

The website of the SPI (http://www.socphilinfo.org) is the main centre of activity where we present the aim and focus of the philosophy of information, the mission of its society, and, most importantly, provide information about the current and soon to be launched activities of the SPI. The current activities include:

- a regularly updated PI-related news feed;

- an overview of previous workshops in the philosophy of information, and an announcement of the fifth workshop;

- a brand new textbook on the philosophy of information that forms the cornerstone of our teaching resources.

The soon-to-be-launched activities include:

- a sustained presence of SPI-sponsored sessions at international conferences;

- a repository of teaching resources, including an overview of courses in the philosophy of information that are currently taught;

- bibliographic resources on the philosophy of information, including an annotated bibliography;

- an overview of the many edited volumes and monographs on the philosophy of information that were published during the last ten years;

- book-reviews and book-symposia on notable publications that fit within or are relevant to the philosophy of information.

Interested researchers and students are encouraged to support this enterprise by becoming a member (link) and by taking part in the activities of the society.

This paper supersedes Interactive Models of Closure and Computation

**Published in** the Journal of Philosophical Logic. 42(1): 91–124.

**Abstract.** In this paper I present a more refined analysis of the principles of deductive closure and positive introspection. This analysis uses the expressive resources of logics for different types of group knowledge, and discriminates between aspects of closure and computation that are often conflated. The resulting model also yields a more fine-grained distinction between implicit and explicit knowledge, and places Hintikka’s original argument for positive introspection in a new perspective.

**14th-15th March 2013**

Brussels, Belgium

The impact of the development of formal logic on philosophy in the 20th Century is well-documented. More recently the rise of formal philosophy, and in particular the application of formal methods in epistemology and semantics has proved that logical and mathematical methods have a bright future in philosophy.

With his work on Quine, P. Gochet played an important role in the former movement. Yet, he was also one of the first to recognize the importance of the interactive and dynamic turn in epistemology and formal semantics that characterizes the latter movement.

The scholars and philosophers participating in this event have all played major roles in either or both of these movements.

**Speakers**

Patrick Blackburn, Jaakko Hintikka (TBC), Philippe de Rouilhan, Dov Gabbay, Susan Haack, Gerhard Heinzmann, Hourya Sinaceur, Jean-Maurice Monnoyer, Johan van Benthem, Vincent Hendricks, Jacques Dubucs, Dagfinn Føllesdal, Alex Orenstein, Shahid Rahman.

**Practical Information**

The workshop is held at the University Foundation, Egmontstraat 11 rue D’Egmont, 1000 Brussels, and at the Palais des Académies / Paleis der Academiën, Hertogsstraat 1 Rue Ducale, 1000 Brussels.

Admission is free, but registration is required (no lunch is provided). Please send an email to lkl@bslps.be with your name and affiliation.

**Homepage:** http://www.bslps.be/LKL2013/

**27th-28th March 2013**

**University of Hertfordshire, UK**

Submissions are invited for the Fifth Workshop on the Philosophy of Information, which will take place at the University of Hertfordshire, 27th-28th March 2013.

The topic this year will be the intersections between qualitative and quantitative views of information.

There is no registration fee, and no fee for the refreshments, lunches, and the workshop dinner.

Bursaries that will cover the participation expenses will be awarded on the basis of need and scientific merit.

Please send abstracts of approximately 1000 words to Mrs Penny Driscoll,

A selection of the best papers will be submitted for publication in a peer-reviewed journal, tba. Papers from the 4th workshop are forthcoming in Minds and Machines.

The Workshop is organised by the UNESCO Chair in Information and Computer Ethics, in collaboration with the AHRC project ‘Understanding Information Quality Standards and their Challenges’ (2011-2013).

For more information about format and previous participants, see previous workshops in the series: http://philosophyofinformation.net/WPI/WPI_Home/Home.html

Talk given at the Conference on the Foundations of Logical Consequence (Arché Centre for Logic, Language, Metaphysics and Epistemology, University of Saint Andrews).

Published in Erkenntnis. 77(2): 167-85.

(joint work with Edwin Mares)

Abstract. The prima facie case for considering “informational semantics” as an alternative explication of the notion of logical consequence, alongside the model-theoretical and the proof-theoretical ones, is easily summarised. Where model-theory is standardly associated with a defence of classical logic, and proof-theory with a defence of intuitionist logic, informational semantics seems to be wedded to relevant and other substructural logics. As such, if the CL, IL, RL trio is a representative chunk of a broader range of logical options, informational semantics surely has its place. Yet, it is even easier to dismiss the suggestion that informational semantics provides an apparently missing third conception of logical consequence. After all, isn't it just a variant of the usual interpretation of the Routley-Meyer relational semantics rather than a genuine alternative to a model-theoretic account? Or worse, isn't it a mere metaphor? In the present paper, we want to consider a more subtle answer to the question of whether informational semantics is a real alternative to the two more traditional contenders.

Paper based on Paraconsistency and the Logic of Ambiguous Connectives

Published in Paraconsistency: Logic and Applications (Logic, Epistemology and the Unity of Science) edited by Koji Tanaka, Francesco Berto, Edwin Mares and Francesco Paoli, 2012: 57–79.

Abstract. Substructural pluralism about the meaning of logical connectives is best understood as the view that natural language connectives have all (and only) the properties conferred by classical logic, but that particular occurrences of these connectives cannot simultaneously exhibit all these properties. This is just a more sophisticated way of saying that while natural language connectives are ambiguous, they are not so in the way classical logic intends them to be. Since this view is usually framed as a means to resolve paradoxes, little attention is paid to the logical properties of the ambiguous connectives themselves. The present paper sets out to fill this gap by arguing that substructural logicians should care about these connectives, by describing a consequence relation between a set of ambiguous premises and an ambiguous conclusion, and finally by exhaustively characterising the logical properties of ambiguous connectives.

**Invited talk given at** the Special Session of CiE on Open Problems in the Philosophy of Information. Cambridge, UK.

**Published in** S.B. Cooper, A. Dawar, and B. Löwe (Eds.), CiE 2012, Lecture Notes in Computer Science, Vol. 7318:17–28.

http://dx.doi.org/10.1007/978-3-642-30870-3_3

**Abstract.** Informational conceptions of logic are hardly novel. We find them in the work of John Corcoran, in several papers on substructural and constructive logics by Heinrich Wansing, and in the interpretation of the Routley-Meyer semantics for relevant logics in terms of Barwise and Perry’s theory of situations.

Allo & Mares [2] present an informational account of logical consequence that is based on the content-nonexpansion platitude, but that also relies on a double inversion of the standard direction of explanation (information doesn’t depend on a prior notion of meaning, but is used to naturalize meaning; and informational content is not defined relative to a pre-existing logical space, but that space is constructed relative to the level of abstraction at which information is assessed).

In this paper I focus directly on one of the main ideas introduced in that paper, namely the contrast between logical discrimination and deductive strength, and use this contrast to (1) illustrate a number of open problems for an informational conception of logical consequence, (2) review its connection with the dynamic turn in logic, and (3) situate it relative to the research agenda of the philosophy of information.

**Talk given at** the Fourth Workshop on the Philosophy of Information. University of Hertfordshire.

**Abstract.** Standard refinements of epistemic and doxastic logics that avoid the problems of logical and deductive omniscience cannot easily be generalised to default reasoning. This is even more so when defeasible reasoning is understood as tentative reasoning; an understanding that is inspired by the dynamic proofs of adaptive logic. In the present paper we extend the preference models for adaptive consequence with a set of open worlds to account for this type of inferential dynamics. In doing so, we argue that unlike for mere deductive reasoning, tentative inference cannot be modelled without such open worlds. We use this fact to highlight some features of the informational conception of logic.

**Edited by** Patrick Allo & Giuseppe Primiero.

**Published by** the Koninklijke Academie voor Wetenschappen Letteren en Schone Kunsten België.

**Contributions**

- Benedikt Löwe “Methodological remarks about comparing formal frameworks for narratives”
- Lorenz Demey “Narrative and Information: Comment on Benedikt Löwe”
- Darrell Rowbottom “Information versus Knowledge in Confirmation Theory”
- Erik Myin “Extended Information Processing”
- Francesca Poggiolesi “A Pragmatic Argument in Support of Analyticity”
- Giuseppe Primiero “On the necessity of (sometimes) being synthetic: Comment on Francesca Poggiolesi”
- Luciano Floridi “Perception and Testimony as Data Providers”
- Adriane Rini “Modal Notions: Perception and Testimony as Data Providers: Comment on Floridi”
- Liesbeth De Mol “Reasoning with computer-assisted experiments in mathematics”
- Bart Van Kerkhove “Comment on De Mol”

**Review of** "Not Exactly: In Praise of Vagueness" (Kees van Deemter)

**Published in **Minds and Machines. **22(1)**: 41–45.

**Talk given at** Ninth International Tbilisi Symposium on Language, Logic and Computation

**Abstract.** Adaptive logics are logics for defeasible inference that are characterised by, on the one hand, a formula preferential semantics, and, on the other hand, a dynamic proof-theory (Batens 2007). Because adaptive logics rely on a truly dynamic perspective on logical inference, one would expect that a comparison and integration of adaptive logics with other dynamic logics should be a fruitful enterprise. It does, for instance, make sense to define a class of Kripke-style models that allow us to reformulate the adaptive consequence relation with the standard tools of modal logic (Allo 2011a), or to develop a dynamic doxastic logic where the relation between an agent’s knowledge (or firm beliefs) and defeasible beliefs is governed by an adaptive logic (Allo 2011b). In either case, we get a better idea of how adaptive logics are related to modal logics, but we still miss out on the one crucial aspect of adaptive logics: its dynamic proof-theory. The main aim of this paper is to fill this gap.

Presented at (Anti-)Realisms, Logic and Metaphysics (Nancy, France)

Published in The Realism-Antirealism Debate in the Age of Alternative Logics (Logic, Epistemology and the Unity of Science) edited by Shahid Rahman, Mathieu Marion and Giuseppe Primiero, 2011: 1-23.

Abstract. The present paper’s aim is to put the notion of ambiguous connectives, as explored in Paoli (2003, 2005), in an informational perspective. That is, starting from the notions of informational content and logical pluralism, we ask what it means for a disjunction, i.e. a message of the form “φ or ψ”, to be informative. The bottom line of this paper is that being a pluralist about informational content can even be defended against those who hold a realist conception of semantic information.

**Talk given at** the Dynamics in Logic Workshop (Brussels)

**Abstract.** Adaptive logics provide a general framework for all kinds of defeasible reasoning in terms of, on the one hand, a preferential semantics, and, on the other hand, a dynamic proof-theory. In a previous paper I described a class of Kripke-models that allows for the reformulation of the consequence relation of adaptive logic in a modal logic. I also claimed that the modal reconstruction would facilitate the comparison and the interaction with other formalisms like preference logics, conditional logics, and the conditional doxastic models of Baltag & Smets [2008]. In the present paper I substantiate this claim. This is done by (a) describing the class of adaptive preference models, which are based on an abnormality-ordering of the states in a model and thus allow for the reconstruction of the consequence relation of adaptive logics; (b) introducing a modification of that approach that is even closer to the plausibility orderings used to formalise belief and conditional belief; and (c) comparing a number of distinctive features of the different approaches, and showing how we can combine the techniques of both approaches. In particular, I shall comment on how abnormality-orderings can be used in doxastic logics, and how awareness can be used to fine-tune the modal reconstruction of the adaptive consequence-relation.

Patrick Allo (editor), Putting Information First: Luciano Floridi and the Philosophy of Information, (Oxford: Wiley-Blackwell, 2011).

**Talk given at** the EEN-meeting, Lund.

**Abstract.** Harman’s view that logic isn’t specially relevant for reasoning (Harman [1986]) is best viewed as a multi-faceted objection to the received view that the laws of logic are (or provide) general as well as infallible rules for reasoning, rather than as a focused attack on this view.

By focusing on so-called rational failures of deductive cogency, and individuating four different attitudes towards them, I show how we can resist Harman's conclusion. On one account, a failure of deductive cogency is rational whenever Γ entails φ, but believing φ conditional on Γ is itself irrational. That is, φ cannot be accepted while (a) any revision of Γ that does not entail φ is arbitrary and therefore irrational as well (cf. the paradox of the preface), (b) there is no immediate way to revise Γ, or (c) revising Γ is too costly. The four different attitudes are simple revisionism, sophisticated revisionism, basic scepticism, and critical scepticism.

The following theses are defended:

1. The simple revisionist and the basic sceptic make the same mistake. They assume that because the rules of (classical) logic have no exceptions, norms based on this logic should also be exceptionless.

2. Taking the role of logic in reasoning seriously commits us either to sophisticated revisionism or to critical scepticism.

3. The sophisticated revisionist and the critical sceptic do not need to disagree about the appropriate formalism to model norms for reasoning. They only disagree on how the formalism should be understood.

Published in Philosophical Studies, 153(3): 417–34.

Abstract. The logic of ‘being informed’ gives a formal analysis of a cognitive state that coincides with neither belief nor knowledge. For Floridi, who first proposed the formal analysis, it is supported by the fact that being informed, unlike belief, is a factive state, and, unlike knowledge, not a reflective one. This paper takes a closer look at the formal analysis itself, provides a pure and an applied semantics for the logic of being informed, and tries to find out to what extent the formal analysis can contribute to an information-based epistemology.

**Research Objectives**

The notion of logical discrimination captures what can be “told apart” in a given logical system. We say, for instance, that intuitionist logic can discriminate between p and ¬¬p, whereas classical logic cannot, or that paraconsistent logics can tell different inconsistent theories apart, while from a classical viewpoint there is only one such theory: the trivial one. Still, even when it is acknowledged that non-classical logics like intuitionist and paraconsistent logic allow for finer discriminations than classical logic, considerations of that kind are not assumed to bear upon core issues in the philosophy of logic. Compare this with intentional notions like belief or meaning, where the granularity-problem of finding the right way to discriminate meanings or to characterise the content of intentional states is a central concern (Barwise [1997], Stalnaker [1984]); similar considerations about logical discrimination have no comparable impact on the choice of a logic. The present project is motivated by the view that thinking explicitly about logical discrimination is as central to the choice of a logic as the traditional granularity-problem is to how we model intentional states.
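The double-negation example can be checked concretely. The following toy Kripke-model evaluator (illustrative code, not part of the original text; all names are mine) exhibits a world that forces ¬¬p without forcing p, which is exactly the distinction classical logic collapses:

```python
# A minimal two-world intuitionistic Kripke model: w0 <= w1, with the
# atom p true only at the later world w1 (valuation is persistent).
worlds = ["w0", "w1"]
above = {"w0": ["w0", "w1"], "w1": ["w1"]}   # upward reachability
val = {"w0": set(), "w1": {"p"}}

def forces(w, formula):
    """Intuitionistic forcing for atoms and negation.

    Formulas are tuples: ("atom", name) or ("neg", subformula).
    """
    kind = formula[0]
    if kind == "atom":
        return formula[1] in val[w]
    if kind == "neg":
        # w forces ~A iff no world v >= w forces A
        return all(not forces(v, formula[1]) for v in above[w])
    raise ValueError("unsupported connective")

p = ("atom", "p")
not_not_p = ("neg", ("neg", p))

print(forces("w0", p))          # False: w0 does not force p
print(forces("w0", not_not_p))  # True: w0 forces ~~p
```

At w0 the two formulas come apart, so the intuitionistic consequence relation registers a distinction that the classical one does not.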

The notion of logical discrimination can only take up a central role in our thinking about logic if it can stand on a par with the more established notions of consequence, truth, and validity. This presupposes that the notion in question is already sufficiently precise, which isn’t yet the case: logical discrimination can refer to several distinct phenomena. Humberstone [2005], for instance, individuates four of them. The aim of this project is accordingly twofold: a first aim is to clarify the concept or concepts of logical discrimination; a second aim is to give considerations about logical discrimination a central place in our theorising about logic. The latter is important because logical discrimination is already one of the central notions used to formulate the informational conception of logical consequence (Allo & Mares [forthcoming]), but also because implicit considerations about logical discrimination are already at work in the philosophy of non-classical logic and in the debate between logical pluralists and logical monists.

To get a better grip on the different types of logical discrimination, we first need to focus on what logics (as opposed to presentations of logics) can discriminate, and then we need to introduce the distinction between (1) what can be captured by (interpreted) formal languages (the ability to discriminate between different structures), and (2) the discriminations that can be made by a specific logic. The former type of discriminatory power reduces to the expressiveness of a language; the latter type bears on the relations of synonymy and logical equivalence between formulae. This distinction between the expressiveness of a language and the granularity of the relations of synonymy and logical equivalence needs further clarification.

A detailed account of how expressivity and granularity are related will be developed by (a) providing a more precise formulation of the formal relation between discrimination as expressivity and as granularity; (b) spelling out a number of illustrative examples based on the two standard contenders of classical logic (intuitionist and relevant logic); and (c) showing how the conceptual primacy of logical discrimination within an informational semantics benefits from a generalised account of logical discrimination.

**State of the Art Review**

The formal interest of the two guises of logical discrimination that were introduced above can be described by referring to how these already surface in the relevant literature. Discrimination at the level of the language (and its interpretation) is the focus of model theory. Discrimination as granularity is relevant in view of the inverse-relationship between synonymy and deductive strength. The latter relation holds for a wide range of logics (the core topic of Humberstone [2005]), and is captured by the truism that “the more a logic proves, the fewer distinctions it registers” (ibid., 207). As such, logical discrimination as granularity is closely related to the traditional focus on valid arguments and the study of reasoning or inference, while the expressivity of a formal language is more concerned with our means for describing the world. The interest of both types of logical discrimination for the philosophy of logic is illustrated by two paradigm cases.

A first paradigm case refers to two ways of changing one’s logic: adopting an extension of a logic, or adopting a rival of that logic, a classical distinction due to Haack [1974]. In the first case, one extends a logic with some new logical vocabulary to obtain a conservative extension of the original logic (e.g. by adding modal operators to classical propositional logic). In the second case, the language remains unchanged, but the set of theorems or valid deductions is altered (e.g. by moving from classical to intuitionist logic). Clearly, both changes involve a change in logical discrimination, but the change is of a different kind. One reason for thinking that the gap between a more fine-grained logic and a more expressive logic isn’t absolute is that the distinction between extensions and rivals of classical logic isn’t absolute either. This is best illustrated by the well-known fact that intuitionist logic can be faithfully embedded in the classical modal logic S4 (but see Aberdein & Read [2009: 2.1.2] for a discussion of its significance). Thus, if the contrast between extended and rivalling logics can itself be traced back to a more general contrast between two guises of logical discrimination, a better understanding of the latter contrast could arguably also lead to new insights about how rivals and extensions of a logic are related. To the best of my knowledge, this line of research hasn’t been systematically pursued in the past.
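The faithful embedding of intuitionist logic into S4 mentioned above is the Gödel–McKinsey–Tarski translation; for reference, a standard formulation (notation mine) reads:

```latex
\begin{align*}
  p^{\Box} &= \Box p\\
  (A \wedge B)^{\Box} &= A^{\Box} \wedge B^{\Box}\\
  (A \vee B)^{\Box} &= A^{\Box} \vee B^{\Box}\\
  (A \to B)^{\Box} &= \Box(A^{\Box} \to B^{\Box})\\
  (\neg A)^{\Box} &= \Box\neg A^{\Box}
\end{align*}
```

with $\vdash_{\mathrm{IL}} A$ if and only if $\vdash_{\mathrm{S4}} A^{\Box}$, which is the precise sense in which a rival of classical logic reappears inside one of its extensions.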

A second paradigm case concerns what one believes to be the core subject-matter of logic. Orthodoxy has it that this is the concept of logical consequence (see, for instance, the second chapter of Beall & Restall [2006]). As such, the traditional view ties logical theorising to the study of the deductive strength of specific logics. If, by contrast, one looks at some of the most important results in twentieth-century logic, it is clear that the notion of definability is equally important (van Benthem [2008]). The connection with expressivity is plain in the latter case, but also on the more traditional view there’s a connection with logical discrimination via the inverse-relation between deductive strength and granularity. Consequently, getting a better grip on how granularity and expressivity are related might help us resolve the tension between the two competing views about the primary subject-matter of logic.

**Research Project**

Connecting the two notions

The purpose of connecting two guises of logical discrimination isn’t to reduce one to the other, but rather to show, first, how they are formally related, and, secondly, how considerations about both granularity and expressivity function in the process of logical modelling. Expressivity is in the first place a property of formal languages and their intended interpretation, while granularity is a property of specific logics. By making this clear, we can already show that the adoption of an extended logic isn’t merely about enhancing the expressive means of our logic, but actually involves two choices of logical discrimination: one aimed at the language, and the other aimed at the logic itself.

In rough outline, we can thus characterise the distinction between expressivity and granularity as follows. When we decide to use a certain formal language, we decide on the in-principle available distinctions. When, furthermore, we settle on a given consequence-relation over that language, we decide which of the in-principle available distinctions really matter. That is, we decide which distinctions are retained, and which distinctions are collapsed. This type of interaction is nicely illustrated by the well-known fact that while we can embed classical logic (the more coarse-grained logic) into intuitionist logic (a more fine-grained logic), we can only embed intuitionist logic (a more fine-grained logic) in a modal extension of classical logic (a more expressive logic with a more coarse-grained propositional fragment).
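The first of these embeddings can be made precise as well: for propositional logic, Glivenko's theorem (cited here for reference; notation mine) gives

```latex
\vdash_{\mathrm{CL}} A \quad\text{iff}\quad \vdash_{\mathrm{IL}} \neg\neg A
```

so the coarser classical distinctions are recovered inside the more fine-grained intuitionist logic under double negation, without any change of language.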

When we think of logic as a modelling tool (Shapiro [2006: Chapt. 2]), we can relate considerations about granularity and expressivity to the criteria we use when we try to construct a good model. Basically, finding a good model means choosing the right degree of logical discrimination. The expressivity of the language we use is closely tied to one purpose of models: the description of the world. The granularity of a logic is also related to this descriptive aim, but is more intimately tied to another purpose: deductive inference. Again, we encounter the two already mentioned purposes of logical theorising, but since each aim can be furthered by choosing the right degree of logical discrimination, there’s no insurmountable gap between them. As illustrated above with respect to embedding one logic in another, deciding on a set of in-principle available distinctions and deciding which of these distinctions can be collapsed do not occur independently of each other, but need to be balanced. Furthermore, there is no reason to presuppose that one type of logical discrimination is more basic than the other. Our means for describing the world influence what we can reason about, but the opposite is equally true. A pre-existing deductive practice also limits the distinctions that can be usefully made by a language. This two-way interaction seems closer to the practice of logical modelling, and needs to be made more precise to show how the descriptive and deductive aims of logic are related.

Applications

The main reason for the study of logical discrimination as an independent notion is its intended role within the informational conception of logic described in Allo & Mares [forthcoming]. The informational conception is meant as a contender for the traditional truth-conditional and inferential conceptions of logic. In particular, it proposes a double inversion of the usual order of explanation: Information comes before meaning, and information comes before possibility (Barwise [1997]). It is based on the assumption that the way we individuate informational content should be understood relative to how we access and use information. This approach presupposes that information is itself a relational concept: It is there to be accessed, but how it can be accessed depends as much on the informee as on the world. More exactly, it depends on what the world is like (i.e. how information is distributed), but also on the distinctions that are already available to the informee and on the distinctions the informee chooses to ignore. These features of the world and of the informee give rise to so-called global constraints on the logical space, which determine the possibilities, and thus yield a consequence relation.

This description of informational semantics implicitly favours the granularity account of logical discrimination, but thinking about the distinctions that are available to an informee also forces us to integrate the expressivity account of logical discrimination. A more elaborate description and defence of the informational conception of logic will therefore have to rely on a better understanding of the two main guises of logical discrimination.

Two further topics in the philosophy of logic that benefit from a better insight in the phenomenon of logical discrimination are described below.

Discussions about how logic and information are related pop up every now and then in the literature. A recent disagreement concerns the so-called modal and categorical information-theories (van Benthem [2010], Sequoiah-Grayson [2010]). The former is customarily known as the information-as-range paradigm (Stalnaker [1984]). The latter is related to the study of substructural logics, and the view that the Lambek calculus is a kind of basic logic for information flow and in particular inference. One aim of my “The Many Faces of Closure and Introspection” was to show that we can remain within the framework of modal information to model inferential processes, and thus avoid appealing to categorical information to explain inference as a dynamic process. The upshot of that proposal is, however, not to show that categorical information theory should be reduced to, or even be replaced with, modal information theory.

The difference between the modal and the categorical paradigm can be usefully compared to the tension between rivals and extensions of classical logic. Categorical information-theory, through its use of a substructural logic, proposes a rival of classical logic. Modal information-theory, through its use of modal logic, proposes an extension of classical logic. This insight situates the debate in the broader context of the integration of two guises of logical discrimination, and suggests that this is an area where a better understanding of how discrimination as granularity and as expressivity are related can be put to good use.

A formally even more challenging application of insights about how granularity and expressivity are related arises in the context of modal reconstructions of nonmonotonic logics (e.g. my “Adaptive Logic as a Modal Logic”). As in the previously discussed examples, this is a case where a weaker-than-classical logic is embedded in an extension of classical logic. The main difference is that here a nonmonotonic logic is embedded in a monotonic logic. Unlike the former examples, it is not all that obvious how this can be understood in terms of two types of logical discrimination. For sure, the modal logic used to reformulate the adaptive consequence relation is more expressive, but it is much less clear whether we can understand the standard presentation of adaptive logics in terms of more fine-grained synonymy and logical equivalence relations. Yet, if this can be achieved, it promises to integrate the topic of defeasible inference within the informational account of logical consequence, which would count as a major advantage of the informational conception vis-à-vis the traditional truth-conditional and inferential conceptions.

**Work Plan (based on full-time research position)**

Because the completion of the proposed project requires the study of several formal notions, but also the development of a broader philosophical framework, the work plan described below develops two parallel paths that come together in the final part.

1. Preliminary work (year 1)

Formal path: Study the notions of synonymy and logical equivalence as they are used in, specifically, Humberstone [2005], and, more broadly, in the literature on algebraic semantics.

Philosophical path: Expand on previous work on “informational semantics,” with particular attention to the contrast between logical and non-logical forms of discrimination.

2. In-depth investigation (year 1–3)

Formal path: (i) Investigate how logical discrimination surfaces in algebraic and Kripke-style semantics. Use these two formalisms to analyse and compare the granularity and expressivity of modal and substructural logics. (ii) Investigate how logical discrimination surfaces in the Kripke-style semantics for adaptive logics.

Philosophical path: Further develop the logic-as-modelling view, and show how, for a given application, considerations about granularity and expressivity interact, and jointly determine how we choose a logic for that application.

3. Application (year 4–5)

Informational semantics: Describe how considerations about logical discrimination and expressivity function within an informational conception of logic. Explain what this means for the philosophy of logic (topics: logical pluralism, subject-matter of logic, contrast between rival and extended logics).

Modal and categorical information: See to what extent the contrast between modal and categorical information is analogous to the contrast between rival and extended logics.

Informational semantics for adaptive logics: Use the insights on how logical discrimination surfaces in Kripke-style models for adaptive logics to integrate adaptive logics within the informational conception of logic. Investigate what this means for the philosophy of logic.

**References**

Barwise, J.: 1997, ‘Information and Impossibilities’, Notre Dame Journal of Formal Logic **38**, 488–515.

Stalnaker, R.: 1984, Inquiry. MIT Press, Cambridge MA.

Humberstone, I. L.: 2005, ‘Logical Discrimination’, in J.-Y. Béziau (ed.), Logica Universalis, Birkhäuser Verlag, Basel, pp. 207–228.

Allo, P. and Mares, E.: 2011, ‘Informational Semantics as a Third Alternative?’, Erkenntnis (forthcoming).

Haack, S.: 1974, Deviant Logic. Some Philosophical Issues. Cambridge University Press, Cambridge.

Aberdein, A. and Read, S.: 2009, ‘The Philosophy of Alternative Logics’, in L. Haaparanta (ed.), The Development of Modern Logic, Oxford University Press, Oxford, pp. 613–723.

Beall, Jc and Restall, G.: 2006, Logical Pluralism. Oxford University Press, Oxford.

van Benthem, J.: 2008, ‘Logical Dynamics Meets Logical Pluralism?’, Australasian Journal of Logic **6**, 182–209.

Shapiro, S.: 2006, Vagueness in Context. Oxford University Press, Oxford.

van Benthem, J.: 2011, ‘Categorical versus Modal Information Theory’, Linguistic Analysis **36**, 533–540.

Sequoiah-Grayson, S.: 2010, ‘Epistemic Closure and Commutative, Nonassociative Residuated Structures’, Synthese, 1–16 (Online First).

**Talk given at** the Séminaire interuniversitaire de Logique et Ontologie, Namur.

**Abstract.** The *prima facie* case for considering "informational semantics" as an alternative explication of the notion of logical consequence alongside the model-theoretical and the proof-theoretical ones is easily summarised. Where model-theory is standardly associated with a defence of classical logic (CL), and proof-theory with a defence of intuitionist logic (IL), informational semantics seems to be wedded to relevant and other substructural logics (RL). As such, if the CL, IL, RL trio is a representative chunk of a broader range of logical options, informational semantics surely has its place. Yet, it is even easier to dismiss the suggestion that informational semantics provides an apparently missing third conception of logical consequence. After all, isn't it just a variant of the usual interpretation of the Routley-Meyer relational semantics rather than a genuine alternative to a model-theoretic account? Or worse, isn't it a mere metaphor? In the present paper, we want to consider a more subtle answer to the question of whether informational semantics is a real alternative to the two more traditional contenders. Our discussion undoubtedly leaves many questions unanswered. We mainly try to give the reader an idea of why informational semantics is a genuine and attractive alternative. To that end, we sketch two complementary pictures of the informational approach to logical consequence: a traditional model-theoretic one, and a more abstract one based on the inverse relation between logical discrimination and deductive strength.

**Contactforum** hosted by the Royal Flemish Academy of Belgium for Science and the Arts.

**Date:** November 18 & 19, 2010.

**Venue:** Paleis der Academiën, Hertogstraat 1, 1000 BRUSSELS.

An ambitious project like the philosophy of information cannot operate in isolation. Not only does it have to rely on input from the sciences of information and computation, it also needs to convince those who are sympathetic to the project (including those who had been philosophers of information without realising it) to contribute actively, and encourage the sceptics to challenge the basic assumptions of the philosophy of information. The series of workshops in the philosophy of information provides a platform for this kind of interaction.

For the third workshop on the philosophy of information, we bring together researchers who have been working in the philosophy of information, with researchers working in related areas.

Talk given at the Logic, Reasoning and Rationality Congress (Gent).

Abstract. Adaptive logics have evolved from systems for handling inconsistent premises to a unifying framework for all kinds of defeasible reasoning, with the standard format (Batens [2007]) as one of its major strengths. Modal logics have gone through a similar evolution. They were originally conceived as an analysis of alethic modalities, but have now become the privileged language to reason about all kinds of relational structures. One field where modal logics have been used as a unifying framework is in the analysis of what Makinson [1993] describes as the different “faces of minimality” in defeasible inference, conditional logic, and belief revision. Modal translations of so-called minimality semantics are found in Boutilier [1990], and more recently in van Benthem et al. [2006]. Given the hypothesis that the standard format of adaptive logic is sufficiently general to incorporate most (if not all) forms of defeasible inference (Batens [forthcoming: Chapt. 1]), it is natural to ask whether the adaptive consequence relation can also be formulated in a modal language. The main reason why such a reconstruction is possible is that adaptive logics are obtained by (i) ordering models in a certain way, and (ii) using that ordering to select a subset of all models of the premises to obtain a stronger consequence relation.
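The selection mechanism described at the end of the abstract can be made explicit for the minimal abnormality strategy; the following formulation is standard in the adaptive-logic literature, though the notation here is mine (with $\Omega$ the set of abnormalities):

```latex
\mathrm{Ab}(M) = \{A \in \Omega : M \models A\},
\qquad
M \leq M' \ \text{iff}\ \mathrm{Ab}(M) \subseteq \mathrm{Ab}(M'),
```

so that $\Gamma \models_{\mathrm{AL}} A$ iff $M \models A$ for every $\leq$-minimal model $M$ of $\Gamma$. A modal reconstruction then has to recover this selection of minimally abnormal models inside the model theory of a monotonic modal logic.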

**Talk given** at the Formal Epistemology Workshop 2010 (Konstanz, Germany).

**Abstract**. In this paper I give a more refined account of deductive closure and positive introspection by using the expressive resources of logics for different types of group knowledge.

**Published in** Knowledge, Technology and Policy, 23(1): 25–40.

**Abstract.** In this paper I reassess Floridi's solution to the Bar-Hillel-Carnap paradox (the information-yield of inconsistent propositions is maximal) by questioning the orthodox view that contradictions cannot be true. The main part of the paper is devoted to showing that the veridicality thesis (semantic information has to be true) is compatible with dialetheism (there are true contradictions), and that unless we accept the additional non-falsity thesis (information cannot be false) there is no reason to presuppose that there is no such thing as contradictory information.

Presented at CAP in Europe 2009 (Barcelona, Spain)

Abstract. The main aim of this paper is to lay the foundation for a broader meta-theoretical reflection on the practice of the formal modeling of cognitive states and actions. Two examples, one from basic epistemic logic, the other from dynamic epistemic logic, are used to illustrate some well-known challenges. These are further evaluated by means of two oppositions: the contrast between abstraction and idealization, and the difference between a ‘properties of the agent’ reading and a ‘properties of the model’ reading. To conclude, some methodological insights inherited from the philosophy of information are proposed as a fruitful way of understanding the formal modeling of cognitive states and actions.

The philosophy of information and information technology track is part of the ECAP '09 conference.

This paper supersedes Logic in Epistemic Perspective. Adaptive Logic as Conditional Belief.

Abstract. In this paper we reconstruct the final derivability relation of adaptive logic within the framework of conditional doxastic logic (CDL). On the formal level, this is achieved by generalising the preference ordering used in CDL in such a way that it can capture the preferential semantics of adaptive logic. The final result is a class of preferential models wherein a boxed formula is valid iff a corresponding formula can be finally derived using a particular adaptive strategy.

Presently being revised.

Paper based on A two-level approach to logics of data and information

Published in Synthese 167(2) (2009): 231-249.

(Knowledge, Rationality, and Action) special issue on the Philosophy of Information and Logic edited by Luciano Floridi and Sebastian Sequoiah-Grayson (Online First).

Abstract. Cognitive states as well as cognitive commodities play central though distinct roles in our epistemological theories. By being attentive to how a difference in their roles affects our way of referring to them, we can considerably improve our understanding of the structure and functioning of our main epistemological theories. In this paper we propose an analysis of the dichotomy between states and commodities in terms of the method of abstraction, and more specifically by means of infomorphisms between different ways to classify states of information, information-bases, and evidential situations.

Talk given at the VAF Conference 2009 (Tilburg).

Abstract. When reformulated as a modal logic for conditional belief, the main properties of adaptive logics can be captured as properties of the resulting modal operators. The purpose of the present paper is to give a broadly epistemic interpretation to these modalities, and use these to bring out the distinctive epistemic character of adaptive logics. Concretely, I want to do three things: (a) give a brief description of a modal logic for conditional belief based on the semantics of adaptive consequence; (b) investigate the role of logical and epistemic or doxastic possibilities in this logic; and (c) show how these modalities can be used to elucidate the relevance of logic for deductive reasoning.

Review of: “Mainstream and Formal Epistemology” (Vincent Hendricks)

Published in Erkenntnis 69(3) (2008): 427-432.

Talk given at the Fourth World Congress of Paraconsistency (Melbourne)

Abstract. Non-dialetheic proponents of paraconsistency have often appealed to ambiguity to explain away the apparent acceptance of true contradictions in their paraconsistent approach to logical consequence. This can be done by referring to an ambiguity at the level of the logical or the non-logical vocabulary. The kind of ambiguity I’m interested in, relates the validity of explosion to the ambiguity of the classical connectives. (see e.g. Read (1981)).

While this is all fairly well known, it is generally not remarked that classical logic offers only one of two intuitively plausible ambiguous readings of the logical connectives, namely a reading on which an ambiguous connective exhibits the deductive features of both the extensional and the intensional connectives. Another option, however, takes each ambiguous connective that plays a role in an argument to exhibit (in a non-deterministic way) the deductive features of either an intensional or an extensional connective.

Paper based on Adaptive Logics presented in ‘almost Amsterdam style’: an outline and an application.

Abstract. In this paper we reconstruct the final derivability relation of adaptive logic (a peculiar kind of nonmonotonic logic developed with the intent to formalise and explicate real-life reasoning) within the framework of modal epistemic logic. On the formal level, this is achieved through the adoption of (i) a modal language with operators labelled with sets of non-modal formulae, and (ii) a model theory which evaluates modal formulae over a contextually restricted range of possible worlds.

Talk given at Logics for Dynamics of Information and Preferences Working sessions (ILLC, Amsterdam).

Abstract. Adaptive logics (a family of nonmonotonic logics introduced by Batens, and further developed by his co-workers) are often suggestively described as "logics which adapt themselves to the specific premise-sets they are applied to." Accordingly, their functioning is fleshed out in terms of a dynamic proof-theory which allows for defeasible inference-steps. Notwithstanding the fact that this is an accurate description of what adaptive logics do, this is not always the best way to introduce them. The obvious alternative is to explain some of the basic insights of adaptive logics in terms of their preferential semantics. Admittedly, this is not the adaptive logician's preferred starting point (for it is all about the dynamic proof-theory), but from a present-day semantical perspective on logical dynamics it is undoubtedly the most familiar one.

In this presentation I want to do two things. First, and most importantly, to reformulate the model-theory of adaptive logics in a modal-epistemic framework. This move requires us to interpret the preferential semantics relative to a box-operator with a contextually restricted range. Secondly, and largely as an illustration of the former, to describe how this framework can be applied to elucidate how information loss due to equivocal communication could be reduced.

Published in the Journal of Philosophical Logic 36(6) (2007): 659-94.

Abstract. Up to now, theories of semantic information have implicitly relied on logical monism, or the view that there is one true logic. The latter position has been explicitly challenged by logical pluralists. Adopting an unbiased attitude in the philosophy of information, we take a suggestion from J.C. Beall and Greg Restall to heart and exploit logical pluralism to recognise another kind of pluralism. The latter is called informational pluralism, a thesis whose implications for a theory of semantic information we explore.

Invited tutorial given at the ILCLI International Workshop on Logic and Philosophy of Knowledge, Communication and Action

Abstract. The tutorial connects two notions of information: the inverse relationship principle which relates informational content to the exclusion of possibilities, and information-structures based on a partial ordering on states of information. Jointly, these allow the formulation of several distinct precise notions of content-individuation.
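The two notions the tutorial connects can be rendered in a toy model. This is a hedged illustration only: the possibility space and the messages are invented, and the partial order shown is simple set inclusion on excluded possibilities.

```python
# Inverse relationship principle, toy version: the content of a message
# is the set of possibilities it excludes; one information state is at
# least as strong as another when it excludes at least as much.

worlds = {"w1", "w2", "w3", "w4"}

def content(compatible):
    """Informational content as the possibilities a message rules out."""
    return worlds - compatible

def at_least_as_informative(a, b):
    """Partial ordering on states: a excludes everything b excludes."""
    return content(b) <= content(a)

p = {"w1", "w2"}          # a message compatible with w1 and w2
p_and_q = {"w1"}          # a stronger message

print(sorted(content(p)))                      # ['w3', 'w4']
print(at_least_as_informative(p_and_q, p))     # True
print(at_least_as_informative(p, p_and_q))     # False
```

On this rendering, distinct notions of content-individuation correspond to distinct choices of possibility space and ordering; the inclusion order above is just the simplest case.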

Presented at the First Workshop on the Philosophy of Information and Logic (Oxford, UK)

Abstract. Cognitive states like knowledge and belief, as well as cognitive commodities like evidence, justification, or proof play a central role in our epistemological theories. Being attentive to the way such states and commodities interact in these theories is particularly important. This is even more so if, besides knowledge, we also want to reason about how data and information improve our overall epistemic position. This is mainly due to the fact that being informed is itself ambiguous between the predominantly syntactic relation of holding a piece of data that qualifies as genuine information and the largely semantic relation of being in a state which satisfies certain conditions. In this paper we argue that getting the relation between states and commodities “right” is a first prerequisite for the choice of bridge axioms in a combined logic of data and information with theoretical virtues similar to the existing combined logics of knowledge and belief. To start with, we formalise the intuitively valid principles that “being informed involves holding data,” and “being informed involves holding a piece of information.” Subsequently, we check how these necessary conditions for being informed constrain the set of plausible bridge axioms, and then outline a generic combined system. To conclude, a number of broader methodological considerations are introduced and related to the specificity of introducing informational considerations into the practice of formal modelling.

Presented at North American Conference on Philosophy and Computing (Loyola University, Chicago, US)

Abstract. The present paper expands upon the previously defended thesis of informational pluralism. This is the view that the content conveyed by a message is a function of the level of abstraction at which the relevant communication is modelled. Specifically, it focuses on the problem of how content and presumed content should be evaluated in settings where the communication is equivocal.

The formal approach is a defeasible account of perceived content. Its functioning is studied informally in terms of the relevant levels of abstraction and the relation of simulation between those levels, and formally characterised in terms of infomorphisms between classifications.

Presented at CAP in Europe 2007 (Twente, The Netherlands)

Published in Waelbers, Briggle & Brey (eds.), Current Issues in Computing and Philosophy, IOS Press.

Abstract. One of the basic principles of the general definition of information is its rejection of dataless information, reflected in its endorsement of ontological neutrality. In general, this principle states that “there can be no information without physical implementation” (Floridi (2005)). Though this is standardly considered a commonsensical assumption, many questions arise with regard to its generalised application. In this paper a combined logic for data and information is elaborated, and used specifically to investigate the consequences of restricted and unrestricted data-implementation principles.

The philosophy of information and information technology track is part of the ECAP '07 conference.

Poster presented at the Formal Epistemology Workshop (CMU, Pittsburgh, US)

Abstract. One of the central aims of the philosophy of information is the formulation of an epistemological theory that is based on information. On this account, and unlike Dretske’s seminal proposal, knowledge should no longer be analysed in terms of beliefs, but directly in terms of the non-doxastic factive attitude of ‘being informed’. A distinctive feature of this project is its simultaneous investigation of information as a commodity, and the statal conditions that are necessary and sufficient for being in a state wherein one is informed. While research on the former aspect has essentially been concerned with the veridical nature of semantic information, research on the latter has, among other things, led to the formulation of an epistemic logic for ‘being informed’.

After a brief elaboration on the discrepancy between reductive analyses of information as a commodity (information as veridical, meaningful, well-formed data) and the alleged primeness of the statal condition for being informed, we propose a formal analysis of a small class of necessary conditions for being informed. The formulation of these conditions elaborates on previous work on the semantics of the modal logic for ‘being informed’, and uses the preferential models of adaptive logic.
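The selection mechanism behind the preferential models of adaptive logic can be sketched in a few lines. This is a minimal toy version, assuming the standard minimal-abnormality idea: each model of the premises is paired with the set of abnormalities it verifies, and a formula follows adaptively iff it holds in every model whose abnormality set is subset-minimal. The three models and their abnormality sets are invented for the example.

```python
def minimally_abnormal(models):
    """Keep the models whose abnormality set has no proper subset
    among the abnormality sets of the candidate models."""
    return [m for m in models
            if not any(n["ab"] < m["ab"] for n in models)]

models = [
    {"name": "M1", "ab": {"!a"},        "verifies": {"p"}},
    {"name": "M2", "ab": {"!b"},        "verifies": {"p"}},
    {"name": "M3", "ab": {"!a", "!b"},  "verifies": set()},
]

selected = minimally_abnormal(models)
print([m["name"] for m in selected])                 # ['M1', 'M2']
# p follows adaptively: it holds in all minimally abnormal models,
# even though the more abnormal M3 falsifies it.
print(all("p" in m["verifies"] for m in selected))   # True
```

The defeasibility is visible here: adding a new premise can add abnormalities to every model, change which models are minimal, and thereby retract p.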

**Invited entry for** *The Language of Science*. Monza: Polimetrica.

This entry is no longer available at http://www.polimetrica.eu/site/?p=125

A local copy can still be accessed here.

Doctoral thesis defended on the 25th of April 2007.

Abstract. The core topic of this thesis lies within a newly emerged field called the philosophy of information (henceforth, PI), a domain which focuses, among other things, on the diversity of informational phenomena and the sciences of information. We investigate the notions of informativeness and informational content with a non-aprioristic attitude derived from the thesis of logical pluralism and the methodology of the philosophy of information. The formal tools used for that task are mainly those provided by non-classical logics. In a more general perspective, the obtained results shed light on the actual and potential interaction between cognitive information, formal logic, and the general adherence to formal methods within the philosophy of information.

The central goal is to formulate and to defend a broadly pluralist understanding of the notions of informativity and informational content. The general structure comprises two chapters that contain most of the preliminary work, and three chapters devoted to the formulation of a pluralist alternative to the received view that is of predominantly monist inspiration. Essentially, the preliminary work is concerned with a general account of informativity, and with the thesis of logical pluralism. The alternative proposal starts with a pluralist interpretation of objective content, and continues with a nonmonotonic interpretation of perceived content. The latter alternative is then further motivated in terms of the adaptive or nonmonotonic conditions that characterise states of information from an internal perspective. These adaptive conditions are described in the final chapter.

Special Issue of Logique & Analyse (Volume 49, Issue 196) edited by Luciano Floridi & Patrick Allo

Contributions

- Allo Patrick: Local Information and Adaptive Consequence
- Floridi, Luciano: The Logic of ‘Being Informed’
- Frápolli, María J. and Francesc Camós: The Informational Content of Necessary Truths
- Jago, Mark: Imagine the Possibilities: Information without Overload
- Mares, Ed: Relevant Logic, Probabilistic Information, and Conditionals
- Sequoiah-Grayson, Sebastian: Information Flow and Impossible Situations

Published in Logique et Analyse 49(196) (2006): 461–488.

Special issue on Logic and the Philosophy of Information.

Abstract. In this paper we aim at providing a formal description of what it means to be in a local or partial information-state. Starting from the notion of locality in a relational structure, we define so-called adaptive generated-submodels. The latter are then shown to yield an adaptive consequence relation such that the derivability of []p is naturally interpreted as a core property of being in a state in which one holds the information that p.
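The locality idea behind generated submodels can be pictured with a short sketch. This is a hedged illustration, not the paper's formal construction: the submodel generated by a point keeps only the worlds reachable from it, so what holds at the point depends on local information only. The frame below is invented.

```python
from collections import deque

def generated_submodel(R, point):
    """Return the worlds reachable from `point` (including it) and the
    accessibility relation restricted to those worlds."""
    reachable, queue = {point}, deque([point])
    while queue:
        w = queue.popleft()
        for v in R.get(w, set()) - reachable:
            reachable.add(v)
            queue.append(v)
    return reachable, {w: R.get(w, set()) & reachable for w in reachable}

R = {"w0": {"w1"}, "w1": {"w1"}, "w2": {"w0", "w3"}}
worlds, subR = generated_submodel(R, "w0")
print(sorted(worlds))   # ['w0', 'w1']  (w2 and w3 drop out)
```

Evaluating []p at w0 in the submodel then only consults this local region, which is the sense in which the derivability of []p reflects the information one locally holds.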

Review of: Models of a Man: Essays in Memory of Herbert Simon (Augier & March, eds.)

Published in Minds & Machines, 16(2) (2006): 221–224.

Presented at CAP in Europe 2006 Conference (Trondheim, Norway)

Abstract. Holding on to the view that Logical Orthodoxy is at best a fallible guide for the formalisation of the concept of semantic information, the inclusion of any logical principle within a logic of information should be the object of closer scrutiny. By investigating the possibility of being informed of a (true) contradiction, this paper adopts the opposite strategy.

Following this unusual method, it is subsequently argued that paraconsistency alone is not enough to motivate the acceptance of some contradictions as genuine information; that accepting contradictory but not veridical information is a rather trivial position; and that only a few motivations for dialetheic (i.e. true contradictory) information stand up to the standards of a theory of semantic information.

Presented at 27ste Nederlands-Vlaamse Filosofiedag, Rotterdam.

In this paper we investigate a refinement of minimally abnormal and reliable models in modal adaptive logics. Starting from lower limit logics below S5, we introduce a new flavour of localised adaptive consequence based on minimally abnormal or reliable point-generated submodels. The design of a proof-system for such a localised consequence calls for a labelling mechanism allowing the derivation of formulas at a point in a frame. A labelled dynamic proof-format is therefore elaborated in the second part of the paper.

Presented at CAP in Europe 2005 Conference (Västerås, Sweden)

Published in Computing, Philosophy, and Cognitive Science, G. Dodig Crnkovic and S. Stuart (eds.), Cambridge Scholars Press.

Abstract. Once the notion of logical pluralism is introduced, it can be seen that theories of semantic information have up to now relied, at least implicitly, on logical monism, the view that there is one true logic. Adopting an unbiased attitude in the philosophy of information, we ought to ask whether logical pluralism could entail informational pluralism. The basic insights from logical pluralism and their implications for a theory of semantic information should therefore be explored.

First, it is shown that (i) the general definition of semantic information as meaningful well-formed data does not favour any logical system, (ii) there are nevertheless good reasons to prefer a given logic above some others, and (iii) preferring a given logic does not contradict logical pluralism.

A genuine informational pluralism is then outlined by arguing that for every true logic the logical pluralist accepts, a corresponding notion of semantic information arises. Relying on connections between these logics, it can be concluded that different logics yield complementary formalisations of information and informational content. The resulting framework can be considered as a more versatile approach to information than its monist counterparts.

Presented at Second International Workshop on Philosophy and Informatics (Kaiserslautern)

Published in: WM2005: Professional Knowledge Management Experiences and Visions, edited by Klaus-Dieter Althoff, Andreas Dengel, Ralph Bergmann, Markus Nick and Thomas Roth-Berghofer, 579–86. Kaiserslautern: DFKI GmbH, 2005.

Also available in CEUR Online Proceedings Vol. 130

Abstract. The core aim of this paper is to provide an overview of the benefits of a formal approach to information as being informative. It is argued that handling information-like objects can be seen as more fundamental than the notion of information itself. Starting from theories of semantic information, it is shown that these leave being informative out of the picture by choosing a logical framework which is essentially classical. Based on arguments in favour of logical pluralism, a formal approach of information handling inspired by non-classical logics is outlined.

Presented at 1st World Conference and School on Universal Logic (Montreux)

Abstract. Through their development, adaptive logics (see [1]) have often been devised as modal adaptive logics, that is, a modal logic L strengthened with the provisional application of a rule which is not in L itself (e.g. <>A => []A). Logical systems using such an approach include inconsistency-adaptive logics based on Jaśkowski’s non-adjunctive approach [6], and logics for compatibility [2]. While non-modal adaptive logics generally succeed in providing a natural reconstruction of reasoning, proof-formats for modal adaptive logics lack the same intuitiveness. Basically, the drawbacks of these proof-formats stem from adaptive logics’ reliance on a purely syntactic use of modal logics, which leaves some natural (semantic) insights in modal languages aside. Compared to other adaptive logics (essentially the original inconsistency-adaptive logic ACLuN1), part of the appealing naturalness of dynamic proofs is lost (partly because the rules are defined indirectly with respect to the existence of a Hilbert-style proof). The main purpose of this paper is to provide a labelled proof-format for modal adaptive logics which does not suffer from these drawbacks.

Presented at Thought Experiments Rethought Congress (Ghent, Belgium)

Abstract. In this paper I try to give an alternative account of what (Floridi, 2003) describes as the two approaches to the Philosophy of Information (henceforth PI), and more precisely as the move from an analytical to a constructionist approach within PI. Whereas he tackles the problem from the standpoint of the historical evolution of PI (see: Floridi, 2002) - more generally relying on the notion of a pragmatic turn within contemporary analytical philosophy - I present a rather different approach, which is based on an interpretation of science fiction as a thought experiment.

Presented at CAP in Europe 2004 Conference (Pavia, Italy)

Published in Computing, Philosophy, and Cognition. L. Magnani and R. Dossena (eds.). London, College Publications: 313–327.

Abstract. The core aim of this paper is to focus on the dynamics of real proofs by introducing the block-semantics from (Batens, 1995) as a dynamical counterpart for classical semantics. We first look briefly at its original formulation, with respect to natural deduction proofs, and then extend its use to tableau-proofs. This yields a perspective on proof-dynamics that (i) explains proofs as a series of steps providing us with insight into the premises, and (ii) reveals an informational dynamics in proofs unknown to most dynamic logical systems. As the latter remark especially applies to Amsterdam-style dynamic epistemic logic, we consider a weak modal epistemic logic and combine it with dynamic modal operators expressing the informational proof-dynamics (as a natural companion for the informational dynamics due to new information known from dynamic epistemic logic).

The motivation for this approach is twofold. Generally, it is (a first step in) the reconstruction of the proof-dynamics known from adaptive logics (revealed by their block-formulation) within a modal framework (i.e. one using a relational structure); more specifically, it aims at the explicit application of some results on omniscience formulated in Batens' paper on block-semantics.

Presented at VlaPoLo9 Workshop (Ghent, Belgium)

Review of the SLI-2003 Workshop, Brussels, 31st of March 2003.

Published in: Algemeen Nederlands Tijdschrift voor Wijsbegeerte 95, no. 3 (2003): 225.

Presented at VlaPoLo7 Workshop (Brussels, Belgium)

Abstract. Problem solving in the sciences often forces us to rely on a pragmatic notion of truth. An adaptive logic interpreting scientific theories as pragmatically possible was already given in (Meheus, 2002). The logic presented in this paper embodies a complementary view on pragmatic truth: not with respect to theories, but with respect to single statements, facts, or data. To this end we rely on Nicholas Rescher's concept of presumptive truth and the connected cognitive action called (presumptive) taking, presented in (Rescher, 2001), and present an adaptive logic modelling the local acceptance and rejection of a statement.