Logical Consequence Research Papers - Academia.edu

2025, Computational Linguistics

Against the backdrop of the ever-improving Natural Language Inference (NLI) models, recent efforts have focused on the suitability of the current NLI datasets and on the feasibility of the NLI task as it is currently approached. Many of the recent studies have exposed the inherent human disagreements of the inference task and have proposed a shift from categorical labels to human subjective probability assessments, capturing human uncertainty. In this work, we show how neither the current task formulation nor the proposed uncertainty gradient are entirely suitable for solving the NLI challenges. Instead, we propose an ordered sense space annotation, which distinguishes between logical and common-sense inference. One end of the space captures non-sensical inferences, while the other end represents strictly logical scenarios. In the middle of the space, we find a continuum of common-sense, namely, the subjective and graded opinion of a “person on the street.” To arrive at the proposed...

2025, British Journal for the History of Philosophy

The distinction between formal and material consequence was introduced into medieval logic in the fourteenth century. Authors widely adopted the new terms but disagreed on their definition. The so-called Parisian tradition regarded a formal consequence as one that was valid for any substitution of categorematic terms, whereas the so-called British tradition required that the meaning of the consequent be contained in that of the antecedent. The former criterion resembles our model-theoretic definition of logical consequence, but it was the latter that, it has been claimed, was more popular at the time. Why? I argue that the question has no answer because the contradistinction of substitution and containment does not stand up to scrutiny. I base my argument on selected texts from various fourteenth-century authors, including Walter Burley, Nicholas Drukken of Denmark, Richard Lavenham, and Peter of Mantua. Instead of two distinct criteria, one of which is favoured over the other, we find various ways of mixing the two and gradual developments towards a hybrid view. I would say that both traditions made use of a substitutional criterion and that they only disagreed on what is to be substituted and what is not, i.e. what counts as form.

2025, Synthese

Fragmentation is a widely discussed thesis on the architecture of mental content, saying, roughly, that the content of an agent's belief state is best understood as a set of information islands that are individually coherent and logically closed, but need not be jointly coherent and logically closed, nor uniformly accessible for guiding the agent's actions across different deliberative contexts. Expressivism is a widely discussed thesis on the mental states conventionally expressed by certain categories of declarative discourse, saying, roughly, that prominent forms of declarative utterance should be taken to express something other than the speaker's outright acceptance of a representational content. In this paper, I argue that specific versions of these views-Topical Fragmentation and Semantic Expressivism-present a mutually beneficial combination. In particular, I argue that combining Topical Fragmentation with Semantic Expressivism fortifies the former against (what I call) the Connective Problem, a pressing objection that lays low more familiar forms of Fragmentation. This motivates a novel semantic framework: Fragmented Semantic Expressivism, a bilateral state-based system that (i) prioritizes fragmentationist acceptance conditions over truth conditions, (ii) treats representational content as hyperintensional, and (iii) gives expressivistic acceptance conditions for the standard connectives. Finally, we discuss the distinctive advantages of this system in answering the problem of logical omniscience and Karttunen's problem for epistemic 'must'.

2025, Lecture Notes in Computer Science

We describe SICK-BR, a Brazilian Portuguese corpus annotated with inference relations and semantic relatedness between pairs of sentences. SICK-BR is a translation and adaptation of the original SICK, a corpus of English sentences used in several semantic evaluations. SICK-BR consists of around 10k sentence pairs annotated for neutral/contradiction/entailment relations and for semantic relatedness, using a 5-point scale. Here we describe the strategies used for the adaptation of SICK, which preserve its original inference and relatedness relation labels in the SICK-BR Portuguese version. We also discuss some issues with the original corpus and how we might deal with them.

2025, Under review

This paper undertakes a foundational inquiry into logical inferentialism, with particular emphasis on the normative standards it establishes and the implications these pose for classical logic. The central question addressed herein is: 'What is Logical Inferentialism & How do its Standards challenge Classical Logic?' In response, the study begins with a survey of the three principal proof systems, that is, David Hilbert's axiomatic systems and Gerhard Gentzen's natural deduction and sequent calculus, thus situating logical inferentialism within a broader proof-theoretic landscape. The investigation then turns to the core tenets of logical inferentialism by focusing on the role of introduction and elimination rules in determining the meaning of logical constants. Through this framework, natural deduction is evaluated as a system that satisfies key inferentialist virtues, including harmony, conservativeness and the subformula property. Ultimately, the paper presents challenges to classical logic from intuitionist and revisionist perspectives by arguing that certain classical principles fail to uphold inferentialist standards, consequently undermining their legitimacy within a meaning-theoretic framework.

2025, The Review of Socionetwork Strategies

…develop and extend the case law data for COLIEE, and to Young Yik Rhim of Intellicon in Seoul, who has been our advocate since the beginning of COLIEE. In addition, a number of Japanese colleagues (in addition to the organizing team of Ken Satoh, Yoshinobu Kano, and Masaharu Yoshioka) have contributed to the extension and curation of the statute law data for the COLIEE competition.

2025

We present the evaluation of the legal question answering Competition on Legal Information Extraction/Entailment (COLIEE) 2017. The COLIEE 2017 Task consists of two sub-Tasks: legal information retrieval (Task 1), and recognizing entailment between articles and queries (Task 2). Participation was open to any group based on any approach, and the tasks attracted 10 teams. We received 9 submissions to Task 1 (for a total of 17 runs), and 8 submissions to Task 2 (for a total of 20 runs).

2025, Proceedings of the 16th edition of the International Conference on Articial Intelligence and Law

Our legal question answering system combines legal information retrieval and textual entailment, and exploits semantic information using a logic-based representation. We have evaluated our system using the data from the competition on legal information extraction/entailment (COLIEE) 2017. The competition focuses on the legal information processing required to answer yes/no questions from Japanese legal bar exams, and it consists of two phases: ad hoc legal information retrieval (Phase 1) and textual entailment (Phase 2). Phase 1 requires the identification of Japanese civil law articles relevant to a legal bar exam query. For this phase, we have used an information retrieval approach based on TF-IDF combined with a simple language model. Phase 2 requires a yes/no decision for previously unseen queries, which we approach by comparing the approximate meanings of queries with relevant statutes. Our meaning extraction process uses a selection of features based on a kind of paraphrase, coupled with a condition/conclusion/exception analysis of articles and queries. We also extract and exploit negation patterns from the articles. We construct a logic-based representation as a semantic analysis result, and then classify questions into easy and difficult types by analyzing the logic representation. If a question is in our easy category, we simply obtain the entailment answer from the logic representation; otherwise we use an unsupervised learning method to obtain the entailment answer. Experimental evaluation shows that our result ranked highest in Phase 2 amongst all COLIEE-2017 competitors.
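
As a concrete illustration of the Phase 1 setup, the sketch below ranks a few hypothetical, paraphrased civil-law articles against a query using plain TF-IDF and cosine similarity with scikit-learn; it shows only the retrieval idea and is not the authors' system, which also combines TF-IDF with a simple language model.

```python
# Minimal sketch of Phase-1-style article retrieval with TF-IDF and cosine
# similarity (illustrative article texts and query; not the COLIEE system).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical, heavily paraphrased article snippets.
articles = {
    "Article A": "A sale becomes effective when one party promises to transfer ownership of property and the other party agrees to pay the price.",
    "Article B": "A loan for consumption becomes effective when one party receives money from the other party and promises to return it.",
}
query = "Is a contract of sale effective upon the promise to transfer ownership?"

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(list(articles.values()))  # article vectors
query_vec = vectorizer.transform([query])                       # query vector

scores = cosine_similarity(query_vec, doc_matrix)[0]
for art_id, score in sorted(zip(articles.keys(), scores), key=lambda x: -x[1]):
    print(f"{art_id}: {score:.3f}")
```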

2025

Tree-structured recursive neural networks (TreeRNNs) for sentence meaning have been successful for many applications, but it remains an open question whether the fixed-length representations that they learn can support tasks as demanding as logical deduction. We pursue this question by evaluating whether two such models, plain TreeRNNs and tree-structured neural tensor networks (TreeRNTNs), can correctly learn to identify logical relationships such as entailment and contradiction using these representations. In our first set of experiments, we generate artificial data from a logical grammar and use it to evaluate the models' ability to learn to handle basic relational reasoning, recursive structures, and quantification. We then evaluate the models on the more natural SICK challenge data. Both models perform competitively on the SICK data and generalize well in all three experiments on simulated data, suggesting that they can learn suitable representations for logical inference in natural language.
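
To make the architecture concrete, here is a toy numpy sketch of tree-structured composition: each node vector is a nonlinear function of its two children, and the premise and hypothesis root vectors feed a small relation classifier. The dimensions, random untrained weights, and three-way softmax are illustrative assumptions, not the trained models evaluated in the paper.

```python
# Toy TreeRNN-style composition for sentence-pair relation classification
# (untrained random weights; purely illustrative of the composition scheme).
import numpy as np

rng = np.random.default_rng(0)
D = 16                                          # embedding / hidden size (assumed)
W = rng.normal(scale=0.1, size=(D, 2 * D))      # composition weights
b = np.zeros(D)
W_cls = rng.normal(scale=0.1, size=(3, 2 * D))  # entailment / neutral / contradiction
vocab = {}

def embed(word):
    """Stable random vector per word (placeholder for learned embeddings)."""
    if word not in vocab:
        vocab[word] = rng.normal(scale=0.1, size=D)
    return vocab[word]

def compose(tree):
    """Bottom-up composition: leaves are word vectors, internal nodes
    combine their two children with tanh(W [left; right] + b)."""
    if isinstance(tree, str):
        return embed(tree)
    left, right = tree
    return np.tanh(W @ np.concatenate([compose(left), compose(right)]) + b)

def classify(premise_tree, hypothesis_tree):
    """Softmax over the three relations, from the two root vectors."""
    pair = np.concatenate([compose(premise_tree), compose(hypothesis_tree)])
    logits = W_cls @ pair
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

premise = (("all", "dogs"), ("are", "animals"))
hypothesis = (("some", "dogs"), ("are", "animals"))
print(classify(premise, hypothesis))            # roughly uniform, since untrained
```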

2025

Understanding entailment and contradiction is fundamental to understanding natural language, and inference about entailment and contradiction is a valuable testing ground for the development of semantic representations. However, machine learning research in this area has been dramatically limited by the lack of large-scale resources. To address this, we introduce the Stanford Natural Language Inference corpus, a new, freely available collection of labeled sentence pairs, written by humans doing a novel grounded task based on image captioning. At 570K pairs, it is two orders of magnitude larger than all other resources of its type. This increase in scale allows lexicalized classifiers to outperform some sophisticated existing entailment models, and it allows a neural network-based model to perform competitively on natural language inference benchmarks for the first time.
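
As a rough illustration of what a lexicalized sentence-pair classifier can look like, the sketch below derives a few word-overlap features and fits a linear model on toy pairs; the feature set and examples are our own assumptions, not the classifiers benchmarked on SNLI.

```python
# Sketch of a lexicalized entailment classifier: simple overlap features plus
# a linear model (toy data; feature set is an illustrative assumption).
from sklearn.linear_model import LogisticRegression

def features(premise, hypothesis):
    p, h = set(premise.lower().split()), set(hypothesis.lower().split())
    overlap = len(p & h)
    return [overlap, overlap / max(len(h), 1), len(h - p)]

pairs = [
    ("A man is playing a guitar", "A man is playing an instrument", "entailment"),
    ("A man is playing a guitar", "A man is sleeping", "contradiction"),
    ("A man is playing a guitar", "A man is on a stage", "neutral"),
    ("A dog runs in the park", "An animal is outside", "entailment"),
    ("A dog runs in the park", "The dog is indoors", "contradiction"),
    ("A dog runs in the park", "The dog is chasing a ball", "neutral"),
]
X = [features(p, h) for p, h, _ in pairs]
y = [label for _, _, label in pairs]

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([features("A woman rides a bike", "A person is riding")]))
```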

2025, Humanistyka i Przyrodoznawstwo

"For it is clear that you have long known what you mean when you use the expression 'being'. We, however, who once thought we understood it, have now become perplexed" [Plato, Sophist, 244a]. Martin Heidegger, Being and Time, p. 2. "In any case, for a long time I could not free myself from that childlike bond with them, which was a kind of fidelity, not yet conscious and therefore not chosen, but binding as if by its very nature. Such fidelity, or even the mere possibility of fidelity, regardless of everything, provides not only a sense of security; it also determines one's place in the still-unsuspected space of one's horizon, thereby becoming a sign of one's destiny." Wiesław Myśliwski, Widnokrąg, p. 195. The task of thinking what was given expression in Being and Time reveals at once, like an emblem of truthfulness, that the source or beginning, in its being itself, is difficult in the highest degree, if only because its identity already at its dawn imposes its problematic character upon us, since the source is a beginning only in a relative way, becoming not so much a beginning in itself as rather a beginning for us, and for us only to the extent that we draw from it, always counting on the invigorating power of reading: namely, that it will become a life-giving spring for us and will thus allow us to be who we are or who we would like to be, in this case thinkers. We count, moreover, on not miscalculating in this counting, though there can be no hope of that, since the countability of what is not only fails to constitute even a preliminary mark of it, but seems rather to conceal what is in its mode: namely, its mode of being. We reach for the text, then, in order to understand. Interpretation, however, shows its violent impossibility at the very beginning, on the first page, in the first words, thereby indicating that precisely here, at the place of the beginning, begins that which forces one to break off reading for lack of understanding, interrupting the course of the argument, which thus begins to present its unity or continuity to us as a continuity of lack, the inescapable unity of an act of barren reading which: is to be. The argument, so un-comprehended, seems to derive its processual character from the mere fact that the impossibility of interpretation, though it concerns the first words of the text, does not strike the reader the first time: reading reveals its impossibility only when its act, obtrusively repeatable and repeated, becomes a monotony of stupor, for which the first page of the work is already a sign of something other than a first encounter. For nothing can begin otherwise than as a repetition of the beginning, and so too the impossibility of beginning. The lack of success in this endeavor, depriving us of the happiness of reading, thereby testifies emphatically to the poor condition of the reader, who…

2025, Philosophy of Science

2025, Language Resources and Evaluation

For a language pair such as Chinese and Korean, which belong to entirely different language families in terms of typology and genealogy, finding the correspondences in word alignment is quite obscure. We present annotation guidelines for Chinese-Korean word alignment through contrastive analysis of morpho-syntactic encodings. We discuss the differences in verbal systems that cause most of the linking obscurities in the annotation process. A systematic comparison of verbal systems is conducted by analyzing morpho-syntactic encodings. The viewpoint of grammatical category allows us to define consistent and systematic instructions for linguistically distant languages such as Chinese and Korean. The scope of our guidelines is limited to the alignment between Chinese and Korean, but the instruction methods exemplified in this paper are also applicable in developing systematic and comprehensible alignment guidelines for other languages exhibiting such different linguistic phenomena.

2025, Pre-print

This paper explores the philosophical implications of Resolution Matrix Semantics (RMS) as an alternative foundation for modal logic. Unlike traditional Kripkean models, which interpret modality through relations between multiple possible worlds governed by classical logic, RMS treats indeterminate truth values as fundamental, operating within a single world. RMS introduces "blinking" truth assignments and sub-interpretations to resolve uncertainty, capturing the inherently poly-logical nature of human thought. Drawing a parallel to quantum physics, we argue that Kripke models resemble Everett’s Many-Worlds interpretation, while RMS aligns with the Copenhagen interpretation’s emphasis on intrinsic uncertainty. RMS offers a new view of modal reasoning—not as a proliferation of worlds, but as a diversification of perspectives within one world. The framework’s philosophical significance is examined through connections to poly-logic thinking, quantum cognitive models, and potential applications in artificial intelligence and parallel computing. RMS ultimately provides a dynamic, pluralistic model for rationality that better reflects the complexity of human cognition and decision-making under uncertainty.

2025, Ex falso sequitur quodlibet

"Ex falso sequitur quodlibet" is a Latin phrase meaning that from what is false any assertion validly follows. However, it is necessary to clarify what is meant by "following" by distinguishing whether the implication referred to is a material implication or a formal one. In general terms, "following" as used in this context means the logical process of deciding that another, different proposition is true on the basis of the established truth of certain propositions and on account of them. The process of logical following is called implication, and so is the relation connecting its terms.
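
To pin the formal rule down, here is a minimal Lean 4 rendering of the principle alongside its material-implication counterpart; the theorem names are ours, chosen for illustration.

```lean
-- Ex falso sequitur quodlibet as a formal rule: from a proof of falsity,
-- any proposition A follows (False.elim is Lean's standard eliminator).
theorem ex_falso_quodlibet (A : Prop) (h : False) : A :=
  False.elim h

-- The material-implication reading: if P is known to be false, then P → A
-- holds for any A, simply because the antecedent can never be satisfied.
theorem material_version (P A : Prop) (hnp : ¬P) : P → A :=
  fun hp => absurd hp hnp
```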

2025, Informal Logic

This paper presents a way in which formal logic can be understood and reformulated in terms of argumentation that can help us unify formal and informal reasoning. Classical deductive reasoning will be expressed entirely in terms of notions and concepts from argumentation so that formal logical entailment is equivalently captured via the arguments that win between those supporting concluding formulae and arguments supporting contradictory formulae. This allows us to go beyond Classical Logic and smoothly connect it with human reasoning, thus providing a uniform argumentation-based view of both informal and formal logic.

2025, Figures de la vérité

The paper's purpose is to articulate a deflationary conception of truth and the view that the notion of truth is critical for rational inquiry. The key to the suggested articulation is the identification of the "reflective stance" as one...

2025, bioRxiv (Cold Spring Harbor Laboratory)

Large Language Models (LLMs) can be used as repositories of biological and chemical information to generate pharmacological lead compounds. However, for LLMs to focus on specific drug targets typically requires experimentation with progressively more refined prompts. Results thus become dependent not just on what is known about the target, but also on what is known about prompt engineering. In this paper, we separate the prompt into domain constraints that can be written in a standard logical form, and a simple text-based query. We investigate whether LLMs can be guided, not by refining prompts manually, but by refining the logical component automatically, keeping the query unchanged. We describe an iterative procedure LMLF ("Language Models with Logical Feedback") in which the constraints are progressively refined using a logical notion of generalisation. On any iteration, newly generated instances are verified against the constraint, providing "logical feedback" for the next iteration's refinement of the constraints. We evaluate LMLF using two well-known targets (inhibition of the Janus Kinase 2; and Dopamine Receptor D2) and two different LLMs (GPT-3 and PaLM). We show that LMLF, starting with the same logical constraints and query text, can guide both LLMs to generate potential leads. We find: (a) binding affinities of LMLF-generated molecules are skewed towards higher binding affinities than those from existing baselines; (b) LMLF results in generating molecules that are skewed towards higher binding affinities than without logical feedback; (c) assessment by a computational chemist suggests that LMLF-generated compounds may be novel inhibitors. These findings suggest that LLMs with logical feedback may provide a mechanism for generating new leads without requiring the domain specialist to acquire sophisticated skills in prompt engineering.
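
A schematic of the iterative procedure, as we read the abstract, is sketched below; generate_candidates, satisfies, and refine are hypothetical stubs standing in for the LLM call, the logical verifier, and the generalisation step, and none of this reproduces the actual LMLF implementation.

```python
# Schematic of an LMLF-style loop (all functions are illustrative stubs,
# not the published procedure): generate, verify against logical constraints,
# and refine the constraints from the failures ("logical feedback").
def generate_candidates(query, constraints, n=5):
    # Stand-in for prompting an LLM with the fixed query plus current constraints.
    return [f"candidate_molecule_{i}" for i in range(n)]

def satisfies(candidate, constraints):
    # Stand-in for checking a generated instance against the logical constraints.
    return hash((candidate, tuple(constraints))) % 2 == 0

def refine(constraints, failures):
    # Stand-in for the logical-generalisation step that revises the constraints.
    return constraints + [f"constraint_revised_after_{len(failures)}_failures"]

def lmlf_loop(query, constraints, iterations=3):
    candidates = []
    for _ in range(iterations):
        candidates = generate_candidates(query, constraints)
        failures = [c for c in candidates if not satisfies(c, constraints)]
        if not failures:          # every candidate verified: stop refining
            break
        constraints = refine(constraints, failures)
    return constraints, candidates

final_constraints, leads = lmlf_loop("inhibitors of JAK2", ["initial_domain_constraint"])
print(final_constraints, leads)
```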

2025, O Que nos faz pensar

This paper attempts to outline Leibniz's main views on negation. We first present the Leibnizian distinction between propositional negation and predicative negation, which is adopted by Leibniz as a general rule for the interpretation of the negation operator. In spite of this distinction, however, we argue that Leibniz tries to reduce propositional negation to predicative negation. But in order to maintain the coherence of his account of propositional truth, Leibniz explains predicative negation as a predication of negative concepts. Finally, we give an interpretation of the formal meaning of negative concepts.

2025, Submitted

A prominent solution to the 'symmetry problem' allows implicatures to be computed from simple but not from complex alternatives ('COMPLEXITY'; Katzir 2007). Recently Schwarz and Wagner (2024) have proposed a different mechanism for symmetry breaking ('BLOCKING'), arguing that it can, but COMPLEXITY cannot, account for cases of so-called 'simplex threats' in which the simple alternative is available but the expected implicature is unattested. This note provides a defense of COMPLEXITY. We show that it explains simplex threats once coupled with constraints on questions ('Partition by Exhaustification'; Fox 2019, 2020) and on assertability of sentences with contextually equivalent alternatives ('Fatal Competition'; Magri 2009, Bar-Lev and Fox 2023). We furthermore point out (following Schmitt and Haslinger 2025) that BLOCKING makes a wrong prediction for some cases.

2025, Proceedings of the Aristotelian Society

Recently an abductivist approach to the epistemology of logic has gained traction. A necessary component of logical abductivism is justification holism, asserting that claims of logical entailment can only be justified in the context of an entire logical theory, e.g., classical, intuitionistic, etc. One view that is incompatible with abductivism is an atomistic view on which individual entailment-claims can be justified point-wise rather than in the context of a whole theory. This paper provides two atomistic counterexamples to justification holism in the epistemology of logic. Both examples appeal to pre-theoretic commitments of deductive validity. The main aim is to show that there are some foundational entailment-claims for which we can have propositional justification independently of theory choice and outside the context of a whole logical theory. If one were to give up on these foundational claims, all semantic and syntactic accounts of deductive validity would be non-starters.

2025, arXiv (Cornell University)

Capturing semantic relations between sentences, such as entailment, is a long-standing challenge for computational semantics. Logic-based models analyse entailment in terms of possible worlds (interpretations, or situations) where a premise P entails a hypothesis H iff in all worlds where P is true, H is also true. Statistical models view this relationship probabilistically, addressing it in terms of whether a human would likely infer H from P. In this paper, we wish to bridge these two perspectives, by arguing for a visually-grounded version of the Textual Entailment task. Specifically, we ask whether models can perform better if, in addition to P and H, there is also an image (corresponding to the relevant "world" or "situation"). We use a multimodal version of the SNLI dataset and we compare "blind" and visually-augmented models of textual entailment. We show that visual information is beneficial, but we also conduct an in-depth error analysis that reveals that current multimodal models are not performing "grounding" in an optimal fashion.

2025, Journal of Philosophical Logic

We show that there are infinitely many pairwise non-equivalent formulae in one propositional variable p in the pure implication fragment of the logic T of "ticket entailment" proposed by Anderson and Belnap. This answers a question posed by R. K. Meyer.

2025, Studies in Universal Logic

In this work, we propose a definition of logical consequence based on the relation between the quantity of information present in a particular set of formulae and a particular formula. As a starting point, we use Shannon's quantitative notion of information, founded on the concepts of logarithmic function and probability value. We first consider some of the basic elements of an axiomatic probability theory, and then construct a probabilistic semantics for languages of classical propositional logic. We define the quantity of information for the formulae of these languages and introduce the concept of informational logical consequence, identifying some important results, among them: certain arguments that have traditionally been considered valid, such as modus ponens, are not valid from the informational perspective; the logic underlying informational logical consequence is not classical, and is at the least paraconsistent sensu lato; informational logical consequence is not a Tarskian logical consequence.
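
As a sketch of the quantitative starting point (not of the paper's own consequence relation), the snippet below computes the probability of a propositional formula over the valuations of a two-atom language and its Shannon information, assuming a uniform distribution for illustration.

```python
# Shannon-style information of propositional formulae: P(phi) is summed over
# the valuations that satisfy phi, and I(phi) = -log2 P(phi). The uniform
# distribution over valuations is an illustrative assumption.
from itertools import product
from math import log2

ATOMS = ["p", "q"]

def prob(formula, dist=None):
    """P(formula) = total probability of the valuations satisfying it."""
    valuations = [dict(zip(ATOMS, vals))
                  for vals in product([True, False], repeat=len(ATOMS))]
    if dist is None:                    # uniform distribution by default
        dist = [1 / len(valuations)] * len(valuations)
    return sum(w for v, w in zip(valuations, dist) if formula(v))

def info(formula):
    """Shannon information of a formula: -log2 of its probability."""
    p = prob(formula)
    return float("inf") if p == 0 else -log2(p)

p_atom = lambda v: v["p"]
p_and_q = lambda v: v["p"] and v["q"]
p_or_q = lambda v: v["p"] or v["q"]

print(info(p_or_q), info(p_atom), info(p_and_q))   # about 0.415, 1.0, 2.0 bits
```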

2025

To speak is to operate with states of affairs. Research on the dimension PARTICIPATION has undertaken to show what it means to capture states of affairs linguistically, and which techniques are found under this general function and how they interact (cf. Seiler/Premper (eds.) 1991). But whoever speaks usually does more: states of affairs are not only captured, they are at the same time placed in the context of a communicative intention; they are asserted, supposed, doubted, called into question, negated, demanded, wished for, and more. Communicative intentions are likewise constitutive of speaking; only through them do speech events become speech acts. Speech situations, however, are not characterized by communicative intentions alone; they naturally also take place in time and space. Consequently, there exist between the speaker and the states of affairs spoken about not only attitudinal relations but also temporal ones. This placing into a...

2025, Nature machine intelligence

The Gene Ontology (GO) is a formal, axiomatic theory with over 100,000 axioms that describe the molecular functions, biological processes and cellular locations of proteins in three subontologies. Predicting the functions of proteins using the GO requires both learning and reasoning capabilities in order to maintain consistency and exploit the background knowledge in the GO. Many methods have been developed to automatically predict protein functions, but effectively exploiting all the axioms in the GO for knowledge-enhanced learning has remained a challenge. We have developed DeepGO-SE, a method that predicts GO functions from protein sequences using a pretrained large language model. DeepGO-SE generates multiple approximate models of GO, and a neural network predicts the truth values of statements about protein functions in these approximate models. We aggregate the truth values over multiple models so that DeepGO-SE approximates semantic entailment when predicting protein functions. We show, using several benchmarks, that the approach effectively exploits background knowledge in the GO and improves protein function prediction compared to state-of-the-art methods. Protein function prediction is one of the key challenges in modern biology and bioinformatics as it enables better understanding of the roles and interactions of proteins within living systems. Accurate functional descriptions of proteins are necessary for tasks such as identification of drug targets, understanding disease mechanisms and improving biotechnological applications in industry. While predicting protein structures has become increasingly accurate in recent years, predicting protein function remains challenging due to the small number of known functions combined with their complexity and interactions. Functions of proteins are described using the Gene Ontology (GO), which is one of the most successful ontologies in biology. GO includes three subontologies for describing molecular functions (MFO) of a single protein, biological processes (BPO) to which proteins can contribute, and cellular components (CCO) where proteins are active. Researchers identify protein functions based on experiments and generate scientific reports, which are then taken by database curators and added to knowledge bases. These annotations are generally propagated to homologue proteins. As a result, the UniProtKB/Swiss-Prot database contains manually curated GO annotations for thousands of organisms and more than 550,000 proteins. Recent protein function prediction methods rely on different sources of information such as sequence, interactions, protein tertiary structure, literature, coexpression, phylogenetic analysis or the information provided in GO. The methods may use sequence domain annotations, directly apply deep convolutional neural networks (CNN) or language models such as long short-term memory neural networks and transformers, or use pretrained protein language models to represent amino acid sequences. Models may…
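
The aggregation step can be pictured with a toy example: a candidate function statement receives a truth value in each of several approximate models, and the values are pooled into a single score. Mean pooling, the threshold, and all numbers below are illustrative assumptions rather than the published DeepGO-SE model.

```python
# Toy sketch of aggregating truth values of function statements over several
# approximate models of GO (all values, the pooling, and the threshold are
# illustrative assumptions, not the DeepGO-SE implementation).
import numpy as np

# Rows: candidate GO functions for one protein; columns: truth values in k models.
truth_values = np.array([
    [0.91, 0.84, 0.88],   # e.g. a kinase-activity statement
    [0.40, 0.15, 0.22],   # e.g. a transporter-activity statement
])

scores = truth_values.mean(axis=1)   # pool over the approximate models
predicted = scores >= 0.5            # decision threshold (assumed)
print(scores, predicted)
```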

2025, bioRxiv (Cold Spring Harbor Laboratory)

The Gene Ontology (GO) is one of the most successful ontologies in the biological domain. GO is a formal theory with over 100,000 axioms that describe the molecular functions, biological processes, and cellular locations of proteins in three sub-ontologies. Many methods have been developed to automatically predict protein functions. However, only few…

2025

This paper introduces a relational meta-formalism that challenges substantialist mathematical ontology. Starting from primordial non-duality (Ω) and its self-differentiation (δ), we derive fundamental polarizing tendencies and relational operators, establishing their structural uniqueness. This framework addresses Benacerraf's epistemological challenge and Wigner's "unreasonable effectiveness" puzzle by demonstrating how mathematical constants like π emerge as structural invariants without axiomatic presupposition. This first article of seven establishes meta-formal foundations, connecting to historical debates on mathematical ontology from Plato through Leibniz to contemporary structuralism.

2025, Journal of Philosophical Investigations

Recent developments in non-classical logic have raised the question of rational choice in the field of logic. If logic is not an exception, an a posteriori methodology can be used for rational choice among logical theories. In choosing a logical theory, there are several criteria to consider, such as expressive power and separation of propositions, explanatory power and separation of inferences, consistency and internal coherence, compatibility with evidence, simplicity, and unification. To apply this methodology to logic, we echo the views of Priest and Williamson and examine their opinions on logic and logical evidence. In this article we take, following Priest, the linguistic concept of "validity" as the subject of logic, and particular inferences and our intuitions about their validity as evidence for logical theories. Based on these criteria, we compare Relevance Logic theory and Truth Functional System theory, then calculate the rationality index for each theory. Compared with Relevance Logic, the Truth Functional System theory has a higher rationality index and outperforms it many times over.

2025, Jurnal Pendidikan Indonesia

The focus of this research is presupposition and entailment in the short story The Family Nightmare. The research aimed to reveal how presupposition and entailment were used in the short story. It used a qualitative method for analyzing the story, involving document and material analysis to collect the data. The results show 6 types of presupposition and 2 types of entailment. Presupposition and entailment are used to emphasize, to draw the readers' attention and sympathy, and as a strategy to keep the readers more focused on the story.

2025

In this paper we critique the interpretation of Concept-Knowledge theory in terms of logical decidability. We argue instead that concepts and knowledge should be regarded as logical language and axioms, the two main components of a logical theory. Based on this proposal and using tools from the mathematics of category theory, we propose a category of logical designs to act as a formal interpretation for the dynamic operators which define the design processes of C-K theory.

2025, arXiv (Cornell University)

The Logic of Approximate Entailment (LAE) is a graded counterpart of classical propositional calculus, where conclusions that are only approximately correct can be drawn. This is achieved by equipping the underlying set of possible worlds with a similarity relation. When using this logic in applications, however, a disadvantage must be accepted; namely, in LAE it is not possible to combine conclusions in a conjunctive way. In order to overcome this drawback, we propose in this paper a modification of LAE where, at the semantic level, the underlying set of worlds is moreover endowed with an order structure. The chosen framework is designed in view of possible applications.
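
On one standard reading of similarity-based approximate entailment, the graded relation can be sketched as follows; this is our illustrative reconstruction of the general setup, not the order-enriched semantics the paper develops.

```latex
% Sketch of a graded, similarity-based entailment in the LAE style
% (illustrative reconstruction; requires \usepackage{stmaryrd} for \llbracket).
\[
  \varphi \models^{c} \psi
  \quad\text{iff}\quad
  \forall w \in \llbracket \varphi \rrbracket \;
  \exists w' \in \llbracket \psi \rrbracket :\;
  s(w, w') \geq c ,
\]
\[
  \text{where } s \colon W \times W \to [0,1]
  \text{ is the similarity relation on the set of possible worlds } W .
\]
```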

2025

Most privacy-preserving data mining methods apply transformations to the data that result in the loss of original data and reduce the effectiveness of the underlying mining results. Our goal in this work is to define privacy-preserving methods that reduce the difference between the mining results obtained with the original data and with the "anonymized" data. We propose to use lexical entailment in order to replace specific features in the text with a semantically close category; this way the general meaning of the document is not overtly modified and the mining results will still be mostly valid.
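
A minimal sketch of the substitution idea, assuming NLTK's WordNet interface (with the wordnet data installed) and the simplifying choice of the first synset and first hypernym; the target terms are illustrative.

```python
# Replace a specific term with a more general WordNet category so the text
# stays meaningful but less identifying (first synset / first hypernym is a
# simplifying assumption; requires NLTK with the 'wordnet' data downloaded).
from nltk.corpus import wordnet as wn

def generalize(word):
    """Return a hypernym lemma for `word`, or the word itself if none is found."""
    synsets = wn.synsets(word, pos=wn.NOUN)
    if not synsets:
        return word
    hypernyms = synsets[0].hypernyms()
    if not hypernyms:
        return word
    return hypernyms[0].lemma_names()[0].replace("_", " ")

for term in ["hospital", "diabetes", "lawyer"]:
    print(term, "->", generalize(term))
```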

2025, People's Publishing House (Beijing)

CHEN Bo took charge of a significant project, "Research on Major Frontier Issues in Contemporary Philosophy of Logic", supported by the National Social Science Foundation of China over the last seven years. The six volumes of the Companion of Studies on Contemporary Philosophy of Logic are the final achievement of that project, which was rated "Excellent" by the National Social Science Foundation of China in 2025. The Companion will be published in Chinese by People's Publishing House (Beijing) in a few years.

2025, Applied Categorical Structures

We provide a co-free construction which adds elementary structure to a primary doctrine. We show that the construction preserves comprehensions and all the logical operations which are in the starting doctrine, in the sense that it maps a first-order many-sorted theory to the same theory formulated with equality. As a corollary, it forces an implicational doctrine to have an extensional entailment.

2025

Maria Copeland. A thesis submitted to the University of Manchester for the degree of Master of Philosophy, 2016. Understanding changes in an ontology is becoming an active topic of interest to ontology engineers because of the increasing number of requirements to better support and maintain large collaborative ontologies. Ontology support and debugging mechanisms have mainly addressed errors in ontologies derived from reasoning tasks such as checking concept satisfiability and ontology consistency. Although debugging and tools to help the understanding of entailments have been introduced in the past decade, see [1, 2], these do not address the desirability and expectations of the entailments. Currently, logical faults in ontologies are treated in a vacuum approach that does not take into consideration the information available regarding the entailment evolution of the ontology as recorded in ontology versions, the expectation of entailments, and how the ontology and its logical consequences comply with historical changes. In this thesis we present a novel approach for detecting logical warnings that are directly linked to the desirability and expectation of entailments as recorded in the ontology's versions. We first introduce methods for evaluating ontology evolution trends and editing dynamics, and identify versions that correspond to areas of major change in the ontology. This lifetime view of the ontology gives background information regarding the growth and change of the ontology from an axiom-centric perspective and the presence of its entailments throughout the studied versions. We then subject the asserted axioms from each version to a cross-functional and systematic analysis of changes, the effectiveness of these changes, and the consistency of these changes in future versions. From this detailed axiom change record and the entailment profiles, we derive entailment warnings that indicate or suggest domain modelling bugs in terms of content redundancy, regression, refactoring, and thrashing. We validate and confirm these methods by analysing a ten-year evolution period of… [Table 2.1: Ontology Engineering Life Cycle Phases and Activities]

2025

Two printed works on logic by Georgius Benignus / Juraj Dragišić are preserved: Dialectica nova (Florentiae 1488 [1489]) and Artis dialecticaes praecepta (Romae 1520 [1519]). The presentations of Dragišić's logic by Carl Prantl and Stjepan Zimmermann are assessed. Dragišić’s 1488 logic contains a very early exposition of nearly all modes of the fourth figure. He listed four direct modes of the fourth figure, and he may also have had in mind the fifth mode (Fresison), along with an additional mode, probably the indirect Fimeno. Furthermore, Dragišić presented a specific version of the terminist doctrine of supposition of terms (suppositio terminorum), incorporating his initial distinction between the mode and subject of supposition. Dragišić’s operational doctrine of consequences (consequentiae) reflects his interest in systematizing and reducing the number of rules (as seen in the differences between his two logical works). His detailed argumentation for the rejection of the principles ex impossibile and ad necessarium makes his work particularly interesting from the perspective of modern paraconsistent logics.

2025, Logics

Non-Tarskian interpretations of many-valued logics have been widely explored in the logic literature. The development of non-Tarskian conceptions of logical consequence set the theoretical foundations for rediscovering well-known (Tarskian) many-valued logics. One may find in distinct authors many novel interpretations of many-valued systems. They are produced through a type of procedure which consists in altering the semantic structure of Tarskian many-valued logics in order to output a non-Tarskian interpretation of these logics. Through this type of transformation, the paper explores a uniform way of transforming finitely many-valued Tarskian logics into their non-Tarskian interpretations. Some general properties of carrying out this type of procedure are studied, namely the dualities between these logics and the conditions under which negation-explosive and negation-complete Tarskian logics become non-explosive.

2025, Lecture Notes in Computer Science

The Answer Validation Exercise (AVE) is a pilot track within the Cross-Language Evaluation Forum (CLEF) 2006. The AVE competition provides an evaluation framework for answer validation in Question Answering (QA). For our participation in AVE, we propose a system that was initially used for another task, Recognising Textual Entailment (RTE). The aim of our participation is to evaluate the improvement our system brings to QA. Moreover, because these two tasks (AVE and RTE) share the same main idea, which is to find semantic implications between two fragments of text, our system could be applied directly to the AVE competition. Our system is based on the representation of the texts by means of logic forms and the computation of semantic comparison between them. This comparison is carried out using two different approaches. The first is guided by a deeper study of the WordNet relations, and the second uses the measure defined by Lin in order to compute the semantic similarity between the logic form predicates. Moreover, we have also designed a voting strategy between our system and the MLEnt system, also presented by the University of Alicante, with the aim of obtaining a joint execution of the two systems developed at the University of Alicante. Although the results obtained have not been very high, we consider them quite promising, and this supports the view that there is still a lot of research to be done on all kinds of textual entailment.
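
For the second comparison approach, Lin's information-content similarity between two WordNet senses can be computed as in the sketch below, assuming NLTK with the wordnet and wordnet_ic data packages; the word pair is purely illustrative and the logic-form extraction itself is not shown.

```python
# Lin's information-content similarity between two WordNet senses, using
# NLTK and the Brown information-content file (requires the 'wordnet' and
# 'wordnet_ic' NLTK data packages; the concept pair is illustrative).
from nltk.corpus import wordnet as wn
from nltk.corpus import wordnet_ic

brown_ic = wordnet_ic.ic("ic-brown.dat")

dog = wn.synset("dog.n.01")
cat = wn.synset("cat.n.01")
print(dog.lin_similarity(cat, brown_ic))   # high score for closely related concepts
```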

2025, Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing - RTE '07

The textual entailment recognition system that we discuss in this paper represents a perspective-based approach composed of two modules that analyze text-hypothesis pairs from strictly lexical and syntactic perspectives, respectively. We attempt to show that the textual entailment recognition task can be tackled by performing individual analyses that tell us the maximum amount of information that each single perspective can provide. We compare this approach with the system we presented in the previous edition of the PASCAL Recognising Textual Entailment Challenge, obtaining an accuracy rate 17.98% higher.

2025, Lecture Notes in Computer Science

This paper discusses the recognition of textual entailment in a text-hypothesis pair by applying a wide variety of lexical measures. We consider that the entailment phenomenon can be tackled at three general levels: lexical, syntactic and semantic. The main goals of this research are to deal with this phenomenon from a lexical point of view and to achieve high results considering only this kind of knowledge. To accomplish this, the information provided by the lexical measures is used as a set of features for a Support Vector Machine, which decides whether the entailment relation holds. A study of the most relevant features and a comparison with the best state-of-the-art textual entailment systems is presented throughout the paper. Finally, the system has been evaluated using the Second PASCAL Recognising Textual Entailment Challenge data and evaluation methodology, obtaining an accuracy rate of 61.88%.
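
A minimal sketch of this setup, with a deliberately tiny feature set and toy pairs standing in for the paper's much wider battery of lexical measures:

```python
# A few lexical measures as features for an SVM entailment decision
# (toy pairs and a minimal, assumed feature set; illustrative only).
from sklearn.svm import SVC

def lexical_features(text, hypothesis):
    t, h = set(text.lower().split()), set(hypothesis.lower().split())
    coverage = len(t & h) / max(len(h), 1)       # hypothesis word coverage
    jaccard = len(t & h) / max(len(t | h), 1)    # Jaccard coefficient
    length_ratio = len(h) / max(len(t), 1)
    return [coverage, jaccard, length_ratio]

pairs = [  # (text, hypothesis, 1 = entailment, 0 = no entailment)
    ("A soccer game with multiple males playing", "Some men are playing a sport", 1),
    ("A soccer game with multiple males playing", "Nobody is playing soccer", 0),
    ("An older man drinks his juice", "A man is drinking juice", 1),
    ("An older man drinks his juice", "The man is asleep in bed", 0),
]
X = [lexical_features(t, h) for t, h, _ in pairs]
y = [label for _, _, label in pairs]

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([lexical_features("Two dogs run on the beach", "Dogs are running")]))
```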

2025

Saul Kripke rightly counts as one of the most important analytic philosophers of the 20th century. His main work "Naming and Necessity" from 1972 not only made a significant contribution to modern modal logic, it also provided...

2025, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

In this paper, we propose the logic Pmin, which is a nonmonotonic extension of the Preferential logic P defined by Kraus, Lehmann and Magidor (KLM). In order to perform nonmonotonic inferences, we define a "minimal model" semantics. Given a modal interpretation of a minimal A-world as A ∧ □¬A, the intuition is that preferred, or minimal, models are those that minimize the number of worlds where ¬□¬A holds, that is, of A-worlds which are not minimal. We also present a tableau calculus for deciding entailment in Pmin.

2025, Synthese

According to a growing number of scholars (Dolby, 2016) Wittgenstein's account of truth in the Tractatus is not a correspondence theory. Foremost among them, Hans-Johann Glock has argued that Wittgenstein neither held the version of correspondence theory standardly ascribed to him nor any other version, simply because there is no such thing as a genuine correspondence relation in Wittgenstein's treatise. Instead, according to Glock, Wittgenstein held an obtainment theory according to which "a sentence is true iff the state of affairs it depicts obtains" (Glock, 2006, p. 347). Though sympathetic to Glock's critique of the standard interpretation, I argue that Wittgenstein nonetheless always thought of truth in terms of correspondence and never thought of it in terms of an obtainment theory or in terms of any other nonrelational account. Instead, truth in the Tractatus consists of an indirect internal relation of correspondence, which Wittgenstein understands in terms of "correctly depicting reality." This, I further argue, was suggested to him by the rejection of a mistaken conception of correspondence held by Russell and by one of Frege's critiques of the correspondence theory. The resulting version of correspondence theory not only differs from the one that is usually ascribed to him but is much more sophisticated than Moore's and Russell's versions of it, is largely independent of the picture theory, and offers some answers to classical critiques of the correspondence theory that have gone unnoticed.