Logical Consequence Research Papers - Academia.edu

2025, Notre Dame Journal of Formal Logic

The paper discusses several first-order modal logics that extend the classical predicate calculus. The model theory involves possible worlds with world-variable domains. The logics rely on the philosophical tenet known as serious actualism in that within modal contexts they allow existential generalization from atomic formulae. The language may or may not have a sign of identity, includes no primitive existence predicate, and has individual constants. Some logics correspond to various standard constraints on the accessibility relation, while others correspond to various constraints on the domains of the worlds. Soundness and strong completeness are proved in every case; a novel method is used for proving completeness.

2025, Synthese

Take a formula of first-order logic which is a logical consequence of some other formulae according to model theory, and in all those formulae replace schematic letters with English expressions. Is the argument resulting from the replacement valid in the sense that the premisses could not have been true without the conclusion also being true? Can we reason from the model-theoretic concept of logical consequence to the modal concept of validity? Yes, if the model theory is the standard one for sentential logic; no, if it is the standard one for the predicate calculus; and yes, if it is a certain model theory for free logic. These conclusions rely inter alia on some assumptions about possible worlds, which are mapped into the models of model theory. Plural quantification is used in the last section, while part of the reasoning is relegated to an appendix that includes a proof of completeness for a version of free logic.

2025, unpublished

I argue that, given the narrow locality of deontic alternatives to the actual world, an extensional quantified deontic logic with a fixed domain is feasible, requiring neither possibilia nor other metaphysical proxies for the same. Advantages and applications are then briefly explored.

2025, Linguistics and Philosophy

Is there a principled difference between entailments in natural language that are valid solely in virtue of their form or structure and those that are not? This paper advances an affirmative answer to this question, one that takes as its starting point Gareth Evans's suggestion that semantic theory aims to carve reality at the joints by uncovering the semantic natural kinds of the language. I sketch an Evans-inspired account of semantic kinds and show how it supports a principled account of structural entailment. I illustrate the account by application to a case study involving the entailment properties of adverbs; this involves developing a novel proposal about the semantics for adverbs like 'quickly' and 'slowly'. In the course of the discussion we touch on some implications of the account for the place of model-theoretic tools in natural language semantics, and about the relationship between semantic structure and logical consequence as customarily conceived. Richard Montague begins his landmark "English as a Formal Language" (EFL) with the crisp declaration, "I reject the contention that an important theoretical difference exists between formal and natural languages." (Montague 1974, p. 188) He goes on to demonstrate how the tools of model theory can be used to capture the way the truth conditions of sentences systematically depend on the ways they are constructed from basic lexical items for a significant fragment of English. At the time of Montague's writings, Donald Davidson was vigorously arguing that the role of structure in determining truth conditions should be captured, not in model-theoretic terms, but by means of a Tarski-style recursive definition of truth (Davidson 1966, 1967). 
One important advantage of Montague's approach over Davidson's is that it supports a straightforward semantic characterization of a consequence relation for natural language: where Γ is a set of sentences and s is a sentence (all of English), s is a consequence of Γ just in case s is true in every model in which all of the sentences in Γ are true; consequence is simply the preservation of truth across all admissible variations in the interpretation of the basic lexical items. This is one of the most exciting elements of EFL, because it provides an elegant way to capture relations of entailment that hold purely in virtue of the semantically relevant structures of the sentences involved. No comparable conception of structural entailment emerges naturally from the Davidsonian approach. It is thus disappointing to observe that the semantics Montague actually develops in EFL turns out not to yield any non-degenerate instances of structural entailment in the pure sense just defined; only reiteration (the entailment from s to s itself) comes out as structural. Non-trivial cases emerge only once Montague begins to add stipulations concerning the meanings of individual lexical items. For example, constraints on the interpretations of 'not', 'necessarily' and the 'is' of identity are added to secure some of the familiar logical consequences of sentences containing these expressions. But these are extrinsic constraints on the range of admissible interpretations that play no role at all in the account of how structure contributes to meaning,
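Montague's definition of consequence has a direct computational analogue in the propositional case, where models are just truth-value assignments. A minimal sketch (illustrative only; the function names and the lambda encoding of formulas are my own, not Montague's apparatus):

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """Model-theoretic consequence for propositional formulas.

    Formulas are functions from a valuation (dict atom -> bool) to bool.
    Consequence holds iff every valuation satisfying all premises
    also satisfies the conclusion.
    """
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

# 'p and q' entails 'p'; 'p or q' does not.
p_and_q = lambda v: v["p"] and v["q"]
p_or_q = lambda v: v["p"] or v["q"]
just_p = lambda v: v["p"]

print(entails([p_and_q], just_p, ["p", "q"]))  # True
print(entails([p_or_q], just_p, ["p", "q"]))   # False
```

Enumerating all valuations is exactly "truth in every model" for this fragment; the interest of EFL lies in extending such a definition to structured English sentences.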

2025, Philosophy Compass

The term 'logical form' has been called on to serve a wide range of purposes in philosophy, and it would be too ambitious to try to survey all of them in a single essay. Instead, I will focus on just one conception of logical form that has occupied a central place in the philosophy of language, and in particular in the philosophical study of linguistic meaning. This is what I will call the classical conception of logical form. The classical conception, as I will present it in section 1, has (either explicitly or implicitly) shaped a great deal of important philosophical work in semantic theory. But it has come under fire in recent decades, and in sections 2 and 3 I will discuss two of the recent challenges that I take to be most interesting and significant. The classical conception of logical form brings together two strands of thought, from the theory of meaning and philosophical logic, respectively. Let me start by briefly saying something about each of these. It is a familiar fact that the meaning of any given natural language sentence S depends, not only on the meanings of its basic constituents-the words and other basic meaningful components S contains-but also on its semantic structure-the way those constituents combine to form S. Hence (1) and (2) differ in truth conditions, despite sharing all the same words: 1. Homer loves Marge. 2. Marge loves Homer.

2025

It has been shown that the rules of logic for the principle 'Ex Contradictione Quodlibet' (ECQ) do not cause U8 to explode. This is because the antecedent is non-designated and modus ponens blocks detachment for the conclusion. Therefore, ECQ is not a theorem of U8; however, ECQ is acknowledged to be a hypothetical theorem. This leaves the axioms of disjunction introduction and disjunctive syllogism intact for U8, unlike in other paraconsistent logics. Using traffic light signals, an example showing paraconsistent behaviour is given for the U8 AND operation. It can be concluded that U8 is a paraconsistent null logic system.

2025, Cuadernos Filosóficos, Segunda Época, FHyA, UNR, Dossier “Voluntarismo e Intelectualismo en la edad media y la modernidad temprana: génesis del problema e intentos de solución”

Abstract: Medieval logicians, like contemporary logicians, learned to use logic to solve philosophical problems. The problems generated by the concept of the will were no exception. In this paper...

2025, Oxford Handbook for the Philosophy of Logic

In this chapter we explore the topic of logical disagreement. Though disagreement in general has attracted widespread philosophical interest, both in epistemology and philosophy of language, the general issues surrounding disagreement have only rarely been applied to logical disagreement in particular. Here, we develop some of the fascinating semantic and epistemological puzzles to which logical disagreement gives rise. In particular, after distinguishing between different types of logical disagreement, we explore some connections between logical disagreements and deep disagreements over fundamental epistemic principles; we discuss several semantic puzzles that arise on various accounts of the meanings of logical terms; we investigate how such disagreements relate to Kripke’s so-called “Adoption Problem”; and we probe epistemological puzzles that arise from disagreements about logic in the light of central principles from the peer disagreement literature.

2025

With the emergence of large language models and their impressive performance across diverse natural language processing tasks, the question of whether connectionist models can exhibit compositionality without relying on symbolic processing has regained attention in both cognitive science and artificial intelligence. However, interpretability challenges faced by neural networks make it difficult to determine whether they genuinely generalize compositional structures. In this paper, we introduce a targeted evaluation framework designed to directly assess the ability of transformer-based language models to translate natural language sentences into first-order logic expressions, a task that requires both nuanced linguistic understanding and compositional generalization. To demonstrate our framework, we fine-tune two different sizes of the T5 language model using our dataset and evaluate their performance through three experiments employing four task-specific evaluation metrics. Our findings reveal that while these models achieve high scores on test data with logical and structural complexity similar to the training set, their performance drops markedly as sentence length, the number of truth-functional connectives and predicates, and the depth of hierarchical composition increase. More strikingly, the models fail to generalize even when complexity increases solely through repeated applications of a single truth-functional connective.
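The complexity-bucketed evaluation described here can be illustrated with a toy harness. Everything below (the regex of connectives, the example formulas, the function names) is hypothetical and only shows the shape of such a metric, not the paper's actual framework, dataset, or notation:

```python
import re
from collections import defaultdict

# Crude proxy for logical complexity: count of truth-functional connectives.
CONNECTIVES = re.compile(r"(?:&|\||->|<->|~)")

def connective_count(formula: str) -> int:
    return len(CONNECTIVES.findall(formula))

def bucketed_exact_match(examples):
    """Exact-match accuracy grouped by number of connectives in the gold formula.

    examples: list of (source sentence, gold FOL string, model prediction).
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for _, gold, pred in examples:
        k = connective_count(gold)
        totals[k] += 1
        hits[k] += int(gold.strip() == pred.strip())
    return {k: hits[k] / totals[k] for k in sorted(totals)}

examples = [
    ("It rains.", "Rain(w)", "Rain(w)"),
    ("It rains and it is cold.", "Rain(w) & Cold(w)", "Rain(w) & Cold(w)"),
    ("If it rains, streets are wet.", "Rain(w) -> Wet(s)", "Rain(w) & Wet(s)"),
]
print(bucketed_exact_match(examples))  # {0: 1.0, 1: 0.5}
```

Plotting accuracy against the bucket index is one way to make the reported drop with increasing depth and connective count visible.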

2025, Proceedings of the Workshop on Bob …

In this paper we propose to investigate the mutual relations among Brandom's three dimensions of semantic inferential articulation, namely, incompatibility entailment, committive, and permissive consequences. Brandom (Unpub.) argues (1) that ...

2025, Glossa: a journal of general linguistics

Our goal in this study was to behaviorally characterize the property (or properties) that render negative quantifiers more complex in processing compared to their positive counterparts (e.g. the pair few/many). We examined two sources: (i) negative polarity; (ii) entailment reversal (aka downward monotonicity). While negative polarity can be found in other pairs in language such as dimensional adjectives (e.g. the pair small/large), only in quantifiers does negative polarity also reverse the entailment pattern of the sentence. By comparing the processing traits of negative quantifiers with those of non-monotone expressions that contain negative adjectives, using a verification task and measuring reaction times, we found that negative polarity is cognitively costly, but in downward monotone quantifiers it is even more so. We therefore conclude that both negative polarity and downward monotonicity contribute to the processing complexity of negative quantifiers.
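The notion of entailment reversal (downward monotonicity) at issue here can be made concrete with toy generalized quantifiers. The threshold semantics for 'few'/'many' below is a deliberate simplification for illustration, not the study's experimental materials:

```python
from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return [set(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def many(restrictor, scope, threshold=3):
    """Toy 'many': at least `threshold` restrictor members are in the scope."""
    return len(restrictor & scope) >= threshold

def few(restrictor, scope, threshold=3):
    """Toy 'few': fewer than `threshold` restrictor members are in the scope."""
    return len(restrictor & scope) < threshold

def downward_monotone(q, restrictor, domain):
    """q is downward monotone in its scope iff q(R, B) implies q(R, A) for all A ⊆ B."""
    for b in subsets(domain):
        if q(restrictor, b):
            for a in subsets(b):
                if not q(restrictor, a):
                    return False
    return True

students = {"ann", "bob", "cat", "dan"}
print(downward_monotone(few, students, students))   # True: 'few' reverses entailment
print(downward_monotone(many, students, students))  # False: 'many' is upward monotone
```

The check makes explicit why "few students smoke" entails "few students smoke heavily" while the pattern fails for "many".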

2025

It is argued that the assertion sign, ‘⊢’, in Principia Mathematica can be taken as imperatival. It indicates that what follows it is to be accepted as true. Whereas axioms are unconditional imperatives, rules of inference are conditional...

2025

Civil society forums have historically been heralded as critical spaces for democratic engagement, collective agency, and the articulation of grassroots interests in political processes. Yet the increasing phenomenon of political deployment within these forums represents a fraught intersection between genuine emancipatory potential and the reproduction of hegemonic power dynamics that undermine their foundational ideals. This complex tension must be analyzed through a multidimensional lens that takes into account the political economy of power, epistemic violence, and the ethical responsibility inherent in consequence management. Drawing from decolonial insights, cultural-political analyses, and psychological and philosophical inquiries, this discourse unpacks the layered implications of political deployment, interrogating its effects on autonomy, identity, and the possibilities for genuine social transformation.

2025, 2015 16th IEEE International Symposium on Computational Intelligence and Informatics (CINTI)

We propose a system for automated essay grading using ontologies and textual entailment. The process of textual entailment is guided by hypotheses, which are extracted from a domain ontology. Textual entailment checks whether the truth of the hypothesis follows from a given text. We apply textual entailment to compare a student's answer to a model answer obtained from the ontology. We validated the solution against various essays written by students in the chemistry domain.
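As a rough illustration of hypothesis-driven grading (not the authors' system), one can score an answer by the fraction of ontology-derived hypotheses it entails, here using a crude lexical-overlap proxy in place of a real entailment engine:

```python
def toy_entails(text, hypothesis):
    """Crude lexical proxy for textual entailment: every content word of the
    hypothesis must occur in the text. A deployed system would use a trained
    RTE model instead."""
    stop = {"the", "a", "an", "is", "are", "of"}
    text_words = set(text.lower().split())
    hyp_words = {w for w in hypothesis.lower().split() if w not in stop}
    return hyp_words <= text_words

def grade(answer, hypotheses):
    """Score an answer as the fraction of ontology-derived hypotheses it entails."""
    entailed = sum(toy_entails(answer, h) for h in hypotheses)
    return entailed / len(hypotheses)

# Hypothetical chemistry hypotheses, as might be extracted from an ontology.
hypotheses = ["water is a polar molecule", "water contains hydrogen"]
answer = "water is a polar molecule because it contains hydrogen and oxygen"
print(grade(answer, hypotheses))  # 1.0
```

The design point is that the ontology supplies the hypotheses, so the grader's coverage is only as good as the ontology's.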

2025

We study the lattice [C_o , S] of order logics with respect to the Scott topology, focusing on the distribution and structural properties of logics with the parity property (PP) and oddity property (OP). We show that the class of logics with PP forms a Scott-closed set, while OP is Scott-open. This topological perspective enables the use of compactness and Zorn's Lemma to construct maximal non-implicational logics with prescribed properties. The interplay between order-theoretic and topological methods yields new insights into the classification and structure of order logics.

2025, arXiv (Cornell University)

Questions concerning the proof-theoretic strength of classical versus nonclassical theories of truth have received some attention recently. A particularly convenient case study concerns classical and nonclassical axiomatizations of fixed-point semantics. It is known that nonclassical axiomatizations in four- or three-valued logics are substantially weaker than their classical counterparts. In this paper we consider the addition of a suitable conditional to First-Degree Entailment, a logic recently studied by Hannes Leitgeb under the label HYPE. We show in particular that, by formulating the theory PKF over HYPE, one obtains a theory that is sound with respect to fixed-point models, while being proof-theoretically on a par with its classical counterpart KF. Moreover, we establish that its schematic extension, in the sense of Feferman, is also as strong as the schematic extension of KF, thus matching the strength of predicative analysis.

2025, arXiv (Cornell University)

Weighted knowledge bases for description logics with typicality under a "concept-wise" multipreferential semantics provide a logical interpretation of MultiLayer Perceptrons. In this context, Answer Set Programming (ASP) has been shown to be suitable for addressing defeasible reasoning in the finitely many-valued case, providing a Π^p_2 upper bound on the complexity of the problem, while nonetheless leaving the exact complexity unknown and providing only a proof-of-concept implementation. This paper fills that gap by providing a P^NP[log]-completeness result and new ASP encodings that deal with weighted knowledge bases with large search spaces.

2025, Law, Probability and Risk

Inference in court is subject to scrutiny for structural correctness (e.g. deductive or non-monotonic validity) and probative weight in determinations such as logical relevancy and sufficiency of evidence. These determinations are made by judges or informally by jurors who typically have little, if any, training in formal or informal logical forms. This article explores the universal sufficiency of a single intuitive categorical natural language logical form (i.e. 'defeasible class-inclusion transitivity', DCIT) for facilitating such determinations and explores its effectiveness for constructing any typical inferential network in court. This exploration includes a comparison of the functionality of hybrid branching tree-like argument structures with the homogeneous linear path argument structure of DCIT. The practicality of customary dialectical argument semantics and conceptions of probative weight are also examined with alternatives proposed. Finally, the issues of intelligibility and acceptability by end users in court of logical models are examined.
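The linear-path structure of defeasible class-inclusion transitivity can be sketched as a chain check with defeaters. The data structures and the evidentiary example below are hypothetical and much simpler than the article's own formulation of DCIT:

```python
def dcit_chain(links, defeaters=frozenset()):
    """Defeasible class-inclusion transitivity over a linear chain.

    links: ordered list of (subclass, superclass) inclusion claims forming a
    single linear path. The chained conclusion (first subclass falls under the
    last superclass) stands unless some individual link is defeated.
    """
    for i, (sub, sup) in enumerate(links):
        if i > 0 and links[i - 1][1] != sub:
            raise ValueError("links do not form a single linear path")
        if (sub, sup) in defeaters:
            return None  # chain defeated: no conclusion is licensed
    return (links[0][0], links[-1][1])

links_example = [
    ("this sample", "matches the defendant"),
    ("matches the defendant", "left by the defendant"),
    ("left by the defendant", "defendant was at the scene"),
]
print(dcit_chain(links_example))
# Defeat one link (say, by evidence of contamination) and the inference fails:
print(dcit_chain(links_example,
                 defeaters={("matches the defendant", "left by the defendant")}))
```

The homogeneity the article emphasizes shows up here: every step has the same shape, so the whole argument is one linear path rather than a branching tree.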

2025, Klima G. Consequence. In: Dutilh Novaes C, Read S, eds. The Cambridge Companion to Medieval Logic. Cambridge Companions to Philosophy. Cambridge University Press; 2016:316-341.

Gyula Klima 1. The limitations of Aristotelian syllogistic, and the need for non-syllogistic consequences Medieval theories of consequences are theories of logical validity, providing tools to judge the correctness of various forms of reasoning. Although Aristotelian syllogistic was regarded as the primary tool for achieving this, the limitations of syllogistic with regard to valid non-syllogistic forms of reasoning, as well as the limitations of formal deductive systems in detecting fallacious forms of reasoning in general, naturally provided the theoretical motivation for its supplementation with theories dealing with non-syllogistic, non-deductive, as well as fallacious inferences. We can easily produce deductively valid forms of inference that are clearly not syllogistic, as in propositional logic or in relational reasoning, or even other types of sound reasoning that are not strictly deductively valid, such as enthymemes, probabilistic arguments, and inductive reasoning, while we can just as easily provide examples of inferences that appear to be legitimate instances of syllogistic forms, yet are clearly fallacious (say, because of equivocation). For Aristotle himself, this sort of supplementation of his syllogistic was provided mostly in terms of the doctrine of "immediate inferences" in his On Interpretation, various types of non-syllogistic or even non-deductive inferences in the Topics, and the doctrine of logical fallacies in his On Sophistical Refutations. Taking their cue primarily from Aristotle (but drawing on Cicero, Boethius, and others as well), medieval logicians worked out in systematic detail various theories of non-syllogistic inferences, sometimes as supplementations of Aristotelian syllogistic, sometimes as merely useful devices taken to be reducible to syllogistic, and sometimes as more comprehensive theories of valid inference, containing syllogistic as a special, and important, case.

2025, arXiv (Cornell University)

Large Language Models (LLMs) like ChatGPT and Llama have revolutionized natural language processing and search engine dynamics. However, these models incur exceptionally high computational costs. For instance, GPT-3 consists of 175 billion parameters, where inference demands billions of floating-point operations. Caching is a natural solution to reduce LLM inference costs on repeated queries. However, existing caching methods are incapable of finding semantic similarities among LLM queries, nor do they operate effectively on contextual queries, leading to unacceptable false hit-and-miss rates. This paper introduces MeanCache, a user-centric semantic cache for LLM-based services that identifies semantically similar queries to determine cache hit or miss. Using MeanCache, the response to a user's semantically similar query can be retrieved from a local cache rather than re-querying the LLM, thus reducing costs, service provider load, and environmental impact. MeanCache leverages Federated Learning (FL) to collaboratively train a query similarity model without violating user privacy. By placing a local cache in each user's device and using FL, MeanCache reduces the latency and costs and enhances model performance, resulting in lower false-hit rates. MeanCache also encodes context chains for every cached query, offering a simple yet highly effective mechanism to discern contextual query responses from standalone queries. Our experiments benchmarked against the state-of-the-art caching method reveal that MeanCache attains an approximately 17% higher F-score and a 20% increase in precision during semantic cache hit-and-miss decisions while performing even better on contextual queries. It also reduces the storage requirement by 83% and accelerates semantic cache hit-and-miss decisions by 11%.
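The core hit-or-miss logic of an embedding-based semantic cache can be sketched as follows. The bag-of-words embedding and fixed threshold below are stand-ins chosen only to keep the example self-contained; MeanCache itself trains a query-similarity model via federated learning:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words embedding; a deployed cache would use a learned encoder."""
    return Counter(text.lower().split())

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u)
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

class SemanticCache:
    """Minimal client-side semantic cache: return a stored response when a new
    query's embedding is similar enough to some cached query's embedding."""
    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.entries = []  # list of (embedding, response)

    def get(self, query):
        q = embed(query)
        best = max(self.entries, key=lambda e: cosine(q, e[0]), default=None)
        if best is not None and cosine(q, best[0]) >= self.threshold:
            return best[1]  # cache hit
        return None         # cache miss: caller falls back to the LLM

    def put(self, query, response):
        self.entries.append((embed(query), response))

cache = SemanticCache()
cache.put("what is the capital of france", "Paris")
print(cache.get("what is the capital of france ?"))  # hit -> Paris
print(cache.get("how tall is mount everest"))        # miss -> None
```

The threshold is the knob that trades false hits against false misses, which is exactly the error rate the paper's similarity model is trained to reduce.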

2025, Journal of Computer System and Informatics

Abstract—The COVID-19 pandemic resulted in physical closures that have transformed education into an exclusively "online learning" model. Zoom was evaluated for perceived usability as the reference platform. Students found it less collaborative, less interactive, and boring. From this perspective, the usability of online learning platforms is now an important factor, especially since no physical classes are held. User-Centered Design (UCD) was chosen for this study, using the System Usability Scale (SUS) method to evaluate the interface. The aim of this study is to analyze user experience, design solutions, and evaluate a user interface that can meet user needs. A pre-survey evaluated the difficulties of the Zoom application based on user experience, and a post-survey assessed whether the improved design could help students use Zoom for online learning. The System Usability Scale (SUS) questionnaire approach was then used to measure system usability. After the UCD approach was completed, the researchers conducted a follow-up survey. The results show an SUS score of 85.12. As a result, the previously low acceptability range was raised to acceptable, and the grade scale was reclassified as B. The Zoom program now has more features, is easier to use, and meets students' needs.
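A score like the reported 85.12 comes from the standard SUS computation: each odd-numbered item contributes (response − 1), each even-numbered item contributes (5 − response), and the sum is scaled by 2.5 to give a 0-100 score. A sketch with a hypothetical respondent:

```python
def sus_score(responses):
    """System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (index 0, 2, ...) contribute (response - 1);
    even-numbered items contribute (5 - response); the sum is scaled
    by 2.5 onto a 0-100 scale.
    """
    if len(responses) != 10:
        raise ValueError("SUS uses exactly 10 items")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Hypothetical respondent; scores in the high 80s fall in the
# 'acceptable' range on the standard acceptability bands.
print(sus_score([5, 1, 5, 2, 4, 1, 5, 1, 4, 2]))  # 90.0
```

Averaging such per-respondent scores over all participants yields the study-level figure.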

2025

Argues for a 3-valued logic of vagueness. In distinction to other 3-valued approaches, this logic is an extension of classical logic, due to the use of both a Boolean and a predicate negation. The approach is justified by semantic considerations. Critical issues like Sorites reasoning, higher-order vagueness, and 'penumbral truth' are discussed from this perspective.

2025, Notre Dame Journal of Formal Logic

It is here argued that Russell's Principles of Mathematics contains an intriguing idea about how to demarcate logical concepts from nonlogical ones. On this view, implication and generality emerge as the two fundamental logical concepts. Russell's 1903 proposals for defining other logical concepts from these basic ones are examined and extended. Despite its attractiveness, the proposal is ultimately unsatisfactory because of problems about defining negation and existential quantification.

2025

Abstract—Depression detection nowadays is essential to help in supporting depressed people. Detecting emotional disturbance is currently remarkable in people who suffer from depression, and it helps doctors and psychologists in detection. Nowadays, social networks can be utilized to determine depressive content and thus depressed people. To accomplish this, Twitter is used to collect the most recent tweets that are related to depression. This is done by the PHQ-9 technique, which classifies depression into 9 degrees. Each degree is represented by a set of words. Using this classification, the model can alert users who need to visit a psychiatrist or ask a psychologist as soon as possible based on their social content. The collected dataset is then trained using deep learning and then experimented with different tweets from the collected datas...

2025, International Journal of Computer Applications

Variability of semantic expression is a fundamental phenomenon of a natural language where same meaning can be expressed by different texts. The process of inferring a text from another is called textual entailment. Textual Entailment is... more

Variability of semantic expression is a fundamental phenomenon of natural language, whereby the same meaning can be expressed by different texts. The process of inferring one text from another is called textual entailment. Textual entailment is useful in a wide range of applications, including question answering, summarization, text generation, and machine translation. Recognizing textual entailment is one of the recent challenges of the Natural Language Processing (NLP) domain. This paper summarizes key ideas from the area of textual entailment recognition by considering in turn the different recognition models. For each model, the paper points to prominent testing data, training data, resources, and performance evaluation. It also compares textual entailment models according to the method used, the results of each method, and the strengths and weaknesses of each method.

2025, Computational Linguistics

Against the backdrop of the ever-improving Natural Language Inference (NLI) models, recent efforts have focused on the suitability of the current NLI datasets and on the feasibility of the NLI task as it is currently approached. Many of... more

Against the backdrop of the ever-improving Natural Language Inference (NLI) models, recent efforts have focused on the suitability of the current NLI datasets and on the feasibility of the NLI task as it is currently approached. Many of the recent studies have exposed the inherent human disagreements of the inference task and have proposed a shift from categorical labels to human subjective probability assessments, capturing human uncertainty. In this work, we show how neither the current task formulation nor the proposed uncertainty gradient are entirely suitable for solving the NLI challenges. Instead, we propose an ordered sense space annotation, which distinguishes between logical and common-sense inference. One end of the space captures non-sensical inferences, while the other end represents strictly logical scenarios. In the middle of the space, we find a continuum of common-sense, namely, the subjective and graded opinion of a “person on the street.” To arrive at the proposed...

2025, British Journal for the History of Philosophy

The distinction between formal and material consequence was introduced into medieval logic in the fourteenth century. Authors widely adopted the new terms but disagreed on their definition. The so-called Parisian tradition regarded a... more

The distinction between formal and material consequence was introduced into medieval logic in the fourteenth century. Authors widely adopted the new terms but disagreed on their definition. The so-called Parisian tradition regarded a formal consequence as one that was valid for any substitution of categorematic terms, whereas the so-called British tradition required that the meaning of the consequent be contained in that of the antecedent. The former criterion resembles our model-theoretic definition of logical consequence, but it was the latter that, it has been claimed, was more popular at the time. Why? I argue that the question has no answer because the contradistinction of substitution and containment does not stand up to scrutiny. I base my argument on selected texts from various fourteenth-century authors, including Walter Burley, Nicholas Drukken of Denmark, Richard Lavenham, and Peter of Mantua. Instead of two distinct criteria, one of which is favoured over the other, we find various ways of mixing the two and gradual developments towards a hybrid view. I would say that both traditions made use of a substitutional criterion and that they only disagreed on what is to be substituted and what is not, i.e. what counts as form.

2025, Synthese

Fragmentation is a widely discussed thesis on the architecture of mental content, saying, roughly, that the content of an agent's belief state is best understood as a set of information islands that are individually coherent and logically... more

Fragmentation is a widely discussed thesis on the architecture of mental content, saying, roughly, that the content of an agent's belief state is best understood as a set of information islands that are individually coherent and logically closed, but need not be jointly coherent and logically closed, nor uniformly accessible for guiding the agent's actions across different deliberative contexts. Expressivism is a widely discussed thesis on the mental states conventionally expressed by certain categories of declarative discourse, saying, roughly, that prominent forms of declarative utterance should be taken to express something other than the speaker's outright acceptance of a representational content. In this paper, I argue that specific versions of these views-Topical Fragmentation and Semantic Expressivism-present a mutually beneficial combination. In particular, I argue that combining Topical Fragmentation with Semantic Expressivism fortifies the former against (what I call) the Connective Problem, a pressing objection that lays low more familiar forms of Fragmentation. This motivates a novel semantic framework: Fragmented Semantic Expressivism, a bilateral state-based system that (i) prioritizes fragmentationist acceptance conditions over truth conditions, (ii) treats representational content as hyperintensional, and (iii) gives expressivistic acceptance conditions for the standard connectives. Finally, we discuss the distinctive advantages of this system in answering the problem of logical omniscience and Karttunen's problem for epistemic 'must'.

2025, Lecture Notes in Computer Science

We describe SICK-BR, a Brazilian Portuguese corpus annotated with inference relations and semantic relatedness between pairs of sentences. SICK-BR is a translation and adaptation of the original SICK, a corpus of English sentences used in... more

We describe SICK-BR, a Brazilian Portuguese corpus annotated with inference relations and semantic relatedness between pairs of sentences. SICK-BR is a translation and adaptation of the original SICK, a corpus of English sentences used in several semantic evaluations. SICK-BR consists of around 10k sentence pairs annotated for neutral/contradiction/entailment relations and for semantic relatedness, using a 5-point scale. Here we describe the strategies used for the adaptation of SICK, which preserve its original inference and relatedness relation labels in the SICK-BR Portuguese version. We also discuss some issues with the original corpus and how we might deal with them.

2025, Underreview

This paper undertakes a foundational inquiry into logical inferentialism with particular emphasis on the normative standards it establishes and the implications these pose for classical logic. The central question addressed herein is:... more

This paper undertakes a foundational inquiry into logical inferentialism, with particular emphasis on the normative standards it establishes and the implications these pose for classical logic. The central question addressed herein is: 'What is Logical Inferentialism & How do its Standards challenge Classical Logic?' In response, the study begins with a survey of the three principal proof systems, namely David Hilbert's axiomatic systems and Gerhard Gentzen's natural deduction and sequent calculus, thus situating logical inferentialism within a broader proof-theoretic landscape. The investigation then turns to the core tenets of logical inferentialism, focusing on the role of introduction and elimination rules in determining the meaning of logical constants. Through this framework, natural deduction is evaluated as a system that satisfies key inferentialist virtues, including harmony, conservativeness, and the subformula property. Ultimately, the paper presents challenges to classical logic from intuitionist and revisionist perspectives, arguing that certain classical principles fail to uphold inferentialist standards, consequently undermining their legitimacy within a meaning-theoretic framework.
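The role of introduction and elimination rules in fixing the meaning of a connective can be made concrete in a proof assistant. A minimal Lean 4 sketch for conjunction:

```lean
-- Introduction rule for ∧: from proofs of P and Q, conclude P ∧ Q.
example (P Q : Prop) (hp : P) (hq : Q) : P ∧ Q := And.intro hp hq

-- Elimination rules recover exactly what the introduction rule required.
example (P Q : Prop) (h : P ∧ Q) : P := h.left
example (P Q : Prop) (h : P ∧ Q) : Q := h.right
```

Harmony, in the inferentialist's sense, is visible here: the elimination rules extract exactly the two proofs that the introduction rule demanded, no more and no less.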

2025, The Review of Socionetwork Strategies

… develop and extend the case law data for COLIEE, and to Young Yik Rhim of Intellicon in Seoul, who has been our advocate since the beginning of COLIEE. In addition, a number of Japanese colleagues... more

… develop and extend the case law data for COLIEE, and to Young Yik Rhim of Intellicon in Seoul, who has been our advocate since the beginning of COLIEE. In addition, a number of Japanese colleagues (in addition to the organizing team of Ken Satoh, Yoshinobu Kano, and Masaharu Yoshioka) have contributed to the extension and curation of the statute law data for the COLIEE competition.

2025

We present the evaluation of the legal question answering Competition on Legal Information Extraction/Entailment (COLIEE) 2017. The COLIEE 2017 Task consists of two sub-Tasks: legal information retrieval (Task 1), and recognizing... more

We present the evaluation of the legal question answering Competition on Legal Information Extraction/Entailment (COLIEE) 2017. The COLIEE 2017 Task consists of two sub-Tasks: legal information retrieval (Task 1), and recognizing entailment between articles and queries (Task 2). Participation was open to any group based on any approach, and the tasks attracted 10 teams. We received 9 submissions to Task 1 (for a total of 17 runs), and 8 submissions to Task 2 (for a total of 20 runs).

2025, Proceedings of the 16th edition of the International Conference on Artificial Intelligence and Law

Our legal question answering system combines legal information retrieval and textual entailment, and exploits semantic information using a logic-based representation. We have evaluated our system using the data from the competition on... more

Our legal question answering system combines legal information retrieval and textual entailment, and exploits semantic information using a logic-based representation. We have evaluated our system using the data from the competition on legal information extraction/entailment (COLIEE)-2017. The competition focuses on the legal information processing required to answer yes/no questions from Japanese legal bar exams, and it consists of two phases: ad hoc legal information retrieval (Phase 1), and textual entailment (Phase 2). Phase 1 requires the identification of Japanese civil law articles relevant to a legal bar exam query. For this phase, we have used an information retrieval approach using TF-IDF combined with a simple language model. Phase 2 requires a yes/no decision for previously unseen queries, which we approach by comparing the approximate meanings of queries with relevant statutes. Our meaning extraction process uses a selection of features based on a kind of paraphrase, coupled with a condition/conclusion/exception analysis of articles and queries. We also extract and exploit negation patterns from the articles. We construct a logic-based representation as a semantic analysis result, and then classify questions into easy and difficult types by analyzing the logic representation. If a question is in our easy category, we simply obtain the entailment answer from the logic representation; otherwise, we use an unsupervised learning method to obtain the entailment answer. Experimental evaluation shows that our result ranked highest in Phase 2 amongst all COLIEE-2017 competitors.
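The TF-IDF retrieval step of Phase 1 can be sketched in a few lines. This is a generic illustration of TF-IDF ranking, not the authors' implementation (which also folds in a simple language-model score); the sample articles are invented:

```python
import math
from collections import Counter

def tfidf_rank(query: str, articles: list[str]) -> list[int]:
    """Rank article indices by summed TF-IDF weight of the query terms.

    A minimal sketch of TF-IDF retrieval over statute articles.
    """
    docs = [a.lower().split() for a in articles]
    n = len(docs)
    # Document frequency of each term, then inverse document frequency.
    df = Counter(t for d in docs for t in set(d))
    idf = {t: math.log(n / df[t]) for t in df}

    def score(doc_tokens: list[str]) -> float:
        tf = Counter(doc_tokens)
        return sum(tf[t] * idf.get(t, 0.0) for t in query.lower().split())

    return sorted(range(n), key=lambda i: score(docs[i]), reverse=True)
```

In a full system the ranked articles would then feed the Phase 2 entailment decision.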

2025

Tree-structured recursive neural networks (TreeRNNs) for sentence meaning have been successful for many applications, but it remains an open question whether the fixed-length representations that they learn can support tasks as demanding... more

Tree-structured recursive neural networks (TreeRNNs) for sentence meaning have been successful for many applications, but it remains an open question whether the fixed-length representations that they learn can support tasks as demanding as logical deduction. We pursue this question by evaluating whether two such models, plain TreeRNNs and tree-structured neural tensor networks (TreeRNTNs), can correctly learn to identify logical relationships such as entailment and contradiction using these representations. In our first set of experiments, we generate artificial data from a logical grammar and use it to evaluate the models' ability to learn to handle basic relational reasoning, recursive structures, and quantification. We then evaluate the models on the more natural SICK challenge data. Both models perform competitively on the SICK data and generalize well in all three experiments on simulated data, suggesting that they can learn suitable representations for logical inference in natural language.
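The artificial data described above labels pairs of expressions with logical relations such as entailment and contradiction. In the spirit of natural-logic relations, those labels can be illustrated over set denotations; the simplified relation inventory below is an assumption about the setup, not the paper's exact scheme:

```python
def relation(a: set, b: set, domain: set) -> str:
    """Label the logical relation between two set denotations over a domain."""
    if a == b:
        return "equivalence"
    if a <= b:
        return "entailment"          # everything in a is in b
    if b <= a:
        return "reverse entailment"
    if not (a & b):
        # Disjoint denotations: a contradiction if they exhaust the domain,
        # otherwise mere alternation (both can be false together).
        return "contradiction" if a | b == domain else "alternation"
    return "independence"
```

Training pairs can then be generated by enumerating expressions, computing their denotations, and labelling each pair with `relation`.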

2025

Understanding entailment and contradiction is fundamental to understanding natural language, and inference about entailment and contradiction is a valuable testing ground for the development of semantic representations. However, machine... more

Understanding entailment and contradiction is fundamental to understanding natural language, and inference about entailment and contradiction is a valuable testing ground for the development of semantic representations. However, machine learning research in this area has been dramatically limited by the lack of large-scale resources. To address this, we introduce the Stanford Natural Language Inference corpus, a new, freely available collection of labeled sentence pairs, written by humans doing a novel grounded task based on image captioning. At 570K pairs, it is two orders of magnitude larger than all other resources of its type. This increase in scale allows lexicalized classifiers to outperform some sophisticated existing entailment models, and it allows a neural network-based model to perform competitively on natural language inference benchmarks for the first time.
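An SNLI-style item is simply a labeled premise/hypothesis pair. The word-overlap function below is one classic lexicalized feature for entailment classification; it is a toy illustration, not the paper's feature set, and the example pair is invented:

```python
def overlap(premise: str, hypothesis: str) -> float:
    """Fraction of hypothesis word types that also appear in the premise,
    a classic lexicalized feature for entailment classifiers."""
    p = set(premise.lower().split())
    h = set(hypothesis.lower().split())
    return len(p & h) / len(h)

# One SNLI-style item: a sentence pair with a categorical gold label.
pair = {
    "premise": "A man is playing a guitar on stage",
    "hypothesis": "A man is playing music",
    "label": "entailment",
}
```

At 570K such pairs, even simple features like this become competitive inputs to lexicalized classifiers, which is part of what the corpus's scale enables.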


2025, Humanistyka i Przyrodoznawstwo

For manifestly you have long been aware of what you mean when you use the expression 'being'. We, however, who used to think we understood it, have now become perplexed [Plato, Sophist, 244a]. Martin Heidegger, Being and Time, p. 2. In... more

For manifestly you have long been aware of what you mean when you use the expression 'being'. We, however, who used to think we understood it, have now become perplexed [Plato, Sophist, 244a]. Martin Heidegger, Being and Time, p. 2.

In any case, for a long time I could not free myself from that childish bond with them, which was a kind of fidelity, not yet conscious and therefore not chosen, but binding, as it were, by its very nature. Such fidelity, or even the mere possibility of fidelity, provides, regardless of everything, not only a sense of security; it also determines one's place in the not-yet-suspected space of one's horizon, thereby becoming a sign of one's destiny. Wiesław Myśliwski, Widnokrąg, p. 195.

The task of thinking through what was given expression in Being and Time reveals at once, like an emblem of truthfulness, that the source or beginning is, in its being itself, difficult in the highest degree, if only because its identity imposes its problematic character on us from the very outset: the source is a beginning only in a relative way, becoming not so much a beginning in itself as a beginning for us, and for us only insofar as we draw from it, always counting on the vivifying power of reading, namely that it will become a life-giving spring for us and thus allow us to be who we are, or who we would like to be, in this case thinkers. We count, moreover, on not miscounting in this counting, though such a hope cannot be held, since the countability of what is not only fails to constitute even a preliminary mark of it, but seems rather to be a covering-over of what is in its mode: its mode of being, namely. So we reach for the text in order to understand.
Interpretation, however, reveals its violent impossibility at the very beginning, on the first page, in the first words, thereby indicating that here, at the place of the beginning, begins that which forces one to break off reading for lack of understanding, interrupting the course of the exposition, whose unity or continuity thus begins to present itself to us as a continuity of lack, the inevitable unity of an act of barren reading which: is to be. This un-grasped exposition seems to draw its processual character from the mere fact that the impossibility of interpretation, though it concerns the first words of the text, does not strike the reader at the first encounter: reading reveals its impossibility only when its act, insistently repeatable and repeated, becomes the monotony of stupor, for which the first page of the work becomes the sign of something other than a first contact. For nothing can begin otherwise than as a repetition of a beginning, and so, no less, the impossibility of beginning. The failure of the endeavour, depriving us of the happiness of reading, thereby testifies emphatically to the poor condition of the reader, who

2025, Philosophy of Science

2025, Language Resources and Evaluation

For a language pair such as Chinese and Korean that belong to entirely different language families in terms of typology and genealogy, finding the correspondences is quite obscure in word alignment. We present annotation guidelines for... more

For a language pair such as Chinese and Korean, which belong to entirely different language families in terms of typology and genealogy, finding the correspondences in word alignment is quite obscure. We present annotation guidelines for Chinese-Korean word alignment through contrastive analysis of morpho-syntactic encodings. We discuss the differences in verbal systems that cause most of the linking obscurities in the annotation process. A systematic comparison of verbal systems is conducted by analyzing morpho-syntactic encodings. The viewpoint of grammatical category allows us to define consistent and systematic instructions for linguistically distant languages such as Chinese and Korean. The scope of our guidelines is limited to the alignment between Chinese and Korean, but the instruction methods exemplified in this paper are also applicable in developing systematic and comprehensible alignment guidelines for other languages exhibiting such different linguistic phenomena.


2025, Ex falso sequitur quodlibet

"Ex falso sequitur quodlibet" is a Latin phrase meaning that from what is false any assertion validly follows. However, it is necessary to clarify what is meant by "following" by distinguishing whether the implication referred to is a... more

"Ex falso sequitur quodlibet" is a Latin phrase meaning that from what is false any assertion validly follows. However, it is necessary to clarify what is meant by "following" by distinguishing whether the implication referred to is a material implication or a formal one. In general terms, "following" as used in this context means the logical process of deciding that a further, different proposition is true on the basis of the established truth of certain propositions and on account of them. The process of logical following is called implication, as is the relation connecting its terms.
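The formal (deductive) reading of the principle is itself a one-line theorem in a proof assistant; in Lean 4:

```lean
-- "From falsehood, anything follows": given a proof of False,
-- any proposition P whatsoever can be concluded.
theorem ex_falso_quodlibet (P : Prop) (h : False) : P :=
  False.elim h
```

The material-implication reading corresponds instead to the propositional tautology ¬A → (A → B), which holds simply because a material conditional with a false antecedent is true.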

2025, Informal Logic

This paper presents a way in which formal logic can be understood and reformulated in terms of argumentation that can help us unify formal and informal reasoning. Classical deductive reasoning will be expressed entirely in terms of... more

This paper presents a way in which formal logic can be understood and reformulated in terms of argumentation that can help us unify formal and informal reasoning. Classical deductive reasoning will be expressed entirely in terms of notions and concepts from argumentation so that formal logical entailment is equivalently captured via the arguments that win between those supporting concluding formulae and arguments supporting contradictory formulae. This allows us to go beyond Classical Logic and smoothly connect it with human reasoning, thus providing a uniform argumentation-based view of both informal and formal logic.

2025, Figures de la vérité

The paper's purpose is to articulate a deflationary conception of truth and the view that the notion of truth is critical for rational inquiry. The key to the suggested articulation is the identification of the "reflective stance" as one... more

2025, bioRxiv (Cold Spring Harbor Laboratory)

Large Language Models (LLMs) can be used as repositories of biological and chemical information to generate pharmacological lead compounds. However, for LLMs to focus on specific drug targets typically requires experimentation with... more

Large Language Models (LLMs) can be used as repositories of biological and chemical information to generate pharmacological lead compounds. However, for LLMs to focus on specific drug targets typically requires experimentation with progressively more refined prompts. Results thus become dependent not just on what is known about the target, but also on what is known about prompt-engineering. In this paper, we separate the prompt into domain constraints that can be written in a standard logical form, and a simple text-based query. We investigate whether LLMs can be guided, not by refining prompts manually, but by refining the logical component automatically, keeping the query unchanged. We describe an iterative procedure, LMLF ("Language Models with Logical Feedback"), in which the constraints are progressively refined using a logical notion of generalisation. On any iteration, newly generated instances are verified against the constraint, providing "logical feedback" for the next iteration's refinement of the constraints. We evaluate LMLF using two well-known targets (inhibition of the Janus Kinase 2; and Dopamine Receptor D2) and two different LLMs (GPT-3 and PaLM). We show that LMLF, starting with the same logical constraints and query text, can guide both LLMs to generate potential leads. We find: (a) binding affinities of LMLF-generated molecules are skewed towards higher values than those from existing baselines; (b) LMLF generates molecules skewed towards higher binding affinities than it does without logical feedback; (c) assessment by a computational chemist suggests that LMLF-generated compounds may be novel inhibitors. These findings suggest that LLMs with logical feedback may provide a mechanism for generating new leads without requiring the domain specialist to acquire sophisticated skills in prompt-engineering.
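The generate-verify-refine structure of LMLF can be sketched generically. Here `generate`, `satisfies`, and `refine` are hypothetical stand-ins for the LLM call, the logical constraint check, and the constraint-tightening step; the loop below is a schematic assumption about the procedure's shape, not the paper's actual algorithm:

```python
def lmlf(generate, satisfies, refine, constraint, rounds: int = 3):
    """Generic generate-verify-refine loop in the style of LMLF.

    generate(constraint) -> list of candidate instances (LLM stand-in)
    satisfies(c, x)      -> bool: logical verification of x against c
    refine(c, failures)  -> tightened constraint ("logical feedback")
    """
    accepted = []
    for _ in range(rounds):
        candidates = generate(constraint)
        failures = [x for x in candidates if not satisfies(constraint, x)]
        accepted += [x for x in candidates if satisfies(constraint, x)]
        if failures:
            # Feed the failing instances back into the constraint.
            constraint = refine(constraint, failures)
    return accepted, constraint
```

Note that the query text never changes across rounds; only the logical constraint is updated, which is the point of the method.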