Logical Consequence Research Papers - Academia.edu
2025, Demarcating logic and science: exploring new frontiers
Logical theories are usually seen as true or false in relation to the phenomenon they aim to describe, namely, validity. An alternative view suggests that the laws governing validity are "legislated-true," making logical theories conventional. In this chapter, I propose an intermediate standpoint in which logics exhibit a blend of descriptive and conventional aspects, balanced to fulfill their primary purpose, which is not seen as theoretical, but practical: assisting us in generating and identifying valid inferences. From this perspective, logics are cognitive tools, human creations designed primarily to facilitate the performance of cognitive operations. This viewpoint aligns logic more closely with technology than science and supports a pluralistic approach: the diversity of alternative technical solutions for issues related to the creation and recognition of valid inferences allows for the coexistence of multiple logics.
2025, Filosofia Viva II
This paper aims to answer the question: what is the relationship between philosophy, logic, and artificial intelligence? To answer it, we will survey the foundations of AI, and I will try to show that those foundations are chiefly philosophical and, more precisely, logical in nature. Computer scientists and philosophers alike have shared a common interest: to find out how the human mind is constituted and whether it can be replicated in a non-biological system.
2025
Draft. Final version in: Karen González Fernández (ed.) (2025). Inteligencia Artificial. Enfoques multidisciplinares. Navarra: EUNSA.
2025
In this paper I analyse a notion of fallible logical consequence, which I represent in the metalanguage with a sign of its own; in particular, I examine the notion I take to be adequate for reasoning that operates with "defeasible norms" (certain non-monotonic conditionals that I express in the object language in the form "A>B"). I argue that to account for inferences based on defeasible norms one must appeal to a notion of non-deductive logical consequence that has analogies with the notion of a defeasible conditional (DC), but also important differences. In particular, the notion of fallible logical consequence, unlike DCs, satisfies the principle of cautious monotony, as well as the principle that a consistent set of premises cannot yield contradictory conclusions. Finally, I hold that every DC conceals a strict conditional, in the sense that, for every pair of formulas <A,B>, the DC A>B is equivalent to a strict conditional f(A,B)->B. (I call f(x,y) the "selection function".) Analogously, I claim that for every fallible argument R there is a parallel argument R' that deductively implies the conclusion of R, but in this case the selection function that identifies the premises of R' does not depend on premises and conclusion, but only on the set of premises. I thus recover, for the notion of defeasible logical consequence, properties that are frequently attributed to defeasible consequence notions and that I reject for the conditionals while retaining them for the consequence notion that operates over them. Lastly, I reject, for the defeasible inference relation, an important principle that the logic of DCs lacks but that is usually accepted for defeasible inference: the principle of cautious cut. I thus accept a notion of defeasible consequence that is less permissive than other standard ones but at the same time safer. Keywords: logical consequence, defeasible conditional, fallible consequence.
2025, Theoretical Computer Science
seems to require (at least in part) a persistent repository of information that concurrent agents can query and update. Indeed, most coordination languages are based on a shared data space model. They differ in the details of how actions and processes are defined, but most assume the data space to have a multiset structure, and actions to be rewritings. We find this view too particular and not expressive enough in many practical cases, and set out in this paper to develop a more general theory of actions, abandoning the syntactic rewriting paradigm in favour of a more abstract notion of update based on entailment. Actions may require certain properties to be entailed or not entailed, and the corresponding update is the minimal change, possibly removing information and adding new information, that satisfies the (dis)entailment requirements. We work with abstract situations (standing for information states) ordered under entailment. We show that if a situation space is a coherent, prime algebraic, consistently complete poset then a suitable class of its subsets, which we call definite, corresponds to update operations with suitable generality (any situation can be obtained by an update of any other) and good compositional properties (closure under sequential and synchronous composition). These updates can be seen as unconditional determinate actions, i.e. total functions from situations to situations; these functions are always a composition of a restriction (losing information) and an expansion (adding consistent information). We show that the space of updates is itself an ordered structure similar to a situation space. Then we consider general actions, which may be conditional and nondeterministic. They thus represent arbitrary relations between situations, but are actually more specific, coding intensional rather than extensional behaviour, which is relevant for synchronous composition.
We formulate general actions as (suitably restricted) relations between situations and definite sets, define their synchronous, sequential and choice compositions, and show them to be fully abstract with respect to observing situation transitions under any compositional context. The synchronous and sequential compositions give rise to an intrinsic notion of independence of actions, that reflects their ability to be truly concurrent.
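The restriction-then-expansion decomposition described above can be sketched concretely. In this toy model, situations are sets of atomic facts and entailment is set membership; the function names and the representation are illustrative assumptions, not the paper's abstract situation spaces.

```python
# A toy sketch of update-as-entailment: situations are sets of atomic
# facts, entailment is set membership (illustrative assumptions only).

def update(situation, require, forbid):
    """Minimal change satisfying the (dis)entailment requirements:
    a restriction (losing information) followed by an expansion
    (adding consistent information)."""
    if require & forbid:
        raise ValueError("inconsistent update requirements")
    restricted = situation - forbid   # restriction: drop forbidden facts
    return restricted | require       # expansion: add required facts

def seq(u1, u2):
    """Sequential composition of two updates, themselves total functions."""
    return lambda s: u2(u1(s))

s = {"p", "q"}
u1 = lambda s: update(s, require={"r"}, forbid={"q"})
u2 = lambda s: update(s, require={"q"}, forbid=set())
print(sorted(seq(u1, u2)(s)))  # ['p', 'q', 'r']
```

Note how every update here factors as a restriction followed by an expansion, mirroring the decomposition of determinate actions in the abstract.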
2025, arXiv
This paper investigates the proof-theoretic foundations of double negation introduction (DNI: A ⊢ ¬¬A) and double negation elimination (DNE: ¬¬A ⊢ A) in classical logic. By examining both sequent calculus and natural deduction, it is shown that these rules originate in reductio ad absurdum (RAA): DNI results from deriving ¬¬A via ¬I by discharging [¬A], while DNE arises from deriving A through a reductio on [¬A]. Their significance extends beyond semantic equivalence, for DNI and DNE embody the identity relation A ⟷ ¬¬A as a structural principle of classical logic. The paper demonstrates that both rules possess harmony, ensuring balance between introduction and elimination, and normalisation, which guarantees that derivations reduce to canonical form without detours. These features reveal double negation not as a redundancy, but as a mechanism of proof-theoretic stability, securing the disciplined integration of RAA into classical logic.
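The two derivations the abstract describes can be displayed in standard natural-deduction notation (a conventional rendering, not reproduced from the paper):

```latex
% DNI: derive ¬¬A from A by discharging the assumption [¬A] via ¬I
\[
\dfrac{\dfrac{A \qquad [\lnot A]^{1}}{\bot}\;(\lnot\mathrm{E})}
      {\lnot\lnot A}\;(\lnot\mathrm{I})^{1}
\]
% DNE: derive A from ¬¬A by a reductio on the assumption [¬A]
\[
\dfrac{\dfrac{\lnot\lnot A \qquad [\lnot A]^{1}}{\bot}\;(\lnot\mathrm{E})}
      {A}\;(\mathrm{RAA})^{1}
\]
```

In both trees the discharged assumption is [¬A]; what differs is whether the rule closing the derivation is ¬I (yielding ¬¬A) or RAA (yielding A), which is exactly the asymmetry the paper traces.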
2025, Open Journal of Philosophy
This paper undertakes a foundational inquiry into logical inferentialism, with particular emphasis on the normative standards it establishes and the implications these pose for classical logic. The central question addressed herein is: 'What is Logical Inferentialism & How do its Standards challenge Classical Logic?' In response, the study begins with a survey of the three principal proof systems, that is, David Hilbert's axiomatic systems and Gerhard Gentzen's natural deduction and sequent calculus, thus situating logical inferentialism within a broader proof-theoretic landscape. The investigation then turns to the core tenets of logical inferentialism by focusing on the role of introduction and elimination rules in determining the meaning of logical constants. Through this framework, natural deduction is evaluated as a system that satisfies key inferentialist virtues including harmony, conservativeness and the subformula property. Ultimately, the paper presents challenges to classical logic from intuitionist and revisionist perspectives by arguing that certain classical principles fail to uphold inferentialist standards, consequently undermining their legitimacy within a meaning-theoretic framework.
2025, arXiv (Cornell University)
The $\mathbf{LMT^{\rightarrow}}$ sequent calculus was introduced in Santos (2016). This paper presents a termination proof and a new (more direct) completeness proof for it. $\mathbf{LMT^{\rightarrow}}$ is intended for bottom-up proof search in Propositional Minimal Implicational Logic ($\mathbf{M^{\rightarrow}}$). Termination of the calculus is guaranteed by a rule-application strategy that exhausts all possible combinations. For an initial formula $\alpha$, proofs in $\mathbf{LMT^{\rightarrow}}$ have an upper bound of $|\alpha| \times 2^{|\alpha| + 1 + 2 \times \log_2|\alpha|}$, which together with the system's strategy ensures decidability. $\mathbf{LMT^{\rightarrow}}$ allows the extraction of counter-models from failed proof searches (bicompleteness), i.e., the attempted proof tree of an expanded branch produces a Kripke model that falsifies the initial formula.
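The stated upper bound on proof size is easy to evaluate numerically; the helper name below is an assumption for illustration.

```python
import math

def lmt_bound(n: int) -> float:
    """Upper bound |α| · 2^(|α| + 1 + 2·log₂|α|) on proof size,
    as stated in the abstract, for a formula of size n."""
    return n * 2 ** (n + 1 + 2 * math.log2(n))

# Since 2^(2·log₂ n) = n², the bound simplifies to 2 · n³ · 2ⁿ.
print(lmt_bound(4))  # 2 · 4³ · 2⁴ = 2048.0
```

The simplification 2 · n³ · 2ⁿ makes the exponential growth in formula size explicit, which is consistent with the decidability (but not tractability) claim.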
2025
The article explores the shift from classical binary logic (true and false) to a more complex landscape of logical frameworks. It introduces logical pluralism, which suggests that no single, universally accepted system of logic exists. The paper connects this idea to neutrosophy, a framework that extends fuzzy logic by incorporating a third value: indeterminacy. This triadic approach, where every proposition has a degree of truth (T), falsity (F), and indeterminacy (I), is particularly useful for managing incomplete or contradictory information. The author argues that different logical systems, like different tools, are suited for different problems, and that the search for a single "one true logic" is less important than recognizing the diverse tools available for navigating the complexities of reasoning.
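The triadic (T, I, F) representation can be sketched in a few lines. The min/max connectives below are one common convention in the fuzzy/neutrosophic literature, chosen here for illustration; the article itself does not fix a particular algebra.

```python
# Illustrative sketch of triadic truth values, assuming min/max
# connectives (one common convention, not the only one in use).
from dataclasses import dataclass

@dataclass(frozen=True)
class NeutrosophicValue:
    t: float  # degree of truth
    i: float  # degree of indeterminacy
    f: float  # degree of falsity

    def conj(self, other):
        """Conjunction: truth shrinks, indeterminacy and falsity grow."""
        return NeutrosophicValue(min(self.t, other.t),
                                 max(self.i, other.i),
                                 max(self.f, other.f))

    def neg(self):
        """Negation swaps truth and falsity; indeterminacy is untouched."""
        return NeutrosophicValue(self.f, self.i, self.t)

a = NeutrosophicValue(0.8, 0.2, 0.1)
b = NeutrosophicValue(0.4, 0.5, 0.3)
print(a.conj(b))  # NeutrosophicValue(t=0.4, i=0.5, f=0.3)
```

Note that negation leaves the I component fixed: indeterminacy is precisely the ingredient that neither truth nor falsity absorbs, which is what distinguishes this triadic scheme from ordinary fuzzy logic.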
2025, Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)
Semantic Textual Similarity (STS) seeks to measure the degree of semantic equivalence between two snippets of text. Similarity is expressed on an ordinal scale that spans from semantic equivalence to complete unrelatedness. Intermediate values capture specifically defined levels of partial similarity. While prior evaluations constrained themselves to just monolingual snippets of text, the 2016 shared task includes a pilot subtask on computing semantic similarity on cross-lingual text snippets. This year's traditional monolingual subtask involves the evaluation of English text snippets from the following four domains: Plagiarism Detection, Post-Edited Machine Translations, Question-Answering and News Article Headlines. From the question-answering domain, we include both question-question and answer-answer pairs. The cross-lingual subtask provides paired Spanish-English text snippets drawn from the same sources as the English data as well as independently sampled news data. The English subtask attracted 43 participating teams producing 119 system submissions, while the cross-lingual Spanish-English pilot subtask attracted 10 teams resulting in 26 systems.
2025, Language Resources and Evaluation
We focus on textual entailments mediated by syntax and propose a new methodology to evaluate textual entailment recognition systems on such data. The main idea is to generate a syntactically annotated corpus of pairs of (non-)entailments and to use error mining to identify the most likely sources of errors. To illustrate the approach, we apply this methodology to the Afazio
2025, International Conference on Computational Linguistics
We propose a methodology for investigating how well NLP systems handle meaning-preserving syntactic variations. We start by presenting a method for the semi-automated creation of a benchmark where entailment is mediated solely by meaning-preserving syntactic variations. We then use this benchmark to compare a semantic role labeller and two grammar-based RTE systems. We argue that the proposed methodology (i) supports a modular evaluation of the ability of NLP systems to handle the syntax/semantics interface and (ii) permits focused error mining and error analysis.
2025, "Grenzen des Denkens -Gödels Unvollständigkeitssatz und die Architektur des Wissens"
Gödel's incompleteness theorem ranks among the most profound results of modern logic and has far-reaching implications for mathematics, computer science, and philosophy. This paper approaches the theorem not through formal proofs but through its epistemological and philosophy-of-science significance. Starting from Hilbert's dream of a complete, consistent system, Gödel shows that every sufficiently complex formal system contains true statements that cannot be proved within it. This insight challenges not only the limits of mathematical systems but also our understanding of truth, provability, and knowledge.
The paper traces Gödel's intellectual path in its historical context, connects his work with Kant, Wittgenstein, and Turing, and discusses the consequences for modern conceptions of science. It is shown that Gödel's theorem not only marks a logical boundary but also opens a philosophical window onto an epistemic humility that is urgently needed in times of algorithmic hubris and AI optimism.
The aim is to offer students of mathematics, philosophy, and computer science an interdisciplinary perspective that is both intellectually demanding and existentially affecting. The incompleteness theorem is treated here not as a technical result but as an invitation to reconsider the architecture of our thinking.
2025, Notre Dame Journal of Formal Logic
The paper discusses several first-order modal logics that extend the classical predicate calculus. The model theory involves possible worlds with world-variable domains. The logics rely on the philosophical tenet known as serious actualism in that within modal contexts they allow existential generalization from atomic formulae. The language may or may not have a sign of identity, includes no primitive existence predicate, and has individual constants. Some logics correspond to various standard constraints on the accessibility relation, while others correspond to various constraints on the domains of the worlds. Soundness and strong completeness are proved in every case; a novel method is used for proving completeness.
2025, Synthese
Take a formula of first-order logic which is a logical consequence of some other formulae according to model theory, and in all those formulae replace schematic letters with English expressions. Is the argument resulting from the replacement valid in the sense that the premisses could not have been true without the conclusion also being true? Can we reason from the model-theoretic concept of logical consequence to the modal concept of validity? Yes, if the model theory is the standard one for sentential logic; no, if it is the standard one for the predicate calculus; and yes, if it is a certain model theory for free logic. These conclusions rely inter alia on some assumptions about possible worlds, which are mapped into the models of model theory. Plural quantification is used in the last section, while part of the reasoning is relegated to an appendix that includes a proof of completeness for a version of free logic.
2025, Linguistics and Philosophy
Is there a principled difference between entailments in natural language that are valid solely in virtue of their form or structure and those that are not? This paper advances an affirmative answer to this question, one that takes as its starting point Gareth Evans's suggestion that semantic theory aims to carve reality at the joints by uncovering the semantic natural kinds of the language. I sketch an Evans-inspired account of semantic kinds and show how it supports a principled account of structural entailment. I illustrate the account by application to a case study involving the entailment properties of adverbs; this involves developing a novel proposal about the semantics for adverbs like 'quickly' and 'slowly'. In the course of the discussion we touch on some implications of the account for the place of model-theoretic tools in natural language semantics, and about the relationship between semantic structure and logical consequence as customarily conceived. Richard Montague begins his landmark "English as a Formal Language" (EFL) with the crisp declaration, "I reject the contention that an important theoretical difference exists between formal and natural languages." (Montague 1974, p. 188) He goes on to demonstrate how the tools of model theory can be used to capture the way the truth conditions of sentences systematically depend on the ways they are constructed from basic lexical items for a significant fragment of English. At the time of Montague's writings, Donald Davidson was vigorously arguing that the role of structure in determining truth conditions should be captured, not in model-theoretic terms, but by means of a Tarski-style recursive definition of truth (Davidson 1966, 1967). 
One important advantage of Montague's approach over Davidson's is that it supports a straightforward semantic characterization of a consequence relation for natural language: where Γ is a set of sentences and s is a sentence (all of English), s is a consequence of Γ just in case s is true in every model in which all of the sentences in Γ are true; consequence is simply the preservation of truth across all admissible variations in the interpretation of the basic lexical items. This is one of the most exciting elements of EFL, because it provides an elegant way to capture relations of entailment that hold purely in virtue of the semantically relevant structures of the sentences involved. No comparable conception of structural entailment emerges naturally from the Davidsonian approach. It is thus disappointing to observe that the semantics Montague actually develops in EFL turns out not to yield any non-degenerate instances of structural entailment in the pure sense just defined; only reiteration, the entailment from s to s itself, comes out as structural. Non-trivial cases emerge only once Montague begins to add stipulations concerning the meanings of individual lexical items. For example, constraints on the interpretations of 'not', 'necessarily' and the 'is' of identity are added to secure some of the familiar logical consequences of sentences containing these expressions. But these are extrinsic constraints on the range of admissible interpretations that play no role at all in the account of how structure contributes to meaning,
2025, Philosophy Compass
The term 'logical form' has been called on to serve a wide range of purposes in philosophy, and it would be too ambitious to try to survey all of them in a single essay. Instead, I will focus on just one conception of logical form that has occupied a central place in the philosophy of language, and in particular in the philosophical study of linguistic meaning. This is what I will call the classical conception of logical form. The classical conception, as I will present it in section 1, has (either explicitly or implicitly) shaped a great deal of important philosophical work in semantic theory. But it has come under fire in recent decades, and in sections 2 and 3 I will discuss two of the recent challenges that I take to be most interesting and significant. The classical conception of logical form brings together two strands of thought, from the theory of meaning and philosophical logic, respectively. Let me start by briefly saying something about each of these. It is a familiar fact that the meaning of any given natural language sentence S depends, not only on the meanings of its basic constituents-the words and other basic meaningful components S contains-but also on its semantic structure-the way those constituents combine to form S. Hence (1) and (2) differ in truth conditions, despite sharing all the same words: 1. Homer loves Marge. 2. Marge loves Homer.
2025
It has been shown that the rules of logic for the principle 'Ex Contradictione Quodlibet' (ECQ) do not cause U8 to explode. This is because the antecedent is non-designated and modus ponens blocks detachment for the conclusion. Therefore, ECQ is not a theorem of U8; it is, however, acknowledged to be a hypothetical theorem. This leaves the axioms of disjunction introduction and disjunctive syllogism intact in U8, unlike in other paraconsistent logics. Using traffic-light signals, an example showing paraconsistent behaviour is given for the U8 AND operation. It can be concluded that U8 is a paraconsistent null logic system.
2025, Cuadernos Filosóficos, Segunda Época, FHyA, UNR, Dossier “Voluntarismo e Intelectualismo en la edad media y la modernidad temprana: génesis del problema e intentos de solución”
Abstract: Medieval logicians, like contemporary logicians, learned to use logic to solve philosophical problems. The problems generated by the concept of the will were no exception. In this paper ...
2025, Oxford Handbook for the Philosophy of Logic
In this chapter we explore the topic of logical disagreement. Though disagreement in general has attracted widespread philosophical interest, both in epistemology and philosophy of language, the general issues surrounding disagreement have only rarely been applied to logical disagreement in particular. Here, we develop some of the fascinating semantic and epistemological puzzles to which logical disagreement gives rise. In particular, after distinguishing between different types of logical disagreement, we explore some connections between logical disagreements and deep disagreements over fundamental epistemic principles; we discuss several semantic puzzles that arise on various accounts of the meanings of logical terms; we investigate how such disagreements relate to Kripke’s so-called “Adoption Problem”; and we probe epistemological puzzles that arise from disagreements about logic in the light of central principles from the peer disagreement literature.
2025
With the emergence of large language models and their impressive performance across diverse natural language processing tasks, the question of whether connectionist models can exhibit compositionality without relying on symbolic processing has regained attention in both cognitive science and artificial intelligence. However, interpretability challenges faced by neural networks make it difficult to determine whether they genuinely generalize compositional structures. In this paper, we introduce a targeted evaluation framework designed to directly assess the ability of transformer-based language models to translate natural language sentences into first-order logic expressions, a task that requires both nuanced linguistic understanding and compositional generalization. To demonstrate our framework, we fine-tune two different sizes of the T5 language model using our dataset and evaluate their performance through three experiments employing four task-specific evaluation metrics. Our findings reveal that while these models achieve high scores on test data with logical and structural complexity similar to the training set, their performance drops markedly as sentence length, the number of truth-functional connectives and predicates, and the depth of hierarchical composition increase. More strikingly, the models fail to generalize even when complexity increases solely through repeated applications of a single truth-functional connective.
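As a minimal illustration of how such a translation task can be scored, here is an exact-match metric over normalised logical forms. The function names and the whitespace normalisation are assumptions for illustration; the paper uses four task-specific metrics of its own.

```python
# Hypothetical helper names; the normalisation rule (whitespace
# collapse) is an assumption for illustration only.
def normalise(formula: str) -> str:
    """Collapse whitespace so superficial spacing differences don't count."""
    return " ".join(formula.split())

def exact_match_accuracy(predictions, references):
    """Fraction of predicted logical forms identical to the gold forms."""
    hits = sum(normalise(p) == normalise(r)
               for p, r in zip(predictions, references))
    return hits / len(references)

preds = ["forall x (Dog(x) -> Barks(x))", "exists x Cat(x)"]
golds = ["forall x (Dog(x) -> Barks(x))", "exists x (Cat(x))"]
print(exact_match_accuracy(preds, golds))  # 0.5
```

Exact match is deliberately strict: the second pair above differs only by redundant parentheses yet counts as a miss, which is why logically aware metrics (of the kind the paper employs) matter for this task.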
2025, Proceedings of the Workshop on Bob …
In this paper we propose to investigate the mutual relations among Brandom's three dimensions of semantic inferential articulation, namely, incompatibility entailment, committive, and permissive consequences. Brandom (Unpub.) argues (1) that ...
2025, Glossa: a journal of general linguistics
Our goal in this study was to behaviorally characterize the property (or properties) that render negative quantifiers more complex in processing compared to their positive counterparts (e.g. the pair few/many). We examined two sources: (i) negative polarity; (ii) entailment reversal (aka downward monotonicity). While negative polarity can be found in other pairs in language such as dimensional adjectives (e.g. the pair small/large), only in quantifiers does negative polarity also reverse the entailment pattern of the sentence. By comparing the processing traits of negative quantifiers with those of non-monotone expressions that contain negative adjectives, using a verification task and measuring reaction times, we found that negative polarity is cognitively costly, but in downward monotone quantifiers it is even more so. We therefore conclude that both negative polarity and downward monotonicity contribute to the processing complexity of negative quantifiers.
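The entailment-reversal property at issue can be illustrated extensionally, treating many and few as cardinality thresholds over a toy domain. This simplified semantics is an assumption for illustration, not the paper's experimental materials.

```python
from itertools import chain, combinations

def subsets(s):
    """All subsets of s, as sets."""
    s = list(s)
    return [set(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def many(A, B, k=2):
    """'Many A are B', read as a cardinality threshold."""
    return len(A & B) >= k

def few(A, B, k=1):
    """'Few A are B', read as a cardinality threshold."""
    return len(A & B) <= k

dom = {"al", "bo", "cy"}

# Upward monotonicity of 'many': if it holds of a scope set,
# it holds of every superset of that scope.
assert all(many(dom, big)
           for small in subsets(dom) if many(dom, small)
           for big in subsets(dom) if small <= big)

# Downward monotonicity (entailment reversal) of 'few': if it holds
# of a scope set, it holds of every subset of that scope.
assert all(few(dom, small)
           for big in subsets(dom) if few(dom, big)
           for small in subsets(dom) if small <= big)
```

The second check is the entailment reversal discussed above: from "few students ran" one may infer "few students ran fast", the inverse of the direction licensed by "many".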
2025
It is argued that the assertion sign, '⊢', in Principia Mathematica can be taken as imperatival. It indicates that what follows it is to be accepted as true. Whereas axioms are unconditional imperatives, rules of inference are conditional ...
2025
Civil society forums have historically been heralded as critical spaces for democratic engagement, collective agency, and the articulation of grassroots interests in political processes. Yet, the increasing phenomenon of political deployment within these forums represents a fraught intersection between genuine emancipatory potential and the reproduction of hegemonic power dynamics that undermine their foundational ideals. This complex tension must be analyzed through a multidimensional lens that takes into account the political economy of power, epistemic violence, and the ethical responsibility inherent in consequence management. Drawing on decolonial insights, cultural-political analyses, and psychological and philosophical inquiries, this discourse unpacks the layered implications of political deployment, interrogating its effects on autonomy, identity, and the possibilities for genuine social transformation.
2025, 2015 16th IEEE International Symposium on Computational Intelligence and Informatics (CINTI)
We propose a system for automated essay grading using ontologies and textual entailment. The process of textual entailment is guided by hypotheses, which are extracted from a domain ontology. Textual entailment checks whether the truth of the hypothesis follows from a given text. We apply textual entailment to compare students' answers to a model answer obtained from the ontology. We validated the solution against various essays written by students in the chemistry domain.
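The pipeline this abstract describes (extract hypotheses from a domain ontology, then test each one against the student's text) can be sketched in a few lines. Everything below is illustrative: the triple format, the word-overlap stand-in for a real entailment engine, and the scoring rule are assumptions, not the paper's actual system.

```python
# Minimal sketch of ontology-guided grading. A toy "ontology" is just a
# list of subject-predicate-object triples; each is verbalized into a
# hypothesis and checked against the student's answer.

def verbalize(triple):
    subj, pred, obj = triple
    return f"{subj} {pred} {obj}"

def entails(text, hypothesis, threshold=0.7):
    """Stand-in for a real textual-entailment engine: the hypothesis counts
    as 'entailed' if enough of its words appear in the text."""
    t_words = set(text.lower().split())
    h_words = set(hypothesis.lower().split())
    return len(h_words & t_words) / len(h_words) >= threshold

def grade(answer, ontology_triples):
    """Fraction of ontology-derived hypotheses entailed by the answer."""
    hypotheses = [verbalize(t) for t in ontology_triples]
    hits = sum(entails(answer, h) for h in hypotheses)
    return hits / len(hypotheses)

triples = [("water", "is composed of", "hydrogen and oxygen"),
           ("oxidation", "involves", "loss of electrons")]
score = grade("Water is composed of hydrogen and oxygen atoms.", triples)
```

In a real system the overlap test would be replaced by a trained entailment classifier, which is exactly the component the abstract's approach supplies.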
2025
We study the lattice [C_o , S] of order logics with respect to the Scott topology, focusing on the distribution and structural properties of logics with the parity property (PP) and oddity property (OP). We show that the class of logics with PP forms a Scott-closed set, while OP is Scott-open. This topological perspective enables the use of compactness and Zorn's Lemma to construct maximal non-implicational logics with prescribed properties. The interplay between order-theoretic and topological methods yields new insights into the classification and structure of order logics.
2025, arXiv (Cornell University)
Questions concerning the proof-theoretic strength of classical versus nonclassical theories of truth have received some attention recently. A particularly convenient case study concerns classical and nonclassical axiomatizations of fixed-point semantics. It is known that nonclassical axiomatizations in four- or three-valued logics are substantially weaker than their classical counterparts. In this paper we consider the addition of a suitable conditional to First-Degree Entailment, a logic recently studied by Hannes Leitgeb under the label HYPE. We show in particular that, by formulating the theory PKF over HYPE, one obtains a theory that is sound with respect to fixed-point models, while being proof-theoretically on a par with its classical counterpart KF. Moreover, we establish that its schematic extension (in the sense of Feferman) is as strong as the schematic extension of KF, thus matching the strength of predicative analysis.
2025, arXiv (Cornell University)
Weighted knowledge bases for description logics with typicality under a "concept-wise" multipreferential semantics provide a logical interpretation of MultiLayer Perceptrons. In this context, Answer Set Programming (ASP) has been shown to be suitable for addressing defeasible reasoning in the finitely many-valued case, providing a Π^p_2 upper bound on the complexity of the problem, while leaving the exact complexity unknown and providing only a proof-of-concept implementation. This paper fills this gap by providing a P^NP[log]-completeness result and new ASP encodings that deal with weighted knowledge bases with large search spaces.
2025, Law, Probability and Risk
Inference in court is subject to scrutiny for structural correctness (e.g. deductive or non-monotonic validity) and probative weight in determinations such as logical relevancy and sufficiency of evidence. These determinations are made by judges or informally by jurors who typically have little, if any, training in formal or informal logical forms. This article explores the universal sufficiency of a single intuitive categorical natural language logical form (i.e. 'defeasible class-inclusion transitivity', DCIT) for facilitating such determinations and explores its effectiveness for constructing any typical inferential network in court. This exploration includes a comparison of the functionality of hybrid branching tree-like argument structures with the homogeneous linear path argument structure of DCIT. The practicality of customary dialectical argument semantics and conceptions of probative weight are also examined, with alternatives proposed. Finally, the issues of intelligibility and acceptability by end users in court of logical models are examined.
2025, Klima G. Consequence. In: Dutilh Novaes C, Read S, eds. The Cambridge Companion to Medieval Logic. Cambridge Companions to Philosophy. Cambridge University Press; 2016:316-341.
Gyula Klima 1. The limitations of Aristotelian syllogistic, and the need for non-syllogistic consequences Medieval theories of consequences are theories of logical validity, providing tools to judge the correctness of various forms of reasoning. Although Aristotelian syllogistic was regarded as the primary tool for achieving this, the limitations of syllogistic with regard to valid non-syllogistic forms of reasoning, as well as the limitations of formal deductive systems in detecting fallacious forms of reasoning in general, naturally provided the theoretical motivation for its supplementation with theories dealing with non-syllogistic, non-deductive, as well as fallacious inferences. We can easily produce deductively valid forms of inference that are clearly not syllogistic, as in propositional logic or in relational reasoning, or even other types of sound reasoning that are not strictly deductively valid, such as enthymemes, probabilistic arguments, and inductive reasoning, while we can just as easily provide examples of inferences that appear to be legitimate instances of syllogistic forms, yet are clearly fallacious (say, because of equivocation). For Aristotle himself, this sort of supplementation of his syllogistic was provided mostly in terms of the doctrine of "immediate inferences" in his On Interpretation, various types of non-syllogistic or even non-deductive inferences in the Topics, and the doctrine of logical fallacies, in his On Sophistical Refutations. Taking their cue primarily from Aristotle (but drawing on Cicero, Boethius, and others as well), medieval logicians worked out in systematic detail various theories of non-syllogistic inferences, sometimes as supplementations of Aristotelian syllogistic, sometimes as merely useful devices taken to be reducible to syllogistic, and sometimes as more comprehensive theories of valid inference, containing syllogistic as a special, and important, case.
2025, arXiv (Cornell University)
Large Language Models (LLMs) like ChatGPT and Llama have revolutionized natural language processing and search engine dynamics. However, these models incur exceptionally high computational costs. For instance, GPT-3 consists of 175 billion parameters, and inference demands billions of floating-point operations. Caching is a natural solution to reduce LLM inference costs on repeated queries. However, existing caching methods can neither find semantic similarities among LLM queries nor operate effectively on contextual queries, leading to unacceptable false hit-and-miss rates. This paper introduces MeanCache, a user-centric semantic cache for LLM-based services that identifies semantically similar queries to determine cache hit or miss. Using MeanCache, the response to a user's semantically similar query can be retrieved from a local cache rather than re-querying the LLM, thus reducing costs, service provider load, and environmental impact. MeanCache leverages Federated Learning (FL) to collaboratively train a query similarity model without violating user privacy. By placing a local cache on each user's device and using FL, MeanCache reduces latency and costs and enhances model performance, resulting in lower false-hit rates. MeanCache also encodes context chains for every cached query, offering a simple yet highly effective mechanism to discern contextual query responses from standalone queries. Our experiments, benchmarked against the state-of-the-art caching method, reveal that MeanCache attains an approximately 17% higher F-score and a 20% increase in precision during semantic cache hit-and-miss decisions while performing even better on contextual queries. It also reduces the storage requirement by 83% and accelerates semantic cache hit-and-miss decisions by 11%.
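The core mechanism described above, reusing a cached response when a new query is semantically close to a past one, can be sketched with a toy embedding. A crude bag-of-words vector stands in for MeanCache's learned, federated similarity model, and the 0.8 threshold is purely illustrative.

```python
# Toy semantic cache: embed each query, reuse a cached response when
# cosine similarity to a past query clears a threshold.
import math
from collections import Counter

def embed(query):
    # Bag-of-words stand-in for a learned embedding model.
    return Counter(query.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.entries = []  # list of (embedding, response) pairs

    def get(self, query):
        """Return the best cached response above the threshold, else None."""
        q = embed(query)
        best, best_sim = None, 0.0
        for emb, response in self.entries:
            sim = cosine(q, emb)
            if sim >= self.threshold and sim > best_sim:
                best, best_sim = response, sim
        return best

    def put(self, query, response):
        self.entries.append((embed(query), response))

cache = SemanticCache(threshold=0.8)
cache.put("what is the capital of France", "Paris")
hit = cache.get("what is the capital of France ?")  # near-duplicate query
miss = cache.get("how do transformers work")        # semantically unrelated
```

The sketch omits the context chains and the privacy-preserving federated training of the similarity model, which are the paper's actual contributions.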
2025, Journal of Computer System and Informatics
Abstract-The COVID-19 pandemic caused physical closures that have turned education into an exclusively "online learning" model. Zoom was used as the reference platform to evaluate perceived usability. Students found it less collaborative, less interactive, and boring. From this perspective, the usability of online learning platforms is now a critical factor, especially since no physical classes are held. User-Centered Design (UCD) was chosen for this study, and the System Usability Scale (SUS) method was used to evaluate the interface. The aim of this study is to analyze the user experience, design solutions, and evaluate a user interface that can meet users' needs. A pre-survey assessed the difficulties of the Zoom application based on user experience, and a post-survey examined whether the improved design could help students use the Zoom application for online learning. The System Usability Scale (SUS) questionnaire approach was then used to measure the system's usability. After the UCD process was completed, the researchers conducted a follow-up survey. The results show an SUS score of 85.12. As a result, the previously low acceptability range was raised to acceptable, and the grade scale was reclassified as B. The Zoom program now has more features, is easier to use, and meets students' needs.
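For reference, the SUS score of 85.12 reported above comes from the standard 10-item scoring rule: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is multiplied by 2.5 to give a 0-100 score. A minimal sketch:

```python
# Standard System Usability Scale (SUS) scoring for ten Likert responses.

def sus_score(responses):
    """responses: ten Likert answers (1-5), item 1 first."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses in the range 1-5")
    # enumerate index 0 corresponds to item 1, an odd-numbered item.
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Best possible answers: 5 on the odd (positive) items, 1 on the even
# (negative) items.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # → 100.0
```

A score of 85.12 therefore sits well above the commonly cited acceptability benchmark of 68, consistent with the study's grade-B classification.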
2025
Argues for a 3-valued logic of vagueness. In contrast to other 3-valued approaches, this logic is an extension of classical logic, owing to the use of both a Boolean and a predicate negation. The approach is justified by semantic considerations. Critical issues like Sorites reasoning, higher-order vagueness, and 'penumbral truth' are discussed from this perspective.
2025, Notre Dame Journal of Formal Logic
It is here argued that Russell's Principles of Mathematics contains an intriguing idea about how to demarcate logical concepts from nonlogical ones. On this view, implication and generality emerge as the two fundamental logical concepts. Russell's 1903 proposals for defining other logical concepts from these basic ones are examined and extended. Despite its attractiveness, the proposal is ultimately unsatisfactory because of problems about defining negation and existential quantification.
2025
Abstract—Depression detection is nowadays essential for supporting depressed people. Detecting emotional disturbance in people who suffer from depression is currently notable, and it helps doctors and psychologists with detection. Social networks can be utilized to identify depressive content and thus depressed people. To accomplish this, Twitter is used to collect the most recent tweets related to depression. This is done with the PHQ-9 technique, which classifies depression into 9 degrees, each represented by a set of words. Using this classification, the model can alert users who need to visit a psychiatrist or consult a psychologist as soon as possible based on their social content. The collected dataset is then trained using deep learning and then experimented with different tweets from the collected datas...
2025, International Journal of Computer Applications
Variability of semantic expression is a fundamental phenomenon of natural language, where the same meaning can be expressed by different texts. The process of inferring a text from another is called textual entailment. Textual entailment is useful in a wide range of applications, including question answering, summarization, text generation, and machine translation. The recognition of textual entailment is one of the recent challenges of the Natural Language Processing (NLP) domain. This paper summarizes key ideas from the area of textual entailment recognition by considering in turn the different recognition models. The paper points to prominent testing data, training data, resources, and performance evaluation for each model. This paper also compares textual entailment models according to the method used, the result of each method, and the strengths and weaknesses of each method.
2025, Computational Linguistics
Against the backdrop of the ever-improving Natural Language Inference (NLI) models, recent efforts have focused on the suitability of the current NLI datasets and on the feasibility of the NLI task as it is currently approached. Many of the recent studies have exposed the inherent human disagreements of the inference task and have proposed a shift from categorical labels to human subjective probability assessments, capturing human uncertainty. In this work, we show how neither the current task formulation nor the proposed uncertainty gradient are entirely suitable for solving the NLI challenges. Instead, we propose an ordered sense space annotation, which distinguishes between logical and common-sense inference. One end of the space captures non-sensical inferences, while the other end represents strictly logical scenarios. In the middle of the space, we find a continuum of common-sense, namely, the subjective and graded opinion of a “person on the street.” To arrive at the proposed...
2025, British Journal for the History of Philosophy
The distinction between formal and material consequence was introduced into medieval logic in the fourteenth century. Authors widely adopted the new terms but disagreed on their definition. The so-called Parisian tradition regarded a formal consequence as one that was valid for any substitution of categorematic terms, whereas the so-called British tradition required that the meaning of the consequent be contained in that of the antecedent. The former criterion resembles our model-theoretic definition of logical consequence, but it was the latter that, it has been claimed, was more popular at the time. Why? I argue that the question has no answer because the contradistinction of substitution and containment does not stand up to scrutiny. I base my argument on selected texts from various fourteenth-century authors, including Walter Burley, Nicholas Drukken of Denmark, Richard Lavenham, and Peter of Mantua. Instead of two distinct criteria, one of which is favoured over the other, we find various ways of mixing the two and gradual developments towards a hybrid view. I would say that both traditions made use of a substitutional criterion and that they only disagreed on what is to be substituted and what is not, i.e. what counts as form.
2025, Synthese
Fragmentation is a widely discussed thesis on the architecture of mental content, saying, roughly, that the content of an agent's belief state is best understood as a set of information islands that are individually coherent and logically closed, but need not be jointly coherent and logically closed, nor uniformly accessible for guiding the agent's actions across different deliberative contexts. Expressivism is a widely discussed thesis on the mental states conventionally expressed by certain categories of declarative discourse, saying, roughly, that prominent forms of declarative utterance should be taken to express something other than the speaker's outright acceptance of a representational content. In this paper, I argue that specific versions of these views-Topical Fragmentation and Semantic Expressivism-present a mutually beneficial combination. In particular, I argue that combining Topical Fragmentation with Semantic Expressivism fortifies the former against (what I call) the Connective Problem, a pressing objection that lays low more familiar forms of Fragmentation. This motivates a novel semantic framework: Fragmented Semantic Expressivism, a bilateral state-based system that (i) prioritizes fragmentationist acceptance conditions over truth conditions, (ii) treats representational content as hyperintensional, and (iii) gives expressivistic acceptance conditions for the standard connectives. Finally, we discuss the distinctive advantages of this system in answering the problem of logical omniscience and Karttunen's problem for epistemic 'must'.
2025, Lecture Notes in Computer Science
We describe SICK-BR, a Brazilian Portuguese corpus annotated with inference relations and semantic relatedness between pairs of sentences. SICK-BR is a translation and adaptation of the original SICK, a corpus of English sentences used in several semantic evaluations. SICK-BR consists of around 10k sentence pairs annotated for neutral/contradiction/entailment relations and for semantic relatedness, using a 5-point scale. Here we describe the strategies used for the adaptation of SICK, which preserve its original inference and relatedness relation labels in the SICK-BR Portuguese version. We also discuss some issues with the original corpus and how we might deal with them.
2025, The Review of Socionetwork Strategies
…develop and extend the case law data for COLIEE, and to Young Yik Rhim of Intellicon in Seoul, who has been our advocate since the beginning of COLIEE. In addition, a number of Japanese colleagues (in addition to the organizing team of Ken Satoh, Yoshinobu Kano, and Masaharu Yoshioka) have contributed to the extension and curation of the statute law data for the COLIEE competition.
2025
We present the evaluation of the legal question answering Competition on Legal Information Extraction/Entailment (COLIEE) 2017. The COLIEE 2017 task consists of two sub-tasks: legal information retrieval (Task 1) and recognizing entailment between articles and queries (Task 2). Participation was open to any group based on any approach, and the tasks attracted 10 teams. We received 9 submissions to Task 1 (for a total of 17 runs) and 8 submissions to Task 2 (for a total of 20 runs).