Jean Sallantin
Papers by Jean Sallantin
In this article we discuss the foundations of a scientific project shared by computer scientists, cognitive scientists and economists, whose goal is to model and experiment with situations of collective research in science. We have developed a platform ...
Abstract. In Educational literature, Discovery Learning appears as an approach in which the learner builds up his/her own knowledge by performing experiments within a domain and inferring/increasing rules as a result. Such a constructivist approach has been largely exploited in the design of computational artifacts with learning purposes, in what is today known as Discovery Learning Environments (DLEs). In this paper we are concerned with the design and usage of particular DLEs in which learning events occur as a consequence of detecting and overcoming contradictions during human/machine cooperative work. We first introduce a model of an agent capable of handling such an approach to learning, highlighting the exchanges that the agent should promote with a human learner. The model is based on the scientific rationale, particularly the empirical approach guided by theory-experiment confrontation. We then reinforce the interest of the model for t...
Lecture Notes in Computer Science, 2000
We present in this paper our reflections on the requirements of new architectures and languages for the Web, confronted with those put forward by qualified scientists such as McCarthy [1] and Wegner [2]. The contribution highlights whether and how these reflections may be concretely realized by means of extensions of non-standard models and tools that we have already experimented with and that appeared in previous papers (the STROBE model and Phi Calculus). We conclude with the preliminary specifications of a new language for modeling and programming Interactions, called C+C, which represents our approach constructively, privileging the communicational aspects among Autonomous Agents over the more traditional algorithmic ones.
Le Centre pour la Communication Scientifique Directe - HAL - Diderot, Oct 1, 2019
Background. Since 2006, at the Faculty of Sport Sciences of Montpellier, to train students in the use of sports video analysis with general and specific software, students must build 4 specific applications. These ICT skills are part of those defined by the Ministry of Higher Education, and the common feature of these applications is the use of "delayed video" analysis. The students' learning journey is supported by online tutorials. To take account of the ever-increasing efficiency of new technologies and go further in video analysis, we have added a step (a fifth application): "live video" analysis, excluding any backtracking of the video. This type of work can help decision-making in everyday life and therefore concerns the world of sport. Problem. The purpose of this work was to observe students analyzing a recorded video sequence without video backtracking, as if it were live, as a coach would. The feasibility conditions of such an experiment led us to study a judo combat by appreciating or estimating the interactions between different criteria as a coach. Method. After training, Masters students (n=12) who specialized in video analysis and in judo performance analyzed the same combat three times, assessing the balance of power (a subjective criterion) between two judokas. The process required the student observers to identify whether a judoka was "dominant, i.e. in a favorable organization to win" or "dominated", while classifying his/her actions as managed in "programmed/conscious" or "automatic, i.e. reflex organization" modes. Nonparametric Kolmogorov-Smirnov and Wilcoxon tests were used to analyze the data collected by the students. Conclusions. These tests indicated that the analyses were identical and stable at the intra-observer (p=[0.095: 0.999], p=[0.219: 1.0]) and group levels (p=0.224; p=0.406). We propose to include this methodology of analysis in Masters students' training programs and to develop these live analyses for coaches, teachers or referees.
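The statistical comparison described above can be reproduced with standard nonparametric tests. The sketch below is illustrative only: the observer ratings, sample size and variable names are hypothetical placeholders, not the study's data; it simply shows how two viewing passes over the same combat could be compared with the Kolmogorov-Smirnov and Wilcoxon tests named in the abstract.

```python
# Illustrative sketch: comparing two viewing passes of the same judo combat
# by one observer, using the nonparametric tests named in the abstract.
# The ratings below are made-up placeholders, not the study's data.
import numpy as np
from scipy import stats

# Per-time-step "balance of power" codes (e.g. 1 = dominant, 0 = dominated)
pass_1 = np.array([1, 1, 0, 1, 0, 0, 1, 1, 1, 0])
pass_2 = np.array([1, 1, 0, 1, 1, 0, 1, 1, 1, 0])

# Two-sample Kolmogorov-Smirnov test: do the two passes follow the same distribution?
ks_stat, ks_p = stats.ks_2samp(pass_1, pass_2)

# Wilcoxon signed-rank test on the paired differences between passes.
# zero_method="zsplit" keeps time steps where the two passes agree.
w_stat, w_p = stats.wilcoxon(pass_1, pass_2, zero_method="zsplit")

print(f"KS: stat={ks_stat:.3f} p={ks_p:.3f}")
print(f"Wilcoxon: stat={w_stat:.3f} p={w_p:.3f}")
```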
We start from the geometrical-logical extensions of Aristotle's square in [Bla66], [Pel06] and [Mor04], and study them from both syntactic and semantic points of view. Recall that Aristotle's square under its modal form has the following four vertices: A is ◻α, E is ◻¬α, I is ¬◻¬α and O is ¬◻α, where α is a logical formula and ◻ is a modality which can be defined axiomatically within a particular logic known as S5 (classical or intuitionistic, depending on whether ¬ is involutive or not) modal logic. [Béz03] has proposed extensions which can be interpreted respectively within paraconsistent and paracomplete logical frameworks. [Pel06] has shown that these extensions are subfigures of a tetraicosahedron whose vertices are actually obtained by closure of {α, ◻α} under the logical operations {¬, ∧, ∨}, under the assumption of classical S5 modal logic. We pursue this research on the geometrical-logical extensions of Aristotle's square: first we list all modal squares of opposition. We show that if the vertices of that geometrical figure are logical formulae and if the sub-alternation edges are interpreted as logical implication relations, then the underlying logic is none other than classical logic. Then we consider a higher-order extension introduced by [Mor04], and we show that the same tetraicosahedron plays a key role when additional modal operators are introduced. Finally we discuss the relation between the logic underlying these extensions and the resulting geometrical-logical figures.
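For readers less familiar with the modal square, the display below lays out the four vertices and the traditional oppositions under the standard S5 reading; it is a reconstruction for illustration, not a figure taken from the paper.

```latex
% Modal square of opposition under S5 (illustrative reconstruction; amsmath assumed)
\[
\begin{array}{ccc}
  A:\ \Box\alpha & \text{contraries} & E:\ \Box\neg\alpha \\[4pt]
  \text{subaltern}\ \big\downarrow & \text{contradictories on the diagonals} & \big\downarrow\ \text{subaltern} \\[4pt]
  I:\ \neg\Box\neg\alpha\ (\equiv \Diamond\alpha) & \text{subcontraries} & O:\ \neg\Box\alpha\ (\equiv \Diamond\neg\alpha)
\end{array}
\]
```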
tripleC: Communication, Capitalism & Critique. Open Access Journal for a Global Sustainable Information Society, 2011
This article focuses on the efficiency of scientific knowledge involved in the context of managing a particular socio-environmental system such as the Amazon. In the first part, we introduce the tools currently used to create and disseminate knowledge among scientists and to stakeholders. In the second part, we give a structural framework for the co-construction of interdisciplinary scientific knowledge on a specific geographical region. This structural framework, which as a mathematical object is "free of context", provides a contextual efficiency of scientific work when it combines multi-disciplinarity, interdisciplinarity and transdisciplinarity.
Biocomputing '99, 1998
This paper describes a new approach to problem solving by splitting up problem component parts between software and hardware. Our main idea arises from the combination of two previously published works. The first one proposed a conceptual environment for concept modelling in which the machine and the human expert interact. The second one reported an algorithm based on a reconfigurable hardware system which outperforms any previously published genetic database scanning hardware or algorithm. Here we show how efficient the interaction between the machine and the expert is when the concept modelling is based on a reconfigurable hardware system. Their cooperation is thus achieved at a real-time interaction speed. The designed system has been partially applied to the recognition of primate splice junction sites in genetic sequences.
Proceedings of 1st International Conference on Conventional and Knowledge Based Intelligent Electronic Systems. KES '97
This paper proposes a new technology for intelligent machines, based on the concept of programmable hardware. To build an intelligent system, the designer has to adapt it to the problem. First we show that programmable hardware is an intermediate step for building configurations, in order to choose the best architecture. In this case, the tasks are performed in a time period that respects human cognitive capacities. We then detail a multilevel model composed of the cognitive, software and hardware levels. An experimental platform has been built based on programmable hardware, and used in a "Grand Challenge" problem, knowledge discovery in genetic sequence databases, to compare the relative efficiencies of programmable hardware and a classical Von Neumann based architecture. Programmable hardware was shown to have a significantly faster response time, which is essential for modern-day intelligent machine user interaction.
Kybernetika (Praha)
Lecture Notes in Computer Science, 2009
We analyze the dynamics of problem-solving in a framework which captures two key features of that activity. The first feature is that problem-solving is a social game in which a number of problem-solvers interact, rely on other agents to tackle parts of a problem, and regularly communicate the outcomes of their investigations. The second feature is that problem-solving requires careful control over the set of hypotheses that might be needed at various stages of the investigation for the problem to be solved; more particularly, any incorrect hypothesis must eventually be refuted in the face of some evidence: all agents can expect such evidence to be brought to their knowledge whenever it holds. Our presentation uses a very general form of logic programs, viewed as sets of rules that can be activated and fire, depending on what a problem-solver is willing to explore, what a problem-solver is willing to hypothesize, and what a problem-solver knows about the problem to be solved in the form of data or background knowledge. Our framework supports two fundamental aspects of problem-solving. The first is that no matter how the work is distributed amongst agents, exactly the same knowledge is guaranteed to be discovered eventually. The second is that any group of agents (with, at one end, one agent in charge of all rules and, at the other end, each agent in charge of one and only one rule) might sometimes need to put forward hypotheses to allow for the discovery of a particular piece of knowledge in finite time.
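A minimal sketch of the kind of rule activation this reading of logic programs builds on, in Python; the rule encoding, the names and the toy knowledge base are assumptions for illustration, not the paper's formalism.

```python
# Toy forward-chaining engine: rules "fire" when their premises are in the
# current fact base, mimicking the activate-and-fire reading of logic programs.
# Rule encoding and the example knowledge base are illustrative assumptions.

def forward_chain(facts, rules, hypotheses=frozenset()):
    """Close `facts` under `rules`; `hypotheses` are extra assumed facts."""
    known = set(facts) | set(hypotheses)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= known and conclusion not in known:
                known.add(conclusion)   # the rule fires
                changed = True
    return known

rules = [
    ({"data_collected", "model_proposed"}, "prediction_available"),
    ({"prediction_available", "experiment_run"}, "prediction_tested"),
    ({"prediction_tested", "mismatch_observed"}, "hypothesis_refuted"),
]

# However the rules are split among agents, pooling their conclusions
# reaches the same closure, echoing the distribution-invariance claim.
facts = {"data_collected", "model_proposed", "experiment_run", "mismatch_observed"}
print(forward_chain(facts, rules))
```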
Proceedings of the Twenty-Eighth Hawaii International Conference on System Sciences, vol.5
This paper concerns the management of genetic sequences in an object-oriented database and the extraction of knowledge from these sequences. In our case knowledge discovery consists in finding functions capable of predicting properties of genetic sequences. This problem is also known as functional inference. The paper is divided in two parts: the first one shows the interest of using an object-oriented query language to build and use prediction functions. In the second part, we propose to use prediction functions as descriptors of sequences in order to index them. The indexing is performed with concept lattices [17].
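The indexing step relies on concept lattices; the fragment below is a small, self-contained illustration of how formal concepts (extent/intent pairs) can be enumerated from a binary "sequence has predicted property" table. The sequences, the properties and the brute-force enumeration are illustrative assumptions, not the paper's implementation.

```python
# Tiny formal-concept enumeration over a binary context
# (objects = sequences, attributes = predicted properties).
from itertools import combinations

context = {
    "seq1": {"coding", "gc_rich"},
    "seq2": {"coding"},
    "seq3": {"gc_rich", "repeat"},
}
attributes = set().union(*context.values())

def extent(attrs):           # objects having all attributes in attrs
    return {o for o, a in context.items() if attrs <= a}

def intent(objs):            # attributes shared by all objects in objs
    return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

concepts = set()
for r in range(len(attributes) + 1):
    for attrs in combinations(sorted(attributes), r):
        objs = extent(set(attrs))
        concepts.add((frozenset(objs), frozenset(intent(objs))))

for ext, itt in sorted(concepts, key=lambda c: (-len(c[0]), sorted(c[1]))):
    print(sorted(ext), sorted(itt))
```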
2008 20th IEEE International Conference on Tools with Artificial Intelligence, 2008
Control architectures, such as the LAAS architecture [1], CLARATY [12] and HARPIC [9], have been developed to provide autonomy to robots. To achieve a robot's task, these control architectures plan sequences of sensorimotor behaviors. Currently carried out by roboticians, the design of sensorimotor behaviors is a truly complex task that can require many hours of hard work and intensive computation. In this paper, we propose a Constraint Programming-based framework to interact with roboticians during the design of sensorimotor behaviors. A constraint network acquisition platform and a CSP-based planner are used to automatically design sensorimotor behaviors. Moreover, our architecture exploits the propagation properties of the acquired CSPs to supervise the execution of a given sensorimotor behavior. Some experimental results are presented to validate our approach.
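A compact illustration of the constraint-satisfaction machinery such a framework relies on. The variables (a toy behavior with speed, grip and camera settings) and constraints are invented for the example; the acquired networks and the CSP-based planner in the paper are of course far richer.

```python
# Minimal backtracking CSP solver; variables, domains and constraints are toy
# stand-ins for an acquired sensorimotor-behavior network.

def solve(domains, constraints, assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(domains):
        return assignment
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        candidate = {**assignment, var: value}
        if all(c(candidate) for c in constraints):
            result = solve(domains, constraints, candidate)
            if result is not None:
                return result
    return None

domains = {
    "approach_speed": [1, 2, 3],      # arbitrary units
    "grip_force":     [1, 2, 3],
    "camera_mode":    ["wide", "zoom"],
}
constraints = [
    # Constraints only reject assignments whose relevant variables are set.
    lambda a: not ({"approach_speed", "grip_force"} <= a.keys())
              or a["approach_speed"] + a["grip_force"] <= 4,
    lambda a: a.get("camera_mode") != "zoom" or a.get("approach_speed", 1) == 1,
]
print(solve(domains, constraints))
```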
Proceedings of the International Conference on Intelligent Systems for Molecular Biology (ISMB), 1993
We propose in this paper a modular learning environment for protein modeling. In this system, the protein modeling problem is tackled in two successive phases. First, partial structural information is determined via numerical learning techniques. Then, in the second phase, the multiple available pieces of information are combined in pattern-matching searches via dynamic programming. It is shown on real problems that various protein structure predictions can be improved in this way, such as secondary structure prediction, alignment of weakly homologous protein sequences, and protein model evaluation.
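The second phase combines information by dynamic programming over sequences; the snippet below shows the classic global-alignment recurrence in its simplest form (a standard Needleman-Wunsch score, with made-up scoring parameters) as a pointer to the kind of pattern-matching search involved, not the system's actual combination scheme.

```python
# Global alignment score by dynamic programming (Needleman-Wunsch recurrence).
# Scoring values are arbitrary illustrative choices.

def align_score(a, b, match=2, mismatch=-1, gap=-2):
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[n][m]

print(align_score("HEAGAWGHEE", "PAWHEAE"))
```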
Proceedings of the International Conference on Intelligent Systems for Molecular Biology (ISMB), 1994
Homology detection in large databases is probably the most time-consuming operation in molecular genetic computing systems. Moreover, the progress made around the world on the mapping and sequencing of the genome of Homo sapiens and other species has increased the size of databases exponentially. Therefore even the best workstation would not be able to reach the required scanning speed. To answer this need we propose an algorithm, A2R2, and its implementation on a massively parallel system. Basically, two kinds of algorithms are used to search molecular genetic databases: the first kind is based on dynamic programming and the second on word processing; A2R2 belongs to the second kind. The structure of the motif (pattern) searched by A2R2 can support those of the FAST, BLAST and FLASH algorithms. After a short presentation of the reconfigurable hardware concept and technology used in our massively parallel accelerator we present the A2R2 implementation. Th...
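A2R2 belongs to the word-based family (like FAST and BLAST); the sketch below shows the core idea of that family in a few lines of Python: index the database by fixed-length words, then locate query words by lookup. The word length, the sequences and the absence of any hardware acceleration are simplifications for illustration, not a rendering of A2R2 itself.

```python
# Word-based (k-mer) scanning in miniature: build an index of fixed-length
# words over the database, then find exact word hits for a query.
from collections import defaultdict

def build_index(database, k):
    index = defaultdict(list)           # word -> [(seq_id, position), ...]
    for seq_id, seq in database.items():
        for pos in range(len(seq) - k + 1):
            index[seq[pos:pos + k]].append((seq_id, pos))
    return index

def scan(query, index, k):
    hits = []
    for qpos in range(len(query) - k + 1):
        for seq_id, pos in index.get(query[qpos:qpos + k], []):
            hits.append((seq_id, qpos, pos))
    return hits

database = {"chr1": "ACGTACGTGGA", "chr2": "TTACGTAC"}
index = build_index(database, k=4)
print(scan("GTACGT", index, k=4))
```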
In this paper, we describe a formal logical framework which we claim is essential for proving and revising a model produced by combined ILP techniques. The dynamic process of proof embraces the supervision of the learning machine by a human, and this framework places the interpretation of contradictions at the heart of the interactive process that leads to a model which can be discussed, justified, and proven. We illustrate and validate this framework on an industrial application in the field of Drug Discovery, combining different learning processes to predict pharmacokinetic properties (ADME-T) and adverse side effects of therapeutic drug molecules.
The Second IEEE International Conference on Cognitive Informatics, Proceedings, 2003
An ontology is a formal language adequately representing the knowledge used for reasoning in a specific environment. When contradictions arise and make ontologies inadequate, revision is currently a very difficult and time-consuming task. We suggest the design of rational agents to assist scientists in ontology building through the removal of contradictions. These machines, in line with Angluin's "learning from different teachers" paradigm, learn to manage applications in place of users. Rational agents have some interesting cognitive faculties: a kind of identity, consciousness of their behaviour, dialectical control of logical contradictions in a theory learned with respect to a given ontology, and an aptitude for proposing ontology revisions. In the paper, we present an experimental scientific game, Eleusis+Nobel, as a framework outlining this new approach, i.e., automated assistance to scientific discovery. We show that rational agents are generic enough to support the ontology building process in many other contexts.
Proceedings of the 12th International Conference on Information Integration and Web-based Applications & Services, 2010
This paper presents work in progress on stakeholder detection for online debates. We propose an approach based on classical community detection methods applied to a semantic social network representation. We argue that new web 2.0 tools should assist users in defining semantic relations between users, groups and roles based on social interaction analysis. The main goal is to provide new mechanisms for moderation, decreasing misunderstandings and highlighting unexpected behaviors.
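For the community-detection step, an off-the-shelf modularity-based method is the natural starting point. The sketch below, with a hypothetical reply graph and networkx's greedy modularity algorithm, illustrates the kind of grouping intended; the paper's semantic social network representation is not reproduced here.

```python
# Community detection on a toy debate-interaction graph (who replies to whom).
# The graph and edge weights are invented for illustration.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
G.add_weighted_edges_from([
    ("alice", "bob", 5), ("bob", "carol", 4), ("alice", "carol", 3),
    ("dave", "erin", 6), ("erin", "frank", 2), ("dave", "frank", 3),
    ("carol", "dave", 1),   # weak bridge between the two groups
])

communities = greedy_modularity_communities(G, weight="weight")
for i, members in enumerate(communities):
    print(f"community {i}: {sorted(members)}")
```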
Document numérique, 2010
The current practice of e-democracy is closely related to the web 2.0 phenomenon. To be able to manage the flow of generated information and improve services, models and tools become a necessity. In this paper, we present our generic model of socio-semantic annotation and argue that it is an effective discursive practice, as illustrated through an experiment. The model is based on the use of annotation statuses to structure discussion. We compare it to classical discursive practices on the web and detail its realization.
Lecture Notes in Computer Science, 2005
This paper is an attempt to design an interaction protocol for a multi-agent learning platform to assist a human community in its task of scientific discovery. Designing tools to assist scientific discovery is a challenging problem, since the problems studied by scientists are not yet solved and valid models are not yet available. It is therefore impossible to create a problem solver to simulate a given phenomenon and explain or predict facts. We propose to assist scientists with learning machines considered as adaptive problem solvers, to interactively build a consistent model suited for reasoning, simulating, predicting, and explaining facts. The interaction protocol presented in this paper is based on Angluin's "Learning from Different Teachers" [1], and we extend the original protocol to make it operational for assisting scientists in solving open problems. The main problem we deal with is that this learning model supposes the existence of teachers who have previously solved the problem. These teachers are able to answer the learner's queries, whereas this is not the case in the context of scientific discovery, in which it is only possible to refute a model by finding experimental processes that reveal contradictions. Our first contribution is to directly use Angluin's interaction protocol to let a machine learn a program that approximates the theory of a scientist, and to help the scientist improve this theory. Our second contribution is to attenuate Angluin's protocol to take into account a social cognition level during which multiple scientists interact with each other by means of publications and refutations of rival theories. The program learned by the machine can be included in a publication to avoid false refutations coming from a wrong interpretation of the theory.
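A highly simplified rendering of the query loop underlying Angluin-style learning, adapted to the refutation-only setting described above: the learner can ask membership queries, but instead of a full equivalence oracle it only receives a counterexample when an "experiment" contradicts its current model. The target concept, the finite instance space and the hypothesis representation are all illustrative assumptions, not the protocol defined in the paper.

```python
# Sketch of a membership-query learner with refutation-style feedback.
# Everything domain-specific here (instances, target, hypothesis form) is a toy.

INSTANCES = list(range(16))                      # finite instance space
TARGET = {x for x in INSTANCES if x % 3 == 0}    # unknown "theory" to approximate

def membership_query(x):
    """Teacher/experiment answers whether x satisfies the target concept."""
    return x in TARGET

def find_refutation(hypothesis):
    """Run 'experiments': return an instance contradicting the model, if any."""
    for x in INSTANCES:
        if (x in hypothesis) != membership_query(x):
            return x
    return None

hypothesis = set()                               # learner starts with an empty model
while True:
    counterexample = find_refutation(hypothesis)
    if counterexample is None:
        break                                    # no experiment refutes the model
    # Repair the model on the refuting instance (add or remove it).
    if membership_query(counterexample):
        hypothesis.add(counterexample)
    else:
        hypothesis.discard(counterexample)

print(sorted(hypothesis))
```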
Lecture Notes in Computer Science
As mentioned in French secondary school official texts, teaching science implies teaching the scientific process. This poses the problem of how to teach epistemology, as traditional science teaching is mostly dogmatic and content-based. Previous studies show that pupils, science students and teachers mostly hold positivist and realist spontaneous conceptions of science and scientific discovery. Here, we present the evaluation of the didactic impact of a network game, Eleusis+Nobel, on third-year biology students who aim to become teachers. This card game, based on a Popperian epistemology, has been designed to reproduce the scientific discovery process in a community. Within the limits of our study, results obtained with classical social psychology tools indicate that students who played this game specifically assimilated the subjective dimension of knowledge and the role of the community into their conception of science, contrary to negative-control students, who did not play.