Marco Schaerf - Academia.edu
Papers by Marco Schaerf
arXiv preprint arXiv:…, Jan 1, 2011
We investigate the space efficiency of a Propositional Knowledge Representation (PKR) formalism. Intuitively, the space efficiency of a formalism F in representing a certain piece of knowledge α is the size of the shortest formula of F that represents α. In this paper we assume that knowledge is either a set of propositional interpretations (models) or a set of propositional formulae (theorems). We provide a formal way of talking about the relative ability of PKR formalisms to compactly represent a set of models or a set of theorems. We introduce two new compactness measures and the corresponding classes, and show that the relative space efficiency of a PKR formalism in representing models/theorems is directly related to such classes. In particular, we consider formalisms for nonmonotonic reasoning, such as circumscription and default logic, as well as belief revision operators and the stable model semantics for logic programs with negation. One interesting result is that formalisms with the same time complexity do not necessarily belong to the same space efficiency class.
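The intuition stated in the abstract can be written as a single formula; the following is a minimal sketch of that definition (the symbol size_F is an illustrative name, not quoted from the paper):

```latex
% Space efficiency of a formalism F on a piece of knowledge \alpha:
% the size of the shortest F-formula that represents \alpha (notation assumed).
\[
  \mathit{size}_F(\alpha) \;=\; \min\,\{\, |\varphi| \;:\; \varphi \in F \text{ and } \varphi \text{ represents } \alpha \,\}
\]
```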
Proceedings of the …, Jan 1, 1995
In this paper, we present an efficient method for detecting collisions and self-collisions on articulated models deformed by Position Based Skinning. Position Based Skinning is a real-time skinning method which produces believable skin deformations and avoids artifacts such as the well-known "candy-wrapper" effect and joint bulging. The proposed method employs spatial hashing with a uniform grid to detect collisions and self-collisions. All the mesh primitives are mapped to a hash table, where only primitives mapped to the same hash index indicate a possible collision and need to be tested for intersection. Being based on spatial hashing, our method requires neither an expensive set-up nor complex data structures and is hence suitable for articulated characters with deformable soft tissues. We exploit the skeletal nature of the deformation to update the hash table only when required. The resulting algorithm is simple to implement and fast enough for real-time applications. We demonstrate the efficiency of our method on various animation examples. A quantitative experiment is also presented to evaluate our method.
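As a rough illustration of the broad-phase step described above, a uniform-grid spatial hash can be built and queried as in the sketch below. This is a sketch only: the cell size, hash constants and AABB input format are assumptions, not the paper's implementation.

```python
# Minimal sketch of uniform-grid spatial hashing for the broad phase of
# collision/self-collision detection. Cell size, hash constants and the
# bounding-box input format are assumptions, not taken from the paper.
from collections import defaultdict
from itertools import product

CELL = 0.05            # assumed grid resolution (world units)
TABLE_SIZE = 999_983   # assumed hash table size (a prime)

def cell_hash(i, j, k):
    """Hash an integer grid-cell index triple into a table slot."""
    return ((i * 73856093) ^ (j * 19349663) ^ (k * 83492791)) % TABLE_SIZE

def broad_phase(aabbs):
    """aabbs: {primitive_id: (min_xyz, max_xyz)}. Returns candidate pairs:
    only primitives falling into the same hash bucket can possibly collide."""
    table = defaultdict(list)
    for pid, (lo, hi) in aabbs.items():
        lo_cells = [int(lo[a] // CELL) for a in range(3)]
        hi_cells = [int(hi[a] // CELL) for a in range(3)]
        for i, j, k in product(*(range(l, h + 1) for l, h in zip(lo_cells, hi_cells))):
            table[cell_hash(i, j, k)].append(pid)
    pairs = set()
    for bucket in table.values():
        pairs.update(tuple(sorted((a, b)))
                     for n, a in enumerate(bucket) for b in bucket[n + 1:] if a != b)
    return pairs   # narrow-phase exact intersection tests follow

# Example: two overlapping boxes and one far away.
boxes = {0: ((0, 0, 0), (0.1, 0.1, 0.1)),
         1: ((0.05, 0.05, 0.05), (0.15, 0.15, 0.15)),
         2: ((5, 5, 5), (5.1, 5.1, 5.1))}
print(broad_phase(boxes))   # contains (0, 1) but no pair involving 2
```

Only the pairs returned by broad_phase would undergo exact intersection tests, and, in line with the abstract's remark about updating the hash table only when required, only the buckets touched by the skeletal deformation would need rebuilding between frames.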
International Journal of Risk Assessment and Management, 2001
The paper presents ENEA's next step towards the development of Intelligent Decision Support Systems (IDSS) for large-scale industrial and territorial emergencies. The prototype IDA (Intelligent Decision Advisor) for emergency management in an oil port is analysed as a test case. The work was performed under the national R&D MICA project and, specifically, ENEA's long-term strategic MINDES Program, synchronized with the indications of the worldwide GEMINI (Global Emergency Management Information Network Initiative) of the G7 Committee. IDA is an approach to designing intelligent-agent-based kernels of IDSS. In the frame of the generic TOGA (Top-down Object-based Goal-oriented Approach) model of abstract intelligent agents, an IPK (Information, Preferences, Knowledge) architecture was employed. The specific IDA objectives were to develop and verify the properties of an information-managed agent and a knowledge-managed agent, where the latter should suggest an action or plan after every new significant event in the emergency domain. The IDA functional kernel is composed of three simple agents: a DirectAdvisor, which interacts with the human user and the emergency domain; an InfoProvider, which manages information and intervention goals; and an IDAPlanner, which plans adequate interventions. For the design, UML (Unified Modelling Language) has been employed. MDP (Markov Decision Process) and CBR (Case-Based Reasoning) techniques are used for planning crisis management actions. Owing to the generic agent model and object-based conceptualization, the IDA system should be adaptable to the different roles of emergency managers. The obtained results confirm the IPK conceptualization hypothesis and provide concrete technological experience for the next step towards highly intelligent DSSs for the management of emergencies.
Proceedings of the Fourth …, Jan 1, 1996
We formally investigate the idea of processing part of the input data off-line in order to speed up on-line computation. In particular, we focus on off-line processing for intractable decision problems. To this end, we define new complexity classes and reductions, and identify complete problems.
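The abstract does not spell out the new classes; as a hedged sketch of the general idea (the notation and class name are illustrative assumptions, not quoted from the paper), off-line preprocessing of the fixed part x of an instance (x, y) can be captured as follows:

```latex
% One common way to formalize "compiling" the fixed part x off-line
% into a poly-size data structure f(x) (notation assumed, not the paper's):
\[
  S \in \mathrm{comp}\text{-}\mathcal{C}
  \quad\Longleftrightarrow\quad
  \exists f \text{ of polynomial size},\; \exists S' \in \mathcal{C}
  \text{ such that } (x,y) \in S \iff (f(x),y) \in S' .
\]
```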
International Journal of Approximate Reasoning, 2015
While looking for abductive explanations of a given set of manifestations, an ordering between possible solutions is often assumed. The complexity of finding and verifying optimal solutions is already known. In this paper we consider the computational complexity of finding second-best solutions, under different orderings and under different possible definitions of what a second-best solution is.
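As one concrete reading of "second-best" among those the abstract alludes to (an illustrative assumption, not the paper's definition): given the set Expl of candidate explanations and a preorder ⪯ over them,

```latex
% Second-best explanations: the \preceq-minimal elements once the
% optimal (\preceq-minimal) ones are removed (illustrative definition).
\[
  \mathrm{Second}_{\preceq}(\mathrm{Expl}) \;=\;
  \min\nolimits_{\preceq}\bigl(\mathrm{Expl} \setminus \min\nolimits_{\preceq}(\mathrm{Expl})\bigr)
\]
```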
Lecture Notes in Computer Science, 2013
Lecture Notes in Computer Science, 2014
ISPA 2005. Proceedings of the 4th International Symposium on Image and Signal Processing and Analysis, 2005
Facial Motion Cloning (FMC) is used to transfer the motion of a virtual face (namely, the source) to a mesh representing another face (the target), generally having a different geometry and topology. We present a novel FMC method relying on the combination of Radial Basis Function (RBF) based scattered data interpolation with the encoding capabilities of the MPEG-4 Facial and Body Animation (FBA) international standard. Starting from a manually picked set of correspondences, we find a scattered data interpolation function that precisely fits the source face mesh onto the target one. Then, all the MPEG-4 FBA basic movements (encoded as morph targets) of the source face are deformed using the same function, and the deformations are mapped to the corresponding vertices of the target mesh. By doing this, we obtain, in a straightforward way, an animatable MPEG-4 compliant version of the target face in a short amount of time.
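A minimal sketch of the RBF warp step follows (the kernel choice, regularization and function names are assumptions, not the paper's code): fit an interpolant on the picked correspondences, then apply the same function to every source morph target.

```python
# Sketch of RBF-based scattered-data interpolation for warping a source face
# mesh onto a target face. Kernel and regularization are assumed, not the
# paper's implementation.
import numpy as np

def fit_rbf(src_pts, dst_pts):
    """Solve for the weights of an RBF warp that maps each picked source
    correspondence onto its target correspondence."""
    diff = src_pts[:, None, :] - src_pts[None, :, :]
    phi = np.linalg.norm(diff, axis=-1) ** 3        # r^3 kernel (assumed choice)
    phi += 1e-9 * np.eye(len(src_pts))              # tiny regularization for stability
    return np.linalg.solve(phi, dst_pts - src_pts)  # (n, 3) weight matrix

def apply_rbf(points, src_pts, weights):
    """Displace arbitrary vertices (e.g. every vertex of a source morph target)
    with the fitted warp."""
    d = np.linalg.norm(points[:, None, :] - src_pts[None, :, :], axis=-1)
    return points + (d ** 3) @ weights

# Usage: fit on the manually picked correspondences, then deform the source
# mesh and all of its morph targets with the same function.
src = np.random.rand(12, 3)                 # picked points on the source face
dst = src + 0.05 * np.random.rand(12, 3)    # matching points on the target face
w = fit_rbf(src, dst)
warped_vertices = apply_rbf(np.random.rand(500, 3), src, w)
```

In an actual pipeline the deformed morph targets would then be re-encoded as MPEG-4 FBA animation data for the target face.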
The Sapienza Digital Library (SDL) is a research project undertaken by Sapienza Università di Roma, Europe's largest campus, and the Italian supercomputing center Cineca.
The SDL project aims to build an infrastructure supporting the preservation, management and dissemination of past, present and future digital resources that contain the overall intellectual production of Sapienza University. The solution adopted seeks a tradeoff between the standardization of the digital processes and products (which allows cost-effective, centralized and shared management and curation) and the preservation of the peculiarities of scientific materials belonging to disparate knowledge disciplines (which need to be digitally available for future initiatives more specifically tailored to the designated communities).
Communications in Computer and Information Science, 2014
Lecture Notes in Computer Science, 2011
In this paper we perform a preliminary investigation into the application of sampling-based search algorithms to satisfiability testing of propositional formulas in Conjunctive Normal Form (CNF). In particular, we adapt the Upper Confidence bounds applied to Trees (UCT) algorithm [5], which has been successfully used in many game-playing programs, including MoGo, one of the strongest computer Go players [3].
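A minimal sketch of how UCT-style sampling can be pointed at CNF satisfiability (the tree layout, rollout policy and constants below are assumptions, not the authors' algorithm): each tree node fixes one more variable, and rollouts score random completions by the fraction of satisfied clauses.

```python
# Sketch of UCT applied to CNF SAT. Clauses are lists of non-zero integer
# literals (DIMACS-style). Tree shape, rollout policy and the exploration
# constant are illustrative assumptions.
import math, random

def uct_sat(clauses, num_vars, iterations=10000, c=1.4):
    """UCT over partial assignments: a node fixes variables 1..k, its two
    children fix variable k+1 to True (literal k+1) or False (literal -(k+1));
    rewards are the satisfied-clause fraction of a random completion."""
    stats = {(): [0, 0.0]}        # partial assignment -> [visits, reward sum]

    def rollout(assign):
        full = list(assign) + [random.choice((v, -v))
                               for v in range(len(assign) + 1, num_vars + 1)]
        lits = set(full)
        score = sum(any(l in lits for l in cl) for cl in clauses) / len(clauses)
        return score, full

    for _ in range(iterations):
        assign = ()
        while len(assign) < num_vars:                     # selection / expansion
            v = len(assign) + 1
            children = (assign + (v,), assign + (-v,))
            fresh = [ch for ch in children if ch not in stats]
            if fresh:                                     # expand an unvisited child
                assign = random.choice(fresh)
                stats[assign] = [0, 0.0]
                break
            n_parent = stats[assign][0]                   # UCB1 selection
            assign = max(children, key=lambda ch: stats[ch][1] / stats[ch][0]
                         + c * math.sqrt(math.log(n_parent + 1) / stats[ch][0]))
        score, full = rollout(assign)
        if score == 1.0:
            return full                                   # satisfying assignment found
        for i in range(len(assign) + 1):                  # backpropagation
            stats[assign[:i]][0] += 1
            stats[assign[:i]][1] += score
    return None                                           # no model found within the budget
```

Called as uct_sat([[1, -2], [2, 3], [-1, -3]], 3), the sketch returns a satisfying assignment as a list of literals, or None if the iteration budget runs out.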
Procedia Computer Science, 2014
The Sapienza Digital Library (SDL) was released in December 2013 as the result of a research project undertaken since 2011 by Sapienza University of Rome and the Cineca consortium. The digital library has been collecting materials coming from different kinds of organizations, including departments, libraries and archives belonging or donated to Sapienza University. The main result of the project was the development of an information framework supporting multidisciplinary organizations in managing digital materials while maintaining scientific, organizational and operational responsibilities. The technical solution adopted has found a tradeoff between the standardization of the digital processes and products and the preservation of the scientific materials' peculiarities. The automatic translation between standards and the enrichment of the digital resources' metadata have achieved the main goal of providing digital resources with the essential information necessary for their management in different technological contexts. The reuse of the digital information and contents in different application contexts has converted the holistic vision of a digital library into the implementation of an information infrastructure, setting the foundation for the long-term access and usability of its digital assets.
Journal of Automated Reasoning, 2002