Principles of semantic networks
IRJET, 2023
This paper describes the origins of semantic networks: methods first developed for psychological purposes and later adopted by artificial intelligence to analyze textual data in graph format, deepening our understanding of how knowledge is extracted. Semantic network analysis thus first gained importance in the field of psychology and subsequently found application in artificial intelligence, where it supports idea generation, visual text analysis, and conceptual design.
Notes on Semantic Nets and Frames
Artificial Intelligence I. Matthew Huntbach, Dept of Computer Science, Queen Mary and Westfield College, London, UK E1 4NS. Email: mmh@dcs.qmw.ac.uk. Notes may be used with the permission of the author. Semantic Nets: Semantic networks are an alternative to predicate logic as a form of knowledge representation. The idea is that we can store our knowledge in the form of a graph, with nodes representing objects in the world and arcs representing relationships between those objects.
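The example graph from the original notes is not reproduced here. As a minimal sketch of the idea, the code below stores hypothetical facts (Tom the cat, etc.) as nodes and labeled arcs, with a simple traversal of is_a links standing in for inheritance; the facts and method names are illustrative assumptions, not taken from the notes:

```python
# Minimal sketch of a semantic network as a labeled directed graph.
from collections import defaultdict

class SemanticNet:
    def __init__(self):
        # edges[source] -> list of (relation, target) pairs
        self.edges = defaultdict(list)

    def add(self, source, relation, target):
        self.edges[source].append((relation, target))

    def related(self, source, relation):
        """Return all targets linked from `source` by `relation`."""
        return [t for r, t in self.edges[source] if r == relation]

    def isa_chain(self, node):
        """Follow is_a arcs transitively (simple inheritance)."""
        out, frontier = [], [node]
        while frontier:
            for parent in self.related(frontier.pop(), "is_a"):
                if parent not in out:
                    out.append(parent)
                    frontier.append(parent)
        return out

net = SemanticNet()
net.add("tom", "is_a", "cat")
net.add("cat", "is_a", "mammal")
net.add("cat", "has_part", "fur")

print(net.related("tom", "is_a"))   # ['cat']
print(net.isa_chain("tom"))         # ['cat', 'mammal']
```

Following is_a arcs transitively is exactly the kind of inheritance reasoning such networks are typically used for.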
Representational systems and symbolic systems
Behavioral and Brain Sciences, 1990
Connectionist models provide a promising alternative to the traditional computational approach that has for several decades dominated cognitive science and artificial intelligence, although the nature of connectionist models and their relation to symbol processing remains controversial. Connectionist models can be characterized by three general computational features: distinct layers of interconnected units, recursive rules for updating the strengths of the connections during learning, and "simple" homogeneous computing elements. Using just these three features one can construct surprisingly elegant and powerful models of memory, perception, motor control, categorization, and reasoning. What makes the connectionist approach unique is not its variety of representational possibilities (including "distributed representations") or its departure from explicit rule-based models, or even its preoccupation with the brain metaphor. Rather, it is that connectionist models can be used to explore systematically the complex interaction between learning and representation, as we try to demonstrate through the analysis of several large networks.
Object-Oriented Dynamic Networks
This paper describes the knowledge representation model known as Object-Oriented Dynamic Networks (OODN), which gives us an opportunity to represent knowledge that can be modified in time, to build new relations between objects and classes of objects, and to represent the results of their modifications. The model is based on representing objects via their properties and methods. It lets us classify objects and, in a sense, build a hierarchy of their types. Furthermore, it enables us to represent the relation of modification between concepts, to build new classes of objects based on existing classes, and to create sets and multisets of concepts. An OODN can be represented as a connected and directed graph, where nodes are concepts and edges are relations between them. Using such a model of knowledge representation, we can treat modifications of knowledge and movement through the graph of the network as a process of logical reasoning, of finding the right solutions, of creativity, etc. The proposed approach gives us an opportunity to model some aspects of the human knowledge system and the main mechanisms of human thought, in particular the acquisition of new experience and knowledge.
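As an illustration only (the names and API below are invented for this sketch, not taken from the paper), a concept node can carry properties, and deriving a new concept from an existing one can be recorded as a modification edge in the network graph:

```python
# Hedged sketch of the OODN idea: concepts carry properties, and a
# "modification" relation links a concept to a concept derived from it.
class Concept:
    def __init__(self, name, properties):
        self.name = name
        self.properties = dict(properties)

edges = []  # (source, relation, target) triples of the network graph

def modify(concept, new_name, changes):
    """Build a new concept from an existing one and record the
    modification relation between them in the graph."""
    props = {**concept.properties, **changes}  # inherit, then override
    edges.append((concept.name, "modified_into", new_name))
    return Concept(new_name, props)

square = Concept("square", {"sides": 4, "equal_sides": True})
rect = modify(square, "rectangle", {"equal_sides": False})

print(rect.properties["sides"])  # 4 (inherited from square)
print(edges)  # [('square', 'modified_into', 'rectangle')]
```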
ERNEST: a semantic network system for pattern understanding
IEEE Transactions on Pattern Analysis and Machine Intelligence, 1990
Abstract—This paper gives a detailed account of a system environment for the treatment of general problems of image and speech understanding. It provides a framework for the representation of declarative and procedural knowledge based on a suitable definition of a semantic network. The syntax and semantics of the network are clearly defined. In addition, the pragmatics of the network in its use for pattern understanding is defined by several rules which are problem independent. This allows one to formulate problem-independent control algorithms. Complete software environments are available to handle the described structures. The general applicability of the network system is demonstrated by short descriptions of three applications from different task domains.
Computational Networks For Knowledge Representation
2009
In the artificial intelligence field, knowledge representation and reasoning are important areas for intelligent systems, especially knowledge-base systems and expert systems. Knowledge representation methods play an important role in designing such systems. There have been many models for knowledge, such as semantic networks, conceptual graphs, and neural networks. These models are useful tools for designing intelligent systems, but they are not always suitable for representing knowledge in real-world application domains. In this paper, a new model for knowledge representation called computational networks is presented. It has been used in designing knowledge-base systems in education, such as a system that supports studying knowledge and solving analytic geometry problems, a program for studying and solving problems in plane geometry, and a program for solving problems about alternating current in physics.
Semantic networks: Structure and dynamics
2010
During the last ten years several studies have appeared regarding language complexity. Research on this issue began soon after the burst of a new movement of interest and research in the study of complex networks, i.e., networks whose structure is irregular, complex, and dynamically evolving in time. In the first years, the network approach to language mostly focused on a very abstract and general overview of language complexity, and few studies examined how this complexity is actually embodied in humans or how it affects cognition. However, research has slowly shifted from a language-oriented towards a more cognitive-oriented point of view. This review first offers a brief summary of the methodological and formal foundations of complex networks, then attempts a general vision of research activity on language from a complex-networks perspective, and especially highlights those efforts with a cognitive-inspired aim.
On the Semantics of a Semantic Network
Fundamenta Informaticae
We elaborate on the semantics of an enhanced object-oriented semantic network, where multiple instantiation, multiple specialization, and meta-classes are supported for both kinds of objects: entities and properties. By semantics of a semantic network, we mean the information (both explicit and derived) that the semantic network carries. Several data models use semantic networks to organize information. However, many of these models do not have a formalism defining what the semantics of the semantic network is. In our data model, in addition to the Isa relation, we consider a stronger form of specialization for properties, which we call restriction isa, or Risa for short. The Risa relation expresses property value refinement. A distinctive feature of our data model is that it supports the interaction between the Isa and Risa relations. The combination of Isa and Risa provides a powerful conceptual modeling mechanism.
Semantic networks: visualizations of knowledge
1997
The history of semantic networks is almost as long as that of their parent discipline, artificial intelligence. They have formed the basis of many fascinating, yet controversial, discussions in conferences and in the literature, ranging from metaphysics through to complexity theory in computer science.
Neural Networks, Knowledge, and Cognition: A Mathematical Semantic Model Based upon Category Theory
Category theory can be applied to mathematically model the semantics of cognitive neural systems. We discuss semantics as a hierarchy of concepts, or symbolic descriptions of items sensed and represented in the connection weights distributed throughout a neural network. The hierarchy expresses subconcept relationships, and in a neural network it becomes represented incrementally through a Hebbian-like learning process. The categorical semantic model described here explains the learning process as the derivation of colimits and limits in a concept category. It explains the representation of the concept hierarchy in a neural network at each stage of learning as a system of functors and natural transformations, expressing knowledge coherence across the regions of a multi-regional network equipped with multiple sensors. The model yields design principles that constrain neural network designs capable of the most important aspects of cognitive behavior.
Lecture Notes in Computer Science, 2004
We consider the categorical concepts of a 'network of networks': (a) each node is a host network (1-network or 1-graph) and super-links are analogous to a graph-functor, i.e., a (1, 1)-network; (b) a 2-network, where there are 2-links among 1-links. The general notion of a network-morphism is proposed. 1 Network. The dominant technological structure is a network (a graph): e.g., the World Wide Web graph, where each web page is a node and each (hyper)link is a directed edge; the internet; social networks; networks in molecular biology; a biological cell as a network of genes; neural networks; metabolic networks; scientific citation networks; energy networks; phone calls; linguistic networks; networks in natural languages; ecological networks; computer circuits; see www.nd.edu/networks or http://www.internetmathematics.org. A graph underlies a category [4, 6, 7, Burroni 1981, Lawvere 1989]. Graphs provide a useful concept in computational science (integrating science with computation), braided logic [5, Chávez Rodríguez et al. 2001], etc. Lawvere considers the category of directed graphs and their morphisms [Lawvere 1989]. The categorical aspects of graphs are not familiar outside of category theory. The value of categories for computational science is that category theory, invented by Eilenberg and MacLane in 1945, has developed a language for the study of structures, for the formulation of problems, for the development of methods of calculation and deduction, and for discovering and exploiting analogies between various interdisciplinary fields of science. The attraction of category theory is that the same algebraic tool, i.e., language, is applicable across a variety of multidisciplinary sciences. The study of structure (process) involves studying pre-categories (≡ graphs) and pre-functors (≡ graph-morphisms).
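The "pre-functor ≡ graph-morphism" notion mentioned above can be checked mechanically: a node map is a morphism of directed graphs when it sends every edge of the source graph to an edge of the target graph. The toy graphs below are illustrative assumptions, not examples from the paper:

```python
# A graph-morphism (pre-functor) sends nodes to nodes so that every
# edge (u, v) of G maps to an edge (f(u), f(v)) of H.
def is_graph_morphism(node_map, g_edges, h_edges):
    return all((node_map[u], node_map[v]) in h_edges for u, v in g_edges)

# Edge sets of two small directed graphs.
G = {("a", "b"), ("b", "c")}
H = {("x", "y"), ("y", "y")}

f = {"a": "x", "b": "y", "c": "y"}   # preserves both edges of G
g = {"a": "x", "b": "x", "c": "y"}   # sends ("a","b") to ("x","x"), not in H

print(is_graph_morphism(f, G, H))  # True
print(is_graph_morphism(g, G, H))  # False
```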
Beyond is-a and part-whole: More semantic network links
Computers & Mathematics with Applications, 1992
Semantic networks need many more links than traditional ones include if they are to function as adequate models of human memory. Many tasks benefit from cognitively realistic representations. Such cognitively realistic models of human reasoning processes require a deep understanding of the logical properties of their links. This paper argues for three basic claims. First, we must identify the links that are fundamental to human cognitive processes. Second, we must understand how their logical properties are actually used in common sense reasoning. Third, even though the resulting properties are not nearly as neat as those of their better known mathematical counterparts, we must investigate and use those properties if we want our systems to be representationally adequate. This paper presents analyses of three links, and in the process demonstrates a methodology for dealing with these issues.
A Conceptual Space Approach to Semantic Networks
Computers & Mathematics with Applications 23 (1992), 6-9, March-May, s. 517-526.
Abstract—If every entity has a set of attributes with each attribute having a value, we regard the complete set of an entity's attribute-value pairs (e.g., color-red, height-4, etc.) as fully describing the entity. Such descriptions form a conceptual space, that is, an intensional space of concepts for which spatial inclusion corresponds to strict logical implication. An intensional logic of concepts is developed with which we can talk about concepts and their relations without referring to extensions of concepts. In this approach semantic networks are simply sets of interrelated formulas; i.e., they are theories in the logic of concepts. Default values are treated by introducing a new type of modal possibility operator and a superconcept operator, not by revising the basic logical entailment relation. A concept may inherit values from a superconcept either strictly or by default as "concept qua superconcept." In this way inheritance problems turn out to be logical inference problems, and they can be solved in sound proof theory. A model of LC is a structure M = (V, S, F), where V is a (non-empty) set of values and S is a conceptual space, S = {f | f: ATT → V}.
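The correspondence between spatial inclusion and implication can be sketched as follows, with concepts modeled as sets of attribute constraints; the attribute names and values are hypothetical examples, not the paper's notation:

```python
# Sketch of the conceptual-space idea: a concept constrains attributes
# to sets of allowed values, and concept A implies concept B iff every
# constraint of B is at least as tight in A (inclusion of concepts).
def implies(concept_a, concept_b):
    for attr, allowed_b in concept_b.items():
        allowed_a = concept_a.get(attr)
        if allowed_a is None or not allowed_a <= allowed_b:
            return False
    return True

red_apple = {"color": {"red"}, "kind": {"apple"}}
apple     = {"kind": {"apple"}}
fruit     = {"kind": {"apple", "pear", "plum"}}

print(implies(red_apple, apple))  # True: every red apple is an apple
print(implies(apple, fruit))      # True
print(implies(fruit, apple))      # False: not every fruit is an apple
```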
Extending the expressive power of semantic networks
Artificial Intelligence, 1976
ABSTRACT—"Factual knowledge" used by natural language processing systems can be conveniently represented in the form of semantic networks. Compared to a "linear" representation such as that of the Predicate Calculus, however, semantic networks present special problems with respect to the use of logical connectives, quantifiers, descriptions, and certain other constructions. Systematic solutions to these problems will be proposed, in the form of extensions to a more or less conventional network notation. Predicate Calculus translations of network propositions will frequently be given for comparison, to illustrate the close kinship of the two forms of representation.
An approach to the problem of meaning: Semantic networks
Journal of Psycholinguistic Research, 1976
An approach to the problem of meaning through the postulation of semantic networks is presented. Subjects generated them for ten concrete and ten abstract nouns with two different procedures. Comparisons were made between the semantic network and the set of associations given to each concept. Finally, both the concrete concepts' and the abstract concepts' networks were compared. It was suggested that a form of meaning is given by the semantic network of the concept, by a reconstructive memory process.
Briefly About Neural Networks
A neural network is a collection of neurons that are interconnected and interact through signal-processing operations. The traditional term "neural network" refers to a biological neural network, i.e., a network of biological neurons. The modern meaning of the term also includes artificial neural networks, built of artificial neurons or nodes. Machine learning includes adaptive mechanisms that allow computers to learn from experience, by example, and by analogy. These learning capabilities can improve the performance of an intelligent system over time. One of the most popular approaches to machine learning is artificial neural networks. An artificial neural network consists of several very simple, interconnected processors, called neurons, which are modeled on the biological neurons of the brain. Neurons are connected by links that pass signals from one neuron to another. Each connection has a numerical weight associated with it. Weights are the basis of long-term memory in artificial neural networks: they express the strength, or importance, of each neuron input. An artificial neural network "learns" through repeated adjustments of these weights.
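A minimal sketch of this weight-adjustment idea, assuming a single artificial neuron (perceptron) trained on the logical AND function; the task and learning rate are illustrative choices, not taken from the text:

```python
# A single neuron: weighted sum of inputs passed through a step function.
def step(x):
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # connection weights: the network's long-term memory
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out
            # learning = repeated small adjustments of the weights
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in AND])  # [0, 0, 0, 1]
```

After training, the learned weights reproduce AND on all four inputs; nothing but the numerical weights was changed, which is the point the text makes about weights as memory.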
Representation has a fundamental role in Artificial Intelligence, but there is still an open debate on basic issues in this subject. In particular, there have been various studies on the emergence of communication and language in artificial agents, where the debate on the representations underlying these processes should be significant; however, not much discussion and study has been devoted to it. We propose to identify and classify possible representational processes occurring during the emergence of communication, replicating a previously proposed computational experiment and evaluating neural network activation patterns. To define representation and its classes, including icons, indexes, and symbols, we rely on the semiotics of Charles Sanders Peirce. Results show that symbolic associations are established during the evolution of artificial agents and that such symbolic associations benefit adaptive success.
Intensional Concepts in Propositional Semantic Networks*
Cognitive Science, 1982
An integrated statement is made concerning the semantic status of nodes in a propositional semantic network, claiming that such nodes represent only intensions. Within the network, the only reference to extensionality is via a mechanism to assert that two intensions have the same extension in some world. This framework is employed in three application problems to illustrate the nature of its solutions. The formalism used here utilizes only assertional information and no structural, or definitional, information. This restriction corresponds to many of the psychologically motivated network models. Some of the psychological implications of network processes called node merging and node splitting are discussed. Additionally, it is pointed out that both our networks and the psychologically based networks are prone to memory confusions about knowing unless augmented by domain-specific inference processes, or by structural information.
Trends in Cognitive Sciences, 2013
Networks of interconnected nodes have long played a key role in cognitive science, from artificial neural networks to spreading activation models of semantic memory. Recently, however, a new Network Science has been developed, providing insights into the emergence of global, system-scale properties in contexts as diverse as the Internet, metabolic reactions or collaborations among scientists. Today, the inclusion of network theory into cognitive sciences, and the expansion of complex systems science, promises to significantly change the way in which the organization and dynamics of cognitive and behavioral processes are understood. In this paper, we review recent contributions of network theory at different levels and domains within the cognitive sciences. Humans have more than 10^10 neurons and between 10^14 and 10^15 synapses in their nervous system [1]. Together, neurons and synapses form neural networks, organized into structural and functional sub-networks at many scales [2]. However, understanding the collective behavior of neural networks starting from the knowledge of their constituents is infeasible. This is a common feature of all complex systems, summarized in the famous motto "more is different" [3]. The study of complexity has yielded important insights into the behavior of complex systems over the past decades, but most of the toy models that proliferated under its umbrella have failed to find practical applications [4]. However, in the last decade or so a revolution has taken place. An unprecedented amount of data, available thanks to technological advances, including the Internet and the Web, has transformed the field. The data-driven modeling of complex systems has led to what is now known as Network Science [5]. Network Science has managed to provide a unifying framework to put different systems under the same conceptual lens [5], with important practical consequences [6].
The resulting formal approach has uncovered widespread properties of complex networks and led to new experiments [4] [7] [8]. The potential impact on cognitive science is considerable. The newly available concepts and tools already provided insights into the collective behavior of neurons [9], but they have also inspired new empirical work, designed, for example, to identify large-scale functional networks [10] [11]. Moreover, very different systems such as semantic networks [12], language networks [13] or social networks [14, 15] can now be investigated quantitatively, using the unified framework of Network Science. These developments suggest that the concepts and tools from Network Science will become increasingly relevant to the study of cognition. Here, we review recent results showing how a network approach can provide insights into cognitive science, introduce Network Science to the interested cognitive scientist without prior experience of the subject, and give pointers to further readings. After a gentle overview of complex networks, we survey existing work in three subsections, concerning the neural, cognitive, and social levels of analysis. A final section considers dynamical processes taking place upon networks, which is likely to be an important topic for cognitive science in the future. I. Introduction to Network Science The study of networks (or graphs) is a classical topic in mathematics, whose history began in the 17th century [16]. In formal terms, networks are objects composed of a set of points, called vertices or nodes, joined in pairs by lines, termed edges (see Fig. 1 for basic network definitions). They provide a simple and powerful representation of complex systems consisting of interacting units, with nodes representing the units, and edges denoting pairwise interactions between units.
Mathematical graph theory [17], based mainly on the rigorous demonstration of the topological properties of particular graphs, or in general extremal properties, has been dramatically expanded by the recent availability of large digital databases, which have allowed exploration of the properties of very