Neural networks and how machines learn meaning
Related papers
Neural Network Pattern for Enhancing Functionality of Electronic Dictionaries
Advanced Education, 2019
The value of a dictionary is traditionally considered to be proportional to its physical volume, measured in the number of entries. However, the amount of useful data varies depending on the existing hypertextual links across a dictionary. Its utility might therefore also be calculated as proportional to the number of useful links among its structural parts, which can interact much as neurons do via synaptic links, provided that the number of links turns out to be exponentially greater than the number of entries. Today's lexicographic practice, as well as an experiment conducted by the author with his own onomasiological electronic dictionary of phraseological synonyms, "IdeoPhrase", appears to demonstrate that the main criterion for establishing links automatically is the repetition of each kind of sign (stylistic labels, graphical words, metalinguistic comments). Automatically generated hypertextual links can be used for discovering semantic relations of different types among lexemes (synonymic, antonymic and others), for detecting semantic equivalence or similarity among lexemes in different languages (which comes close to automatic translation), and for compiling a new dictionary. The fact that the relations generated by a computer constitute new useful knowledge which has not been directly input by the compiler qualifies this algorithm as an artificial intelligence engine.
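A minimal sketch of the linking criterion described above: entries that repeat the same sign (here, a stylistic label or a shared concept field) receive an automatic hypertextual link. The entry structure, field names, and sample data are illustrative assumptions, not the IdeoPhrase data model.

```python
# Hypothetical sketch: link dictionary entries that repeat the same sign.
from collections import defaultdict
from itertools import combinations

entries = [
    {"id": 1, "headword": "kick the bucket", "labels": {"colloquial"}, "concept": "die"},
    {"id": 2, "headword": "pass away",       "labels": {"formal"},     "concept": "die"},
    {"id": 3, "headword": "hit the road",    "labels": {"colloquial"}, "concept": "leave"},
]

# Index entries by every sign they carry.
index = defaultdict(set)
for e in entries:
    index[("concept", e["concept"])].add(e["id"])
    for label in e["labels"]:
        index[("label", label)].add(e["id"])

# Any sign shared by two or more entries yields an automatic hypertextual link.
links = set()
for sign, ids in index.items():
    for a, b in combinations(sorted(ids), 2):
        links.add((a, b, sign))

for a, b, sign in sorted(links):
    print(f"entry {a} <-> entry {b} via {sign}")
```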
Computational Mechanisms for the Grounding of Concepts, Indexicals, and Names
lagrammar.net, 2019
This paper investigates the theoretical consequences which follow from grounding the Content kinds concept, indexical, and name as computational mechanisms of an artificial agent. The mechanism for the recognition and realization (action) of concepts is pattern matching between types and raw data. The mechanism for the interpretation of indexicals is pointing at values of the agent's on-board orientation system (STAR). The reference mechanism of names relies on markers inserted in an act of baptism into the cognitive representation of referents. The empirical result is three universals: (1) figurative use is restricted to the matching mechanism of concepts, but uses the Semantic kinds referent, property, and relation; (2) reference is restricted to nouns, but utilizes the mechanisms of matching, pointing, and baptism; and (3) as computational mechanisms, concepts use pattern matching as a direct interaction with cognition-external raw data, while indexicals and names use it indirectly.
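A toy sketch, under heavy assumptions, of the three mechanisms named above reduced to plain data structures: pattern matching for concepts, pointing at STAR values for indexicals, and marker insertion ("baptism") for names. The variable names, the STAR fields, and the marker store are purely illustrative, not the paper's implementation.

```python
# Toy illustration (not the paper's implementation) of the three mechanisms.

raw_data = "red square"                      # cognition-external input
concept_types = {"square": "shape", "red": "color"}

# 1. Concepts: recognition by pattern matching between types and raw data.
recognized = {tok: concept_types[tok] for tok in raw_data.split() if tok in concept_types}

# 2. Indexicals: interpretation by pointing at values of the on-board orientation system.
star = {"S": "lab-3", "T": "2019-05-01", "A": "agent-1", "R": "user-7"}
def interpret_indexical(word):
    return {"here": star["S"], "now": star["T"], "I": star["A"], "you": star["R"]}.get(word)

# 3. Names: reference via a marker inserted in an act of baptism.
markers = {}
def baptize(name, referent_id):
    markers[name] = referent_id              # marker attached to the representation of the referent

baptize("Fido", "referent#42")

print(recognized)                            # {'red': 'color', 'square': 'shape'}
print(interpret_indexical("here"))           # lab-3
print(markers["Fido"])                       # referent#42
```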
Artificial intelligence in word prediction systems
Artificial intelligence is the process of performing tasks that generally require human intelligence by emulating it artificially in a computational setting. This concept can be implemented to predict the next suitable word efficiently, so that the prediction does not deviate from the sentence's original meaning. Most word prediction algorithms compare a word with a dictionary or a collection of resembling words and follow a fixed linear path by recognizing patterns. In most cases, this linear analysis solves the problem of maintaining the sentence structure and its meaning. In complex cases, however, it proves inaccurate because minor details of a language are ignored. This can be corrected with machine learning combined with pattern recognition: analyzing word frequencies, adding new words to a local dictionary, and distinguishing between formal and informal sentences or words. The task becomes more complex as the neural network grows, but accuracy improves over time and can be further improved by comparison against a cloud-based dictionary. The concept is universal, is applicable to any language, and supports restriction-free communication and understanding.
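A minimal sketch of the frequency-based flavour of prediction described above, using bigram counts over a small local "dictionary". The training text and function names are illustrative assumptions.

```python
# Frequency-based next-word prediction over observed bigrams.
from collections import defaultdict, Counter

bigrams = defaultdict(Counter)

def train(text):
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1              # frequency analysis of observed word pairs

def predict(prev_word, k=3):
    # Most frequent continuations; an unseen word simply yields an empty list.
    return [w for w, _ in bigrams[prev_word.lower()].most_common(k)]

train("the cat sat on the mat")
train("the dog sat on the rug")
print(predict("the"))   # e.g. ['cat', 'mat', 'dog'] (all tied at count 1)
print(predict("sat"))   # ['on']
```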
From Talking Animal to Talking Machine. Lexical semantic relations in WordNet
2015
A long time passed from the first word spoken by Homo sapiens to the first word spoken by a machine. The possibility of machines talking like humans confronts us with the richness and complexity of human linguistic competence and its cognitive underpinnings. Contemporary linguistics attempts to explain this complexity explicitly. Part of this is lexical semantics, which studies the meanings of words and the relations between them. This essay addresses the attempt to represent one aspect of lexical semantic competence (lexical semantic relations) in a major computational resource: WordNet. Lexical semantics recognizes various kinds of relations: homonymy, synonymy, antonymy, hyponymy, meronymy, and troponymy. These were used in WordNet to represent the organisation of the human lexicon. WordNet's main building block is the synset: a set of word forms that are close in meaning in context. In WordNet, nouns and verbs have taxonomic structures, and word forms are divided into domains related to a specific subject and shared features. Adjectives have a structure based on the antonymy relation, in which bipolar adjectives are divided into clusters referring to a certain meaning. Adverbs are gathered in a single file. Psycholinguists have often attacked the WordNet structure as a representation of human linguistic competence. However, computational linguists have found the lexical semantic database useful for machine applications and natural language processing. WordNet has been translated into many languages and combined into multilingual databases such as EuroWordNet. Each language has developed its own wordnet, but they are interconnected through interlingual links. Expand and merge approaches are used for data acquisition. The expand approach relies on bilingual translation with automatic, manual and hybrid methods to fill gaps in the data; linguistic bias between languages can be reduced with data from sources such as Wikipedia or dictionary translations by professional interpreters. The merge approach relies on monolingual corpora for data acquisition. WordNet has moved from cognitive science to natural language processing, and it is one of the remarkable developments that has brought scientists closer to teaching machines to speak.
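A short example of querying the lexical semantic relations discussed above through NLTK's WordNet interface; it assumes NLTK is installed and the WordNet corpus has been downloaded with nltk.download("wordnet").

```python
from nltk.corpus import wordnet as wn

# Synsets: sets of word forms that are close in meaning.
for syn in wn.synsets("dog")[:2]:
    print(syn.name(), "-", syn.definition())

# Hyponymy/hypernymy: the taxonomic structure of nouns and verbs.
dog = wn.synset("dog.n.01")
print("hypernyms:", [s.name() for s in dog.hypernyms()])

# Antonymy: the organising relation for adjectives.
good = wn.synset("good.a.01").lemmas()[0]
print("antonyms:", [a.name() for a in good.antonyms()])
```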
Similarity of objects and the meaning of words
2006
We survey the emerging area of compression-based, parameter-free similarity distance measures useful in data mining, pattern recognition, learning and automatic semantics extraction. Given a family of distances on a set of objects, a distance is universal up to a certain precision for that family if it minorizes every distance in the family between every two objects in the set, up to the stated precision (we do not require the universal distance to be an element of the family). We consider similarity distances for two types of objects: literal objects that as such contain all of their meaning, like genomes or books, and names for objects. The latter may have literal embodiments like the first type, but may also be abstract, like "red" or "Christianity". For the first type we consider a family of computable distance measures corresponding to parameters expressing similarity according to particular features between pairs of literal objects. For the second type we consider similarity distances generated by web users, corresponding to particular semantic relations between (the names for) the designated objects. For both families we give universal similarity distance measures incorporating all particular distance measures in the family. In the first case the universal distance is based on compression, and in the second case it is based on Google page counts related to search terms. In both cases experiments on a massive scale give evidence of the viability of the approaches.
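A minimal sketch of the compression-based universal distance (the normalized compression distance, NCD), using zlib as a stand-in for the compressor C; the example strings are illustrative. The measure is NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C denotes compressed length.

```python
import zlib

def c(data: bytes) -> int:
    # Compressed length serves as an approximation of Kolmogorov complexity.
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a  = b"the quick brown fox jumps over the lazy dog " * 20
b_ = b"the quick brown fox leaps over the lazy cat " * 20
c_ = b"completely unrelated text about stock markets and bonds " * 20

print(round(ncd(a, b_), 3))  # smaller: the two texts share most of their structure
print(round(ncd(a, c_), 3))  # larger: little shared structure for the compressor to exploit
```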
Object Networks: A Computational Framework to Compute with Words
Studies in Fuzziness and Soft Computing, 1999
In this work, we introduce a framework as a modeling and implementation tool for computing with words. The basis of the framework is a computational model called object networks. We present the mathematical concepts of objects, generic objects and fuzzy objects. The interaction of these elements composes object systems. Particular emphasis is given to a special class of object systems, the object networks. They are proposed here as a dynamic software architecture suitable for handling the issues involved in computing with words, yet independent of the specific numerical algorithms used. The object networks architecture is easily translated into computer programs and supports explanatory databases built upon atomic pieces of information and upon the query entered, among other important features. It is also shown how object networks manage the semantics of linguistic and semiotic terms to provide higher-level linguistic computations. Illustrative examples are also included.
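A generic illustration, not the object-network architecture itself, of the kind of fuzzy linguistic object such a framework computes with: a linguistic variable whose terms carry membership functions. The variable, terms, and membership shapes are illustrative assumptions.

```python
def triangular(a, b, c):
    """Return a triangular membership function peaking at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# Linguistic terms for the variable "temperature" (degrees Celsius).
terms = {
    "cold": triangular(-10, 0, 15),
    "warm": triangular(10, 20, 30),
    "hot":  triangular(25, 35, 45),
}

reading = 22.0
memberships = {word: round(mu(reading), 2) for word, mu in terms.items()}
print(memberships)                           # {'cold': 0.0, 'warm': 0.8, 'hot': 0.0}
best = max(memberships, key=memberships.get)
print(f"{reading} degrees is best described as '{best}'")
```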
Can Machines Read (Literature)
2019
In this essay, we reflect on distant reading as one of the various takes on reading that currently prevail in literary scholarship as well as the teaching of literature. We focus on three concepts of reading which for various reasons can be considered inter-related: close reading, surface reading and distant reading. We offer a theoretical treatment of distant reading and demonstrate why it is closely related to the concept of machine reading (part of artificial intelligence). Throughout, we focus on the role of the individual reader in all this and argue that Digital Literary Studies have much to gain from paying closer attention to the so-called “natural” reading process of individual humans.
Analogies Between Texts: Mathematical Models and Applications in Computer-Assisted Knowledge Testing
2013
The book contains articles on current problems in the research and application of information technologies, especially new approaches, models, algorithms and methods for information modeling of knowledge in: Intelligence metasynthesis and knowledge processing in intelligent systems; Formalisms and methods of knowledge representation; Connectionism and neural nets; System analysis and synthesis; Modelling of complex artificial systems; Image processing and computer vision; Computer virtual reality; Virtual laboratories for computer-aided design; Decision support systems; Information models of knowledge of and for education; Open social info-educational