The storage capacity of Potts models for semantic memory retrieval

A General Memory Model of Language and Vision

The paper "From the Least Effort Point of View" provides the answers to the questions of "What is language?" and "Why do we need language?" However, these philosophical answers do not give us a formally defined language for this philosophy of language. In this paper, firstly I give the formal language of the mechanism of "Least Effort Point of View". We will see that this actually gives the first glance of a General Memory Model (GMM) without type specification. Also in the introduction, I give some examples of GMM in programming language, natural language, association memory and logic. Then an interface and 3 implementations of GMM are proposed. It is especially noted that the Artificial Neural Network(ANN) implementation of GMM shares the same "simple", "small", and "uniform" representation on both sides. And then I compare this model with 3 famous memory models : Structured Query Language(SQL), Object Oriented Programming(OOP), Semantic Networks Processing System(SNePS). Next I build Cognitive Models of Consciousness, Short term memory, and Long Term Memory by GMM. Lastly, I invent a High Level Vision Description Language(HLVDL) and its conversion to the GMM. Thus we will have a general memory model for computer vision with small, simple, and uniform memory elements. Therefore, the GMM can not only do with Language as it is firstly inspired from, but also it can deal with Vision!

Words, symbols and rich memory

A series of arguments is presented showing that words are not stored in memory in a form that resembles the abstract, phonological code used by orthography or by linguistic analysis. Words are stored in a very concrete, detailed code that includes nonlinguistic information such as the speaker's voice properties and other auditory details. Memory for language thus resembles an exemplar memory, and abstract descriptions (using letter-like units and speaker-invariant features) are probably computed on the fly whenever needed. One consequence of this hypothesis is that the study of phonology should be the study of generalizations across the speech of a community, and that such a description will employ units (segments, syllable types, prosodic patterns, etc.) that are not themselves employed as units in speakers' memory for language. That is, the psychological units of language are not useful for describing linguistic generalizations, and the study of linguistic generalizations is not useful for describing how the language is stored for speakers' use.
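As a toy illustration of this hypothesis (not taken from the paper), the sketch below stores words as concrete, speaker-specific exemplars and computes an abstract word label only on demand, by similarity-weighted comparison against the stored tokens; all feature dimensions, names, and parameters are invented for the example.

```python
# Minimal sketch of an exemplar-style word memory, assuming words are stored
# as rich, speaker-specific feature vectors and that abstract labels are only
# computed on demand by a similarity-weighted vote (a GCM-like rule).
# All feature dimensions, names, and parameters here are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def make_token(word_id, speaker_pitch):
    """A stored token keeps linguistic *and* nonlinguistic detail."""
    acoustic = rng.normal(loc=word_id, scale=0.3, size=8)      # word-dependent detail
    voice = np.full(4, speaker_pitch) + rng.normal(scale=0.1, size=4)
    return np.concatenate([acoustic, voice]), word_id

# Memory: a flat store of concrete exemplars (no abstract word forms at all).
memory = [make_token(w, pitch)
          for w in range(3)                 # three "words"
          for pitch in (0.8, 1.0, 1.3)      # three speakers
          for _ in range(20)]               # repeated episodes

def classify(probe, c=4.0):
    """On-the-fly abstraction: similarity-weighted vote over stored exemplars."""
    votes = {}
    for features, label in memory:
        sim = np.exp(-c * np.linalg.norm(probe - features))
        votes[label] = votes.get(label, 0.0) + sim
    return max(votes, key=votes.get)

# A new token from an unfamiliar speaker is still recognised, because the
# abstract category emerges from pooling over many detailed exemplars.
probe, true_label = make_token(1, speaker_pitch=1.1)
print(classify(probe), "== expected", true_label)
```

The point of the sketch is that nothing letter-like or speaker-invariant is ever stored; a category only emerges when many detailed exemplars are pooled at retrieval time.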

Encoding words into a Potts attractor network

To understand the brain mechanisms underlying language phenomena, and sentence construction in particular, a number of approaches have been followed that are based on artificial neural networks, in which words are encoded as distributed patterns of activity. Still, issues such as the distinct encoding of semantic vs. syntactic features, word binding, and the learning processes through which words come to be encoded in this way have remained tough challenges. We explore a novel approach to these challenges, which focuses first on encoding the words of an artificial language of intermediate complexity (BLISS) into a Potts attractor network. Such a network has the capability to spontaneously latch between attractor states, offering a simplified cortical model of sentence production. The network stores the BLISS vocabulary, and hopefully its grammar, in its semantic and syntactic subnetworks. Function and content words are encoded differently on the two subnetworks, as suggested by neuropsychological findings. We propose that a next step might describe the self-organization of a comparable representation of words through a model of the learning process.
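For readers less familiar with this class of models, here is a minimal sketch of a discrete Potts associative network of the kind referred to above, written in Python with NumPy. The network size, sparsity, threshold, covariance-style coupling rule, and deterministic (argmax) update are illustrative assumptions, not parameters from the paper; it demonstrates only the storage and cued retrieval of patterns.

```python
# Minimal sketch of a discrete Potts associative network: each unit takes one
# of S active states or a quiescent state 0, and patterns (e.g. "words") are
# stored with a covariance ("Hebbian") coupling rule.  Constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)

N, S, a = 200, 5, 0.25      # units, active Potts states per unit, sparsity
P = 10                      # number of stored patterns
U = 0.3                     # threshold favouring the quiescent state 0

# Patterns: state 0 = quiescent; states 1..S are active Potts states.
patterns = np.zeros((P, N), dtype=int)
for mu in range(P):
    active = rng.choice(N, size=int(a * N), replace=False)
    patterns[mu, active] = rng.integers(1, S + 1, size=active.size)

# Couplings J[i, k, j, l]: covariance rule between active states.
J = np.zeros((N, S, N, S))
for mu in range(P):
    v = np.zeros((N, S))
    for k in range(1, S + 1):
        v[:, k - 1] = (patterns[mu] == k) - a / S
    J += np.einsum('ik,jl->ikjl', v, v)
J /= N * a * (1 - a / S)     # fully connected, so connectivity C = N
for i in range(N):
    J[i, :, i, :] = 0.0      # no self-coupling

def update(sigma, n_steps=10):
    """Synchronous deterministic updates: each unit picks the active state
    with the largest local field, or goes quiescent if all fields are below U."""
    for _ in range(n_steps):
        s = np.zeros((N, S))                     # one-hot code of active states
        for k in range(1, S + 1):
            s[:, k - 1] = (sigma == k)
        h = np.einsum('ikjl,jl->ik', J, s)       # local fields, shape (N, S)
        best = h.argmax(axis=1)
        sigma = np.where(h.max(axis=1) > U, best + 1, 0)
    return sigma

def overlap(sigma, mu):
    """Fraction of pattern mu's active units retrieved in the correct state."""
    match = ((sigma == patterns[mu]) & (patterns[mu] > 0)).sum()
    return match / (a * N)

# Cue the network with a corrupted version of pattern 0 and let it relax.
cue = patterns[0].copy()
flip = rng.choice(N, size=N // 5, replace=False)
cue[flip] = rng.integers(0, S + 1, size=flip.size)
print("overlap after retrieval:", overlap(update(cue), 0))
```

The spontaneous latching between attractors mentioned in the abstract would additionally require slow adaptation (fatigue) terms on the unit thresholds, so that the network, after settling into one word's attractor, destabilizes it and hops to a correlated one; the sketch above deliberately omits those dynamics.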