Gianfranco Basti - Academia.edu
Papers by Gianfranco Basti
Progress in biophysics and molecular biology, Jan 4, 2017
We suggest that, in the framework of category theory, it is possible to demonstrate the mathematical and logical dual equivalence between the category of the q-deformed Hopf coalgebras and the category of the q-deformed Hopf algebras in quantum field theory (QFT), interpreted as a thermal field theory. Each algebra-coalgebra pair characterizes a QFT system and its mirroring thermal bath, respectively, so as to model dissipative quantum systems in far-from-equilibrium conditions, with an evident significance also for the biological sciences. Our study is in fact inspired by applications to neuroscience, where the brain's memory capacity, for instance, has been modeled by using the unitarily inequivalent representations of QFT. The q-deformed Hopf coalgebras and the q-deformed Hopf algebras constitute two dual categories because they are characterized by the same functor T, related to the Bogoliubov transform, and by its contravariant application T(op), respectively. The q-deformation parameter is rela... [...]
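For reference, the Bogoliubov transform mentioned here has a standard form in the thermo-field-dynamics literature, which may be sketched as follows (a and ã are the system and bath annihilation operators; the parameter θ is the one related, in this line of work, to the q-deformation and to the bath temperature):

```latex
a(\theta) = a\,\cosh\theta \;-\; \tilde{a}^{\dagger}\sinh\theta,
\qquad
\tilde{a}(\theta) = \tilde{a}\,\cosh\theta \;-\; a^{\dagger}\sinh\theta .
```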
The 2021 Summit of the International Society for the Study of Information, 2021
During the last twenty years, much research has been done on the development of probabilistic machine-learning algorithms, especially in the field of artificial neural networks (ANNs), for dealing with the problem of data-stream classification and, more generally, with the real-time extraction/manipulation/analysis of information from (infinite) data streams. For instance, sensor networks, healthcare monitoring, social networks, and financial markets are among the main sources of data streams, often arriving at high speed and always requiring real-time analysis, above all for individuating long-range and higher-order correlations among data, which are continuously changing over time. Indeed, the standard statistical machine-learning algorithms in ANN models, starting from their progenitor, the so-called backpropagation (BP) algorithm, based on the "sigmoid function" acting as the activation function of the neurons of the hidden layers of the net for detecting higher-order correlations in the data, and on the stochastic gradient-descent (GD) algorithm for the (supervised) refresh of the neuron weights, were developed for static, even if huge, bases of data ("big data"). They are therefore systematically inadequate and unadaptable for the analysis of data streams, i.e., of dynamic bases of data characterized by sudden changes in the correlation length among the variables (phase transitions), and hence by the unpredictable variation of the number of significant degrees of freedom of the probability distributions.
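The BP-plus-GD scheme recalled above can be sketched minimally as follows; this is a generic illustration on toy data, not the specific models discussed in the paper, and all names and sizes are arbitrary:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))                    # small *static* batch of inputs
y = (X.sum(axis=1) > 0).astype(float).reshape(-1, 1)

W1 = rng.normal(scale=0.5, size=(3, 4))        # hidden-layer weights
W2 = rng.normal(scale=0.5, size=(4, 1))        # output-layer weights
lr = 0.5

for _ in range(2000):
    h = sigmoid(X @ W1)                        # sigmoid activations of the hidden layer
    out = sigmoid(h @ W2)
    d_out = (out - y) * out * (1 - out)        # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)         # error back-propagated to the hidden layer
    W2 -= lr * h.T @ d_out                     # gradient-descent weight refresh
    W1 -= lr * X.T @ d_h

out = sigmoid(sigmoid(X @ W1) @ W2)
accuracy = float(np.mean((out > 0.5) == (y > 0.5)))
print(accuracy)
```

Note that the whole batch X is fixed before training starts: this is exactly the "static base of data" assumption that, per the abstract, breaks down for streams whose correlation structure changes over time.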
From the computational standpoint, this means the infinitary character of the data-streaming problem, whose solution is in principle unreachable by a Turing machine (TM), either classical or quantum (QTM). Indeed, for dealing with the infinitary challenge of data streaming, the exponential increase in computational speed derived from the usage of quantum machine-learning algorithms is not very helpful, whether using "quantum gates" (QTM) or "quantum annealing" (the quantum Boltzmann machine, QBM), both objects of intensive research during the last years. In the case of ANNs, the improvement that the Boltzmann-machine (BM) learning algorithm gives over GD is that the BM uses "thermal fluctuations" for jumping out of the local minima of the cost function (simulated annealing), so as to avoid the main limitation of the GD algorithm in machine learning. In this framework, the advantage of quantum annealing in a QBM is that it uses "quantum vacuum fluctuations", instead of the thermal fluctuations of classical annealing, for bringing the system out of shallow (local) minima, by exploiting the "quantum tunnelling" effect. This outperforms thermal annealing, especially where the energy (cost) landscape consists of high but thin barriers surrounding shallow local minima. However, despite the improvement that, at least in some specific cases, a QBM can give for finding the absolute minimum size/length/cost/distance among a large, even though finite, set of possible solutions, the problem of data streaming remains, because in this case that finitary supposition does not hold. As the analogy with the coarse-graining problem in statistical physics emphasizes very well, the search for the global minimum of the energy function makes sense only after the system has performed a phase transition.
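The classical-annealing mechanism invoked here, thermal fluctuations letting the search escape a shallow local minimum, can be sketched on a one-dimensional cost landscape; the landscape, schedule, and parameters below are all illustrative assumptions, not taken from the abstract:

```python
import math
import random

def cost(x):
    # two minima: a shallow local one near x = 1.5 and a global one near x = -1
    return 0.5 * (x - 1.5) ** 2 * (x + 1.0) ** 2 + 0.3 * x

def anneal(x=1.5, T=2.0, cooling=0.999, steps=20000, seed=0):
    rng = random.Random(seed)
    best = x
    for _ in range(steps):
        cand = x + rng.gauss(0.0, 0.3)         # thermal fluctuation (random jump)
        dE = cost(cand) - cost(x)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if dE < 0 or rng.random() < math.exp(-dE / T):
            x = cand
        if cost(x) < cost(best):
            best = x
        T *= cooling                            # cooling schedule: fluctuations die out
    return best

print(anneal())
```

At high temperature the uphill-acceptance rule lets the walker cross the barrier out of the shallow minimum at x = 1.5; as T decays, the dynamics freezes into the deepest basin visited, which is the behavior that pure gradient descent lacks.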
That is, physically, after a sudden change in the correlation length among variables, generally under the action of an external field, has determined a new way by which they are aggregated, thus defining the significant number of degrees of freedom N characterizing the system statistics after the transition. In other terms, the infinitary challenge implicit in data streaming is related to phase transitions, so that, from the QFT standpoint, this is the same phenomenon as the infinite number of degrees of freedom of the Haag theorem, characterizing the quantum superposition in QFT systems in far-from-equilibrium conditions. This requires the extension of the QFT formalism to dissipative systems, inaugurated by the pioneering works of N. Bogoliubov and H. Umezawa. The Bogoliubov transform, indeed, allows mapping between different phases of the boson and fermion quantum fields, making dissipative QFT (differently from QM and from QFT in their standard (Hamiltonian) interpretation for closed systems) able to calculate over phase transitions. Indeed, inspired by the modeling of natural brains as many-body systems, the QFT dissipative formalism has been used to model ANNs [1, 2]. The mathematical formalism of QFT requires that, for open (dissipative) systems, like the brain, which is in a permanent "trade" or "dialog" with its environment, the degrees of freedom of the system (the brain), say A, need to be "doubled" by introducing the degrees of freedom Ã describing the environment, according to the coalgebraic scheme A → A × Ã. Indeed, Hopf coproducts (sums) are generally used in quantum physics to calculate the total energy of a superposition quantum state. In the case of a dissipative system, the coproducts represent the total energy of a balanced state between the system and its thermal bath.
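A minimal sketch of the doubling in formulas, in the standard thermo-field-dynamics notation (the tilded copy describes the environment/thermal bath; the primitive Hopf coproduct Δ embeds each observable a into the doubled space, and the doubled Hamiltonian balances system and bath):

```latex
\mathcal{A} \;\to\; \mathcal{A} \times \tilde{\mathcal{A}},
\qquad
\Delta a = a \otimes \mathbf{1} + \mathbf{1} \otimes a,
\qquad
\hat{H} = H - \tilde{H}.
```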
In this case, because the two terms of the coproduct are not mutually interchangeable, as they are in the case of closed systems (where the sum concerns the energy of two superposed particles), we are led to consider [...]
Studies in Applied Philosophy, Epistemology and Rational Ethics, 2020
A significant chapter of the short history of formal philosophy is related to the notion and the theory of the so-called "social welfare functions" (SWFs), as a substantial component of "social choice theory". One of the main uses of SWFs is aimed, indeed, at representing coherent patterns (effectively, algebraic structures of relations) of individual and collective choices/preferences with respect to a fixed ranking of alternative social/economic states. Indeed, SWF theory was originally inspired by Samuelson's pioneering work on the foundations of mathematical economic analysis. It explicitly uses Gibbs' thermodynamics of ensembles "at equilibrium", based on statistical mechanics, as the physical paradigm for the mathematical theory of economic systems. In both theories, indeed, the differences and the relationships among individuals are systematically considered as irrelevant. On the contrary, in the mathematical theory of "social choice functions" (SCFs) developed by Amartya Sen, the interpersonal comparison and the real-time information exchanges among different social actors and their environments (different ethical values and constraints included) play an essential role. This means that the inspiring physical paradigm is no longer "gas" but "fluid" thermodynamics of interacting systems passing through different "phases" of fast "dissolution/aggregation of coherent behaviors", and hence staying persistently in far-from-equilibrium conditions. These processes are systematically studied by the quantum field theory (QFT) of "dissipative systems", at the basis of the physics of condensed matter, modeled by the "algebra doubling" of coalgebras. This coalgebraic modeling is highly [...]
Proceedings, 2020
In the recent history of the effort for defining a suitable [...]
Applications and Science of Computational Intelligence III, 2000
We propose a novel method for simultaneous speckle reduction and data compression based on wavelets. The main feature of the method is that it preserves the geometrical shapes of the figures present in the noisy images. A fast algorithm, the dynamic perceptron, is applied to detect the regular shapes present in the noisy image. Another fast algorithm is then used to find the best wavelet basis in the rate-distortion sense. Subsequently, a soft thresholding is applied in the wavelet domain to significantly suppress the speckles of the synthetic-aperture-radar images, while maintaining bright reflections for subsequent detection.
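The soft-thresholding step can be illustrated with a one-level 1-D Haar transform on a toy signal; this is a generic sketch of wavelet-domain soft thresholding, not the paper's best-basis algorithm, and the signal and threshold are invented for illustration:

```python
import numpy as np

def haar_1level(x):
    # one-level 1-D Haar transform: approximation (a) and detail (d) coefficients
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def ihaar_1level(a, d):
    # exact inverse of the transform above
    x = np.empty(a.size * 2)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft_threshold(c, t):
    # shrink coefficients toward zero by t; small (noise) coefficients vanish,
    # large ones (bright reflections) survive, only slightly attenuated
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

rng = np.random.default_rng(1)
clean = np.repeat([0.0, 4.0, 0.0], 16)                 # piecewise-constant "bright reflection"
noisy = clean * rng.lognormal(0.0, 0.3, clean.size)    # multiplicative, speckle-like noise

a, d = haar_1level(noisy)
denoised = ihaar_1level(a, soft_threshold(d, 0.5))
print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))
```

Because the Haar pair is orthonormal, shrinking the noise-dominated detail coefficients strictly reduces the reconstruction error while the shape of the bright plateau is preserved.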
Applications and Science of Computational Intelligence V, 2002
In this paper we present an encryption module included in the Subsidiary Communication Channel (SCC) system we are developing for video-on-FM radio broadcasting. This module is aimed at encrypting, by symmetric key, the video-image archive and real-time database of the broadcaster and, by asymmetric key, the video broadcasting to the final users. The module includes our proprietary Techniteia Encryption Library (TEL), which is already successfully running and securing several e-commerce portals in Europe. TEL is written in ANSI C for easy porting onto all main platforms and is optimized for real-time applications. It is based on the Blowfish encryption algorithm and is characterized by a physically separated sub-module for the automatic generation/recovery of the variable sub-keys of the Blowfish algorithm. In this way, different parts of the database are encrypted with different keys, both in space and in time, granting optimal security.
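The "different keys in space and in time" idea can be sketched as per-segment key derivation from a master key; this is a hypothetical illustration using the standard-library HMAC construction, not TEL's proprietary sub-key sub-module, and every name below is invented:

```python
import hashlib
import hmac

def segment_key(master_key: bytes, segment_id: int, epoch: int) -> bytes:
    # derive a distinct sub-key per database segment ("space") and per epoch ("time");
    # the master key never touches the data directly
    info = segment_id.to_bytes(8, "big") + epoch.to_bytes(8, "big")
    return hmac.new(master_key, info, hashlib.sha256).digest()

master = b"example-master-key"
k1 = segment_key(master, segment_id=0, epoch=0)
k2 = segment_key(master, segment_id=1, epoch=0)   # different segment -> different key
k3 = segment_key(master, segment_id=0, epoch=1)   # different epoch   -> different key
print(len(k1), k1 != k2, k1 != k3)
```

Derivation is deterministic, so the keys can be regenerated on demand (the "recovery" side) instead of being stored alongside the ciphertext.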
Applications and Science of Artificial Neural Networks II, 1996
In this paper, starting from a general discussion of neural-network dynamics from the standpoint of statistical mechanics, we discuss three different strategies for dealing with the problem of pattern recognition in neural nets. In particular, we emphasize the role of matching the intrinsic correlations within the input patterns in solving the problem of optimal pattern recognition. In this context, the first two strategies, which we applied to different problems and discuss in this paper, consist essentially in adding either white noise or colored noise (deterministic chaos) in the input-pattern pre-processing, to make the class separation easier for a classical backpropagation algorithm, respectively because the input patterns are too correlated among themselves or, on the contrary, too noisy. The third, more radical, strategy, which we applied to very hard pattern-recognition problems in HEP experiments, consists in an automatic (dynamic) redefinition of the net topology itself on the inner correlations of the inputs.
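The two noise-injection pre-processing strategies can be sketched as follows; the logistic map stands in for a generic deterministic-chaos noise source, and the patterns, amplitudes, and map parameters are illustrative assumptions, not the paper's actual pre-processing:

```python
import numpy as np

def white_noise(shape, sigma, rng):
    # strategy 1: additive white (Gaussian) noise, to decorrelate
    # input patterns that are too similar to one another
    return rng.normal(0.0, sigma, shape)

def chaotic_noise(shape, sigma, x0=0.3, r=4.0):
    # strategy 2: "colored" noise from the logistic map (deterministic chaos)
    n = int(np.prod(shape))
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return sigma * (seq.reshape(shape) - 0.5)

rng = np.random.default_rng(2)
patterns = np.tile([1.0, 1.0, 1.0, 0.9], (5, 1))   # strongly correlated inputs
noisy_white = patterns + white_noise(patterns.shape, 0.05, rng)
noisy_chaos = patterns + chaotic_noise(patterns.shape, 0.05)
print(noisy_white.shape, noisy_chaos.shape)
```

Unlike the Gaussian source, the chaotic source is fully deterministic and reproducible, which is one reason such signals are sometimes preferred for controlled pre-processing experiments.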
Science of Artificial Neural Networks II, 1993
Usually, to discriminate among particle tracks in high-energy physics, a set of discriminating parameters is used. To cope with the different particle behaviors, these parameters are combined by the human observer with Boolean operators. We successfully tested an automatic method for particle recognition, using a stochastic method to pre-process the input to a backpropagation algorithm. The test was made using raw experimental data on electrons and negative pions taken at the CERN laboratories (Geneva). From the theoretical standpoint, the stochastic pre-processing of a backpropagation algorithm can be interpreted as finding the optimal fuzzy membership function notwithstanding highly fluctuating (noisy) input data.
Science of Artificial Neural Networks, 1992
After a discussion of some theoretical limitations of multilayer architectures in contextual pattern recognition, and of their experimental demonstration, we propose an implementation of a spin-glass-like neural net designed to deal efficiently, in real time, with time-dependent inputs (pattern translations, rotations, scalings, deformations) in noisy environments. The basic idea is a double dynamics, on activations and on weights, on the same time scale. The two dynamics are correlated through an STM locking function on the object. This locking is the means by which the LTM module of the net can perform an invariant recognition of the object under transformations. This is possible owing to the invariant extraction of global features. The net is non-stationary and asymmetrical, because it is able to choose the right correlation order with respect to the memorized prototypes for a successful recognition. Nevertheless, the same non-stationary condition, depending on the locking on an object under transformations, implies that the net displays a non-relaxing stabilization. An application of the model is presented to the classical recognition problem of rotating 'T' and 'C' pattern sequences in different noisy contexts.
Applications of Artificial Neural Networks III, 1992
After a short discussion of the problems related to the treatment of higher-order correlations in the Hopfield neural net, we propose a modified architecture able to rearrange its topology dynamically as a function of the input representation. The relations of this problem with computability problems are briefly considered, particularly in view of avoiding exponential computation time. Some experimental results are shown for the recognition of particle traces in high-energy accelerators and in speaker-independent speech recognition.
Applications and Science of Artificial Neural Networks, 1995
With respect to Rosenblatt's linear perceptron, two classical limitation theorems demonstrated by M. Minsky and S. Papert are discussed. These two theorems, 'Ψ One-in-a-box' and 'Ψ Parity', ultimately concern the intrinsic limitations of parallel calculations in pattern-recognition problems. We demonstrate a possible solution of these limitation problems by substituting the static definition of the characteristic functions and of their domains in the 'geometrical' perceptron with their dynamic definition. This dynamics consists in the mutual redefinition of the characteristic function and of its domain depending on the matching with the input.
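The parity limitation can be checked directly in its smallest instance: a brute-force search (an illustration added here, not from the paper) over a grid of weights and thresholds shows that no single linear threshold unit computes 2-bit parity (XOR), while a genuinely linearly separable predicate such as OR is found immediately:

```python
import itertools

# 2-bit parity (XOR) truth table: ((x1, x2), target)
points = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def separable(points, grid):
    # try every weight/threshold combination on a coarse grid;
    # the unit fires when w1*x1 + w2*x2 + b > 0
    for w1, w2, b in itertools.product(grid, repeat=3):
        if all((w1 * x1 + w2 * x2 + b > 0) == bool(y) for (x1, x2), y in points):
            return True
    return False

grid = [i / 4 for i in range(-8, 9)]   # weights/threshold sampled in [-2, 2]
print(separable(points, grid))                                                  # parity fails
print(separable([((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)], grid))    # OR succeeds
```

The grid is coarse, but the negative result for parity holds for any weights: the four constraints are jointly unsatisfiable by a single hyperplane, which is the content of Minsky and Papert's theorem.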
Wavelet Applications, 1994
[Proceedings 1992] IJCNN International Joint Conference on Neural Networks, 1992
[...] I_k(0) = a_0 I_{j,k}, where I_{j,k} is the value of the j-th site of the b-th input and a_0 is a constant >> 1. [...] When the constant a(n) is lower than a given value, the net has already found which pattern of the initial sequence is closest to the prototype. [...]
IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339), 1999
Antonio L. Perrone (perrone@pul.it), Gianfranco Basti (basti@pul.it) [...] Abstract: For the evaluation and the selection of the optimal NN structure complexity, as a function of the minimization either of the approximation error or of the generalization error, we discuss briefly the [...]
Perspectives in Neural Computing, 2002
[...] 3.1 Analytic versus axiomatic method in logic after Gödel. The preceding discussion about the core of human intentionality is expressed in the language of cognitive science as the capability of the human mind of re-defining the basic symbols of its computations. [...]
Proceedings of the 2002 International Joint Conference on Neural Networks. IJCNN'02 (Cat. No.02CH37290), 2002
This paper outlines the applicative environment and sketches the technical details of an FM subcarrier technology called Multi-Purpose Radio Communication channel (MPRC).
Proceedings of 1993 International Conference on Neural Networks (IJCNN-93-Nagoya, Japan), 1993
With respect to Rosenblatt's linear perceptron, a classical limitation theorem demonstrated by Minsky and Papert is discussed. This theorem, "Ψ One-in-a-box", ultimately concerns the intrinsic "mathematical" limitations of parallel calculations in pattern-recognition problems. We demonstrate a possible solution of this limitation problem by substituting the static definition of the characteristic functions and of their domains in the "geometrical" perceptron with their dynamic definition.
International Joint Conference on Neural Networks, 1989
GIANFRANCO BASTI and ANTONIO PERRONE, Pontifical Gregorian University, Piazza della Pilotta, 4, I-00187 Rome (Italy) [...] Abstract. Recently, experimental evidence has been produced in neurophysiology suggesting that the different features of the external stimuli, processed [...]
Studies in Applied Philosophy, Epistemology and Rational Ethics, 2013
In a seminal work published in 1952, "The chemical basis of morphogenesis", A. M. Turing established the core of what today we call "natural computation" in biological systems, intended as self-organizing dissipative systems. In this contribution we show that a proper implementation of Turing's seminal idea cannot be based on diffusive processes, but rather on the coherence states of condensed matter, according to the principles of dissipative quantum field theory (QFT). This foundational theory is consistent with the intentional approach in cognitive neuroscience, insofar as it is formalized in the appropriate ontological interpretation of the modal calculus (formal ontology). This interpretation is based on the principle of the "double saturation" between a singular argument and its predicate, which has its dynamical foundation in the principle of the "doubling of the degrees of freedom" between a brain state and the environment, as an essential ingredient of the mathematical formalism of dissipative QFT.
Progress in biophysics and molecular biology, Jan 4, 2017
We suggest that in the framework of the Category Theory it is possible to demonstrate the mathema... more We suggest that in the framework of the Category Theory it is possible to demonstrate the mathematical and logical dual equivalence between the category of the q-deformed Hopf Coalgebras and the category of the q-deformed Hopf Algebras in quantum field theory (QFT), interpreted as a thermal field theory. Each pair algebra-coalgebra characterizes a QFT system and its mirroring thermal bath, respectively, so to model dissipative quantum systems in far-from-equilibrium conditions, with an evident significance also for biological sciences. Our study is in fact inspired by applications to neuroscience where the brain memory capacity, for instance, has been modeled by using the QFT unitarily inequivalent representations. The q-deformed Hopf Coalgebras and the q-deformed Hopf Algebras constitute two dual categories because characterized by the same functor T, related with the Bogoliubov transform, and by its contravariant application T(op), respectively. The q-deformation parameter is rela...
The 2021 Summit of the International Society for the Study of Information, 2021
During the last twenty years a lot of research has be done for the development of probabilistic m... more During the last twenty years a lot of research has be done for the development of probabilistic machine learning algorithms, especially in the field of the artificial neural networks (ANN) for dealing with the problem of data streaming classification and, more generally for thereal time information extraction/manipulation/analysisfrom (infinite) data streams.For instance, sensor networks, healthcare monitoring, social networks, financial markets, … are among the main sources of data streams, often arriving at high speed and always requiring a real-time analysis, before all for individuating long-range and higher order correlationsamong data, which are continuously changing over time.Indeed, the standard statistical machine learning algorithms in ANN models starting from their progenitor, the so-called backpropagation (BP)algorithm-based on the presence of the "sigmoid function"acting on the activation function of the neurons of the hidden layers of the net for detecting higher order correlations in the data, and the gradient descent(GD) stochastic algorithm for the (supervised) neuron weight refresh-are developed for staticalso huge bases of data ("big data"). Then, they are systematically inadequate and unadaptable for the analysis of data streaming, i.e., of dynamic bases of data characterized by sudden changes in the correlation length among the variables (phase transitions), and then by the unpredictable variation of the number of the signifying degrees of freedom of the probability distributions. 
From the computational standpoint, this means the infinitary character of the data streaming problem, whose solution is in principle unreachable by a TM, either classical or quantum (QTM).Indeed, for dealing with the data streaming infinitarychallenge, the exponential increasing of the computational speed derived by the usage of quantum machine learning algorithms is not very helpful, either using "quantum gates" (QTM), or using "quantum annealing" (quantum Boltzmann Machine (QBM)),both objects of an intensive research during the last years. In the case of ANNs, the improvement given by theBoltzmann-Machine(BM) learning algorithm to GD is that BM uses "thermal fluctuations" for jumping out of the local minima of the cost function (simulated annealing),so to avoid the main limitation of the GD algorithm in machine learning.In this framework, the advantage of quantum annealingin a QBM is that it uses the "quantum vacuum fluctuations" instead of thermal fluctuations of the classical annealing for bringing the system out of swallow (local) minima, by using the "quantum tunnelling" effect. This outperforms the thermal annealing, especially where the energy (cost) landscape consists of high but thin barriers surrounding shallow local minima. However, despite the improvement that, at least in some specific cases, QBM can give for finding the absolute minimum size/length/cost/distance among a large even though finite set of possible solutions, the problem of data streaming remains because in this case this finitary supposition does not hold. Like the analogy with the coarse-graining problem in statistical physics emphasizes very well, the search for the global minimum of the energy function makes sense after the system performed a phase transition. 
That is, physically, after that a sudden change in the correlation length among variables, generally under the action of an external field, determined a new way by which they are aggregated for defining the signifying number of the degrees of freedom N characterizing the system statistics after the transition.In other terms, the infinitary challenge implicit in the data streamingis related with phase transitionsso that, from the QFT standpoint, this is the same phenomenon of the infinite number of degrees of freedom of the Haag Theorem, characterizing the quantum superposition inQFT systems in far from equilibrium conditions. This requires the extension of the QFT formalism to dissipative systems, inaugurated by the pioneering works of N. Bogoliubov and H. Umezawa. The Bogoliubov transform, indeed, allows to map between different phases of the bosons and the fermions quantum fields, making the dissipative QFTdifferently from QM and from QFT in theirstandard (Hamiltonian)interpretation for closed systemable to calculate over phase transitions.Indeed, inspired by the modeling of natural brains as many-body systems, the QFT dissipative formalism has been used to model ANNs[1, 2]. The mathematical formalism of QFT requires that for open (dissipative) systems, like the brain which is in a permanent "trade" or "dialog" with its environment, the degrees of freedom of the system (the brain), say , need to be "doubled" by introducing the degrees of freedom ̃ describing the environment, according to the coalgebraic scheme: → ×̃. Indeed, Hopf coproducts (sums) are generally used in quantum physics to calculate the total energy of a superposition quantum state. In the case of a dissipative system, the coproducts represent the total energy of a balanced state between the system and its thermal bath. 
In this case, because the two terms of the coproduct are not mutually interchangeable like in the case of closed systems(where the sum concerns the energy of two superposed particles), we are led to consider
Studies in Applied Philosophy, Epistemology and Rational Ethics, 2020
A significant chapter of the short history of formal philosophy is related with the notion and th... more A significant chapter of the short history of formal philosophy is related with the notion and the theory of the so-called "Social Welfare Functions (SWFs)", as a substantial component of the "social choice theory". One of the main uses of SWFs is aimed, indeed, at representing coherent patterns (effectively, algebraic structures of relations) of individual and collective choices/preferences, with respect to a fixed ranking of alternative social/economical states. Indeed, the SWF theory is originally inspired by Samuelson's pioneering work on the foundations of mathematical economic analysis. It uses explicitly Gibbs' thermodynamics of ensembles "at equilibrium" based on statistical mechanics as the physical paradigm for the mathematical theory of economic systems. In both theories, indeed, the differences and the relationships among individuals are systematically considered as irrelevant. On the contrary, in the mathematical theory of "Social Choice Functions" (SCFs) developed by Amartya Sen, the interpersonal comparison and the real-time information exchanges among different social actors and their environments-differentethical values and constraints, included-play an essential role. This means that the inspiring physical paradigm is no longer "gas" but "fluid thermodynamics" of interacting systems passing through different "phases" of fast "dissolution/aggregation of coherent behaviors", and then staying persistently in far from equilibrium conditions. These processes are systematically studied by the quantum field theory (QFT) of "dissipative systems", at the basis of the physics of condensed matter, modeled by the "algebra doubling" of coalgebras. This coalgebraic modeling is highly
Proceedings, 2020
In the recent history of the effort for defining a suitable. [...]
Applications and Science of Computational Intelligence III, 2000
ABSTRACT We propose a novel method for simultaneous speckle reduction and data compression based ... more ABSTRACT We propose a novel method for simultaneous speckle reduction and data compression based on wavelets. The main feature of the method is that of preserving the geometrical shapes of the figures present in the noisy images. A fast algorithm, the dynamic perceptron, is applied to detect the regular shapes present in the noisy image. Another fast algorithm is then used to find the best wavelet basis in the rate- distortion sense. Subsequently, a soft thresholding is applied in the wavelet domain to significantly suppress the speckles of the synthetic aperture radar images, while maintaining bright reflections for subsequent detection.
Applications and Science of Computational Intelligence V, 2002
ABSTRACT In this paper we present an encryption module included in the Subsidiary Communication C... more ABSTRACT In this paper we present an encryption module included in the Subsidiary Communication Channel (SCC) System we are developing for video-on-FM radio broadcasting. This module is aimed to encrypt by symmetric key the video image archive and real-time database of the broadcaster, and by asymmetric key the video broadcasting to final users. The module includes our proprietary Techniteia Encryption Library (TEL), that is already successfully running and securing several e-commerce portals in Europe. TEL is written in C-ANSI language for its easy exportation onto all main platforms and it is optimized for real-time applications. It is based on the blowfish encryption algorithm and it is characterized by a physically separated sub-module for the automatic generation/recovering of the variable sub-keys of the blowfish algorithm. In this way, different parts of the database are encrypted by different keys, both in space and in time, for granting an optimal security.
Applications and Science of Artificial Neural Networks II, 1996
ABSTRACT In this paper, starting from a general discussion on neural network dynamics from the st... more ABSTRACT In this paper, starting from a general discussion on neural network dynamics from the standpoint of statistical mechanics, we discuss three different strategies to deal with the problem of pattern recognition in neural nets. Particularly we emphasized the role of matching the intrinsic correlations within the input patterns, to solve the problem of the optimal pattern recognition. In this context, the first two strategies, we applied to different problems and we discuss in this paper, consist essentially in adding either white noise or colored noise (deterministic chaos) on the input pattern pre-processing, to make easier for a classical backpropagation algorithm the class separation, respectively because the input patterns are too correlated among themselves or, on the contrary, are too noisy. The third more radical strategy, we applied to very hard pattern recognition problems in HEP experiments, consists in an automatic (dynamic) redefinition of the same net topology on the inner correlations of the inputs.
Science of Artificial Neural Networks II, 1993
ABSTRACT Usually, to discriminate among particle tracks in high energy physics a set of discrimin... more ABSTRACT Usually, to discriminate among particle tracks in high energy physics a set of discriminating parameters is used. To cope with the different particle behaviors these parameters are connected by the human observer with boolean operators. We tested successfully an automatic method for particle recognition using a stochastic method to pre-process the input to a back propagation algorithm. The test was made using raw experimental data of electrons and negative pions taken at CERN laboratories (Geneva). From the theoretical standpoint, the stochastic pre-processing of a back propagation algorithm can be interpreted as finding the optimal fuzzy membership function notwithstanding high fluctuating (noisy) input data.
Science of Artificial Neural Networks, 1992
ABSTRACT After a discussion of some theoretical limitations of multilayer architectures in contextual pattern recognition, and their experimental demonstration, we propose an implementation of a spin-glass-like neural net designed to deal efficiently in real time with time-dependent inputs (pattern translations, rotations, scaling, deformations) in noisy environments. The basic idea is a double dynamic on activations and weights on the same time scale. The two dynamics are correlated through an STM locking function on the object. This locking is the means by which the LTM module of the net can perform an invariant recognition of the object under transformations. This is possible owing to the invariant extraction of global features. The net is non-stationary and asymmetrical, because it is able to choose the right correlation order with respect to the memorized prototypes for a successful recognition. Nevertheless, the same non-stationary condition, depending on the locking on an object under transformations, implies that the net displays a non-relaxing stabilization. An application of the model to the classical recognition problem of rotating 'T' and 'C' pattern sequences in different noisy contexts is presented.
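The core idea of a double dynamic on activations and weights on the same time scale can be sketched as a coupled update loop — a minimal illustration with invented rates, omitting the STM locking function:

```python
import numpy as np

def double_dynamics_step(a, W, x, eta_a=0.5, eta_w=0.01):
    """One step of the coupled activation/weight dynamics.

    Activations a relax toward the net input while the weights W receive
    a Hebbian-style update on the same time scale, so the net can track a
    transforming input x; both learning rates are illustrative.
    """
    a = a + eta_a * (np.tanh(W @ x) - a)   # activation dynamic
    W = W + eta_w * np.outer(a, x)         # weight dynamic (Hebbian)
    return a, W

rng = np.random.default_rng(0)
a = np.zeros(4)
W = rng.normal(0.0, 0.1, size=(4, 4))
for t in range(10):
    x = np.roll(np.array([1.0, 0.0, 0.0, 0.0]), t)  # a "rotating" input
    a, W = double_dynamics_step(a, W, x)
```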
Applications of Artificial Neural Networks III, 1992
ABSTRACT After a short discussion of the problems related to the treatment of higher-order correlations in the Hopfield neural net, we propose a modified architecture able to rearrange its topology dynamically as a function of the input representation. The relations of this problem to computability problems are briefly considered, particularly in view of avoiding exponential computation time. Some experimental results are shown for the recognition of particle traces in high-energy accelerators and in speaker-independent speech recognition.
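For reference, the baseline Hopfield dynamics that the proposed architecture modifies can be sketched as follows; the dynamic topology rearrangement itself is not reproduced here, and the pattern and probe are invented:

```python
import numpy as np

def hopfield_recall(patterns, probe, steps=5):
    """Synchronous recall in a Hopfield net with Hebbian weights.

    Stores the given patterns in a symmetric weight matrix (zero diagonal)
    and iterates the sign dynamics from the probe state.
    """
    P = np.array(patterns)
    W = P.T @ P / P.shape[1]       # Hebbian storage
    np.fill_diagonal(W, 0.0)
    s = np.array(probe, dtype=float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0            # break ties deterministically
    return s

stored = np.array([1.0, -1.0, 1.0, -1.0])
corrupted = stored.copy()
corrupted[3] = 1.0                 # flip one bit
recalled = hopfield_recall([stored], corrupted)
```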
Applications and Science of Artificial Neural Networks, 1995
ABSTRACT With respect to the Rosenblatt linear perceptron, two classical limitation theorems demonstrated by M. Minsky and S. Papert are discussed. These two theorems, 'Ψ One-in-a-box' and 'Ψ Parity', ultimately concern the intrinsic limitations of parallel calculations in pattern recognition problems. We demonstrate a possible solution of these limitation problems by substituting the static definition of the characteristic functions and of their domains in the 'geometrical' perceptron with their dynamic definition. This dynamic consists in the mutual redefinition of the characteristic function and of its domain depending on the matching with the input.
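The kind of limitation the 'Ψ Parity' theorem captures is easy to exhibit: the classic perceptron learning rule converges on a linearly separable problem (OR) but cannot reach full accuracy on 2-bit parity (XOR). A minimal demonstration, not the paper's proposed dynamic redefinition:

```python
import numpy as np

def perceptron_accuracy(X, y, epochs=100):
    """Train a Rosenblatt perceptron and return its final training accuracy.

    On parity (XOR) no linear threshold unit can separate the classes,
    so the rule never converges and accuracy stays below 1.0.
    """
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            pred = 1 if xi @ w > 0 else 0
            w += (yi - pred) * xi                  # perceptron update
    return ((Xb @ w > 0).astype(int) == y).mean()

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
acc_parity = perceptron_accuracy(X, np.array([0, 1, 1, 0]))  # XOR
acc_or = perceptron_accuracy(X, np.array([0, 1, 1, 1]))      # OR
```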
Wavelet Applications, 1994
[Proceedings 1992] IJCNN International Joint Conference on Neural Networks, 1992
... e_{j,k}(0) = a_0 I_{j,k} (2) where I_{j,k} is the value of the j-th site of the k-th input and a_0 is a constant >> 1. ... When the constant a(n) is lower than a given value, the net has already found which pattern of the initial sequence is closest to the prototype. ...
IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339), 1999
Antonio L. Perrone (perrone@pul.it), Gianfranco Basti (basti@pul.it) ... 1. Abstract. For the evaluation and the selection of the optimal NN structure complexity, as a function of the minimization either of the approximation error or of the generalization error, we briefly discuss the ...
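The trade-off this abstract refers to — training (approximation) error versus generalization error as model complexity grows — can be sketched with polynomial degree standing in for NN structure complexity (an illustrative substitute, with invented data):

```python
import numpy as np

rng = np.random.default_rng(1)

def select_complexity(x_tr, y_tr, x_val, y_val, max_degree=8):
    """Pick the model complexity minimizing the generalization error.

    Training error keeps falling as the degree grows, while the error on
    held-out data selects the right complexity; polynomial fitting here
    is only a stand-in for choosing a neural net's structure.
    """
    best_deg, best_err = None, np.inf
    for d in range(1, max_degree + 1):
        coef = np.polyfit(x_tr, y_tr, d)
        err = np.mean((np.polyval(coef, x_val) - y_val) ** 2)
        if err < best_err:
            best_deg, best_err = d, err
    return best_deg

x_tr = np.linspace(-1, 1, 40)
y_tr = np.sin(3 * x_tr) + 0.1 * rng.normal(size=x_tr.size)
x_val = np.linspace(-1, 1, 37)
y_val = np.sin(3 * x_val) + 0.1 * rng.normal(size=x_val.size)
deg = select_complexity(x_tr, y_tr, x_val, y_val)
```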
Perspectives in Neural Computing, 2002
... of logic 3.1 Analytic versus axiomatic method in logic after Gödel. The preceding discussion about the core of human intentionality is expressed in the language of cognitive science as the capability of the human mind of re-defining the basic symbols of its computations. ...
Proceedings of the 2002 International Joint Conference on Neural Networks. IJCNN'02 (Cat. No.02CH37290), 2002
This paper outlines the applicative environment and sketches the technical details of an FM subcarrier technology called Multi Purpose Radio Communication channel (MPRC).
Proceedings of 1993 International Conference on Neural Networks (IJCNN-93-Nagoya, Japan), 1993
With respect to the Rosenblatt linear perceptron, a classical limitation theorem demonstrated by Minsky and Papert is discussed. This theorem, "Ψ One-in-a-box", ultimately concerns the intrinsic "mathematical" limitations of parallel calculations in pattern recognition problems. We demonstrate a possible solution of this limitation problem by substituting the static definition of the characteristic functions and of their domains in the "geometrical" perceptron with their dynamic definition.
International Joint Conference on Neural Networks, 1989
GIANFRANCO BASTI and ANTONIO PERRONE, Pontifical Gregorian University - Piazza della Pilotta, 4 - I 00187 - Rome (Italy) ... Abstract. Recently, experimental evidence has been produced in neurophysiology suggesting that the different features of the external stimuli, processed ...
Studies in Applied Philosophy, Epistemology and Rational Ethics, 2013
In a seminal work published in 1952, "The chemical basis of morphogenesis", A. M. Turing established the core of what today we call "natural computation" in biological systems, intended as self-organizing dissipative systems. In this contribution we show that a proper implementation of Turing's seminal idea cannot be based on diffusive processes, but on the coherence states of condensed matter according to the dissipative Quantum Field Theory (QFT) principles. This foundational theory is consistent with the intentional approach in cognitive neuroscience, insofar as it is formalized in the appropriate ontological interpretation of the modal calculus (formal ontology). This interpretation is based on the principle of the "double saturation" between a singular argument and its predicate, which has its dynamical foundation in the principle of the "doubling of the degrees of freedom" between a brain state and the environment, as an essential ingredient of the mathematical formalism of dissipative QFT.
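For context, the diffusive scheme of Turing's 1952 model — the very mechanism this paper argues is insufficient for natural computation — can be sketched as a two-morphogen reaction-diffusion step; the FitzHugh-Nagumo-like kinetics and all rates are illustrative, not taken from either paper:

```python
import numpy as np

def reaction_diffusion_step(u, v, Du=0.1, Dv=0.05, dt=0.01):
    """One explicit Euler step of a 1-D two-morphogen reaction-diffusion system.

    u plays the role of the activator and v of the inhibitor; diffusion
    uses a periodic discrete Laplacian, and the local kinetics are an
    illustrative FitzHugh-Nagumo-like choice.
    """
    lap = lambda a: np.roll(a, 1) + np.roll(a, -1) - 2.0 * a
    u_new = u + dt * (Du * lap(u) + (u - u ** 3 - v))  # activator kinetics
    v_new = v + dt * (Dv * lap(v) + (u - v))           # inhibitor kinetics
    return u_new, v_new

rng = np.random.default_rng(0)
u = 0.1 * rng.normal(size=64)
v = 0.1 * rng.normal(size=64)
for _ in range(50):
    u, v = reaction_diffusion_step(u, v)
```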