Information Theory (Mathematics) Research Papers

A multi-speed planetary gear train is mainly used for automation in the automobile industry. A planetary gear train is represented by a graph and is identified by (i) the number of vertices and their connectivity, (ii) the number of edges and their types and values, and (iii) the fundamental circuits, their size and adjacency. The connectivity of individual links is a characteristic property of a kinematic chain. It is therefore possible to identify a planetary gear train using sets of labels (decimal numbers representing connectivity) of individual links. The connectivity of vertices, the edge values and the circuit values are related to design invariants, which in turn indicate the likely behavior of the gear train (for example, power transmission capacity, speed ratio and power circulation). For a specified degree of freedom, a number of planetary gear kinematic chains (PGKCs), and hence planetary gear trains (PGTs), can be formed with a given number of links and joints, so the designer must be able to select the best train from the viewpoint of, say, velocity ratio, power transmission capacity and space requirements. Synthesis of planetary gear kinematic chains and planetary gear trains has been studied (1-9). Almost all reported work deals only with the identification of distinct chains. Besides providing an atlas of chains, this in itself does not help the designer select the best possible gear train. In the present paper a simple method based on circuit properties (link-link shortest path distance and degree of links) is presented to determine the topology values of power transmission efficiency and topological power transmission capacity of five-link PGKCs and their distinct inversions.
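
Since the method rests on two graph quantities, the degree of each link and the link-to-link shortest-path distance, a small sketch may help make them concrete. The 5-link adjacency list below is hypothetical, not one of the paper's chains.

```python
# Illustrative sketch: a gear kinematic chain as a graph, with the two
# quantities the proposed topology values are built on: per-link degree
# and link-link shortest-path distance (via breadth-first search).
from collections import deque

graph = {1: [2, 5], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4, 1]}

def degrees(g):
    # degree of a link = number of joints incident on it
    return {v: len(nbrs) for v, nbrs in g.items()}

def shortest_distances(g, src):
    # BFS gives the shortest-path distance from src to every other link
    dist, queue = {src: 0}, deque([src])
    while queue:
        u = queue.popleft()
        for v in g[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

print(degrees(graph))
print({v: shortest_distances(graph, v) for v in graph})
```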

This paper presents numbers arranged so that both sides of an expression use the same digits. One side consists of numbers raised to powers, while the other side consists of the plain concatenated numbers, such as a^b + c^d + ... = ab + cd + ..., etc. The expressions studied involve both positive and negative signs. Work is done for expressions of two to five terms. Because of the large quantity of numbers involved, in some cases both powers and bases are restricted to values greater than one.
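
A brute-force search illustrates the kind of two-term identity described, reading "ab" as the two-digit number 10a + b; the program and its digit bounds are a sketch, not the paper's method.

```python
# Hedged sketch: exhaustively search two-term identities of the form
# a^b +/- c^d == ab +/- cd, where "ab" denotes the two-digit number
# 10*a + b. Finds, e.g., 4^2 - 6^2 = 42 - 62 (both sides equal -20)
# and 3^2 - 7^2 = 32 - 72 (both sides equal -40).
for a in range(1, 10):
    for b in range(1, 10):
        for c in range(1, 10):
            for d in range(1, 10):
                if a ** b + c ** d == (10 * a + b) + (10 * c + d):
                    print(f"{a}^{b} + {c}^{d} = {a}{b} + {c}{d}")
                # (a, b) < (c, d) skips duplicates of the same identity
                if (a, b) < (c, d) and a ** b - c ** d == (10 * a + b) - (10 * c + d):
                    print(f"{a}^{b} - {c}^{d} = {a}{b} - {c}{d}")
```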

We are six students enrolled in the Physics of Data master's degree at the University of Padova, Italy. From this year on, our program offers lectures on Information Theory and Inference, held by two internationally recognized scientists who have been working on these topics for several years. Since the subjects covered seemed very interesting, we decided to pool our efforts and work on a set of high-quality notes for the whole course, to allow other students to study from them and gain a different point of view on the various topics treated. We hope you will enjoy our vegetable soup with melted cheese.

The International Journal of Chaos, Control, Modeling and Simulation is a quarterly open-access peer-reviewed journal that publishes articles contributing new results in all areas of Chaos Theory, Control Systems, Scientific Modeling and Computer Simulation. In the last few decades, chaos theory has found important applications in secure communication devices and secure data encryption. Control methods are very useful in several applied areas such as Chaos, Automation, Communication, Robotics, Power Systems, and Biomedical Instrumentation. The journal focuses on all technical and practical aspects of Chaos, Control, Modeling and Simulation. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on chaotic systems and applications, control theory, process control, automation, modeling concepts, computational methods and computer simulation, and to establish new collaborations in these areas.

The 11th International Conference on Information Theory (IT 2022) will provide an excellent international forum for sharing knowledge and results in the theory, methodology and applications of Information Theory. The goal of this conference is to bring together researchers and practitioners from academia and industry to focus on understanding modern Information Theory concepts and establishing new collaborations in these areas. Authors are solicited to contribute to the conference by submitting articles that illustrate research results, projects, surveys and industrial experiences describing significant advances in the areas of Information Theory and its applications.

This paper proposes a technique to compress any data irrespective of its type. Compressing random data in particular has always proved difficult: with very few patterns and little internal structure, the data quickly reaches a point where no more of it can be represented within a given number of bits. The proposed technique allows data to be compressed and represented irrespective of any pattern or structure within it. Furthermore, it permits the technique to be applied again to already compressed data without any change in the achievable compression ratio. While data cannot be compressed beyond a limit, the technique relies on representing the data as a position in any computationally convenient number series that extends to infinity, provided the series has high enough deviation among its digits. Only a few markers linked to that position are saved, rather than a representation of the original data. The system then uses these markers to locate the position and derive the data from it. The procedure is, however, computationally intensive and as of now raises questions of data corruption, but with more computing power, efficient algorithms and proper data integrity checks it could provide very high compression ratios in the future.
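
As a toy illustration of the position-in-a-series idea (not the paper's actual algorithm), one can encode a short digit string as its first offset inside a reproducible high-entropy digit stream and decode by regenerating the stream; the seeded PRNG below stands in for whatever number series is chosen.

```python
# Toy sketch (not the paper's algorithm): store a digit string as the
# marker (offset, length) of its first occurrence in a deterministic,
# regenerable digit stream. Note the catch: the expected offset grows
# roughly like 10**len(data), which is why the abstract calls the
# procedure computationally intensive.
import random

def stream_digits(n, seed=0):
    rng = random.Random(seed)
    return "".join(str(rng.randrange(10)) for _ in range(n))

def encode(data_digits, limit=10**6, seed=0):
    pos = stream_digits(limit, seed).find(data_digits)
    if pos < 0:
        raise ValueError("pattern not found within the search limit")
    return pos, len(data_digits)          # the saved "markers"

def decode(pos, length, seed=0):
    return stream_digits(pos + length, seed)[pos:pos + length]

marker = encode("31415")
assert decode(*marker) == "31415"
print(marker)
```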

In computer search optimization theory, active information is a measurement of a search algorithm's internal information as it relates to its problem space. While it has been previously applied to evolutionary search algorithms on computers, it has not been applied yet to biological systems. Active information can be very useful in differentiating between mutational adaptations which are based on internally-coded information and those which are the results of happenstance. However, biological systems present many practical problems regarding measuring active information which are not present in digital systems. This paper describes active information, how it can be used in biology, and how some of these problems can be overcome in specific cases.
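
For readers unfamiliar with the quantity, the bookkeeping is simple; the numbers below are invented for illustration.

```python
# Sketch of the active-information bookkeeping: if a blind search succeeds
# with probability p and the assisted search with probability q, the
# active information is I+ = log2(q / p), the problem-specific information
# the search procedure itself embodies.
from math import log2

def active_information(p_blind, q_assisted):
    return log2(q_assisted / p_blind)

# Toy numbers: a blind search hits the target with p = 2^-20; an assisted
# search raises this to q = 2^-5, so it carries 15 bits of active information.
print(active_information(2 ** -20, 2 ** -5))   # 15.0
```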

An advanced, yet-to-be-defined superfluid aether analysis for the masses of objects found in Nature: protons, electrons, black holes. The approach is simple and agrees exactly with General Relativity's calculation of black hole mass; it is nearly identical to Haramein's approach, if not exactly the same.

With the Generalized Theorem of Lagrange
DOI: 10.13140/RG.2.2.33819.39208
Research Proposal

We live in the information age. Claude Shannon, as the father of the information age, gave us a theory of communications that quantified an "amount of information," but, as he pointed out, "no concept of information itself was defined." Logical entropy provides that definition. Logical entropy is the natural measure of the notion of information based on distinctions, differences, distinguishability, and diversity. It is the (normalized) quantitative measure of the distinctions of a partition on a set, just as the Boole-Laplace logical probability is the normalized quantitative measure of the elements of a subset of a set. And partitions and subsets are mathematically dual concepts, so the logic of partitions is dual in that sense to the usual Boolean logic of subsets, and hence the name "logical entropy." The logical entropy of a partition has a simple interpretation as the probability that a distinction or dit (elements in different blocks) is obtained in two independent draws from the underlying set. The Shannon entropy is shown to also be based on this notion of information-as-distinctions; it is the average minimum number of binary partitions (bits) that need to be joined to make all the same distinctions of the given partition. Hence all the concepts of simple, joint, conditional, and mutual logical entropy can be transformed into the corresponding concepts of Shannon entropy by a uniform non-linear dit-bit transform. And finally logical entropy linearizes naturally to the corresponding quantum concept. The quantum logical entropy of an observable applied to a state is the probability that two different eigenvalues are obtained in two independent projective measurements of that observable on that state.
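
A minimal numerical sketch of the two measures on the same block probabilities; the dit-count interpretation follows directly from the definition h(p) = 1 - sum(p_i^2).

```python
# Minimal sketch: for a partition with block probabilities p, the logical
# entropy h(p) = 1 - sum(p_i^2) is the probability that two independent
# draws fall in different blocks (a "dit"); Shannon entropy is shown for
# comparison. The dit-bit transform swaps each (1 - p_i) for log2(1/p_i).
from math import log2

def logical_entropy(p):
    return 1.0 - sum(pi * pi for pi in p)

def shannon_entropy(p):
    return -sum(pi * log2(pi) for pi in p if pi > 0)

p = [0.5, 0.25, 0.25]
print(logical_entropy(p))   # 0.625 = chance two draws are distinguished
print(shannon_entropy(p))   # 1.5 bits
```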

Processing information is what all physical systems do. Quantum computation is not just a technological promise, but the most challenging test for the conceptual problems in Quantum Mechanics. We could say that Schrödinger's cat has been tamed and is leading us along the most charming paths of the physical world. Yet NP-complete problems appear impregnable even to traditional quantum computing. All that sounds paradoxical considering that the local and classical world emerges from the non-local quantum one, which permeates every aspect of the physical world. Quantum Turing Machines constrain the quantum system to yes/no answers, whereas the real computational vocation of QM would be to use superposition and non-locality to obtain probabilistic oracles with performance beyond the Turing barrier. In this volume we have tried to provide a panorama of these trends: on one hand, the physics of traditional, Turing-based Quantum Computing, crucial for clarifying the old foundational problems and surely decisive in the future of nanotechnology and quantum communication; on the other, the possibility of a broader concept of quantum information which will lead to a new pact between quantum dissipative field theory and the concept of computation in physical systems.

This study presents a set of Long-Term Geoelectric Potential (LTGP) measurements collected for experimental investigation in Western Greece during a five-year period (1993-1997). During this period, many major destructive earthquake events occurred that caused human casualties and extensive material damage. The collection and processing of the geoelectric measurements was done by an automated data acquisition system at the Seismological Laboratory of the University of Patras, Greece. This study treats the seismic activity of the area as a typical linear dynamic system, and the dynamic relationship between the magnitude of earthquakes and the Long-Term Geoelectric Potential signals is inferred by the Recursive Least Squares algorithm. The results are encouraging and show that linear dynamic systems, which are widely used in modern control theory, can efficiently describe the dynamic behavior of seismic activity and become a useful interpretative tool for seismic phenomena.
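
To make the inference step concrete, here is a minimal recursive least squares update in its standard textbook form; the regression model, data and forgetting factor are illustrative, not those of the study.

```python
# Minimal recursive least squares (RLS) sketch of the standard form used
# for online linear system identification: fit y_t ~ w . x_t sample by
# sample. The model, data and forgetting factor are illustrative only.
import numpy as np

def rls(xs, ys, lam=0.99, delta=100.0):
    n = xs.shape[1]
    w = np.zeros(n)                 # parameter estimate
    P = delta * np.eye(n)           # inverse correlation matrix estimate
    for x, y in zip(xs, ys):
        k = P @ x / (lam + x @ P @ x)        # gain vector
        w = w + k * (y - x @ w)              # update on the a-priori error
        P = (P - np.outer(k, x @ P)) / lam   # covariance downdate
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X @ np.array([0.7, -1.2]) + 0.01 * rng.normal(size=200)
print(rls(X, y))   # approaches [0.7, -1.2]
```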

This article proposes the hypothesis that consciousness is a high-level interaction consisting of the transfer of causality without the transfer of information. Several previously undiscussed properties of consciousness are considered, among them the property of occupying the position of a supersystem with respect to any system, including the states of consciousness itself, as well as the property of filling a causal gap and the property of forming a causal environment. The relationship between neurophysiological approaches to the study of consciousness is examined using the example of G. Tononi's integrated information theory and K.V. Anokhin's cognitome theory. It is concluded that the latter theory has significantly greater potential for development; however, this requires a new conceptual apparatus, including such notions as causal environment and uncertainty, as well as the notion of non-systems as complexes of elements that do not interact with one another.

In this essay the logical and conceptual foundations of distributed artificial intelligence and multi-agent systems are explored. An attempt is made to provide an introduction to some of the key concepts of the area. These include the notion of a changing social world of multiple agents. The agents have different, dynamically changing possible choices and abilities. The agents also have uncertainty or lack of information about their physical state as well as their dynamic social state. The social state of an agent includes the intentional state of that agent, as well as that agent's representation of the intentional states of other agents. Furthermore, it includes the evaluations agents make of their physical and social condition. Communication and meaning and their relationship to intentional and information states are investigated. The logics of agent abilities and intentions are motivated and formalized. The entropy of group plan states is defined.

Uplink and downlink cloud radio access networks are modeled as two-hop K-user L-relay networks, whereby small base-stations act as relays for end-to-end communications and are connected to a central processor via orthogonal fronthaul links of finite capacities. Simplified versions of network compress-forward (or noisy network coding) and distributed decode-forward are presented to establish inner bounds on the capacity region for uplink and downlink communications that match the respective cutset bounds to within a finite gap independent of the channel gains and signal-to-noise ratios. These approximate capacity regions are then compared with the capacity regions for networks with no capacity limit on the fronthaul. Although it takes infinite fronthaul link capacities to achieve these "fronthaul-unlimited" capacity regions exactly, they can be approached approximately with finite-capacity fronthaul. The total fronthaul link capacities required to approach the fronthaul-unlimited sum-rates (for uplink and downlink) are characterized. Based on these results, the capacity scaling law in the large network size limit is established under certain uplink and downlink network models, both theoretically and via simulations.

The Kullback–Leibler (KL) divergence is a fundamental measure of information geometry that is used in a variety of contexts in artificial intelligence. We show that, when system dynamics are given by distributed nonlinear systems, this measure can be decomposed as a function of two information-theoretic measures, transfer entropy and stochastic interaction. More specifically, these measures are applicable when selecting a candidate model for a distributed system, where individual subsystems are coupled via latent variables and observed through a filter. We represent this model as a directed acyclic graph (DAG) that characterises the unidirectional coupling between subsystems. Standard approaches to structure learning are not applicable in this framework due to the hidden variables; however, we can exploit the properties of certain dynamical systems to formulate exact methods based on differential topology. We approach the problem by using reconstruction theorems to derive an analytical expression for the KL divergence of a candidate DAG from the observed dataset. Using this result, we present a scoring function based on transfer entropy to be used as a subroutine in a structure learning algorithm. We then demonstrate its use in recovering the structure of coupled Lorenz and Rössler systems.
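
As a concrete anchor for the scoring function, here is a plug-in estimator of pairwise transfer entropy for discrete time series; it is a sketch of the basic quantity only, not the paper's reconstruction-theorem machinery.

```python
# Hedged sketch: plug-in estimate of transfer entropy TE(X -> Y) for
# discrete time series, the basic quantity behind the scoring function:
#   TE(X->Y) = sum p(y', y, x) * log2( p(y'|y, x) / p(y'|y) ),
# with y' the next value of Y and (y, x) the current values.
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    n = len(y) - 1
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_next, y_now, x_now)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    margin_y = Counter(y[:-1])
    te = 0.0
    for (yn, yp, xp), c in triples.items():
        p_joint = c / n
        p_full = c / pairs_yx[(yp, xp)]                 # p(y'|y, x)
        p_hist = pairs_yy[(yn, yp)] / margin_y[yp]      # p(y'|y)
        te += p_joint * log2(p_full / p_hist)
    return te

x = [0, 1, 0, 1, 1, 0, 1, 0, 0, 1]
y = [1] + x[:-1]                 # y copies x with one step of lag
print(transfer_entropy(x, y))    # positive: X drives Y in this toy
print(transfer_entropy(y, x))    # reverse direction, for comparison
```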

Abstract: In this paper, motivated by the seminal paper of Brockett, "Information theoretic approach to actuarial science: a unification and extension of relevant theory and applications," Transactions of the Society of Actuaries, Vol. 43, 73-135 (1991), we initially review minimization of the Kullback-Leibler divergence DKL(u, v) between observed (raw) death probabilities or mortality rates, u, and the same entities, v, to be graduated (or smoothed), subject to a set of reasonable constraints such as monotonicity, bounded smoothness, etc. Noting that the quantities u and v involved in the above minimization problem based on the Kullback-Leibler divergence are non-probability vectors, we study the properties of divergence and statistical information theory for DKL(p, q), where p and q are non-probability vectors. We do the same for the Cressie and Read power divergence between non-probability vectors, solve the problem of graduation of mortality rates via Lagrangian duality theory, discuss the ramifications of constraints and tests of goodness of fit, and compare with other graduation methods, predominantly the Whittaker-Henderson method. At the end we provide numerical illustrations and comparisons.
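
A small sketch of the two divergences in the role described, applied to nonnegative (not necessarily normalized) vectors; the mortality-rate vectors are invented for illustration, and the Cressie-Read normalization follows the common 2/(lambda*(lambda+1)) convention, which varies by author.

```python
# Sketch: the two divergences discussed, evaluated on nonnegative vectors
# u (raw rates) and v (graduated rates) that need not sum to one.
#   D_KL(u, v)  = sum u_i * ln(u_i / v_i)
#   CR_lam(u,v) = (2 / (lam * (lam + 1))) * sum u_i * ((u_i / v_i)**lam - 1)
# The rates below are invented; normalization conventions vary by author.
from math import log

def kl_divergence(u, v):
    return sum(ui * log(ui / vi) for ui, vi in zip(u, v))

def cressie_read(u, v, lam):
    s = sum(ui * ((ui / vi) ** lam - 1) for ui, vi in zip(u, v))
    return 2.0 * s / (lam * (lam + 1))

u = [0.011, 0.013, 0.018, 0.016, 0.022]   # raw mortality rates
v = [0.011, 0.014, 0.016, 0.018, 0.021]   # smoothed (graduated) rates
print(kl_divergence(u, v))
print(cressie_read(u, v, lam=2/3))        # lam = 2/3: a popular choice
```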

The paper discusses differences between the approaches to optimization of wireless analog feedback communication systems (AFCS) and digital communication systems (DCS). It is shown that "one mile zone" transmission of analog signals can be realized by an optimal AFCS more efficiently than by a DCS.

Previous work suggests that there is an ordering to the discoverability of axioms, a "size" of sorts. However, there is not presently a method of measuring the size of an axiom. This paper suggests two possible methods for measuring axiom size. The goal is not to produce a definitive measurement technique, but to begin the exploration of different possible size measurements for axioms.

In general, divergences and measures of information are defined for probability vectors. However, in some cases, divergences are ‘informally’ used to measure the discrepancy between vectors, which are not necessarily probability vectors. In this paper we examine whether divergences with nonprobability vectors in their arguments share the properties of probabilistic or information theoretic divergences. The results indicate that divergences with nonprobability vectors share, under some conditions, some of the properties of probabilistic or information theoretic divergences and therefore can be considered and used as information measures. We then use these divergences in the problem of actuarial graduation of mortality rates.

The persistent mutual information (PMI) is a complexity measure for stochastic processes. It is related to well-known complexity measures like excess entropy or statistical complexity. Essentially it is a variation of the excess entropy, so it can be interpreted as a specific measure of a system's internal memory. The PMI was first introduced in 2010 by Ball, Diakonova and MacKay as a measure for (strong) emergence. In this paper we define the PMI mathematically and investigate its relation to excess entropy and statistical complexity. In particular we prove that the excess entropy is an upper bound for the PMI. Furthermore we show some properties of the PMI and calculate it explicitly for some example processes. We also discuss to what extent it is a measure for emergence and compare it with alternative approaches used to formalize emergence.
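
A rough plug-in sketch of the excess-entropy side of the comparison, estimated from block entropies; the window length and example process are illustrative.

```python
# Rough sketch: plug-in excess entropy from block entropies,
#   E ~ H(L) - L * h,  with entropy rate h ~ H(L) - H(L-1).
# For a period-2 process the estimate approaches 1 bit of stored memory,
# the upper bound that (by the paper's result) also caps the PMI.
from collections import Counter
from math import log2

def block_entropy(seq, L):
    blocks = Counter(tuple(seq[i:i + L]) for i in range(len(seq) - L + 1))
    n = sum(blocks.values())
    return -sum(c / n * log2(c / n) for c in blocks.values())

def excess_entropy(seq, L):
    h = block_entropy(seq, L) - block_entropy(seq, L - 1)   # entropy rate
    return block_entropy(seq, L) - L * h

seq = [0, 1] * 500
print(excess_entropy(seq, 4))   # ~ 1.0 bit
```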

Recently, Xiong et al. (2019) introduced an alternative measure of uncertainty known as the fractional cumulative residual entropy (FCRE). In this paper, first, we study some general properties of the FCRE and its dynamic version. We also consider a version of fractional cumulative paired entropy for a random lifetime. Then we apply the FCRE measure to the lifetimes of coherent systems with identically distributed components.
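
A numerical check of the definition for the simplest lifetime model; up to normalization conventions, the closed form Gamma(q+1)/lambda for the exponential case is classical, and the parameters are illustrative.

```python
# Sketch: fractional cumulative residual entropy (Xiong et al., 2019),
#   FCRE_q(X) = integral over x >= 0 of S(x) * (-ln S(x))**q dx,
# with survival function S. For an exponential lifetime S(x) = exp(-lam*x)
# this reduces to Gamma(q + 1) / lam; a midpoint rule confirms it.
from math import exp, gamma

def fcre_exponential(lam, q, dx=1e-3, x_max=40.0):
    total, x = 0.0, dx / 2
    while x < x_max:
        s = exp(-lam * x)
        total += s * (lam * x) ** q * dx
        x += dx
    return total

lam, q = 2.0, 0.5
print(fcre_exponential(lam, q))   # ~ 0.4431 (numerical)
print(gamma(q + 1) / lam)         # = Gamma(1.5)/2 ~ 0.4431 (closed form)
```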

Affordances are directly perceived environmental possibilities for action. Born within ecological psychology, they have been proposed to be one of the main building blocks for explaining cognition from an embodied and situated perspective. Despite the interest, a formal definition of affordances in information-theoretic terms that would allow their full potential to be exploited in models of cognitive systems is still missing. We explore the challenge of quantifying affordances by using information-theoretic measures. Specifically, we propose that empowerment (i.e., information quantifying how much influence and control an agent has over the environment it can perceive) can be used to formally capture information about the possibilities for action (the range of possible behaviors of the agent in a given environment), which in some cases can constitute affordances. We test this idea in a minimal model reproducing some aspects of a classical example of body-scaled affordances: an agent passing through an aperture. We use empowerment measures to characterize the affordance of passing through the aperture. We find that the empowerment measures yield a transition similar to the one found in experimental data on humans in the specialized literature on ecological psychology. The exercise points to some limitations in formalizing affordances and allows us to pose questions regarding how affordances can be differentiated from more generic possibilities for action.
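
To show the flavor of the computation (not the paper's actual model or parameters), here is a toy deterministic aperture world: for deterministic dynamics, n-step empowerment reduces to the log of the number of reachable states, and the body-width transition appears once the agent no longer fits.

```python
# Toy sketch, not the paper's model: for deterministic dynamics, n-step
# empowerment is log2 of the number of distinct states reachable by
# n-step action sequences. An agent at x moves Left/Right, and moves
# Forward through an aperture only if its body width fits; empowerment
# drops once the body is wider than the aperture.
from itertools import product
from math import log2

def make_step(aperture_width, body_width):
    def step(state, action):
        x, y = state
        if action == "L":
            return (x - 1, y)
        if action == "R":
            return (x + 1, y)
        # "F": pass through only if the body fits and we face the opening
        if body_width <= aperture_width and abs(x) <= aperture_width // 2:
            return (x, y + 1)
        return (x, y)
    return step

def empowerment(step, state, actions="LRF", n=3):
    reachable = set()
    for seq in product(actions, repeat=n):
        s = state
        for a in seq:
            s = step(s, a)
        reachable.add(s)
    return log2(len(reachable))

for w in (1, 3, 5):
    e = empowerment(make_step(aperture_width=3, body_width=w), (0, 0))
    print(f"body width {w}: {e:.2f} bits")
```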

For source sequences of length L symbols, we propose a more realistic value for the usual benchmark of the number of code letters per source letter. Our idea is based on a quantifier of the information fluctuation of a source, F(U), which corresponds to the second central moment of the random variable that measures the information content of a source symbol. An alternative interpretation of typical sequences is additionally provided through this approach.
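
Concretely, with self-information -log2 p(u), the quantifier is just its variance; the distribution below is illustrative.

```python
# Sketch of the quantifier described: F(U) is the second central moment
# (variance) of the self-information -log2 p(u) of a source symbol.
from math import log2

def entropy(p):
    return -sum(pi * log2(pi) for pi in p if pi > 0)

def info_fluctuation(p):
    h = entropy(p)   # mean of the self-information
    return sum(pi * (-log2(pi) - h) ** 2 for pi in p if pi > 0)

p = [0.5, 0.25, 0.25]          # illustrative source distribution
print(info_fluctuation(p))     # 0.25 (bits^2); 0 for a uniform source
```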

We extend previously proposed measures of complexity, emergence, and self-organization to continuous distributions using differential entropy. Given that the measures were based on Shannon's information, the novel continuous complexity measures describe how a system's predictability changes in terms of the probability distribution parameters. This allows us to calculate the complexity of phenomena for which distributions are known. We find that a broad range of common parameters found in Gaussian and scale-free distributions present high complexity values. We also explore the relationship between our measure of complexity and information adaptation.
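
A sketch of the discrete measures being extended, evaluated on a Gaussian through its differential entropy; the normalizing reference H_max is an assumption of this sketch, since differential entropy needs a scale to be normalized against.

```python
# Sketch: the discrete framework uses normalized entropy E = H / H_max
# (emergence), S = 1 - E (self-organization) and C = 4 * E * S
# (complexity). Here H is the differential entropy of a Gaussian,
# h = 0.5 * log2(2*pi*e*sigma^2); the reference h_max is an assumption
# of this sketch.
from math import log2, pi, e

def gaussian_diff_entropy(sigma):
    return 0.5 * log2(2 * pi * e * sigma ** 2)

def emergence_complexity(h, h_max):
    E = min(max(h / h_max, 0.0), 1.0)   # clamp: diff. entropy can be < 0
    S = 1.0 - E
    return E, S, 4.0 * E * S

for sigma in (0.1, 1.0, 10.0):
    E, S, C = emergence_complexity(gaussian_diff_entropy(sigma), h_max=8.0)
    print(f"sigma={sigma}: E={E:.2f} S={S:.2f} C={C:.2f}")
```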

Review of statistical papers in the period 2011-2015

Abstract: This paper presents a new approach to improving the performance of the sensor node-base station (SNBS) links of the physical layer of wireless sensor networks. The key idea is to treat the SNBS as a remote measurement and estimation system and to apply the corresponding analytical tools. It is shown that the main difficulty in improving digital SNBS performance is the impossibility of formulating and solving the optimization task using well-developed methods of optimal estimation theory. In turn, an SNBS transmitting the signals formed by the sensors using an analog PAM transmitter adjusted over the feedback channel permits an analytical formulation of the mean square error (MSE) and a solution of the optimization task. The obtained optimal transmission-reception algorithm enables the design of a low energy-size-cost analog SNBS, which transmits the signals with minimal MSE at a bit rate equal to the capacity of the link. Moreover, the MSE of transmission determines the information characteristics of the links, and enables the development of unified methods for evaluating and measuring real SNBS performance.
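
As a companion calculation, the classical information-theoretic benchmark such analog feedback schemes are measured against can be stated in a few lines: matching the Gaussian rate-distortion function to channel capacity gives the minimum achievable MSE. The numbers below are illustrative.

```python
# Companion sketch: the classical benchmark for transmitting one Gaussian
# sample of variance var over n uses of an AWGN channel. Equating the
# rate-distortion function R(D) = 0.5*log2(var/D) with n channel uses at
# capacity C = 0.5*log2(1 + snr) gives the minimum mean square error
#   D_min = var / (1 + snr)**n,
# which feedback-based analog schemes attain in the ideal Gaussian setting.
def opta_mse(var, snr, n):
    return var / (1.0 + snr) ** n

print(opta_mse(var=1.0, snr=10.0, n=3))   # ~ 7.5e-4, illustrative numbers
```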