Panagiotis Nastou - Academia.edu
Papers by Panagiotis Nastou
Lecture Notes in Computer Science, 2024
Lecture Notes in Computer Science, 2023
In this paper we present approaches for generating random numbers along with potential applications. Rather than trying to provide extensive coverage of the several techniques and algorithms that have appeared in the scientific literature, we focus on some representative approaches, presenting their workings and properties in detail. Our goal is to delineate their strengths and weaknesses as well as their potential application domains, so that the reader can judge which approach, or possibly which combination of the available approaches, would be best for the application at hand. For instance, a physical source of randomness can be used for the initial seed, suitable preprocessing can then enhance its randomness, and the output of the preprocessing can feed different types of generators, e.g. a linear congruential generator, a cryptographically secure one, and one based on the combination of one-way hash functions and shared-key cryptoalgorithms in various modes of operation. Then, if desired, th...
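As a minimal sketch of the seeding pipeline described above (a physical/OS entropy source feeding a deterministic generator), the following C program reads a seed from /dev/urandom and runs a classical linear congruential generator. The MINSTD constants and the use of /dev/urandom are illustrative assumptions, not the chapter's exact construction, and an LCG alone is of course not cryptographically secure.

```c
/* Minimal sketch: seed a linear congruential generator from an OS entropy
 * source. MINSTD parameters (a = 16807, m = 2^31 - 1) are a classical choice;
 * /dev/urandom is assumed to exist (POSIX systems). Not a CSPRNG by itself. */
#include <stdio.h>
#include <stdint.h>

static uint64_t lcg_state;

/* MINSTD: x_{k+1} = 16807 * x_k mod (2^31 - 1) */
static uint32_t lcg_next(void) {
    lcg_state = (16807ULL * lcg_state) % 2147483647ULL;
    return (uint32_t)lcg_state;
}

int main(void) {
    FILE *urandom = fopen("/dev/urandom", "rb");   /* physical/OS entropy source */
    if (!urandom) return 1;
    uint32_t seed = 0;
    if (fread(&seed, sizeof seed, 1, urandom) != 1) { fclose(urandom); return 1; }
    fclose(urandom);

    lcg_state = (seed % 2147483646u) + 1;          /* state must lie in [1, m-1] */
    for (int i = 0; i < 5; i++)
        printf("%u\n", (unsigned)lcg_next());
    return 0;
}
```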
The formal study of computer malware was initiated in the seminal work of Fred Cohen in the mid-80s, who applied elements of the Theory of Computation to investigate the theoretical limits of using the Turing Machine formal model of computation to detect viruses. Cohen gave a simple but realistic formal definition of the characteristic actions of a computer virus as a Turing Machine that replicates itself, and then proved that constructing a Turing Machine that recognizes viruses (i.e. Turing Machines that act like viruses) is impossible, by reducing the Halting Problem, which is undecidable, to the problem of recognizing a computer virus. In this paper we complement Cohen's approach along similar lines, based on Recursive Function Theory and the Theory of Computation. More specifically, after providing a simple generalization of Cohen's definition of a computer virus, we show that the malware/non-malware classification problem is undecidable under this new definition. Moreover, we show that to any formal system there correspond infinitely many, effectively constructible, programs for which the formal system can produce no proof that they are either malware or non-malware. In other words, given any formal system, one can provide a procedure that systematically generates an infinite number of programs that are impossible to classify within the formal system.
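The flavor of such undecidability results can be conveyed by the standard diagonal construction; the sketch below is the textbook reduction pattern, not the paper's exact proof or notation.

```latex
\noindent\textbf{Sketch of the standard diagonal argument} (not the paper's exact proof).
Suppose a total decider $D$ existed with $D(p)=1$ iff program $p$ is malware.
By the recursion theorem one can construct a program $q$ that obtains its own
description and inverts $D$'s verdict on itself:
\[
  q:\quad
  \begin{cases}
    \text{halt without replicating,} & \text{if } D(q)=1,\\[2pt]
    \text{replicate (the defining malware action),} & \text{if } D(q)=0.
  \end{cases}
\]
Then $D(q)=1$ implies $q$ is benign and $D(q)=0$ implies $q$ is malware,
contradicting the assumed totality and correctness of $D$.
```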
International Journal of Differential Equations, 2013
We investigate the properties of a general class of differential equations described by dy(x)/dx = a_{n+1}(x) y(x)^{n+1} + a_n(x) y(x)^n + ... + a_2(x) y(x)^2 + a_1(x) y(x) + a_0(x), with n > 1 a positive integer and the a_i(x), 0 ≤ i ≤ n+1, real functions of x. For n = 2, these equations reduce to the class of Abel differential equations of the first kind, for which a standard solution procedure is available. However, for n > 2 no general solution methodology exists, to the best of our knowledge, that can lead to their solution. We develop a general solution methodology that, for odd values of n, connects the closed-form solution of the differential equation with the existence of closed-form expressions for the roots of the polynomial that appears on its right-hand side. Moreover, the closed-form expression for the polynomial roots (when it exists) enables the solution of the differential equation to be expressed in closed form, based on the class of Hyper-Lambert functions. However, for certain even values of n, we prove that such a closed form does not exist in general, and consequently there is no closed-form expression for the solution of the differential equation through this methodology.
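For readability, the equation class discussed above in display form (the symbols x, y and a_i are assumed names, since the original notation was lost in extraction):

```latex
\[
  \frac{dy(x)}{dx} \;=\; a_{n+1}(x)\,y(x)^{\,n+1} + a_{n}(x)\,y(x)^{\,n} + \cdots
                        + a_{2}(x)\,y(x)^{2} + a_{1}(x)\,y(x) + a_{0}(x),
  \qquad n > 1,
\]
\[
  \text{with } a_i(x),\ 0 \le i \le n+1, \text{ real functions of } x.
  \quad \text{For } n = 2:\quad
  \frac{dy}{dx} = a_{3}(x)\,y^{3} + a_{2}(x)\,y^{2} + a_{1}(x)\,y + a_{0}(x)
  \quad \text{(Abel equation of the first kind).}
\]
```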
Communications in Computer and Information Science, 2011
... Initially, the master generates IN = ⌈log2(nProcs − 1)⌉ cryptographically strong Boolean functions, all linear combinations of which satisfy a set of target properties (e.g. nonlinearity greater than a given value, the SAC property, etc.). ...
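One of the target properties mentioned in the excerpt, nonlinearity, is conventionally computed from the Walsh-Hadamard spectrum of the Boolean function. The sketch below is that standard computation (not code from the paper), shown for a 4-variable bent function.

```c
/* Nonlinearity of an n-variable Boolean function given as a truth table of
 * 2^n bits: NL(f) = 2^(n-1) - max_w |W_f(w)| / 2, where W_f is the
 * Walsh-Hadamard transform of the +/-1 version of f. Textbook computation. */
#include <stdio.h>
#include <stdlib.h>

static int nonlinearity(const int *truth_table, int n) {
    int size = 1 << n;
    int *w = malloc(size * sizeof(int));
    if (!w) return -1;
    for (int i = 0; i < size; i++)
        w[i] = truth_table[i] ? -1 : 1;          /* f(x) -> (-1)^f(x) */

    /* In-place fast Walsh-Hadamard transform. */
    for (int len = 1; len < size; len <<= 1)
        for (int i = 0; i < size; i += 2 * len)
            for (int j = i; j < i + len; j++) {
                int a = w[j], b = w[j + len];
                w[j] = a + b;
                w[j + len] = a - b;
            }

    int max_abs = 0;
    for (int i = 0; i < size; i++) {
        int v = abs(w[i]);
        if (v > max_abs) max_abs = v;
    }
    free(w);
    return (size / 2) - max_abs / 2;
}

int main(void) {
    /* Example: the 4-variable bent function f(x) = x1x2 XOR x3x4 (nonlinearity 6). */
    int f[16];
    for (int x = 0; x < 16; x++) {
        int x1 = (x >> 3) & 1, x2 = (x >> 2) & 1, x3 = (x >> 1) & 1, x4 = x & 1;
        f[x] = (x1 & x2) ^ (x3 & x4);
    }
    printf("nonlinearity = %d\n", nonlinearity(f, 4));
    return 0;
}
```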
2009 International Symposium on Autonomous Decentralized Systems, 2009
Elliptic Curve Cryptography (ECC) is one of the most promising alternatives to conventional public key cryptography, such as RSA and ElGamal, since it employs keys of smaller sizes for the same level of cryptographic strength. Smaller key sizes imply smaller hardware units for performing the arithmetic operations required by cryptographic protocols and, thus, ECC is an ideal candidate for implementation in embedded systems where the major computational resources (speed and storage) are limited. In this paper we present a port, written in ANSI C for maximum portability, of an open source ECC-based cryptographic library (ECC-LIB) to ATMEL's AT76C520 802.11 WLAN Access Point. One of the major features of this port, not found in similar ports, is that it supports Complex Multiplication (CM) for the construction of elliptic curves with good security properties. We present experimental results demonstrating that the port is efficient and can lead to generic embedded systems with robust ECC-based cryptographic protocols using cryptographically strong elliptic curves generated with CM. As an application of the ported library, an EC Diffie-Hellman key exchange protocol is developed as an alternative to the 4-way key handshake of the 802.11 protocol.
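To illustrate the EC Diffie-Hellman flow mentioned at the end of the abstract, here is a self-contained toy in C over a deliberately tiny curve. The curve, parameters and interface are illustrative assumptions with no security value; they do not reflect ECC-LIB's API or the CM-generated curves used in the port.

```c
/* Toy EC Diffie-Hellman over y^2 = x^3 + 2x + 3 (mod 97). Both parties derive
 * the same shared point dA*dB*G; parameters are far too small to be secure. */
#include <stdio.h>

#define P 97   /* field prime */
#define A  2   /* curve coefficient a */

typedef struct { long long x, y; int inf; } Point;

static long long modp(long long v) { return ((v % P) + P) % P; }

/* Modular inverse via Fermat's little theorem (P is prime). */
static long long inv(long long v) {
    long long r = 1, b = modp(v), e = P - 2;
    while (e > 0) {
        if (e & 1) r = (r * b) % P;
        b = (b * b) % P;
        e >>= 1;
    }
    return r;
}

static Point add(Point p, Point q) {
    Point o = { 0, 0, 1 };                     /* point at infinity */
    if (p.inf) return q;
    if (q.inf) return p;
    if (p.x == q.x && modp(p.y + q.y) == 0) return o;
    long long s;
    if (p.x == q.x && p.y == q.y)
        s = modp((3 * p.x * p.x + A) * inv(2 * p.y));   /* tangent slope */
    else
        s = modp((q.y - p.y) * inv(q.x - p.x));         /* chord slope */
    Point r;
    r.inf = 0;
    r.x = modp(s * s - p.x - q.x);
    r.y = modp(s * (p.x - r.x) - p.y);
    return r;
}

static Point mul(long long k, Point p) {       /* double-and-add */
    Point r = { 0, 0, 1 };
    while (k > 0) {
        if (k & 1) r = add(r, p);
        p = add(p, p);
        k >>= 1;
    }
    return r;
}

int main(void) {
    Point G = { 3, 6, 0 };                     /* 6^2 = 3^3 + 2*3 + 3 (mod 97) */
    long long dA = 13, dB = 29;                /* private keys (toy values) */
    Point QA = mul(dA, G), QB = mul(dB, G);    /* exchanged public keys */
    Point SA = mul(dA, QB), SB = mul(dB, QA);  /* shared secrets: must match */
    printf("Alice: (%lld, %lld)  Bob: (%lld, %lld)\n", SA.x, SA.y, SB.x, SB.y);
    return 0;
}
```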
Proceedings 22nd International Conference on Distributed Computing Systems Workshops
When considering block cipher designs, one feature that seemingly is not related to the robustness of a design is algorithmic variability, i.e. the ability to effect changes on a design that essentially leave its structure unchanged while modifying its functional characteristics. This feature, however, is related to robustness, as there are situations where a specific algorithm is either suspected to be under cryptanalytic attack or is no longer considered secure due to a discovered weakness. The easiest action would be to change the characteristics of the algorithm in a way that obscures the cryptanalytic attack or eliminates the cipher's weaknesses. Our focus is on this kind of change, using the CAST-128 cipher as a specific case. The changes we consider concern the algorithm's substitution boxes and, since the creation of good substitution boxes is a highly time-consuming process, we also provide a parallel algorithm for completing this task quickly.
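The parallel search idea can be pictured with a generic skeleton: worker threads independently generate candidate s-boxes and keep those passing an acceptance test. The near-balancedness test below is only a stand-in criterion, and the random generation is naive; the paper's actual construction method and design criteria are not reproduced here.

```c
/* Generic skeleton of a parallel candidate search for 8x32 s-boxes.
 * Compile with e.g.  cc -O2 -fopenmp sbox_search.c  (without OpenMP the pragma
 * is ignored and the search simply runs serially). rand_r is POSIX. */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

/* Stand-in acceptance test: every output bit close to balanced (128 +/- 16). */
static int is_acceptable(const uint32_t sbox[256]) {
    for (int bit = 0; bit < 32; bit++) {
        int ones = 0;
        for (int i = 0; i < 256; i++)
            ones += (sbox[i] >> bit) & 1;
        if (abs(ones - 128) > 16) return 0;
    }
    return 1;
}

int main(void) {
    const int trials = 100000;
    long accepted = 0;

    #pragma omp parallel for reduction(+:accepted)
    for (int t = 0; t < trials; t++) {
        uint32_t sbox[256];
        unsigned int state = 12345u + (unsigned int)t;   /* per-trial PRNG seed */
        for (int i = 0; i < 256; i++)
            sbox[i] = ((uint32_t)(rand_r(&state) & 0x7fff)) |
                      ((uint32_t)(rand_r(&state) & 0x7fff) << 15) |
                      ((uint32_t)(rand_r(&state) & 0x3) << 30);
        if (is_acceptable(sbox)) accepted++;
    }
    printf("accepted %ld of %d random candidates\n", accepted, trials);
    return 0;
}
```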
Proceedings 16th International Parallel and Distributed Processing Symposium, 2002
In this paper we propose an integrated hardware/software methodology for the implementation of dynamically reconfigurable cryptoalgorithms with good security properties on Field Programmable Gate Arrays (FPGAs): we start with a specific cryptoalgorithm implemented on the FPGA and then enable the modification of the one element of the algorithm whose change leaves its structure intact, the substitution boxes. Since the properties of the s-boxes largely determine the properties of the cryptoalgorithm, we provide the architecture of a CAST-128 based cryptoalgorithm whose s-box construction methodology results in s-boxes with good properties. We map this architecture onto ATMEL's innovative processor+FPGA chip FPSLIC™, where the cryptoalgorithm is implemented on the FPGA part while the s-box construction algorithm runs on the processor part (AVR). The result is a one-chip, fast, and secure cryptoalgorithm that can reconfigure itself at run time, either autonomously at regular time intervals or upon receipt of an external signal, in order to hinder or confuse suspected cryptanalytic efforts. [...] after more than two years of intensive scrutiny and testing of all the submissions, enabled the cryptography community to gain many helpful insights into the area of cryptosystem design and implementation as well as cryptanalysis. One of the assessment criteria for the AES competition...
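A minimal sketch of the reconfiguration policy described above (regenerate the s-boxes at regular intervals or upon an external signal); the functions standing in for the AVR/FPGA interface are hypothetical stubs that only print, not the FPSLIC implementation.

```c
/* Control-flow sketch only: regenerate s-boxes periodically or on a signal.
 * All interface functions below are hypothetical stand-ins (stubs). */
#include <stdio.h>
#include <stdbool.h>

static void generate_sboxes(void)          { printf("generating new s-boxes\n"); }
static void load_sboxes_into_fpga(void)    { printf("loading s-boxes into FPGA\n"); }
static bool external_signal_pending(void)  { return false; }   /* stub */
static bool interval_elapsed(unsigned tick, unsigned period) {
    return tick % period == 0;
}

int main(void) {
    const unsigned period = 4;                 /* reconfigure every 4 "ticks" */
    for (unsigned tick = 1; tick <= 10; tick++) {
        if (external_signal_pending() || interval_elapsed(tick, period)) {
            generate_sboxes();                 /* processor (AVR) side */
            load_sboxes_into_fpga();           /* cipher on the FPGA keeps its structure */
        }
    }
    return 0;
}
```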
IISA 2013, 2013
In this paper, we propose simple protocols for enabling two communicating agents that may have never met before to extract common knowledge out of any initial knowledge that each of them possesses. The initial knowledge from which the agents start may even be independent, implying that the two agents need not have had previous access to common information sources. In addition, the common knowledge extracted upon the termination of the protocols depends, in a fair way, on the (possibly independent) information items initially known, separately, by the two agents. It is fair in the sense that there is a negotiation between the two agents instead of one agent forcing the other to conform to its own knowledge. These protocols may be extended to support security applications where the establishment of common knowledge is required. Moreover, the implementation of the protocols leads to reasonably small code that can fit within resource-limited devices involved in any communication network while, at the same time, being efficient, as simulation results demonstrate.
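As a toy illustration only (not the paper's negotiation protocol): one generic way to make a derived value depend on both agents' items is to digest the canonically ordered union of their contributions, so that neither side's input or ordering dominates. The FNV-1a hash here is a stand-in for whatever one-way function an application would actually use.

```c
/* Toy "fair combination" of two agents' items into a common digest. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

static uint64_t fnv1a(uint64_t h, const char *s) {      /* 64-bit FNV-1a */
    while (*s) { h ^= (unsigned char)*s++; h *= 1099511628211ULL; }
    return h;
}

static uint64_t common_knowledge(const char *items[], int n) {
    const char *sorted[16];                   /* canonical (sorted) order */
    memcpy(sorted, items, n * sizeof *items);
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (strcmp(sorted[j], sorted[i]) < 0) {
                const char *t = sorted[i]; sorted[i] = sorted[j]; sorted[j] = t;
            }
    uint64_t h = 14695981039346656037ULL;     /* FNV offset basis */
    for (int i = 0; i < n; i++) h = fnv1a(h, sorted[i]);
    return h;
}

int main(void) {
    /* Independent initial knowledge of the two agents (arbitrary examples). */
    const char *all_items[] = { "alpha", "gamma",      /* agent A's items */
                                "beta",  "delta" };    /* agent B's items */
    printf("common knowledge digest: %016llx\n",
           (unsigned long long)common_knowledge(all_items, 4));
    return 0;
}
```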
Proceedings 1st International Conference on Algorithms and Architectures for Parallel Processing, 1995
This paper describes the implementation of a library of low-level image processing algorithms. The library is divided into two families of algorithms: those that apply to the spatial domain (local histogram equalization, local average filter, median filter, Sobel edge detector, and histogram evaluation) and those that apply to the frequency domain (forward and inverse discrete Fourier transform, amplitude of the forward discrete Fourier transform, forward and inverse discrete cosine transform, and Butterworth filters). The efficiency of these algorithms depends on the number of processors used, the method of combining results produced by different processors (e.g., sequentially or using a binary tree), and the time required for the combination of two independently produced results compared to the time required to produce them.
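A minimal sketch of one of the listed routines, histogram evaluation, structured as per-chunk (per-processor) work followed by a combination step, which is the cost the paper's efficiency analysis weighs against the per-chunk computation. The serial loop below only mimics that structure.

```c
/* Histogram evaluation split into chunks, with an explicit combination step. */
#include <stdio.h>

#define W 8
#define H 8
#define CHUNKS 4

int main(void) {
    unsigned char img[H][W];
    for (int y = 0; y < H; y++)                 /* synthetic test image */
        for (int x = 0; x < W; x++)
            img[y][x] = (unsigned char)((x + y) % 4);

    long partial[CHUNKS][256] = { 0 };
    int rows_per_chunk = H / CHUNKS;
    for (int c = 0; c < CHUNKS; c++)            /* per-chunk (per-processor) work */
        for (int y = c * rows_per_chunk; y < (c + 1) * rows_per_chunk; y++)
            for (int x = 0; x < W; x++)
                partial[c][img[y][x]]++;

    long hist[256] = { 0 };                     /* combination of partial results */
    for (int c = 0; c < CHUNKS; c++)
        for (int v = 0; v < 256; v++)
            hist[v] += partial[c][v];

    for (int v = 0; v < 4; v++)
        printf("value %d: %ld pixels\n", v, hist[v]);
    return 0;
}
```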
IEEE International Symposium on Consumer Electronics, 2004
Abstract not available.
Journal of Parallel and Distributed Computing, 1998
We introduce an average-case analysis of the search primitive operations (equality and thresholding) in associative memories. We provide a general framework for analysis, using as parameters the word space distribution and the CAM size parameters: m (number of memory words) and n (memory word length). Using this framework, we calculate the probability that the whole CAM memory responds to a search primitive operation after comparing up to k most significant bits (1 ≤ k ≤ n) in each word; furthermore, we provide a closed formula for the average value of k and the probability that there exists at least one memory word that equals the centrally broadcast word. Additionally, we derive results for the cases of uniform and exponential distribution of word spaces. We prove that in both cases the average value of k depends strongly on lg m when n > lg m: for the case of the uniform distribution, the average value is practically independent of n, while in the exponential case it depends weakly on the difference between the sample space size 2^n and the CAM size m. Furthermore, in both cases, the average k is approximately n when n ≤ lg m. Verification of our theoretical results through massive simulations on a parallel machine is presented. One of the main results of this work, that the average value of k can be much smaller than n, ...
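A toy Monte Carlo illustration under one plausible reading of k (the number of most significant bits that must be compared before every word's outcome is decided), for the uniform distribution. It is not the paper's derivation or code, but for n well above lg m the estimated average does hover near lg m.

```c
/* Estimate the average k for m uniformly random n-bit words and a uniformly
 * random broadcast word: a word is decided at its first differing bit, and a
 * word equal to the broadcast word needs all n bits, so k is the maximum
 * first-difference position over the m words (capped at n). */
#include <stdio.h>
#include <stdlib.h>

#define N 32          /* word length n */
#define M 1024        /* number of CAM words m (lg m = 10) */
#define TRIALS 2000

static unsigned int rand32(void) {             /* 32 random bits from rand() */
    return ((unsigned int)(rand() & 0x7fff)) |
           ((unsigned int)(rand() & 0x7fff) << 15) |
           ((unsigned int)(rand() & 0x3) << 30);
}

int main(void) {
    srand(1);
    double sum_k = 0.0;
    for (int t = 0; t < TRIALS; t++) {
        unsigned int broadcast = rand32();
        int k = 0;                                   /* bits needed for this trial */
        for (int i = 0; i < M; i++) {
            unsigned int w = rand32() ^ broadcast;   /* 1-bits mark differences */
            int first_diff = N;                      /* word equal to broadcast */
            for (int b = 0; b < N; b++)
                if (w & (1u << (N - 1 - b))) { first_diff = b + 1; break; }
            if (first_diff > k) k = first_diff;
        }
        sum_k += k;
    }
    printf("average k = %.2f (lg m = 10, n = %d)\n", sum_k / TRIALS, N);
    return 0;
}
```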