Robert Hiromoto | University of Idaho

Papers by Robert Hiromoto

Research paper thumbnail of Survey of parallel mapping schemes of the particle-in-cell method

Transactions of the American Nuclear Society, 1987

The particle-in-cell (PIC) method, used in the simulation of fluid dynamics and plasma interactions, is composed of two distinct computational phases that are each highly parallel, yet problematic in their integrated mapping to parallel processing systems. On a parallel distributed memory system, the different data structures prescribed by the push (the acceleration of particles) and solver (the solution of Poisson's equation) computational phases may require large amounts of data communication during the transition between these phases. The ratio of global to nearest-neighbor data communication is a function of the parallel decomposition, the computational algorithms (e.g., multigrid versus Fast Fourier Transform (FFT)) selected, and the characteristics of the physical simulation under investigation. The PIC codes implemented and described here are plasma simulation programs. Unfortunately, these codes differ in their computational details, making direct comparisons of their performances tenuous. On the other hand, the details of the different mapping schemes should be instructive in understanding the complexities of the PIC algorithm and the trade-offs that must be considered in their implementations.
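
For context, a minimal sketch of the two phases contrasted above, written for a 1-D electrostatic plasma on a periodic grid. This is an illustration only, not one of the surveyed codes: the function name `pic_step`, the nearest-grid-point deposit, and the FFT Poisson solve are choices made for brevity (the paper also discusses multigrid as an alternative solver).

```python
# Minimal 1-D electrostatic PIC step (illustrative only; not one of the
# surveyed codes).  It shows the two phases contrasted above: a grid-based
# "solver" phase (charge deposit + FFT Poisson solve) and a particle "push".
import numpy as np

def pic_step(x, v, q_over_m, L, n_cells, dt):
    """Advance particle positions x and velocities v by one time step dt."""
    dx = L / n_cells
    # --- solver phase: deposit charge on the grid and solve Poisson's equation ---
    cells = (x / dx).astype(int) % n_cells           # nearest-grid-point weighting
    rho = np.zeros(n_cells)
    np.add.at(rho, cells, 1.0)
    rho = rho / dx
    rho -= rho.mean()                                # neutralizing background
    k = 2.0 * np.pi * np.fft.fftfreq(n_cells, d=dx)  # wave numbers
    k[0] = 1.0                                       # placeholder; zero mode set below
    phi_hat = np.fft.fft(rho) / k**2                 # -d2(phi)/dx2 = rho  (eps0 = 1)
    phi_hat[0] = 0.0
    E = np.fft.ifft(-1j * k * phi_hat).real          # E = -d(phi)/dx
    # --- push phase: gather the field at particle positions and advance them ---
    v = v + q_over_m * E[cells] * dt
    x = (x + v * dt) % L                             # periodic domain
    return x, v
```

In a distributed-memory mapping, the deposit/gather steps favor a particle decomposition while the FFT favors a field decomposition, which is exactly the source of the global communication the abstract describes.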

Research paper thumbnail of Parallel Sn Iteration Schemes

Nuclear Science and Engineering, May 1, 1985

The iterative, multigroup, discrete ordinates (Sn) representation for the linear transport equation enjoys widespread computational use and popularity. Serial iteration schemes and numerical algorithms developed over the years provide a timely framework for parallel extension. On the Denelcor Heterogeneous Element Processor, three parallel iteration schemes (two chaotic, one ordered) are investigated for solving the one-dimensional Sn transport equation.
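
For reference, a standard textbook form of the equations the abstract refers to (one group, slab geometry; the paper treats the multigroup case, and its notation may differ):

```latex
% One-group, 1-D discrete ordinates (Sn) equations in slab geometry,
% with quadrature weights normalized so that \sum_m w_m = 2:
\mu_m \frac{d\psi_m(x)}{dx} + \sigma_t(x)\,\psi_m(x)
  = \frac{\sigma_s(x)}{2}\sum_{m'=1}^{N} w_{m'}\,\psi_{m'}(x) + \frac{q(x)}{2},
\qquad m = 1,\dots,N .
```

In an ordered source iteration the scattering sum on the right is evaluated with the previous iterate's angular fluxes before the next set of sweeps; roughly speaking, the chaotic schemes studied in the paper relax that strict ordering and let updates proceed with whatever iterate values are currently available.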

Research paper thumbnail of On the abstracted dataflow complexity of Fast Fourier Transforms

OSTI OAI (U.S. Department of Energy Office of Scientific and Technical Information), May 1, 1992

Research paper thumbnail of The architecture of a reliable software monitoring system for embedded software systems

We develop the notion of a measurement-based methodology for embedded software systems to ensure properties of reliability, survivability, and security, not only under benign faults but under malicious and hazardous conditions as well. The driving force is the need to develop a dynamic run-time monitoring system for use in these embedded mission-critical systems. These systems must run reliably, must be secure, and must fail gracefully. That is, they must continue operating in the face of departures from their nominal operating scenarios, the failure of one or more system components due to normal hardware and software faults, as well as malicious acts. To ensure the integrity of embedded software systems, the activity of these systems must be monitored as they operate. For each of these systems, it is possible to establish a very succinct representation of nominal system activity. Furthermore, it is possible to detect departures from the nominal operating scenario in a timely fashion. Such departures may be due to various circumstances, e.g., an assault from an outside agent that forces the system to operate in an off-nominal environment for which it was neither tested nor certified, or a hardware/software component that has ceased to operate in a nominal fashion. A well-designed system will have the property of graceful degradation. It must continue to run even though some of its functionality may have been lost. This involves the intelligent remapping of system functions. Those functions that are impacted by the failure of a system component must be identified and isolated. Thus, a system must be designed so that its basic operations may be remapped onto system components that are still operational. That is, the mission objectives of the software must be reassessed in terms of the current operational capabilities of the software system. By integrating the mechanisms to support observation and detection directly into the design methodology, we propose to shift away from the currently applied paradigm of addressing reliability, security, and survivability in an add-on fashion at the end of the software development process. Rather, the integrity monitoring ability will be integrated into the overall architecture of the software system. The measurement and control methodology developed under this research program will readily migrate into hardware, leading to the development of new hardware architectures with built-in survivability, security, and reliability attributes.
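
As a rough illustration of the idea of a succinct nominal profile and timely detection of departures, the sketch below compares a sliding window of recent events against a nominal frequency profile. The profile construction, the Euclidean drift measure, the window size, and the threshold are all illustrative assumptions, not the methodology developed in this work.

```python
# Illustrative sketch of profile-based run-time monitoring (not the
# methodology developed in the paper): nominal behaviour is summarized as a
# frequency profile over observed event IDs, and a sliding window of recent
# events is flagged when its profile drifts too far from nominal.
from collections import Counter
import math

def build_profile(events, alphabet):
    counts = Counter(events)
    total = max(len(events), 1)
    return {e: counts[e] / total for e in alphabet}

def drift(window, nominal, alphabet):
    # Euclidean distance between the window's profile and the nominal profile.
    current = build_profile(window, alphabet)
    return math.sqrt(sum((current[e] - nominal[e]) ** 2 for e in alphabet))

def monitor(stream, nominal, alphabet, window_size=100, threshold=0.2):
    window = []
    for i, event in enumerate(stream):
        window.append(event)
        if len(window) > window_size:
            window.pop(0)
        if len(window) == window_size and drift(window, nominal, alphabet) > threshold:
            yield i  # index at which off-nominal behaviour is flagged
```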

Research paper thumbnail of Set theoretic estimation applied to the information content of ciphers and decryption

Set Theoretic Estimation (STE) has been known and applied to various problems since 1969. Traditionally, STE has been used to solve vector problems in a Hilbert space, using a distance metric to create a volume in that space. Given this type of space structure, Optimal Bounding Ellipsoid (OBE) algorithms are typically used to simplify STE estimate processing; however, OBEs are not bounded by the original estimate volume defined by the STE problem. At times the OBE algorithm includes spurious estimates in the new volume that may incorrectly expand the solution set. In contrast to prior implementations of STE, the algorithms used in this dissertation are based in a topological space. Errors may still be present in the selected property sets, but these sets are used only as a priori information that is static and does not expand (change). Therefore, by choosing a topological space, the problems associated with OBEs are avoided. For the first time, STE has been applied to decryption, and STE's effectiveness is demonstrated. The problem of diffusion across byte boundaries is addressed by assuming that a block of symbols (meta-s-characters) is encrypted as a block. Language patterns are not obscured in meta-s-characters because of the constraint of the fixed block size. Furthermore, it is shown that all block ciphers are block substitution ciphers. Since all block ciphers are substitution ciphers, a single attack is effective against substitution, permutation, and block ciphers composed of combinations of substitution and permutation ciphers. The BCBB algorithm, which decrypts block substitution ciphers, is introduced and the results of its application are presented. Property sets designed to complement each other are presented for the decryption problem. Further, it is shown that the property sets possess the properties of the Asymptotic Equipartition Property (AEP). Via the AEP, it is shown that Information Theory comes under the umbrella of STE.

Research paper thumbnail of Analysis of a Monte Carlo boundary propagation method

Computers & mathematics with applications, Mar 1, 1996

A modified Monte Carlo technique, first developed for estimating a solution to Poisson's equation, is described and estimates of its computational complexity are derived. The method yields better estimates than the standard Monte Carlo approach by incorporating boundary information more efficiently and by the implicit reuse of random walk information gathered throughout the course of the computation. The new approach reduces the computational complexity of the length of a random walk by one order of magnitude as compared to a standard method described in many textbooks. Also, the number of walks necessary to achieve a desired accuracy is reduced.
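
For orientation, the "standard method described in many textbooks" that the abstract compares against is the fixed random walk estimator for the discrete Dirichlet problem. A minimal sketch of that baseline follows (the grid size, boundary data, and walk count are arbitrary; the boundary-propagation method itself is not reproduced here).

```python
# Sketch of the standard textbook Monte Carlo estimator that the paper's
# modified method is compared against (not the boundary-propagation method
# itself): estimate u(x0, y0) for Laplace's equation on a grid by averaging
# the boundary values hit by symmetric random walks started at (x0, y0).
import random

def walk_on_grid_estimate(x0, y0, n, boundary_value, n_walks=2000, rng=None):
    # Interior points are 1..n-1 on an n x n grid; the boundary is where
    # x or y equals 0 or n.  boundary_value(x, y) supplies the Dirichlet data.
    rng = rng or random.Random(0)
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    total = 0.0
    for _ in range(n_walks):
        x, y = x0, y0
        while 0 < x < n and 0 < y < n:
            dx, dy = rng.choice(steps)
            x, y = x + dx, y + dy
        total += boundary_value(x, y)
    return total / n_walks

# Example: the harmonic function u(x, y) = x/n on a 20 x 20 grid; the
# estimate at the centre should be close to 0.5.
u_center = walk_on_grid_estimate(10, 10, 20, lambda x, y: x / 20.0)
```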

Research paper thumbnail of High-quality, interactive, three-dimensional character animation for the internet

The storytelling delivery mechanism of the future is certainly the Internet, due to its interactive and interconnectivity capabilities. However, Internet bandwidth is the single most constrictive aspect of real-time, high-quality, interactive character animation. The goal of this research is to make the smallest amount of data represent the largest spectrum of character definition and movement. We suggest that surfaces must be exact (versus polygonal approximations), that movement must be abstracted (rather than geometry-specific), and that computational power should be balanced against bandwidth (i.e., computationally expensive compression and decompression should occur at the ends of the Internet pipeline to reduce traffic).

Research paper thumbnail of 3D Animation Streaming

Research paper thumbnail of Exploring parallel algorithms having no serial analogues

Research paper thumbnail of Novel Innovations that Failed to Improve Weak PRNGs

2022 13th International Conference on Computing Communication and Networking Technologies (ICCCNT)

Research paper thumbnail of Novel Innovations for Improving the Quality of Weak PRNGs

2022 13th International Conference on Computing Communication and Networking Technologies (ICCCNT)

Research paper thumbnail of The Problem with Regular Multiple Byte Block Boundaries in Encryption

2022 IEEE 13th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON)

Research paper thumbnail of An Internet of Drone-based Multi-version Post-severe Accident Monitoring System: Structures and Reliability

Dependable IoT for Human and Industry, Sep 1, 2022

Research paper thumbnail of Equivalence of Product Ciphers to Substitution Ciphers and their Security Implications

2022 International Symposium on Networks, Computers and Communications (ISNCC)

Research paper thumbnail of An Introduction to Local Entropy and Local Unicity

2022 International Symposium on Networks, Computers and Communications (ISNCC)

As introduced by Shannon in “Communication Theory of Secrecy Systems”, entropy and unicity distance are defined at a global level, under the assumption that the properties of symbols resemble those of independent random variables. However, when entropy and unicity are applied to a language, e.g., in encryption and decryption, the symbols (letters) of the language are not independent. Thus, we introduce a new measure, HL(s), called the “local entropy” of a string s. HL(s) includes a priori information about the language and text at the time of application. Since the unicity distance depends on the entropy (entropy is the basis of the unicity-distance calculation), local entropy leads to a local unicity distance for a string. Our local entropy measure explains why some texts are susceptible to decryption using fewer symbols than predicted by Shannon’s unicity while other texts require more. We demonstrate local entropy using a substitution cipher along with the results for an algorithm based on this principle, and show that Shannon’s unicity is an average measure rather than a lower bound; this motivates a discussion of the implications of local entropy and local unicity distance.
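
To fix notation, the global quantities that the paper localizes are Shannon's entropy and unicity distance. The sketch below shows only these baseline definitions; the paper's HL(s), which folds in a priori language information, is not reproduced here.

```python
# Baseline (global) Shannon quantities that the paper localizes.  This is a
# sketch of the standard definitions only; the local entropy HL(s) of the
# paper is not shown.
import math
from collections import Counter

def empirical_entropy(s):
    # H = -sum_x p(x) log2 p(x), with p estimated from symbol frequencies in s.
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def unicity_distance(key_entropy_bits, alphabet_size, language_entropy_per_symbol):
    # Shannon's unicity distance U = H(K) / D, where the per-symbol
    # redundancy is D = log2|A| - H_language.
    redundancy = math.log2(alphabet_size) - language_entropy_per_symbol
    return key_entropy_bits / redundancy

# Example: a 26-letter substitution cipher has H(K) = log2(26!) ~ 88.4 bits;
# with English at roughly 1.5 bits/letter, U ~ 88.4 / (log2(26) - 1.5) ~ 28 letters.
```

Because these are averages over the language, individual ciphertexts can fall on either side of U, which is the gap the local measures are meant to explain.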

Research paper thumbnail of The Art and Science of GPU and Multi-Core Programming

International Journal of Computing, Aug 1, 2014

This paper examines the computational programming issues that arise from the introduction of GPUs and multi-core computer systems. The discussions and analyses examine the implications of two principles (spatial and temporal locality) that provide useful metrics to guide programmers in designing and implementing efficient sequential and parallel application programs. Spatial and temporal locality represent a science of information flow and are relevant in the development of highly efficient computational programs. The art of high-performance programming is to take combinations of these principles, unravel the bottlenecks and latencies associated with each manufacturer's computer architecture, and develop appropriate coding and/or task-scheduling schemes to mitigate or eliminate these latencies.
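
A small illustration of the two principles, sketched here in Python/NumPy (on GPUs the same ideas appear as coalesced memory access and shared-memory tiling): contiguous traversal exploits spatial locality, and blocking a matrix product exploits temporal locality. The array shapes, block size, and function names are illustrative choices, not taken from the paper.

```python
# Toy illustration of spatial and temporal locality.
import numpy as np

def row_major_sum(a):
    # Spatial locality: traverse the array in the order it is laid out in
    # memory (row by row), so consecutive accesses touch consecutive addresses.
    total = 0.0
    rows, cols = a.shape
    for i in range(rows):
        for j in range(cols):
            total += a[i, j]
    return total

def column_major_sum(a):
    # Poor spatial locality: the same arithmetic, but the stride between
    # consecutive accesses is a full row, defeating cache-line reuse.
    total = 0.0
    rows, cols = a.shape
    for j in range(cols):
        for i in range(rows):
            total += a[i, j]
    return total

def blocked_matmul(a, b, block=64):
    # Temporal locality: tiling keeps small blocks of a and b resident in
    # cache while they are reused many times, instead of streaming whole rows.
    n = a.shape[0]
    c = np.zeros((n, n))
    for ii in range(0, n, block):
        for kk in range(0, n, block):
            for jj in range(0, n, block):
                c[ii:ii+block, jj:jj+block] += (
                    a[ii:ii+block, kk:kk+block] @ b[kk:kk+block, jj:jj+block]
                )
    return c
```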

Research paper thumbnail of On the Generation of Exact Solutions to the Einstein Field Equations for Homogeneous Space-Time

Research paper thumbnail of A Design for a Cryptographically Secure Pseudo Random Number Generator

2019 10th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS), 2019

We propose the design of a cryptographically secure pseudo random number generator (CSPRNG) that involves the permutation of the internal state of the PRNG. The design of the CSPRNG is described, and Dieharder test comparisons are made against AES. The results of these tests are described, and a discussion of future work concludes the paper.
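
The toy class below only illustrates the structural idea of permuting a generator's internal state between outputs; it is not the paper's CSPRNG design and is emphatically not cryptographically secure. The word size, state length, mixing function, and constants are arbitrary choices for the sketch.

```python
# Toy illustration of permuting a generator's internal state between outputs.
# NOT the paper's CSPRNG and NOT cryptographically secure.
import random

class PermutingStatePRNG:
    def __init__(self, seed, state_words=16):
        rng = random.Random(seed)
        self.state = [rng.getrandbits(32) for _ in range(state_words)]
        self.perm_rng = random.Random(seed ^ 0xA5A5A5A5)

    def _permute_state(self):
        # Reorder the state words with a freshly drawn permutation.
        self.perm_rng.shuffle(self.state)

    def next_word(self):
        # Mix the state words into one 32-bit output ...
        out = 0
        for i, w in enumerate(self.state):
            s = i % 7
            out ^= ((w << s) | (w >> (32 - s))) & 0xFFFFFFFF
        # ... then permute and advance the state before the next output.
        self._permute_state()
        self.state = [(w * 1664525 + 1013904223) & 0xFFFFFFFF for w in self.state]
        return out
```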

Research paper thumbnail of Breaking Block and Product Ciphers

International Journal of Computing, 2014

The security of block and product ciphers is considered using a set theoretic estimation (STE) approach to decryption. Known-ciphertext attacks are studied using permutation (P) and substitution (S) keys. The blocks are formed from two (2) alphabetic characters (meta-characters), where the applications of P and S upon the ASCII byte representation of each of the two characters are allowed to cross byte boundaries. The application of STE forgoes the need to employ chosen-plaintext or known-plaintext attacks.
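
The observation that a product cipher acting on a fixed-size block is itself just a substitution on the block alphabet can be made concrete on a toy 16-bit (two-byte) block: composing a byte substitution with a bit permutation that crosses byte boundaries induces a single table from 16-bit values to 16-bit values. The keys and block size below are illustrative, not the paper's experimental setup.

```python
# Toy demonstration that a product cipher over a fixed 16-bit block is a
# block substitution: the composition induces one bijective table on the
# block alphabet.  (Illustrative keys only.)
import random

rng = random.Random(0)

sbox = list(range(256))          # S key: a substitution on single bytes
rng.shuffle(sbox)

bitperm = list(range(16))        # P key: a permutation of the 16 bit positions
rng.shuffle(bitperm)

def substitute(block16):
    hi, lo = block16 >> 8, block16 & 0xFF
    return (sbox[hi] << 8) | sbox[lo]

def permute_bits(block16):
    out = 0
    for dst, src in enumerate(bitperm):
        out |= ((block16 >> src) & 1) << dst
    return out

def product_cipher(block16):
    return permute_bits(substitute(block16))

# The induced block substitution: one table covering the whole 16-bit alphabet.
induced = [product_cipher(b) for b in range(1 << 16)]
assert sorted(induced) == list(range(1 << 16))   # it is a bijection on blocks
```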

Research paper thumbnail of Method to Branch and Bound Large SBO State Spaces

Traditional Probabilistic Risk Assessment (PRA) methods have been developed and are quite effective in evaluating risk associated with complex systems. However, traditional PRA methods lack the capability to evaluate complex dynamic systems where end states may vary as a function of the transition time from physical state to physical state. A station blackout scenario associated with loss of heat sink, for example, may result in different outcomes depending on the timing of events, the amount of energy stored within the reactor, and the time at which the blackout occurs following reactor scram. These time and energy scales associated with the transient may vary as a function of the transition time to a different physical state. Dynamic PRA (DPRA) methods provide a more rigorous analysis of complex dynamic systems. Methods for solving DPRA include the theory of Continuous Event Trees (CET), Discrete Dynamic Event Trees (DDET), and Cell-to-Cell Mapping Techniques (CCMT). Each method has its advantages and disadvantages for solving complex dynamic systems. A limitation of DPRA is its potential for state or combinatorial explosion, which grows as a function of the number of components as well as the sampling of transition times from state to state of the entire system. Specifically, the theory of CET involves the solution of a partial differential equation that results in a probability density function (PDF) related to state variables of interest such as thermal-hydraulic properties (e.g., temperature, energy, pressure). The PDE is similar to the Boltzmann transport equation, where the PDF of state properties is being transported. The PDF contains the information of the state. As the number of state variables increases, the complexity of the problem also grows significantly. In order to address the combinatorial complexity arising from the number of possible state configurations and the discretization of transition times, a characteristic scaling metric (LENDIT: length, energy, number, distribution, information, and time) is proposed as a means to describe systems uniformly and thus to describe the relational constraints expected in the dynamics of complex (coupled) systems. Thus, when LENDIT is used to characterize four sets, 'state, system, resource and response' (S2R2), describing reactor operations (normal and off-normal), LENDIT and S2R2 in combination have the potential to 'branch and bound' the state space investigated by DPRA. LENDIT and S2R2 provide a consistent methodology across both deterministic and heuristic domains. This paper explores the reduction of a PWR's state space using a RELAP5 model of a plant subject to an SBO. We introduce LENDIT metrics and S2R2 sets to show the branch-and-bound concept.
