Redundancy Research Papers - Academia.edu

Keystone plants that produce seasonally critical trophic resources comprise one of the main classes of keystone species, yet no studies have attempted to examine the ecological attributes that might help us recognize them and evaluate their importance in species-rich plant assemblages. In this paper the concept of keystone plant resources is reviewed using potential candidates proposed in the literature for neotropical forest sites. A poorly known example of a potential keystone resource (the gums produced by mature pods of two emergent tree species, Parkia nitida and P. pendula, Leguminosae: Mimosoideae) is described for primates and other arboreal vertebrates in Amazonian forests. In particular, the fruiting phenology, tree density, patterns of vertebrate consumption, and nutritional quality of Parkia gums in Amazonian terra firme forests are considered. Putative neotropical keystone resources are then divided into four intersecting ecological attributes defining their community-wide importance to vertebrate frugivores: (1) temporal redundancy, (2) degree of consumer specificity, (3) reliability, and (4) abundance. From a vertebrate perspective, keystone plants are here defined as those producing reliable, low-redundancy resources that are consumed by a large proportion of the bird and mammal assemblage with which they coexist. Plant populations proposed to date as keystone species range widely across two of these four variables, which may disqualify most putative taxa (including Parkia spp.) from a more formal definition of keystone resources. Other importance attributes of the keystone plant resource concept as applied to tropical forests, including its context-dependent role, taxonomic refinement, and removal effects, are also discussed.

Multiple description (MD) coding is source coding in which several descriptions of the source are produced such that various reconstruction qualities are obtained from different subsets of the descriptions. Unlike multiresolution or layered source coding, there is no hierarchy of descriptions; thus, MD coding is suitable for packet erasure channels or networks without priority provisions. Generalizing work by Orchard, Wang, Vaishampayan, and Reibman, a transform-based approach is developed for producing M descriptions of an n-tuple source. The descriptions are sets of transform coefficients, and the transform coefficients of different descriptions are correlated so that missing coefficients can be estimated. Several transform optimization results are presented for memoryless Gaussian sources, including a complete solution of the n = 2, M = 2 case with arbitrary weighting of the descriptions. The technique is effective only when independent components of the source have differing variances. Numerical studies show that this method performs well at low redundancies, as compared to uniform MD scalar quantization.
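The core idea — correlate the transform coefficients so a lost description can be estimated from the surviving one — can be sketched numerically. This is not the paper's actual transform design; the 2x2 pairing transform, the variances, and the linear estimator below are illustrative assumptions.

```python
import random

random.seed(1)
s1, s2 = 2.0, 0.5   # differing component std devs (the method requires this)
n = 20000

# correlation factor between the two descriptions y1 = x1 + x2, y2 = x1 - x2
rho = (s1**2 - s2**2) / (s1**2 + s2**2)

mse_est = mse_zero = 0.0
for _ in range(n):
    x1, x2 = random.gauss(0, s1), random.gauss(0, s2)
    y1, y2 = x1 + x2, x1 - x2            # two correlated descriptions
    # description 2 lost: estimate y2 from y1, then invert the transform
    y2_hat = rho * y1
    x1_hat, x2_hat = (y1 + y2_hat) / 2, (y1 - y2_hat) / 2
    mse_est += (x1 - x1_hat) ** 2 + (x2 - x2_hat) ** 2
    # naive receiver that simply zeroes the missing coefficient
    mse_zero += (x1 - y1 / 2) ** 2 + (x2 - y1 / 2) ** 2

print(mse_est / n, mse_zero / n)   # the correlation-based estimate does better
```

With equal variances rho would be zero and the estimate would degenerate to the naive receiver, which is exactly the "effective only when independent components have differing variances" point in the abstract.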

This paper describes the development of a VLSI device that provides memory system self-testing and redundancy without incurring the overhead penalties of error-correction coding or page-swapping techniques. This device isolates hard errors in system memory by writing a true and complement pattern to each system memory location. Locations from an on-chip fully associative cache are then mapped into the address space in place of faulty locations. Since substitutions take place at the memory word level, this method is more efficient than page swapping. Access to the on-chip cache occurs in parallel with access to system memory, so memory access time is not increased, as it is with error detection and correction (EDAC). Analysis shows that this device can extend the mission time of a nonredundant memory system by as much as 35 times.
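The true/complement test isolates stuck bits because a bit stuck at 0 or 1 must disagree with exactly one of the two patterns. A minimal software sketch (the `read`/`write` memory interface and the fault model are hypothetical, standing in for the real memory bus):

```python
def find_hard_errors(mem, n_words, width=8):
    """Locate words with stuck bits by writing a pattern and its complement."""
    mask = (1 << width) - 1
    pattern = 0b01010101 & mask
    bad = []
    for addr in range(n_words):
        mem.write(addr, pattern)
        first = mem.read(addr)
        mem.write(addr, pattern ^ mask)        # complement pattern
        second = mem.read(addr)
        # a stuck bit fails on exactly one of the two reads
        if first != pattern or second != (pattern ^ mask):
            bad.append(addr)
    return bad

class FaultyRAM:
    """Toy RAM with bit 2 stuck at 1 in word 3."""
    def __init__(self, n): self.cells = [0] * n
    def write(self, a, v): self.cells[a] = v
    def read(self, a): return self.cells[a] | (0x04 if a == 3 else 0)

print(find_hard_errors(FaultyRAM(8), 8))   # → [3]
```

In the device described above, the addresses returned by such a scan are the ones remapped to the on-chip associative cache, which is why substitution happens at word rather than page granularity.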

Catastrophic events, including those sparked by terrorism, have cast a pall over society in recent years. Events such as September 11th or the bombing of the Alfred P. Murrah Federal Building in Oklahoma City demonstrated a sudden surge in telephone call volume. These events indicated that communications systems, including cellular, can fail as a result of damage or capacity overload. Cellular systems have significant capacity given the limited radio spectrum allocated, but much like airlines that oversell seats because inevitably some passengers do not show up, cellular providers typically have more customers than the operational spectrum can handle in aggregate. Normally this is perfectly acceptable, because a high percentage of the customer base would never be making calls simultaneously, except in an emergency. Such emergencies can be active or passive in nature. The release of dangerous airborne agents is one such scenario. The purpose of this study was to use a portable air-particulate sensor as a data acquisition platform to evaluate the capability of real-time data throughput using existing wireless topologies, Java, eXtensible Markup Language (XML), and Internet protocols when cellular transmission frequencies are compromised. Results show that high levels of cellular voice traffic in a given area can hamper simultaneous data communications; however, Low Earth Orbiting Satellites (LEOS) can provide redundancy.

Test generation for combinational circuits, an NP-complete problem, shows its worst-case behavior while trying to generate tests for redundant faults and failing after exhaustive search. The performance of an automatic test generator can be significantly improved by identifying redundancy via simple techniques which do not involve search. This paper presents a new technique to identify redundant faults. This technique works dynamically during test generation but is not based on a search process. It relies on test-covering relations among faults, which allow identification of additional redundant faults after the test generator fails to generate a test for a target fault. This technique has been implemented in AT&T's Testpilot test generation system and has shown a reduction of up to 32% in the number of backtracks in test generation runs.

I. INTRODUCTION

Automatic test generation (ATG), a known NP-complete search problem [1], requires a large computational effort for large circuits. The amount of search done by an ATG algorithm can be measured by the number of backtracks performed. Good heuristics help in reducing the number of backtracks, to the point where most detectable faults are detected without any or with only a few backtracks [2], [3]. However, any ATG algorithm still exhibits its worst-case behavior when dealing with redundant (undetectable) faults. This can be explained by analyzing how ATG handles redundant faults. Redundant faults do not have any tests. Traditional ATG algorithms, such as the D-Algorithm [4], PODEM [5], and FAN [6], are complete algorithms; i.e., given enough time, they try all possibilities to generate a test for a target fault, and when they fail after implicit exhaustive enumeration, they declare that fault to be redundant. Thus, ATG can also perform redundancy identification (RID). This type of RID is indirect, since it is just a by-product of ATG. Any method that speeds up ATG will also speed up indirect RID [2].
A direct RID technique does not involve ATG-type search. The direct RID techniques used in logic synthesis, for example [7], are not efficiently applicable in an existing structural ATG environment, because they involve function extraction, logic minimization, and modification of logic. Since indirect RID is such a computationally intensive problem, identification of any redundant fault without exhaustive search would lead to savings in computations. Existing direct RID techniques are static; i.e., they work as a preprocessing step to ATG.

In this work a novel use of High-availability Seamless Redundancy (HSR) in the Train Communication Network (TCN) is presented. The 10-year-old TCN standard is currently being renewed toward an Ethernet-based solution, but some special restrictions must be satisfied for such a critical application. The HSR standard can help provide the redundancy required in train communications.

Fault-based side-channel cryptanalysis is very effective against symmetric and asymmetric encryption algorithms. Although straightforward hardware- and time-redundancy-based Concurrent Error Detection (CED) architectures can be used to thwart such attacks, they entail significant overhead (either area or performance). In this paper we investigate systematic approaches to low-cost, low-latency CED for the Rijndael symmetric encryption algorithm. These approaches exploit the inverse relationship that exists between Rijndael encryption and decryption at various levels and develop CED architectures that explore the trade-off between area overhead, performance penalty, and error detection latency. The proposed techniques have been validated on FPGA implementations.
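The inverse-relationship check can be sketched with a toy cipher: run the inverse over the output and compare with the input, so any transient fault in the datapath surfaces as a mismatch. The "round" below is a made-up invertible function standing in for a Rijndael round; only the CED pattern itself is the point.

```python
def toy_round(block, key):
    """Toy invertible 'round' (XOR then rotate-left-1 on a byte)."""
    t = block ^ key
    return ((t << 1) | (t >> 7)) & 0xFF

def toy_round_inv(block, key):
    """Exact inverse: rotate-right-1, then XOR."""
    t = ((block >> 1) | (block << 7)) & 0xFF
    return t ^ key

def encrypt_with_ced(block, keys, inject_fault=False):
    state = block
    for k in keys:
        state = toy_round(state, k)
    if inject_fault:
        state ^= 0x10                      # transient fault in the datapath
    # CED: run the inverse path and compare against the original input
    check = state
    for k in reversed(keys):
        check = toy_round_inv(check, k)
    return state, check == block           # (ciphertext, fault-free flag)

keys = [0x3C, 0xA7, 0x51]
_, ok = encrypt_with_ced(0xB2, keys)
_, bad = encrypt_with_ced(0xB2, keys, inject_fault=True)
print(ok, bad)   # → True False
```

Because each round is a bijection, any single-bit fault necessarily decrypts to something other than the input, which is what makes the comparison a reliable detector; the paper's contribution is doing this at round, operation, and algorithm granularity with low area and latency cost.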

Purpose: Posterior transverse plication (PTP) has gained popularity as a technique to correct redundancy of the internal carotid artery during endarterectomy. The safety of this technique in large series of patients has not been extensively studied. We investigated 876 primary carotid endarterectomies (CEAs) performed at our institution over the last six years to determine the safety of this technique.

This research on the redundancy of the Environmental Impact Assessment (EIA) process for the steel industry located in industrial estates, in accordance with relevant laws, studies possible process approaches for preparing the EIA report of steel industry plants located in industrial estates. A mixed method of qualitative and quantitative research was used, comprising in-depth interviews, Analytic Hierarchy Process (AHP) analysis, and focus groups. The results show redundancy among the laws of three Acts on six issues, namely effluent management, industrial waste management, air quality management, water pollution supervision, the EIA process, and compliance-measure reporting. Two possible approaches for preparing the EIA report are proposed as entrepreneurial choices; both approaches require preparing a document or Memorandum of Understanding (MOU). The EIA process for the steel industry located in industrial estates should focus on significant environmental impacts (called an Environmental Significant Statement, ESS) that meet the criteria and requirements of the EIA report of the industrial estate.

Reliable delivery of messages is an important problem that needs to be addressed in distributed systems. In this paper we present our strategy to enable reliable delivery of messages in the presence of link and node failures. This is facilitated by a specialized repository node. We then present our strategy to make this scheme even more failure resilient, by incorporating support for repository redundancy. Each repository functions autonomously. The scheme enables updates to the redundancy scheme depending on the failure resiliency requirements. If there are N available repositories, reliable delivery guarantees will be met even if N-1 repositories fail.
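The N-repository guarantee follows because a message is replicated to every autonomous repository, so delivery survives as long as any one of them does. A minimal sketch, with a hypothetical `Repository` interface (`alive`/`store`) standing in for the paper's repository nodes:

```python
def store_with_redundancy(message, repositories):
    """Replicate the message to every reachable repository.

    Delivery guarantees hold while at least 1 of the N repositories
    survives, i.e. up to N-1 failures are tolerated.
    """
    stored = [r for r in repositories if r.alive and r.store(message)]
    if not stored:
        raise RuntimeError("all repositories failed")
    return len(stored)

class Repository:
    """Toy autonomous repository node."""
    def __init__(self, alive=True):
        self.alive, self.log = alive, []
    def store(self, msg):
        self.log.append(msg)
        return True

# two of three repositories down: the message still gets through
repos = [Repository(), Repository(alive=False), Repository(alive=False)]
copies = store_with_redundancy("msg-42", repos)
print(copies)   # → 1
```

The scheme in the paper additionally lets the replication degree be tuned at runtime to match the required failure resiliency, which here would amount to choosing how many repositories appear in the list.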

We introduce a new general framework for the recognition of complex visual scenes, which is motivated by biology: We describe a hierarchical system that closely follows the organization of visual cortex and builds an increasingly complex and invariant feature representation by alternating between a template matching and a maximum pooling operation. We demonstrate the strength of the approach on a range of recognition tasks: From invariant single object recognition in clutter to multiclass categorization problems and complex scene understanding tasks that rely on the recognition of both shape-based as well as texture-based objects. Given the biological constraints that the system had to satisfy, the approach performs surprisingly well: It has the capability of learning from only a few training examples and competes with state-of-the-art systems. We also discuss the existence of a universal, redundant dictionary of features that could handle the recognition of most object categories. In addition to its relevance for computer vision, the success of this approach suggests a plausibility proof for a class of feedforward models of object recognition in cortex.

In many packet-based communication systems such as TCP/IP-based systems, packets are communicated over a noisy physical layer (a channel), and if a packet cannot be decoded correctly, then the transport layer retransmits it. Of course, retransmissions consume significant resources and their use should be limited. However, decreasing the likelihood of retransmission requires encoding the packets with strong channel codes in the physical layer, which also requires additional channel resources. In this paper, we study the cross-layer tradeoff between coding and packet retransmissions, and optimize over the total channel resource consumption. We show that as the packet length k increases, the redundancy r beyond the k/C channel uses implied by Shannon's channel capacity C is Theta(sqrt(k ln k)) extra channel uses. Moreover, as k increases we must use stronger channel codes. We then apply these results to universal coding over a piecewise memoryless channel with transitions between unknown i.i.d. statistics. Our constructive universal algorithm has redundancy r = O(k^(2/3) sqrt(ln k)) using packets of polynomially increasing lengths while accounting for possible packet drops caused by transitions in the statistics.

Metadata records made available by content providers often lack the implicit information of their original use environment. Metadata aggregators therefore tend to emphasize completeness as a primary quality for shareable metadata. However, when adding implicit information to item-level records, data providers increase the redundancy of information contained in records from the same collection. The present paper reports on an effort to assess the extent and potential impact of information redundancy in metadata collections aggregated using the Open Archives Protocol for Metadata Harvesting. The first experiment quantifies the resemblance of metadata records on a collection-by-collection basis across 176 metadata collections aggregated for the CIC metadata portal. A second experiment measures the tendency of items from the same collection to appear together in results lists generated for a set of user queries. Results of the analyses correlate and suggest that within some collections item-level metadata records are not sufficiently differentiated to support certain digital library functions well. Metadata collections have a distinct role when included in larger aggregations, and in that role a minimum level of descriptive granularity is required to support digital library functions implemented by service providers. The experiments suggest possible ways to deal simultaneously with metadata record completeness, consistency, and redundancy.
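The abstract does not specify the resemblance measure used in the first experiment; a common stand-in for quantifying how interchangeable two records are is Jaccard overlap on their word sets, which is enough to see why boilerplate added collection-wide makes item records hard to differentiate:

```python
def resemblance(rec_a, rec_b):
    """Jaccard resemblance of two metadata records over their word sets
    (an illustrative proxy for the paper's record-similarity measure)."""
    a, b = set(rec_a.lower().split()), set(rec_b.lower().split())
    return len(a & b) / len(a | b)

# hypothetical item-level records from the same and a different collection
r1 = "Aerial photograph of campus, Urbana, Illinois, 1955"
r2 = "Aerial photograph of campus, Urbana, Illinois, 1956"
r3 = "Oral history interview with a faculty member"

print(resemblance(r1, r2), resemblance(r1, r3))
```

Records like r1 and r2, differing only in a date, score near 1.0; in a results list they bunch together and add little discriminating information, which is the redundancy effect the second experiment measures.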

This paper presents the current state-of-the-art of residual generation techniques adopted in instrument fault detection and isolation. Both traditional and innovative methods are described with their advantages and their limits. The improvement of analytical redundancy technique performances for better dealing with high-dynamics systems and/or with online applications is pointed out as the most interesting need to focus the research efforts.

Software watermarking is a new research area that aims at providing copyright protection for commercial software. It minimizes software piracy by hiding copyright signatures inside the program code or its runtime state. Prior proposals hide the watermarks in dummy data structures, e.g., linked lists and graphs that are created during the execution of the hosting software for this reason. This makes them vulnerable to subtractive attacks, because the attacker can remove the data structure without altering the operation or the semantics of the software program. In this regard, we argue that hiding watermarks in one or more data structures that are used by the program would make the watermark more robust, because removing the watermark would alter the semantics and the operations of the underlying software. However, the challenge is that the insertion of the watermark should have a minimal effect on the operations and performance of the data structure. This paper proposes a novel method for watermarking the R-tree data structure and its variants. The proposed watermarking scheme takes advantage of the redundancy in the way the entries within R-tree nodes are ordered. R-trees do not require ordering the entries in a specific way. Node entries are reordered in a way that maps the watermark. The new order is calculated relative to a "secret" initial order, known only to the software owner, using a technique based on a numbering system with a variable radix and factorial base. The addition of the watermark in the R-tree data structure neither affects the performance nor increases the size of the R-tree. The paper provides a threat model and analysis to show that the watermarked R-trees are robust and can withstand various types of attacks.
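The ordering trick can be sketched directly: n node entries admit n! orders, so the permutation chosen relative to the secret initial order can carry an integer watermark written in the factorial number system. The entry names and helper functions below are illustrative, not the paper's code:

```python
from math import factorial

def embed(entries, w):
    """Reorder `entries` (given in the secret initial order) so that the
    resulting permutation encodes the integer w (0 <= w < n!) in
    factorial base: each digit picks an index in a shrinking pool."""
    pool, out = list(entries), []
    for i in range(len(pool) - 1, -1, -1):
        d, w = divmod(w, factorial(i))   # digit d satisfies 0 <= d <= i
        out.append(pool.pop(d))
    return out

def extract(entries, watermarked):
    """Recover w by reading back each entry's index in the shrinking pool."""
    pool, w, n = list(entries), 0, len(entries)
    for i in range(n - 1, -1, -1):
        d = pool.index(watermarked[n - 1 - i])
        w += d * factorial(i)
        pool.pop(d)
    return w

entries = ["A", "B", "C", "D"]   # secret initial order: 4! = 24 watermark values
wm = embed(entries, 13)
print(wm, extract(entries, wm))  # → ['C', 'A', 'D', 'B'] 13
```

Since an R-tree's correctness does not depend on intra-node entry order, the reorder is free: no extra storage, no lookup overhead, which is the "neither affects the performance nor increases the size" claim.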

This paper presents a concurrent error detection (CED) technique for a bit-slice of a full-adder. The proposed method computes the sum and carry bits in two alternative ways, so that transient faults are detected by comparing the results (Sum and Carry out) obtained from the two computing paths. This technique attempts to reduce the amount of extra hardware and the cost of the circuit. To avoid extra time overhead, the result is propagated as soon as the first computation is finished, so that dependent computations can commence execution as early as possible. To prove the efficiency of the proposed method, the circuit is simulated in standard 32nm CMOS technology and transient faults are deliberately injected into the layout of the circuit. The proposed design yields a 12.12% saving in transistor count compared to a DMR (Dual Modular Redundancy) style design.
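The two-path idea can be modeled in a few lines. The specific alternative computation here (gate-level Boolean equations versus arithmetic recomputation) is an illustrative choice, not necessarily the circuit structure the paper uses:

```python
def full_adder_ced(a, b, cin):
    """Full-adder bit-slice with concurrent error detection:
    compute sum/carry two independent ways and compare.
    A transient fault in either path shows up as a mismatch."""
    # path 1: standard gate-level logic equations
    s1 = a ^ b ^ cin
    c1 = (a & b) | (cin & (a ^ b))
    # path 2: alternative computation via integer addition
    total = a + b + cin
    s2, c2 = total & 1, total >> 1
    error = (s1 != s2) or (c1 != c2)
    return s1, c1, error

print(full_adder_ced(1, 1, 1))   # → (1, 1, False)
```

In hardware the comparison runs concurrently while path 1's result is already forwarded downstream, which is how the design avoids adding latency; the error flag simply arrives a little later.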

In urban communities, infrastructures that support living are indispensable. There is increased interest in alternative ways of providing such support systems, including semi-autonomous infrastructures resulting from the self-organization of local actors. In this study, we analyze the emergence and management of such infrastructures in light of the theory of complex adaptive systems, within which they are called ‘inverse infrastructures’. Empirical evidence is drawn from the case of water cooperatives in the town of Ikaalinen, Finland. Our analysis shows that, with favorable preconditions in place, inverse infrastructures may contribute significantly to local infrastructure services and so also to the functioning of society.

Three-dimensional (3-D) video compression using wavelet decomposition along the temporal axis dictates that a number of video frames must be buffered to allow for the temporal decomposition. Buffering of frames allows the temporal correlation to be exploited, and the larger the buffer, the more effective the decomposition. One problem inherent in such a setup in interactive applications such as video conferencing is that buffering translates into a corresponding time delay. In this paper, we show that 3-D coding of such image sequences can be achieved in the true sense of temporal-direction decomposition but with much smaller buffering requirements. For a practical coder, this can be achieved by introducing an approximation to the way the transform coefficients are encoded. Applying wavelet decomposition using some types of filters may introduce edge errors, which become more prominent in short signal segments. We also present a solution to this problem for the Daubechies family of filters.

In this paper, we present a solution and experimental results for real-time control of manipulation tasks considering the alignment of a robot's end effector relative to some reference. The developed controller is applicable to industrial manipulators and is based on the pseudoinverse redundancy resolution method. The application considered is the employment of two industrial robots in an offshore remote inspection and maintenance system. A leader robot is controlled freely from onshore, and a follower robot uses an attached camera to provide the onshore operator with live video feed of the ongoing operation. Robot manipulators constitute flexible camera platforms, compared to, e.g., simple pan/tilt units, for monitoring offshore operations. We develop a controller for the follower robot such that automatic camera tracking is achieved using pseudoinverse redundancy resolution control. A minimal task-space parametrization relying on stereographic projection is constructed which achieves relative end-effector alignment tracking without introducing representational or algorithmic singularities. It is shown that singular configurations will only in special cases affect the closed-loop behavior. The controller is applicable to tasks such as spray painting or polishing on curved surfaces. The control approach is experimentally validated on two Kuka KR-16 industrial robot manipulators.

We describe a sequential universal data compression procedure for binary tree sources that performs the "double mixture." Using a context tree, this method weights in an efficient recursive way the coding distributions corresponding to all bounded-memory tree sources, and ...
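At each context-tree node, methods of this family typically maintain a Krichevsky-Trofimov sequential estimate of the next bit; the weighting then mixes these node estimates recursively. The function name is hypothetical, but the KT formula itself is standard:

```python
def kt_probability(zeros, ones):
    """Krichevsky-Trofimov sequential estimate: probability that the next
    bit is 1 after observing `zeros` 0s and `ones` 1s (add-1/2 smoothing)."""
    return (ones + 0.5) / (zeros + ones + 1)

print(kt_probability(0, 0))   # → 0.5 (uniform before any observations)
```

The 1/2 offsets give the estimator its near-optimal per-node redundancy, which is what makes the mixture over all bounded-memory tree sources efficient to compute.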

Existing works on variational Bayesian (VB) treatment of the factor analysis (FA) model, such as [Variational inference for Bayesian mixture of factor analysers. In Advances in Neural Information Processing Systems. Cambridge, MA: MIT Press; Nielsen, F. B. Variational approach to factor analysis and related models. Master's thesis, The Institute of Informatics and Mathematical Modelling, Technical University of Denmark.], are found theoretically and empirically to suffer from two problems: (1) they penalize the model more heavily than BIC, and (2) they perform unsatisfactorily in low-noise cases, as redundant factors cannot be effectively suppressed. A novel VB treatment is proposed in this paper to resolve the two problems, and a simulation study is conducted to verify its improved performance over existing treatments.

This paper proposes a penalty-guided artificial bee colony (ABC) algorithm to solve the reliability redundancy allocation problem (RAP). The redundancy allocation problem involves setting reliability objectives for components or subsystems in order to meet a resource consumption constraint, e.g. the total cost. The RAP has been an active area of research for the past four decades. The difficulty one is confronted with in the RAP is maintaining feasibility with respect to three nonlinear constraints, namely cost-, weight-, and volume-related constraints. In this paper, nonlinearly mixed-integer reliability design problems are investigated in which both the number of redundant components and the corresponding component reliability in each subsystem are to be decided simultaneously so as to maximize the reliability of the system. Such reliability design problems have been studied in the literature for decades, usually using mathematical programming or heuristic optimization approaches. The ABC algorithm can search over promising feasible and infeasible regions to find the feasible optimal/near-optimal solution effectively and efficiently; numerical examples indicate that the proposed approach performs well on the reliability redundancy allocation design problems considered in this paper, and computational results compare favorably with previously developed algorithms in the literature.
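The abstract does not give the exact penalty function; a common penalty-guided formulation for a series system, in which infeasible designs are discounted rather than discarded so the search can traverse infeasible regions, can be sketched as follows (all numbers and the penalty form are illustrative):

```python
def system_reliability(r, n):
    """Series system: subsystem i has n[i] parallel components of
    reliability r[i], so the subsystem fails only if all n[i] fail."""
    R = 1.0
    for ri, ni in zip(r, n):
        R *= 1 - (1 - ri) ** ni
    return R

def penalized_fitness(r, n, cost, weight, budget_c, budget_w):
    """Penalty-guided fitness: constraint violations shrink the score
    smoothly instead of zeroing it out."""
    R = system_reliability(r, n)
    c = sum(ci * ni for ci, ni in zip(cost, n))
    w = sum(wi * ni for wi, ni in zip(weight, n))
    penalty = max(0.0, c / budget_c - 1) + max(0.0, w / budget_w - 1)
    return R / (1 + penalty) ** 2

r, cost, weight = [0.9, 0.8], [4, 3], [2, 5]
feasible   = penalized_fitness(r, [2, 2], cost, weight, budget_c=20, budget_w=20)
infeasible = penalized_fitness(r, [3, 3], cost, weight, budget_c=20, budget_w=20)
print(feasible, infeasible)
```

The graded penalty is what lets a swarm method like ABC exploit near-feasible solutions as stepping stones: a slightly over-budget design with high reliability still scores well enough to guide neighbors back into the feasible region.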

In this paper, an approach for integrating the data obtained from structural health monitoring (SHM) in the life-cycle performance assessment of ship structures under uncertainty is presented. Life-cycle performance of the ship structure is quantified in terms of the reliability with respect to first and ultimate failures and the system redundancy. The performance assessment of the structure is enhanced by incorporating prior design code-based knowledge and information obtained by SHM using Bayesian updating concepts. Advanced modeling techniques are used for the hull strength computations needed for the life-cycle performance analysis. SHM data obtained by testing a scaled model of a Joint High-speed Sealift Ship is used to update its life-cycle performance.

Fault detection and isolation (FDI) of a class of networked control systems (NCS), applied to a telerobotics system, is studied in this paper. The considered NCS application is a telerobotics system, which is modelled in a hybrid manner by including the continuous, discrete, uncertain, and stochastic aspects of all the system components. The main components of the NCS, namely the network system and the controlled system, are completely decoupled according to their operating characteristics. The network part is treated as a discrete and stochastic system in the presence of non-structured uncertainties and external faults, while the controlled part is considered as a continuous system in the presence of input and output faults. Two model-based fault diagnosis approaches are proposed in this paper. The first concerns a discrete and stochastic observer applied to the network system in order to detect and isolate system faults in the presence of induced delay on the network part. The second is based on analytical redundancy relations (ARR), which allow detecting and isolating the system's input and output faults. Experimental results on a telerobotics system show the performance and the limits of the proposed fault diagnosis approach.

A coding theory approach to error control in redundant residue number systems (RRNSs) is presented. The concepts of Hamming weight, minimum distance, weight distribution, and error detection and correction capabilities in redundant residue number systems are introduced. The necessary and sufficient conditions for the desired error control capability are derived from the minimum distance point of view. Closed-form expressions are
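A minimal sketch of error detection in an RRNS: an integer in the legitimate range is encoded as residues modulo both information and redundant moduli, and a residue vector that is inconsistent with every legitimate integer signals an error. The moduli sets below are illustrative choices, and decoding is done by brute force rather than an efficient CRT-based procedure.

```python
from math import prod

INFO_MODULI = [3, 5, 7]          # legitimate range M = 3 * 5 * 7 = 105
REDUNDANT_MODULI = [11, 13]      # two redundant residues (minimum distance 3)
MODULI = INFO_MODULI + REDUNDANT_MODULI

def encode(x):
    """Represent x by its residues modulo every modulus."""
    return [x % m for m in MODULI]

def decode(residues):
    """Brute-force consistency check over the legitimate range.
    Returns the integer if the residue vector is legitimate, or None
    when no x < M matches (i.e., an error is detected)."""
    M = prod(INFO_MODULI)
    for x in range(M):
        if all(x % m == r for m, r in zip(MODULI, residues)):
            return x
    return None

code = encode(42)
corrupted = list(code)
corrupted[0] = (corrupted[0] + 1) % MODULI[0]   # single residue error
```

With two redundant moduli the minimum distance is 3, so any single residue error leaves the vector inconsistent with every legitimate codeword and is therefore detected.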

Three-phase four-leg voltage-source converters are used in inverter, rectifier and active filter applications to control the neutral current caused by unbalanced or nonlinear loads. From the modulation point of view, a four-leg converter can be considered as a four-phase system. Hence, the modulation task can be carried out with a generic multiphase modulation algorithm. In this paper, a recent multilevel multiphase space vector PWM algorithm with switching state redundancy is particularized for multilevel three-phase four-leg converters. The obtained algorithm is compared with an existing three-dimensional modulation technique, showing important similarities. Finally, the new algorithm is implemented in a low-cost field-programmable gate array and tested with a five-level cascaded full-bridge inverter.

To take advantage of economies of scale, a growing number of firms have begun to explore the possibility of integrating supply chain activities. Such integration necessitates the redesign of the warehouse network. Typically, a warehouse redesign problem involves consolidating regional warehouses into fewer master stocking points and subsequently phasing out redundant or underutilized warehouses without degrading customer service. This paper develops a mixed-integer programming model to solve the warehouse redesign problem. The usefulness of the model was validated by its successful application to a real-world problem and by sensitivity analyses under changing scenarios within a warehouse network configuration.
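The core trade-off in such a consolidation model is fixed warehouse costs versus region-to-warehouse service costs. The toy sketch below enumerates all open/closed combinations by brute force instead of solving a mixed-integer programme with a solver; the costs, region names, and warehouse names are all hypothetical.

```python
from itertools import combinations

# Hypothetical data: fixed cost of keeping each warehouse open, and the
# cost of serving each region from a given warehouse.
fixed_cost = {"A": 50, "B": 60, "C": 40}
assign_cost = {                      # assign_cost[region][warehouse]
    "r1": {"A": 5, "B": 9, "C": 20},
    "r2": {"A": 8, "B": 4, "C": 15},
    "r3": {"A": 25, "B": 22, "C": 3},
}

def total_cost(open_set):
    """Fixed costs of the open warehouses plus each region's cheapest assignment."""
    cost = sum(fixed_cost[w] for w in open_set)
    for region, costs in assign_cost.items():
        cost += min(costs[w] for w in open_set)
    return cost

def best_design():
    """Enumerate every non-empty subset of warehouses to keep open."""
    warehouses = list(fixed_cost)
    best = None
    for k in range(1, len(warehouses) + 1):
        for subset in combinations(warehouses, k):
            c = total_cost(subset)
            if best is None or c < best[1]:
                best = (subset, c)
    return best

subset, cost = best_design()
```

A real instance would add service-level constraints and use a MIP solver, since enumeration grows exponentially in the number of candidate warehouses; the structure of the objective, however, is the same.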

This paper defines and compares the fault tolerance of a triple flight control architecture to a quadruple type, and investigates the impact of worst-case failures on the transient response of an aircraft. New insight into computing fault coverage is discussed. Two coverage models were used to compute the Probability of Loss of Control (PLOC) and the Probability of Mission Abort (PMA) for candidate architectures. Results indicate that both triple and quadruple architectures can meet the fault-tolerance requirements with an acceptable level of transients upon first and second failures. Triple architectures will require a higher level of fault detection, isolation, and accommodation coverage than quadruple architectures, and produce substantially larger transients upon second failure. REQUIREMENTS: Derived from MIL-F-9490D, a Flight Control System (FCS) (consisting of sensors, computers, and actuators) must be designed to a Probability of Mission Abort (PMA) of 5 x 10^-5 and a Probability of Loss of Control (PLOC) of 5 x 10^-7 per Flight Hour (FH). Top-down apportionments of these requirements to Flight Control Computers (FCC) [1] are estimated as: PMA = 10^-5/FH, and PLOC = 5 x 10^-7/FH. MIL-STD-882B requires an FCS be capable of exhibiting a Fail-Operational/Fail-Operational (FO^2) scenario once the aircraft is up and flying. Coverage for Fault Detection, Isolation, and Accommodation (FDIA), and the confidence intervals associated with these coverage parameters, are not well defined.
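Why a triplex needs higher FDIA coverage than a quad can be seen with a crude first-order model (not the paper's coverage models): in a triplex any single uncovered failure is assumed fatal, while a quad can mask one uncovered failure by voting and loses control only on a second. The failure probability and the target below are illustrative assumptions.

```python
from math import comb

TARGET_PLOC = 5e-7      # per flight hour (apportioned requirement)
P_FAIL = 1e-4           # assumed per-channel failure probability per hour

def required_coverage(n_channels, tolerated_uncovered):
    """Toy first-order model: loss of control needs
    (tolerated_uncovered + 1) simultaneous uncovered channel failures.
    Solve comb(n, k) * (P_FAIL * (1 - c))**k <= TARGET_PLOC for c."""
    k = tolerated_uncovered + 1
    bound = (TARGET_PLOC / comb(n_channels, k)) ** (1.0 / k) / P_FAIL
    return max(0.0, 1.0 - bound)

triplex_cov = required_coverage(3, 0)   # any uncovered failure is fatal
quad_cov = required_coverage(4, 1)      # one uncovered failure is masked
```

Under these assumptions the triplex must detect and accommodate well over 99% of failures to meet the PLOC target, while the quad meets it with essentially any coverage, which is the qualitative conclusion the abstract reports.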

Affordances encode relationships between actions, objects and effects. They play an important role in basic cognitive capabilities such as prediction and planning. We address the problem of learning affordances through the interaction of a robot with the environment, a key step in understanding world properties and developing social skills. We present a general model for learning object affordances using Bayesian networks, integrated within a general developmental architecture for social robots. Since learning is based on a probabilistic model, the approach is able to deal with uncertainty, redundancy and irrelevant information. We demonstrate successful learning in the real world by having a humanoid robot interact with objects, and we demonstrate the benefits of the acquired knowledge in imitation games.

The sensitivity of application-specific integrated circuits (ASICs) to single event effects (SEE) can induce failures in systems exposed to increased radiation levels in space and on the ground. This paper presents a design methodology for a fully fault-tolerant ASIC that is immune to single event upsets (SEU) in sequential logic, single event transients (SET) in combinational logic, and single event latchup (SEL). Dual modular redundancy (DMR) and an SEL power-switch (SPS) form the basis of a modified ASIC design flow. Measurement results have proven the correct functionality of the DMR and SPS circuits, as well as the high fault tolerance of the implemented ASICs, along with moderate overhead in terms of power consumption and occupied silicon area.
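The DMR principle can be shown behaviourally: two copies of a module compute the same function and a comparator flags any mismatch. The toy "ALU" and its bit-flip fault model below are purely illustrative; real DMR is a hardware duplication, and note that DMR only detects a single-event error, it cannot correct it (correction needs triple modular redundancy and a voter).

```python
def alu(a, b, fault_bit=None):
    """Toy combinational module; fault_bit injects a single bit flip (SET model)."""
    result = a + b
    if fault_bit is not None:
        result ^= 1 << fault_bit
    return result

def dmr_execute(a, b, fault_in_copy=None, fault_bit=None):
    """Dual modular redundancy: run two copies and compare their outputs.
    Returns (result, error_detected); the result is None on a mismatch,
    since DMR cannot tell which copy is wrong."""
    r0 = alu(a, b, fault_bit if fault_in_copy == 0 else None)
    r1 = alu(a, b, fault_bit if fault_in_copy == 1 else None)
    return (r0 if r0 == r1 else None), r0 != r1

ok_result, ok_error = dmr_execute(3, 4)
bad_result, bad_error = dmr_execute(3, 4, fault_in_copy=0, fault_bit=2)
```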

In this major paper, we explore Boneh's Set-Covering (SC) equivalence technique for the classification of constraints in mathematical programmes as either redundant or necessary. At the core of Boneh's technique is a method for sampling in n-space and, for each such point, generating a binary row of an SC matrix, E_p^0, whose jth component is one if and only if the jth constraint is violated at that point. We describe a simple implementation of Boneh's technique that we refer to as the base method. The base method produces a redundancy-free constraint matrix for an SC problem. Boneh has shown that any solution to the SC problem yields a constraint classification, which, in turn, yields a reduction of the constraint set.
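The sampling step can be sketched directly: draw points in a box, record which constraints each point violates, and inspect the resulting binary matrix. The constraint set, sampling box, and seed below are hypothetical; in particular, a constraint whose column is all zeros was never violated in the sample and is a candidate for redundancy (the full technique then solves a set-covering problem over the matrix).

```python
import random

# Hypothetical constraints g_j(x) <= 0 over a 2-D sampling box;
# the last constraint (x0 <= 10) is redundant given x0 <= 5.
constraints = [
    lambda x: x[0] - 5,        # x0 <= 5
    lambda x: x[1] - 5,        # x1 <= 5
    lambda x: -x[0] - x[1],    # x0 + x1 >= 0
    lambda x: x[0] - 10,       # x0 <= 10  (redundant)
]

def sc_matrix(n_points, box=(-2.0, 8.0), seed=1):
    """Boneh's sampling step: each sample point yields a binary row whose
    jth entry is 1 iff constraint j is violated at that point."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n_points):
        x = [rng.uniform(*box), rng.uniform(*box)]
        rows.append([1 if g(x) > 0 else 0 for g in constraints])
    return rows

E = sc_matrix(500)
never_violated = [all(row[j] == 0 for row in E) for j in range(len(constraints))]
```

Because x0 never exceeds 10 inside the sampling box, the fourth column is identically zero, flagging that constraint as redundant; the other three are each violated somewhere in the sample.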

The author analyses three important parameters used in the partial typology of languages and attempts a confrontation of Spanish with Polish in the light of these parameters. Spanish, which is considered moderately analytic, exhibits only slightly stronger nominal tendencies than the more synthetic Polish. Moreover, Spanish appears moderately economical at the lexical level and moderately redundant at the syntactic level in comparison with Polish. There is no doubt, however, that both languages (like the other Romance and Slavic languages) are languages of the middle ground among the world's languages, and the typological differences between them prove small in comparison with the typological features of exotic languages.

An approach to the design of an evolving computer system is presented. Drawbacks of existing systems are explained, showing the limitations of hardware and software. New design principles and models of representation for algorithms and architecture are introduced and described, with an explanation of how to achieve better parallelism and reduce concurrency overhead. The structure of the system software and hardware required for a reconfigurable, adaptable system is presented and explained. A prototype architecture of an evolving computer system is discussed in terms of gains in performance, reliability and power saving. A comparison with the Berkeley approach to future computer systems is given.

The robustness of microprocessor-based systems under Single Event Effects is a pressing concern. A widely adopted way to harden a microprocessor-based system consists in modifying the software application by adding redundancy and fault detection capabilities. The efficiency of the selected software-based solution must be assessed. This evaluation process allows designers to choose the most suitable robustness technique and to check whether the hardened system achieves the expected dependability levels. Several approaches with this purpose can be found in the literature, but their efficiency is limited in terms of the number of faults that can be injected, as well as the accuracy of the fault injection process. In this paper, we propose FPGA-based fault injection techniques to evaluate software robustness methods under Single Event Upsets (SEU) as well as Single Event Transients (SET). Experimental results illustrate the benefits of the proposed fault injection method, which is able to evaluate a large number of faults of both types.

CITATION / REFERENCE - Ab Manan, N. A., Zamari, Z. M., Pillay, I. A. S., Adnan, A. H. M., Yusof, J., & Raslee, N. N. (2017). Mother Tongue Interference in the Writing of English as a Second Language (ESL) Malay Learners. International Journal of Academic Research in Business and Social Sciences, 7(11), 1294-1301. https://dx.doi.org/10.6007/IJARBSS/v7-i11/3566 ABSTRACT - The objective of this study is to identify the errors in writing committed by Malay ESL learners that can be attributed to mother tongue interference. Twenty pre-degree quantity surveying students from the Universiti Teknologi MARA Perak campus participated in the study. They were instructed to write 200- to 300-word essays on a given topic in English and in Malay. Their compositions were analyzed using the algorithm for conducting error analysis (EA) proposed by James (1998), as practiced by current EA practitioners. Since the main focus of the study is mother tongue interference, only errors related to L1 ...

This paper presents an availability optimization of an engineering system assembled in a series configuration, with redundancy of units and corrective maintenance resources as optimization parameters. The aim is to reach maximum availability, considering installation and corrective maintenance costs, weight and volume as constraints. The optimization method uses a Genetic Algorithm based on the biological concepts of species evolution. It is a robust method, as it does not converge prematurely to a local optimum, and it does not require differential calculus, which facilitates computational implementation. Results indicate that the methodology is suitable for solving a wide range of engineering design problems involving the allocation of redundancies and maintenance resources. Reliability Engineering and System Safety 91 (2006) 413-420.
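A minimal sketch of the GA-based redundancy allocation idea: a series system's availability is the product of stage availabilities, each stage improved by adding parallel units, under a cost budget. The unit availabilities, costs, budget, and GA parameters below are all hypothetical, and only a single cost constraint stands in for the paper's cost/weight/volume constraints.

```python
import random

AVAIL = [0.90, 0.95, 0.85]      # unit availability per series stage (hypothetical)
COST = [4, 6, 3]                # unit cost per stage (hypothetical)
BUDGET = 40
MAX_UNITS = 4

def availability(alloc):
    """Series system of parallel stages: A = prod(1 - (1 - a_i)^n_i)."""
    a = 1.0
    for ai, ni in zip(AVAIL, alloc):
        a *= 1.0 - (1.0 - ai) ** ni
    return a

def fitness(alloc):
    """Availability if the allocation fits the budget, else zero (penalty)."""
    cost = sum(c * n for c, n in zip(COST, alloc))
    return availability(alloc) if cost <= BUDGET else 0.0

def ga(pop_size=30, generations=60, seed=7):
    rng = random.Random(seed)
    pop = [[rng.randint(1, MAX_UNITS) for _ in COST] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(COST))
            child = p1[:cut] + p2[cut:]           # one-point crossover
            if rng.random() < 0.3:                # mutation
                i = rng.randrange(len(COST))
                child[i] = rng.randint(1, MAX_UNITS)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = ga()
```

Because fitness only ranks candidates, no derivatives of the availability function are needed, which is the property the abstract highlights.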

Normalization is the most widely exercised technique for the analysis of relational databases. It aims at creating a set of relational tables with minimum data redundancy that preserve consistency and facilitate correct insertion, deletion, and modification. A normalized database does not show anomalies due to future updates. Employing an automated technique for this data analysis saves considerable time compared with doing it manually, and the process is tested to be reliable and correct. This paper presents a new, fully automated relational database normalization method. It first produces the dependency matrix and the directed graph matrix, and then proceeds to generate the 2NF, 3NF, and BCNF normal forms. All tables are generated as the procedure proceeds. A further by-product of this research is the automatic identification of a primary key for every final table that is generated.
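The building block underneath any such normalization procedure is the attribute closure under a set of functional dependencies, which also yields the candidate-key test used to pick primary keys. The sketch below is a textbook closure algorithm, not the paper's matrix-based method, and the example FDs are hypothetical.

```python
def closure(attrs, fds):
    """Attribute closure under functional dependencies.
    fds: iterable of (lhs, rhs) pairs of attribute sets."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def is_candidate_key(attrs, all_attrs, fds):
    """attrs is a (super)key iff its closure covers every attribute."""
    return closure(attrs, fds) == set(all_attrs)

# Hypothetical schema: id -> name, dept; dept -> head
FDS = [({"id"}, {"name", "dept"}), ({"dept"}, {"head"})]
ALL = {"id", "name", "dept", "head"}
```

Here {id} determines everything transitively, so it is a candidate key, while the transitive dependency dept -> head is exactly what a 3NF decomposition would split into its own table.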

Various forms of double expression are recorded at several levels of language and in different areas of linguistics. In many language-theoretical contexts, including work on morphosyntactic, pragmatic and text-linguistic aspects, and further in studies on stylistics, rhetoric, etc., the problems of repetition, accumulation, redundancy and the juxtaposition of almost all grammatical categories, with varying function and effect, are brought up. Such aspects encompass a great deal, but above all the problems of morphology, syntax, semantics, lexis, phonology and pragmatics.

The paper presents a new generation of torque-controlled lightweight robots (LWR) developed at the Institute of Robotics and Mechatronics of the German Aerospace Center. In order to act in unstructured environments and interact with humans, the robots have design features and control/software functionalities that distinguish them from classical robots, such as a load-to-weight ratio of 1:1, torque sensing in the joints, active vibration damping, sensitive collision detection, and compliant control at the joint and Cartesian levels. Due to the partially unknown properties of the environment, robustness of planning and control with respect to environmental variations is crucial. After briefly describing the main hardware features, the paper focuses on showing how joint torque sensing (as a main feature of the robot) is consequently used to achieve the above-mentioned performance, safety, and robustness properties.

A new robust Kalman filter is proposed that detects and bounds the influence of outliers in a discrete linear system, including those generated by thick-tailed noise distributions such as impulsive noise. Besides outliers induced in the process and observation noises, we consider in this paper a new type called structural outliers. For a filter to be able to counter the effect of these outliers, observation redundancy in the system is necessary. We have therefore developed a robust filter in a batch-mode regression form to process the observations and predictions together, making it very effective in suppressing multiple outliers. A key step in this filter is a new prewhitening method that incorporates a robust multivariate estimator of location and covariance. The other main step is the use of a generalized maximum likelihood-type (GM) estimator based on Schweppe's proposal and the Huber function, which has a high statistical efficiency at the Gaussian distribution and a positive breakdown point in regression. The latter is defined as the largest fraction of contamination for which the estimator yields a finite maximum bias under contamination. This GM-estimator enables our filter to bound the influence of residual and position, where the former measures the effects of observation and innovation outliers and the latter assesses that of structural outliers. The estimator is solved via the iteratively reweighted least squares (IRLS) algorithm, in which the residuals are standardized utilizing robust weights and scale estimates. Finally, the state estimation error covariance matrix of the proposed GM-Kalman filter is derived from its influence function. Simulation results revealed that our filter compares favorably with the H∞ filter in the presence of outliers.
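The IRLS step with Huber weights and a robust scale can be illustrated on the simplest possible regression, a no-intercept line fit. This is a minimal sketch of the GM-estimation idea, not the paper's filter: it uses a plain MAD scale instead of the robust multivariate prewhitening, and residual-based weights only (no position bounding).

```python
def huber_irls_slope(xs, ys, delta=1.345, iters=20):
    """Iteratively reweighted least squares for y = b*x with Huber weights
    and a MAD-based scale estimate: large standardized residuals are
    progressively downweighted instead of discarded."""
    b = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)  # OLS start
    for _ in range(iters):
        r = [y - b * x for x, y in zip(xs, ys)]
        abs_r = sorted(abs(v) for v in r)
        s = abs_r[len(abs_r) // 2] / 0.6745 or 1e-9   # MAD scale (guarded)
        w = [min(1.0, delta * s / abs(v)) if v else 1.0 for v in r]
        b = sum(wi * x * y for wi, x, y in zip(w, xs, ys)) / \
            sum(wi * x * x for wi, x in zip(w, xs))
    return b

xs = list(range(10))
ys = [2.0 * x for x in xs]
ys[9] = 100.0                     # one gross observation outlier
slope = huber_irls_slope(xs, ys)
```

Ordinary least squares on this data is pulled far from the true slope of 2 by the single outlier, while the reweighted estimate recovers it, which is the bounded-influence behaviour the filter relies on.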

This paper looks at the dilemmas posed by ‘expertise’ in high-technology regulation by examining the US Federal Aviation Administration’s (FAA) ‘type-certification’ process, through which they evaluate new designs of civil aircraft. It observes that the FAA delegate a large amount of this work to the manufacturers themselves, and discusses why they do this by invoking arguments from the sociology of science and technology. It suggests that – contrary to popular portrayal – regulators of high technologies face an inevitable epistemic barrier when making technological assessments, which forces them to delegate technical questions to people with more tacit knowledge, and hence to ‘regulate’ at a distance by evaluating ‘trust’ rather than ‘technology’. It then unravels some of the implications of this and its relation to our theories of regulation and ‘regulatory capture’.