Hybrid Architecture Research Papers - Academia.edu

2011, Progress in Polymer Science

Concurrent with the rapid development of both dendrimers and hyperbranched polymers, a novel class of block copolymer architectures has emerged from the combination of these dendritic architectures with linear chains, the "linear-dendritic block copolymers" (LDBCs). This review gives a comprehensive summary of the state of the art in this rapidly developing field, from pioneering early work to promising recent approaches. The different strategies leading to these hybrid architectures with either perfect dendrimer/dendron building blocks or imperfect, yet more conveniently accessible, hyperbranched segments are reviewed and compared. The consequences of the unusual polymer topology for supramolecular structures both in solution and in the solid state are summarized, and important differences in comparison with classical linear block copolymer structures are highlighted. Current challenges in the area of block copolymers, nanotechnology and potential applications of linear-dendritic block copolymers are also considered.

2010, Neurocomputing

Driven by rapid ongoing advances in computer hardware, neuroscience and computer science, Artificial Brain research and development are blossoming. This article constitutes the first half of a two-part world survey of artificial brain projects: this part dealing with large-scale brain simulations, and the second part with biologically inspired cognitive architectures (BICAs). The large-scale brain simulations we consider in depth here include those by Markram, Modha, Boahen, Horwitz, Edelman, Izhikevich, and Just. As well as reviewing the particulars of these simulation projects, we position them in a broader perspective, comparing the different underlying definitions of the concept of "simulation," noting that in many ways the projects are modeling neurosystems at different levels as well as using different methodologies.

2010

This paper investigates the active and reactive power sharing of an autonomous hybrid microgrid. Unlike existing microgrids, which are purely ac, the hybrid microgrid studied here comprises dc and ac sub-grids interconnected by power electronic interfaces. The main challenge is to manage the power flow among all the sources distributed throughout the two types of sub-grids, which is certainly tougher than in previous efforts developed for either a purely ac or a purely dc microgrid. This wider scope of control has not yet been investigated, and would certainly rely on the coordinated operation of dc sources, ac sources and interlinking converters. Suitable control and normalization schemes are therefore developed for controlling them, with results presented to show the overall performance of the hybrid microgrid.
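The normalization idea behind such coordinated control can be sketched as follows: the ac sub-grid's frequency droop and the dc sub-grid's voltage droop are mapped onto a common per-unit range so the interlinking converter can compare loadings. The limits, gain, and function names below are illustrative assumptions, not the paper's actual scheme.

```python
# Sketch of droop normalization for a hybrid ac/dc microgrid
# (hypothetical limits and gain; the paper's actual scheme may differ).

def normalize(value, v_min, v_max):
    """Map a droop variable onto [-1, 1] so ac and dc loadings are comparable."""
    return (2 * value - (v_max + v_min)) / (v_max - v_min)

def interlink_transfer(freq, v_dc, f_range=(49.5, 50.5), v_range=(450.0, 500.0), gain=10.0):
    """Return a power command (kW) for the interlinking converter.

    Positive -> transfer from the dc sub-grid to the ac sub-grid.
    The converter acts on the *difference* of the normalized droop
    signals, driving both sub-grids toward equal per-unit loading.
    """
    f_pu = normalize(freq, *f_range)
    v_pu = normalize(v_dc, *v_range)
    return gain * (v_pu - f_pu)
```

When both sub-grids sit at their nominal midpoints the normalized signals cancel and no power is transferred; a lightly loaded dc side paired with a heavily loaded ac side produces a positive transfer command.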

2009

Medical billing is a complex, dynamic, knowledge-oriented process. To avoid claim rejections, almost all of the leading companies whose business relates to medical billing and medical claim processing provide some sort of claim scrubbing software tool. This research is a comparative study of two claim scrubbing tools developed at a USA-based medical billing company. Although both tools serve the same purpose of medical claim scrubbing, their implementation methodologies differ. One is conventional software based on stored procedures and functions; we name it simply 'Medical Claim Scrubber'. The other has been developed using rule-based system concepts, so we name it 'A Rule Based System'. This paper presents the strong and weak points of both tools. A new hybrid design, which combines both of them, has also been proposed. Currently this hybrid architecture is proposed for the medical billing domain, but it can be applied to virtually any real-life domain where complex, dynamic, knowledge-oriented edits need to be implemented.

2010

In recent years, many systems have employed NAND flash memory as storage devices because of its advantages of higher performance (compared to the traditional hard disk drive), high density, random access, increasing capacity, and falling cost. On the other hand, the performance of NAND flash memory is limited by its "erase-before-write" requirement. Log-based structures have been used to alleviate this problem by writing updated data to clean space. Prior log-based methods, however, cannot avoid excessive erase operations when there are frequent updates, which quickly consume free pages, especially when some data are updated repeatedly. In this paper, we propose a hybrid architecture for NAND flash memory storage in which the log region is implemented using phase change random access memory (PRAM). Compared to traditional log-based architectures, it has the following advantages: (1) the PRAM log region allows in-place updating, so it significantly improves the usage efficiency of log pages by eliminating out-of-date log records; (2) it greatly reduces the traffic of reading from the NAND flash memory storage, since the size of the logs loaded for a read operation is decreased; (3) the energy consumption of the storage system is reduced, as the overhead of writing and reading log data is decreased with the PRAM log region; (4) the lifetime of NAND flash memory is increased because the number of erase operations is reduced. To facilitate the PRAM log region, we propose several management policies. The simulation results show that our proposed methods can substantially improve the performance, energy consumption, and lifetime of NAND flash memory storage.
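Advantage (1) above, in-place updating of the log, can be illustrated with a toy model: an append-only flash-style log consumes one slot per update, while a PRAM-style log overwrites the stale record for the same page. The classes and figures below are illustrative only, not the paper's simulator.

```python
# Toy model of the PRAM log region: because PRAM supports in-place
# updates, a repeated write to the same page overwrites its stale log
# record instead of appending a new one.

class AppendOnlyLog:           # flash-style log: every update appends
    def __init__(self):
        self.records = []
    def write(self, page, data):
        self.records.append((page, data))
    def used(self):
        return len(self.records)

class InPlaceLog:              # PRAM-style log: one slot per dirty page
    def __init__(self):
        self.records = {}
    def write(self, page, data):
        self.records[page] = data   # overwrite the out-of-date record
    def used(self):
        return len(self.records)

# A hot page updated 100 times costs 100 slots in an append-only log
# but only 1 slot when in-place updates are possible.
flash_log, pram_log = AppendOnlyLog(), InPlaceLog()
for i in range(100):
    flash_log.write(page=7, data=i)
    pram_log.write(page=7, data=i)
```

Fewer occupied log slots is exactly what delays garbage collection and the erase operations it triggers, which is the mechanism behind advantages (2)-(4).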

2009

This paper discusses the psychological, philosophical and neurological definitions of consciousness and the prospects for the development of a conscious machine in the foreseeable future. Various definitions of consciousness are introduced and discussed within the different fields mentioned. We conclude that a conscious machine may be within the realm of engineering possibilities if current technological developments, especially Moore's Law, continue at their current pace. Given the complexity of cognition and consciousness a hybrid architecture appears to offer the best solution for the implementation of a complex system of systems which functionally approximates a human mind. Ideally, this architecture would include traditional symbolic representations as well as distributed representations which approximate the nonlinear dynamics seen in the human brain.

2006

One of the most promising uses for multi agent systems is searching for items or resources in unknown environments. The use of multi agent systems to locate unexploded ordnance is an excellent example of one such application. This research explores the possibility of a hybrid architecture that implements mine detection, obstacle avoidance and route planning with a group of autonomous agents with coordination capabilities. Groups of inter-cooperating multi agents working towards a common goal have the potential to perform a task faster and with an increased level of efficiency than the same number of agents acting in an independent manner. This coordination framework will address the issues involved during such unknown exploration.

2007, QShine 2007 Workshop: Satellite/Terrestrial Interworking on - IWSTI '07

In this paper, the main role of satellite systems in hybrid satellite/terrestrial networks will be highlighted, as well as the main functions which should be performed to optimize the performance of these hybrid networks. Then, some typical services believed to occupy a great portion of the future telecommunication service market and to benefit from these hybrid architectures will be presented. Generally speaking, the list of hybrid architectures presented in this paper is not exhaustive but reflects the main work carried out in the Institute of Communications and Navigation of DLR and in the framework of the Satellite Communications Network of Excellence "SatNEx" subsidized by the European Union, more specifically in the focus topic "Hybrid broadband network architectures" of the Joint Activity 2130 "Broadband Access Network". The critical issues related to hybrid satellite/terrestrial architectures will be identified and analyzed throughout this paper and illustrated by selected realistic examples.

1998, IEEE Transactions on Systems, Man, and Cybernetics

One of the main obstacles in applying genetic algorithms (GAs) to complex problems has been the high computational cost due to their slow convergence rate. We encountered such a difficulty in our attempt to use the classical GA for estimating parameters of a metabolic model. To alleviate this difficulty, we developed a hybrid approach that combines a GA with a stochastic variant of the simplex method in function optimization. Our motivation for developing the stochastic simplex method is to introduce a cost-effective exploration component into the conventional simplex method. In an attempt to make effective use of the simplex operation in a hybrid GA framework, we used an elite-based hybrid architecture that applies one simplex step to a top portion of the ranked population. We compared our approach with five alternative optimization techniques, including a simplex-GA hybrid independently developed by Renders-Bersini (R-B) and adaptive simulated annealing (ASA). Our empirical evaluations showed that our hybrid approach for the metabolic modeling problem outperformed all other techniques in terms of accuracy and convergence rate. We used two additional function optimization problems to compare our approach with the five alternative methods.
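The elite-based hybrid described above, one simplex step applied to the top of the ranked population each generation, can be sketched roughly as follows. This uses a plain Nelder-Mead-style reflection rather than the paper's stochastic simplex variant, and the population sizes, operators and toy objective are assumptions.

```python
import random

def sphere(x):  # toy objective: global minimum 0 at the origin
    return sum(v * v for v in x)

def reflect_worst(simplex, f):
    """One Nelder-Mead-style reflection step on a small set of points."""
    simplex = sorted(simplex, key=f)
    centroid = [sum(p[i] for p in simplex[:-1]) / (len(simplex) - 1)
                for i in range(len(simplex[0]))]
    worst = simplex[-1]
    reflected = [c + (c - w) for c, w in zip(centroid, worst)]
    if f(reflected) < f(worst):
        simplex[-1] = reflected       # accept the reflection if it improves
    return simplex

def hybrid_ga(f, dim=2, pop_size=20, elite=4, gens=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)                                # rank the population
        pop[:elite] = reflect_worst(pop[:elite], f)    # simplex step on elites
        children = []
        while len(children) < pop_size - elite:
            a, b = rng.sample(pop[:pop_size // 2], 2)  # select from top half
            children.append([(x + y) / 2 + rng.gauss(0, 0.1)
                             for x, y in zip(a, b)])   # blend crossover + mutation
        pop = pop[:elite] + children
    return min(pop, key=f)

best = hybrid_ga(sphere)
```

The design point is that the simplex step injects cheap local refinement into the elites while the GA machinery keeps exploring, which is the convergence-rate argument the abstract makes.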

2008

Roadrunner is a 1.38 Pflop/s-peak (double precision) hybrid-architecture supercomputer developed by LANL and IBM. It contains 12,240 IBM PowerXCell 8i processors and 12,240 AMD Opteron cores in 3,060 compute nodes. Roadrunner is the first supercomputer to run Linpack at a sustained speed in excess of 1 Pflop/s. In this paper we present a detailed architectural description of Roadrunner and a detailed performance analysis of the system. A case study of optimizing the MPI-based application Sweep3D to exploit Roadrunner's hybrid architecture is also included. The performance of Sweep3D is compared to that of the code on a previous implementation of the Cell Broadband Engine architecture-the Cell BE-and on multicore processors. Using validated performance models combined with Roadrunner-specific microbenchmarks we identify performance issues in the early pre-delivery system and infer how well the final Roadrunner configuration will perform once the system software stack has matured.

Transition to hybrid CPU/GPU platforms in high performance computing is challenging in the aspect of efficient utilisation of the heterogeneous hardware and existing optimised software. During recent years, scientific software has been ported to multicore and GPU architectures and now should be reused on hybrid platforms. In this paper, we model the performance of such scientific applications in order to execute them efficiently on hybrid platforms. We consider a hybrid platform as a heterogeneous distributed-memory system and apply the approach of functional performance models, which was originally designed for uniprocessor machines. The functional performance model (FPM) represents the processor speed by a function of problem size and integrates many important features characterising the performance of the architecture and the application. We demonstrate that FPMs facilitate performance evaluation of scientific applications on hybrid platforms. FPM-based data partitioning algorithms have been proved to be accurate for load balancing on heterogeneous networks of uniprocessor computers. We apply FPM-based data partitioning to balance the load between cores and GPUs in the hybrid architecture. In our experiments with parallel matrix multiplication, we couple the existing software optimised for multicores and GPUs and achieve high performance of the whole hybrid system.
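The FPM-based partitioning described above can be sketched as a fixed-point iteration: each device's speed is a function of its share, and shares are redistributed in proportion to the speeds observed at the current shares until execution times equalize. The speed functions below are made-up illustrations, not measured models.

```python
# Sketch of functional-performance-model (FPM) data partitioning: find
# shares d_i with equal execution times d_i / s_i(d_i), where each
# speed s_i is a function of the share d_i.

def partition(total, speed_fns, iters=50):
    shares = [total / len(speed_fns)] * len(speed_fns)  # start with an even split
    for _ in range(iters):
        speeds = [s(d) for s, d in zip(speed_fns, shares)]
        total_speed = sum(speeds)
        shares = [total * v / total_speed for v in speeds]  # redistribute by speed
    return shares

cpu = lambda d: 10.0                   # flat model: constant-speed device
gpu = lambda d: 80.0 * d / (d + 1000)  # GPU: slow on small inputs, fast on large

shares = partition(10000, [cpu, gpu])
times = [d / s(d) for d, s in zip(shares, [cpu, gpu])]
```

At the fixed point the shares are proportional to the speeds evaluated at those very shares, so both devices finish at (approximately) the same time; with a size-dependent GPU model this gives the GPU a much larger share than a constant-speed model would.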

2000, IEEE Transactions on Control Systems Technology

In this paper, the problem of designing and developing a hybrid Fault Detection and Isolation (FDI) scheme for a network of unmanned vehicles (NUVs) that is subject to large environmental disturbances is investigated. The proposed FDI algorithm is a hybrid architecture that is composed of a bank of continuous-time residual generators and a discrete-event system (DES) fault diagnoser. A novel set of residuals is generated so that the DES fault diagnoser empowered by incorporating appropriate combinations of the residuals and their sequential features will robustly detect and isolate faults in the NUVs. Our proposed hybrid FDI algorithm is then applied to actuator fault detection and isolation in a network of quad-rotors. Simulation results demonstrate and validate the performance capabilities of our proposed hybrid FDI algorithm.
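The two-layer structure described above, continuous residual generators feeding a discrete-event diagnoser, can be sketched minimally: residuals are thresholded into binary events, and a lookup over the event pattern isolates the fault. The thresholds and diagnoser table are illustrative assumptions, not the paper's design.

```python
# Minimal sketch of a hybrid FDI scheme: continuous residuals are
# quantized into discrete events, and a discrete-event diagnoser maps
# the event pattern to a fault hypothesis (all values hypothetical).

def residual_events(residuals, thresholds):
    """Quantize each residual into a binary event: 1 = threshold exceeded."""
    return tuple(int(abs(r) > t) for r, t in zip(residuals, thresholds))

# Hypothetical diagnoser table: which event pattern isolates which fault.
DIAGNOSER = {
    (0, 0): "healthy",
    (1, 0): "actuator 1 fault",
    (0, 1): "actuator 2 fault",
    (1, 1): "disturbance (not isolable)",
}

def diagnose(residuals, thresholds=(0.5, 0.5)):
    return DIAGNOSER[residual_events(residuals, thresholds)]
```

The paper's diagnoser additionally exploits the *sequence* of events over time, which is what gives robustness to large disturbances; this sketch shows only the single-snapshot mapping.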

2009

In this paper we present a sound-source model for localising and tracking an acoustic source of interest along the azimuth plane in acoustically cluttered environments, for a mobile service robot. The model we present is a hybrid architecture using cross-correlation and recurrent neural networks to develop a robotic model accurate and robust enough to perform within an acoustically cluttered environment. This model has been developed with considerations of both processing power and physical robot size, allowing it to be deployed onto a wide variety of robotic systems where power consumption and size are limitations. The development of the system we present takes its inspiration from the central auditory system (CAS) of the mammalian brain. In this paper we describe experimental results of the proposed model, including the experimental methodology for testing sound-source localisation systems. The results of the system are shown in both restricted test environments and in real-world conditions. This paper shows how a hybrid architecture using band pass filtering, cross-correlation and recurrent neural networks can be used to develop a robust, accurate and fast sound-source localisation model for a mobile robot.
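The cross-correlation front end of such a system can be sketched as follows: estimate the inter-microphone delay of a signal from the lag that maximizes the cross-correlation, then convert the time difference of arrival to an azimuth angle. Microphone spacing and sample rate are assumed values for illustration, not the robot's actual configuration.

```python
# Sketch of a cross-correlation azimuth estimator (illustrative values).

import math
import random

def delay_by_xcorr(left, right, max_lag):
    """Return the lag (in samples) of `right` relative to `left`
    that maximizes their cross-correlation."""
    def score(lag):
        return sum(left[i] * right[i + lag]
                   for i in range(len(left))
                   if 0 <= i + lag < len(right))
    return max(range(-max_lag, max_lag + 1), key=score)

def azimuth_deg(lag, fs=16000, mic_dist=0.2, c=343.0):
    """Convert a sample lag to an azimuth via sin(theta) = tdoa * c / d."""
    tdoa = lag / fs                          # seconds
    x = max(-1.0, min(1.0, tdoa * c / mic_dist))
    return math.degrees(math.asin(x))

rng = random.Random(0)
sig = [rng.gauss(0, 1) for _ in range(400)]
left = sig + [0.0] * 5
right = [0.0] * 5 + sig                      # right channel lags by 5 samples
lag = delay_by_xcorr(left, right, max_lag=10)
```

In the hybrid architecture the recurrent network then filters and tracks these instantaneous estimates over time, which is what provides robustness in cluttered acoustic conditions.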

2003, 10th IEEE International Conference and Workshop on the Engineering of Computer-Based Systems, 2003. Proceedings.

This article presents a hybrid software/hardware architecture for commander's decision support in tactical operations. The architecture builds on the symbolic, object-oriented visualization software called Advanced Tactical Architecture for Combat Knowledge System (ATACKS). The extension discussed here is the design of a real-time robot agent layer that interacts wirelessly with ATACKS. This layer enacts decisions made by software agents (wargamers), continuously relays the execution states back to ATACKS, and updates its actions as advocated by replanning algorithms. The software layer is briefly described followed by the specification of the real-time requirements for the robotic architecture. The design and implementation are given with a small example that illustrates the hybrid system's operation.

1997, Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robot and Systems. Innovative Robotics for Real-World Applications. IROS '97

This paper presents a general methodology for the specification and the integration of functional modules in a distributed reactive robot architecture. The approach is based on a hybrid architecture basically composed of two levels: a lower distributed functional level controlled by a centralized decisional level. Due to this methodology, synchronous or asynchronous operating capabilities (servo-control, data processing, event monitoring) can

2007, Proceedings of the Workshop on Deep Linguistic Processing - DeepLP '07

This paper describes the Spanish Resource Grammar, an open-source multi-purpose broad-coverage precise grammar for Spanish. The grammar is implemented on the Linguistic Knowledge Builder (LKB) system, it is grounded in the theoretical framework of Head-driven Phrase Structure Grammar (HPSG), and it uses Minimal Recursion Semantics (MRS) for the semantic representation. We have developed a hybrid architecture which integrates shallow processing functionalities -morphological analysis, and Named Entity recognition and classification -into the parsing process. The SRG has a full coverage lexicon of closed word classes and it contains 50,852 lexical entries for open word classes. The grammar also has 64 lexical rules to perform valence changing operations on lexical items, and 191 phrase structure rules that combine words and phrases into larger constituents and compositionally build up their semantic representation. The annotation of each parsed sentence in an LKB grammar simultaneously represents a traditional phrase structure tree, and a MRS semantic representation. We provide evaluation results on sentences from newspaper texts and discuss future work.

2010

In this paper we present a hybrid reactive/deliberative approach to the multi-robot integrated exploration problem. In contrast to other works, the design of the reactive and deliberative processes is exclusively oriented to the exploration, both having the same importance level. The approach is based on the concepts of expected safe zone and gateway cell. The reactive exploration of the expected safe zone of the robot by means of basic behaviours avoids the presence of local minima. Simultaneously, a planner builds up a decision tree in order to decide between exploring the current expected safe zone or changing to another zone by means of travelling to a gateway cell. Furthermore, the model takes into account the degree of localization of the robots to return to previously explored areas when it is necessary to recover the certainty in the position of the robots. Several simulations demonstrate the validity of the approach. Please cite this article as: Juliá, M., et al., A hybrid solution to the multi-robot integrated exploration problem.

2009, Journal of Asian Electric Vehicles

In this paper, we describe the global approach that has been developed at INRETS-LTE for hybrid powertrain sizing, consumption gain evaluation, and optimal control. The procedure is described in the particular case of a two clutch parallel hybrid architecture, but could apply to any other configuration. Firstly the context and the common issues related to this subject are described. Then our global approach is detailed: sizing procedure, offline global optimization process, method for obtaining suboptimal online energy management laws, and the use of these laws on the test bench. Simulation or optimization results are presented at each step of the procedure.

2001

This paper addresses the problem of performing structural and semantic analysis of data where the syntactic and semantic models of the domain are inadequate, and robust methods must be employed to perform a "best approximation" to a complete analysis. This problem is particularly pertinent in the domain of text analysis. The ability to deal with large amounts of possibly ill-formed or unforeseen text is one of the principal objectives of current research in Natural Language Processing by computer (NLP), an ability which is particularly necessary for advanced information extraction and retrieval from large textual corpora. The results of this work can, however, be applied in other domains where a mix of partial grammatical and semantic models exists, such as in image analysis. This paper proposes an investigation of such methods and their integration with structural analysis into a hybrid architecture.

2011

Heterogeneous computers with processors and accelerators are becoming widespread in scientific computing. However, it is difficult to program hybrid architectures and there is no commonly accepted programming model. Ideally, applications should be written in a way that is portable to many platforms, but providing this portability for general programs is a hard problem.

1998, Expert Systems with Applications

How to 'refuel' a nuclear power reactor, when it is shut down every year or so between two successive operation cycles, is the 'in-core fuel management' problem. To solve it, it is necessary to design and simulate a safe and efficient fuel loading pattern. 'Reload design' plays a crucial role in nuclear power plant operation, in terms of both economy and safety. This article presents FuelGen, a system embodying a specialized genetic algorithm for designing refuellings. The tests on well-researched cases have shown that the algorithm is capable of finding a better loading pattern-enabling the reactor to run both longer and more efficiently per cycle-than solutions reported in the domain literature and found by other methods, such as expert systems and simulated annealing. Over a decade, the parent-project Fuelcon first inaugurated the rule-driven refuelling paradigm, then turned to probing hybrid architectures. Its sequel, FuelGen, radically supersedes Fuelcon's search mechanism, while retaining the architectural and ergonomic outlook that Fuelcon had evolved.

2007, Journal of Power Sources

Nowadays, new technologies and breakthroughs in the field of energy efficiency, alternative fuels and added-value electronics are leading to larger, more sustainable and green-thinking applications. Within the Automotive Industry, there is a clear declaration of commitment to the environment and natural resources. The presence of passenger vehicles of hybrid architecture, public transport powered by cleaner fuels, non-aggressive utility vehicles and an encouraging social awareness are bringing to light a new scenario where conventional and advanced solutions will be in force.

2004, Proceedins of the 14th ACM Great Lakes symposium on VLSI - GLSVLSI '04

In this paper we compare the routing architecture of island-style FPGAs based on field-programmable switch boxes with a mask-programmable routing structure, in order to assess its position in the design space of routing opportunities available to VLSI IC designers. Although the results presented in this work depend on a few implementation details that will be discussed in the paper, the mask-programmable routing structure shows a large area saving and delay improvement with respect to the field-programmable switch box. As a consequence, we believe that between the two bounds of the design space, i.e., ASICs and FPGAs, there are several hybrid architectural solutions trading off performance, power, area, and programmability, which in the future can be considered for different applications.

2009, IEEE Transactions on Signal Processing - TSP

This paper presents a novel hybrid encoding method for encoding of low-density parity-check (LDPC) codes. The design approach is applied to design 10-Gigabit Ethernet transceivers over copper cables. For a specified encoding speed, the proposed method requires substantially lower complexity in terms of area and storage. Furthermore, this method is generic and can be adapted easily for other LDPC codes. One major advantage of this design is that it does not require column swapping and it maintains compatibility with optimized LDPC decoders. For a 10-Gigabit Ethernet transceiver compliant with the IEEE 802.3an standard, the proposed sequential (5-Parallel) hybrid architecture has the following implementation properties: critical path: (log2(324) + 1)·T_XOR + T_AND; number of XOR gates: 11,056; number of AND gates: 1,620; and ROM storage: 104,976 bits (which can be reduced to 52,488 bits using additional hardware). This method achieves comparable critical path, and requires 74%...

2011, 2011 9th IEEE/ACS International Conference on Computer Systems and Applications (AICCSA)

Multicore architectures enhanced with multiple GPUs are likely to become mainstream High Performance Computing (HPC) platforms in the near future. In this paper, we present the design and implementation of an LU factorization using a tile algorithm that can fully exploit the potential of such platforms in spite of their complexity. We use a methodology derived from previous work on Cholesky and QR factorizations. Our contributions essentially consist of providing new CPU/GPU hybrid LU kernels, studying the impact on performance of the looking variants as well as the storage layout in the presence of pivoting, and tuning the kernels for two different machines composed of multiple recent NVIDIA Tesla S1070 (four GPUs total) and Fermi-based S2050 GPUs (three GPUs total), respectively. The hybrid tile LU asymptotically achieves 1 Tflop/s in single precision on both machines. The performance in double precision arithmetic reaches 500 Gflop/s on the Fermi-based system, twice as fast as the old Tesla S1070 GPU generation. We also discuss the impact of the number of tiles on the numerical stability. We show that the numerical results of the tile LU factorization will be accurate enough for most applications as long as the computations are performed in double precision arithmetic.

We propose a hybrid architecture for the NTCIR-5 CLQA C-C (Cross Language Question Answering from Chinese to Chinese) Task. Our system, the Academia Sinica Question-Answering System (ASQA), outputs exact answers to six types of factoid question: personal names, location names, organization names, artifacts, times, and numbers. The architecture of ASQA comprises four main components: Question Processing, Passage Retrieval, Answer Extraction, and Answer Ranking. ASQA successfully combines machine learning and knowledge-based approaches to answer Chinese factoid questions, achieving 37.5% and 44.5% Top1 accuracy for correct, and correct+unsupported answers, respectively.

2000, IEEE Transactions on Audio, Speech, and Language Processing

We propose a novel context-dependent (CD) model for large-vocabulary speech recognition (LVSR) that leverages recent advances in using deep belief networks for phone recognition. We describe a pre-trained deep neural network hidden Markov model (DNN-HMM) hybrid architecture that trains the DNN to produce a distribution over senones (tied triphone states) as its output. The deep belief network pre-training algorithm is a robust and often helpful way to initialize deep neural networks generatively that can aid in optimization and reduce generalization error. We illustrate the key components of our model, describe the procedure for applying CD-DNN-HMMs to LVSR, and analyze the effects of various modeling choices on performance. Experiments on a challenging business search dataset demonstrate that CD-DNN-HMMs can significantly outperform the conventional context-dependent Gaussian mixture model (GMM)-HMMs, with an absolute sentence accuracy improvement of 5.8% and 9.2% (or relative error reduction of 16.0% and 23.2%) over the CD-GMM-HMMs trained using the minimum phone error rate (MPE) and maximum-likelihood (ML) criteria, respectively.
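The core of the hybrid decoding step — turning the network's senone posteriors into scaled likelihoods the HMM decoder can use, via p(x|s) ∝ p(s|x)/p(s) — can be sketched in a few lines. The probabilities below are made-up toy values, not figures from the paper:

```python
import numpy as np

# One frame of DNN output (senone posteriors) and the senone priors,
# typically estimated from forced alignments. Values are illustrative.
log_posteriors = np.log(np.array([0.7, 0.2, 0.1]))  # log p(s|x)
log_priors = np.log(np.array([0.5, 0.3, 0.2]))      # log p(s)

# Scaled log-likelihood: log p(x|s) up to a per-frame constant.
scaled_loglik = log_posteriors - log_priors
best_senone = int(np.argmax(scaled_loglik))
```

Working in the log domain keeps this numerically stable, and the per-frame constant p(x) cancels out in Viterbi decoding, so it can be dropped.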

2011

Sparse iterative linear solvers are critical for large-scale scientific simulations, many of which spend the majority of their run time in solvers. Algebraic Multigrid (AMG) is a popular solver because of its linear run-time complexity and its proven scalability in distributed-memory ...

1995, Lecture Notes in Computer Science

This paper presents a novel approach to the problem of designing autonomous agents that is based on the idea of anticipatory systems. An anticipatory system has a model of itself and of the relevant part of its environment and will use this model to predict the future. The predictions are then utilised to determine the agent's behaviour, i.e., it lets future states affect its present states. We argue that systems based on causal reasoning only are too limited to serve as a proper base for designing autonomous agents. An anticipatory agent, on the other hand, will use reasoning from final cause to guide its current actions. We then discuss to what extent an anticipatory agent can be constructed from computable functions, and we conclude that this problem is best expressed and analysed in linguistic terms. This discussion points out how such an agent should be designed and results in a proposal of an appropriate architecture. However, as the meta-linguistic problems involved are very hard to solve, a simpler architecture is also proposed. This is also a hybrid architecture that synthesizes reactive behaviour and deliberative reasoning, which, we believe, still has its merits compared to previous approaches to the design of autonomous agents. Finally, we discuss introspection and reflection, and show that the underlying concepts are easy to comprehend in the context of anticipatory systems.

2009, ACM SIGPLAN Notices

A trend is developing in high performance computing in which commodity processors are coupled to various types of computational accelerators. Such systems are commonly called hybrid systems. In this paper, we describe our experience developing an implementation of the Linpack benchmark for a petascale hybrid system, the LANL Roadrunner cluster built by IBM for Los Alamos National Laboratory. This system combines traditional x86-64 host processors with IBM PowerXCell™ 8i accelerator processors. The implementation of Linpack we developed was the first to achieve a performance result in excess of 1.0 PFLOPS, and made Roadrunner the #1 system on the Top500 list in June 2008. We describe the design and implementation of hybrid Linpack, including the special optimizations we developed for this hybrid architecture. We then present actual results for single node and multi-node executions. From this work, we conclude that it is possible to achieve high performance for certain applications on hybrid architectures when careful attention is given to efficient use of memory bandwidth, scheduling of data movement between the host and accelerator memories, and proper distribution of work between the host and accelerator processors.

2010, Hybrid Artificial …

In recent years there has been a growing interest in the combination of rules and ontologies. Notably, many works have focused on the theoretical aspects of such integration, sometimes leading to concrete solutions. However, solutions proposed so far typically reason upon crisp concepts, while concrete domains also require fuzzy expressiveness.

2003, Ftdcs

We propose a new architecture for on-demand media streaming centered around the peer-to-peer (P2P) paradigm.

2001, Computers in Industry

Remote manufacturing allows companies to dynamically establish manufacturing alliances by exploiting advantages of enabling information technologies. Product data management and information integration are two of the most important issues for success of remote manufacturing. In this paper, we studied the two issues by taking a case of a remote manufacturing implementation in a robot manufacturing company. An integrated product data model and the related configuration management methods are developed. Under a hybrid architecture of Web browser/server and client/server, an application system is presented to manipulate the product data and to carry out a variety of manufacturing functions. Finally, a CORBA standards-based integration framework is proposed to achieve interoperability among multiple data and application objects over the Internet and company Intranets. © 2001 Published by Elsevier Science B.V.

2011

Content cloud systems, e.g. CloudFront [1] and CloudBurst [2], in which content items are retrieved by end-users from the edge nodes of the cloud, are becoming increasingly popular. The retrieval latency in content clouds depends on content availability in the edge nodes, which in turn depends on the caching policy at the edge nodes. In the case of local content unavailability (i.e., a cache miss), edge nodes resort to source selection strategies to retrieve the content items either vertically from the central server, or horizontally from other edge nodes. Consequently, managing the latency in content clouds needs to take into account several interrelated issues: asymmetric bandwidth and caching capacity for both source types as well as edge node heterogeneity in terms of caching policies and source selection strategies applied.
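The cache-miss source selection described above can be sketched as a toy edge node. The LRU eviction policy and the "peer before server" rule here are illustrative assumptions, not the paper's exact strategies, and all names are hypothetical:

```python
from collections import OrderedDict

class EdgeNode:
    """Toy edge node: LRU cache plus a simple source-selection strategy
    (horizontal peer transfer preferred over the vertical central server)."""
    def __init__(self, capacity, peers=()):
        self.cache = OrderedDict()
        self.capacity = capacity
        self.peers = list(peers)

    def fetch(self, item):
        if item in self.cache:               # local hit
            self.cache.move_to_end(item)
            return "local"
        # cache miss: prefer a horizontal transfer from another edge node
        source = "peer" if any(item in p.cache for p in self.peers) else "server"
        self.cache[item] = True
        if len(self.cache) > self.capacity:  # evict least recently used
            self.cache.popitem(last=False)
        return source

a, b = EdgeNode(2), EdgeNode(2)
a.peers, b.peers = [b], [a]
assert b.fetch("video1") == "server"  # no edge node has it: vertical retrieval
assert a.fetch("video1") == "peer"    # b now caches it: horizontal retrieval
assert a.fetch("video1") == "local"   # cached locally after the first fetch
```

Even this toy shows the coupling the abstract mentions: which source a node picks depends on what its peers' caching policies have kept around.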

2010, 2010 International Conference on Autonomous and Intelligent Systems, AIS 2010

Over the past decades, the design and development of mission-based Autonomous Underwater Vehicles (AUVs) has continued to challenge researchers. Although AUV technology has matured and commercial systems have appeared on the market, a generic yet robust AUV command and control (C2) system still remains a key research area. This paper presents a command and control system architecture for modular AUVs. We particularly focus on the design and development of a generic control and software architecture for a single modular AUV while allowing natural extensions to multi-vehicle scenarios. The proposed C2 system has a hybrid modular-hierarchical control architecture. It adopts a top-down approach to mission-level decision making and task planning while utilizing a bottom-up approach for navigational control, obstacle avoidance and vehicle fault detection. Each level consists of one or more autonomous agent components handling different C2 tasks. This structure provides vehicle developers with an explicit view of clearly defined control responsibilities at different levels of the control hierarchy. The resulting C2 system is currently operational on the STARFISH AUV built at the ARL of the National University of Singapore. It has successfully executed several autonomous missions during sea trials carried out around the Singapore coastal area.
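The hybrid top-down/bottom-up split can be sketched as two cooperating layers: a deliberative layer that expands a mission into tasks, and a reactive layer that may pre-empt the current task. This is a minimal illustration of the control pattern, not the STARFISH implementation; all names and thresholds are hypothetical:

```python
# Deliberative layer (top-down): expand a mission into an ordered task list.
def deliberative_layer(mission):
    return [("goto", wp) for wp in mission["waypoints"]]

# Reactive layer (bottom-up): override the planned task when sensors report
# an obstacle closer than a (hypothetical) 5 m safety range.
def reactive_layer(task, sensors):
    if sensors.get("obstacle_range", float("inf")) < 5.0:
        return ("avoid", sensors["obstacle_bearing"])
    return task

mission = {"waypoints": [(0, 10), (10, 10)]}
tasks = deliberative_layer(mission)
# clear water: the planned task passes through unchanged
assert reactive_layer(tasks[0], {}) == ("goto", (0, 10))
# obstacle at 3 m: the reactive layer pre-empts the mission task
assert reactive_layer(tasks[0], {"obstacle_range": 3.0,
                                 "obstacle_bearing": 45})[0] == "avoid"
```

The point of the hierarchy is exactly this separation: mission planning never needs to know about obstacles, and obstacle avoidance never needs to know about the mission.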

2007, Lecture Notes in Computer Science

Data Grids are today's emerging infrastructure providing specialized services for handling large datasets that need to be transferred and replicated among different grid sites. Data replication is an important technique for data availability and fast access. In this paper we present a comparison of various replication models and techniques employed by some major topologies used in data grid environments. We focus on dynamic strategies for replica placement in tree, Peer-to-Peer (P2P) and hybrid architectures. Besides the tree model, which is implemented in many Data Grid applications, hybrid and P2P grid models of replication are also emerging to provide scientific communities with better availability and efficient access to massive data.

2010

Spectrum sensing is an important process in cognitive communication and must be performed accurately. In this paper we propose a low-complexity detector based on a combination of two well-known and complementary spectrum sensing methods: energy and cyclostationary detection. The cyclostationary detector is used to estimate the noise level N0, which is then used to fix the threshold of the energy detector. Simulation results show promising performance of the proposed detector at low Signal-to-Noise Ratio (SNR).
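The energy-detection half of this combination can be sketched directly: compare the average sample energy to a threshold fixed from the estimated noise level N0. Here N0 is simply given (standing in for the cyclostationary stage's estimate), and the margin factor `alpha` is an illustrative choice, not the paper's threshold rule:

```python
import numpy as np

def energy_detector(samples, n0, alpha=3.0):
    """Declare the band occupied if the average sample energy exceeds a
    threshold derived from the estimated noise level n0."""
    energy = np.mean(np.abs(samples) ** 2)
    return energy > alpha * n0

rng = np.random.default_rng(1)
n0 = 1.0                                   # assumed noise level (variance)
noise = rng.normal(scale=np.sqrt(n0), size=4096)
signal = 3.0 * np.ones(4096) + noise       # strong component plus noise
assert not energy_detector(noise, n0)      # noise-only: below threshold
assert energy_detector(signal, n0)         # signal present: above threshold
```

The known weakness of plain energy detection is that a wrong N0 shifts the threshold, which is precisely why the abstract pairs it with a cyclostationary stage to estimate the noise level.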

1999, International Journal of General Systems

Zadeh's Fuzzy Sets are extended with the Dempster-Shafer Theory of Evidence into a new mathematical structure called Evidence Sets, which can capture more efficiently all recognized forms of uncertainty in a formalism that explicitly models the subjective context dependencies of linguistic categories. A belief-based theory of Approximate Reasoning is proposed for these structures. Evidence sets are then used in the development of a relational database architecture useful for the data mining of information stored in several networked databases. This useful data mining application establishes an Artificial Intelligence model of Cognitive Categorization with a hybrid architecture that possesses both connectionist and symbolic attributes.

A core area of P2P systems research is the topology of the overlay network. It has ranged from random unstructured networks like Gnutella [8] to the recent trend of structured overlays based on Distributed Hash Tables (DHTs). While the unstructured networks have excessive lookup costs and unguaranteed lookups, the structured systems offer no anonymity and delegate control over data items to unrelated peers. In this paper, we present an in-the-middle hybrid architecture which uses a mix of both topologies to create a decentralized P2P infrastructure. The system provides scalable and guaranteed lookups in addition to mutual anonymity, and also allows hosting content with the content-owner. We validate our architecture through a thorough analytical and empirical performance analysis of the system.
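One way such a hybrid keeps content with its owner is to let the structured layer store only pointers: a consistent-hashing ring of index nodes resolves a key to the owning peer, and the actual transfer happens owner-to-requester. This is an illustrative sketch of that idea, not the paper's protocol; all names are hypothetical:

```python
import hashlib

def h(key):
    """Deterministic hash used to place both keys and nodes on the ring."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class IndexRing:
    """Consistent-hashing ring of index nodes holding key -> owner pointers."""
    def __init__(self, nodes):
        self.ring = sorted((h(n), n) for n in nodes)
        self.pointers = {n: {} for n in nodes}

    def responsible(self, key):
        k = h(key)
        for hv, n in self.ring:          # first node clockwise of the key
            if hv >= k:
                return n
        return self.ring[0][1]           # wrap around the ring

    def publish(self, key, owner):
        self.pointers[self.responsible(key)][key] = owner

    def lookup(self, key):
        return self.pointers[self.responsible(key)].get(key)

ring = IndexRing(["idx1", "idx2", "idx3"])
ring.publish("song.mp3", owner="peer42")
assert ring.lookup("song.mp3") == "peer42"  # pointer found; fetch from owner
assert ring.lookup("missing") is None
```

Because the index nodes hold only pointers, lookups stay structured and guaranteed while the data item itself never leaves the content-owner's control.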

1991, International Journal of Human-computer Studies / International Journal of Man-machine Studies

The Knowledge Acquisition (KA) process consists of extracting and representing the knowledge of a domain expert. One goal of this work is to minimize the intrinsic difficulties of the KA process. We obtained all possible rules from the domain expert in a short time, together with a set of examples. As a further goal, we propose a Hybrid Expert System (HES) that reduces the problems of the KA task using a new methodology. Building this kind of hybrid architecture has led us to use many tools: the symbolic paradigm, the connectionist paradigm, fuzzy logic, and Genetic Algorithms (GAs). Another goal of this paper is to present two new algorithms: the first is a learning algorithm for fuzzy feed-forward neural networks that also addresses the complexity problem of optimizing the network topology; the second extracts fuzzy rules from a trained fuzzy neural network. The learning algorithm was inspired by the classical back-propagation algorithm, with some variations due to the kind of network used; the rule-extraction algorithm also has some particularities. Both the HES methodology and the two algorithms were tested on toy and real problems.

The increasing acceptance of XML as a standard for document markup promises to provide solutions for the problems of document management and retrieval. However, existing documents must be converted into XML. In this paper we present the AutoTag system, which automatically converts text documents into XML. The system has a hybrid architecture, arranging tagged documents on a two-dimensional Self-Organizing Map (SOM) such that nearby locations contain similar documents, and then using an inductive learning algorithm to automatically extract and apply auto-tagging rules from the neighbours of an untagged document. The system is adaptive, so that once a document is tagged in XML, AutoTag learns from its errors in order to improve accuracy. The automatically tagged documents can subsequently be categorized on the Self-Organizing Map, further improving the map's resolution. Our system has been evaluated on a number of different domains, giving good results.

2011, 2011 13th International Conference on Transparent Optical Networks

Control on the optical response of 2-dimensional colloidal photonic crystals was achieved by combining them with thin metal films. In such hybrids the light transport depends on the parameters of the sphere lattice and the topology of the metal film. This approach allows mixing Bloch modes and surface plasmon polariton modes in the total optical response of hybrid architectures. Stronger modification of the hybrid's optical properties was achieved in the case of tighter light confinement.

2009, Journal of Applied Physics

The general purpose of spin electronics is to take advantage of the electron's spin in addition to its electrical charge to build innovative electronic devices. These devices combine magnetic materials, which are used as spin polarizer or analyzer, together with semiconductors or insulators, resulting in innovative hybrid CMOS/magnetic (Complementary MOS) architectures. In particular, magnetic tunnel junctions (MTJs) can be used for the design of magnetic random access memories [S. Tehrani, Proc. IEEE 91, 703 (2003)], magnetic field programmable gate arrays [Y. Guillement, International Journal of Reconfigurable Computing, 2008], low-power application specific integrated circuits [S. Matsunaga, Appl. Phys. Express 1, 091301 (2008)], and RF oscillators. The thermally assisted switching (TAS) technology requires heating the MTJ before writing it by means of an external field. It reduces the overall power consumption, solves the data writing selectivity issues, and improves the thermal stability of the written information for high density applications. The design of hybrid architectures requires an MTJ compact model which can be used in standard electrical simulators of the industry. As a result, complete simulations of CMOS/MTJ hybrid circuits can be performed before experimental realization and testing. This article presents a highly accurate model of the MTJ based on the TAS technology. It is compatible with the Spectre electrical simulator of the Cadence design suite.

2005

We compare Action Selection and Schema mechanisms for robotic control, focusing mainly on the reactive vs. anticipatory distinction. We present AKIRA, an agent-based hybrid architecture, focusing on its capabilities for designing fuzzy-based Schema models. We implement reactive and anticipatory mechanisms in AKIRA and compare them in an experimental set-up in the Visual Search domain.

Modern reconfigurable computing systems feature powerful hybrid architectures with multiple microprocessor cores, large reconfigurable logic arrays and distributed memory hierarchies. Mapping applications to these complex systems requires a representation that allows both hardware and software synthesis. Additionally, this representation must enable optimizations that exploit fine and coarse grained parallelism in order to effectively utilize the performance of the underlying reconfigurable architecture. Our work explores a representation based on the program dependence graph (PDG) combined with static single-assignment (SSA) form for synthesis to high performance reconfigurable devices. The PDG effectively describes control dependencies, while SSA yields precise data dependencies. When used together, these two representations provide a powerful, synthesizable form that exploits both fine and coarse grained parallelism. Compared to other commonly used representations for reconfigurable systems, the PDG+SSA form yields faster execution times, while using similar area.
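Why SSA "yields precise data dependencies" can be shown with a tiny renaming pass for straight-line code: once every assignment gets a fresh version, each read names exactly the write it depends on. This toy ignores control flow and phi nodes entirely and is only an illustration of the principle, not the paper's synthesis flow:

```python
def to_ssa(stmts):
    """Rename straight-line assignments into SSA form.
    stmts: list of (target_variable, [variables read by the RHS])."""
    version, out = {}, []
    for target, expr_vars in stmts:
        # reads refer to the current version of each variable (0 if unset)
        renamed = [f"{v}{version.get(v, 0)}" for v in expr_vars]
        # each write creates a fresh version of the target
        version[target] = version.get(target, 0) + 1
        out.append((f"{target}{version[target]}", renamed))
    return out

# x = a + b; x = x * c; y = a + b
ssa = to_ssa([("x", ["a", "b"]), ("x", ["x", "c"]), ("y", ["a", "b"])])
assert ssa == [("x1", ["a0", "b0"]),
               ("x2", ["x1", "c0"]),
               ("y1", ["a0", "b0"])]
```

In the renamed form it is immediately visible that `y1` reads `a0` and `b0`, so it is independent of both writes to `x` and could be scheduled in parallel with them, which is exactly the fine-grained parallelism a hardware synthesizer wants to expose.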