Computational Grid Research Papers - Academia.edu


The hybridization of heuristic methods aims at exploring the synergies among stand-alone heuristics in order to achieve better results for the optimization problem under study. In this paper we present a hybridization of Genetic Algorithms (GAs) and Tabu Search (TS) for scheduling in computational grids. The purpose of this hybridization is to combine the exploration of the solution space by a population of individuals with the exploitation of solutions through the smart search of TS. Our GA(TS) hybrid algorithm runs the GA as ...
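As a rough illustration of this kind of GA+TS hybrid (not the authors' exact GA(TS) algorithm), the sketch below assigns jobs to grid machines, scores assignments by makespan over a hypothetical expected-time-to-compute (ETC) matrix, and lets a short tabu search refine the best GA individual each generation. The neighborhood (single-job reassignment), parameters, and ETC values are all assumptions made for the example.

```python
import random

def makespan(assignment, etc):
    """Makespan of an assignment: maximum total expected time on any machine."""
    loads = [0.0] * len(etc[0])
    for job, machine in enumerate(assignment):
        loads[machine] += etc[job][machine]
    return max(loads)

def tabu_search(assignment, etc, iters=30, tenure=5):
    """Exploit a single solution: move one job to another machine per step."""
    best = list(assignment)
    current = list(assignment)
    tabu = []  # recently moved jobs
    for _ in range(iters):
        candidates = []
        for job in range(len(current)):
            if job in tabu:
                continue
            for machine in range(len(etc[0])):
                if machine == current[job]:
                    continue
                neighbour = list(current)
                neighbour[job] = machine
                candidates.append((makespan(neighbour, etc), job, neighbour))
        if not candidates:
            break
        cost, job, current = min(candidates, key=lambda c: c[0])
        tabu.append(job)
        if len(tabu) > tenure:
            tabu.pop(0)
        if cost < makespan(best, etc):
            best = list(current)
    return best

def ga_ts(etc, pop_size=20, generations=50):
    """Explore with a GA population; exploit the elite with TS each generation."""
    n_jobs, n_machines = len(etc), len(etc[0])
    pop = [[random.randrange(n_machines) for _ in range(n_jobs)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda a: makespan(a, etc))
        pop[0] = tabu_search(pop[0], etc)        # refine the elite with TS
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, n_jobs)
            child = p1[:cut] + p2[cut:]          # one-point crossover
            if random.random() < 0.2:            # mutation: reassign one job
                child[random.randrange(n_jobs)] = random.randrange(n_machines)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda a: makespan(a, etc))

# Toy example: 6 jobs on 3 machines with made-up ETC values.
etc = [[random.uniform(1, 10) for _ in range(3)] for _ in range(6)]
print(makespan(ga_ts(etc), etc))
```

The design choice mirrors the abstract: the GA population provides broad exploration, while TS, applied only to the elite, provides intensive local exploitation without the cost of refining every individual.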


Due to the advances in human civilization, problems in science and engineering are becoming more complicated than ever before. To solve these complicated problems, grid computing has become a popular tool. A grid environment collects, integrates, and uses heterogeneous or homogeneous resources scattered around the globe via a high-speed network. Grid environments can be classified into two types: computing grids and data grids. This paper mainly focuses on computing grids. In a computing grid, job scheduling is a very important task. A good scheduling algorithm can assign jobs to resources efficiently and can balance the system load. In this paper, we propose a hierarchical framework and a job scheduling algorithm called the Hierarchical Load Balanced Algorithm (HLBA) for grid environments. In our algorithm, we use the system load as a parameter in determining a balance threshold, and the scheduler adapts the balance threshold dynamically when the system load changes. The main contributions of this paper are twofold: first, the scheduling algorithm balances the system load with an adaptive threshold; second, it minimizes the makespan of jobs. Experimental results show that the performance of HLBA is better than that of other algorithms.
► A hierarchical framework and a job scheduling algorithm for grids are proposed.
► The algorithm is called the Hierarchical Load Balanced Algorithm (HLBA).
► The main contributions are system load balancing and makespan minimization.
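The following is a minimal, flat (non-hierarchical) sketch of the adaptive-threshold idea only: the balance threshold tracks the current average system load rather than being fixed, jobs go to the least-loaded resource, and rebalancing triggers when a resource exceeds the threshold. The class names, the slack factor `alpha`, and the rebalancing rule are assumptions, not the paper's actual HLBA.

```python
class Resource:
    def __init__(self, name, speed):
        self.name = name
        self.speed = speed          # relative processing speed
        self.queued_work = 0.0      # outstanding work assigned to this resource

    def load(self):
        return self.queued_work / self.speed


class AdaptiveScheduler:
    def __init__(self, resources, alpha=1.2):
        self.resources = resources
        self.alpha = alpha          # hypothetical slack factor on the average load

    def balance_threshold(self):
        # The threshold adapts to the current system load instead of being fixed.
        avg_load = sum(r.load() for r in self.resources) / len(self.resources)
        return self.alpha * avg_load

    def submit(self, job_size):
        # Greedy step toward makespan minimization: least-loaded resource first.
        target = min(self.resources, key=Resource.load)
        target.queued_work += job_size
        if target.load() > self.balance_threshold():
            self.rebalance()

    def rebalance(self):
        # Shift work from the most to the least loaded resource (illustrative only).
        busiest = max(self.resources, key=Resource.load)
        idlest = min(self.resources, key=Resource.load)
        shift = (busiest.load() - idlest.load()) * busiest.speed / 2
        shift = min(shift, busiest.queued_work)
        busiest.queued_work -= shift
        idlest.queued_work += shift


sched = AdaptiveScheduler([Resource("r1", 1.0), Resource("r2", 2.0), Resource("r3", 0.5)])
for size in [4, 7, 2, 9, 5, 1]:
    sched.submit(size)
print({r.name: round(r.load(), 2) for r in sched.resources})
```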


We present a novel level set representation and front propagation scheme for active contours where the analysis/evolution domain is sampled by an unstructured point cloud. These sampling points are adaptively distributed according to both local data and level set geometry, hence allowing extremely convenient enhancement/reduction of local front precision by simply putting more/fewer points on the computation domain, without grid refinement (as in finite difference schemes) or remeshing (typical in finite element methods). The front evolution process is then conducted on the point-sampled domain, without the use of a computational grid or mesh, through either the precise but relatively expensive moving least squares (MLS) approximation of the continuous domain, or the faster yet coarser generalized finite difference (GFD) representation and calculations. Because of the adaptive nature of the sampling point density, our strategy performs fast marching and level set local refinement concurrently. We have evaluated the performance of the method in image segmentation and shape recovery applications using real and synthetic data.
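To make the MLS building block concrete, here is a hedged sketch of evaluating a scalar field (such as a level set function) and its gradient at an arbitrary location from values sampled on an unstructured 2-D point cloud. The linear basis, Gaussian weight, and support radius are assumptions for illustration; the paper does not fix these choices in the abstract.

```python
import numpy as np

def mls_evaluate(points, values, query, radius=0.3):
    """Fit phi(x, y) ~ a0 + a1*x + a2*y with weights centred on the query point."""
    diffs = points - query
    dist2 = np.sum(diffs**2, axis=1)
    w = np.exp(-dist2 / radius**2)              # Gaussian weight, support ~radius
    mask = w > 1e-8
    if mask.sum() < 3:
        raise ValueError("not enough neighbours inside the support radius")
    basis = np.column_stack([np.ones(mask.sum()),
                             points[mask, 0], points[mask, 1]])
    sw = np.sqrt(w[mask])
    coeffs, *_ = np.linalg.lstsq(basis * sw[:, None],
                                 values[mask] * sw, rcond=None)
    a0, a1, a2 = coeffs
    value = a0 + a1 * query[0] + a2 * query[1]
    gradient = np.array([a1, a2])               # needed for front propagation
    return value, gradient


# Toy data: sample phi(x, y) = x^2 + y^2 - 0.25 (zero level set = circle of
# radius 0.5) at 500 random points, then recover value and gradient elsewhere.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(500, 2))
phi = pts[:, 0]**2 + pts[:, 1]**2 - 0.25
val, grad = mls_evaluate(pts, phi, np.array([0.3, 0.1]))
print(round(val, 3), np.round(grad, 2))
```

Refining the front locally then amounts to inserting more sample points near it; no grid refinement or remeshing is needed, only a denser neighbourhood for the weighted fit.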


The capability to predict the host load of a system is significant for computational grids to make efficient use of shared resources. This paper attempts to improve the accuracy of host load predictions by applying a neural network predictor, with the goals of best performance and load balance. We describe the feasibility of the proposed predictor in a dynamic environment and perform an experimental evaluation using collected load traces. The results show that the neural network achieves a consistent performance improvement with surprisingly low overhead. Compared with the best previously proposed method, the typical 20:10:1 network reduces the mean and standard deviation of the prediction errors by approximately 60% and 70%, respectively. The training and testing time is extremely low: the network needs only a couple of seconds to be trained with more than 100,000 samples, and it can then make tens of thousands of accurate predictions within a second.
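A minimal sketch of the 20:10:1 setup, assuming the 20 inputs are lagged load samples feeding one 10-unit hidden layer that predicts the next load value. The synthetic trace, the scikit-learn model, and the training settings are stand-ins chosen for the example, not the paper's actual data or implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_windows(trace, lags=20):
    """Turn a 1-D load trace into (lagged inputs, next-value targets)."""
    X = np.array([trace[i:i + lags] for i in range(len(trace) - lags)])
    y = trace[lags:]
    return X, y

# Synthetic host-load trace: slow oscillation plus noise, values roughly in [0, 1].
rng = np.random.default_rng(1)
t = np.arange(5000)
trace = 0.5 + 0.3 * np.sin(2 * np.pi * t / 200) + 0.05 * rng.standard_normal(t.size)

X, y = make_windows(trace)            # 20 lagged samples -> next sample
split = int(0.8 * len(X))

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=500, random_state=0)
model.fit(X[:split], y[:split])       # train on the first 80% of the trace

pred = model.predict(X[split:])       # one-step-ahead predictions on the rest
errors = pred - y[split:]
print("mean abs error:", round(float(np.mean(np.abs(errors))), 4))
print("std of error:  ", round(float(np.std(errors)), 4))
```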


We present a Geant4-based application for the simulation of the absorbed dose distribution delivered by a medical linac used for intensity modulated radiation therapy (IMRT). The linac geometry is accurately described in the Monte Carlo code using the accelerator manufacturer's specifications. The flexible design of this object-oriented system allows for easy configuration of the treatment head geometry for the various types of medical accelerators used in clinical practice. The precision of the software system relies on the application of the Geant4 Low Energy Electromagnetic models, which extend the treatment of electron and photon interactions down to low energies for precise dosimetry. The capability of the software to evaluate dose distributions has been verified by comparison with measurements in a water phantom; the comparisons were performed for percent depth dose (PDD) and for flatness at 15, 50 and 100 mm depth for various field sizes, for a 6 MV electron beam. The source-surf...
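As a small, hedged illustration of the verification quantities mentioned above (not of the Geant4 simulation itself), the sketch below normalises a depth-dose curve to percent depth dose and computes a flatness figure from a lateral profile. The flatness definition used here, (Dmax - Dmin)/(Dmax + Dmin) over the central 80% of the field, is a common convention and may differ from the one used in the paper; the dose curves are made-up placeholders.

```python
import numpy as np

def percent_depth_dose(depths_mm, dose):
    """Normalise a depth-dose curve to 100% at the depth of maximum dose."""
    dose = np.asarray(dose, dtype=float)
    return 100.0 * dose / dose.max(), depths_mm[int(np.argmax(dose))]

def flatness(positions_mm, profile, field_size_mm):
    """Flatness of a lateral dose profile over the central 80% of the field."""
    profile = np.asarray(profile, dtype=float)
    half = 0.4 * field_size_mm                     # central 80% -> +/- 40% of size
    central = profile[np.abs(positions_mm) <= half]
    return 100.0 * (central.max() - central.min()) / (central.max() + central.min())

# Toy data standing in for a simulated depth-dose curve and a lateral profile.
depths = np.arange(0, 300, 5)                      # mm
sim_dose = np.exp(-((depths - 15) / 120.0)**2)     # crude build-up + fall-off shape
pdd, d_max = percent_depth_dose(depths, sim_dose)
print("depth of maximum dose:", d_max, "mm")
print("PDD at 100 mm:", round(float(pdd[depths == 100][0]), 1), "%")

positions = np.arange(-100, 101, 2)                # mm, profile at 100 mm depth
profile = 1.0 - 0.0004 * positions**2 / 100.0      # slightly rounded profile
print("flatness:", round(flatness(positions, profile, field_size_mm=100), 2), "%")
```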


Grid computing offers the perspective of solving massive computational problems using a large number of computers arranged as clusters embedded in a distributed telecommunication infrastructure. It involves sharing heterogeneous resources (based on different platforms, hardware/software


Background: Despite continuous efforts of the international community to reduce the impact of malaria on developing countries, no significant progress has been made in recent years and the discovery of new drugs is more than ever needed. Out of the many proteins involved in the metabolic activities of the Plasmodium parasite, some are promising targets for rational drug discovery.

Motivation: Recent years have witnessed the emergence of grids, which are highly distributed computing infrastructures particularly well suited for embarrassingly parallel computations such as docking. In 2005, a first attempt at using grids for large-scale virtual screening focused on plasmepsins and resulted in the identification of previously unknown scaffolds, which were confirmed in vitro to be active plasmepsin inhibitors. Following this success, a second deployment took place in the fall of 2006, focusing on one well-known target, dihydrofolate reductase (DHFR), and on a new promising one, glutathione-S-transferase.

Methods: In silico drug design, especially virtual high-throughput screening (vHTS), is a widely used and well-accepted technology for lead identification and lead optimization. This approach therefore builds upon the progress made in computational chemistry, to achieve more accurate in silico docking, and in information technology, to design and operate large-scale grid infrastructures.

Results: On the computational side, a sustained infrastructure has been developed: docking at large scale, use of different strategies in result analysis, on-the-fly storage of the results into MySQL databases, and application of molecular dynamics refinement and MM-PBSA and MM-GBSA rescoring. The modeling results obtained are very promising. Based on the modeling results, in vitro experiments are underway for all the targets against which screening was performed.

Conclusion: This paper describes rational drug discovery activity at large scale, in particular molecular docking using the FlexX software on computational grids, to find hits against three different targets (PfGST, PfDHFR, and PvDHFR, wild-type and mutant forms) implicated in malaria. A grid-enabled virtual screening approach is proposed to produce focused compound libraries for other biological targets relevant to fighting the infectious diseases of the developing world.
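The bookkeeping side of such a deployment can be sketched as below, under clearly stated assumptions: many docking tasks score ligand batches against a target and write results into a database as they finish, so hits can be ranked and forwarded to refinement and rescoring. SQLite (standard library) stands in for the MySQL databases mentioned in the paper, `score_ligand()` is a placeholder for the FlexX docking engine, and the ZINC-style compound IDs are invented for the example.

```python
import random
import sqlite3

def score_ligand(target, ligand_id):
    """Placeholder for a real docking score (lower = better predicted binding)."""
    random.seed(hash((target, ligand_id)))
    return round(random.uniform(-45.0, -5.0), 2)

def run_batch(db_path, target, ligand_ids):
    """One grid task: dock a batch of ligands and store results on the fly."""
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS docking_results (
                        target TEXT, ligand_id TEXT, score REAL,
                        PRIMARY KEY (target, ligand_id))""")
    for ligand_id in ligand_ids:
        score = score_ligand(target, ligand_id)
        conn.execute("INSERT OR REPLACE INTO docking_results VALUES (?, ?, ?)",
                     (target, ligand_id, score))
        conn.commit()              # write as results arrive, not at the end
    conn.close()

def top_hits(db_path, target, n=10):
    """Rank stored poses; the best would go on to MD refinement and rescoring."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute("""SELECT ligand_id, score FROM docking_results
                           WHERE target = ? ORDER BY score LIMIT ?""",
                        (target, n)).fetchall()
    conn.close()
    return rows

ligands = [f"ZINC{100000 + i}" for i in range(200)]   # hypothetical compound IDs
for target in ("PfGST", "PfDHFR", "PvDHFR"):
    run_batch("screening.db", target, ligands)
print(top_hits("screening.db", "PfDHFR", n=5))
```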