Simulation Study Research Papers - Academia.edu
We have used coarse-grained simulation methods to investigate the effect of stretching-induced structure orientation on the proton conductivity of Nafion-like polymer electrolyte membranes. Our simulations show that uniaxial stretching causes the hydrophilic regions to become elongated in the stretching direction. This change has a strong effect on the proton conductivity, which is enhanced along the stretching direction, while the conductivity perpendicular to the stretched polymer backbone is reduced. In a humidified membrane, stretching also causes the perfluorinated side chains to tend to orient perpendicular to the stretching axis. This in turn affects the distribution of water at low water contents. The water forms a continuous network with narrow bridges between small water clusters absorbed in head group multiplets. In a dry membrane the side chains orient along the stretching direction.
In the present study, integrated models have been developed to find out the most appropriate cost of municipal solid waste (MSW) disposal using two potential and widely used methodologies, viz. landfill system with gas recovery (LFSGR) and aerobic composting (AC). Objective functions with important costs and benefits including externalities were developed to find out the net unit cost of disposal. Multivariate functional models have been developed for each activity of the objective functions. These integrated techno-economic models can be used not only to determine the most appropriate cost of waste disposal, but also to explain the interparametric linkages and even to compare the potentiality and suitability of a particular methodology for a set of conditions. This can give valuable information that can enhance environmental management leading to sustainable development. In the simulation studies carried out, LFSGR with its proven energy generating potential from MSW in the form of ...
One possible approach to cluster analysis is the mixture maximum likelihood method, in which the data to be clustered are assumed to come from a finite mixture of populations. The method has been well developed, and much used, for the case of multivariate normal populations. Practical applications, however, often involve mixtures of categorical and continuous variables. Everitt (1988) and Everitt and Merette (1990) recently extended the normal model to deal with such data by incorporating the use of thresholds for the categorical variables. The computations involved in this model are so extensive, however, that it is only feasible for data containing very few categorical variables. In the present paper we consider an alternative model, known as the homogeneous Conditional Gaussian model in graphical modelling and as the location model in discriminant analysis. We extend this model to the finite mixture situation, obtain maximum likelihood estimates for the population parameters, and show that computation is feasible for an arbitrary number of variables. Some data sets are clustered by this method, and a small simulation study demonstrates characteristics of its performance.
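As a minimal, hedged sketch of the mixture maximum likelihood idea this entry builds on — shown here in its simplest all-continuous form (a two-component univariate normal mixture fitted by EM), not the mixed categorical/continuous location model the paper actually develops — with made-up data and starting values:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(4.0, 1.0, 200)])

# Initial guesses for the mixing weight, means, and standard deviations.
w, mu, sd = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: posterior probability each point belongs to component 1.
    p1 = w * norm.pdf(x, mu[0], sd[0])
    p2 = (1.0 - w) * norm.pdf(x, mu[1], sd[1])
    r = p1 / (p1 + p2)
    # M-step: responsibility-weighted maximum likelihood updates.
    w = r.mean()
    mu = np.array([np.average(x, weights=r), np.average(x, weights=1.0 - r)])
    sd = np.sqrt([np.average((x - mu[0]) ** 2, weights=r),
                  np.average((x - mu[1]) ** 2, weights=1.0 - r)])
labels = (r < 0.5).astype(int)   # cluster by thresholding responsibilities
print(w, mu, sd)
```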
Long-term electroencephalographic (EEG) recordings are important in the presurgical evaluation of refractory partial epilepsy for the delineation of the ictal onset zones. In this paper we introduce a new concept for an automatic, fast and objective localisation of the ictal onset zone in ictal EEG recordings and show with a simulation study that canonical decomposition of ictal scalp EEG allows a robust and accurate localisation of the ictal onset zone.
We analyse the key algorithms of data and information fusion from a linguistic point of view, and show that they fall into two paradigms: the primarily syntactic, and the primarily semantic. We propose an alternative grammatical paradigm which exploits the ability of grammar to combine syntactic inference with semantic representation. We generalize the concept of formal generative grammar to include multiple rule classes, each having a topology and a base vocabulary. A generalized Chomsky hierarchy is defined. Analysing fusion algorithms in terms of grammatical representations, we find that most (including multiple hypothesis tracking) can be expressed in terms of conventional regular grammars. Situation analysis, however, is commonly attempted using first order predicate logic, which, while expressive, is recursively enumerable and so scales badly. We argue that the core issue in situation assessment is force deployment assessment, the extraction and scoring of hypotheses of the force deployment history, each of which is a multiresolution account of the activities, groupings and interactions of force components. The force deployment history represents these relationships at multiple levels of granularity and is expressed over time and space. We provide a grammatical approach for inferring such histories, and show that they can be estimated accurately and scalably. We employ a generalized context-free grammar incorporating both sequence and multiset productions. Elaborating [D. McMichael, G. Jarrad, S. Williams, M.
Based on the two-dimensional (2-D) least squares method, this paper presents a novel numerical method to calculate the magnetic characteristics for switched reluctance motor drives. In this method, the 2-D orthogonal polynomials are used to model the magnetic characteristics. The coefficients in these polynomials are determined by the 2-D least squares method. These coefficients can be computed offline and can also be trained online. The computed results agree well with the experimental results. In addition, the effect of the order number of the polynomials on the computation errors is discussed. The proposed method is very helpful in torque prediction, simulation studies and development of sensorless control of switched reluctance motor drives.
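A hedged sketch of the core fitting step: a 2-D least squares fit of flux linkage over rotor position and current. Plain monomial terms stand in for the paper's 2-D orthogonal polynomials, and the grid, polynomial orders, and the synthetic surface are all assumptions for illustration:

```python
import numpy as np

# Synthetic stand-in for measured flux linkage psi(theta, i) on a grid of
# rotor positions and phase currents (all values below are illustrative).
theta = np.linspace(0.0, np.pi / 6.0, 16)     # rotor position [rad]
current = np.linspace(0.0, 20.0, 12)          # phase current [A]
T, I = np.meshgrid(theta, current, indexing="ij")
psi = 0.1 * np.sin(6.0 * T) * (1.0 - np.exp(-0.2 * I))

# Design matrix of 2-D terms theta^m * i^n; plain monomials stand in for
# the paper's orthogonal polynomials.
m_ord, n_ord = 4, 3
A = np.stack([(T ** m * I ** n).ravel()
              for m in range(m_ord + 1) for n in range(n_ord + 1)], axis=1)
coef, *_ = np.linalg.lstsq(A, psi.ravel(), rcond=None)

def psi_model(th, i):
    # Evaluate the fitted polynomial surface at one operating point.
    terms = [th ** m * i ** n
             for m in range(m_ord + 1) for n in range(n_ord + 1)]
    return float(np.dot(coef, terms))

print(psi_model(0.2, 10.0))
```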
In this paper, we used simulations to investigate the effect of sample size, number of indicators, factor loadings, and factor correlations on frequencies of the acceptance/rejection of models (true and misspecified) when selected goodness-of-fit indices were compared with prespecified cutoff values. We found the percent of true models accepted when a goodness-of-fit index was compared with a prespecified cutoff value was affected by the interaction of the sample size and the total number of indicators. In addition, for the Tucker-Lewis index (TLI) and the relative noncentrality index (RNI), model acceptance percentages were affected by the interaction of sample size and size of factor loadings. For misspecified models, model acceptance percentages were affected by the interaction of the number of indicators and the degree of model misspecification. This suggests that researchers should use caution in using cutoff values for evaluating model fit. However, the study suggests that researchers who prefer to use prespecified cutoff values should use TLI, RNI, NNCP, and the root-mean-square error of approximation (RMSEA) to assess model fit. The use of GFI should be discouraged.
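For readers unfamiliar with the indices involved, a small sketch of how two of them are computed from model and baseline chi-square statistics (standard definitions; the cutoff value shown is illustrative, not the paper's recommendation):

```python
import numpy as np

def rmsea(chi2, df, n):
    # Root-mean-square error of approximation (standard definition).
    return np.sqrt(max((chi2 - df) / (df * (n - 1)), 0.0))

def tli(chi2_m, df_m, chi2_b, df_b):
    # Tucker-Lewis index from target (m) and baseline (b) models.
    return ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1.0)

# The decision rule the study evaluates: accept if the index clears a
# prespecified cutoff (the 0.06 here is illustrative only).
accept = rmsea(chi2=52.3, df=40, n=250) <= 0.06
print(accept, rmsea(52.3, 40, 250))
```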
A molecular dynamics simulation study of friction in boundary lubrication was conducted in order to investigate the atomic-scale behavior of lubricant molecules during sliding motion. The simulated system consisted of two silicon (001) semi-infinite substrates lubricated by a three-layer film of dodecane. Silicon was modeled using the Stillinger–Weber potential, and the dodecane with the Consistent Force Field function; a novel scheme was used to generate the silicon–dodecane interaction potentials. The simulations show that dodecane molecules strongly prefer to adsorb into the ledges on the silicon surface. The orientation of the adsorbed molecules depends, however, on the concentration of the lubricant at the surface, showing a tendency to stand up at high lubricant concentrations. In sliding, the dodecane layers adsorbed on the surfaces behave as a solid, whereas the middle layer exhibits liquid-like characteristics. The friction coefficient of this well-lubricated case was calcu...
... PART 1. A SIMULATION STUDY RAYMOND E. MARCH and ADAM W. McMAHON Department of Chemistry, Trent University, Peterborough, Ontario, K9J 7B8 ... Monte Carlo methods employed to investigate the motion of ions in an ion trap with the ions allowed to undergo ...
The dynamic and steady state performance of a non-isothermal tubular reactor packed with spherical encapsulated enzyme particles has been modeled in terms of different dimensionless transport and kinetic parameters. The dynamic concentration profile for an initially substrate-free reactor reaches a maximum before achieving steady state. The steady state dimensionless bulk substrate concentration, unlike the temperature, progressively decreases along the reactor bed. With an increase in the external mass transfer coefficient KL and the Biot number Bi for mass transfer, the concentration profile decreases more steeply. The simulation study shows that the biocatalyst particles may be considered isothermal.
We investigate the idea of using nodes with controllable mobility as intermediate relays for reducing the power consumption in a network of mobile wireless sensors. We present the relay deployment problem, which is to optimally position the relay nodes in the network so as to minimize the power consumed in communication. We discuss and evaluate a localized solution methodology that computes the optimal position and movement of the relay nodes based on the information pertaining to the active data flows and the mobility patterns of the sensors in the network. Results from a preliminary simulation study indicate that deployment of relay nodes can result in considerable energy savings. We also outline some of the issues that need to be addressed in deploying such mobility-controllable relay nodes in a real network.
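As a toy illustration of why relaying saves power (not the paper's localized algorithm): with path-loss exponent alpha, placing a single relay on the source-destination segment minimizes d1^alpha + d2^alpha at the midpoint, for a 2^(alpha-1) power gain. A grid-search sketch:

```python
import numpy as np

def best_relay_position(src, dst, alpha=4.0, n_grid=1001):
    """Grid-search the relay position on the src-dst segment minimizing
    total transmit power ~ d1^alpha + d2^alpha (alpha = path-loss exponent)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    pts = src + np.linspace(0.0, 1.0, n_grid)[:, None] * (dst - src)
    cost = (np.linalg.norm(pts - src, axis=1) ** alpha
            + np.linalg.norm(pts - dst, axis=1) ** alpha)
    return pts[np.argmin(cost)]

# Relaying halves each hop, so power falls from d^alpha to 2*(d/2)^alpha,
# a 2^(alpha-1) gain; the optimum is the midpoint for alpha > 1.
print(best_relay_position([0.0, 0.0], [100.0, 0.0]))   # -> ~[50, 0]
```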
A mathematical model for a continuous direct esterification reactor has been developed. The solid-liquid equilibrium of terephthalic acid (TPA) was considered in our modeling, and the characteristic dissolution time, an adjustable parameter, was introduced to account for the mass-transfer effect in the dissolution of TPA. The effects of the characteristic dissolution time, monomer feed ratio, temperature, and pressure on the reactor performance at different residence times were investigated through simulation. It was observed that the behavior of the first reactor strongly depends on whether the solid TPA is completely dissolved in the reaction mixtures. From the dynamic simulations, it was found that a sudden change in the operating conditions affects the ethylene glycol (EG) vapor flow rate instantly. For the esterification process having two reactors in series, the strategy for time distribution and recycling of EG is also discussed. © 1997 John Wiley & Sons, Inc.
We present a method for estimating the mean vector from a multivariate skew distribution that includes some unobserved data below the detection limits. The method uses a Box-Cox transformation, of which the parameters are found by maximizing the likelihood function ...
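A minimal univariate sketch of the idea (an assumed form for illustration; the paper treats the multivariate case): maximize a Box-Cox-normal likelihood in which values below the detection limit contribute through the normal CDF rather than the density:

```python
import numpy as np
from scipy import optimize, stats

def boxcox(x, lam):
    return np.log(x) if abs(lam) < 1e-8 else (x ** lam - 1.0) / lam

def neg_loglik(theta, x_obs, n_cens, dl):
    lam, mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    y = boxcox(x_obs, lam)
    # Fully observed points: normal density plus the Box-Cox Jacobian.
    ll = stats.norm.logpdf(y, mu, sigma).sum() + (lam - 1.0) * np.log(x_obs).sum()
    # Censored points only say the value fell below the detection limit.
    ll += n_cens * stats.norm.logcdf(boxcox(dl, lam), mu, sigma)
    return -ll

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.5, sigma=0.8, size=500)
dl = 0.7                                   # hypothetical detection limit
x_obs, n_cens = x[x >= dl], int((x < dl).sum())
res = optimize.minimize(neg_loglik, x0=[0.5, 0.0, 0.0],
                        args=(x_obs, n_cens, dl), method="Nelder-Mead")
lam_hat, mu_hat, sigma_hat = res.x[0], res.x[1], np.exp(res.x[2])
print(lam_hat, mu_hat, sigma_hat)
```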
In this paper, we consider a k-nearest neighbor kernel type estimator when the random variables belong to a Riemannian manifold. We study asymptotic properties such as consistency and the asymptotic distribution. A simulation study is also conducted to evaluate the performance of the proposal. Finally, to illustrate the potential applications of the proposed estimator, we analyze two real examples where two different manifolds are considered.
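A hedged sketch of a k-NN density estimate on a concrete manifold, the unit sphere S^2, using geodesic distance and a uniform kernel (my simplification; the paper's estimator and manifolds may differ):

```python
import numpy as np

def knn_density_sphere(query, sample, k):
    """k-NN density estimate on the unit sphere S^2: k/n divided by the
    area of the geodesic ball that just contains the k nearest points
    (a uniform-kernel variant of the estimator)."""
    d = np.arccos(np.clip(sample @ query, -1.0, 1.0))   # geodesic distances
    r_k = np.sort(d)[k - 1]
    cap_area = 2.0 * np.pi * (1.0 - np.cos(r_k))        # geodesic cap area
    return k / (len(sample) * cap_area)

rng = np.random.default_rng(2)
pts = rng.normal(size=(2000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)       # uniform on S^2
print(knn_density_sphere(np.array([0.0, 0.0, 1.0]), pts, k=50))
# The uniform density on S^2 is 1/(4*pi) ~ 0.0796, a sanity check.
```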
To explain patterns and variations in the concentration of defence-related carbon-based secondary compounds in plant tissues, different complementary and in some parts contradictory plant-defence hypotheses have been developed over the years. These conceptual hypotheses cannot be fully evaluated without a plant growth simulation model and, consequently, the discussion about the adequacy of plant-defence hypotheses neglects to a large extent the dynamics of plant growth. To get a more realistic view of the volatile dynamics of plant internal source and sink strengths of carbon and nitrogen during different phenological growth stages, a modelling approach for allocation of carbohydrates to a pool of carbon-based secondary compounds was integrated into the functional generic plant growth model PLATHO. In this paper, we explain the fundamental assumptions of our modelling approach and present the results of a simulation study for juvenile beech and spruce trees growing in mono- and mixed cultures under ambient and elevated atmospheric CO2.
The Vapor Extraction (VAPEX) process, a newly developed Enhanced Oil Recovery (EOR) process to recover heavy oil and bitumen, has been studied theoretically and experimentally and is found to be a promising EOR method for certain heavy oil reservoirs. In this work, a simulation study ...
The study of the water gas shift reaction performance in terms of complete conversions is presented. The behaviour of a membrane reactor (MR) consisting of a tubular microporous ceramic within a thin palladium membrane was compared with a membrane reactor using a palladium/silver membrane. Membranes were developed in order to obtain a metallic layer thick enough to avoid any defects of the metallic layer and ensure infinite hydrogen selectivity with respect to other gases. The lumen of both membrane reactors was filled with the catalyst. The experiments were carried out by using nitrogen as inert gas in the sweep, with a flow rate ranging between 1×10⁻⁴ and 4×10⁻⁴ mol s⁻¹, in co-current and counter-current mode, in the temperature range 331–350°C and in the feed molar flow range 3.05×10⁻⁵ to 7.1×10⁻⁵ mol s⁻¹. Hydrogen was the only gas passing through both membranes. A complete separation of hydrogen from the other gases of the reaction system was obtained. The water gas shift reaction conversion was close to 100% by using the Pd/Ag membrane. A mathematical model was developed to interpret the experimental data. It described the system under isothermal conditions and considered an axial differential mass balance in terms of partial pressure for each chemical species. The simulation study and the experimental results show a satisfactory agreement and both highlight the possibility to shift the conversion of the considered reaction towards 100%.
Bridging the gaps between the GCM scales and the hydrological scales is a key issue when studying the impacts of potential climate changes on the water resources and, more generally, the links between climate variability and hydrological variability. This is especially true in tropical regions where (1) rainfall is highly variable in space and time due to its convective nature, and (2) measurements are scarce. Using high-resolution data collected in the semi-arid region of Niamey, Niger, Lebel et al. [1998] have proposed a space-time model (the LBC model), allowing the disaggregation of large-scale estimates produced either from satellite images or general circulation model (GCM) outputs. The behavior of this model was shown to be globally satisfying when tested on a small number of selected Sahelian mesoscale convective complexes (SMCCs). However, to be of use in simulation studies of the impact of climate changes as predicted by GCMs or in an operational context where only satellite data are readily available, a more systematic validation was required. Also, the initial version of the model was intended to deal with SMCCs only, leaving aside the other convective systems displaying a less coherent spatial structure. This led to the development of a new version of the LBC model, presented here, characterized by the following improvements: (1) a more precise modeling of the spatial structure of the total storm rain fields by taking into account their anisotropy and using a nested covariance, (2) a better representation of the storm kinematics by dealing with arrival times of rain rather than with speeds of movement, (3) a revision of the parameters used to define the standard hyetograph which is the basis of the time disaggregation algorithm. This new version of the model has two main advantages as compared to the older one: (1) the capacity of dealing with every kind of Sahelian mesoscale convective systems (SMCSs), which account for more than 90% of the total annual rainfall in the region; (2) the possibility of using the model both in simulation and in disaggregation modes. The validation of the model is carried out by comparing the rainfall statistics at various scales of aggregation, for a set of 170 observed SMCSs (corresponding to the 1990-1993 operating period) and a set of 170 simulated SMCSs.
1. Introduction
Rainfall disaggregation is a key step in bridging the gaps between the hydrological scales, linked to the landscape heterogeneities, and the atmospheric scales, de
A theoretical study of possible neuromuscular incapacitation based on the application of high-intensity, ultrashort electric pulses is presented. The analysis is applied to a rat, but the approach is general, can be extended to any whole animal, and applies to any arbitrary pulse waveform. It is hypothesized that repeatable and reversible action potential blocks in nerves can be attained based on the electroporation mechanism. Our numerical studies are based on the Hodgkin-Huxley distributed circuit representation of nerves, and incorporate a nodal analysis for the time-dependent and volumetric perturbing potentials and internal electric fields in whole animals. The predictions are compared to actual 600-ns experimental reports on rats and shown to be in very good agreement. Effective strength-duration plots for neuromuscular incapacitation are also generated.
Soilless plant growth systems are widely used as a means to save irrigation water and to reduce groundwater contamination. While nutrient concentrations in the growth medium are depleted due to uptake by the plants, salinity and toxic substances accumulate due to transpiration. A theoretical model is suggested to simulate nutrient uptake by plants grown in soilless cultures with recycled solutions. The model accounts for salinity accumulation with time and plant growth, and its effects on uptake of the different nutrients by means of interaction with Na and Cl ions. The sink term occurs due to uptake by a growing root system. Influx as a function of the ion concentration follows Michaelis-Menten active mechanisms for K⁺, NO₃⁻-N, NH₄⁺-N, PO₄-P, Ca²⁺, Mg²⁺ and SO₄²⁻, whose influx parameters are affected by Na and Cl⁻, but not by time (age). Sodium influx is passive above a critical concentration. The sum of cation and anion concentrations is balanced by Cl⁻ to maintain electro-neutrality of the growth solution. Salinity (by means of Na concentration) suppresses root and leaf growth, which further affects uptake and transpiration. The model accounts for instantaneous transpiration losses, during daytime only, and their effect on uptake of nutrients and plant development due to salt accumulation. The model was tested against NO₃⁻ and K⁺ uptake by plants associated with cumulative transpiration and with different NaCl salinity levels. Deviations from observed K⁺ uptake should be attributed to the salinity tolerance of the plants. In a study with data obtained from published literature, the model indicated that nutrient depletion and salinity buildup might be completely different with fully grown-up plants (that do not grow) and plants that grow with time. Depletion of the different nutrients is according to their initial concentration and plant uptake rate, but is also affected by their interactions with Na and Cl ions.
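A one-function sketch of the influx rule described, with the Na interaction written as competitive inhibition purely for illustration (the abstract does not specify this exact form):

```python
def influx(c, i_max, km, na=0.0, ki=float("inf")):
    """Michaelis-Menten active influx, with the Na interaction written as
    competitive inhibition purely for illustration (assumed form)."""
    return i_max * c / (km * (1.0 + na / ki) + c)

# Example: nitrate influx slows as Na accumulates in the recycled solution.
print(influx(c=0.5, i_max=2.0, km=0.1))                    # fresh solution
print(influx(c=0.5, i_max=2.0, km=0.1, na=30.0, ki=20.0))  # saline solution
```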
The paper investigates the consequences of sample selection in multilevel or mixed models, focusing on the random intercept two-level linear model under a selection mechanism acting at both hierarchical levels. The behavior of sample selection and the resulting biases on the regression coefficients and on the variance components are studied both theoretically and through a simulation study. Most theoretical results exploit the properties of Normal and Skew-Normal distributions. In the case of clusters of size two, analytic formulae of the bias are provided that generalize Heckman's formulae. The analysis allows us to outline a taxonomy of sample selection in the multilevel framework that can support the qualitative assessment of the problem in specific applications and the development of suitable techniques for diagnosis and correction.
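For orientation, the single-level building block behind Heckman's formulae (the classical result, not the paper's two-level generalization) can be sketched via the inverse Mills ratio:

```python
import numpy as np
from scipy.stats import norm

def inverse_mills(a):
    # lambda(a) = phi(a) / (1 - Phi(a)), the hazard of the standard normal.
    return norm.pdf(a) / norm.sf(a)

# Classic single-level Heckman result: if units are observed only when a
# standard-normal selection index exceeds a, the selected-sample mean of an
# error correlated (rho) with that index is shifted by rho * sigma * lambda(a).
rho, sigma, a = 0.5, 2.0, 1.0
bias = rho * sigma * inverse_mills(a)
print(bias)   # ~ 0.5 * 2.0 * 1.525 = 1.525
```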
The potential of a ground-based microwave temperature profiler to combine full tropospheric profiling with high-resolution profiling of the boundary layer is investigated. For that purpose, statistical retrieval algorithms that incorporate observations from different elevation angles and frequencies are derived from long-term radiosonde data. A simulation study shows the potential to significantly improve the retrieval performance in the lowest kilometer by combining angular information from relatively opaque channels with zenith-only information from more transparent channels. Observations by a state-of-the-art radiometer employed during the International Lindenberg Campaign for Assessment of Humidity and Cloud Profiling Systems and Its Impact on High-Resolution Modeling (LAUNCH) in Lindenberg, Germany, are used for an experimental evaluation with observations from a 99-m mast and radiosondes. The comparison not only reveals the high accuracy achieved by combining angular and spectral observations (overall, less than 1 K below 1.5 km), but also emphasizes the need for a realistic description of radiometer noise within the algorithm. The capability of the profiler to observe the height and strength of low-level temperature inversions is highlighted.
Mixture modeling within the context of pharmacokinetic (PK)/pharmacodynamic (PD) mixed effects modeling is a useful tool to explore a population for the presence of two or more subpopulations, not explained by evaluated covariates. At present, statistical tests for the existence of mixed populations have not been developed. Therefore, a simulation study was undertaken to evaluate mixture modeling with NONMEM and explore the following questions. First, what is the probability of concluding that a mixed population exists when there truly is not a mixture (false positive significance level)? Second, what is the probability of concluding that a mixed population (two subpopulations) exists when there is truly a mixed population (power), and how well can the mixture be estimated, both in terms of the population parameters and the individual subject classification? Seizure count data were simulated using a Poisson distribution such that each subject's count could decrease from its baseline value, as a function of dose, via an Emax model. The dosing design for the simulation was based on a trial with the investigational anti-epileptic drug pregabalin. Four hundred and forty-seven subjects received pregabalin as add-on therapy for partial seizures, each with a baseline seizure count and up to three subsequent seizure counts. For the mixtures, the two subpopulations were simulated to differ in their Emax values and relative proportions. One subpopulation always had its Emax set to unity (Emax hi), allowing the count to approach zero with increasing dose. The other subpopulation was allowed to vary in its Emax value (Emax lo = 0.75, 0.5, 0.25, and 0) and in its relative proportion (pr) of the population (pr = 0.05, 0.10, 0.25, and 0.50), giving a total of 4 × 4 = 16 different mixtures explored. Three hundred data sets were simulated for each scenario and estimations performed using NONMEM. The metrics used information about the parameter estimates, their standard errors (SE), the difference between minimum objective function (MOF) values for mixture and non-mixture models (MOF(δ)), the proportion of subjects classified correctly, and the estimated conditional probabilities of a subject being simulated as having Emax lo (Emax hi) given that they were estimated as having Emax lo (Emax hi), and of being estimated as having Emax lo (Emax hi) given that they were simulated as having Emax lo (Emax hi). The false positive significance level was approximately 0.04 (using all 300 runs) or 0.078 (using only those runs with a successful covariance step), when there was no mixture. When simulating mixed data, and for those characterizations with successful estimation and covariance steps, the median (range) percentage of 95% confidence intervals containing the true values for the parameters defining the mixture were 94% (89–96%), 89.5% (58–96%), and 95% (92–97%) for pr, Emax lo, and Emax hi, respectively. The median values of the estimated parameters pr, Emax lo (excluding the case when Emax lo was simulated to equal 0) and Emax hi within a scenario were within ±28% of the true values. The median proportion of subjects classified correctly ranged from 0.59 to 0.96. In conclusion, when no mixture was present the false positive probability was less than 0.078, and when mixtures were present they were characterized with varying degrees of success, depending on the nature of the mixture. When the difference between subpopulations was greater (as Emax lo approached zero or pr approached 0.5), the mixtures became easier to characterize.
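A hedged sketch of the simulation design described: Poisson seizure counts whose means fall from baseline via an Emax dose model, with a mixture over Emax. The dose levels, ED50, and baseline distribution below are my guesses, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_counts(n_subj=447, doses=(0.0, 150.0, 300.0, 600.0),
                    base_mean=10.0, ed50=200.0, emax_lo=0.5, pr=0.25):
    """Poisson seizure counts whose subject-level means fall from baseline
    via an Emax dose model; a fraction pr of subjects carries Emax lo and
    the rest Emax hi = 1 (dose levels, ED50 and baseline are my guesses)."""
    emax = np.where(rng.random(n_subj) < pr, emax_lo, 1.0)
    base = rng.gamma(shape=4.0, scale=base_mean / 4.0, size=n_subj)
    counts = np.empty((n_subj, len(doses)), dtype=int)
    for j, d in enumerate(doses):
        lam = base * (1.0 - emax * d / (ed50 + d))
        counts[:, j] = rng.poisson(lam)
    return counts, emax

counts, true_emax = simulate_counts()
print(counts.mean(axis=0))   # mean count should fall with dose
```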
This paper presents a maximum power point tracking (MPPT) controller employing a modified hill top algorithm implemented using an embedded microcontroller for a solar photovoltaic (PV) module. The proposed MPPT algorithm has higher efficiency compared to the conventional hill top algorithm in terms of transferring power from source to load. The designed controller regulates the output voltage through control of the DC-DC boost converter under varying environmental conditions. Comparative results for the hill top and modified hill top algorithms based on the proposed MPPT controller have been obtained through simulation studies in PSCAD/EMTDC software. The validity of the proposed modified hill top algorithm has been confirmed through experimental implementation of the MPPT controller on a solar PV module driving a load.
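The paper's "modified hill top" details are not given in the abstract, so as a baseline here is the conventional hill-climbing (perturb-and-observe) step it improves on; all names are illustrative:

```python
def mppt_step(v, p, v_prev, p_prev, dv=0.5):
    """One perturb-and-observe (hill-climbing) step: keep perturbing the
    operating voltage in the direction that increased PV output power."""
    if (p - p_prev) * (v - v_prev) > 0:
        return v + dv      # power rose with the last perturbation: continue
    return v - dv          # power fell: reverse the perturbation

# Usage inside a control loop (measurement and converter calls hypothetical):
# v_ref = mppt_step(v_meas, v_meas * i_meas, v_last, p_last)
# duty = regulate_boost_converter(v_ref)   # hypothetical duty-cycle update
```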
We report here on our latest developments in the forward and inverse problems of electrocardiology. In the forward problem, a coupled cellular model of cardiac excitation-contraction is embedded within an anatomically realistic model of the cardiac ventricles, which is itself embedded within a torso model. This continuum modelling framework allows the effects of cellular-level activity on the surface electrocardiogram (ECG) to be carefully examined. Furthermore, the contributions of contraction and local ischemia on body surface recordings can also be elucidated. Such information can provide theoretical limits to the sensitivity and ultimately the detection capability of body surface ECG recordings. Despite being very useful, such detailed forward modelling is not directly applicable when seeking to use densely sampled ECG information to assess a patient in a clinical environment (the inverse problem). In such a situation patient specific models must be constructed and, due to the nature of the inverse problem, the level of detail that can be reliably reproduced is limited. Extensive simulation studies have shown that the accuracy with which the heart is localised within the torso is the primary limiting factor. To further identify the practical performance capabilities of the current inverse algorithms, high quality experimental data is urgently needed. We have been working towards such an objective with a number of groups, including our local hospital in Auckland. At that hospital, in patients undergoing catheter ablation surgery, up to 256 simultaneous body surface signals were recorded by using various catheter pacing protocols. The geometric information required to customize the heart and torso model was obtained using a combination of ultrasound and laser scanning technologies. Our initial results indicate that such geometric imaging modalities are sufficient to produce promising inversely-constructed activation profiles.
This study was designed to evaluate the feasibility of using cylindrical ultrasound transducers mounted on a catheter for the ablation of cardiac tissues. In addition, the effects of ultrasound frequency and power were evaluated both using computer simulations and in vitro experiments. Frequencies of 4.5, 6, and 10 MHz were selected based on the simulation studies and manufacturing feasibility. These transducers were mounted on the tip of 7-French catheters and applied in vitro to fresh ventricular canine endocardium, submerged in flowing degassed saline at 37°C. When the power was regulated to maintain the transducer interface temperature at 90–100°C, the 10-, 6-, and 4.5-MHz transducers generated lesion depths of 5.9 +/- 0.2 mm, 4.6 +/- 1.0 mm, and 5.3 +/- 0.9 mm, respectively. The 10-MHz transducer was chosen for the in vivo tests since the maximum lesion depth was achieved with the lowest power. Two dogs were anesthetized and sonications were performed in both the left and right ventricles. The 10-MHz cylindrical transducers caused an average lesion depth of 6.4 +/- 2.5 mm. In conclusion, the results show that cylindrical ultrasound transducers can be used for cardiac tissue ablation and that they may be able to produce deeper tissue necrosis than other methods currently in use.
The paper presents a novel control approach for the robot-assisted motion augmentation of disabled subjects during the standing-up manoeuvre. The main goal of the proposal is to integrate the voluntary activity of a person in the control scheme of the rehabilitation robot. The algorithm determines the supportive force to be tracked by a robot force controller. The basic idea behind
Robust calibration of option valuation models to quoted option prices is non-trivial but crucial for good performance. A framework based on the state-space formulation of the option valuation model is introduced. Non-linear (Kalman) filters are needed to do inference since the models have latent variables (e.g. volatility). The statistical framework is made adaptive by introducing stochastic dynamics for the parameters. This allows the parameters to change over time, while treating the measurement noise in a statistically consistent way and using all data efficiently. The performance and computational efficiency of standard and iterated extended Kalman filters (EKF and IEKF) are investigated. These methods are compared to common calibration methods such as weighted least squares (WLS) and penalized weighted least squares (PWLS). A simulation study, using the Bates model, shows that the adaptive framework is capable of tracking time varying parameters and latent processes such as stochastic ...
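A generic sketch of the EKF machinery referred to, with random-walk parameter dynamics; the measurement map h would be the option-pricing function (e.g. Bates model prices), left abstract here, and the toy usage is illustrative only:

```python
import numpy as np

def ekf_step(x, P, z, h, H_jac, Q, R):
    """One predict/update cycle of an extended Kalman filter with
    random-walk state dynamics x_{k+1} = x_k + w_k, the structure that
    lets calibrated parameters drift over time."""
    P = P + Q                                  # predict (mean unchanged)
    Hk = H_jac(x)                              # linearize measurement map
    S = Hk @ P @ Hk.T + R
    K = P @ Hk.T @ np.linalg.inv(S)
    x = x + K @ (z - h(x))                     # update with observed prices
    P = (np.eye(len(x)) - K @ Hk) @ P
    return x, P

# Toy usage: track a drifting scalar parameter observed through z = x^2.
x, P = np.array([1.0]), np.eye(1)
Q, R = 1e-4 * np.eye(1), 1e-2 * np.eye(1)
h = lambda s: np.array([s[0] ** 2])
H_jac = lambda s: np.array([[2.0 * s[0]]])
for z in ([1.1], [1.3], [1.6]):
    x, P = ekf_step(x, P, np.array(z), h, H_jac, Q, R)
print(x)
```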
In this paper, we consider constructing reliable confidence intervals for regression parameters using robust M-estimation allowing for the possibility of time series correlation among the errors. The change of variance function is used to approximate the theoretical coverage ...
Transverse vibrations of drillstrings caused by axial loading and impact with the wellbore wall are studied. The drillstring is modeled as a slender beam with a simply-supported lower part. The Assumed Modes Method is used to obtain the governing equations of motion. Non-linear coupling terms are retained in the formulation, which leads to full coupling between the axial and transverse vibration modes. The Hertzian contact theory is used to model impacts between the drillstring and the wellbore wall. The governing non-linear equations are solved numerically to obtain the response. Simulation studies are used to compare the dynamic response obtained from both fully coupled and uncoupled models. Significant differences between the two models are observed. The coupled model yields an unstable behavior at a lower load than the uncoupled model, and the subsequent response is not periodic.
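The Hertzian impact term has a standard closed form; a sketch (the effective modulus and radius values are illustrative, and the mapping from lateral deflection to penetration depth belongs to the model, not shown here):

```python
import numpy as np

def hertz_force(delta, e_eff, r_eff):
    """Hertzian normal contact force F = (4/3) * E_eff * sqrt(R_eff) * delta^1.5,
    zero when the drillstring is not touching the wellbore wall."""
    return (4.0 / 3.0) * e_eff * np.sqrt(r_eff) * np.maximum(delta, 0.0) ** 1.5

# delta would be the pipe's radial displacement minus the wall clearance;
# the material and geometry numbers below are purely illustrative.
print(hertz_force(delta=1e-4, e_eff=1.0e11, r_eff=0.05))
```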
In a road simulator study, a significant source of sub-micrometer fine particles produced by the road-tire interface was observed. Since the particle size distribution and source strength are dependent on the type of tire used, it is likely that these particles largely originate from the tires, and not the road pavement. The particles most likely consisted of mineral oils from the softening filler and fragments of the carbon-reinforcing filler material (soot agglomerates). This identification was based on transmission electron microscopy studies of collected ultrafine wear particles and on-line thermal treatment using a thermodesorber.
Dynamic lightpath protection in survivable WDM networks requires finding a pair of diverse routes (i.e., a primary route and a backup route that are link-disjoint) that form a cycle upon the arrival of a new connection request. In this paper, we propose a novel hybrid algorithm for this problem based on a combination of the mobile agents technique and genetic algorithms (GA). By keeping a suitable number of mobile agents in the network to cooperatively explore the network states and continuously report diverse routes into the routing tables, our new hybrid algorithm can promptly determine the first population of cycles for a new request based on the routing table of its source node, without requiring the time-consuming process associated with current GA-based lightpath protection schemes. We furthermore improve the performance of our algorithm by introducing a more advanced fitness function. Extensive simulation studies on the ns-2 network simulator show that our hybrid algorithm achieves a significantly lower blocking probability and a smaller execution time than the conventional survivable routing algorithms.
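As a hedged sketch of the kind of fitness function such a GA could use for a candidate protection cycle (the paper's "more advanced" fitness is not specified in the abstract):

```python
def cycle_fitness(primary, backup, link_cost):
    """Toy fitness for a candidate protection cycle: feasible only if the
    primary and backup routes are link-disjoint; among feasible pairs,
    cheaper cycles score higher. Routes are node lists; link_cost maps
    frozenset({u, v}) -> cost."""
    links = lambda r: {frozenset(e) for e in zip(r, r[1:])}
    if links(primary) & links(backup):
        return 0.0                        # shared link: no protection
    total = sum(link_cost[e] for e in links(primary) | links(backup))
    return 1.0 / total

cost = {frozenset(e): 1.0 for e in [(0, 1), (1, 3), (0, 2), (2, 3)]}
print(cycle_fitness([0, 1, 3], [0, 2, 3], cost))   # disjoint pair -> 0.25
```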
The paper describes a variable speed wind generation system where fuzzy logic principles are used for efficiency optimization and performance enhancement control. A squirrel cage induction generator feeds the power to a double-sided pulse width modulated converter system which pumps power to a utility grid or can supply to an autonomous system. The generation system has fuzzy logic control with vector control in the inner loops. A fuzzy controller tracks the generator speed with the wind velocity to extract the maximum power. A second fuzzy controller programs the machine flux for light load efficiency improvement, and a third fuzzy controller gives robust speed control against wind gust and turbine oscillatory torque. The complete control system has been developed, analyzed, and validated by simulation study. Performances have then been evaluated in detail.
Swarm, a satellite constellation to measure Earth's magnetic field with unprecedented accuracy, has been selected by ESA for launch in 2009. The mission will provide the best ever survey of the geomagnetic field and its temporal evolution, in order to gain new insights into the Earth system by improving our understanding of the Earth's interior and climate. An End-to-End mission
The present document summarizes the work that has been done in Work Package 5 (WP5), where the focus is on modelling and control of the Uniflex-PM system. The models used in WP5 are described in detail. Since the grid synchronization and monitoring techniques play an important role in the control of the Uniflex-PM system, special attention is paid to this topic. The events in the electrical networks are treated in detail in terms of definitions and classifications from standards, origins, and surveys in different countries. A summary of the grid synchronization and monitoring methods is also given, with a special focus on Phase Locked Loop systems. The response of the single and three phase PLLs is analyzed under different grid events. Four control strategies are studied in WP5, namely: synchronous reference frame control, predictive control, stationary reference frame control with Proportional Resonant current controllers, and natural reference frame control with Proportional Resonant current controllers. These control strategies are evaluated under different grid conditions such as voltage excursions, voltage unbalances, phase jumps and short-circuits at the Point of Common Coupling. Finally, some conclusions and recommendations for future work are given.
Declustered data organizations have been proposed to achieve less-intrusive reconstruction of a failed disk's contents. In previous work, Holland and Gibson identified six desirable properties for ideal layouts. Ideal layouts exist for a very limited family of configurations. The PRIME data layout deviates from the stated ideal only slightly and its run-time performance is very good for
Smith et al. report a large study of the accuracy of 38 search procedures for recovering effective connections in simulations of DCM models under 28 different conditions. Their results are disappointing: no method reliably finds and directs connections without large false negatives, large false positives, or both. Using multiple subject inputs, we apply a previously published search algorithm, IMaGES, and novel orientation algorithms, LOFS, in tandem to all of the simulations of DCM models described by Smith et al. (2011). We find that the procedures accurately identify effective connections in almost all of the conditions that Smith et al. simulated and, in most conditions, direct causal connections with precision greater than 90% and recall greater than 80%.
A speculative study on the conditions under which phase inversion occurs in agitated liquid-liquid dispersions is conducted using a Monte Carlo technique. The simulation is based on a stochastic model, which accounts for fundamental physical processes such as drop deformation, breakup, and coalescence, and utilizes the minimization of interfacial energy as a criterion for phase inversion. Profiles of the interfacial energy indicate that a steady-state equilibrium is reached after a sufficiently large number of random moves and that predictions are insensitive to initial drop conditions. The calculated phase inversion holdup is observed to increase with increasing density and viscosity ratio, and to decrease with increasing agitation speed for a fixed viscosity ratio. It is also observed that, for a fixed viscosity ratio, the phase inversion holdup remains constant for large enough agitation speeds. The proposed model is therefore capable of achieving reasonable qualitative agreement with general experimental trends and of reproducing key features observed experimentally. The results of this investigation indicate that this simple stochastic method could be the basis upon which more advanced models for predicting phase inversion behavior can be developed. © 2002 Elsevier Science (USA)
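A toy Metropolis-style sketch of the stochastic ingredients named — random moves scored by total interfacial energy, which for fixed tension is proportional to total drop surface area. It keeps only a volume-transfer (coalescence-like) move and omits breakup and deformation, so it is an illustration, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(4)

def total_area(v):
    # Total surface area of spherical drops: A = (36*pi)^(1/3) * sum(v^(2/3)).
    return (36.0 * np.pi) ** (1.0 / 3.0) * np.sum(v ** (2.0 / 3.0))

vols = rng.uniform(0.5, 1.5, size=100)       # dispersed-phase drop volumes
temp = 0.5                                   # fictitious acceptance "temperature"
for _ in range(20000):
    i, j = rng.choice(len(vols), size=2, replace=False)
    trial = vols.copy()
    dv = rng.uniform(0.0, trial[i])          # transfer volume from i to j
    trial[i] -= dv
    trial[j] += dv
    d_e = total_area(trial) - total_area(vols)
    # Accept moves that lower interfacial energy, and occasionally others.
    if d_e < 0.0 or rng.random() < np.exp(-d_e / temp):
        vols = trial
print(len(vols[vols > 1e-6]), total_area(vols))
```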
In this paper, we present a Minimum Spanning Tree (MST) based topology control algorithm, called Local Minimum Spanning Tree (LMST), for wireless multi-hop networks. In this algorithm, each node builds its local minimum spanning tree independently and only keeps on-tree nodes that are one-hop away as its neighbors in the final topology. We analytically prove several important properties of LMST: (1) the topology derived under LMST preserves the network connectivity; (2) the node degree of any node in the resulting topology is bounded by 6; and (3) the topology can be transformed into one with bi-directional links (without impairing the network connectivity) after removal of all uni-directional links. These results are corroborated in the simulation study.
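A hedged sketch of the local rule: each node runs Prim's algorithm over its one-hop neighborhood with Euclidean edge weights and keeps only the neighbors adjacent to it in that local tree:

```python
import numpy as np

def lmst_neighbors(nodes, node_id, r_max):
    """LMST sketch: build a minimum spanning tree (Prim) over the node's
    one-hop neighborhood (nodes within radio range r_max), with Euclidean
    distance as edge weight, then keep only on-tree one-hop neighbors."""
    pos = nodes[node_id]
    hood = [i for i, p in enumerate(nodes)
            if i != node_id and np.linalg.norm(p - pos) <= r_max]
    verts = [node_id] + hood
    in_tree, keep = {node_id}, set()
    while len(in_tree) < len(verts):
        # Cheapest edge from the tree to a vertex outside it.
        u, v = min(((a, b) for a in in_tree for b in verts if b not in in_tree),
                   key=lambda e: np.linalg.norm(nodes[e[0]] - nodes[e[1]]))
        in_tree.add(v)
        if u == node_id:
            keep.add(v)          # v is a direct (one-hop) tree neighbor
    return keep

rng = np.random.default_rng(5)
nodes = rng.uniform(0, 100, size=(30, 2))
print(lmst_neighbors(nodes, node_id=0, r_max=40.0))
```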
Advances in mobile networks and positioning technologies have made location information a valuable asset in vehicular ad-hoc networks (VANETs). However, the availability of such information must be weighed against the potential for abuse. In this paper, we investigate the problem of alleviating unauthorized tracking of target vehicles by adversaries in VANETs. We propose a vehicle density-based location privacy (DLP) scheme which can provide location privacy by utilizing the neighboring vehicle density as a threshold to change the pseudonyms. We derive the delay distribution and the average total delay of a vehicle within a density zone. Given the delay information, an adversary may still be able to track the target vehicle by some selection rules. We investigate the effectiveness of DLP based on an extensive simulation study. Simulation results show that the probability of successful location tracking of a target vehicle by an adversary is inversely proportional to both the traffic arrival rate and the variance of vehicles' speed. Our proposed DLP scheme also has a better performance than both the Mix-Zone scheme and AMOEBA with random silent period.
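A minimal sketch of the density-triggered pseudonym change at the heart of DLP (the threshold value and identifier format are assumptions):

```python
import secrets

class DlpVehicle:
    """Minimal sketch of the density-based pseudonym-change rule described:
    change pseudonym only when at least k neighbors are present, so an
    adversary cannot trivially link the old and new identifiers."""
    def __init__(self, k_threshold=5):
        self.k = k_threshold
        self.pseudonym = secrets.token_hex(8)

    def on_beacon_interval(self, neighbor_count):
        if neighbor_count >= self.k:
            self.pseudonym = secrets.token_hex(8)   # mix with the crowd
        return self.pseudonym

v = DlpVehicle(k_threshold=5)
print(v.on_beacon_interval(neighbor_count=7))   # dense zone: new pseudonym
```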
In many applications, it is a priori known that the target function should satisfy certain constraints imposed by, for example, economic theory or a human decision maker. Here we consider partially monotone problems, where the target variable depends monotonically on some of the predictor variables but not all. We propose an approach to build partially monotone models based on the convolution of monotone neural networks and kernel functions. The results from simulations and a real case study on house pricing show that our approach has significantly better performance than partially monotone linear models. Furthermore, the incorporation of partial monotonicity constraints not only leads to models that are in accordance with the decision maker's expertise, but also reduces considerably the model variance in comparison to standard neural networks with weight decay.
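One generic way to hard-wire partial monotonicity into a network (a construction assumed here for illustration; the paper combines monotone networks with kernel functions, which this sketch does not reproduce) is to force the weights on paths from the monotone inputs to be non-negative:

```python
import numpy as np

def forward(x, U1, b1, u2, b2, mono_mask):
    """One-hidden-layer net that is non-decreasing in the inputs flagged by
    mono_mask: weights from those inputs, and all output weights, are made
    non-negative via exp(); weights from the other inputs stay free."""
    W1 = np.where(mono_mask[:, None], np.exp(U1), U1)   # (n_in, n_hidden)
    h = np.tanh(x @ W1 + b1)                            # increasing activation
    return h @ np.exp(u2) + b2                          # positive output weights

rng = np.random.default_rng(6)
n_in, n_hid = 3, 8
params = (rng.normal(size=(n_in, n_hid)), np.zeros(n_hid),
          rng.normal(size=n_hid), 0.0)
mono_mask = np.array([True, False, False])   # monotone in feature 0 only
x = rng.normal(size=(5, n_in))
print(forward(x, *params, mono_mask))
```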
In this paper, we analyze the performance limits of the slotted CSMA/CA mechanism of IEEE 802.15.4 in the beacon-enabled mode for broadcast transmissions in WSNs. The motivation for evaluating the beacon-enabled mode is due to its flexibility for WSN applications as compared to the non-beacon-enabled mode. Our analysis is based on an accurate simulation model of the slotted CSMA/CA mechanism on top of a realistic physical layer, with respect to the IEEE 802.15.4 standard specification. The performance of the slotted CSMA/CA is evaluated and analyzed for different network settings to understand the impact of the protocol attributes (superframe order, beacon order and backoff exponent) on the network performance, namely in terms of throughput (S), average delay (D) and probability of success (Ps). We introduce the concept of utility (U) as a combination of two or more metrics, to determine the best offered load range for an optimal behavior of the network. We show that the optimal network performance using slotted CSMA/CA occurs in the range of 35% to 60% with respect to a utility function proportional to the network throughput (S) divided by the average delay (D).
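The utility concept is a simple ratio; a sketch with illustrative numbers (not the paper's measurements):

```python
def utility(throughput, delay):
    """Utility proportional to throughput S divided by average delay D,
    used to locate the offered-load range where the network behaves best."""
    return throughput / delay

loads = [0.20, 0.35, 0.50, 0.60, 0.80]   # offered load G (illustrative)
s =     [0.18, 0.30, 0.38, 0.40, 0.36]   # throughput S (illustrative)
d =     [0.90, 1.00, 1.20, 1.50, 2.40]   # average delay D (illustrative)
best_g, _ = max(zip(loads, (utility(si, di) for si, di in zip(s, d))),
                key=lambda t: t[1])
print(best_g)
```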
We present a 3-D simulation study, supported by experimental results, which clarifies the role played by the parasitic BJT activation in the interaction between generated charge and electric field during ion impact in SEB/SEGR of power MOSFETs. This activation is caused by the movement of the holes deposited during the ion impact and gives rise to a huge amount of charge that is sustained by avalanche multiplication. During SEGR phenomena this generated charge interacts with the high electric field that is formed underneath the gate oxide, thus causing damage to it. During SEB phenomena, on the other hand, the generated charge causes a double-injection phenomenon to take place that induces an electrical instability and, then, the destruction of the MOSFET.