Programming Models Research Papers - Academia.edu
In this paper we solve some linear programming problems by solving systems of differential equations using game theory. The linear programming problem must be a classical constraints problem or a classical menu problem, i.e. a maximization/minimization problem in canonical form with all coefficients (of the objective function, the constraint matrix, and the right-hand sides) positive. First, we transform the linear programming problem so that solving the new problem and its dual amounts to finding the Nash equilibrium of a matrix game. Next, we find the Nash equilibrium by solving a system of differential equations, as known from evolutionary game theory, and we express the solution of the transformed linear programming problem using the Nash equilibrium and the corresponding optimal mixed strategies. Finally, we transform the solution of this problem back into the solution of the initial problem. We make als...
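The evolutionary-dynamics step can be sketched with the replicator ODE on a toy matrix game. This is a minimal illustration only: the 2x2 Hawk-Dove payoff matrix, the initial strategy, and the Euler step size are assumptions for the sketch, not the game derived from an LP in the paper.

```python
import numpy as np

# Toy 2x2 Hawk-Dove payoff matrix (V=2, C=4) -- an illustrative stand-in
# for the matrix game obtained from the transformed LP.
A = np.array([[-1.0, 2.0],
              [0.0, 1.0]])

x = np.array([0.1, 0.9])        # initial mixed strategy
dt, steps = 0.01, 5000
for _ in range(steps):
    f = A @ x                   # fitness of each pure strategy
    phi = x @ f                 # population-average fitness
    x = x + dt * x * (f - phi)  # explicit Euler step of the replicator ODE
    x = x / x.sum()             # renormalize to stay on the simplex

print(x)  # converges to the mixed Nash equilibrium (0.5, 0.5) of this game
```

For this game the interior rest point of the replicator dynamics coincides with the mixed Nash equilibrium, which is the property the paper exploits for the game built from the LP and its dual.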
Service-oriented computing is a popular design methodology for large-scale business computing systems. Grid computing enables the sharing of distributed computing and data resources, such as processing, networking, and storage capacity, to create a cohesive resource environment for executing distributed applications in service-oriented computing. Grid computing also represents a more business-oriented orchestration of relatively homogeneous and powerful distributed computing resources, optimizing the execution of time-consuming processes. Grid computing has received significant and sustained research interest in the design and deployment of large-scale, high-performance computational infrastructures in e-Science and business. The objective of the journal is to serve both as the premier venue for presenting foremost research results in the area and as a forum for introducing and exploring new concepts.
We sketch the main aspects of Greece’s electricity system from a market-based point of view. First, we provide data concerning the mix of generating units, the system load, and the frequency-related ancillary services. Then, we formulate a simplified model of Greece’s Day-Ahead Scheduling (DAS) problem that constitutes the basis for our analysis. We examine various cases concerning the format of the objective function as well as the pricing and compensation schemes. An illustrative example is used to indicate the impact of reserve and fixed (start-up, shut-down, and minimum-load) costs on the resulting dispatching of units and on clearing prices, under the different cases. Our analysis aims at unveiling the impact of cost components other than energy offers on the DAS problem, and at providing the grounds for future research on the design of the electricity market.
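The core of such a day-ahead scheduling model is an economic-dispatch LP. A deliberately tiny sketch follows, with two hypothetical generators and illustrative costs, capacities, and load; the reserve and start-up/shut-down cost components discussed in the paper are omitted.

```python
from scipy.optimize import linprog

# Two hypothetical generators: marginal costs (EUR/MWh) and capacities (MW).
cost = [20.0, 50.0]
cap = [80.0, 100.0]
load = 120.0   # system load to be met (MW)

# Minimize total energy cost subject to generation meeting load,
# with each unit within its capacity.
res = linprog(c=cost,
              A_eq=[[1.0, 1.0]], b_eq=[load],
              bounds=[(0.0, cap[0]), (0.0, cap[1])],
              method="highs")

print(res.x)    # cheapest unit at full capacity, the other covers the rest
print(res.fun)  # total dispatch cost
```

In this merit-order outcome the marginal (price-setting) unit is the expensive one, which is exactly the mechanism whose interaction with reserve and fixed costs the paper examines.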
Signal waveforms are rapidly damping oscillatory time series composed of exponential functions. Standard least squares fitting techniques are often unstable when fitting exponential functions to such waveforms, since these functions are highly correlated. Recently, attempts have been made to estimate the parameters of such functions by Monte Carlo based search/random-walk algorithms. In this study we use a Differential Evolution based least squares method to fit the exponential functions and obtain much more accurate results.
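The approach can be sketched on synthetic data with a hand-rolled Differential Evolution (DE/rand/1/bin) minimizing the sum of squared errors. The damped-cosine model, its parameter values, and the DE settings are illustrative assumptions, not the paper's actual waveforms or algorithm details.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic damped oscillatory signal (hypothetical parameters).
t = np.linspace(0.0, 5.0, 200)
true = np.array([2.0, 0.5, 3.0])           # amplitude, damping rate, frequency
def model(p):
    a, b, c = p
    return a * np.exp(-b * t) * np.cos(c * t)
y = model(true)

def sse(p):
    return np.sum((model(p) - y) ** 2)      # least squares objective

# Minimal Differential Evolution (DE/rand/1/bin).
lo = np.array([0.0, 0.0, 0.0])
hi = np.array([5.0, 2.0, 6.0])
NP, F, CR = 30, 0.7, 0.9
pop = lo + rng.random((NP, 3)) * (hi - lo)
cost = np.array([sse(p) for p in pop])
for _ in range(300):
    for i in range(NP):
        others = [j for j in range(NP) if j != i]
        r1, r2, r3 = rng.choice(others, 3, replace=False)
        mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
        cross = rng.random(3) < CR
        cross[rng.integers(3)] = True       # force at least one mutated gene
        trial = np.where(cross, mutant, pop[i])
        c_trial = sse(trial)
        if c_trial <= cost[i]:              # greedy selection
            pop[i], cost[i] = trial, c_trial

best = pop[np.argmin(cost)]
print(best)  # recovers the true parameters on this noiseless problem
```

Because DE searches the bounded parameter box globally, it avoids the instability of gradient-based least squares on such strongly correlated exponential terms.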
The significant increase in complexity of Exascale platforms due to energy-constrained, billion-way parallelism, with major changes to processor and memory architecture, requires new energy-efficient and resilient programming techniques that are portable across multiple future generations of machines. We believe that guaranteeing adequate scalability, programmability, performance portability, resilience, and energy efficiency requires a fundamentally new approach, combined with a transition path for existing scientific applications, to fully exploit the rewards of today's and tomorrow's systems. We present HPX – a parallel runtime system which extends the C++11/14 standard to facilitate distributed operations, enable fine-grained constraint-based parallelism, and support runtime-adaptive resource management. This provides a widely accepted API enabling programmability, composability, and performance portability of user applications. By employing a global address space, we seam...
Abstract: Indonesia has been stricken by many disasters in the last decade. Among the major disasters that have happened in Indonesia are the tsunami in Aceh in 2004, the earthquake in Yogyakarta in 2006, and the recent earthquakes in southern Java and ...
This paper analyzes the effect of the recent market crash on the international diversification of equity portfolios from the perspective of dependence structure. We use the generalized Pareto distribution to fit the left and the right tail of each return distribution in order to evaluate the upside and downside risk measures separately, after removing both autocorrelation and heteroscedasticity in the historical returns. We then build a multivariate generalized Pareto distribution and draw one million simulated returns for each time series using three Archimedean copulas – Gumbel, Clayton and Frank. Using data from emerging and developed countries, we find that the Clayton copula exhibits a strong left-tail dependence structure with a higher Sharpe ratio and relatively weak right-tail dependence after the subprime crisis. We also find that the Clayton copula is ultimately useful in modelling the left-tail dependence structure in bear markets only. In addition, our empirical...
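The Clayton leg of such a simulation can be sketched by sampling the copula with the Marshall–Olkin method and checking its lower-tail dependence empirically against the theoretical value λ_L = 2^(−1/θ). The Clayton parameter, sample size, and tail threshold are illustrative assumptions; the GPD-fitted margins used in the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n = 2.0, 200_000   # illustrative Clayton parameter and sample size

# Marshall-Olkin sampling for the Clayton copula:
# V ~ Gamma(1/theta), U_i = (1 + E_i / V)^(-1/theta) with E_i ~ Exp(1).
V = rng.gamma(shape=1.0 / theta, scale=1.0, size=n)
E = rng.exponential(size=(n, 2))
U = (1.0 + E / V[:, None]) ** (-1.0 / theta)

# Empirical lower-tail dependence: P(U2 < q | U1 < q) for small q.
q = 0.01
lam_emp = np.mean(U[U[:, 0] < q, 1] < q)
lam_theory = 2.0 ** (-1.0 / theta)   # ~0.707 for theta = 2

print(lam_emp, lam_theory)
```

The strong lower-tail dependence visible here (joint crashes far more likely than under independence) is the feature that makes the Clayton copula suitable for bear-market modelling, as the abstract concludes.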
We study several aspects of the dynamic programming approach to optimal control of abstract evolution equations, including a class of semilinear partial differential equations. We introduce and prove a verification theorem which provides a sufficient condition for optimality. Moreover, we prove sub- and superoptimality principles of dynamic programming and give an explicit construction of $\epsilon$-optimal controls.
The Intel Xeon Phi offers a promising solution to coprocessing, since it is based on the popular x86 instruction set. However, to fully utilize its potential, applications must be vectorized to leverage the wide SIMD lanes, in addition to exploiting effective large-scale shared memory parallelism. Compared to the SIMT execution model on GPGPUs with CUDA or OpenCL, SIMD parallelism with an SSE-like instruction set imposes many restrictions, and has generally not benefited applications involving branches, irregular accesses, or even reductions. In this paper, we consider the problem of accelerating applications involving different communication patterns on Xeon Phis, with an emphasis on effectively using the available SIMD parallelism. We offer an API for both shared memory and SIMD parallelization, and demonstrate its implementation. We use implementations of overloaded functions as a mechanism for providing SIMD code, assisted by runtime data reordering and our methods for effectively managing control flow. Our extensive evaluation with six popular applications shows large gains over the SIMD parallelization achieved by the production (ICC) compiler, and we even outperform OpenMP for MIMD parallelism.
This paper outlines the practical steps which need to be undertaken to use autoregressive integrated moving average (ARIMA) time series models for forecasting Irish inflation. A framework for ARIMA forecasting is drawn up. It considers two alternative approaches to the issue of identifying ARIMA models - the Box-Jenkins approach and the objective penalty function methods. The emphasis is on forecast performance, which suggests focusing more on minimising out-of-sample forecast errors than on maximising in-sample ‘goodness of fit’. Thus, the approach followed is unashamedly one of ‘model mining’ with the aim of optimising forecast performance. Practical issues in ARIMA time series forecasting are illustrated with reference to the harmonised index of consumer prices (HICP) and some of its major sub-components.
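The "model mining" idea of selecting an order by out-of-sample forecast error rather than in-sample fit can be sketched on a pure-AR special case with synthetic data. The AR(2) data-generating process, the train/holdout split, and the candidate orders are all illustrative; the paper works with full ARIMA models and the HICP series.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic AR(2) series standing in for an inflation series.
n = 1000
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

split = 800   # training sample ends here; the rest is the holdout

def holdout_mse(p):
    # Lag matrix: column k holds y[t-1-k] for each target y[t], t >= p.
    X = np.column_stack([y[p - 1 - k: n - 1 - k] for k in range(p)])
    target = y[p:]
    cut = split - p
    # Fit AR(p) by least squares on the training part only ...
    coef, *_ = np.linalg.lstsq(X[:cut], target[:cut], rcond=None)
    # ... then score one-step forecasts on the holdout.
    pred = X[cut:] @ coef
    return np.mean((target[cut:] - pred) ** 2)

orders = range(1, 7)
mses = [holdout_mse(p) for p in orders]
best_p = list(orders)[int(np.argmin(mses))]
print(best_p, min(mses))
```

Penalty-function criteria such as AIC/BIC approximate the same goal in-sample; the holdout comparison above measures forecast performance directly, which is the emphasis of the paper.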
This document analyzes how the quantity demanded of a good changes in response to changes in its price. For this we use the decomposition into income and substitution effects proposed by Slutsky and Hicks in their work on demand analysis. In general, when the price of a good varies, the quantity demanded also changes according to the relation between the demand function and the price variable; it may increase or decrease depending on the function. In some cases, according to individuals' preferences, a change in the price of one good can affect the demand for another good, depending on the relation between them; that is, the goods analyzed may be substitutes, complements, or unrelated. The preferences chosen for this document are of the Leontief type (fixed proportions - perfect complements), and the substitution-effect and income-effect formulas used are those proposed by Hal Varian in his text Intermediate Mic...
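For Leontief preferences u(x1, x2) = min(x1, x2), demand for each good is m / (p1 + p2), and the Slutsky substitution effect of a price change is exactly zero, so the whole change is income effect. A worked numeric check, with illustrative prices, income, and price change:

```python
# Leontief demand: x1 = x2 = m / (p1 + p2) for u = min(x1, x2).
def demand(p1, p2, m):
    return m / (p1 + p2)

p1, p2, m = 2.0, 3.0, 100.0
p1_new = 3.0

x_old = demand(p1, p2, m)            # 20.0
x_new = demand(p1_new, p2, m)        # ~16.67

# Slutsky compensation: income that keeps the old bundle just affordable.
m_comp = m + x_old * (p1_new - p1)   # 120.0
x_comp = demand(p1_new, p2, m_comp)  # 20.0 -> substitution effect is zero

substitution_effect = x_comp - x_old   # 0.0
income_effect = x_new - x_comp         # ~ -3.33
total_effect = x_new - x_old           # ~ -3.33

print(substitution_effect, income_effect, total_effect)
```

With perfect complements there is no substitution along the price change (the kink of the indifference curve pins the bundle), so the Slutsky decomposition attributes the entire fall in demand to the income effect.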
ARM single-ISA heterogeneous multicore processors combine high-performance big cores with power-efficient small cores. They aim at achieving a suitable balance between performance and energy. However, a main challenge is to program such architectures so as to efficiently exploit their features. In this paper, we study the impact on performance and energy trade-offs of single-ISA architectures under the OpenMP 3.0 and OmpSs programming models. We consider different symmetric/asymmetric architecture configurations in terms of core frequency and core count between the big and LITTLE clusters. Experiments are conducted on both a real Samsung Exynos 5 Octa system-on-chip and the gem5/McPAT simulation frameworks. Results show that OmpSs implementations are more sensitive to loop-scheduling parameters than OpenMP 3.0. In most cases, the best OmpSs configurations significantly outperform the OpenMP ones. While cluster frequency asymmetry yields unremarkable results, an asymmetric cluster configuration with a single high-performance core and multiple low-power cores provides better performance/energy trade-offs in many cases.
The programming of heterogeneous clusters is inherently complex, as these architectures require programmers to manage both distributed memory and computational units of a very different nature. Fortunately, there has been extensive research on the development of frameworks that raise the level of abstraction of cluster-based applications, thus enabling the use of programming models that are much more convenient than the traditional one based on message passing. One such proposal is the Hierarchically Tiled Array (HTA), a data type that represents globally distributed arrays on which it is possible to perform a wide range of data-parallel operations. In this paper we explore for the first time the development of heterogeneous applications for clusters using HTAs. In order to use a high-level API also for the heterogeneous parts of the application, we developed them using the Heterogeneous Programming Library (HPL), which operates on top of OpenCL while providing much better programmability. Our experiments show that this approach is a very attractive alternative, as it obtains large programmability benefits with respect to a traditional implementation based on MPI and OpenCL, while presenting average performance overheads of just around 2%.
Abstract: A high degree of multicollinearity among the explanatory variables severely impairs the estimation of regression coefficients by Ordinary Least Squares. Several methods have been suggested to ameliorate the deleterious effects of multicollinearity.
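One commonly suggested remedy (not necessarily among those this paper evaluates) is ridge regression, which trades a little bias for a large reduction in coefficient variance. A sketch on a deliberately collinear design, with all data and the penalty value chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + 1e-4 * rng.normal(size=n)   # nearly collinear with x1
X = np.column_stack([x1, x2])
beta_true = np.array([1.0, 1.0])
y = X @ beta_true + rng.normal(size=n)

# OLS: unstable because X'X is near-singular under multicollinearity.
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Ridge: (X'X + lam*I)^(-1) X'y shrinks the ill-determined direction.
lam = 1.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)

print(beta_ols)    # wildly inflated, offsetting coefficients
print(beta_ridge)  # close to the true (1, 1)
```

The OLS coefficients blow up along the near-null direction of X'X while their sum stays well identified; the ridge penalty suppresses exactly that unstable direction.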
In this work, we discuss an extension of the set of principles that should guide the future design and development of skeletal programming systems, as defined by Cole in his “pragmatic manifesto” paper. The three further principles introduced relate to the ability to exploit existing sequential code as well as to the ability to target typical modern architectures: those made of heterogeneous processing elements with dynamically varying availability, processing power, and connectivity features, such as grids or heterogeneous, non-dedicated clusters. We outline two skeleton-based programming environments currently developed at our university and discuss how these environments adhere to the proposed set of principles. Finally, we outline how some other relevant, well-known skeleton environments conform to the same set of principles.
This paper examines panel data models when the regression coefficients are fixed, random, and mixed, and proposes different estimators for these models. We use Monte Carlo simulation to compare the behavior of several estimation methods, such as the Random Coefficient Regression (RCR), Classical Pooling (CP), and Mean Group (MG) estimators, in the three cases of regression coefficients. The Monte Carlo simulation results suggest that the RCR estimators perform well in small samples if the coefficients are random, the CP estimators perform well only in the fixed-coefficient case, and the MG estimators perform well whether the coefficients are random or fixed.
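A single replication of the random-coefficients setup can be sketched as follows: unit-specific slopes drawn around a common mean, with the Mean Group estimator averaging per-unit OLS slopes and Classical Pooling running one OLS on the stacked data. The panel dimensions, variances, and intercept-free DGP are illustrative assumptions, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 50, 30                     # cross-section units and time periods
beta_mean = 1.0
beta_i = beta_mean + rng.normal(scale=0.5, size=N)   # random coefficients

x = rng.normal(size=(N, T))
y = beta_i[:, None] * x + rng.normal(size=(N, T))

# Mean Group (MG): average of the N per-unit OLS slopes.
slopes = (x * y).sum(axis=1) / (x * x).sum(axis=1)
mg = slopes.mean()

# Classical Pooling (CP): a single OLS slope on the stacked panel.
cp = (x * y).sum() / (x * x).sum()

print(mg, cp)  # both estimate the mean coefficient beta_mean = 1
```

Under this DGP both estimators center on the mean coefficient; the differences the paper studies (RCR vs. CP vs. MG) show up in their sampling variances across many such replications and in how they behave when the coefficients are fixed rather than random.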