Joan Vila - Academia.edu

Papers by Joan Vila

Building Ethernet Drivers on RTLinux-GPL

This paper describes how to port Linux Ethernet drivers to RTLinux-GPL (hereafter RTLinux). It presents an architecture that lets Linux and RTLinux share the same driver while accessing an Ethernet card. This architecture allows real-time applications to access the network as a character device (i.e. using the standard open(), read(), close(), write() and ioctl() calls) and also lets Linux use the driver as it always did (ifconfig, route, etc.). The paper also discusses scheduling policies that give RTLinux the highest priority when accessing the Ethernet card.

Risk-Based Method for Determining Separation Minima in Unmanned Aircraft Systems

Journal of Air Transportation

Risk assessment is a key issue in enabling the use of unmanned aircraft systems (UASs) in nonsegregated areas, especially in very low-level airspace and urban areas. The specific operations risk assessment (SORA) methodology represents an important milestone in performing the risk assessments required of UAS operators by aviation safety agencies for specific operations. However, the SORA is a qualitative method used for UASs operating inside a well-bounded operational volume. This paper proposes a quantitative method that can be used not only in a closed volume but also in an airspace shared by several UAS missions and even general aviation. The basis for this is providing a separation volume (a “bubble”), calculated using a risk-based approach, to prevent collisions. The method consists of setting a target level of safety, which is accomplished using a tradeoff between strategic and tactical mitigations. A probabilistic methodology for performing quantitative risk assessment o...
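
The paper's quantitative methodology is only summarized above; as a toy sketch of the underlying idea of sizing a separation "bubble" from a target level of safety (TLS), the example below assumes a 2-D Gaussian tracking error, so the radial error is Rayleigh-distributed with scale `sigma_m`, and spreads the hourly TLS across an assumed encounter rate. All parameter names and the error model are illustrative assumptions, not the paper's model.

```python
import math

def separation_radius(tls_per_hour, encounter_rate_per_hour, sigma_m):
    """Smallest bubble radius r (metres) such that the per-encounter
    probability of the tracking error exceeding r stays within the
    target level of safety.

    Assumes a 2-D Gaussian position error, i.e. a Rayleigh radial
    error with scale sigma_m, for which
        P(error > r) = exp(-r^2 / (2 sigma^2)).
    Setting this <= TLS / encounter_rate and solving for r gives:
    """
    p_target = tls_per_hour / encounter_rate_per_hour
    return sigma_m * math.sqrt(2.0 * math.log(1.0 / p_target))
```

For a TLS of 1e-6 collisions per flight hour, one encounter per hour, and a 10 m error scale, this gives a radius of about 53 m; tightening the TLS by an order of magnitude enlarges the bubble only modestly, since the radius grows with the square root of log(1/p).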

A fast method to optimise network resources for video-on-demand transmission

Proceedings of the 26th Euromicro Conference. EUROMICRO 2000. Informatics: Inventing the Future, 2000

The transmission of video on demand (VoD) will become one of the most successful services on the Internet. This implies the playback of stored video over a high-speed network with strict quality of service (QoS). Guaranteeing this QoS requires a very demanding reservation of network resources, which makes the optimisation of network resources a key issue. The paper introduces a

Network Performance Analysis based on Histogram Workload Models

2007 15th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems, 2007

Network performance analysis relies mainly on two models: a workload model and a performance model. This paper proposes to use histograms for characterising arrival workloads, together with a performance model based on a stochastic process. This new stochastic process works directly with histograms using a set of specific operators. The result is the buffer occupancy distribution, from which the loss rate and the network delay distribution can be obtained. Three traffic models are proposed: the first (the HD model) is a basic histogram model that is compact and reflects only first-order statistics. The other two capture second-order statistics: the HD(N) model is based on obtaining several histograms using different time scales, and the HD(H) model is based on the Hurst parameter and is long-range dependent. The method is evaluated using several real network traffic workloads, and the results show that the model is very accurate. The model forms an excellent basis for a decision support tool to allow system architects to predict the behavior of computer networks.
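
The histogram operators themselves are specific to the paper; the sketch below only illustrates the general idea under simplifying assumptions: arrivals per slot follow a fixed histogram, the link serves `capacity` units per slot, and the occupancy distribution evolves by the recursion Q' = min(max(Q + A - C, 0), B) until it stabilizes. This is a generic discrete-time recursion, not the HD/HD(N)/HD(H) models.

```python
def step(dist, arrivals, capacity, buf):
    """One slot of the histogram-based occupancy recursion
    Q' = min(max(Q + A - C, 0), B).
    dist: probabilities over occupancy 0..buf;
    arrivals: {packets_per_slot: probability}."""
    new = [0.0] * (buf + 1)
    for q, pq in enumerate(dist):
        if pq == 0.0:
            continue
        for a, pa in arrivals.items():
            new[min(max(q + a - capacity, 0), buf)] += pq * pa
    return new

def steady_state(arrivals, capacity, buf, iters=500):
    """Iterate the recursion from an empty buffer until (near)
    convergence; the result is the occupancy distribution."""
    dist = [1.0] + [0.0] * buf
    for _ in range(iters):
        dist = step(dist, arrivals, capacity, buf)
    return dist
```

For a toy arrival histogram {0: 0.5, 2: 0.5} on a link of capacity 1 with a 2-packet buffer, the recursion converges to the uniform occupancy distribution (1/3, 1/3, 1/3), from which loss and delay statistics can be read off.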

Efficient QoS Routing for Differentiated Services EF Flows

10th IEEE Symposium on Computers and Communications (ISCC'05)

Expedited forwarding (EF) is the differentiated services class of service that provides high-quality transmission with bounded per-node delay. Nevertheless, in order to obtain a bounded network delay, it is necessary to compute a route that meets the required end-to-end delay. We therefore study the requirements (bandwidth, buffer and delay) for a new EF connection. As detailed in the paper, with these constraints the routing problem is NP-complete. We therefore present an efficient routing scheme with low polynomial computational cost. Finally, the evaluation of this scheme shows that it is as efficient as the exact routing algorithm.
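
The paper's own scheme is not reproduced here; as a minimal polynomial-time baseline for the feasibility check it describes, the sketch below prunes links that cannot host the new flow's bandwidth and runs Dijkstra on delay, accepting a route only if it meets the end-to-end bound. The graph encoding and parameter names are assumptions for this example.

```python
import heapq

def min_delay_route(adj, src, dst, bw_needed, delay_bound):
    """adj: {node: [(neighbor, link_bandwidth, link_delay), ...]}.
    Returns a feasible path as a node list, or None."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, bw, dly in adj.get(u, []):
            if bw < bw_needed:
                continue          # link cannot host the new EF flow
            nd = d + dly
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if dist.get(dst, float("inf")) > delay_bound:
        return None               # no route meets the delay bound
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]
```

This baseline only finds the minimum-delay feasible route; the paper's scheme additionally optimizes resource usage, which this sketch does not attempt.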

Real-time transmission over Switched Ethernet using a contracts based framework

2009 IEEE Conference on Emerging Technologies & Factory Automation, 2009

Switched Ethernet is increasingly being used for real-time transmission in industrial automation. Most modern industrial switches are equipped with mechanisms to deal with time predictability. However, real-time transmission requires not only these mechanisms but also proper policies for managing network resources. This paper proposes the use of contracts. A contract is a set of transmission specifications negotiated between the applications and the run-time support; it defines the application workload and the required performance guarantees. We implement contracts for real-time streaming as an extension of FRESCOR (framework for real-time embedded systems based on contracts). This framework was initially conceived to provide deterministic performance guarantees to strictly periodic workloads. This work extends it with the concept of classes of service (CoS) to deal with a wider range of workloads and guarantees, particularly the transmission of highly variable bit rate (VBR) streams such as video. CoS enables, for example, the joint transmission of real-time periodic workloads and VBR streams. CoS are implemented using a combination of resource reservation and resource preallocation techniques. The packet scheduling facilities of managed switches and Linux are shown to be key for managing network resources. Evaluations of the effectiveness of the extended FRESCOR framework and the feasibility of using switched Ethernet in real-time industrial environments are also presented.

Network Provisioning Using Multimedia Aggregates

Advances in Multimedia, 2007

Multimedia traffic makes network provisioning a key issue. Optimal provisioning of network resources is crucial for reducing the service cost of multimedia transmission. Multimedia traffic requires not only provisioning bandwidth and buffer resources in the network but also guaranteeing a given maximum end-to-end delay. In this paper we present methods and tools for the optimal dimensioning of networks based on multimedia aggregates. The proposed method minimises the network resource reservations of traffic aggregates while providing a bounded delay. The paper also introduces several methods to generate multimedia traffic aggregates using real video traces. The method is evaluated using a network topology based on the European GÉANT network. The results of these simulations allow us to discover the relationship between a required delay and the necessary bandwidth reservation (or the achievable utilisation limit). An interesting conclusion of these scenarios is that, following several re...

Resource management for mobile operating systems based on the Active Object model

Personal devices are penetrating most technology market sectors nowadays. Despite their growing computational power, they execute powerful yet heavy software platforms that, in the end, expose the limitations of the physical hardware resources. Such software platforms are becoming thick layers that contain an embedded mobile operating system with graphical tools, communication software supporting wired and wireless protocols, and virtual machines and hosting platforms to enable portable code execution. In the last decade, mobile devices have also become part of some real-time domains that in the past only used specialized computing hardware; this is the case of, for example, industrial control infrastructures, where personal devices are used mainly for interfacing purposes. Still, the full adoption of personal embedded devices in real-time environments has not been achieved due to their temporal unpredictability, derived, among other reasons, from the operating system's execution and concurrency model. Therefore, mechanisms for efficient and timely management of resources are needed to meet, at least, the soft real-time constraints of emerging application domains that are heavy resource consumers. In this paper, we describe a scheme for integrating resource management techniques on top of the concurrency model of embedded operating systems that use the active object concurrency model; we illustrate the approach by taking, as an example, the model of Symbian. Results that validate the proposed resource management scheme are also presented and discussed.
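
As a language-neutral illustration of the active object concurrency model discussed above (Symbian's own active scheduler is cooperative and single-threaded; this Python sketch is only an analogy, not the paper's mechanism), the class below serializes all requests through one internal thread:

```python
import queue
import threading

class ActiveObject:
    """Minimal active object: callers enqueue requests; a single
    internal thread executes them one at a time, serializing all
    access to the object's state."""

    def __init__(self):
        self._requests = queue.Queue()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def call(self, fn, *args):
        """Submit fn(*args) to the internal thread and wait for it."""
        done = threading.Event()
        box = {}
        def task():
            box["result"] = fn(*args)
            done.set()
        self._requests.put(task)
        done.wait()
        return box["result"]

    def _run(self):
        while True:
            self._requests.get()()   # pop the next request and run it
```

Because every request runs on the object's single internal thread, shared state needs no locks; a resource manager in the spirit of the paper could be layered on top by bounding or prioritising the request queue.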

Developing CAN based networks on RT-Linux

ETFA 2001. 8th International Conference on Emerging Technologies and Factory Automation. Proceedings (Cat. No.01TH8597)

Using exact feasibility tests for allocating real-time tasks in multiprocessor systems

Proceeding. 10th EUROMICRO Workshop on Real-Time Systems (Cat. No.98EX168)

This paper introduces improvements in partitioning schemes for multiprocessor real-time systems which allow higher processor utilization and enhanced schedulability by using exact feasibility tests to evaluate the schedulability limit of a processor. The paper analyzes how to combine these tests with existing bin-packing algorithms for processor allocation and provides new variants which are
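
A minimal sketch of the two ingredients named above, assuming fixed-priority rate-monotonic scheduling with deadlines equal to periods: the exact response-time test, and a first-fit bin-packing allocator that uses it as the admission criterion. The paper's own variants are not reproduced here.

```python
import math

def rta_feasible(tasks):
    """Exact response-time analysis for fixed-priority preemptive
    scheduling. tasks = [(C, T)] sorted by period (rate-monotonic
    priority order), with deadlines equal to periods."""
    for i, (c_i, t_i) in enumerate(tasks):
        r = c_i
        while True:
            # interference from all higher-priority tasks
            r_new = c_i + sum(math.ceil(r / t_j) * c_j
                              for c_j, t_j in tasks[:i])
            if r_new > t_i:
                return False        # response time exceeds deadline
            if r_new == r:
                break               # fixed point reached
            r = r_new
    return True

def first_fit(tasks, n_procs):
    """First-fit allocation, admitting a task onto a processor only
    if the exact test accepts the resulting task set."""
    procs = [[] for _ in range(n_procs)]
    for task in sorted(tasks, key=lambda t: t[1]):   # RM order
        for p in procs:
            if rta_feasible(p + [task]):
                p.append(task)
                break
        else:
            return None             # task fits on no processor
    return procs
```

Note that the exact test accepts the pairing of tasks (1, 2) and (1, 3) (utilization ≈ 0.833), which the Liu and Layland utilization bound for two tasks (≈ 0.828) would reject; this is precisely how exact tests raise achievable processor utilization in partitioned schemes.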

Analysis of Self-Similar Workload on Real-Time Systems

2010 16th IEEE Real-Time and Embedded Technology and Applications Symposium, 2010

Real-time systems used in media processing and transmission produce bursty workloads with highly variable execution and transmission times. To avoid the drawbacks of using the worst-case approach with these workloads, this paper uses a variation of the usual real-time task model in which the WCET is replaced by a discrete statistical distribution. Using this approach, tasks are characterized by their processing time over a sampling period. We could expect that increasing the sampling period would, in principle, smooth the workload variability, and that the proposed analysis would provide more deterministic long-term results. However, we have surprisingly observed that this variability does not decrease with the sampling period: workloads are bursty on many time scales. This property is known as self-similarity and is measured using the Hurst parameter. This paper studies how to properly model and analyze self-similar task sets, showing the influence of the Hurst parameter on the schedulability analysis. It shows, through an analytical model and simulations, that this parameter may have a very negative impact on system performance. As a conclusion, it can be stated that this factor should be taken into account in the statistical analysis of real-time systems, since simplistic workload models can lead to inaccurate results. It also shows that the negative effect of this parameter can be bounded using scheduling policies based on the bandwidth isolation principle.
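
The Hurst parameter mentioned above can be estimated in several ways; a common one (an assumption for this sketch, as the abstract does not say which estimator the paper uses) is the aggregated-variance method, which exploits the fact that for a self-similar series the variance of m-aggregated block means scales as m^(2H-2):

```python
import math

def hurst_aggregated_variance(series, scales=(1, 2, 4, 8, 16, 32)):
    """Estimate the Hurst parameter H via the aggregated-variance
    method: fit log(var of m-block means) vs log(m); slope = 2H - 2."""
    xs, ys = [], []
    for m in scales:
        n = len(series) // m
        blocks = [sum(series[i * m:(i + 1) * m]) / m for i in range(n)]
        mean = sum(blocks) / n
        var = sum((b - mean) ** 2 for b in blocks) / n
        if var > 0:
            xs.append(math.log(m))
            ys.append(math.log(var))
    # least-squares slope of the log-log plot
    k = len(xs)
    mx, my = sum(xs) / k, sum(ys) / k
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 1.0 + slope / 2.0
```

For uncorrelated workloads the slope is -1 and H ≈ 0.5; self-similar, long-range-dependent workloads yield H between 0.5 and 1, with higher values indicating burstiness across more time scales.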

A fast and efficient backup routing scheme with bounded delay guarantees

2006 2nd Conference on Next Generation Internet Design and Engineering, 2006. NGI '06.

Reliable transmission is essential for several real-time applications. Backup channels introduce the notion of availability at the cost of increased use of network resources. However, this over-provisioning of resources is potentially wasted, since packet delays are usually lower than the required end-to-end channel delay. The goal of this paper is to present a new scheme for obtaining the

On the nature and impact of self-similarity in real-time systems

Real-Time Systems, 2012

In real-time systems with highly variable task execution times, simplistic task models are insufficient to accurately model and analyze the system. Variability can be tackled using distributions rather than a single value, but the proper characterization depends on the degree of variability. Self-similarity is one of the deepest kinds of variability: it characterizes the fact that a workload is not only highly variable but also bursty on many timescales. This paper identifies the situations in which this source of indeterminism can appear in a real-time system: the combination of variability in task inter-arrival times and execution times. Although self-similarity does not arise in all systems with variable execution times, it is not unusual in applications with real-time requirements such as video processing, networking and gaming. The paper shows how to properly model and analyze self-similar task sets and how improper modeling can mask deadline misses. The paper derives an analytical expression for the dependence of the deadline miss ratio on the degree of self-similarity and proves its negative impact on real-time systems performance through system modeling and simulation. This study of the nature and impact of self-similarity on soft real-time systems can help to reduce its effects, to choose proper scheduling policies, and to avoid its causes at system design time.

An analysis method for variable execution time tasks based on histograms

Real-Time Systems, 2007

Real-time analysis methods are usually based on worst-case execution times (WCET). This leads to pessimistic results and poor resource utilisation when applied to tasks with highly variable execution times. This paper proposes a discrete statistical description of task execution times based on histograms. The proposed characterisation facilitates a powerful analytical method which offers a statistical distribution of task response times. The analysis enables workloads with a utilisation higher than 1 to be studied during transient overloads. System behaviour is shown to be a stochastic process that converges to a steady-state probability distribution when the average utilisation is lower than or equal to 1. The paper shows that workload isolation is a desirable property of scheduling algorithms which greatly aids analysis and makes it algorithm-independent. The alternative, when workload isolation cannot be assumed, is the so-called interference method, which is also introduced for the case of GPS (Generalised Processor Sharing) algorithms. As an example, the proposed method is evaluated using network routers and real traffic traces. The obtained results are compared to alternative analysis methods based on solving queueing systems (M/D/1/N) analytically.

A proactive backup scheme for reliable real-time transmission

Journal of Parallel and Distributed Computing, 2009

Reliable transmission is a key issue for distributed real-time applications. The concept of the Real-time Dependable Channel was introduced to provide availability for real-time transmission. Two aspects are important for the efficiency of a Real-time Dependable Channel: assuring the end-to-end delay bound and optimising the utilisation of network resources. A packet can miss its delay bound for two reasons: network congestion or network failure. The classic solution to this problem has been the use of Backup Channels, which introduce the notion of availability at the expense of increased use of network resources. However, this over-provisioning of resources is potentially wasted, since the failure rate is very low. This paper introduces a new failure detection scheme for real-time transmission, called the Proactive Backup Channel. This scheme is based on activating the backup channel before a network failure or congestion occurs. This way, the failure recovery time is reduced and, as proven in the paper, the use of network resources is minimized.

Dynamic Scheduling Solutions for Real-Time Multiprocessor Systems

Control Engineering Practice, 1997

This paper analyzes the behavior and performance of four dynamic algorithms for multiprocessor scheduling of hard real-time tasks. Among them are the well-known Earliest Deadline First (EDF) and Least Laxity First (LLF) algorithms; an important feature of the original ones is that they are guarantee-oriented. The performance of these algorithms has been measured through simulations which analyze their behavior under two main load hypotheses: static load and dynamic load. Simulation results present practical bounds and a comparative study of the loads the algorithms are able to guarantee, context switches and CPU utilization.
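
A unit-tick EDF simulation in the spirit of the study above (a generic sketch, not the paper's simulator; LLF and the guarantee-oriented variants are omitted). Tasks are (execution time, period) pairs with deadlines equal to periods, and the simulator counts deadline misses on a single processor:

```python
import heapq

def edf_simulate(tasks, horizon):
    """Unit-tick, single-processor EDF with deadline = period.
    tasks: [(C, T)]. Returns the number of deadline misses in
    [0, horizon)."""
    ready = []                        # heap of (abs_deadline, id, remaining)
    next_release = [0] * len(tasks)
    misses = 0
    for t in range(horizon):
        # release new jobs
        for i, (c, p) in enumerate(tasks):
            if t == next_release[i]:
                heapq.heappush(ready, (t + p, i, c))
                next_release[i] += p
        # any unfinished job whose deadline has arrived is a miss
        while ready and ready[0][0] <= t:
            heapq.heappop(ready)
            misses += 1
        # run the earliest-deadline job for one tick
        if ready:
            d, i, rem = heapq.heappop(ready)
            if rem > 1:
                heapq.heappush(ready, (d, i, rem - 1))
    return misses
```

Under EDF with implicit deadlines, a task set is schedulable on one processor iff its total utilization is at most 1, so the set {(1, 2), (1, 3)} (U ≈ 0.83) runs without misses, while {(1, 2), (2, 3)} (U ≈ 1.17) cannot.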

Web server performance analysis using histogram workload models

Computer Networks, 2009

Web servers are required to perform millions of transaction requests per day at an acceptable Quality of Service (QoS) level in terms of client response time and server throughput. Consequently, a thorough understanding of the performance capabilities and limitations of web servers is critical. Finding a simple web traffic model, described by a reasonable number of parameters, that enables powerful analysis methods and provides accurate results has been a challenging problem over the last few decades. This paper proposes a discrete statistical description of web traffic based on histograms. In order to reflect the second-order statistics (long-range dependence and self-similarity) of the workload, this basic model has been extended using the Hurst parameter. Then, a system performance model based on histogram operators (a histogram calculus) is introduced. The proposed model has been evaluated on real workload traces using a single-site server model. These evaluations show that the model is accurate and improves on the results of classic queueing models. The model provides an excellent basis for a decision support tool to predict the behavior of web servers.

Network queue and loss analysis using histogram-based traffic models

Computer Communications, 2010

This paper proposes a new performance analysis for obtaining the queue length distribution in a finite-size buffer system. The analysis uses a simple traffic model based on histograms that captures the arrival rate distribution in a network model. In order to reflect the second-order statistics (long-range dependence and self-similarity) of network traffic, this basic model has been extended using the Hurst parameter. The buffer occupancy distribution is obtained using these traffic models and defining a new stochastic process that works directly with histograms. The loss probability and network delay distribution can be obtained from this distribution. The estimation of the expected traffic loss probability is a key issue in provisioning Quality of Service in telecommunications networks. This paper shows that the proposed buffer occupancy distribution and loss probability calculation techniques are very precise.

DRST: a new Network Simulation Tool for Differentiated Services

The Differentiated Services (DiffServ) architecture defines a new framework for the support of quality of service (QoS) in IP-based networks. This paper presents a distributed, object-oriented simulator based on a functional model of this architecture that makes it easy to configure and test different router and network configurations across the traffic conditioning and per-hop behavior (PHB) functionalities described in the DiffServ architecture. The simulator is implemented in an object-oriented fashion using Java RMI; it provides classes for all the datapath elements defined in the model, a scheme for easily interconnecting them to form a DAG as specified in a configuration file, and means for distributed simulation.

TRECOM: Reliable distributed embedded real-time systems based on components TIC2002-04123-C03

Citeseer

The TRECOM project is aimed at developing methods and tools for building distributed embedded real-time systems with high reliability and quality-of-service requirements. The approach is based on using software component technologies, integrating specification and analysis methods for system properties related to reliability, quality of service, and temporal behaviour.

Research paper thumbnail of Building Ethernet Drivers on RTLinux-GPL1

This paper describes how to port Linux Ethernet drivers to RTLinux-GPL (hereafter RTLinux). It pr... more This paper describes how to port Linux Ethernet drivers to RTLinux-GPL (hereafter RTLinux). It presents an architecture that will let Linux and RTLinux to share the same driver while accessing an Ethernet card. This architecture will let real-time applications to access the network as a character device (i.e. using the standard open(), read(), close(), write() and ioctl() calls) and also would let Linux to use the driver as it always did (ifconfig, route, etc.). This paper also discusses some scheduling policies that will give to RTLinux the highest priority while accessing the Ethernet card.

Research paper thumbnail of Risk-Based Method for Determining Separation Minima in Unmanned Aircraft Systems

Journal of Air Transportation

Risk assessment is a key issue in enabling the use of unmanned aircraft systems (UASs) in nonsegr... more Risk assessment is a key issue in enabling the use of unmanned aircraft systems (UASs) in nonsegregated areas, especially in very low-level airspace and urban areas. The specific operations risk assessment (SORA) methodology represents an important milestone in performing the risk assessments required by aviation safety agencies to UAS operators in specific operations. However, the SORA is a qualitative method used for UASs operating inside a well-bounded operational volume. This paper proposes a quantitative method that can not only be used in a closed volume but also in an airspace shared by several UAS missions and even general aviation. The basis for this is providing a separation volume (a “bubble”) to prevent collisions that is calculated using a risk-based approach. The method consists of setting a target level of safety, which is accomplished using a tradeoff between strategic and tactical mitigations. A probabilistic methodology for performing quantitative risk assessment o...

Research paper thumbnail of A fast method to optimise network resources for video-on-demand transmission

Proceedings of the 26th Euromicro Conference. EUROMICRO 2000. Informatics: Inventing the Future, 2000

The transmission of video on demand (VoD) will become one of the most successful services on the ... more The transmission of video on demand (VoD) will become one of the most successful services on the Internet. This implies the playback of stored video over a high-speed network with a strict quality of service (QoS). Guaranteeing this QoS requires a very demanding reservation of network resources. That makes optimisation of network resources a key issue. The paper introduces a

Research paper thumbnail of Network Performance Analysis based on Histogram Workload Models

2007 15th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems, 2007

Network performance analysis relies mainly on two models: a workload model and a performance mode... more Network performance analysis relies mainly on two models: a workload model and a performance model. This paper proposes to use histograms for characterising the arrival workloads and a performance model based on a stochastic process. This new stochastic process works directly with histograms using a set of specific operators. The result is the buffer occupancy distribution. The loss rate and network delay distribution can be obtained using this distribution. Three traffic models are proposed: the first model (the HD model) is a basic histogram model that is compact and reflects only first order statistics. The other two captures second order statistics: the (HD (N) model) is based on obtaining several histograms using different time scales and the (the HD (H) model) is based on the Hurst parameter and it is long-range dependent. This method is evaluated using several real traffic network workloads and the results show that the model is very accurate. The model forms an excellent basis for a decision support tool to allow system architects to predict the behavior of computer networks.

Research paper thumbnail of Efficient QoS Routing for Differentiated Services EF Flows

10th IEEE Symposium on Computers and Communications (ISCC'05)

ABSTRACT Expedited forwarding (EF) is the differentiated services class of service that provides ... more ABSTRACT Expedited forwarding (EF) is the differentiated services class of service that provides high quality transmission with node bounded delay. Nevertheless, in order to obtain a bounded network delay it is necessary to compute a route that meets the required end-to-end delay. Therefore, we study the requirements (bandwidth, buffer and delay) for a new EF connection. As detailed in the paper, with these constraints the routing algorithm is a NP-complete problem. Therefore, we present an efficient routing scheme that has low polynomial computational cost. Finally, the evaluation of this routing shows that is as efficient as using the exact routing algorithm.

Research paper thumbnail of Real-time transmission over Switched Ethernet using a contracts based framework

2009 IEEE Conference on Emerging Technologies & Factory Automation, 2009

ABSTRACT Switched Ethernet is being used for real time transmissions in industrial automation mor... more ABSTRACT Switched Ethernet is being used for real time transmissions in industrial automation more and and more. Most modern industrial switches are equipped with mechanisms to deal with time predictability. However, real-time transmission not only requires these mechanisms, but also the proper policies for managing network resources. This paper proposes the use of contracts. A contract is a set of transmission specifications which are negotiated between the applications and the run-time support. They define the application workload and the required performance guarantees. We implement contracts for real-time streaming as an extension of FRESCOR (framework for real-time embedded systems based on contracts). This framework was initially thought for providing deterministic performance guarantees to strictly periodic workloads. This work extends it by using the concept of classes of service (CoS) to deal with a wider range of workloads and guarantees and, particularly, with the transmission of highly variable bit rate (VBR) streams, like video. CoS enables, for example, joint transmission of real-time periodic workloads and VBR streams. CoS are implemented using a combination of resource reservation and resource preallocation techniques. The packet scheduling facilities of managed switches and Linux are shown to be key for managing network resources. Evaluations about the effectiveness of the extended FRESCOR framework and the feasibility of using switched Ethernet in real-time industrial environments are also presented.

Research paper thumbnail of Network Provisioning Using Multimedia Aggregates

Advances in Multimedia, 2007

Multimedia traffic makes network provisioning a key issue. Optimal provisioning of network resour... more Multimedia traffic makes network provisioning a key issue. Optimal provisioning of network resources is crucial for reducing the service cost of multimedia transmission. Multimedia traffic requires not only provisioning bandwidth and buffer resources in the network but also guaranteeing a given maximum end-to-end delay. In this paper we present methods and tools for the optimal dimensioning of networks based on multimedia aggregates. The proposed method minimises the network resources reservations of traffic aggregates providing a bounded delay. The paper also introduces several methods to generate multimedia traffic aggregation using real video traces. The method is evaluated using a network topology based on the European GÉANT network. The results of these simulations allow us to discover the relationship between a required delay and the necessary bandwidth reservation (or the achievable utilisation limit). An interesting conclusion of these scenarios is that, following several re...

Research paper thumbnail of Resource management for mobile operating systems based on the Active Object model

Personal devices are penetrating most technology market sectors nowadays. Despite their growing c... more Personal devices are penetrating most technology market sectors nowadays. Despite their growing computation power, they execute powerful yet heavy software platforms that, in the end, expose the physical hardware resource limitations. Such software platforms are becoming thick layers that contain an embedded mobile operating system with graphical tools, communication software supporting wired and wireless protocols, and virtual machines and hosting platforms to enable portable code execution. In the last decade, mobile devices have also become part of some real-time domains that in the past only used specialized computing hardware; this is the case of, for example, industrial control infrastructures where personal devices are used mainly for interfacing purposes. Still, the full adoption of personal embedded devices in real-time environments has not been achieved due to their temporal unpredictability derived, among other reasons, from the operating system's execution and concurrency model. Therefore, mechanisms for efficient and timely management of resources are needed to meet, at least, soft real-time constrains of the emerging application domains that are heavy resource consumers. In this paper, we describe a scheme for integrating resource management techniques on top of the concurrency model of embedded operating systems that use the active object concurrency model; we illustrate the approach by taking, just as example, the model of Symbian. Also, results are presented and discussed that validate the proposed resource management scheme.

Research paper thumbnail of Developing CAN based networks on RT-Linux

ETFA 2001. 8th International Conference on Emerging Technologies and Factory Automation. Proceedings (Cat. No.01TH8597)

Research paper thumbnail of Using exact feasibility tests for allocating real-time tasks in multiprocessor systems

Proceeding. 10th EUROMICRO Workshop on Real-Time Systems (Cat. No.98EX168)

This paper introduces improvements in partitioning schemes for multiprocessor real-time systems which allow higher processor utilization and enhanced schedulability by using exact feasibility tests to evaluate the schedulability limit of a processor. The paper analyzes how to combine these tests with existing bin-packing algorithms for processor allocation and provides new variants which are

Research paper thumbnail of Analysis of Self-Similar Workload on Real-Time Systems

2010 16th IEEE Real-Time and Embedded Technology and Applications Symposium, 2010

Real-time systems used in media processing and transmission produce bursty workloads with highly variable execution and transmission times. To avoid the drawbacks of using the worst-case approach with these workloads, this paper uses a variation of the usual real-time task model where the WCET is replaced by a discrete statistical distribution. Using this approach, tasks are characterized by their processing time over a sampling period. We could expect that increasing the sampling period would smooth, in principle, the workload variability and that the proposed analysis would provide more deterministic long-term results. However, we have surprisingly observed that this variability does not decrease with the sampling period: workloads are bursty on many time scales. This property is known as self-similarity and is measured using the Hurst parameter. This paper studies how to properly model and analyze self-similar task sets, showing the influence of the Hurst parameter on the schedulability analysis. It shows, through an analytical model and simulations, that this parameter may have a very negative impact on system performance. As a conclusion, it can be stated that this factor should be taken into account for statistical analysis of real-time systems, since simplistic workload models can lead to inaccurate results. It also shows that the negative effect of this parameter can be bounded using scheduling policies based on the bandwidth isolation principle.

Research paper thumbnail of A fast and efficient backup routing scheme with bounded delay guarantees

2006 2nd Conference on Next Generation Internet Design and Engineering, 2006. NGI '06.

Reliable transmission is essential for several real-time applications. Backup channels introduce the notion of availability at the cost of increasing the use of network resources. However, this over-provisioning of resources is potentially wasted, since packet delays are usually lower than the required end-to-end channel delay. The goal of this paper is to present a new scheme for obtaining the

Research paper thumbnail of On the nature and impact of self-similarity in real-time systems

Real-Time Systems, 2012

In real-time systems with highly variable task execution times, simplistic task models are insufficient to accurately model and analyze the system. Variability can be tackled using distributions rather than a single value, but the proper characterization depends on the degree of variability. Self-similarity is one of the deepest forms of variability: it characterizes the fact that a workload is not only highly variable, but also bursty on many timescales. This paper identifies the situations in which this source of indeterminism can appear in a real-time system: the combination of variability in task inter-arrival times and execution times. Although self-similarity does not arise in all systems with variable execution times, it is not unusual in applications with real-time requirements, like video processing, networking and gaming. The paper shows how to properly model and analyze self-similar task sets and how improper modeling can mask deadline misses. The paper derives an analytical expression for the dependence of the deadline miss ratio on the degree of self-similarity and proves its negative impact on real-time systems performance through system modeling and simulation. This study of the nature and impact of self-similarity on soft real-time systems can help to reduce its effects, to choose the proper scheduling policies, and to avoid its causes at system design time.
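The Hurst parameter used in this line of work can be estimated in several ways; one classic approach is the aggregated-variance method, where the variance of the m-aggregated series decays as m^(2H-2) for a self-similar process. The sketch below is purely illustrative (it is not the paper's implementation, and the i.i.d. Gaussian input is only a sanity check for which theory predicts H near 0.5):

```python
import math
import random
import statistics

def hurst_aggregated_variance(series, block_sizes):
    """Estimate the Hurst parameter via the aggregated-variance method.

    For a self-similar process, the variance of the m-aggregated series
    decays as m**(2H - 2); the slope of log(variance) vs. log(m) gives H.
    """
    xs, ys = [], []
    for m in block_sizes:
        n_blocks = len(series) // m
        agg = [sum(series[i * m:(i + 1) * m]) / m for i in range(n_blocks)]
        xs.append(math.log(m))
        ys.append(math.log(statistics.pvariance(agg)))
    # least-squares slope of log-variance against log-m
    mean_x, mean_y = statistics.fmean(xs), statistics.fmean(ys)
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return 1.0 + slope / 2.0

random.seed(42)
iid = [random.gauss(0, 1) for _ in range(16384)]
h = hurst_aggregated_variance(iid, [1, 2, 4, 8, 16, 32, 64])
print(round(h, 2))  # short-range-dependent noise: theory predicts H ~ 0.5
```

A genuinely bursty workload (e.g. heavy-tailed execution times combined with variable inter-arrivals, as the abstract describes) would instead yield H well above 0.5.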

Research paper thumbnail of An analysis method for variable execution time tasks based on histograms

Real-Time Systems, 2007

Real-time analysis methods are usually based on worst-case execution times (WCET). This leads to pessimistic results and poor resource utilisation when applied to tasks with highly variable execution times. This paper proposes a discrete statistical description of task execution times, known as histograms. The proposed characterisation facilitates a powerful analytical method which offers a statistical distribution of task response times. The analysis enables workloads to be studied with a utilisation higher than 1 during transient overloads. System behaviour is shown to be a stochastic process that converges to a steady-state probability distribution when the average utilisation is less than or equal to 1. The paper shows that workload isolation is a desirable property of scheduling algorithms which greatly aids analysis and makes it algorithm-independent. The alternative, when workload isolation cannot be assumed, is the so-called interference method, which is also introduced for the case of GPS (Generalised Processor Sharing) algorithms. As an example, the proposed method is evaluated using network routers and real traffic traces. The obtained results are compared to alternative analysis methods based on solving queueing systems (M/D/1/N) analytically.
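A basic building block of any such histogram-based analysis is combining independent per-task execution-time histograms by convolution to obtain the distribution of aggregate demand. A minimal sketch with made-up task histograms (the paper's full method, including response-time distributions, is considerably more elaborate):

```python
from collections import defaultdict

def convolve(hist_a, hist_b):
    """Combine two independent execution-time histograms.

    Each histogram maps an execution time to its probability; the total
    demand of two independent tasks is the convolution of the two.
    """
    out = defaultdict(float)
    for ta, pa in hist_a.items():
        for tb, pb in hist_b.items():
            out[ta + tb] += pa * pb
    return dict(out)

# Two hypothetical tasks with variable execution times
# (time units -> probability).
task1 = {2: 0.7, 5: 0.3}
task2 = {1: 0.5, 4: 0.5}
demand = convolve(task1, task2)
# probability that total demand exceeds a budget of 6 time units
overload = sum(p for t, p in demand.items() if t > 6)
print(demand)    # -> {3: 0.35, 6: 0.5, 9: 0.15}
print(overload)  # only the 9-unit outcome exceeds the budget
```

Unlike a WCET-based test, which would flag this pair as demanding 9 units, the histogram view quantifies how rarely that worst case actually occurs.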

Research paper thumbnail of A proactive backup scheme for reliable real-time transmission

Journal of Parallel and Distributed Computing, 2009

Reliable transmission is a key issue for distributed real-time applications. The concept of the Real-time Dependable Channel was introduced to provide availability for real-time transmission. Two aspects are important for the efficiency of a Real-time Dependable Channel: assuring the end-to-end delay bound and optimising the utilisation of network resources. A packet can miss its delay bound for two reasons: network congestion or network failure. The classic solution to this problem has been the use of Backup Channels, which introduces the notion of availability at the expense of increasing the use of network resources. However, this over-provisioning of resources is potentially wasted, since the failure rate is very low. This paper introduces a new failure detection scheme for real-time transmission, called the Proactive Backup Channel. This scheme is based on activating the backup channel before a network failure or congestion occurs. This way, the failure recovery time is reduced and, as proven in the paper, the use of network resources is minimized.

Research paper thumbnail of Dynamic Scheduling Solutions for Real-Time Multiprocessor Systems

Control Engineering Practice, 1997

This paper analyzes the behavior and performance of four dynamic algorithms for multiprocessor scheduling of hard real-time tasks. Among them are the well-known Earliest Deadline First (EDF) algorithm and the Least Laxity First (LLF) algorithm. An important feature of the original ones is that they are guarantee-oriented. The performance of these algorithms has been measured through simulations which analyze their behavior under two main load hypotheses: static load and dynamic load. Simulation results present practical bounds and a comparative study of the loads that they are able to guarantee, context switches and CPU utilization.
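As a rough illustration of what such scheduling simulations measure, here is a toy discrete-time sketch of preemptive EDF on a single processor that counts deadline misses. It is not the paper's simulator, and the task parameters are invented; implicit deadlines (deadline equals period) are assumed:

```python
import heapq

def edf_schedule(tasks, horizon):
    """Simulate preemptive EDF for periodic tasks (deadline == period)
    on one processor over `horizon` time units; return deadline misses."""
    ready, misses = [], 0  # ready jobs as [absolute_deadline, remaining]
    for t in range(horizon):
        for period, wcet in tasks:
            if t % period == 0:  # a new job is released
                heapq.heappush(ready, [t + period, wcet])
        if ready:
            job = ready[0]  # run the job with the earliest deadline
            job[1] -= 1
            if job[1] == 0:
                heapq.heappop(ready)  # job completed
        # any unfinished job whose deadline has now passed is a miss
        while ready and ready[0][0] <= t + 1:
            heapq.heappop(ready)
            misses += 1
    return misses

# Utilisation 1/2 + 2/5 = 0.9 <= 1: EDF meets every deadline.
print(edf_schedule([(2, 1), (5, 2)], 100))  # -> 0
# Utilisation 2/2 + 2/3 > 1: an overload, so some deadlines are missed.
print(edf_schedule([(2, 2), (3, 2)], 100))
```

Extending such a skeleton to several processors and to LLF is where the comparative questions studied in the paper (guaranteed load, context switches, CPU utilization) arise.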

Research paper thumbnail of Web server performance analysis using histogram workload models

Computer Networks, 2009

Web servers are required to perform millions of transaction requests per day at an acceptable Quality of Service (QoS) level in terms of client response time and server throughput. Consequently, a thorough understanding of the performance capabilities and limitations of web servers is critical. Finding a simple web traffic model, described by a reasonable number of parameters, that enables powerful analysis methods and provides accurate results has been a challenging problem during the last few decades. This paper proposes a discrete statistical description of web traffic that is based on histograms. In order to reflect the second-order statistics (long-range dependence and self-similarity) of the workload, this basic model has been extended using the Hurst parameter. Then, a system performance model based on histogram operators (histogram calculus) is introduced. The proposed model has been evaluated using real workload traces on a single-site server model. These evaluations show that the model is accurate and improves on the results of classic queueing models. The model provides an excellent basis for a decision support tool to predict the behavior of web servers.

Research paper thumbnail of Network queue and loss analysis using histogram-based traffic models

Computer Communications, 2010

This paper proposes a new performance analysis for obtaining the queue length distribution in a finite-size buffer system. This analysis uses a simple traffic model based on histograms that captures the arrival rate distribution on a network model. In order to reflect the second-order statistics (long-range dependence and self-similarity) of network traffic, this basic model has been extended using the Hurst parameter. The buffer occupancy distribution is obtained using these traffic models and defining a new stochastic process that works directly with histograms. The loss probability and network delay distribution can be obtained from this distribution. The estimation of the expected traffic loss probability is a key issue in provisioning Quality of Service in telecommunications networks. This paper shows that the proposed buffer occupancy distribution and loss probability calculation techniques are very precise.
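A stochastic process of this kind can be sketched as a Markov-chain iteration over buffer occupancies, applying the classic recursion Q' = min(B, max(0, Q + A - C)) to a whole distribution instead of a single value. The numbers below are assumptions for illustration only, not the paper's model or traces:

```python
def evolve(queue_dist, arrivals, capacity, buffer_size):
    """One step of a histogram-based buffer process:
    Q' = min(B, max(0, Q + A - C)), applied to distributions."""
    out = [0.0] * (buffer_size + 1)
    for q, pq in enumerate(queue_dist):
        if pq == 0.0:
            continue
        for a, pa in arrivals.items():
            nq = min(buffer_size, max(0, q + a - capacity))
            out[nq] += pq * pa
    return out

# Hypothetical arrival-rate histogram (packets per interval -> probability);
# the link serves 3 packets per interval and the buffer holds 5 packets.
arrivals = {1: 0.3, 3: 0.4, 6: 0.3}
dist = [1.0] + [0.0] * 5  # start from an empty buffer
for _ in range(200):      # iterate towards the steady state
    dist = evolve(dist, arrivals, capacity=3, buffer_size=5)
print([round(p, 3) for p in dist])  # steady-state occupancy distribution
```

From the resulting occupancy distribution, loss probability and delay distributions follow, which is the role this process plays in the analysis described above.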

Research paper thumbnail of DRST: a new Network Simulation Tool for Differentiated Services

The Differentiated Services (DiffServ) architecture defines a new framework for the support of quality of service (QoS) in IP-based networks. This paper presents a distributed, object-oriented simulator based on a functional model of this architecture that makes it easy to configure and test different router and network configurations across the traffic conditioning and per-hop behavior (PHB) functionalities described in the DiffServ architecture. The simulator is implemented in an object-oriented fashion using Java RMI; it provides classes for all the datapath elements defined in the model, a scheme for easily interconnecting them to form a DAG as specified in a configuration file, and means for running distributed simulations.

Research paper thumbnail of TRECOM: Reliable distributed embedded real-time systems based on components TIC2002-04123-C03

Citeseer

The TRECOM project is aimed at developing methods and tools for building distributed, embedded real-time systems with a high level of reliability and quality of service requirements. The approach is based on using software component technologies integrating specification and analysis methods for system properties related to reliability, quality of service, and temporal behaviour.