Ali Kalakech | Universite Libanaise

Papers by Ali Kalakech

Feature Selection for Android Keystroke Dynamics

2018 International Arab Conference on Information Technology (ACIT), 2018

Keystroke Dynamic Authentication is a way of authenticating users by analyzing their typing rhythm and behavior. While key hold time, inter-key interval time, and flight time can be captured on all devices, applying Keystroke Dynamic Authentication to mobile devices allows capturing and analyzing additional keystroke features such as the finger area on the screen and the pressure applied on the key. This paper aims to reduce the number of captured features without affecting the efficiency of user prediction. For this purpose, we used a benchmark dataset and implemented three different filter feature selection methods to sort the features by their relevance. Feature sets of different sizes were then created and tested against classification methods.
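
As an illustration of the filter-then-classify workflow described above, the sketch below ranks features with a filter score and evaluates feature subsets of growing size with a classifier. The dataset, the choice of mutual information as the filter score, and KNN as the classifier are assumptions made for illustration, not necessarily the methods used in the paper.

```python
# Minimal sketch of a filter feature-selection workflow (assumed setup, not the
# paper's exact pipeline): rank features with a filter score, then evaluate
# feature subsets of increasing size with a classifier.
import numpy as np
from sklearn.datasets import load_digits          # stand-in for a keystroke dataset
from sklearn.feature_selection import mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)

# 1) Filter step: score every feature independently of any classifier.
scores = mutual_info_classif(X, y, random_state=0)
ranking = np.argsort(scores)[::-1]                # best features first

# 2) Evaluate feature sets of different sizes with a classifier.
for k in (5, 10, 20, X.shape[1]):
    subset = ranking[:k]
    acc = cross_val_score(KNeighborsClassifier(), X[:, subset], y, cv=5).mean()
    print(f"top {k:2d} features -> accuracy {acc:.3f}")
```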

Feature selection approach based on hypothesis-margin and pairwise constraints

2018 IEEE Middle East and North Africa Communications Conference (MENACOMM), 2018

In this paper, we propose a semi-supervised margin-based feature selection algorithm called Relief-Sc. It is a modification of the well-known Relief algorithm from its optimization perspective. It utilizes cannot-link constraints only to solve a simple convex problem in closed form, giving a unique solution. Experimental results on well-known datasets validate the effectiveness of the proposed algorithm. With only a little supervision information, Relief-Sc proved to be comparable to supervised feature selection algorithms and superior to unsupervised ones.
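
For readers unfamiliar with the hypothesis-margin idea behind Relief-style methods, the sketch below implements the classical, fully supervised Relief weight update: each feature's weight grows with its distance to the nearest miss and shrinks with its distance to the nearest hit. Relief-Sc itself differs, as it reformulates this margin with cannot-link constraints only and solves a convex problem in closed form; the code is meant only to convey the underlying margin notion.

```python
# Classical Relief (supervised) as a reference for the hypothesis-margin idea.
# Relief-Sc modifies this formulation to use cannot-link constraints only.
import numpy as np

def relief_weights(X, y, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        xi, yi = X[i], y[i]
        dists = np.abs(X - xi).sum(axis=1)                 # L1 distance to every sample
        dists[i] = np.inf                                  # ignore the sample itself
        same, diff = (y == yi), (y != yi)
        hit = np.argmin(np.where(same, dists, np.inf))     # nearest same-class sample
        miss = np.argmin(np.where(diff, dists, np.inf))    # nearest other-class sample
        # margin update: reward features that separate classes, penalize the rest
        w += np.abs(xi - X[miss]) - np.abs(xi - X[hit])
    return w / n_iter

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)  # only feature 0 is informative
    print(relief_weights(X, y).round(3))
```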

Congestion control dependability assessment

2018 14th International Wireless Communications & Mobile Computing Conference (IWCMC), 2018

We have examined the use of fault injection to evaluate the dependability of a transport layer protocol (i.e., TCP) in wireless sensor networks (WSN). We focused on the layer's main service, congestion control, and then defined the workload, the faultload, and the dependability measures. The workload has two main components: the WSN architecture and the execution profile. We define the faults that may affect the different services and present their implementations in order to specify the faultload. We introduce two dependability measures: the tolerance threshold and the recovery time. We show that communication is interrupted once a given tolerance threshold is exceeded (below this limit, the service needs a recovery time to resume execution). Our benchmark targets eight different congestion control algorithms and presents several experiments to assess their dependability. According to our measures, BIC is the most dependable algorithm, whereas NewReno and Veno are the least dependable.
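
One possible way to turn fault-injection logs into the two measures named above (tolerance threshold and recovery time) is sketched below. The log format, field names, and the exact definitions are assumptions for illustration only; the paper's benchmark may define and compute these measures differently.

```python
# Assumed post-processing of fault-injection experiment logs into a tolerance
# threshold and a mean recovery time (illustrative definitions, not the paper's).
from dataclasses import dataclass

@dataclass
class Experiment:
    fault_duration: float   # seconds the fault (e.g., a loss burst) was injected
    resumed: bool           # did the TCP flow resume after the fault ended?
    resume_delay: float     # seconds between fault end and resumed transmission

def tolerance_threshold(runs):
    """Largest injected fault duration that the congestion control still survived."""
    survived = [r.fault_duration for r in runs if r.resumed]
    return max(survived) if survived else 0.0

def mean_recovery_time(runs):
    """Average time needed to resume transmission, over the tolerated faults."""
    delays = [r.resume_delay for r in runs if r.resumed]
    return sum(delays) / len(delays) if delays else float("inf")

runs = [Experiment(1.0, True, 0.4), Experiment(2.0, True, 0.9), Experiment(4.0, False, 0.0)]
print(tolerance_threshold(runs), mean_recovery_time(runs))
```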

Performance of Revocation Protocols for Vehicular Ad-Hoc Network. Review of State-of-Art Techniques and Proposition of New Enhancement Revocation Method

2018 2nd Cyber Security in Networking Conference (CSNet), 2018

Security is one of the most important issues concerning Vehicular Ad-Hoc Networks (VANETs), specifically when dealing with misbehaving vehicles in order to prevent them from threatening the safety of others. In this paper, we present a review of revoking misbehaving vehicles based on the classical Certificate Revocation List (CRL) of the IEEE standard. The main disadvantage of these algorithms is that the Certification Authority (CA) is overwhelmed, because it is responsible for distributing the whole CRL to all requesting vehicles. To overcome this drawback in the European Telecommunications Standards Institute (ETSI) standard, we propose a contribution that aims to minimize the tasks of the CA by decomposing the CRL into chunks that are distributed separately by the different RSUs of the same zone.
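
A minimal sketch of the distribution idea described above: the CA splits the CRL into chunks and each RSU of a zone serves one chunk, so no single entity has to push the whole list. The chunk sizing, assignment policy, and data structures here are illustrative assumptions, not the paper's protocol.

```python
# Sketch of splitting a CRL into chunks assigned round-robin to the RSUs of a
# zone (illustrative assumption, not the paper's exact scheme).
def split_crl(revoked_ids, rsus):
    """Return a mapping rsu -> list of revoked certificate ids it will distribute."""
    chunks = {rsu: [] for rsu in rsus}
    for i, cert_id in enumerate(sorted(revoked_ids)):
        chunks[rsus[i % len(rsus)]].append(cert_id)
    return chunks

zone_rsus = ["RSU-1", "RSU-2", "RSU-3"]
crl = {f"cert-{n}" for n in range(10)}
for rsu, chunk in split_crl(crl, zone_rsus).items():
    print(rsu, chunk)
```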

Reusability of DDS Information-Model for Distributed VRE

Virtual Reality Environments (VRE), which simulate reality and thus provide a safer learning environment, are increasingly being adopted to simulate complex systems. Such systems make the process of engineering virtual environments a complex task, especially due to the abundance of dynamic data types such as behaviors. In parallel, distribution services have become essential following advances in telecommunications and the subsequent demand for mobile technologies. Hence, middleware technologies provide such services to existing and newly developed applications. Data Distribution Service (DDS) is a middleware standard for real-time applications based on a peer-to-peer architecture. DDS requires awareness of the type of the distributed data, which is achieved by defining an information model. Consequently, distributing a VRE using DDS complicates the development process of its information model in order to meet the requirements of complex data types. In this paper, we propose a gen...

A Comparative Study Between Lebanon and Middle East Countries Based on Data Mining Techniques

2018 International Arab Conference on Information Technology (ACIT), 2018

Fighting poverty is one of the main objectives of the sustainable development program. In a country like Lebanon, where poverty is a real threat hidden behind an appearance of good living, the situation should be explored in depth. This paper aims to evaluate the position of Lebanon compared to other Middle East countries in terms of sustainable development. Furthermore, our goal is to reveal the strengths and weaknesses of resource management, based on income and non-income indicators retrieved from the World Bank data. For this purpose, we adopted a combination of data mining techniques as tools to study the relationship between these indicators. The K-means clustering technique is used to define the different levels of living. In order to extract the non-income indicators most relevant to our study, information gain was applied as a feature selection technique. Finally, the KNN classification technique was used for the prediction model.
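
The sketch below chains the three techniques mentioned in the abstract: K-means to derive classes of living levels, an information-gain-style score to rank non-income indicators, and KNN for prediction. The synthetic data and the use of scikit-learn's mutual information as a stand-in for information gain are assumptions for illustration, not the paper's pipeline.

```python
# Illustrative chain of the three techniques mentioned in the abstract:
# K-means -> labels, information gain (approximated here by mutual information)
# -> indicator ranking, KNN -> prediction. Data is synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_selection import mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
income = rng.normal(size=(150, 2))          # stand-in for income indicators
non_income = rng.normal(size=(150, 10))     # stand-in for non-income indicators

# 1) Define "levels of living" by clustering the income indicators.
levels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(income)

# 2) Rank non-income indicators by their relevance to those levels.
relevance = mutual_info_classif(non_income, levels, random_state=0)
top = np.argsort(relevance)[::-1][:3]

# 3) Predict the level from the selected indicators with KNN.
acc = cross_val_score(KNeighborsClassifier(), non_income[:, top], levels, cv=5).mean()
print("selected indicators:", top, "cv accuracy:", round(acc, 3))
```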

Network Layer Dependability Benchmarking: Route identification

The use of wireless sensor networks (WSN) is widespread; it covers, in particular, environmental and critical systems monitoring. Since the WSN stack has various layers, including the application, routing, transfer, Media Access Control (MAC), and Radio Frequency (RF) media layers, its dependability evaluation can be challenging. This paper defines the essential components of the network layer's benchmark, which are: the target, the execution profile, and the robustness measure. The dependability assessment is addressed in our benchmark by focusing on three standard protocols: the Ad Hoc On-Demand Distance Vector protocol (AODV), the Optimized Link State Routing protocol (OLSR), and the Destination-Sequenced Distance-Vector routing protocol (DSDV). The NS-3 simulator was used for the test bed. After the evaluation campaigns, we observed that the DSDV and AODV protocols have equivalent robustness; OLSR is the least robust, but it is a fail-safe protocol. Keywords: Dependability; WSN; ...

Top development indicators for middle eastern countries

2018 Sixth International Conference on Digital Information, Networking, and Wireless Communications (DINWC), 2018

Fisher and Relief are two well-known and widely used scores for supervised feature selection. In this paper, we propose using these scores to select the relevant human development indicators that contribute most to the development classes assigned to the different Middle Eastern countries. Experimental results show the importance of indicator selection. They also reveal the importance of some indicators related to women's labor force participation in the development of these Middle Eastern countries.
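
For reference, the Fisher score mentioned above can be computed per feature as the ratio of between-class scatter to within-class scatter. The small function below is one standard formulation of that definition; the paper may use a slightly different variant.

```python
# Standard Fisher score per feature: between-class scatter / within-class scatter,
#   F_j = sum_c n_c (mu_cj - mu_j)^2  /  sum_c n_c sigma_cj^2
# (a common formulation; the paper may use a slight variant).
import numpy as np

def fisher_score(X, y):
    overall_mean = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / np.maximum(den, 1e-12)   # guard against division by zero

# Toy check: feature 0 separates the classes, feature 1 is noise.
X = np.array([[0.0, 5.1], [0.2, 4.9], [1.0, 5.0], [1.2, 5.2]])
y = np.array([0, 0, 1, 1])
print(fisher_score(X, y))                 # feature 0 gets a much larger score
```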

Toward New Vision of XLINK

Lecture Notes in Computer Science, 2011

In this article, we present the limitations of the HTML hyperlink and how they could be solved by using XLink, a new XML-based language. Since, to date, there is neither a clear specification nor an implementation of this language, a new comprehensive design using UML is proposed.

A new Round Robin based scheduling algorithm for operating systems: Dynamic quantum using the mean average

arXiv preprint arXiv:1111.5348, Nov 22, 2011

Round Robin, considered the most widely adopted CPU scheduling algorithm, suffers from severe problems directly related to the quantum size. If the chosen time quantum is too large, the response time of the processes becomes too high; on the other hand, if the quantum is too small, it increases the CPU overhead. In this paper, we propose a new algorithm, called AN, based on a new approach called dynamic-time-quantum; the idea of this approach is to make the operating system adjust the time quantum according to the ...
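
A small simulation of the dynamic-time-quantum idea suggested by the title: before each round, the quantum is set to the mean of the remaining burst times of the ready processes. This is a plausible reading of the truncated abstract, sketched here for illustration only; the AN algorithm as published may differ in its details.

```python
# Round Robin with a dynamic quantum recomputed each round as the mean of the
# remaining burst times (one plausible reading of "mean average"; the published
# AN algorithm may differ).
from collections import deque

def dynamic_rr(bursts):
    """All processes arrive at t=0; returns finish times and average turnaround."""
    remaining = dict(enumerate(bursts))
    ready = deque(remaining)
    time, finish = 0.0, {}
    while ready:
        # recompute the quantum at the start of each round
        quantum = sum(remaining[p] for p in ready) / len(ready)
        for _ in range(len(ready)):
            p = ready.popleft()
            run = min(quantum, remaining[p])
            time += run
            remaining[p] -= run
            if remaining[p] > 1e-9:
                ready.append(p)          # not finished: back of the queue
            else:
                finish[p] = time         # turnaround = finish time (arrival at 0)
    return finish, sum(finish.values()) / len(finish)

finish_times, avg_turnaround = dynamic_rr([24, 3, 3])
print(finish_times, round(avg_turnaround, 2))
```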

Etalonnage de la sûreté de fonctionnement de systèmes d’exploitation

This paper presents a dependability benchmark for general-purpose operating systems (OSs). The benchmark is defined through the specifications of its main components, and these specifications are implemented in the form of a benchmark prototype. Our benchmark has three particularities. First, it relies on a comprehensive and structured set of measures: outcomes are considered both at the OS level and at the application level. Second, these measures include not only robustness measures but also related temporal measures in the presence of faults (e.g., OS reaction time and restart time). Finally, we use a realistic workload (namely, a TPC-C client) instead of a synthetic one. The benchmark prototype is used to compare the dependability of three operating systems (Windows NT4, Windows 2000, and Windows XP) with respect to erroneous behaviour of the application layer. The results show similar behaviour of the three OSs with respect to robustness, but a noticeable difference in OS reaction and restart times, Windows XP having the shortest times.

Active learning of constraints for weighted feature selection

Advances in Data Analysis and Classification

Selection of world development indicators for countries classification

2016 International Conference on Digital Economy (ICDEc), 2016

Network Layer Benchmarking: Investigation of AODV Dependability

Communications in Computer and Information Science, 2016

In wireless sensor networks (WSN), the sensor nodes have a limited transmission range and limited storage capabilities, and their energy resources are also limited. Routing protocols for WSN are responsible for maintaining the routes in the network and have to ensure reliable multi-hop communication under these conditions. This paper defines the essential components of the network layer benchmark, which are: the target, the measures, and the execution profile. This work investigates the behavior of the Ad Hoc On-Demand Distance Vector (AODV) routing protocol in situations of link failure. The test bed implementation and the dependability measurements are carried out with the NS-3 simulator.

Selection of income indicators for Middle East country classification

2016 Sixth International Conference on Digital Information Processing and Communications (ICDIPC), 2016

Étalonnage de la sûreté de fonctionnement des systèmes d’exploitation – Spécifications et mise en oeuvre

Developers of computer systems, including critical ones, often rely on off-the-shelf operating systems. However, the malfunction of an operating system can have a strong impact on the dependability of the overall system, ...

A control layer for a Peer-to-Peer middleware using behavior semantics

2015 IEEE International Conference on Intelligent Computer Communication and Processing (ICCP), 2015

EFRED: Enhancement of Fair Random Early Detection Algorithm

International Journal of Communications, Network and System Sciences, 2015

Quality of Service (QoS) generally refers to measurable quantities, such as latency and throughput, that directly affect the user experience. Queuing (the most popular QoS tool) involves choosing the packets to be sent based on something other than their arrival time. Active queue management (AQM) is an important means of managing this queue in order to increase the effectiveness of Transmission Control Protocol networks: it enhances congestion control and achieves a trade-off between link utilization and delay. The de facto standard, Random Early Detection (RED), and many of its variants employ the queue length as a congestion indicator to trigger packet dropping. One of these enhancements of RED, Fair Random Early Detection (FRED), attempts to deal with a fundamental aspect of RED, namely that it imposes the same loss rate on all flows regardless of their bandwidths. FRED uses per-flow active accounting and tracks the state of active flows. It protects fragile flows by deterministically accepting flows from low-bandwidth connections, and it fixes several shortcomings of RED by computing the queue length at both packet arrival and departure. Building on FRED, we propose a new scheme that uses a hazard-rate-estimated packet dropping function, which we call Enhancement of Fair Random Early Detection (EFRED). The key idea is that EFRED changes the packet dropping function so as to drop fewer packets than RED and other AQM algorithms such as ARED and REM. Simulations demonstrate that EFRED achieves a more stable throughput and performs better than current active queue management algorithms, decreasing the packet loss percentage and yielding the lowest queuing delay, end-to-end delay, and delay variation (jitter).
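
For context, the classic RED dropping function that EFRED modifies is shown below: between the two thresholds, the drop probability grows linearly with the average queue length. EFRED replaces this linear ramp with a hazard-rate-based function; only the RED baseline is reproduced here, and the threshold values are arbitrary examples.

```python
# Classic RED packet-drop probability as a function of the average queue length.
# EFRED replaces the linear ramp between min_th and max_th with a hazard-rate-based
# dropping function (not reproduced here).
def red_drop_probability(avg_queue, min_th=5.0, max_th=15.0, max_p=0.1):
    if avg_queue < min_th:
        return 0.0                                   # no early drops
    if avg_queue >= max_th:
        return 1.0                                   # force drop
    # linear ramp between the thresholds
    return max_p * (avg_queue - min_th) / (max_th - min_th)

for q in (2, 5, 10, 14, 20):
    print(q, round(red_drop_probability(q), 3))
```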

A New Layer of Service Oriented Architecture to Solve Versioning Problem

British Journal of Mathematics & Computer Science, 2015

Benchmarking operating system dependability: Windows 2000 as a case study

10th IEEE Pacific Rim International Symposium on Dependable Computing, 2004. Proceedings., 2004

We propose a dependability benchmark suitable for a general-purpose operating system (OS). The specifications of the benchmark components are presented and illustrated on a benchmark prototype dedicated to Windows 2000. The important novelty, as regards ...
