Anis Ismail | Universite Libanaise
Papers by Anis Ismail
ArXiv, 2011
Peer-to-peer (P2P) data-sharing systems now generate a significant portion of internet traffic. P2P systems have emerged as a popular way to share huge volumes of data. Requirements for widely distributed information systems supporting virtual organizations have given rise to a new category of P2P systems called schema-based. In such systems each peer is a database management system in itself, exposing its own schema. A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. In such settings, the main objective is efficient search across peer databases, processing each incoming query without overly consuming bandwidth. The usability of these systems depends on effective techniques to find and retrieve data; however, efficient and effective routing of content-based queries is an emerging problem in P2P networks. In this paper, we propose an architecture based on super-peers, and we focus on query ro...
2010 The 7th International Conference on Informatics and Systems (INFOS), 2010
Since a large amount of information is added to the internet daily, the efficiency of peer-to-peer (P2P) search has become increasingly important. However, quickly discovering the right resource in a large-scale P2P network, without generating too much network traffic and in the minimum possible time, remains highly challenging. In this paper, we propose a new P2P search method that applies a data mining concept (decision trees) to improve search performance. We focus on routing queries to the right destination. In a super-peer-based architecture, peers with similar interests are grouped together under a super-peer (SP) connected to a Meta-Super-Peer (MSP) that operates an index (a decision tree) to predict the relevant domains (super-peers) for answering a given query. Compared with a plain super-peer-based approach, our proposed architecture shows the benefit of data mining, with better performance in terms of response time and precision.
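The MSP routing idea above can be sketched as follows — a minimal illustration in which an index built from past queries predicts which super-peer should receive a new query. All names here (`MetaSuperPeer`, `route`, the example super-peers) are hypothetical, and a simple keyword-frequency index stands in for the paper's decision tree:

```python
from collections import Counter, defaultdict

class MetaSuperPeer:
    def __init__(self):
        # term -> counts of super-peers that answered past queries containing that term
        self.index = defaultdict(Counter)

    def learn(self, query_terms, answering_super_peer):
        """Record which super-peer answered a query with these terms."""
        for term in query_terms:
            self.index[term][answering_super_peer] += 1

    def route(self, query_terms):
        """Score every super-peer by how often it answered similar terms."""
        scores = Counter()
        for term in query_terms:
            scores.update(self.index[term])
        return scores.most_common(1)[0][0] if scores else None

msp = MetaSuperPeer()
msp.learn(["jazz", "album"], "SP-music")
msp.learn(["protein", "genome"], "SP-biology")
print(msp.route(["jazz", "vinyl"]))  # prints "SP-music"
```

The point of the sketch is only the division of labor: the MSP holds a learned index and forwards each query to the most promising domain instead of flooding all super-peers.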
Peer-to-peer (P2P) computing is currently attracting enormous attention. In P2P systems, a very large number of autonomous computing nodes (the peers) pool their resources and rely on each other for data and services. P2P data-sharing systems now generate a significant portion of Internet traffic. Examples include P2P systems for network storage, web caching, searching and indexing of relevant documents, and distributed network-threat analysis. Requirements for widely distributed information systems supporting virtual organizations have given rise to a new category of P2P systems called schema-based. In such systems each peer exposes its own schema, and the main objective is efficient search across the P2P network, processing each incoming query without overly consuming bandwidth. The usability of these systems depends on effective techniques to find and retrieve data; however, efficient and effective routing of content-based queries is a challenging probl...
One of the trends we have been observing for some time now is the blurring of dividing lines between different types of malware. Classifying a newly discovered 'creature' as a virus, a worm, a Trojan, or a security exploit is becoming more difficult, and anti-virus researchers spend a significant amount of their time discussing the proper classification of new viruses and Trojans. Very often, depending on the point of view, the same program may be perceived as a Remote Administration Tool (RAT), allowing a potentially malicious user to remotely control the system. A Remote Administration Tool is remote control software that, when installed on a computer, allows a remote computer to take control of it. With remote control software you can work on a remote computer exactly as if you were at its keyboard. Fast, reliable, easy-to-use remote control software saves hours of running up and down stairs between computers. Remote control software all...
Prior to the 1980s, computers had limited functions. The use of computer systems by advocates in the legal domain was very limited and restricted to office tasks. Nowadays, by contrast, the practice of law keeps pace with the speed of life and of Information and Communication Technology. Lawyers need more creative and faster tools to help them stay up to date in their work. This paper describes a theoretical and empirical study to design and develop an Advocate Office Management System.
This paper presents an electronic voting system (E-Voting) intended to be applied to the Lebanese electoral system. This E-Voting system (E-VS) was designed for electorates using computers programmed with user-friendly graphical user interfaces. The complex processing and features are handled at the application layer and database levels. Several security measures were integrated into the E-VS to achieve enhanced, speedy, and accurate performance. It is about time that conventional voting in Lebanon gives way to E-Voting, simplifying the task for electorates, Deputy Returning Officers, and Returning Officers.
2019 International Conference on Advances in Big Data, Computing and Data Communication Systems (icABCD)
In this paper, we present a new technology called Nucleofiles that allows storing a huge amount of information in a very small and stable medium: DNA. DNA computers can be used to solve extremely complex mathematical problems that challenge their silicon counterparts. DNA, or deoxyribonucleic acid, is the hereditary material in humans and almost all other organisms. Nearly every cell in a person's body has the same DNA, located in the cell nucleus. DNA is the primary genetic material of all living organisms and, as such, a cubic centimeter of DNA can hold more information than a trillion music CDs. The presented solution is a website that allows users to log in to the system and upload their data. The uploaded data is converted to a D-File, which is then sent to a DNA laboratory to be synthesized as a DNA sequence. The presented solution allows users to store huge amounts of data in a very stable medium, presenting a new generation of mass storage that addresses Big Data issues and contributes to Green IT.
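As a rough illustration of how file bytes could map onto a DNA sequence, here is the classic two-bits-per-nucleotide encoding. This is a hypothetical sketch, not the paper's actual D-File format, which is not specified here:

```python
# Map every 2-bit group onto one of the four nucleotide bases.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Each byte becomes four bases (8 bits / 2 bits per base)."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(sequence: str) -> bytes:
    """Invert the mapping: four bases reconstruct one byte."""
    bits = "".join(BASE_TO_BITS[base] for base in sequence)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

print(encode(b"Hi"))  # prints "CAGACGGC"
```

Real DNA storage schemes add error correction and avoid long runs of the same base, but the sketch conveys the density argument: one base per two bits, at molecular scale.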
2019 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COMITCon)
Image indexing and retrieval has become an interesting field of research due to the lack of advanced methodologies for indexing and retrieving images and the existence of huge quantities of images available everywhere, especially on the web. The available solutions can find similar items having the exact same shape, but not the same item when it appears in a different shape. In this paper, we present the different available techniques for image indexing and retrieval. The first approach that comes to mind is Content-Based Image Retrieval (CBIR); many years of research have been devoted to this methodology. We explain how this technique works and survey the work done in CBIR, and we also discuss Description-Based Image Retrieval (DBIR).
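One classic CBIR building block the survey covers is comparing images by color histogram. The sketch below — hypothetical, with images simplified to flat lists of 8-bit grey values — shows histogram intersection, one standard content-based similarity measure:

```python
def histogram(pixels, bins=8):
    """Normalized grey-level histogram of a flat pixel list."""
    counts = [0] * bins
    for p in pixels:
        counts[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return [c / total for c in counts]

def similarity(pixels_a, pixels_b):
    """Histogram intersection: 1.0 for identical distributions, 0.0 for disjoint ones."""
    ha, hb = histogram(pixels_a), histogram(pixels_b)
    return sum(min(a, b) for a, b in zip(ha, hb))

print(similarity([10, 20, 200], [12, 25, 210]))  # near 1.0: similar palettes
print(similarity([0] * 10, [255] * 10))          # 0.0: disjoint palettes
```

This also illustrates the limitation the abstract notes: a global histogram matches palettes, not shapes, so the same object in a different pose or shape can score poorly.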
Proceedings of the 2019 5th International Conference on Computer and Technology Applications
With the growth of the Internet in recent years, search engines such as Google, Bing, and Yahoo have become increasingly crucial and reliable. The role of search engines is to index billions of web pages and display only the most relevant results for a given search query. When creating a website, many webmasters forget to take into account an essential factor: making the world aware of their website. Most of the time, the main focus is on making the website as user-friendly, stable, fast, and secure as possible. However, all of these efforts can be useless if the website has no visitors or people simply cannot find it. To solve this problem, and to make the structure of a website more search-engine-friendly, we developed a web application that analyzes any given webpage and provides information on how to improve its structure and its ranking on search engines. This process is known as Search Engine Optimization (SEO).
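One check an analyzer of this kind could run is flagging pages missing a `<title>` or a meta description. The sketch below is a hypothetical minimal example using Python's standard-library HTML parser; the actual checks performed by the paper's application are not listed here:

```python
from html.parser import HTMLParser

class SeoCheck(HTMLParser):
    """Scan a page for the presence of <title> and <meta name="description">."""
    def __init__(self):
        super().__init__()
        self.has_title = False
        self.has_description = False

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.has_title = True
        if tag == "meta" and dict(attrs).get("name") == "description":
            self.has_description = True

def audit(html):
    """Return the list of SEO issues found in the given HTML."""
    checker = SeoCheck()
    checker.feed(html)
    issues = []
    if not checker.has_title:
        issues.append("missing <title>")
    if not checker.has_description:
        issues.append("missing meta description")
    return issues

print(audit("<html><head><title>Home</title></head></html>"))
# prints ['missing meta description']
```

A full SEO auditor would extend `audit` with further rules (heading structure, alt text, canonical links) following the same pattern: parse once, accumulate findings, report them.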
Computer Science & Information Technology (CS & IT), Jan 19, 2019
AIRCC Publishing Corporation, 2019
CAPTCHA is almost a standard security technology and has found widespread application on commercial websites. There are two types: labeling-based and image-based CAPTCHAs. To date, almost all CAPTCHA designs are labeling based. Labeling-based CAPTCHAs are those that make a judgment based on whether the question "what is it?" has been correctly answered. In Artificial Intelligence (AI) terms, this means the judgment depends on whether the new label provided by the user matches the label already known to the server. Labeling-based CAPTCHA designs have some common weaknesses that can be exploited by attackers. First, the label set, i.e., the number of classes, is small and fixed; due to deformation and noise in CAPTCHAs, the classes have to be further reduced to avoid confusion. Second, clean segmentation of current designs, in particular character-labeling-based CAPTCHAs, is feasible. The state of the art of CAPTCHA design suggests that the robustness of character labeling s...
theses.fr, Jul 13, 2010
The first part of this thesis is devoted to the state of the art on peer-to-peer networks, information retrieval in such networks, and the problem of data mining in the peer-to-peer context, focusing in particular on clustering methods and decision trees. The second part deals with networks in which peers have their own data schemas. It analyzes in particular the foundations and operation of the SenPeer system. We then propose an architecture supporting a community-based organization of semantic peer-to-peer networks. This allows us to build semantic peer-to-peer networks structured into communities, called cSON (Community Semantic Overlay Network). This raises questions about making communities explicit and exploiting them to improve performance (response time, number of messages, precision, and recall). To build the communities, we study two different alternatives: (1) semantic mediation, where community construction is based on the semantic links between super-peers and the trust they have in one another, and (2) clustering, where community construction relies on a clustering algorithm based on the analysis of the queries processed by the super-peers. We then propose two methods for computing community characterizations, drawing on the following two research fields: (1) data mining, where we characterize each community using knowledge extracted from the queries processed by the super-peers of that community, called CK (Community Knowledge), and (2) hypergraphs, where, in contrast to the previous method, our objective is to characterize the communities collectively.
We formalize this problem as the search for MCS (minimal covering shortcuts), which are minimal shortcuts between super-peers covering all the communities. We then develop two query routing methods, CK-routing and MCS-routing, which use the community knowledge and the MCS, respectively, to identify the super-peers likely to process a given query. In the third part, we present the simulator developed to support the cSON approach. We then present empirical results from simulations showing a significant performance improvement over the approach based solely on semantic mediation. This part ends with the description of an information retrieval application based on the sharing of enriched scientific documents.
International Journal of Engineering and Technology, 2017
Proceedings of the 17th International Conference on Enterprise Information Systems, 2015
2014 International Conference and Workshop on the Network of the Future (NOF), 2014
Traffic jams in Lebanon have been an increasingly difficult problem with no reliable solution yet. The growing popularity of smartphones equipped with multiple sensors (GPS, accelerometer, gyroscope) presents an unprecedented opportunity to measure traffic, help users route around traffic jams, and build a real-time representation of road traffic conditions. In this paper, we present a new mobile application, "Tari'ak", that learns about traffic conditions by turning every user into a traffic sensor. It does so by measuring users' movement, speed, and location. By gathering this data, the application learns about traffic conditions in real time and informs all users about conditions on the roads. This all happens automatically, without any user intervention. If a user is not in transport, the application powers down its GPS use to preserve battery. Most importantly, this prevents the fraudulent reports possible when users report traffic manually, as in traditional and existing approaches. The result is an application that knows the traffic conditions on all traveled roads in real time and offers them to users to help them avoid traffic jams. The traffic data can also be used to help users choose the fastest route between any two locations.
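The "power down GPS when not in transport" behavior could be sketched as a simple speed-window rule: if recent speed samples all stay below a walking-pace threshold, stop polling GPS. This is a hypothetical illustration; the threshold and window values are illustrative, not from the paper:

```python
STATIONARY_KMH = 6.0  # below roughly walking pace (assumed threshold)
WINDOW = 5            # consecutive samples considered (assumed)

def should_disable_gps(recent_speeds_kmh):
    """True when the last WINDOW speed samples all look stationary."""
    if len(recent_speeds_kmh) < WINDOW:
        return False  # not enough evidence yet: keep GPS on
    return all(v < STATIONARY_KMH for v in recent_speeds_kmh[-WINDOW:])

print(should_disable_gps([3, 2, 1, 0, 2]))       # prints True: user not in transport
print(should_disable_gps([40, 35, 30, 28, 33]))  # prints False: driving
```

A production implementation would combine accelerometer activity recognition with such a rule to avoid disabling GPS during brief stops at traffic lights.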
Encyclopedia of Information Science and Technology, Third Edition, 2015
2013 International Conference on Electronics, Computer and Computation (ICECCO), 2013
In this paper, we propose a new autonomic model and framework that automatically self-customizes computer applications. It predominantly features four aspects: GUI self-customization, event-handler self-customization, self-optimization, and security-policy self-customization. The whole mechanism is driven by an XML language that provides the actual customizing instructions. Formally, the model is founded on Venn diagrams and mathematical set theory. The proposed model supports the C#.NET platform and the Windows operating system. Experiments conducted showed a highly successful practice of self-customization of computer-based applications, and a tangible improvement in time-to-manage and time-to-maintain in the IT industry.
Review of Computer Engineering Research, 2016