Abdelmajid Hamadou | Sfax University Tunisia
Papers by Abdelmajid Hamadou
Proceedings of the 15th International Conference on Web Information Systems and Technologies, 2019
The issue of plagiarism in documents has been present for centuries. Yet, the widespread dissemination of information technology, including the internet, has made plagiarism much easier. Consequently, methods and systems aiding in the detection of plagiarism have attracted much research within the last two decades. This paper introduces a plagiarism detection technique based on semantic knowledge, notably semantic class and thematic role. This technique analyzes and compares texts based on the semantic allocation of each term in the sentence. Semantic knowledge makes it possible to generate semantic arguments for each sentence. A weighting of each argument generated by semantic knowledge is also introduced to study its influence. It was found that not all arguments affect the plagiarism detection process.
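A minimal sketch of the weighted-argument comparison idea: each sentence is reduced to labeled semantic arguments (thematic roles), and the overlap of matching arguments, weighted per role, yields a plagiarism score. The role labels and weights below are illustrative assumptions, not the paper's actual values.

```python
# Hypothetical per-role weights; the paper studies which arguments matter.
ROLE_WEIGHTS = {"Agent": 0.5, "Theme": 0.25, "Instrument": 0.15, "Location": 0.1}

def plagiarism_score(args_a: dict, args_b: dict) -> float:
    """args_a/args_b map thematic roles to the term filling that role."""
    score = 0.0
    for role, weight in ROLE_WEIGHTS.items():
        if role in args_a and role in args_b and args_a[role] == args_b[role]:
            score += weight
    return score

# Example: two sentences sharing Agent and Theme but not Instrument.
suspicious = {"Agent": "student", "Theme": "essay", "Instrument": "laptop"}
source = {"Agent": "student", "Theme": "essay", "Instrument": "typewriter"}
print(plagiarism_score(suspicious, source))  # 0.75
```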
Concurrency and Computation: Practice and Experience, 2021
The LMF ISO standard provides broad coverage of lexical knowledge through a fine-grained structure. However, like most electronic dictionaries, the available normalized LMF dictionaries comprise only basic morpho-syntactic and semantic knowledge, such as the meanings of lexical entries through definitions and associated examples, and sometimes an indication of synonyms and antonyms. Other sophisticated knowledge, such as syntactic behaviors, semantic classes and syntactico-semantic links, is scarce, requires high expertise, and is expensive to add to dictionaries. In this paper, we propose an approach of lexical data mining over the widely available textual content associated with the meanings, notably in the normalized LMF dictionaries, in order to perform the self-enrichment of these dictionaries. First, we contribute to the enrichment of the syntactic behaviors by linking them to the suitable meanings. Second, we focus on the enrichment of the meani...
This paper describes the MIRACL statistical Machine Translation system and the improvements that were developed during the IWSLT 2010 evaluation campaign. We participated in the Arabic-to-English BTEC task using a phrase-based statistical machine translation approach. In this paper, we first discuss some challenges in translating from Arabic to English and explore various techniques to improve performance on such a task. Next, we present our solution for disambiguating the output of an Arabic morphological analyzer. The Arabic morphological analyzer used produces all possible morphological structures for each word, with a unique correct proposition. In this work we exploit the Arabic-English alignment to choose the correct segmented form and the correct morpho-syntactic features produced by our morphological analyzer. 1. Introduction: Translating two languages with very different morphological structures, such as English and Arabic, poses a challenge to successfu...
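A minimal sketch of the disambiguation idea described above, under simplifying assumptions: among all analyses produced by a morphological analyzer, keep the segmentation whose segments are most often aligned to words on the English side. The alignment table and candidate analyses below are illustrative, not the system's actual data.

```python
def pick_analysis(candidates: list[list[str]], aligned_english: set[str],
                  translation_table: dict[str, set[str]]) -> list[str]:
    """Score each candidate segmentation by how many of its segments have a
    known translation among the English words aligned to this position."""
    def score(segments: list[str]) -> int:
        return sum(
            1 for seg in segments
            if translation_table.get(seg, set()) & aligned_english
        )
    return max(candidates, key=score)

# Example: two competing segmentations of an Arabic token (transliterated).
candidates = [["w", "ktb"], ["wktb"]]  # "and" + "books" vs. a single unit
table = {"w": {"and"}, "ktb": {"books", "wrote"}}
print(pick_analysis(candidates, {"and", "books"}, table))  # ['w', 'ktb']
```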
This paper presents a novel algorithm to measure semantic similarity between sentences. It introduces a method that takes into account not only semantic knowledge but also syntactico-semantic knowledge, notably the semantic predicate, semantic class and thematic role. Firstly, semantic similarity between sentences is derived from word synonymy. Secondly, syntactico-semantic similarity is computed from the common semantic class and thematic role of the words in the sentence; this information is related to the semantic predicate. Finally, overall similarity is computed as a combination of lexical similarity, semantic similarity and syntactico-semantic similarity using supervised learning. The proposed algorithm is applied to detect information redundancy in an LMF Arabic dictionary, especially in the definitions and examples of lexical entries. Experimental results show that the proposed algorithm reduces the redundant information to improve the content quality of the LMF Arabic di...
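A minimal sketch of the supervised combination step: learn weights for the three similarity components from sentence pairs labeled with human similarity judgments. The toy data and the choice of linear regression are assumptions for illustration; the abstract does not specify the learner.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Each row: (lexical, semantic, syntactico-semantic) similarity of a pair.
X = np.array([
    [0.9, 0.8, 0.7],   # near-paraphrase pair
    [0.4, 0.5, 0.3],   # loosely related pair
    [0.1, 0.2, 0.0],   # unrelated pair
])
y = np.array([0.95, 0.40, 0.05])  # human similarity ratings

model = LinearRegression().fit(X, y)
print(model.coef_)                        # learned component weights
print(model.predict([[0.8, 0.7, 0.6]]))  # combined similarity of a new pair
```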
Computación y Sistemas, 2018
Sentence similarity computation is increasingly used in several applications, such as question answering, machine translation, information retrieval and automatic abstracting systems. This paper first sums up several methods for calculating similarity between sentences that consider semantic and syntactic knowledge. Second, it presents a new method for measuring sentence similarity that aggregates, in a linear function, three components: the lexical similarity LexSim based on the common words, the semantic similarity SemSim using synonymous words, and the syntactico-semantic similarity SynSemSim based on common semantic arguments, notably thematic role and semantic class. For the word-based semantic similarity, a measure estimates the semantic relatedness between words by exploiting the WordNet "is a" taxonomy, while semantic argument determination is based on the VerbNet database. The proposed method yielded competitive results compared to previously proposed measures; on Li's benchmark, it showed a high correlation with human ratings. Furthermore, experiments performed on the Microsoft Paraphrase Corpus showed the best F-measure values compared to other measures for high similarity thresholds.
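A minimal sketch of word-level semantic similarity over the WordNet "is a" taxonomy, here using NLTK's path_similarity as a stand-in; the paper's exact measure may differ.

```python
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

def word_similarity(w1: str, w2: str) -> float:
    """Best path similarity between any noun senses of the two words."""
    best = 0.0
    for s1 in wn.synsets(w1, pos=wn.NOUN):
        for s2 in wn.synsets(w2, pos=wn.NOUN):
            sim = s1.path_similarity(s2)
            if sim is not None and sim > best:
                best = sim
    return best

print(word_similarity("car", "automobile"))  # 1.0 (shared synset)
print(word_similarity("car", "banana"))      # much lower
```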
Computación y Sistemas, 2018
Social networks are considered today as revolutionary communication tools that have a tremendous impact on our lives. However, these tools can be manipulated by malicious users, notably terrorists. The process of collecting and analyzing such profiles is a considerably challenging task which has not yet been well established. For this purpose, we propose, in this paper, a new method for the extraction and annotation of data about suspicious users of social networks who threaten national security. Our method allows constructing a rich Arabic corpus designed for detecting terrorist users on social networks. The annotation of our corpora follows a set of rules defined by a domain expert. All these steps are described in detail, and some typical examples are given. Statistics are also reported from the data collection and annotation stages, as well as an evaluation of the annotated features based on inter-annotator agreement between different experts.
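A minimal sketch of measuring agreement between two expert annotators with Cohen's kappa, a standard choice for this kind of evaluation; the labels below are illustrative assumptions, not the corpus's actual annotations.

```python
from collections import Counter

def cohen_kappa(labels_a: list[str], labels_b: list[str]) -> float:
    """Chance-corrected agreement between two annotators."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(
        (counts_a[label] / n) * (counts_b[label] / n)
        for label in set(labels_a) | set(labels_b)
    )
    return (observed - expected) / (1 - expected)

expert1 = ["suspicious", "normal", "suspicious", "normal", "normal"]
expert2 = ["suspicious", "normal", "normal", "normal", "normal"]
print(cohen_kappa(expert1, expert2))  # ~0.55
```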
Computación y Sistemas, 2016
This paper introduces a new approach for hand gesture recognition based on depth maps captured by an RGB-D Kinect camera. Although this camera provides two types of information, a depth map and an RGB image, only the depth data is used to analyze and recognize the hand gestures. Given the complexity of this task, a new method based on edge detection is proposed to eliminate the noise and segment the hand. Moreover, new descriptors are introduced to model the hand gesture. These features are invariant to scale, rotation and translation. Our approach is applied to the French sign language alphabet to show its effectiveness and to evaluate the robustness of the proposed descriptors. The experimental results clearly show that the proposed system is very satisfactory, as it recognizes the French alphabet signs with an accuracy of more than 93%. Our approach is also applied to a public dataset in order to be compared with existing studies. The results prove that our system can outperform previous methods on the same dataset.
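The paper's descriptors are custom, but a classical example of shape features invariant to scale, rotation and translation is Hu moments, sketched here with OpenCV on a binary hand mask. This is an illustrative stand-in, not the paper's method.

```python
import cv2
import numpy as np

# Toy binary "hand" silhouette; a real pipeline would segment it from depth.
mask = np.zeros((200, 200), dtype=np.uint8)
cv2.circle(mask, (100, 100), 40, 255, -1)

moments = cv2.moments(mask, binaryImage=True)
hu = cv2.HuMoments(moments).flatten()
# Log-scale the moments, common practice since they span many magnitudes.
hu_log = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
print(hu_log)  # 7 invariant shape descriptors
```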
Vietnam Journal of Computer Science, 2016
The measure of sentence similarity is useful in various research fields, such as artificial intelligence, knowledge management, and information retrieval. Several methods have been proposed to measure sentence similarity based on syntactic and/or semantic knowledge. Most proposals are evaluated on English sentences, and their accuracy can decrease when they are applied to other languages. Moreover, the results of these methods are unsatisfactory, as much relevant semantic knowledge, such as semantic class and thematic role, and syntactico-semantic knowledge, like semantic predicates, is not taken into account. We must acknowledge that this kind of knowledge is rare in most lexical resources. Recently, the International Organization for Standardization (ISO) has published the Lexical Markup Framework (LMF) ISO-24613 norm for the development of lexical resources. This norm provides, for each meaning of a lexical entry, all the semantic and syntactico-semantic knowledge in a fine structure. Profiting from the availability of LMF-standardized dictionaries, we propose, in this paper, a generic method that enhances the measure of sentence similarity by applying semantic and syntactico-semantic knowledge. An experiment was carried out on Arabic, as this language is processed within our research team and an LMF-standardized Arabic dictionary is at hand where the semantic and the syntactico-semantic...
Lecture Notes in Computer Science, 2016
In this paper, we deal with the representation of syntactic knowledge, particularly the syntactic behavior of verbs. In this context, we propose an approach to identify syntactic behaviors from a corpus based on the LMF Context-Field, in order to enrich the syntactic extension of an LMF normalized dictionary. Our approach consists of the following steps: (i) identification of syntactic patterns, (ii) construction of a grammar suitable for each syntactic pattern, (iii) construction of a corpus from the LMF normalized dictionary, (iv) application of the grammars to the corpus and (v) enrichment of the LMF dictionary. To validate this approach, we carried out an experiment that focuses on the syntactic behavior of Arabic verbs. We used the NooJ linguistic platform and an available LMF Arabic dictionary that contains 37,000 entries and 10,800 verbs. The results obtained for more than 7,800 treated verbs show 85% precision and 87% recall.
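A minimal sketch of how precision and recall are computed for the extracted syntactic behaviors; the toy behavior sets below are illustrative assumptions.

```python
def precision_recall(extracted: set[str], gold: set[str]) -> tuple[float, float]:
    """Precision = correct/extracted, recall = correct/gold."""
    correct = len(extracted & gold)
    return correct / len(extracted), correct / len(gold)

extracted = {"V+NP", "V+PP", "V+NP+PP", "V"}  # behaviors found by the grammars
gold = {"V+NP", "V+PP", "V+NP+NP", "V"}       # behaviors in the reference
p, r = precision_recall(extracted, gold)
print(f"precision={p:.2f} recall={r:.2f}")    # precision=0.75 recall=0.75
```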
Natural Language Engineering, 2015
In this paper, we address the problem of large-coverage dictionaries of the Arabic language usable both for direct human reading and for automatic Natural Language Processing. For these purposes, we propose a normalized and implemented model, based on the Lexical Markup Framework (LMF-ISO 24613) and the Data Category Registry (DCR-ISO 12620), which allows a stable and well-defined interoperability of lexical resources through a unification of the linguistic concepts. Starting from the features of the Arabic language, and because a large range of details and refinements needs to be described specifically for Arabic, we follow a fine-grained structuring strategy. Besides its rich coverage of morphological, syntactic and semantic knowledge, our model includes all the Arabic morphological patterns to generate the inflected forms from a given lemma and highlights the syntactic-semantic relations. In addition, an appropriate codification has been designed for the management of all types of relationshi...
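A minimal sketch of the root-and-pattern idea behind Arabic morphological generation: a triliteral root is interdigitated into a pattern template. The transliterated patterns below are illustrative; the paper's model covers the full set of Arabic patterns with proper script handling.

```python
def apply_pattern(root: str, pattern: str) -> str:
    """Replace slots 1, 2, 3 in the pattern with the root consonants."""
    assert len(root) == 3, "sketch handles triliteral roots only"
    out = pattern
    for i, consonant in enumerate(root, start=1):
        out = out.replace(str(i), consonant)
    return out

# Root k-t-b ("writing") in two classical patterns (transliterated):
print(apply_pattern("ktb", "1a2a3a"))  # kataba  "he wrote"
print(apply_pattern("ktb", "ma12a3"))  # maktab  "office/desk"
```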
Proceedings of the 5th International Workshop on Natural Language Processing and Cognitive Science, 2008
This paper is concerned with the development of Arabic electronic dictionaries for human (editorial) use. It proposes a unified and standardized model for these dictionaries according to the future standard LMF (Lexical Markup Framework) ISO 24613. Thanks to its fine and standardized structure, this model allows the development of extendable dictionaries on which generic query functions adapted to the user's needs can be implemented. This model has already been applied to some existing Arabic dictionaries using the ADIQTQ (Arabic DIctionary Query Tool) system, which we developed for the generic querying of standardized dictionaries of Arabic.
Lecture Notes in Computer Science
The development of the Web has been paralleled by the proliferation of harmful Web page content. Using violent Web pages as a case study, we review some existing solutions, then we propose a violent Web content detection and filtering system called WebAngels ...
In this paper, we present our submitted MT system for the IWSLT 2014 Evaluation Campaign. We participated in the English-French translation task. In this article we focus on one of the most important components of SMT: the language model. The idea is to use a phrase-based language model. To that end, sequences from the source and the target language are retrieved and used to calculate a phrase n-gram language model. These phrases are used to rewrite the parallel corpus, which is then used to calculate a new translation model.
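A minimal sketch of a phrase n-gram language model: once the corpus has been rewritten as phrase tokens, bigram probabilities over phrases are estimated by counting, just as for word n-grams. The phrases below are illustrative assumptions.

```python
from collections import Counter

corpus = [
    ["<s>", "thank_you", "very_much", "</s>"],
    ["<s>", "thank_you", "for_coming", "</s>"],
]

unigrams = Counter(p for sent in corpus for p in sent)
bigrams = Counter(pair for sent in corpus for pair in zip(sent, sent[1:]))

def bigram_prob(prev: str, phrase: str) -> float:
    """Maximum-likelihood estimate of P(phrase | prev)."""
    return bigrams[(prev, phrase)] / unigrams[prev]

print(bigram_prob("<s>", "thank_you"))        # 1.0
print(bigram_prob("thank_you", "very_much"))  # 0.5
```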
In this article, we propose an approach for multi-script re[cognition]. [Depending on] the nature of the script, processing then proceeds without segmentation. Low-level features based on pixel directions and densities are combined through a multi-stream approach. To [evaluate] the proposed [approach], we carried out experiments on ...
In this article, we present the improvements we made to the ExtraNews system for automatic multi-document summarization. This system is based on a genetic algorithm that combines sentences from the source documents to form extracts, which are then crossed over and mutated to generate new extracts. The multiplicity of extract-selection criteria inspired a first improvement, which consists in using a multi-objective optimization technique to evaluate these extracts. The second improvement consists in integrating a sentence pre-filtering step whose objective is to reduce the number of sentences in the input source texts. An evaluation of the improvements made to our system is carried out on the DUC'04 and DUC'07 corpora.
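A minimal sketch of the genetic loop described above: an extract is a subset of source sentences, crossover splices two extracts, mutation swaps a sentence, and a fitness function scores each extract. The single toy objective used here is an illustrative assumption; the system evaluates extracts with multi-objective criteria.

```python
import random

random.seed(0)
sentences = ["s0", "s1", "s2", "s3", "s4", "s5"]  # placeholder source sentences

def fitness(extract: list[str]) -> float:
    """Toy objective: reward distinct sentences near a target length of 3."""
    return len(set(extract)) / (1 + abs(len(extract) - 3))

def crossover(a: list[str], b: list[str]) -> list[str]:
    cut = random.randint(1, min(len(a), len(b)) - 1)
    return a[:cut] + b[cut:]

def mutate(extract: list[str]) -> list[str]:
    e = extract.copy()
    e[random.randrange(len(e))] = random.choice(sentences)
    return e

population = [random.sample(sentences, 3) for _ in range(8)]
for _ in range(20):  # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:4]
    children = [mutate(crossover(*random.sample(parents, 2))) for _ in range(4)]
    population = parents + children

print(max(population, key=fitness))  # best extract found
```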
Proceedings Tenth IEEE International Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises. WET ICE 2001, 2001
This paper presents a survey of technologies involved in the development of cooperative systems for information sharing and exchange used to support collaborative activities. The paper emphasizes the need for synchronous and asynchronous cooperation technologies, and for interoperability at the different levels of the systems, including data, documents, information, tools and applications. Finally, the paper summarizes the set of important requirements for the architecture design of cooperative systems.
Lecture Notes in Computer Science, 2007
Keeping people away from litigious information has become one of the most important research areas in network information security. Indeed, Web filtering is used to prevent access to undesirable Web pages. In this paper we review some existing solutions, then we propose a violent Web content detection and filtering system called "WebAngels filter", which uses textual and structural analysis. "WebAngels filter" has the advantage of combining several data-mining algorithms for Web site classification. We discuss how combining learning-based methods can improve filtering performance. Our preliminary results show that it can detect and filter violent content effectively.
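A minimal sketch of combining several learning algorithms for Web page classification by majority vote, in the spirit of the combination the paper discusses; the classifiers and toy features below are illustrative assumptions, not the system's actual configuration.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Toy features per page: (violent-term frequency, image ratio, link depth).
X = np.array([[0.9, 0.6, 1], [0.8, 0.5, 2], [0.1, 0.2, 3], [0.0, 0.1, 1]])
y = np.array([1, 1, 0, 0])  # 1 = violent, 0 = benign

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression()),
        ("nb", GaussianNB()),
        ("dt", DecisionTreeClassifier()),
    ],
    voting="hard",  # majority vote over the three classifiers
)
ensemble.fit(X, y)
print(ensemble.predict([[0.7, 0.4, 2]]))  # likely [1]
```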
It is our great pleasure to welcome you to this special issue of the eighth Maghrebian Conference on Software Engineering and Artificial Intelligence (MCSEAI'04), which was held from 9 to 12 May 2004 in Sousse, Tunisia. This eighth edition was intended as a forum for the most recent research on the engineering of advanced information systems, Web and multimedia applications, natural language processing, as well as distributed intelligent systems. As you will note, the 11 papers selected out of the 51 presented (i.e., a selection rate of about 20%) are of high quality and agreeable to read.