Son Mai | Hanoi University of Science and Technology
Papers by Son Mai
IEEE Transactions on Knowledge and Data Engineering, 2021
The heterogeneity of today's Web sources requires information retrieval (IR) systems to handle multi-modal queries. Such queries define a user's information needs by different data modalities, such as keywords, hashtags, user profiles, and other media. Recent IR systems answer such a multi-modal query by considering it as a set of separate uni-modal queries. However, depending on the chosen operationalisation, such an approach is inefficient or ineffective. It either requires multiple passes over the data or leads to inaccuracies since the relations between data modalities are neglected in the relevance assessment. To mitigate these challenges, we present an IR system that has been designed to answer genuine multi-modal queries. It relies on a heterogeneous network embedding, so that features from diverse modalities can be incorporated when representing both a query and the data over which it shall be evaluated. By embedding a query and the data in the same vector space, the relations across modalities are made explicit and exploited for more accurate query evaluation. At the same time, multi-modal queries are answered with a single pass over the data. An experimental evaluation using diverse real-world and synthetic datasets illustrates that our approach returns twice the amount of relevant information compared to baseline techniques, while scaling to large multi-modal databases. Index Terms: query embedding, graph embedding, heterogeneous information network.
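The core retrieval idea — embed the query and all data items in one shared vector space, then answer in a single pass by similarity — can be sketched as below. This is a minimal illustration, not the paper's system: the three-dimensional vectors and the item set are hypothetical stand-ins for pre-trained heterogeneous network embeddings.

```python
import numpy as np

def cosine_scores(query_vec, item_vecs):
    """Score every item against the query in one pass via cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    items = item_vecs / np.linalg.norm(item_vecs, axis=1, keepdims=True)
    return items @ q

# Hypothetical embeddings: a multi-modal query (e.g. keyword + hashtag
# features combined into one vector) and three data items in the same space.
query = np.array([1.0, 0.0, 1.0])
items = np.array([
    [1.0, 0.1, 0.9],   # item close to the query
    [0.0, 1.0, 0.0],   # unrelated item
    [0.9, 0.0, 1.1],   # item close to the query
])
scores = cosine_scores(query, items)
ranking = np.argsort(-scores)  # best match first
```

Because all modalities live in the same space, one matrix-vector product scores the whole collection — the "single pass" property the abstract refers to.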
We introduce a novel interactive framework to handle both instance-level and temporal smoothness constraints for clustering large temporal data. It consists of a constrained clustering algorithm that optimizes the clustering quality, constraint violations, and the historical cost between consecutive data snapshots. At the center of our framework is a simple yet effective active learning technique for iteratively selecting the most informative pairs of objects to query users about, and for updating the clustering with the new constraints. Those constraints are then propagated within each snapshot and between snapshots via constraint inheritance and propagation to further enhance the results. Experiments show better or comparable clustering results compared with existing techniques, as well as high scalability for large datasets.
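One common way to pick "the most informative pair" in active constrained clustering is to query the pair the current model is least certain about. The sketch below uses a deliberately simple uncertainty proxy — distance closest to a merge threshold — which is an assumption for illustration, not the paper's actual selection criterion.

```python
import numpy as np

def most_informative_pair(points, threshold):
    """Pick the pair whose distance is closest to the merge threshold --
    i.e. the pair the current clustering is least certain about, and
    therefore the most useful must-link/cannot-link question for a user."""
    n = len(points)
    best, best_gap = None, float("inf")
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(points[i] - points[j])
            gap = abs(d - threshold)
            if gap < best_gap:
                best, best_gap = (i, j), gap
    return best

points = np.array([[0.0, 0.0], [0.3, 0.0], [1.2, 0.0], [5.0, 5.0]])
# With a threshold of 1.0, pair (1, 2) at distance 0.9 is the ambiguous one:
# clearly-together and clearly-apart pairs would waste a user query.
pair = most_informative_pair(points, threshold=1.0)
```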
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020
Density-based clustering is a fundamental data clustering technique with many real-world applications. However, when the database changes frequently, how to effectively update clustering results rather than reclustering from scratch remains a challenging task. In this work, we introduce IncAnyDBC, a unique parallel incremental data clustering approach to deal with this problem. First, IncAnyDBC can process changes in bulk rather than in batches like state-of-the-art methods, thereby reducing update overheads. Second, it keeps an underlying cluster structure called the object node graph during the clustering process and uses it as a basis for incrementally updating clusters w.r.t. inserted or deleted objects in the database by propagating changes around affected nodes only. In addition, IncAnyDBC actively and iteratively examines the graph and chooses only a small set of the most meaningful objects to produce exact clustering results of DBSCAN or to approximate results under arbitrary time constraints. This makes it more efficient than other existing methods. Third, by processing objects in blocks, IncAnyDBC can be efficiently parallelized on multicore CPUs, thus creating a work-efficient method. It runs much faster than existing techniques using one thread while still scaling well with multiple threads. Experiments on various large real datasets demonstrate the performance of IncAnyDBC.
Southwestern Historical Quarterly, 2021
Proceedings of the 2015 International Conference on Industrial Technology and Management Science, 2015
Social Casework, 1975
Ogalala are veterans of Vietnam. They are knowledgeable about national and international affairs. They see an analogy between the situations at Pine Ridge and Vietnam, and that analogy is repeated often in this book. If this medicine is too strong, one can turn to Solving the Indian Problem: The White Man's Burdensome Business, a New York Times book. It concentrates on Indian affairs largely through news stories and essays that have appeared in the New York Times since December 22, 1872. Through these reprints and the writings of Murray L. Wax and Robert W. Buchanan, much valuable information is given. However, of the thirteen authors, only Vine Deloria, Jr., is Indian, revealing how seldom the news media turn to Indians for information about Indians. One would not get the impression from this book that Indians are able to speak for themselves or that "the people are standing up." The introductory sections by Wax and Buchanan, Deloria's essays, the two chapters on termination, and the essay on the Taos Pueblo by Winthrop Griffith are the strongest parts of the book. Griffith sensed the mood and spirit of the Pueblo reservation. He was interested in the Indians' experience and their evaluation of their situation. A disturbing aspect of a number of the reprints is the tongue-in-cheek attitude of the instant experts who spent a week or two on a reservation. One portrayal of Indian life reads more like a description of a local zoo. "The Ogalala was a friendly and, at times, a witty creature. He loves athletic games and plays them well." The Sioux tribal council wrote the editor, calling the article condescending and demeaning. The book has a disjointed feel as one goes from one article to another. The theme begins to resemble a formula: things were bad in the past, but now there are new government programs solving everything.
No wonder Indians are confused: five years after such an article is written, the situation on a particular reservation is as bad as ever, or worse. Newsmen seem to feel that a story about Indians will be of interest only if they use stereotyped images that reflect Indians of the past as savages. One article reporting on the unfair characterizations of Indians on television says:
Automatica, 2018
Two robust model predictive control (MPC) schemes are proposed for tracking unicycle robots with input constraints and bounded disturbances: tube-MPC and nominal robust MPC (NRMPC). In tube-MPC, the control signal consists of a control action and a nonlinear feedback law based on the deviation of the actual states from the states of a nominal system. It keeps the actual trajectory within a tube centered along the optimal trajectory of the nominal system. Recursive feasibility and input-to-state stability are established, and the constraints are ensured by tightening the input domain and the terminal region. In NRMPC, an optimal control sequence is obtained by solving an optimization problem based on the current state, and the first portion of this sequence is applied to the real system in an open-loop manner during each sampling period. The state of the nominal system model is updated by the actual state at each step, which provides additional feedback. By introducing a robust state constraint and tightening the terminal region, recursive feasibility and input-to-state stability are guaranteed. Simulation results demonstrate the effectiveness of both proposed strategies.
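The tube idea itself — nominal input plus ancillary feedback on the deviation, so the disturbed state stays in a bounded tube around the nominal trajectory — can be shown on a toy system. Note the assumptions: a scalar linear plant x⁺ = x + u + w stands in for the unicycle dynamics, and the gains are arbitrary illustrative choices, not the paper's controller.

```python
import numpy as np

def simulate(k_fb, steps=50, dist=0.1, seed=1):
    """Simulate x+ = x + u + w with u = u_nominal + k_fb * (x - x_nominal).
    Returns the largest deviation of the disturbed state from the nominal
    trajectory -- the 'tube radius' actually observed."""
    rng = np.random.default_rng(seed)
    x_nom, x = 1.0, 1.0
    max_dev = 0.0
    for _ in range(steps):
        u_nom = -0.5 * x_nom            # nominal controller drives x_nom to 0
        u = u_nom + k_fb * (x - x_nom)  # ancillary feedback law of tube-MPC
        w = rng.uniform(-dist, dist)    # bounded disturbance
        x_nom = x_nom + u_nom           # nominal system: no disturbance
        x = x + u + w                   # real system: disturbed
        max_dev = max(max_dev, abs(x - x_nom))
    return max_dev

dev_tube = simulate(k_fb=-0.5)  # deviation contracts: e+ = 0.5*e + w
dev_open = simulate(k_fb=0.0)   # open-loop nominal input: error random-walks
```

With k_fb = -0.5 the deviation dynamics are e⁺ = 0.5·e + w, so the tube radius is bounded by 0.1 · Σ 0.5ᵏ = 0.2 regardless of the disturbance realisation; without the feedback term the error is an unbounded random walk.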
Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016
The density-based clustering algorithm DBSCAN is a state-of-the-art data clustering technique with numerous applications in many fields. However, its O(n²) time complexity remains a severe weakness. In this paper, we propose a novel anytime approach to cope with this problem by reducing both the range query and the label propagation time of DBSCAN. Our algorithm, called AnyDBC, compresses the data into smaller density-connected subsets called primitive clusters and labels objects based on connected components of these primitive clusters, reducing the label propagation time. Moreover, instead of passively performing the range query for all objects like existing techniques, AnyDBC iteratively and actively learns the current cluster structure of the data and selects a few of the most promising objects for refining clusters at each iteration. Thus, in the end, it performs substantially fewer range queries than DBSCAN while still guaranteeing the exact final result of DBSCAN. Experiments show speedups of orders of magnitude over DBSCAN and its fastest variants on very large real and synthetic complex datasets.
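For context, a minimal textbook DBSCAN makes the O(n²) cost visible: it issues one linear-scan range query per object. This sketch is the baseline AnyDBC improves on (by answering the same clustering with far fewer range queries), not AnyDBC itself; the tiny 1-D dataset is illustrative.

```python
import numpy as np

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: one range query per object, each a full linear scan,
    hence O(n^2) distance computations overall."""
    n = len(points)
    labels = [-1] * n           # -1 = noise
    visited = [False] * n
    cluster = 0

    def region(i):              # the range query AnyDBC seeks to avoid
        return [j for j in range(n)
                if np.linalg.norm(points[i] - points[j]) <= eps]

    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        seeds = region(i)
        if len(seeds) < min_pts:
            continue            # noise for now (a cluster may claim it later)
        labels[i] = cluster
        queue = list(seeds)
        while queue:            # expand the cluster from core points
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster
            if not visited[j]:
                visited[j] = True
                nbrs = region(j)
                if len(nbrs) >= min_pts:
                    queue.extend(nbrs)
        cluster += 1
    return labels

pts = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2], [20.0]])
labels = dbscan(pts, eps=0.3, min_pts=2)  # two clusters plus one noise point
```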
Society of Nuclear Medicine Annual Meeting Abstracts, May 1, 2010
Society of Nuclear Medicine Annual Meeting Abstracts, May 1, 2010
International Forum on Strategic Technology 2010, 2010
OPC (Openness, Productivity, and Collaboration) standards released by the OPC Foundation have provided a solution for system integration in recent years, especially in industrial automation and the enterprise systems that support industry. Because the OPC standards based in ...
Proceedings of the 2011 workshop on Data mining for medicine and healthcare - DMMH '11, 2011
Diffusion tensor imaging (DTI) is an MRI-based technology in neuroscience which provides a non-invasive way to explore the white matter fiber tracts in the human brain. From DTI, thousands of fibers can be extracted, and thus need to be clustered automatically into anatomically meaningful bundles for further use. In this paper, we focus on the essential question of how to provide an efficient and effective similarity measure for the fiber clustering problem. Our novel similarity measure is based on an adapted Longest Common Subsequence method to measure shape similarity between fibers. Moreover, the distance between the start and end points of a pair of fibers is combined with the shape similarity to form a unified and flexible fiber similarity measure which can effectively capture the similarity between fibers in the same bundle, even in noisy conditions. To enhance efficiency, a lower bounding technique is used to restrict the comparison of two fibers, thus saving computational cost. Our new similarity measure is used together with a density-based clustering algorithm to segment fibers into groups. Experiments on synthetic and real data sets show the efficiency and effectiveness of our approach compared to other distance-based techniques, namely Dynamic Time Warping (DTW), Mean of Closest Point (MCP), and Hausdorff (HDD) distance.
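The two ingredients — an LCSS shape term over the fiber polylines plus an endpoint-distance term — can be sketched as follows. The matching tolerance `eps`, the weight `w`, and the way the two terms are normalised and combined are illustrative assumptions, not the paper's exact formulation (which also uses lower bounding to skip comparisons).

```python
import numpy as np

def lcss(a, b, eps):
    """Longest Common Subsequence length between two 3-D polylines:
    two points match when they lie within eps of each other."""
    n, m = len(a), len(b)
    dp = np.zeros((n + 1, m + 1), dtype=int)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if np.linalg.norm(a[i - 1] - b[j - 1]) <= eps:
                dp[i, j] = dp[i - 1, j - 1] + 1
            else:
                dp[i, j] = max(dp[i - 1, j], dp[i, j - 1])
    return dp[n, m]

def fiber_similarity(a, b, eps, w=0.5):
    """Hypothetical combined measure: normalised LCSS shape term plus an
    endpoint-proximity term, mixed with weight w."""
    shape = lcss(a, b, eps) / min(len(a), len(b))
    d_end = np.linalg.norm(a[0] - b[0]) + np.linalg.norm(a[-1] - b[-1])
    endpoint = 1.0 / (1.0 + d_end)
    return w * shape + (1 - w) * endpoint

# Three toy "fibers": f1 and f2 run parallel and close; f3 is far away.
f1 = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]], dtype=float)
f2 = np.array([[0, 0.1, 0], [1, 0.1, 0], [2, 0.1, 0]], dtype=float)
f3 = np.array([[0, 5, 0], [1, 5, 0], [2, 5, 0]], dtype=float)
s_same = fiber_similarity(f1, f2, eps=0.5)
s_diff = fiber_similarity(f1, f3, eps=0.5)
```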
Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, 2014
Redundancy and parameter setting remain key challenges: clusters usually exist in subspaces rather than in the full-dimensional space.
Proceedings of 2011 6th International Forum on Strategic Technology, 2011
Classic OPC, released by the OPC Foundation, is well accepted and applied in industrial automation, which has led to many OPC products on the market from a variety of companies. However, OPC technology was based on the retiring Microsoft COM/DCOM. The OPC Unified Architecture was introduced as the new-generation specification with the main goal of keeping all the functionality of Classic OPC while switching from COM/DCOM technology to state-of-the-art web services. The OPC Foundation has also been developing an OPC UA Toolkit that provides a collection of libraries, classes, and interfaces which make it easy for developers and programmers to create and implement OPC UA components. However, this toolkit is insufficient for developers and programmers to implement real monitoring and control applications from industry, due to the limitations of such a toolkit, the complexity of related decision tasks and information systems, etc. In this paper, an OPC UA (Unified Architecture) client framework is proposed and developed using OPC UA specifications, Service Oriented Architecture (SOA), web services, XML, the OPC UA SDK, etc. This framework minimizes the effort developers and programmers spend learning new techniques and allows system architects and designers to perform dependency analysis during the development of monitoring and control applications. Initial results from a system implemented with Visual Studio 2008 are also provided.
Studies in Computational Intelligence, 2010
In this work, we introduce some novel heuristics which can enhance the efficiency of the Heuristic Discord Discovery (HDD) algorithm proposed by Keogh et al. for finding the most unusual time series subsequences, called time series discords. Our new heuristics consist of a new discord measure function, which helps set up a range of alternative good orderings for the outer loop of the HDD algorithm, and a branch-and-bound search mechanism carried out in the inner loop of the algorithm. Through extensive experiments on a variety of diverse datasets, our scheme is shown to perform better than previous schemes, namely HOT SAX and WAT.
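A discord is the subsequence whose distance to its nearest non-overlapping neighbour is largest. The brute-force search below, with the classic early-abandoning trick in the inner loop, is a minimal baseline sketch: HDD/HOT SAX-style heuristics reorder the two loops to abandon far earlier, which is what the new heuristics above improve further. The toy series and window length are illustrative.

```python
def discord(series, m):
    """Return the start index of the length-m discord: the subsequence whose
    (squared Euclidean) distance to its nearest non-overlapping neighbour is
    largest. The inner distance loop abandons early once it exceeds the
    best neighbour distance found so far."""
    n = len(series) - m + 1
    best_loc, best_dist = -1, -1.0
    for i in range(n):                 # candidate subsequence (outer loop)
        nn = float("inf")              # distance to nearest neighbour so far
        for j in range(n):             # potential neighbour (inner loop)
            if abs(i - j) < m:         # skip trivial self-matches
                continue
            d = 0.0
            for k in range(m):
                d += (series[i + k] - series[j + k]) ** 2
                if d >= nn:            # early abandon: cannot beat nn
                    break
            nn = min(nn, d)
        if nn > best_dist:
            best_loc, best_dist = i, nn
    return best_loc

# A regular 0/1 oscillation with one spike at index 6: the discord window
# should cover the spike.
ts = [0, 1, 0, 1, 0, 1, 9, 1, 0, 1, 0, 1, 0]
loc = discord(ts, m=3)
```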
English Language Teaching, 2012
This paper evaluated two ESP textbooks using the evaluation framework of McDonough and Shaw (2003), based on external and internal evaluation. The first textbook is Business Objectives (1996) by Vicki Hollett, and the second is Business Studies, Second Edition (2002) by Alain Anderton. To avoid repetition, I will use BO and BS, respectively, to abbreviate the names of these books. The paper briefly discusses the external evaluation and then concludes with the results of a detailed evaluation of one chapter from each textbook for a course that I am teaching. The course is for business-major students who wish to apply for jobs at the Saudi Telecommunication Company (STC), which requires a strong command of English. The evaluation indicated that both books would be appropriate if merged together and supplemented with additional materials, as a textbook that can accommodate the needs of all learners does not exist.
Chemical Engineering Journal, 2005
There is a need to develop methodologies enabling one to determine UASB reactor performance, not only for designing more efficient UASB reactors but also for predicting the performance of existing reactors under various conditions of influent wastewater flows and characteristics. In this work, dynamic mathematical models for predicting the efficiency of a UASB reactor were developed. The dynamic modeling technique was applied successfully to a three-month data record from a laboratory milk-wastewater-treatment UASB reactor. The technique used included regression analysis by residuals. Eleven parameters were examined, including the following: % COD efficiency, influent COD, COD reduction, biomass produced, biogas production rate, % methane in biogas, alkalinity, the reactor's temperature and redox potential, the recirculation vessel's temperature and pH, and each parameter with a time lag of up to 3 days. Finally, after trials with all parameters and all time lags, the two best-fitting models were selected. The models' adequacy was checked by the χ² test and F test on a data record from the same UASB reactor at a different time period and proved to be very satisfactory. Additionally, the models' ability to predict and control the plant's operation was examined. The simulation results thus obtained were carefully analyzed based on a qualitative understanding of the UASB process and were found to provide important insights into the key variables responsible for influencing the working of the UASB reactor under varying input conditions.
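The "each parameter with a time lag of up to 3 days" procedure amounts to regressing today's output on lagged copies of each input. A minimal sketch of that lagged least-squares fit, on synthetic data (the COD series, the lag-2 dependence, and the coefficients are all invented for illustration):

```python
import numpy as np

def lagged_design(x, max_lag):
    """Design matrix whose columns are x delayed by 0..max_lag steps,
    mirroring the trials of each parameter at lags of up to 3 days."""
    n = len(x) - max_lag
    cols = [x[max_lag - k: max_lag - k + n] for k in range(max_lag + 1)]
    return np.column_stack(cols)

# Hypothetical daily record: today's efficiency depends on influent COD
# two days earlier (coefficient 0.5) plus a constant of 30.
rng = np.random.default_rng(0)
cod = rng.uniform(100, 200, size=60)
eff = 30 + 0.5 * np.roll(cod, 2)
eff[:2] = 30 + 0.5 * cod[:2]      # fill the two undefined leading days

X = lagged_design(cod, max_lag=3)
X = np.column_stack([np.ones(len(X)), X])   # prepend an intercept column
y = eff[3:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
# coef layout: [intercept, lag 0, lag 1, lag 2, lag 3];
# the fit should place ~0.5 on the lag-2 column and ~30 on the intercept.
```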
I. INTRODUCTION. The Software Technology Group at TU Dresden has long experience with component-based software development and techniques. For a recent addition to the public debate, see the book entitled Invasive Software Composition [1]. Currently, the group is involved in projects (e.g., the European NoE REWERSE, IP ModelPlex, feasiPLe, etc.) addressing composition for declarative languages. More precisely, languages important for the development of the Semantic Web and in software modeling are addressed. Such languages include, for example, rule languages (Xcerpt, R2ML), Web query languages (XQuery), ontology languages (OWL, Notation3), and general modeling languages (MOF, UML, Ecore). To enable component-based development for such languages, the composition framework Reuseware is being developed [3], both as a conceptual framework and as a tool. Szyperski [4] defines a software component as follows: "A software component is a unit of composition with contractually specified interfaces and explicit context dependencies only. A software component can be deployed independently and is subject to composition by third parties." [4] This definition calls for components to be black-box components, where no information can be inferred beyond the explicitly specified interfaces of the component. Such an approach enforces strong encapsulation and is very useful for reuse of components by third parties, as these third parties need only rely on the (relatively little) information provided in the interface specifications. For declarative languages, a pure black-box approach cannot always be taken. We currently see two reasons for this. First, not all declarative languages describe processing entities (e.g., ontology languages). As such, there is not even a notion of well-defined inputs and outputs to interface components, which is an assumption made for black boxes.
Thus, a different composition paradigm is needed to address certain declarative languages. We argue that the grey-box and fragment-based component paradigm is more suited for these languages. [Figure: a generic composition system (composition technique, composition language, upper-level composition model) is refined into a dedicated composition system (dedicated composition model, dedicated operators) for the refined language, connected by use, inherits, refines, and dictates-requirements relationships.]
Administrative Science Quarterly, 1960
The Professional Soldier: A Social and Political Portrait (1960). Janowitz, Morris. Publisher: Free Press, New York, USA. Download: http://148.201.96.14/dc/ver.aspx?ns=000265991
IEEE Transactions on Knowledge and Data Engineering, 2021
The heterogeneity of today's Web sources requires information retrieval (IR) systems to handle mu... more The heterogeneity of today's Web sources requires information retrieval (IR) systems to handle multi-modal queries. Such queries define a user's information needs by different data modalities, such as keywords, hashtags, user profiles, and other media. Recent IR systems answer such a multi-modal query by considering it as a set of separate uni-modal queries. However, depending on the chosen operationalisation, such an approach is inefficient or ineffective. It either requires multiple passes over the data or leads to inaccuracies since the relations between data modalities are neglected in the relevance assessment. To mitigate these challenges, we present an IR system that has been designed to answer genuine multi-modal queries. It relies on a heterogeneous network embedding, so that features from diverse modalities can be incorporated when representing both, a query and the data over which it shall be evaluated. By embedding a query and the data in the same vector space, the relations across modalities are made explicit and exploited for more accurate query evaluation. At the same time, multi-modal queries are answered with a single pass over the data. An experimental evaluation using diverse real-world and synthetic datasets illustrates that our approach returns twice the amount of relevant information compared to baseline techniques, while scaling to large multi-modal databases. Index Terms-query embedding, graph embedding, heterogeneous information network ✦ • A model for heterogeneous data: We introduce a representation of heterogeneous data based on HINs to capture the semantic relations between data of different modalities of the same entity [13], [14]. This includes a
We introduce a novel interactive framework to handle both instance-level and temporal smoothness ... more We introduce a novel interactive framework to handle both instance-level and temporal smoothness constraints for clustering large temporal data. It consists of a constrained clustering algorithm which optimizes the clustering quality, constraint violation and the historical cost between consecutive data snapshots. At the center of our framework is a simple yet effective active learning technique for iteratively selecting the most informative pairs of objects to query users about, and updating the clustering with new constraints. Those constraints are then propagated inside each snapshot and between snapshots via constraint inheritance and propagation to further enhance the results. Experiments show better or comparable clustering results than existing techniques as well as high scalability for large datasets.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020
The density-based clustering algorithm is a fundamental data clustering technique with many real-... more The density-based clustering algorithm is a fundamental data clustering technique with many real-world applications. However, when the database is frequently changed, how to effectively update clustering results rather than reclustering from scratch remains a challenging task. In this work, we introduce IncAnyDBC, a unique parallel incremental data clustering approach to deal with this problem. First, IncAnyDBC can process changes in bulks rather than batches like state-of-the-art methods for reducing update overheads. Second, it keeps an underlying cluster structure called the object node graph during the clustering process and uses it as a basis for incrementally updating clusters wrt. inserted or deleted objects in the database by propagating changes around affected nodes only. In additional, IncAnyDBC actively and iteratively examines the graph and chooses only a small set of most meaningful objects to produce exact clustering results of DBSCAN or to approximate results under arbitrary time constraints. This makes it more efficient than other existing methods. Third, by processing objects in blocks, IncAnyDBC can be efficiently parallelized on multicore CPUs, thus creating a work-efficient method. It runs much faster than existing techniques using one thread while still scaling well with multiple threads. Experiments are conducted on various large real datasets for demonstrating the performance of IncAnyDBC.
Southwestern Historical Quarterly, 2021
Proceedings of the 2015 International Conference on Industrial Technology and Management Science, 2015
Social Casework, 1975
Ogalala are veterans of Vietnam. They are knowledgeable about national and internati?nal affairs.... more Ogalala are veterans of Vietnam. They are knowledgeable about national and internati?nal affairs. They see an analogy between the situations at Pine Ridge and Vietnam, and that analogy is repeated often in this book. If this medicine is too strong, one can turn to Solving the Indian Problem: The Wh,ite Man's Burdensome Business, a New York TImes book. It concentrates on Indian affairs largely through news stories and essays that have appeared in the New York Times since December 22, 1872. Through these reprints and the writings of Murray L. Wax and Robert W. Buchanan, much valuable information is given. However, of the thirteen authors, only Vine Deloria,jr., is Indian, revealing how seldom the news media turn to Indians for information about Indians. One would not get the impression from this book that Indians are able to speak for themselves or that "the people are standing up." The introductory sections by Wax and Buchanan, Deloria's essays, the two chapters on termination, and the essay on the Taos Pueblo by Winthrop Griffith are the strongest pa.rt.s of the book. Griffith sensed the mood and SpIrIt of the Pueblo reservation. He was interested in the Indians' experience and their evaluation of their situation. A disturbing aspect of a number of the reprints is the tongue-in-cheek attitude of the instant experts who spent a week or two on a reservation. One portrayal of Indian life reads more like a description of a local zoo. "The Ogalala was a friendly and, at times, a witty creature. He loves athletic games and plays them well." The Sioux tribal council wrote the editor, calling the article condescending and demeaning. The book has a disjointed feel as one goes from one article to another. The theme begins to resemble a format: things were bad in the past, but now there are new government I?rograms solving everything. 
No wonder Indians are confused: five years after such an article is written, the situation on a particular reservation is as bad as ever, or worse. Newsmen seem to feel that a story about Indians will be of interest only if they use stereotyped images that reflect Indians of the past as savages. One article reporting on. ~he unfair characterizations of Indians on television says:
Automatica, 2018
Two robust model predictive control (MPC) schemes are proposed for tracking unicycle robots with ... more Two robust model predictive control (MPC) schemes are proposed for tracking unicycle robots with input constraint and bounded disturbances: tube-MPC and nominal robust MPC (NRMPC). In tube-MPC, the control signal consists of a control action and a nonlinear feedback law based on the deviation of the actual states from the states of a nominal system. It renders the actual trajectory within a tube centered along the optimal trajectory of the nominal system. Recursive feasibility and input-to-state stability are established and the constraints are ensured by tightening the input domain and the terminal region. In NRMPC, an optimal control sequence is obtained by solving an optimization problem based on the current state, and then the first portion of this sequence is applied to the real system in an open-loop manner during each sampling period. The state of the nominal system model is updated by the actual state at each step, which provides additional feedback. By introducing a robust state constraint and tightening the terminal region, recursive feasibility and input-to-state stability are guaranteed. Simulation results demonstrate the effectiveness of both strategies proposed.
Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016
The density-based clustering algorithm DBSCAN is a stateof-the-art data clustering technique with... more The density-based clustering algorithm DBSCAN is a stateof-the-art data clustering technique with numerous applications in many fields. However, its O(n 2) time complexity still remains a severe weakness. In this paper, we propose a novel anytime approach to cope with this problem by reducing both the range query and the label propagation time of DBSCAN. Our algorithm, called AnyDBC, compresses the data into smaller density-connected subsets called primitive clusters and labels objects based on connected components of these primitive clusters for reducing the label propagation time. Moreover, instead of passively performing the range query for all objects like existing techniques, AnyDBC iteratively and actively learns the current cluster structure of the data and selects a few most promising objects for refining clusters at each iteration. Thus, in the end, it performs substantially fewer range queries compared to DBSCAN while still guaranteeing the exact final result of DBSCAN. Experiments show speedup factors of orders of magnitude compared to DBSCAN and its fastest variants on very large real and synthetic complex datasets.
Society of Nuclear Medicine Annual Meeting Abstracts, May 1, 2010
Society of Nuclear Medicine Annual Meeting Abstracts, May 1, 2010
International Forum on Strategic Technology 2010, 2010
Abstract-OPC (Openness, Productivity, and Collaboration) standards released by the OPC Foundation... more Abstract-OPC (Openness, Productivity, and Collaboration) standards released by the OPC Foundation have provided a solution for system integration in recent years, especially in industrial automation and the enterprise systems that support industry. Because the OPC standards based in ...
Proceedings of the 2011 workshop on Data mining for medicine and healthcare - DMMH '11, 2011
Diffusion tensor imaging (DTI) is an MRI-based technology in neuroscience which provides a non-in... more Diffusion tensor imaging (DTI) is an MRI-based technology in neuroscience which provides a non-invasive way to explore the white matter fiber tracks in the human brain. From DTI, thousands of fibers can be extracted, and thus need to be clustered automatically into anatomically meaningful bundles for further use. In this paper, we focus on the essential question how to provide an efficient and effective similarity measure for the fiber clustering problem. Our novel similarity measure is based on the adapted Longest Common Subsequence method to measure shape similarity between fibers. Moreover, the distance between start and end points of a pair of fibers is also included with the shape similarity to form a unified and flexible fiber similarity measure which can effectively capture the similarity between fibers in the same bundles even in noisy conditions. To enhance the efficiency, the lower bounding technique is used to restrict the comparison of two fibers thus saving computational cost. Our new similarity measure is used together with density-based clustering algorithm to segment fibers into groups. Experiments on synthetic and real data sets show the efficiency and effectiveness of our approach compared to other distance-based techniques, namely Dynamic Time Warping (DTW), Mean of Closest Point (MCP) and Hausdorf (HDD) distance.
Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, 2014
Redundancy and parameters challenge. Clusters usually exist in subspaces but not full-D space.
Proceedings of 2011 6th International Forum on Strategic Technology, 2011
Classic OPC, released by OPC Foundation, is well accepted and applied in industrial automation, w... more Classic OPC, released by OPC Foundation, is well accepted and applied in industrial automation, which led to many OPC products on the market from a variety of companies. However, OPC technology was based on retiring Microsoft COM/DCOM. The OPC Unified Architecture was introduced as the new generation specification with the main goal of keeping all the functionality of Classic OPC and switching from COM/DCOM technology to state -of-the-art web services. The OPC Foundation has been also developing OPC UA Toolkit that provide a collection of libraries, classes, and interfaces which make developers and programmers easy to create and implement OPC UA components. However, this toolkit is insufficient for developers and programmers to implement real monitoring and control applications from industry due to the limitations of such a toolkit, the complexity of related decision tasks and information systems, etc. In this paper, an OPC UA (Unified Architecture) client framework is proposed and developed by using OPC UA specifications, Service Oriented Architecture (SOA), web services, XML, OPC UA SDK, etc. This framework minimizes the efforts of developers and programmers in learning new techniques and allows system arc hitects and designers to perform dependency analysis on the development of monitoring and control applications. The initial results from the system implemented by Visual Studio 2008 are also provided.
Studies in Computational Intelligence, 2010
In this work, we introduce novel heuristics which enhance the efficiency of the Heuristic Discord Discovery (HDD) algorithm proposed by Keogh et al. for finding the most unusual time series subsequences, called time series discords. Our new heuristics consist of a new discord measure function, which helps to set up a range of alternative good orderings for the outer loop of the HDD algorithm, and a branch-and-bound search mechanism carried out in the inner loop of the algorithm. Through extensive experiments on a variety of diverse datasets, our scheme is shown to outperform previous schemes, namely HOT SAX and WAT.
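The outer/inner loop structure that these heuristics accelerate can be sketched as below. This is a generic HDD-style search with early abandoning in the inner loop, not the paper's own ordering heuristics: the `order` parameter stands in for whatever outer-loop ordering a heuristic supplies.

```python
import math

def discord_search(series, w, order=None):
    """HDD-style discord search over all subsequences of length w.

    The discord is the subsequence whose distance to its nearest
    non-overlapping neighbour is largest.  The inner loop breaks as
    soon as a neighbour closer than the best-so-far discord distance
    is found (early abandoning); a good outer-loop ordering makes
    those breaks happen early, which is what ordering heuristics aim at.
    """
    n = len(series) - w + 1
    subs = [series[i:i + w] for i in range(n)]
    outer = order if order is not None else range(n)
    best_dist, best_pos = -1.0, -1
    for p in outer:
        nn = math.inf
        for q in range(n):
            if abs(p - q) < w:      # skip overlapping self-matches
                continue
            d = math.dist(subs[p], subs[q])
            if d < best_dist:       # p cannot be the discord: abandon
                nn = d
                break
            nn = min(nn, d)
        if nn > best_dist:
            best_dist, best_pos = nn, p
    return best_pos, best_dist
```

On a flat series with a single spike, the search returns the subsequence covering the spike, since all other windows have near-zero nearest-neighbour distances.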
English Language Teaching, 2012
This paper evaluated two ESP textbooks using the evaluation framework of McDonough and Shaw (2003), which is based on external and internal evaluation. The first textbook is Business Objectives (1996) by Vicki Hollett, and the second is Business Studies, Second Edition (2002) by Alain Anderton; to avoid repetition, I will abbreviate them as BO and BS, respectively. The paper briefly discusses the external evaluation and then concludes with the results of a detailed evaluation of one chapter from each textbook for a course that I am teaching. The course is for business-major students who wish to apply for jobs at the Saudi Telecommunication Company (STC), which requires a strong command of English. The evaluation indicated that both books would be appropriate if merged together with some additional materials, as a textbook that can accommodate the needs of all learners does not exist.
Chemical Engineering Journal, 2005
There is a need to develop methodologies enabling one to determine UASB reactor performance, not only for designing more efficient UASB reactors but also for predicting the performance of existing reactors under various conditions of influent wastewater flows and characteristics. In this work, dynamic mathematical models for predicting the efficiency of a UASB reactor were developed. The dynamic modeling technique was applied successfully to a three-month data record from a laboratory milk-wastewater-treatment UASB reactor. The technique used included regression analysis by residuals. Eleven parameters were examined, including % COD efficiency, influent COD, COD reduction, biomass produced, biogas production rate, % methane in biogas, alkalinity, reactor temperature and redox, and recirculation-vessel temperature and pH, each also with a time lag of up to 3 days. After all parameter and time-lag trials, the two best-fitted models were developed. The models' adequacy was checked by χ² and F tests on a data record from the same UASB reactor at a different time period and proved very satisfactory. Additionally, the models' ability to predict and to control the plant's operation was examined. The simulation results were carefully analyzed based on a qualitative understanding of the UASB process and were found to provide important insights into the key variables responsible for the working of the UASB reactor under varying input conditions.
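The idea of regressing a performance variable on time-lagged predictors can be sketched with ordinary least squares. This is an illustrative stand-in for the paper's regression-by-residuals procedure, not its actual model; the variable names and lag depth are assumptions.

```python
import numpy as np

def fit_lagged_model(y, x, max_lag=3):
    """OLS fit of y[t] on x[t], x[t-1], ..., x[t-max_lag].

    Returns the coefficient vector (intercept first) and the R^2 on
    the training window.  A real analysis would also check residuals
    and validate on a held-out period, as the paper does with
    chi-squared and F tests.
    """
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    t = np.arange(max_lag, len(y))
    # Design matrix: [1, x_t, x_{t-1}, ..., x_{t-max_lag}]
    X = np.column_stack([np.ones(len(t))] +
                        [x[t - k] for k in range(max_lag + 1)])
    beta, *_ = np.linalg.lstsq(X, y[t], rcond=None)
    resid = y[t] - X @ beta
    r2 = 1.0 - resid.var() / y[t].var()
    return beta, r2
```

Fitting this to data generated as y[t] = 2 + 0.5·x[t] + 0.3·x[t-1] recovers the intercept and lag coefficients with R² close to 1.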
I. INTRODUCTION The Software Technology Group at TU Dresden has long experience with component-based software development and techniques. For a recent addition to the public debate, see the book entitled Invasive Software Composition [1]. Currently, the group is involved in projects (e.g. the European NoE REWERSE, IP ModelPlex, feasiPLe, etc.) addressing composition for declarative languages. More precisely, languages important for the development of the Semantic Web and for software modeling are addressed. Such languages include, for example, rule languages (Xcerpt, R2ML), Web query languages (XQuery), ontology languages (OWL, Notation3) and general modeling languages (MOF, UML, Ecore). To enable component-based development for such languages, the composition framework Reuseware is being developed [3], both as a conceptual framework and as a tool. Szyperski [4] defines a software component as follows: "A software component is a unit of composition with contractually specified interfaces and explicit context dependencies only. A software component can be deployed independently and is subject to composition by third parties." [4] This definition calls for components to be black-box components where no information can be inferred beyond the explicitly specified interfaces of the component. Such an approach enforces strong encapsulation and is very useful for reuse of components by third parties, as these third parties need only rely on the relatively little information provided in the interface specifications. For declarative languages, a pure black-box approach cannot always be taken. We currently see two reasons for this. First, not all declarative languages describe processing entities (e.g. ontology languages). As such, there is not even a notion of well-defined inputs and outputs to interface components, which is an assumption made for black boxes.
Thus, a different composition paradigm is needed to address certain declarative languages. We argue that the grey-box, fragment-based component paradigm is more suited for these languages.
Administrative Science Quarterly, 1960
The Professional Soldier: A Social and Political Portrait (1960). Janowitz, Morris. Publisher: New York, USA: Free Press. Download: http://148.201.96.14/dc/ver.aspx?ns=000265991