Theo van der Weide | Radboud University Nijmegen

Papers by Theo van der Weide

Quality Makes the Information Market

On the Move to Meaningful Internet Systems 2006: CoopIS, DOA, GADA, and ODBASE, 2006

In this paper we consider information exchange via the Web to be an information market. The notion of quality plays an important role on this information market. We present a model of quality and discuss how this model can be operationalized. This leads us to quality measurement, interpretation of measurements, and the associated accuracy. An illustration in the form of a basic quality assessment system is presented. The investigations were partly supported by the Dutch Organization for Scientific Research (NWO).

Conceptual Diagram Development for Sustainable eGovernment Implementation

Electronic Journal of e-Government, 2017

E-government implementation has received extra attention as a new way to increase the development of a country. International organizations such as the ITU and the United Nations provide strategic guidance to overcome the challenges of ICT use. Yet the level of ICT use in developing countries is still very low. Researchers are contributing to successful implementation with models and theories that conceptualize this complex situation. This paper extends the DPSIR-based e-government conceptual framework in the direction of implementation strategies. The main focus is on improving stakeholder involvement during requirements engineering. Object Role Modeling (ORM) was used (1) to develop a semi-natural (controlled) language that is understandable to both domain stakeholders and system analysts, and (2) to produce a common description of the application domain in this language. The proposed model can be used to construct quantitative simulation tools for policy makers.

Agents in Cyberspace – Towards a Framework for Multi-Agent Systems in Information Discovery

Electronic Workshops in Computing, 1998

A Linguistic-Based Systematic Approach to Complex System Dynamics and its Application to E-government Introduction in Zanzibar

Complex Systems Informatics and Modeling Quarterly, 2017

Systems thinking has become an effective strategy for dealing with complex systems. Such systems are characterized by mutual interactions, causality, and interdependency between system components. A typical example is the cooperation between governmental organizations and stakeholder interaction. The complexity of developing an e-government system calls for a more fundamental approach, in which the roles of domain expert and system analyst are clearly separated. The main focus of this article is (1) to propose a linguistically based systematic approach to the construction of models for the dynamics of complex systems, and (2) to propose extended causal diagrams. Our research methodology is based on Design Science. We start from a conceptual language developed for the application domain at hand and use it to define the dynamic factors. Then we show how the resulting extended causal diagram is transformed into a framework for System Dynamics. We demonstrate this approach using a basic form of e-government as a running example. Our intention is to use this approach as a basis for a systematic, step-wise introduction of e-government in Zanzibar. Moreover, the method is useful for modeling any complex system, especially for the description and evaluation of intended policies.

Matching Index Expressions for Information Retrieval

Information Retrieval

The INN system is a dynamic hypertext tool for searching and exploring the WWW. It uses a dynamically built ancillary layer to support easy interaction. This layer features the subexpressions of index expressions that are extracted from rendered documents. Currently, the INN system uses keyword-based matching. The effectiveness of the INN system may be increased by using matching functions for index expressions. In the design of such functions, several constraints stemming from the INN must be taken into account. Important constraints are a limited response time and storage space, a focus on discriminating (different notions of) subexpressions of index expressions, and domain independence. With these contextual constraints in mind, several matching functions are designed and evaluated both theoretically and practically.

PROFILE: A Multi-disciplinary approach to information discovery. Technical Report CSIR0001

Information Parallax

Application of Agents and Intelligent Information Technologies

To effectively use and exchange information among AI systems, a formal specification of the representation of their shared domain of discourse—called an ontology—is indispensable. In this chapter we introduce a special kind of knowledge representation based on a dual view of the universe of discourse and show how it can be used in human activities such as searching, in-depth exploration, and browsing. After a formal definition of dualistic ontologies, we exemplify this definition with three different (well-known) kinds of ontologies, based on the vector model, on formal concept analysis, and on fuzzy logic, respectively. The vector model leads to concepts derived by latent semantic indexing using the singular value decomposition. Both the set model and the fuzzy-set model lead to formal concept analysis, in which the fuzzy-set model is equipped with a parameter that controls the fine-graining of the resulting concepts. We discuss the relation between the resulting systems of concepts.
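The latent-semantic-indexing step mentioned in this abstract can be sketched with a small, invented term-by-document matrix; the data and the chosen rank k = 2 are illustrative assumptions, not taken from the chapter:

```python
import numpy as np

# Hypothetical term-by-document matrix: rows are terms, columns are documents.
# Entry A[t, d] is the weight (here a raw count) of term t in document d.
A = np.array([
    [2, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 2, 1, 1],
    [0, 0, 1, 2],
], dtype=float)

# Singular value decomposition: A = U @ diag(s) @ Vt.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Keep the k largest singular values to obtain a rank-k "concept space".
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# A_k is the best rank-k approximation of A in the Frobenius norm;
# documents can now be compared via their k-dimensional concept vectors.
doc_vectors = (np.diag(s[:k]) @ Vt[:k, :]).T  # one vector per document
```

The truncation to rank k is what suppresses noise and surfaces the latent "concepts" the abstract refers to.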

eHealth Service Discovery Framework: A Case Study in Ethiopia

Lecture Notes in Computer Science, 2011

Service Discovery Framework for Personalized eHealth

International Conference on Instrumentation, Measurement, Circuits and Systems (ICIMCS 2011)

Personalization of services based on user preferences and user context facilitates service discovery. Much research has been conducted on service personalization; however, little has been done to allow non-literate users to query services according to their educational level and their linguistic, cognitive, physiological, and psychological abilities. In developing countries, users have diverse cultures, languages, and traditional values. As a result, it is valuable to design services on the basis of user literacy level, user preference, and user context.

eHealth service discovery framework for a low infrastructure context

2010 2nd International Conference on Computer Technology and Development, 2010

Adaptation in Multimedia Systems

Multimedia Tools and Applications, 2005

Multimedia systems can profit greatly from personalization. Such personalization is essential to give users the feeling that the system is easily accessible, especially if it is done automatically. How this adaptive personalization works depends strongly on the adaptation model that is chosen. We introduce a generic two-dimensional classification framework for user modeling systems. This enables us to clarify existing as well as new applications in the area of user modeling. To illustrate our framework, we evaluate push-based and pull-based user modeling in user modeling systems.

Dualistic Ontologies

International Journal of Intelligent Information Technologies, 2005

To effectively use and exchange information among AI systems, a formal specification of the representation of their shared domain of discourse—called an ontology—is indispensable. In this article we introduce a special kind of knowledge representation based on a dual view of the universe of discourse and show how it can be used in human activities such as searching, in-depth exploration, and browsing. After a formal definition of dualistic ontologies, we exemplify this definition with three different (well-known) kinds of ontologies, based on the vector model, on formal concept analysis, and on fuzzy logic, respectively. The vector model leads to concepts derived by latent semantic indexing using the singular value decomposition. Both the set model and the fuzzy-set model lead to formal concept analysis, in which the fuzzy-set model is equipped with a parameter that controls the fine-graining of the resulting concepts. We discuss the relation between the resulting systems of concepts.
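As a rough illustration of how formal concept analysis derives a system of concepts from a set-based view, the following sketch enumerates all formal concepts of a tiny, invented document-term incidence relation (the data and names are hypothetical, not from the article):

```python
from itertools import combinations

# Hypothetical incidence relation: which documents exhibit which terms.
incidence = {
    "doc1": {"ontology", "logic"},
    "doc2": {"ontology", "retrieval"},
    "doc3": {"logic", "retrieval"},
}
all_objects = set(incidence)

def common_attributes(objects):
    """Attributes shared by every object in the set (the 'intent')."""
    if not objects:  # intent of the empty extent is every attribute
        return set.union(*incidence.values())
    return set.intersection(*(incidence[o] for o in objects))

def objects_with(attributes):
    """Objects possessing every attribute in the set (the 'extent')."""
    return {o for o, attrs in incidence.items() if attributes <= attrs}

def concepts():
    """Enumerate all formal concepts as (extent, intent) pairs."""
    found = set()
    for r in range(len(all_objects) + 1):
        for subset in combinations(sorted(all_objects), r):
            intent = common_attributes(set(subset))
            extent = objects_with(intent)
            found.add((frozenset(extent), frozenset(intent)))
    return found
```

Each concept pairs a maximal set of documents with the full set of terms they share; ordering the concepts by extent inclusion yields the concept lattice used for exploration.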

Measuring the incremental information value of documents

Information Sciences, 2006

On the quality of resources on the Web: An information retrieval perspective

Information Sciences, 2007

We use information from the Web to perform our daily tasks more and more often. Locating the right resources that help us do so is a daunting task, especially given the present rate of growth of the Web and the many different kinds of resources available. The task of search engines is to assist us in finding those resources that are apt for our given tasks. In this paper we propose to use the notion of quality as a metric for estimating the aptness of online resources for individual searchers. The formal model for quality presented in this paper is firmly grounded in the literature. It is based on the observation that objects (dubbed artefacts in our work) can play different roles (i.e., perform different functions). An artefact can be of high quality in one role but of poor quality in another. Moreover, the notion of quality is highly personal. Our quality computations for estimating the aptness of resources for searchers use the notion of linguistic variables from the field of fuzzy logic. After presenting our model for quality, we also show how manipulation of online resources by means of transformations can influence their quality.
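The "linguistic variables" mentioned here come from fuzzy logic: a numeric score is mapped to graded membership in terms like "poor" or "excellent". A minimal sketch, with invented term names and membership functions (not the paper's actual definitions), looks like this:

```python
# A linguistic variable "quality" over a 0..1 score, with three fuzzy terms.
def triangular(a: float, b: float, c: float):
    """Return a triangular membership function rising on [a, b], falling on [b, c]."""
    def mu(x: float) -> float:
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return mu

# Illustrative term definitions; endpoints nudged so 0 and 1 get full membership.
quality_terms = {
    "poor":       triangular(-0.01, 0.0, 0.5),
    "acceptable": triangular(0.25, 0.5, 0.75),
    "excellent":  triangular(0.5, 1.0, 1.01),
}

def describe(score: float) -> str:
    """Map a numeric quality score to its best-fitting linguistic term."""
    return max(quality_terms, key=lambda term: quality_terms[term](score))
```

A score near the boundary of two terms has partial membership in both, which is exactly what makes the representation suitable for a personal, graded notion of quality.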

Phrase-based information retrieval

Information Processing & Management, 1998

In this article we describe a retrieval schema that goes beyond the classical information retrieval keyword hypothesis and also takes linguistic variation into account. Guided by the failures and successes of other state-of-the-art approaches, as well as our own experience with the IRENA system, our approach is based on phrases and incorporates linguistic resources and processors. In this respect, we introduce the phrase retrieval hypothesis to replace the keyword retrieval hypothesis. We suggest a representation of phrases suitable for indexing, and an architecture for such a retrieval system. Syntactical normalization is introduced to improve retrieval effectiveness. Morphological and lexico-semantical normalizations are adjusted to fit in this model.

A general theory for evolving application models

IEEE Transactions on Knowledge and Data Engineering, 1995

In this article we focus on evolving information systems. First, a delimitation of the concept of evolution is provided, resulting in a first attempt at a general theory for such evolutions. The theory distinguishes between the underlying information structure at the conceptual level and its evolution on the one hand, and the description and semantics of operations on the information structure and its population on the other hand. Main issues within this theory are object typing, type relatedness, and identification of objects. In terms of these concepts, we propose some axioms on the well-formedness of evolution. In this general theory, the underlying data model is a parameter, making the theory applicable to a wide range of modelling techniques, including object-role modelling and object-oriented techniques.

Expressiveness in conceptual data modelling

Data & Knowledge Engineering, 1993

Conceptual data modelling techniques aim at the representation of data at a high level of abstraction. The Conceptualisation Principle states that only those aspects are to be represented that deal with the meaning of the Universe of Discourse. Conventional conceptual data modelling techniques, such as ER or NIAM, have to violate the Conceptualisation Principle when dealing with objects with a complex structure: in order to represent these objects, conceptually irrelevant choices have to be made. Worse still, sometimes the Universe of Discourse has to be adapted to suit the modelling technique. Such objects typically occur in domains such as meta-modelling, hypermedia, and CAD/CAM. In this paper, extensions to an existing data modelling technique (NIAM) are discussed and formally defined that make it possible to naturally represent objects with complex structures without violating the Conceptualisation Principle. These extensions are motivated from a practical point of view by examples, and from a theoretical point of view by a comparison with the expressive power of formal set theory and grammar theory.

Conceptual query expansion

Data & Knowledge Engineering, 2006

Without detailed knowledge of a collection, most users find it difficult to formulate effective queries. In fact, as observed from Web search engines, users may spend large amounts of time reformulating their queries in order to satisfy their information need. A proven method to overcome this difficulty is to treat the query as an initial attempt to retrieve information and use it to construct a new, hopefully better query. Another way to improve a query is to use global (thesaurus-like) information. In this article a new, hybrid approach is presented that projects the initial query result onto the global information, leading to a local conceptual overview. The resulting concepts are candidates for query refinement. To show its effectiveness, we show that the conceptual structure resulting from a typical short query (2 terms) contains refinements that perform as well as the most accurate query formulation. Next, we show that query by navigation is an effective mechanism that in most cases finds the optimal concept in a small number of steps. If the optimal concept is not found, it still finds an acceptable sub-optimum. We show that the proposed method compares favorably with existing techniques.

Value and the information market

Data & Knowledge Engineering, 2007

In this paper we explore how (micro)economic theory can be used to analyze and model the exchange of information on the Web. More specifically, we consider searchers for information who engage in transactions on the Web. Searchers will engage in Web transactions only if they gain something from such a transaction. To this end, we develop a formal model for markets, based on the notions of value and transaction. This model enables us to examine transactions on an information market. In this market we have a dual view on transactions, creating a dichotomy of transactors and transactands.

A Novel Model of Autonomous Intelligent Agent Topic Tracking System for the World Wide Web

Autonomous agents are software systems situated within, and part of, an environment; they sense stimuli in that environment and act on it, over time, in pursuit of their own agenda, so as to effect what they sense in the future. Autonomous agents take action without user intervention and operate concurrently, either while the user is idle or while the user is taking other actions. The Internet encompasses a large number of documents to which search engines try to provide access. Even for many narrow topics and potential information needs, there are often many web pages online, and the user of a web search engine would prefer the best pages to be returned. An autonomous intelligent agent topic tracker helps make decisions on behalf of the user by narrowing the search domain and greatly decreasing the required human-computer interaction. Previous information retrieval systems usually return long lists of results containing documents with low relevance to the user query. Thus, the goal of this paper is to build an Intelligent Agent Topic Tracking System that employs document concepts to track documents related to the researcher's needs within the development of a publication topic. The system refines the user query, retrieves results from a search engine with the help of the Google API, and reduces the noise in those results using a document-document similarity model and a document component model to find similar-topic documents in the document pool indexed by the search engines. In addition, a web structure analysis model uses the hub and authority algorithm to evaluate the importance of web pages and to determine their relatedness to a particular topic. Finally, clustering is used to automatically group the document pool into similar topics.
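The hub and authority algorithm referred to here is Kleinberg's HITS. A compact power-iteration sketch on an invented four-page link graph (the graph and iteration count are illustrative assumptions, not the paper's data):

```python
import numpy as np

# Tiny hypothetical web graph: adjacency[i, j] = 1 if page i links to page j.
adjacency = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [1, 0, 1, 0],
], dtype=float)

def hits(adj: np.ndarray, iterations: int = 50):
    """Power iteration for hub and authority scores (HITS)."""
    n = adj.shape[0]
    hubs = np.ones(n)
    auths = np.ones(n)
    for _ in range(iterations):
        auths = adj.T @ hubs        # good authorities are linked from good hubs
        auths /= np.linalg.norm(auths)
        hubs = adj @ auths          # good hubs link to good authorities
        hubs /= np.linalg.norm(hubs)
    return hubs, auths

hubs, auths = hits(adjacency)
best_authority = int(np.argmax(auths))  # the most-linked-to page dominates
```

In this toy graph page 2 receives links from three pages, so it emerges as the top authority; in a topic tracker such scores help rank candidate pages by importance.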

Research paper thumbnail of Quality Makes the Information Market

On the Move to Meaningful Internet Systems 2006: CoopIS, DOA, GADA, and ODBASE, 2006

In this paper we consider information exchange via the Web to be an information market. The notio... more In this paper we consider information exchange via the Web to be an information market. The notion of quality plays an important role on this information market. We present a model of quality and discuss how this model can be operationalized. This leads us to quality measurement, interpretation of measurements and the associated accuracy. An illustration in the form of a basic quality assessment system is presented. ⋆ The investigations were partly supported by the Dutch Organization for Scientific Research (NWO).

Research paper thumbnail of Conceptual Diagram Development for Sustainable eGovernment Implementation

Electronic Journal of e-Government, 2017

E-government implementation has received extra attention as a new way to increase the development... more E-government implementation has received extra attention as a new way to increase the development of a country. International organizations such as the ITU and the United Nations provide strategic guidance to overcome the challenges of ICT use. Yet, the development of ICT use in developing countries is still very low. Researchers are contributing to support successful implementation with models and theories that conceptualize the complex situation. This paper extends the DPSIR-based e-government conceptual framework into the direction of implementation strategies. The main focus is on improving stakeholder involvement during requirements engineering. Object Role Modeling (ORM) was used (1) to develop a semi-natural language (controlled language) that is understandable both for domain stakeholders and system analysts and (2) to make a common description of the application domain in this language. The proposed model can be used to construct quantitative simulation tools to be used by policy makers.

Research paper thumbnail of Agents in Cyberspace – Towards a Framework for Multi-Agent Systems in Information Discovery

Electronic Workshops in Computing, 1998

Research paper thumbnail of A Linguistic-Based Systematic Approach to Complex System Dynamics and its Application to E-government Introduction in Zanzibar

Complex Systems Informatics and Modeling Quarterly, 2017

System thinking has become an effective strategy when dealing with complex systems. Such systems ... more System thinking has become an effective strategy when dealing with complex systems. Such systems are characterized by mutual interactions, causality and inter-dependency between system components. A typical example is the cooperation between governmental organizations and stakeholder interaction. The complexity of developing an e-government system suggests a more fundamental approach, where the roles of domain expert and system analyst are clearly separated. The main focus of this article is (1) to propose a linguistically-based systematic approach to the construction of models for the dynamics of complex systems, and (2) to propose extended causal diagrams. Our research methodology is based on Design Science. We start from a conceptual language developed for the application domain at hand and use this to define the dynamic factors. Then, we show how the resulting extended causal diagram is transformed into a framework for System Dynamics. We have demonstrated this approach by using a basic form of an e-government as a running example. Our intention is to use this approach as a basis for a systematic step-wise introduction of e-government in Zanzibar. Besides, this method is useful for modeling any complex system, especially for the description and evaluation of intended policies.

Research paper thumbnail of Matching Index Expressions for Information Retrieval

Information Retrieval

The INN system is a dynamic hypertext tool for searching and exploring the WWW. It uses a dynamic... more The INN system is a dynamic hypertext tool for searching and exploring the WWW. It uses a dynamically built ancillary layer to support easy interaction. This layer features the subexpressions of index expressions that are extracted from rendered documents. Currently, the INN system uses keyword based matching. The effectiveness of the INN system may be increased by using matching functions for index expressions. In the design of such functions, several constraints stemming from the INN must be taken into account. Important constraints are a limited response time and storage space, a focus on discriminating (different notions of) subexpressions for index expressions, and domain independency. With these contextual constraints in mind, several matching functions are designed and both theoretically and practically evaluated.

Research paper thumbnail of PROFILE: A Multi-disciplinary approach to information discovery.: Technical Report CSIR0001

Research paper thumbnail of Information Parallax

Application of Agents and Intelligent Information Technologies

To effectively use and exchange information among AI systems, a formal specification of the repre... more To effectively use and exchange information among AI systems, a formal specification of the representation of their shared domain of discourse—called an ontology—is indispensable. In this chapter we introduce a special kind of knowledge representation based on a dual view on the universe of discourse and show how it can be used in human activities such as searching, in-depth exploration and browsing. After a formal definition of dualistic ontologies we exemplify this definition with three different (well known) kinds of ontologies, based on the vector model, on formal concept analysis and on fuzzy logic respectively. The vector model leads to concepts derived by latent semantic indexing using the singular value decomposition. Both the set model and the fuzzy-set model lead to formal concept analysis, in which the fuzzy-set model is equipped with a parameter that controls the fine-graining of the resulting concepts. We discuss the relation between the resulting systems of concepts. F...

Research paper thumbnail of eHealth Service Discovery Framework: A Case Study in Ethiopia

Lecture Notes in Computer Science, 2011

Research paper thumbnail of Service Discovery Framework for Personalized Ehealth

International Conference on Instrumentation, Measurement, Circuits and Systems (ICIMCS 2011)

Personalization of services based on user preferences and user context facilitates service discov... more Personalization of services based on user preferences and user context facilitates service discovery. Several researches have been conducted on service personalization, however, little has been done to allow non-literate users to query services based on their educational level, their linguistic, cognitive, physiological and psychological abilities. In developing countries users have diversified culture, language and traditional values. As a result it is valuable to design the services on the basis of user literacy level, user preference and user context.

Research paper thumbnail of eHealth service discovery framework for a low infrastructure context

2010 2nd International Conference on Computer Technology and Development, 2010

Research paper thumbnail of Adaptation in Multimedia Systems

Multimedia Tools and Applications, 2005

Multimedia systems can profit a lot from personalization. Such a personalization is essential to ... more Multimedia systems can profit a lot from personalization. Such a personalization is essential to give users the feeling that the system is easily accessible especially if it is done automatically. The way this adaptive personalization works is very dependent on the adaptation model that is chosen. We introduce a generic two-dimensional classification framework for user modeling systems. This enables us to clarify existing as well as new applications in the area of user modeling. In order to illustrate our framework we evaluate push and pull based user modeling in user modeling systems.

Research paper thumbnail of Dualistic Ontologies

International Journal of Intelligent Information Technologies, 2005

To effectively use and exchange information among AI systems, a formal specification of the repre... more To effectively use and exchange information among AI systems, a formal specification of the representation of their shared domain of discourse—called an ontology—is indispensable. In this article we introduce a special kind of knowledge representation based on a dual view of the universe of discourse and show how it can be used in human activities such as searching, in-depth exploration, and browsing. After a formal definition of dualistic ontologies, we exemplify this definition with three different (well-known) kinds of ontologies, based on the vector model, on formal concept analysis, and on fuzzy logic, respectively. The vector model leads to concepts derived by latent semantic indexing using the singular value decomposition. Both the set model as the fuzzy set model led to Formal Concept Analysis, in which the fuzzy set model is equipped with a parameter that controls the fine-graining of the resulting concepts. We discuss the relation between the resulting systems of concepts....

Research paper thumbnail of Measuring the incremental information value of documents

Information Sciences, 2006

Research paper thumbnail of On the quality of resources on the Web: An information retrieval perspective

Information Sciences, 2007

We use information from the Web for performing our daily tasks more and more often. Locating the ... more We use information from the Web for performing our daily tasks more and more often. Locating the right resources that help us in doing so is a daunting task, especially with the present rate of growth of the Web as well as the many different kinds of resources available. The tasks of search engines is to assist us in finding those resources that are apt for our given tasks. In this paper we propose to use the notion of quality as a metric for estimating the aptness of online resources for individual searchers. The formal model for quality as presented in this paper is firmly grounded in literature. It is based on the observations that objects (dubbed artefacts in our work) can play different roles (i.e., perform different functions). An artefact can be of high quality in one role but of poor quality in another. Even more, the notion of quality is highly personal. Our quality-computations for estimating the aptness of resources for searches uses the notion of linguistic variables from the field of fuzzy logic. After presenting our model for quality we also show how manipulation of online resoureces by means of transformations can influence the quality of these resources.

Research paper thumbnail of Phase-based information retrieval

Information Processing & Management, 1998

ÐIn this article we describe a retrieval schema which goes beyond the classical information retri... more ÐIn this article we describe a retrieval schema which goes beyond the classical information retrieval keyword hypothesis and takes into account also linguistic variation. Guided by the failures and successes of other state-of-the-art approaches, as well as our own experience with the IRENA system, our approach is based on phrases and incorporates linguistic resources and processors. In this respect, we introduce the phrase retrieval hypothesis to replace the keyword retrieval hypothesis. We suggest a representation of phrases suitable for indexing, and an architecture for such a retrieval system. Syntactical normalization is introduced to improve retrieval eectiveness. Morphological and lexico-semantical normalizations are adjusted to ®t in this model.

Research paper thumbnail of A general theory for evolving application models

IEEE Transactions on Knowledge and Data Engineering, 1995

In this article we focus on evolving information systems. First a delimitation of the concept of ... more In this article we focus on evolving information systems. First a delimitation of the concept of evolution is provided, resulting in a first attempt to a general theory for such evolutions. The theory makes a distinction between the underlying information structure at the conceptual level, its evolution on the one hand, and the description and semantics of operations on the information structure and its population on the other hand. Main issues within this theory are object typing, type relatedness and identification of objects. In terms of these concepts, we propose some axioms on the well-formedness of evolution. In this general theory, the underlying data model is a parameter, making the theory applicable for a wide range of modelling techniques, including object-role modelling and object oriented techniques.

Research paper thumbnail of Expressiveness in conceptual data modelling

Data & Knowledge Engineering, 1993

Conceptual data modelling techniques aim at the representation of data at a high level of abstrac... more Conceptual data modelling techniques aim at the representation of data at a high level of abstraction. The Conceptualisation Principle states that only those aspects are to be represented that deal with the meaning of the Universe of Discourse. Conventional conceptual data modelling techniques, as e.g. ER or NIAM, have to violate the Conceptualisation Principle when dealing with objects with a complex structure. In order to represent these objects conceptually irrelevant choices have to made. It is even worse: sometimes the Universe of Discourse has to be adapted to suit the modelling technique. These objects typically occur in domains as meta-modelling, hypermedia and CAD/CAM. In this paper extensions to an existing data modelling technique (NIAM) will be discussed and formally de ned, that make it possible to naturally represent objects with complex structures without having to violate the Conceptualisation Principle. These extensions will be motivated from a practical point of view by examples and from a theoretical point of view by a comparison with the expressive power of formal set theory and grammar theory.

Research paper thumbnail of Conceptual query expansion

Data & Knowledge Engineering, 2006

Without detailed knowledge of a collection, most users find it difficult to formulate effective queries. In fact, as observed from Web search engines, users may spend large amounts of time reformulating their queries in order to satisfy their information need. A proven successful method to overcome this difficulty is to treat the query as an initial attempt to retrieve information and to use it to construct a new, hopefully better query. Another way to improve a query is to use global (thesaurus-like) information. In this article a new, hybrid approach is presented that projects the initial query result onto the global information, leading to a local conceptual overview. The resulting concepts are candidates for query refinement. To show its effectiveness, we show that the conceptual structure resulting from a typical short query (2 terms) contains refinements that perform as well as the most accurate query formulation. Next we show that query by navigation is an effective mechanism that in most cases finds the optimal concept in a small number of steps. If the optimal concept is not found, it still finds an acceptable sub-optimum. We show that the proposed method compares favourably to existing techniques.
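The first step described above — treating the initial query result as raw material for building a better query — can be sketched as a minimal pseudo-relevance feedback loop. This is a simplified stand-in for the paper's conceptual projection, and the function name and toy corpus are illustrative:

```python
from collections import Counter

def expand_query(query_terms, corpus, top_docs=3, add_terms=2):
    """Pseudo-relevance feedback sketch: rank documents by term overlap
    with the query, then add the most frequent new terms from the
    top-ranked documents to the query."""
    def score(doc):
        words = doc.split()
        return sum(words.count(t) for t in query_terms)
    ranked = sorted(corpus, key=score, reverse=True)[:top_docs]
    counts = Counter(w for doc in ranked for w in doc.split()
                     if w not in query_terms)
    return list(query_terms) + [w for w, _ in counts.most_common(add_terms)]

corpus = [
    "query expansion improves retrieval recall",
    "thesaurus based expansion uses global term relations",
    "navigation through concept lattices supports refinement",
    "cooking recipes with garlic and olive oil",
]
print(expand_query(["expansion"], corpus))
```

The expanded terms act like the "candidate refinements" of the abstract: they come from documents the original query already retrieved, so they are likely to sharpen rather than redirect the information need.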

Research paper thumbnail of Value and the information market

Data & Knowledge Engineering, 2007

In this paper we explore how (micro)economic theory can be used to analyze and model the exchange of information on the Web. More specifically, we consider searchers for information who engage in transactions on the Web. Searchers will engage in web transactions only if they gain something in such a transaction. To this end we develop a formal model for markets, based on the notions of value and transaction. This model enables us to examine transactions on an information market. In this market we have a dual view on transactions, creating a dichotomy of transactors and transactands.
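The core gains-from-trade condition — a transaction only takes place when both transactors gain — can be stated in a few lines. This is an illustrative sketch of that microeconomic idea, not the paper's formal market model; the function and parameter names are hypothetical:

```python
def trade_occurs(buyer_valuation, seller_valuation, price):
    """A transaction happens only if both transactors gain:
    the buyer values the good above the price paid, and the
    seller values it below the price received."""
    return seller_valuation < price < buyer_valuation

# Buyer values the information at 10, seller at 5: any price in
# between leaves both parties better off, so the trade occurs.
print(trade_occurs(10, 5, 7))   # True
print(trade_occurs(6, 5, 7))    # False: price exceeds buyer's value
```
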

Research paper thumbnail of A Novel Model of Autonomous Intelligent Agent Topic Tracking System for the World Wide Web

Autonomous agents are software systems situated within, and part of, an environment: they sense stimuli in that environment and act on it, over time, in pursuit of their own agenda, so as to effect what they sense in the future. Autonomous agents take action without user intervention and operate concurrently, either while the user is idle or taking other actions. The internet encompasses a large number of documents to which search engines try to provide access. Even for narrow topics and potential information needs, there are often many web pages online, and the user of a web search engine would prefer the best pages to be returned. An autonomous intelligent agent topic tracker helps to make decisions on behalf of the user by narrowing the search domain and substantially reducing the required human-computer interaction. Previous work on information retrieval systems usually yields long lists of results containing documents with low relevance to the user query. The goal of this paper is therefore to build an Intelligent Agent Topic Tracking System that employs document concepts to track documents related to the researcher's needs within the development of a publication topic. The system refines the user query, retrieves results from a search engine with the help of the Google API, and filters the noisy results using a document-document similarity model and a document component model to find documents on similar topics in the document pool indexed by the search engines. In addition, a web structure analysis model uses the hub and authority algorithm to evaluate the importance of web pages and to determine their relatedness to a particular topic. Finally, clustering is used to automatically group the document pool into similar topics.
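The hub and authority computation mentioned above is the classic HITS iteration: a page's authority score sums the hub scores of pages linking to it, and its hub score sums the authority scores of pages it links to, with normalization after each pass. A minimal sketch, assuming a small link graph given as an adjacency dictionary (the graph and names are illustrative; the paper's exact variant may differ):

```python
def hits(links, iterations=50):
    """Compute HITS hub/authority scores for a directed link graph
    given as {page: [pages it links to]}."""
    pages = set(links) | {q for qs in links.values() for q in qs}
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # Authority: sum of hub scores of pages linking in.
        auth = {p: sum(hub[q] for q in pages if p in links.get(q, []))
                for p in pages}
        norm = sum(v * v for v in auth.values()) ** 0.5
        auth = {p: v / norm for p, v in auth.items()}
        # Hub: sum of authority scores of pages linked to.
        hub = {p: sum(auth[q] for q in links.get(p, []))
               for p in pages}
        norm = sum(v * v for v in hub.values()) ** 0.5
        hub = {p: v / norm for p, v in hub.items()}
    return hub, auth

links = {"a": ["c"], "b": ["c"], "c": ["d"], "d": []}
hub, auth = hits(links)
print(max(auth, key=auth.get))  # "c": it attracts the most in-links
```

Pages that many good hubs point to emerge as authorities, which is how such an analysis can rank pages by importance or topical relatedness.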