Data Warehouses Research Papers - Academia.edu
There is increasing interest among organizations in advanced presentation and data analysis for public users. This paper shows how to integrate data from an enterprise data warehouse with a spatial data warehouse, publish them together to an online interactive map, and enable public users to perform analysis in a simple web interface. A Business Intelligence System for Investors is used as a case study, where data comes from different sources and different levels, structured and unstructured. This approach has three phases: creating the spatial data warehouse, implementing an ETL (extract, transform and load) procedure for data from the different sources (spatial and non-spatial) and, finally, designing an interface for performing data analysis. The fact that this is a public site, where users are not known in advance and are not trained, underscores the importance of usability design and a self-evident interface. Investors are not willing to invest any time in learning the basics of a system. Geographic information providers ...
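The transform-and-publish step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the field names (`zone_id`, `investments`, `avg_value`) and coordinates are hypothetical, and it assumes the spatial warehouse supplies one point per zone that gets merged with non-spatial facts into GeoJSON for a web map.

```python
import json

# Hypothetical rows from the enterprise (non-spatial) data warehouse.
facts = [
    {"zone_id": "Z1", "investments": 12, "avg_value": 350000},
    {"zone_id": "Z2", "investments": 4, "avg_value": 120000},
]

# Hypothetical spatial warehouse lookup: zone_id -> (lon, lat) centroid.
locations = {
    "Z1": (18.41, 43.86),
    "Z2": (18.35, 43.85),
}

def to_geojson(facts, locations):
    """Transform step of a simple ETL: merge facts with geometry
    and emit a GeoJSON FeatureCollection for an interactive map."""
    features = []
    for row in facts:
        lon, lat = locations[row["zone_id"]]
        features.append({
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [lon, lat]},
            "properties": {k: v for k, v in row.items() if k != "zone_id"},
        })
    return {"type": "FeatureCollection", "features": features}

print(json.dumps(to_geojson(facts, locations), indent=2))
```

A map client (e.g. any GeoJSON-capable viewer) can then render the merged result directly, which is what lets untrained public users explore the data without learning the underlying warehouses.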
Abstract - With the proliferation of international standards for grid-enabled databases, the need for data loading and data mapping in a large integrated environment of heterogeneous databases highlights issues of consistency and integrity. We discuss methods for providing semi-...
Obtaining an implementation of a data warehouse is a complex task that forces designers to acquire wide knowledge of the domain, thus requiring a high level of expertise and making it an error-prone task. Based on our experience, we have identified a set of situations encountered in real-world projects in which we believe that the use of ontologies would improve several aspects of the design of data warehouses. The aim of this article is to describe several shortcomings of current data warehouse design approaches and to discuss the benefits of using ontologies to overcome them. This work is a starting point for discussing the suitability of using ontologies in data warehouse design.
- by Ammar Jalalimanesh and +2
- Data Mining, Data Warehousing, Databases, Decision Trees
The recent introduction of a spate of data access applications, such as OLAP and data mining tools, has led to an increased interest on the part of both scholars and practitioners in how best to use and benefit from these tools. This paper reports on six exploratory case studies involving eight decision-makers and seven end-users. A process model based on the Value Chain is proposed and explained. Results show that database usage and information processing practices have indeed grown more sophisticated. Implications for practice and future research aimed at testing the Value Chain model are proposed.
The data warehouse designer should consider its effectiveness during the design process; part of this work involves analyzing the update frequency of the production databases. Making decisions within a short time interval has become the most important concern for decision-makers, because it always depends on the data warehouse refresh performed through ETL and on the cube generation process, which is based on the data warehouse schema type. To present KPIs to management quickly, the query execution time must be minimized. This paper shows that the star schema is more advantageous when using a bitmap index, based on ETL query execution time and data access time.
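The intuition behind the bitmap-index advantage claimed above can be sketched in plain Python. This toy example (not the paper's implementation; the column names are hypothetical) models a bitmap index as one bit vector per distinct value of a low-cardinality dimension, so a multi-predicate WHERE clause reduces to a bitwise AND instead of a full row scan.

```python
# Hypothetical fact rows with two low-cardinality dimensions of a star schema.
rows = [
    {"region": "EU", "year": 2020},
    {"region": "US", "year": 2021},
    {"region": "EU", "year": 2021},
    {"region": "EU", "year": 2020},
]

def build_bitmap_index(rows, column):
    """Map each distinct value of `column` to an int used as a bit
    vector over row ids: bit i is set iff row i holds that value."""
    index = {}
    for i, row in enumerate(rows):
        index[row[column]] = index.get(row[column], 0) | (1 << i)
    return index

region_idx = build_bitmap_index(rows, "region")
year_idx = build_bitmap_index(rows, "year")

# WHERE region = 'EU' AND year = 2020  ->  a single bitwise AND.
hits = region_idx["EU"] & year_idx[2020]
matching = [i for i in range(len(rows)) if hits >> i & 1]
print(matching)  # -> [0, 3]
```

Real engines store compressed bitmaps, but the core point is the same: combining predicates costs one bitwise operation over the index rather than a scan of the fact table, which is why low-cardinality star-schema dimensions benefit.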
Improving the security of mobile management means identifying and analyzing the needs of wireless mobility, which revolve around protecting data on lost or stolen devices through remote functions. To achieve this, it was first necessary to apply data-loss prevention strategies and to ensure encryption compliance by blocking non-compliant or unencrypted devices; this requires an intelligence-driven view, securing the different mobile data components of the technical architecture. Mobile data analysis queries must also be isolated from operational queries: since analysis requests are often very heavy, the main objective was to preserve the performance of the mobile operating systems. A database dedicated to queries and analysis is then gradually populated and organized.
MURA defects cause many problems in the photomask and TFT-LCD industries. In this paper, we propose a new concept for the photomask production industry based on data mining techniques. Our model is suitable for any company affected by MURA effects. Because our data mining technique collects the correct information by itself, the data can be cleaned through the company's own experts. By building their own expert data warehouse, companies can turn our data mining technique into an optimized mining technique, which makes our model one of the best solutions for them. It is suitable not only for photomask companies but also for other companies facing MURA problems. Our model helps them cut manufacturing costs as well as improve the quality of their products.
Abstract - Increasing market competitiveness has led entrepreneurs to demand more and more information from Information Technology areas to enable efficient decision-making. Two approaches to solving this problem are combined in this work: the use of Model Driven Architecture (MDA) and ...
"" Query processors fail to retrieve information of interest in the presence of misspelled keywords given by users. The above problem persists because most of currently used indexing system does not have fault-tolerance ability to map the... more
"" Query processors fail to retrieve information of interest in the presence of misspelled keywords given by users. The above problem persists because most of currently used indexing system does not have fault-tolerance ability to map the misspelled keywords to the correct records stored at physical level of databases. Therefore, the information retrieval systems need additional support of spell check mechanism with limitations for correction of misspelled keywords before submitting them to query processors. In this paper, a data indexing system is introduced for indexing multidimensional data cube, which
maps the keywords to the records stored at physical level in multidimensional data structure and also has normalised longest common subsequence-based string approximation method to find correct keywords against misspelled keywords which comes directly to indexing processes through user queries. It provides more than 90% accurate results in mapping misspelled keywords to the physically stored records. These results are consistent even for large datasets. ""
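A normalised longest-common-subsequence similarity of the kind mentioned above can be sketched as follows. This is an illustrative implementation, not the paper's: the keyword list and the 0.7 acceptance threshold are hypothetical choices, and the normalisation (LCS length divided by the longer string's length) is one common variant.

```python
def lcs_length(a, b):
    """Classic dynamic-programming longest-common-subsequence length."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ca == cb else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def nlcs(a, b):
    """Normalised LCS similarity in [0, 1]."""
    if not a or not b:
        return 0.0
    return lcs_length(a, b) / max(len(a), len(b))

def correct_keyword(misspelled, indexed_keywords, threshold=0.7):
    """Return the indexed keyword most similar to the misspelled
    query term, or None if nothing clears the threshold."""
    best = max(indexed_keywords, key=lambda k: nlcs(misspelled, k))
    return best if nlcs(misspelled, best) >= threshold else None

keywords = ["revenue", "region", "quarter"]  # hypothetical indexed terms
print(correct_keyword("revenu", keywords))   # -> revenue
```

Because the score is normalised by string length, a one-character omission in a seven-character keyword still scores about 0.86, while unrelated keywords stay well below the threshold, which is the property that lets the index map misspelled terms to the correct stored records.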
Business intelligence systems are highly complex systems that senior executives use to process vast amounts of information when making decisions. They are rarely used to their full potential due to a poor understanding of the factors that contribute to system success. Organizations using business intelligence systems frequently find that it is not easy to evaluate the effectiveness of these systems, and researchers have noted that there is limited scholarly and practical understanding of how quality factors affect information use within them. This quantitative post-positivist research used the information system (IS) success model to analyze how information quality and system quality influence information use in business intelligence systems. The study was also designed to investigate the moderating effects of maturity constructs (i.e., data sources and analytical capabilities) on the relationships between quality factors and information use.
In this paper, an optimal aggregation and counter-aggregation (drill-down) methodology is proposed for the multidimensional data cube. The main idea is to aggregate over smaller cuboids after partitioning them according to the cardinality of the individual dimensions. Based on the operations used to make these partitions, a Galois connection is identified for formal analysis, which allows us to guarantee the soundness of the storage-space and time-complexity optimizations for the abstraction and concretization functions defined on the lattice structure. Our contribution can be seen as an application of the Abstract Interpretation framework to OLAP operations on the multidimensional data model.
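The aggregation direction of such a scheme can be sketched concretely. This is a minimal illustration under assumed names (a cuboid as a dict keyed by dimension tuples, with additive measures); it shows only the roll-up, i.e. the abstraction side of the Galois connection, where aggregating a dimension away loses detail that drill-down would have to recover from a finer cuboid.

```python
from collections import defaultdict

# Hypothetical base cuboid: (region, year) -> additive measure.
base = {
    ("EU", 2020): 10,
    ("EU", 2021): 15,
    ("US", 2020): 7,
    ("US", 2021): 12,
}

def roll_up(cuboid, drop_axis):
    """Aggregate away the dimension at index `drop_axis`,
    summing the measure over the removed coordinate."""
    out = defaultdict(int)
    for key, measure in cuboid.items():
        reduced = key[:drop_axis] + key[drop_axis + 1:]
        out[reduced] += measure
    return dict(out)

by_region = roll_up(base, drop_axis=1)  # -> {('EU',): 25, ('US',): 19}
by_year = roll_up(base, drop_axis=0)    # -> {(2020,): 17, (2021,): 27}
print(by_region, by_year)
```

Partitioning by dimension cardinality, as the abstract proposes, would mean choosing which of these smaller cuboids to materialize; the lattice of cuboids ordered by "can be rolled up from" is exactly where the abstraction/concretization pair lives.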