Database Design Research Papers - Academia.edu

Relationships are an integral part of the design of a database. Comparing and integrating relationships from heterogeneous databases requires that the relationships be mapped to each other or to a common classification. Identifying similarities and resolving differences in relationships across large data sources is a resource-intensive task that could benefit greatly from semi-automated approaches. A prerequisite to developing such approaches is a clear understanding of the semantics of relationships used in database design. This research presents a layered ontology for classifying the semantics of relationships. It consists of a core layer that captures the fundamental types of relationships between entities. A middle layer provides the internal context, obtained from entities surrounding the relationship, to interpret the fundamental types. The outer layer allows further interpretation using the external context, that is, the domain in which a relationship is being used. An initial assessment of relationships from a variety of application domains demonstrates that the ontology is adequate and useful for comparing relationships across databases.

Practical experience shows that the design of very large database schemata causes severe problems, and no systematic support is provided. In this paper we address this problem. We define an Entity-Relationship schema algebra, which permits the representation of very large database schemata by algebraic expressions involving smaller schemata. Similar to the abstraction mechanisms found in semantic data models, the schema constructors can be classified into three groups: for building associations, for building collections of subschemata, and for folding subschemata. Furthermore, based on the analysis of a large number of very large database schemata, we identify twelve frequently recurring meta-structures in three categories, associated with schema construction, lifespan and context. In combination with the schema algebra, the meta-structures permit a component-based approach to database schema design, which can further be formalised by graph-rewriting.

This paper describes how database design can play an important role in developing a practical industrial maintenance system. A good database design will in return give better information sharing and a good system in terms of data accessibility. The design of the database is presented and illustrated by a case. A preliminary result is presented to show that it is possible to

A key is simple if it consists of a single attribute.
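The definition can be checked mechanically. The sketch below (the relation and attribute names are hypothetical) treats a relation as a list of dictionaries and tests whether a single attribute's values are unique across all tuples:

```python
def is_simple_key(rows, attribute):
    """Return True if `attribute` alone uniquely identifies every row."""
    seen = set()
    for row in rows:
        value = row[attribute]
        if value in seen:
            return False  # duplicate value: not a key on its own
        seen.add(value)
    return True

# Hypothetical relation for illustration
employees = [
    {"emp_id": 1, "dept": "sales"},
    {"emp_id": 2, "dept": "sales"},
    {"emp_id": 3, "dept": "hr"},
]
print(is_simple_key(employees, "emp_id"))  # True: values are unique
print(is_simple_key(employees, "dept"))    # False: "sales" repeats
```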

Database design is often described as an intuitive, even artistic, process. Many researchers, however, are currently working on applying techniques from artificial intelligence to provide effective automated assistance for this task. This article presents a summary of the current state of the art for the benefit of future researchers and users of this technology. Thirteen examples of knowledge-based tools for database design are briefly described and then compared in terms of the source, content, and structure of their knowledge bases; the amount of support they provide to the human designer; the data models and phases of the design process they support; and the capabilities they expect of their users. The findings show that there has apparently been very little empirical verification of the effectiveness of these systems. In addition, most rely exclusively on knowledge provided by the developers themselves and have little ability to expand their knowledge based on experience. Although such systems ideally would be used by application specialists rather than database professionals, most of these systems expect the user to have some knowledge of database technology.

Database dependencies, such as functional and multivalued dependencies, express the presence of structure in database relations that can be utilised in the database design process. The discovery of database dependencies can be viewed as an induction problem, in which general rules (dependencies) are obtained from specific facts (the relation). This viewpoint has the advantage of abstracting away as much as possible from the particulars of the dependencies. The algorithms in this paper are designed such that they can easily be generalised to other kinds of dependencies. As in current approaches to computational induction such as inductive logic programming, we distinguish between top-down algorithms and bottom-up algorithms. In a top-down approach, hypotheses are generated in a systematic way and then tested against the given relation. In a bottom-up approach, the relation is inspected in order to see what dependencies it may satisfy or violate. We give algorithms for both approaches.
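The top-down idea described above — generate candidate dependencies systematically, then test them against the relation — can be sketched in a few lines; the relation encoding and function names are illustrative, not the paper's algorithms:

```python
from itertools import combinations

def satisfies_fd(rows, lhs, rhs):
    """Check whether the functional dependency lhs -> rhs holds in `rows`."""
    mapping = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[a] for a in rhs)
        if mapping.setdefault(key, val) != val:
            return False  # same lhs values map to different rhs values
    return True

def discover_fds(rows, attributes):
    """Top-down sketch: enumerate candidate dependencies X -> A, test each."""
    found = []
    for a in attributes:
        others = [x for x in attributes if x != a]
        for size in range(1, len(others) + 1):
            for lhs in combinations(others, size):
                if satisfies_fd(rows, lhs, (a,)):
                    found.append((lhs, a))
    return found

rows = [{"a": 1, "b": 2, "c": 3},
        {"a": 1, "b": 2, "c": 4}]
print(satisfies_fd(rows, ("a",), ("b",)))  # True
print(satisfies_fd(rows, ("a",), ("c",)))  # False
```

This naive enumeration also reports non-minimal dependencies; the paper's algorithms prune the search space.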

An intelligent manufacturing system is intended to produce one or more subjects that it is designed for in an optimal way. This means that it has to find a proper production process to produce the subject optimally. A manufacturing system can be called "intelligent" when it is able to derive applicable optimisation criteria from its past experiences, thus improving its performance in the future. Therefore, an intelligent manufacturing system needs the capability to store data and make decisions based on them. Such a "brain" can be established by a proper design of a technological database and its database management system (DBMS). By examining all constitutive parameters of a work operation, a model of a production process organization can be made, which can serve as a basis for a suitable database design. In addition, an application programme that checks the existence and availability of work operations in the database has to be added to the DBMS. What remains are some optimisation criteria by which we choose among the suitable and available work operations. This task is fulfilled by a genetic algorithm optimisation technique that considers the work operations' data as optimisation parameters and on this basis searches for the optimal operation among the set of available ones.
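As a rough, hypothetical illustration of that last step, a genetic-style search can select a work operation from the database; the fitness criterion and operation data below are invented for the example, not taken from the paper:

```python
import random

def fitness(op):
    # Hypothetical optimisation criterion: prefer low time and low cost
    return 1.0 / (op["time"] + op["cost"])

def select_operation(operations, generations=30, pop_size=12, seed=0):
    """Evolve a population of candidate operation indices by tournament
    selection and random mutation; return the best index ever seen."""
    rng = random.Random(seed)
    pop = [rng.randrange(len(operations)) for _ in range(pop_size)]
    best = max(pop, key=lambda i: fitness(operations[i]))
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)              # tournament of two
            winner = a if fitness(operations[a]) >= fitness(operations[b]) else b
            if rng.random() < 0.2:                 # mutation: random jump
                winner = rng.randrange(len(operations))
            nxt.append(winner)
        pop = nxt
        champ = max(pop, key=lambda i: fitness(operations[i]))
        if fitness(operations[champ]) > fitness(operations[best]):
            best = champ
    return best

ops = [
    {"name": "milling A", "time": 5.0, "cost": 5.0},
    {"name": "milling B", "time": 1.0, "cost": 1.0},   # best trade-off
    {"name": "turning",   "time": 3.0, "cost": 4.0},
]
```

A real system would encode richer operation parameters and constraints; the point here is only the selection loop.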

The infrastructure departments of the Portuguese municipalities need to make great efforts in beginning or continuing to use advanced information technology tools to increase institutional productivity and effectiveness in managing their municipal infrastructures. Nowadays they use or are beginning to use independent management systems for each type of infrastructure, i.e., road pavements, bridges, signs, water pipelines, sewer pipelines, parks, etc. In the future, they need to begin the development of an Integrated Infrastructure Management System (IMS). This paper focuses on the issues and needs that emerged from some studies developed by the authors involving the implementation of transportation management systems in local municipalities. The paper first introduces and discusses the functions of the various participating divisions inside a municipal department of infrastructures. Then, it identifies the issues and needs that must be fully understood and considered in the development of an Integrated Infrastructure Management System. Doing so involved determining and standardizing an effective base linear referencing system (LRS) to meet its needs, standardizing data terminology, determining the shared data needs of the several divisions inside the department of infrastructures, and developing a comprehensive database design with focused attention given to the types of data analysis functions performed by each division. The paper includes the application of the IMS in road maintenance management and road safety management. The two sub-systems, a road maintenance management system and a road safety management system, use the same base linear referencing system. In both subsystems, in order to handle dynamic segmentation, the road network model is composed of road segments with (x, y) coordinates and measure (m) values.
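Dynamic segmentation over (x, y, m) road segments, as described in the last sentence, amounts to interpolating a position from a measure value. A minimal sketch, with an invented segment geometry:

```python
def locate(segment, m):
    """Interpolate the (x, y) position at measure value m along a road
    segment given as a list of (x, y, m) vertices with increasing m."""
    for (x1, y1, m1), (x2, y2, m2) in zip(segment, segment[1:]):
        if m1 <= m <= m2:
            t = 0.0 if m2 == m1 else (m - m1) / (m2 - m1)
            return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    raise ValueError("measure outside segment range")

# Hypothetical road segment: 100 m east, then 50 m north
segment = [(0, 0, 0), (100, 0, 100), (100, 50, 150)]
print(locate(segment, 125))  # (100.0, 25.0)
```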

This paper deals with memory management issues in robotics. In our proposal we address one of the major issues in creating a humanoid: database design is a complicated part of a robotics schema. We suggest the NoSQL database as a concept for effective data retrieval, so that humanoid robots gain massive searching ability over items using chained instructions. Query transactions in robotics need effective consistency, so we use the recent technology CloudTPS, which guarantees full ACID properties; the robot can thus make its queries as multi-item transactions and obtain data consistency in retrievals. In addition, we include map-reduce concepts, which split a job among workers so that the data can be processed in parallel.
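The map-reduce idea mentioned above — splitting a job among workers and combining their results — can be sketched sequentially (a real deployment distributes the map and reduce phases across machines; the word-count mapper below is the classic illustration, not the paper's workload):

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Minimal map-reduce sketch: map each record to (key, value) pairs,
    shuffle by key, then reduce each key's value list."""
    groups = defaultdict(list)
    for record in records:               # map + shuffle
        for key, value in mapper(record):
            groups[key].append(value)
    return {k: reducer(k, vs) for k, vs in groups.items()}

counts = map_reduce(
    ["pick item", "place item"],
    mapper=lambda line: [(w, 1) for w in line.split()],
    reducer=lambda key, values: sum(values),
)
print(counts)  # {'pick': 1, 'item': 2, 'place': 1}
```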

Medical database security plays an important role in the overall security of medical information systems. The development of appropriate secure database design and operation methodologies is an important problem in the area and a necessary prerequisite for the successful development of such systems. The general framework for medical database security and a number of parameters of the secure medical database design and operation problem are presented and discussed. A secure medical database development methodology is also presented which could help overcome some of the problems currently encountered.

Big Data Visualization Tools: A Survey of the State of the Art and Challenges Ahead

Relational databases hold the maximum amount of data underpinning the web. They show an excellent record of convenience and efficiency in storage, optimized query execution, scalability, security and accuracy. Recently, graph databases have been seen as a good replacement for relational databases. When compared to the relational data model, the graph data model is more vivid and expressive, and data expressed in it models relationships among data properly. An important requirement is to bring the vast quantities of data stored in RDBs onto the web. In this situation, migration from relational to graph format is very advantageous. Both databases have advantages and limitations depending on the form of queries. Thus, this paper converts a relational to a graph database by utilizing the schema in order to develop a dual database system through migration, which merges the capabilities of both the relational DB and the graph DB. Experimental results are provided to demonstrate the practicability of the method and the query response time over the target database. The proposed concept is proved by implementing it on MySQL and Neo4j.
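A minimal sketch of the schema-driven migration idea: rows become nodes and foreign keys become relationships, emitted here as Cypher CREATE statements. The table, column and relationship names are hypothetical, and the paper's actual mapping may differ:

```python
def rows_to_cypher(table, rows, fks):
    """Sketch of schema-driven RDB-to-graph migration. Assumes each row
    has an 'id' primary key; fks maps fk column -> (ref table, ref column,
    relationship type). All names here are illustrative."""
    stmts = []
    for row in rows:
        props = ", ".join(f"{k}: {v!r}" for k, v in row.items() if k not in fks)
        stmts.append(f"CREATE (:{table} {{{props}}})")
    for row in rows:
        for col, (ref_table, ref_col, rel) in fks.items():
            stmts.append(
                f"MATCH (a:{table} {{id: {row['id']!r}}}), "
                f"(b:{ref_table} {{{ref_col}: {row[col]!r}}}) "
                f"CREATE (a)-[:{rel}]->(b)"
            )
    return stmts

stmts = rows_to_cypher(
    "Order",
    [{"id": 1, "total": 9.5, "customer_id": 7}],
    {"customer_id": ("Customer", "id", "PLACED_BY")},
)
print(stmts[0])  # CREATE (:Order {id: 1, total: 9.5})
```

The generated statements could then be run against Neo4j; value quoting via `repr` is a shortcut, not production escaping.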

XML is rapidly becoming the standard method for sending information across the Internet. XML Schema, since its elevation to W3C Recommendation on 2nd May 2001, is fast becoming the preferred means of describing structured XML data. However, until recently, there has been no effective means of graphically designing XML Schemas without exposing designers to low-level implementation issues. Bird, Goodchild and Halpin proposed a method to address this shortfall, using the 'Object Role Modelling' conceptual language to generate XML Schemas.

A large-scale operative data format for transparent storage, administration and retrieval of environmental Life Cycle Inventory (LCI) data has been implemented by applying data modelling and database design. Key concepts in the design are "activity" and "flow": an activity is a technical system, such as a process or a transport, or an aggregate of different processes or transports. A flow is any matter entering or leaving an activity, such as natural resources, energy ware, raw material, emissions, waste or products. Any numerical data set on an activity can be thoroughly described by supplying meta data. Meta data fields are prepared for a wide set of commonly known LCA-data aspects, such as descriptions of data acquisition methods, system boundary conditions and relevant dates.
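The activity/flow design can be illustrated with two small record types; the field names and sample values below are assumptions for the sketch, not the published format:

```python
from dataclasses import dataclass, field

@dataclass
class Flow:
    """Any matter entering or leaving an activity (resource, emission, ...)."""
    substance: str
    amount: float
    unit: str
    direction: str  # "in" or "out"

@dataclass
class Activity:
    """A technical system such as a process or a transport."""
    name: str
    flows: list = field(default_factory=list)
    meta: dict = field(default_factory=dict)  # acquisition method, boundaries, dates

transport = Activity("truck transport", meta={"method": "literature data"})
transport.flows.append(Flow("diesel", 2.5, "kg", "in"))
transport.flows.append(Flow("CO2", 7.9, "kg", "out"))

# e.g. collect the outgoing flows (emissions, products, waste)
emissions = [f for f in transport.flows if f.direction == "out"]
```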

In a financially volatile market, as in the case of the asset market, it is important to have a very precise prediction of upcoming trends to take advantage of market changes. This requires highly advanced machine learning algorithms with factorization of human sentiment. The value of an asset is highly sensitive to factors such as market news and human behaviour, which can instantly increase or decrease the asset price. Therefore, the issue becomes that of buying or selling an asset at an exchange at the right moment to generate profit. This aspect has attracted researchers for years and has led to the creation of various algorithms that can predict outcomes in this nonlinear, volatile market. This wave of algorithms in artificial intelligence, mainly machine learning, is still being applied to the above problem. It is also very well known that news articles and related information have a great impact on asset prices and their trends. Hence, we aim to combine these two approaches in an attempt to understand the relationship between the attributes in order to yield a predicted output with great accuracy.

A federated database system (FDBS) is a collection of cooperating database systems that are autonomous and possibly heterogeneous. In this paper, we define a reference architecture for distributed database management systems from system and schema viewpoints and show how various FDBS architectures can be developed. We then define a methodology for developing one of the popular architectures of an FDBS. Finally, we discuss critical issues related to developing and operating an FDBS.

In an administrative information system, a database will of course be the main pillar supporting system operations. However, in designing a database for a system there are several stages that form the basis for determining the scope of the database so that it provides optimal and efficient support. In this study, the author explains the steps taken in carrying out a database design suitable for the business needs of PT XYZ, using design steps such as ERD, LRS, normalization, code design, and database specification design in order to support PT XYZ's activities in running customer retention and loyalty administrative operations.

A database is a core component of an Electronic Health Record (EHR) system, and creating a data model for that database is challenging due to the EHR system's special nature. Because of the complexity, spatiality, sparseness, interrelatedness, temporality, heterogeneity, and fast evolution of EHR data, modelling its database is a complex process. This paper attempts to build a dynamic, complete and stable data model for an EHR database; there is little prior work and few standards in this area because of its difficulty.
We use an object-relational modelling approach and the entity-attribute-value model, with classes and relationships, to build the model. This design facilitates and enhances data mining and decision support operations, which are integral components of an EHR system.
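The entity-attribute-value idea mentioned above can be sketched as follows; the attribute names are invented, and a production EHR model would add typing, audit and temporal fields:

```python
class EAVStore:
    """Entity-attribute-value sketch: sparse, fast-evolving clinical facts
    are stored as (entity, attribute, value) triples instead of wide tables,
    so new attributes need no schema change."""
    def __init__(self):
        self.triples = []

    def record(self, entity, attribute, value):
        self.triples.append((entity, attribute, value))

    def facts(self, entity):
        """Collect all recorded attributes of one entity."""
        return {a: v for e, a, v in self.triples if e == entity}

ehr = EAVStore()
ehr.record("patient-17", "blood_pressure", "120/80")
ehr.record("patient-17", "allergy", "penicillin")
print(ehr.facts("patient-17"))  # {'blood_pressure': '120/80', 'allergy': 'penicillin'}
```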

Objectives: To develop a simplified Therapeutic Intervention Scoring System (TISS) based on the TISS-28 items and to validate the new score in an independent database. Design: Retrospective statistical analysis of a database and a prospective multicentre study. Setting: Development in the database of the Foundation for Research on Intensive Care in Europe, with external validation in 64 intensive care units (ICUs) of 11 European countries. Measurements and results: Development of NEMS on a random sample of TISS-28 items, cross-validation on another random sample of TISS-28, and external validation of NEMS in comparison with TISS-28 scored by two independent raters on the day of the visit to the ICUs participating in an international study. Multivariable regression techniques, Pearson's correlation, and paired-sample t-tests were used (significance at the p < 0.05 level). Intraclass correlation, rate of agreement, and kappa statistics were used for interrater reliability tests. The TISS-28 items were reduced to NEMS (9 items) in a random sample of 2000 records; the means of the two scores were not different: TISS-28 26.23 ± 10.38, NEMS 26.19 ± 9.12, NS. Cross-validation in a random sample of 996 records: mean TISS-28 26.13 ± 10.38, NEMS 26.17 ± 9.38, NS; R² = 0.76. External validation on 369 pairs of TISS-28 and NEMS showed that the means of the two scores were not different: TISS-28 27.56 ± 11.03, NEMS 27.02 ± 8.98, NS; R² = 0.59. Reliability tests showed an "almost perfect" interrater correlation. Similar to studies correlating TISS with Simplified Acute Physiology Score (SAPS)-I and/or Acute Physiology and Chronic Health Evaluation II scores, the value of NEMS scored on the first day accounts for 30.4% of the variation of the SAPS-II score. Conclusions: NEMS is a suitable therapeutic index to measure nursing workload at the ICU level.
The use of NEMS is indicated for: (a) multicentre ICU studies; (b) management purposes in the general (macro) evaluation and comparison of workload at the ICU level; (c) the prediction of workload and planning of nursing staff allocation at the individual patient level.

Assignment: cost calculations of a project; building the DB, including administration of the DB when the project ends; possible loss of data because of working in the cloud.
Students are expected to interact with their group colleagues to develop a database in three steps: PART A: Conceptual Data Model (CDM); PART B: Logical Data Model (LDM); PART C: Physical Data Model (PDM). The onus is on each group to develop its own style of Project Logbook, but learning should be applied from IS Project Management. Every meeting must be recorded in reports attached to the main documents.

The amount of data stored in IoT databases increases as IoT applications extend throughout smart city appliances, industry and agriculture. Contemporary database systems must process huge amounts of sensory and actuator data in real time or interactively. Facing this first wave of the IoT revolution, database vendors struggle day by day to gain more market share, develop new capabilities and attempt to overcome the disadvantages of previous releases, while providing features for the IoT. There are two popular database types: relational database management systems and NoSQL databases, with NoSQL gaining ground for IoT data storage. In this paper these two types are examined. Focusing on open source databases, the authors experiment on IoT data sets and answer the question of which one performs better. It is a comparative study of the performance of commonly used open source databases, presenting results for the NoSQL MongoDB database and the SQL databases MySQL and PostgreSQL.

Database and data model evolution cause significant problems in the highly dynamic business environment that we experience these days. To support the rapidly changing data requirements of agile companies, conceptual data models, which constitute the foundation of database design, should be sufficiently flexible to be able to incorporate changes easily and smoothly. In order to understand what factors drive the maintainability of conceptual data models and to improve conceptual modelling processes, we need to be able to assess conceptual data model properties and qualities in an objective and cost-efficient manner. The scarcity of early available and thoroughly validated maintainability measurement instruments motivated us to define a set of metrics for Entity-Relationship (ER) diagrams. In this paper we show that these easily calculated and objective metrics, measuring structural properties of ER diagrams, can be used as indicators of the understandability of the diagrams. Understandability is a key factor in determining maintainability as model modifications must be preceded by a thorough understanding of the model. The validation of the metrics as early understandability indicators opens up the way for an in-depth study of how structural properties determine conceptual data model understandability. It also allows building maintenance-related prediction models that can be used in conceptual data modelling practice.
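Metrics of the kind discussed — counts of the structural elements of an ER diagram — are straightforward to compute. The sketch below uses an invented diagram encoding and illustrative metric names, not the paper's validated metric suite:

```python
def er_metrics(entities, relationships, attributes):
    """Simple structural metrics of an ER diagram: entities and
    relationships as lists, attributes as {entity: [attribute, ...]}."""
    n_e = len(entities)
    n_r = len(relationships)
    n_a = sum(len(v) for v in attributes.values())
    return {
        "entities": n_e,
        "relationships": n_r,
        "attributes": n_a,
        # a crude density indicator: more relationships per entity tends
        # to mean a harder-to-understand diagram
        "relationships_per_entity": n_r / n_e if n_e else 0.0,
    }

metrics = er_metrics(
    entities=["Patient", "Visit"],
    relationships=[("Patient", "attends", "Visit")],
    attributes={"Patient": ["id", "name"], "Visit": ["id", "date"]},
)
print(metrics["relationships_per_entity"])  # 0.5
```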

Web applications, and particularly Web-based information systems, are very popular for several reasons. The main reason is the ability to update and maintain them without distributing and installing software on thousands of client computers. This paper describes the main stages of the development process, mainly requirement analysis and database design. Such a system will provide students with the opportunity to choose among more projects, to work with different specialists in other conditions, in international teams and in different environments, and to have more opportunities for their future professional realization.

Semantic data models have emerged from a requirement for more expressive conceptual data models. Current generation data models lack direct support for relationships, data abstraction, inheritance, constraints, unstructured objects, and the dynamic properties of an application. Although the need for data models with richer semantics is widely recognized, no single approach has won general acceptance. This paper describes the generic properties of semantic data models and presents a representative selection of models that have been proposed since the mid-1970s. In addition to explaining the features of the individual models, guidelines are offered for the comparison of models. The paper concludes with a discussion of future directions in the area of conceptual data modeling.

We present HyperGraphDB, a novel graph database based on generalized hypergraphs where hyperedges can contain other hyperedges. This generalization automatically reifies every entity expressed in the database thus removing many of the usual difficulties in dealing with higher-order relationships. An open two-layered architecture of the data organization yields a highly customizable system where specific domain representations can be optimized while remaining within a uniform conceptual framework. HyperGraphDB is an embedded, transactional database designed as a universal data model for highly complex, large scale knowledge representation applications such as found in artificial intelligence, bioinformatics and natural language processing.
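The core idea — hyperedges that can point at other hyperedges, so every relationship is itself an addressable entity — can be sketched in a few lines. This is an illustration of the concept only, not HyperGraphDB's API:

```python
class HyperGraph:
    """Generalized hypergraph sketch: every atom gets a handle, and a
    hyperedge is an atom whose target tuple may contain handles of
    other hyperedges (higher-order relationships come for free)."""
    def __init__(self):
        self.atoms = {}
        self.next_handle = 0

    def add(self, label, targets=()):
        handle = self.next_handle
        self.next_handle += 1
        self.atoms[handle] = (label, tuple(targets))
        return handle

g = HyperGraph()
alice = g.add("Alice")
bob = g.add("Bob")
knows = g.add("knows", [alice, bob])   # ordinary relationship
since = g.add("since-2020", [knows])   # higher-order: an edge over an edge
```

Because `knows` is itself an atom with a handle, stating something *about* the relationship needs no special reification step — exactly the property the abstract highlights.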

This paper was submitted to the 10th World Computer Congress, IFIP 1986 conference, but rejected by the referee. It introduces a (still) new notation and arithmetic for numbering concepts.

Insecurity is one of the major challenges that
the entire world is facing now, each country having their
peculiar security issues. The crime rate in every part of
the society these days has become a threatening issue, such that vehicles are now used for committing criminal activities more than before. Vehicle theft has increased tremendously, mostly at gunpoint or in car parks. In view of this, there is a need for adequate records of stolen, identified and recovered vehicles, which are not readily available in our society. The development of a vehicle theft alert and location identification system therefore becomes necessary for vehicle owners, to ensure theft prevention and speedy identification for recovery efforts in situations where a vehicle is missing, stolen or driven by an unauthorized person. The theft alert function makes use of a GSM application developed and installed on a mobile phone embedded in the vehicle, which communicates with the vehicle owner's mobile phone. The communication is established via SMS between the installed mobile phone and that of the vehicle owner, and includes: (i) sending an SMS alert from the installed mobile phone to the vehicle owner's phone when the car ignition is switched on; and (ii) sending an SMS from the vehicle owner's phone to start and stop the installed application. The location identification function makes use of a web application developed to: (i) determine the real-time location of a vehicle by means of GPS tracking; and (ii) broadcast missing or stolen vehicle information to social media and security agencies. The installed mobile phone application was implemented in Java because of its capabilities for programming mobile applications, while PHP and MySQL were used for the web application. Integration testing of the system was carried out using a simple percentage calculation for the performance evaluation. Fifty-seven (57) vehicle owners were sampled and questionnaires were distributed to them in order to ascertain the acceptability and workability of the developed system. The results obtained show the effectiveness of the system; hence it can be used to effectively monitor a vehicle as it is being driven within or outside its jurisdiction. Moreover, the system can be used as a database of missing, identified or recovered vehicles by various security agencies.
Index Terms—Vehicle, Theft alert, Location
identification, Tracking, GPS.
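The abstract does not include the paper's source code; as an illustration of the SMS protocol it describes (START/STOP commands from the owner, an alert when the ignition comes on), a minimal Java sketch might look like the following. The class name, command words and message format are illustrative assumptions, not the authors' implementation.

```java
// Hypothetical sketch of the SMS command handling described in the abstract.
// Command words and message formats are assumptions for illustration only.
public class TheftAlertHandler {
    private boolean monitoring = false;

    // Owner texts START or STOP to control the in-vehicle application.
    public String onOwnerSms(String body) {
        switch (body.trim().toUpperCase()) {
            case "START": monitoring = true;  return "Monitoring started";
            case "STOP":  monitoring = false; return "Monitoring stopped";
            default:      return "Unknown command";
        }
    }

    // Called when the ignition comes on; returns the alert SMS to send to the
    // owner, or null when monitoring is switched off.
    public String onIgnitionOn(double lat, double lon) {
        if (!monitoring) return null;
        return "ALERT: ignition on at " + lat + "," + lon;
    }

    public static void main(String[] args) {
        TheftAlertHandler h = new TheftAlertHandler();
        System.out.println(h.onOwnerSms("start"));          // Monitoring started
        System.out.println(h.onIgnitionOn(6.52438, 3.37921));
    }
}
```

Actually dispatching the messages on a handset would go through the platform's SMS API; the sketch only covers the command and alert logic.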

I. Introduction to RDBMS and DB Concepts II. SQL Concepts A. Data Definition Language (DDL) B. Interactive Data Manipulation Language (IDML) C. Embedded Data Manipulation Language (EDML) D. View Definition E.... more

I. Introduction to RDBMS and DB Concepts
II. SQL Concepts
A. Data Definition Language (DDL)
B. Interactive Data Manipulation Language (IDML)
C. Embedded Data Manipulation Language (EDML)
D. View Definition
E. Authorization
F. Integrity
G. Transaction Control
III. RDBMS Case Study
IV. RDBMS Applications (Database Server)

Software as a Service (SaaS) is an online software delivery model which permits a third party provider offering software services to be used on-demand by tenants over the internet, instead of installing and maintaining them in their... more

Software as a Service (SaaS) is an online software delivery model which permits a third-party provider to offer software services to be used on demand by tenants over the internet, instead of the tenants installing and maintaining them on their premises. Nowadays, more and more companies are offering their web-based business applications by adopting this model. Multi-tenancy is the primary characteristic of SaaS,

Oracle Academy English Database Design
Prodi TI
Politeknik Negeri Semarang

This paper first examines crime situation in Benin metropolis using questionnaire to elicit information from the public and the police. Result shows that crime is on the rise and that the police are handicapped in managing it because of... more

This paper first examines the crime situation in the Benin metropolis, using a questionnaire to elicit information from the public and the police. The results show that crime is on the rise and that the police are handicapped in managing it because of the obsolete methods and resources at their disposal. They also reveal that members of the public have no confidence in the police force, as 80% do not report cases for fear of exposing the informant to the criminal. In the light of these situations, the second part of the paper looks at the possibility of utilizing GIS for effective management of crime in Nigeria. This option was explored by showing the procedural method of creating (1) a digital land-use map showing the crime locations, (2) a crime geo-spatial database, and (3) spatial analyses such as query and buffering, using ILWIS and ArcGIS software and GPS. The result of the buffering analysis shows crime hotspots, areas deficient in security outfits, areas of overlap and areas requiring constant police patrol. The study shows that GIS can give a better synoptic perspective to crime study, analysis, mapping, proactive decision making and crime prevention. It suggests, however, that migrating from the traditional method of crime management to GIS demands capacity building in the areas of personnel, laboratories and facilities, backed up with a policy statement.
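The buffering analysis the paper performs in ILWIS/ArcGIS amounts to asking which features fall within a fixed radius of a crime location. As a rough illustration of that idea only (the coordinates, radius and class names below are hypothetical, and the actual study used GIS software rather than hand-written code), a Java sketch:

```java
// Illustrative sketch of the buffering idea: test whether a facility falls
// inside a fixed-radius buffer around a crime location. All coordinates and
// the radius are hypothetical examples.
public class CrimeBuffer {
    // Great-circle (haversine) distance in kilometres between two lat/lon points.
    static double distanceKm(double lat1, double lon1, double lat2, double lon2) {
        double r = 6371.0; // mean Earth radius in km
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * r * Math.asin(Math.sqrt(a));
    }

    // True when the facility lies inside the buffer around the crime location.
    static boolean insideBuffer(double[] crime, double[] facility, double radiusKm) {
        return distanceKm(crime[0], crime[1], facility[0], facility[1]) <= radiusKm;
    }

    public static void main(String[] args) {
        double[] crime = {6.3350, 5.6037};   // hypothetical crime location
        double[] station = {6.3400, 5.6100}; // hypothetical police station
        System.out.println(insideBuffer(crime, station, 2.0)); // within a 2 km buffer?
    }
}
```

A buffer layer in a GIS is this test applied to every feature, with the matching features rendered as a zone around the crime points.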

Database design is often described as an intuitive, even artistic, process. Many researchers, however, are currently working on applying techniques from artificial intelligence to provide effective automated assistance for this task. This... more

Database design is often described as an intuitive, even artistic, process. Many researchers, however, are currently working on applying techniques from artificial intelligence to provide effective automated assistance for this task. This article presents a summary of the current state of the art for the benefit of future researchers and users of this technology. Thirteen examples of knowledge-based tools for database design are briefly described and then compared in terms of the source, content, and structure of their knowledge bases; the amount of support they provide to the human designer; the data models and phases of the design process they support; and the capabilities they expect of their users. The findings show that there has apparently been very little empirical verification of the effectiveness of these systems. In addition, most rely exclusively on knowledge provided by the developers themselves and have little ability to expand their knowledge based on experience. Although such systems ideally would be used by application specialists rather than database professionals, most of these systems expect the user to have some knowledge of database technology.

Lecture material 3 for Database Design (Perancangan Basis Data), covering the database design process with the stages of requirements collection and analysis, conceptual design, DBMS selection, logical design, physical design, and... more

Lecture material 3 for Database Design (Perancangan Basis Data) covers the database design process through the stages of requirements collection and analysis, conceptual design, DBMS selection, logical design, physical design, and implementation. The Entity-Relationship (E-R) model is also explained.

Lecture material 2 for Database Design (Perancangan Basis Data). Covers the history of database technology development, database mapping, software development models, database models, and data abstraction.

The use of design patterns such as the GRASP (General Responsibility Assignment Software Principles) or GoF (Gang-of-Four) patterns in software engineering has been well-documented and widely used in software design and implementation.... more

The use of design patterns such as the GRASP (General Responsibility Assignment Software Principles) or GoF (Gang-of-Four) patterns in software engineering has been well-documented and widely used in software design and implementation. Research efforts have also been made to apply these generic software engineering design patterns to other design and implementation endeavors in computer science. One such effort is our

Lecture material 5 for Database Design (Perancangan Basis Data), covering the SQL components, which consist of the Data Manipulation Language (DML), Data Definition Language (DDL), Data Control Language (DCL) and Transaction... more

Lecture material 5 for Database Design (Perancangan Basis Data), covering the SQL components, which consist of the Data Manipulation Language (DML), Data Definition Language (DDL), Data Control Language (DCL) and Transaction Control Language (TCL), together with the command structure in SQL.
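The four SQL sublanguages listed above can usually be told apart by a statement's leading keyword. As a small illustration in Java (the keyword sets below cover only the common commands, not the full SQL standard, and the class name is made up for this sketch):

```java
import java.util.Map;

// Classify an SQL statement as DML, DDL, DCL or TCL by its first keyword.
// Only the common keywords of each sublanguage are covered.
public class SqlClassifier {
    private static final Map<String, String> KIND = Map.ofEntries(
        Map.entry("SELECT", "DML"), Map.entry("INSERT", "DML"),
        Map.entry("UPDATE", "DML"), Map.entry("DELETE", "DML"),
        Map.entry("CREATE", "DDL"), Map.entry("ALTER", "DDL"),
        Map.entry("DROP", "DDL"),   Map.entry("TRUNCATE", "DDL"),
        Map.entry("GRANT", "DCL"),  Map.entry("REVOKE", "DCL"),
        Map.entry("COMMIT", "TCL"), Map.entry("ROLLBACK", "TCL"),
        Map.entry("SAVEPOINT", "TCL"));

    public static String classify(String sql) {
        String first = sql.trim().split("\\s+")[0].toUpperCase();
        return KIND.getOrDefault(first, "UNKNOWN");
    }

    public static void main(String[] args) {
        System.out.println(classify("CREATE TABLE mahasiswa (id INT)"));  // DDL
        System.out.println(classify("GRANT SELECT ON mahasiswa TO app")); // DCL
    }
}
```

The split reflects what each sublanguage manipulates: DML works on rows, DDL on schema objects, DCL on privileges, and TCL on transaction boundaries.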