Gwang Jung - Academia.edu

Papers by Gwang Jung

Research paper thumbnail of A critical investigation of recall and precision as measures of retrieval system performance

ACM Transactions on Information Systems, 1989

Recall and precision are often used to evaluate the effectiveness of information retrieval systems. They are easy to define if there is a single query and if the retrieval result generated for the query is a linear ordering. However, when the retrieval results are weakly ordered, in the sense that several documents have an identical retrieval status value with respect to a query, some probabilistic notion of precision has to be introduced. Relevance probability, expected precision, and so forth, are some alternatives mentioned in the literature for this purpose. Furthermore, when many queries are to be evaluated and the retrieval results averaged over these queries, some method of interpolation of precision values at certain preselected recall levels is needed. The currently popular approaches for handling both a weak ordering and interpolation are found to be inconsistent, and the results obtained are not easy to interpret. Moreover, in cases where some alternatives are available, ...
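The weak-ordering problem described above can be made concrete with a small sketch. This is a minimal illustration of one probabilistic treatment of ties, in which documents sharing a retrieval status value contribute their relevant documents proportionally when a cutoff falls inside the tied group; it is not the paper's exact definitions.

```python
def expected_precision(groups, cutoff):
    """Expected precision at a retrieval cutoff when results are weakly
    ordered. `groups` is a list of (group_size, relevant_in_group) pairs,
    each group holding documents with an identical retrieval status value.
    Documents inside a tied group are assumed equally likely to appear in
    any order, so a partially retrieved group contributes its relevant
    count proportionally to the fraction of the group taken."""
    retrieved = 0.0
    expected_relevant = 0.0
    for size, relevant in groups:
        if retrieved + size <= cutoff:
            retrieved += size
            expected_relevant += relevant
        else:
            take = cutoff - retrieved              # partial tied group
            expected_relevant += relevant * (take / size)
            retrieved = cutoff
            break
    return expected_relevant / cutoff

# Three groups: (2 docs, 2 relevant), (3 tied docs, 1 relevant), (5, 0).
# A cutoff of 4 takes the first group whole plus 2 of the 3 tied docs:
# expected relevant = 2 + 1 * (2/3); precision = (2 + 2/3) / 4.
print(round(expected_precision([(2, 2), (3, 1), (5, 0)], 4), 3))
```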

Research paper thumbnail of Generic Architecture for Building Knowledge Base Automatically by Analyzing Big Data Available in the Web Environment

In this paper, we present a generic architecture for building domain-specific knowledge bases that can be automatically acquired by analyzing big data collected from the web environment. As a reference implementation, a Nursing Home Application is developed based on our proposed architecture. The initial experimental results provide valuable warrant for future studies.

Research paper thumbnail of Teaching Cyber Security Topics Effectively in a College or University with Limited Resources

To handle cyber security threats, we need to develop courses that educate students about cyber security concepts and methods for handling various attacks in cyberspace. In this paper, we address what resources would be required to develop courses that effectively teach students cyber security concepts and methods at small colleges or universities with limited resources.

Research paper thumbnail of Database SQL Injection Security Problem Handling with Examples

2019 International Conference on Computational Science and Computational Intelligence (CSCI)

Database management systems have been in existence for over fifty years, and they are used to store private and sensitive data. A DBMS must ensure that the stored data is safe from malicious hackers' attacks. In this paper, we summarize methods for detecting and preventing malicious SQL injection in a standalone DBMS or a cloud-based DBMS.
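As a general illustration of the prevention theme (not the paper's specific methods), a parameterized query stops the classic injection payload that string concatenation lets through. Sketched here with Python's standard sqlite3 module and a hypothetical one-row table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attack = "alice' OR '1'='1"   # classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the WHERE
# clause, so the query matches every row in the table.
vulnerable = "SELECT secret FROM users WHERE name = '" + attack + "'"
print(len(conn.execute(vulnerable).fetchall()))   # every row leaks

# Safe: a parameterized query treats the whole payload as a literal
# string value, which matches no row.
safe = conn.execute("SELECT secret FROM users WHERE name = ?", (attack,))
print(len(safe.fetchall()))
```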

Research paper thumbnail of 3D animation watermarking using PositionInterpolator

Lecture Notes in Computer Science, 2006

Research paper thumbnail of Spatial Invariance Issues in Logical Representation of Images

Research paper thumbnail of Enabling High-Performance Data Service in the Web Environment

Research paper thumbnail of Connectionist domain knowledge acquisition and its evaluation in information retrieval

Information retrieval (IR) systems are designed to provide references to documents that would contain the information desired by a user. An IR system needs knowledge about the domain of discourse in order to produce effective responses to users' queries. Several methods have been developed to extract useful domain knowledge, such as a thesaurus knowledge structure and classification rules for the conceptual classification of documents. Most of the earlier work on the acquisition of such domain knowledge in the IR literature involves intensive human effort. Although some studies on the automatic acquisition of such knowledge have been conducted, those approaches are ad hoc and computationally expensive, and they do not provide a systematic way of using that knowledge. In this research, we investigate methods for the acquisition and use of domain knowledge employing connectionist learning, which can be conveniently realized in a Connectionist Model (CM). A class of CM, known as high-order CM, is modified in such a way that meaningful high-order correlations among terms are easily used and a large number of terms are handled efficiently during the learning process. We first investigate a method for the automatic acquisition of thesaurus domain knowledge, termed a pseudo-thesaurus. Instead of establishing complex semantic relationships among terms, such as synonymity or generality, we determine negative/positive semantic relationships among them by connectionist learning based on accumulated users' relevance feedback. The pseudo-thesaurus is optimal and is incorporated in the retrieval process systematically. Secondly, a method of deriving characterization rules for the conceptual classification of documents by using connectionist learning is developed.
The domain expert knowledge, in the form of conceptual categories, is identified by a knowledge elicitation method known as Personal Construct Theory. The construction of the characterization rules is then achieved by a learning algorithm that analyzes samples from each conceptual category. This investigation not only provides a way to improve retrieval effectiveness and efficiency by having conceptual clusters of documents but also provides a systematic way of identifying concepts present in a natural-language document. In addition to the investigation described above, a precision measure, suitable for evaluating in a predictive sense IR systems that employ some sort of learning mechanism, is also provided on a sound theoretical basis.
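The relevance-feedback-driven learning of term relationships described above can be caricatured with a toy Hebbian-style update. The weight structure, learning rate, and update rule here are illustrative stand-ins, not the thesis's high-order connectionist algorithm:

```python
def update_associations(W, query_terms, doc_terms, relevant, lr=0.1):
    """Toy Hebbian-style update of a term-association matrix W (a dict of
    dicts): strengthen links between query terms and document terms when
    the user marks the document relevant, and weaken them otherwise.
    Accumulated over many feedback events, positive weights play the role
    of the positive semantic relationships in the pseudo-thesaurus and
    negative weights the role of the negative ones."""
    sign = 1 if relevant else -1
    for q in query_terms:
        for t in doc_terms:
            if q != t:
                W.setdefault(q, {}).setdefault(t, 0.0)
                W[q][t] += sign * lr
    return W

W = {}
W = update_associations(W, ["fractal"], ["image", "coding"], relevant=True)
W = update_associations(W, ["fractal"], ["poetry"], relevant=False)
print(W["fractal"])
```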

Research paper thumbnail of Efficient algorithm for identifying dependency regions for fast fractal image decoding

Proceedings of Spie the International Society For Optical Engineering, Apr 1, 2000

This paper describes a novel fractal coding scheme that significantly improves the efficiency of fractal image decoding. Removing a great number of the contractive transforms required by the decoding process can significantly reduce the decoding time. The proposed decoding scheme effectively finds dependency regions, whose range blocks are decoded by only one contractive transformation, from an encoded image. The experimental results show the significance of our proposed scheme in improving the efficiency of the fractal decoding process.

Research paper thumbnail of A Fast Full Search Algorithm for Motion Estimation Using Priority of Matching Scan

Lecture Notes in Computer Science, 2006

Full search motion estimation in real-time video coding requires a large amount of computation. Reducing the computational cost of full search motion estimation is a critical research issue for enabling fast real-time video coding such as MPEG-4 advanced video coding. In this paper, we propose a novel fast full search block matching algorithm which significantly reduces unnecessary computations without affecting motion prediction quality. The proposed algorithm identifies a computational matching order from an initial calculation of matching differences. According to the computational order identified, matching errors are calculated based on the partial distortion elimination method. The proposed algorithm could reduce about 45% of the computational cost of calculating block matching errors compared with the conventional algorithm, without degrading motion prediction quality. The proposed algorithm will be particularly useful for realizing fast real-time video coding applications, such as MPEG-4 advanced video coding, that require a large amount of computation.
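The partial distortion elimination idea can be sketched as follows. This shows only the early-termination core on blocks represented as flat pixel lists; the paper's contribution of prioritizing the matching-scan order is not reproduced here:

```python
def sad_with_pde(current, candidate, best_so_far):
    """Sum of absolute differences between two equal-sized blocks (flat
    pixel lists), with partial distortion elimination: abort as soon as
    the running partial sum already reaches the best match found so far,
    since this candidate can no longer improve on it."""
    total = 0
    for a, b in zip(current, candidate):
        total += abs(a - b)
        if total >= best_so_far:       # early termination: hopeless match
            return best_so_far
    return total

cur = [10, 20, 30, 40]
print(sad_with_pde(cur, [12, 18, 29, 41], 100))  # full SAD: 2+2+1+1 = 6
print(sad_with_pde(cur, [90, 90, 90, 90], 6))    # first term 80 >= 6, aborts
```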

Research paper thumbnail of Novel approach to expanding dependency regions for fast fractal image decoding

Multimedia Systems and Applications III, 2001

To accelerate the decoding of fractal-coded images, researchers have actively investigated methods for deriving dependency regions. This paper describes a novel approach to expanding dependency regions for fast fractal image decoding. Our approach carefully identifies and encodes data dependency at encoding time in such a way that the decoder is guaranteed to obtain the dependency regions and can further expand them effectively. The experimental results show that our approach improves the efficiency of the fractal decoding process significantly.

Research paper thumbnail of Efficient and dependable multimedia data delivery service in World Wide Web environment

Multimedia data is characterized by large objects that require high bandwidth. This paper presents a technique that enables efficient and dependable data storage and delivery of large multimedia objects in the World Wide Web environment. The proposed approach stripes data into blocks that are distributed over multiple Web servers. Data distribution is transparent to users. The parallelism of multiple Web servers is exploited to achieve high data rates. The paper presents two data coding techniques to achieve high dependability.
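The striping idea can be sketched as follows. This is a hypothetical round-robin block layout with lossless reassembly; the paper's two coding techniques for dependability (recovering from server failures) are not shown:

```python
def stripe(data, n_servers, block_size):
    """Split `data` into fixed-size blocks and assign them round-robin
    across `n_servers`, returning one block list per server. Clients can
    then fetch from all servers in parallel for higher aggregate rates."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    servers = [[] for _ in range(n_servers)]
    for i, block in enumerate(blocks):
        servers[i % n_servers].append(block)
    return servers

def reassemble(servers):
    """Interleave the per-server block lists back into the original data,
    reversing the round-robin assignment."""
    blocks, i = [], 0
    while True:
        row = [s[i] for s in servers if i < len(s)]
        if not row:
            break
        blocks.extend(row)
        i += 1
    return b"".join(blocks)

data = b"multimedia object payload"
striped = stripe(data, n_servers=3, block_size=4)
assert reassemble(striped) == data   # striping is lossless
```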

Research paper thumbnail of Distributed multimedia data storage for dependability, scalability, and high performance

Multimedia Storage and Archiving Systems III, 1998

In this paper, we describe a method for enabling efficient and dependable multimedia data storage and transfer in a large-scale distributed computing and communication environment such as the Web. The proposed method has several advantages over traditional ones: high data rates, scalability, availability, reliability, and seamless, system-controlled load balancing.

Research paper thumbnail of Automatic determination and visualization of relationships among symptoms for building medical knowledge bases

Proceedings of the 1995 ACM symposium on Applied computing - SAC '95, 1995

Medicine is one of the most important areas to be explored by many researchers in Artificial Intelligence. The interest has been primarily in the applied aspects of Artificial Intelligence, that is, Expert Systems [1, 2]. Current expert system work in medicine includes interpretation ...

Research paper thumbnail of Spatial knowledge representation and retrieval in 3-D image databases

Proceedings of the International Conference on Multimedia Computing and Systems, 1995

... 7 Conclusions In this paper, we have proposed a geometry-based image representation scheme and an algorithm/function for retrieving 3-D images of relevance (based on spatial similarity) to ... The algorithm is robust, in the sense that it can recognize translation and scale ...

Research paper thumbnail of Distributed adaptive attribute-based image retrieval

SPIE Proceedings, 1995

In this paper, we describe a prototype system, named DAIRS, for distributed image retrieval. DAIRS features an adaptive query reformulation mechanism for improving retrieval effectiveness. The query reformulation mechanism is based on the calculation of the functional dependency between each image attribute and the user's relevance feedback. The importance (or weight) of each attribute is modified in the reformulated query based on the degree of such functional dependency. Since image servers are dynamically evolving in a distributed environment, DAIRS has been designed to deal with image databases in various domains distributed on the Internet. The DAIRS communication protocol is designed to cooperate with HTTP and FTP, so that the client can easily access the distributed image repositories. Experimental results show that the query reformulation mechanism significantly improves retrieval effectiveness.
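The attribute reweighting idea might be sketched as follows. The mean-difference proxy used here is an illustrative stand-in for DAIRS's functional-dependency calculation, and all data values are hypothetical:

```python
def reweight(match_scores, relevant):
    """Hypothetical query reweighting from relevance feedback: each
    attribute's new weight reflects how well its per-image match scores
    agree with the user's relevance judgments. `match_scores[i][a]` is
    the match score of attribute `a` for image `i`; `relevant[i]` is the
    user's judgment. Attributes whose scores separate relevant from
    non-relevant images gain weight; weights are normalized to sum to 1."""
    n_attrs = len(match_scores[0])
    weights = []
    for a in range(n_attrs):
        rel = [s[a] for s, r in zip(match_scores, relevant) if r]
        non = [s[a] for s, r in zip(match_scores, relevant) if not r]
        mean_rel = sum(rel) / len(rel) if rel else 0.0
        mean_non = sum(non) / len(non) if non else 0.0
        weights.append(max(mean_rel - mean_non, 0.0))
    total = sum(weights) or 1.0
    return [w / total for w in weights]

# Attribute 0 tracks relevance; attribute 1 anti-correlates with it.
scores = [[0.9, 0.1], [0.8, 0.5], [0.2, 0.6]]
print(reweight(scores, [1, 1, 0]))
```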

Research paper thumbnail of 3D Animation Watermarking Using PositionInterpolator

Multimedia Content Representation, Classification and Security, 2006

For real-time animation, keyframe animation consisting of translation, rotation, and scaling interpolator nodes is widely used in 3D graphics. This paper presents a 3D keyframe animation watermarking scheme based on vertex coordinates in the CoordIndex node and key values in the PositionInterpolator node for VRML animation. Experimental results verify that the proposed algorithm is robust against geometrical attacks and timeline attacks as well as ...

Research paper thumbnail of On probabilistic notions of precision as a function of recall

Information Processing & Management, 1992

Two problems that arise when recall and precision are used to evaluate information retrieval systems are due to the weak ordering of the documents generated by the system and evaluation with multiple queries. Although several alternative stopping criteria are available, our emphasis in this paper is on defining precision when recall is used as the stopping criterion. A number of different probabilistic notions of precision for handling the problem of weak ordering have been proposed in the past, including PRECALL, probability of relevance given retrieval (PRR), and expected precision (EP). Recently, Raghavan et al. provided a comparative analysis of PRECALL, PRR, and EP. They showed that previous usages of PRECALL for dealing with the problem of weak ordering and interpolation, which involved the application of a ceiling operation, are inconsistent, and the results obtained are not easy to interpret. Consequently, they introduced an interpolation scheme, termed intuitive interpolation, that leads to consistent and meaningful handling of the averaging of results given by PRR over multiple queries. A simple way of calculating PRR was also given. However, a comparable analysis of precision defined as EP has not been provided. Furthermore, given that several alternative ways of defining precision in a probabilistic sense are available, no theoretical basis exists for deciding which alternative to use in a specific situation. This paper initially investigates an efficient way of calculating EP and an interpolation scheme for averaging EP that are consistent with the intuitive interpolation scheme proposed for PRR. In addition, PRECALL with intuitive interpolation is termed R-B Precision and is shown to have an interpretation as the limiting value of PRR and EP.
From this result, PRR and EP are shown to be attractive in their ability to present experimental results in a descriptive sense. In contrast, in situations where experimental tests are intended for predictive use, R-B Precision is shown to be a better choice.
