Ja-Ling Wu - Profile on Academia.edu

Papers by Ja-Ling Wu

Solving sorting and related problems by quadratic perceptrons

Electronics Letters, May 7, 1992

A Time Bank System Design on the Basis of Hyperledger Fabric Blockchain

Future Internet, May 8, 2020

This paper presents a blockchain-based time bank system built on the Hyperledger Fabric framework, a permissioned blockchain network. Most services provided by existing time bank systems have been recorded and conducted manually, and the matching of services with receivers has been managed by people. Running a time bank this way costs considerable time and human resources and, worse, lacks security. This work designs and realizes a time bank system in which all service-related processes are executed and recorded on a blockchain, and the matching of service supply and demand is done directly by autonomous smart contracts. Building a time bank on a blockchain also eases the transfer of time credits, which play the role of a digital currency in the system. In addition, the proposed time bank retains a grading system that lets members grade each other to reflect their satisfaction with the services received. This grading system incentivizes members to provide better service quality and to behave well when receiving a service, which may support the development of a worldwide time bank system.
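
As a rough illustration of the supply-and-demand matching a smart contract could automate, the sketch below pairs service offers with requests by category and available hours. It is a plain-Python sketch only: Fabric chaincode is actually written in Go, Node.js, or Java, and the Service fields and greedy matching rule here are assumptions, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Service:
    member: str      # member ID on the ledger (hypothetical field)
    category: str    # e.g., "tutoring"
    hours: int       # time credits offered or requested

def match(offers: list[Service], requests: list[Service]) -> list[tuple[Service, Service]]:
    """Greedily pair each request with the first compatible offer;
    a real chaincode would also transfer time credits on the ledger."""
    matches, free = [], list(offers)
    for req in requests:
        for off in free:
            if off.category == req.category and off.hours >= req.hours:
                matches.append((off, req))
                free.remove(off)
                break
    return matches

offers = [Service("alice", "tutoring", 2), Service("bob", "repair", 1)]
requests = [Service("carol", "tutoring", 1)]
print(match(offers, requests))  # pairs alice's offer with carol's request
```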

Cell-based interconnection network design and the all-pairs examination problem

International Journal of Electronics, Oct 1, 1989

A systematic procedure for the design of VLSI cell-based interconnection networks is proposed through the concept of the all-pairs examination problem. Since there are no line intersections between the intermodular interconnections of the proposed network, it is very suitable for planar VLSI implementation. With the advent of very large scale integration (VLSI), it became possible to place not only a whole computer but also a whole array of processors on a single chip or wafer of silicon (Seitz 1984). The interconnections between the processors of such a tightly integrated array are very complicated. Furthermore, the significance of VLSI technology lies not only in the capability of integrating a large number of devices on a chip but also in the capability of providing massive interconnections (Goodman et al. 1984). Modular system design simplifies the external links of each subsystem, which reduces the propagation delay, whereas VLSI implementation reduces the switching delay. In addition, since gate (switching) delays decrease with scaling while interconnection (propagation) delays remain constant, the speed at which a circuit can operate is dominated by the interconnection delay rather than by the switching delay. Therefore, an interconnection network constructed with switching devices is highly desirable. Switching-type interconnection networks can be found in many papers (Feng 1981; Hwang and ...), but these result in many line intersections when implemented with planar VLSI technologies. Such line intersections lengthen the routing; in other words, extra chip area is required, which is one of the major problems of the VLSI implementation of interconnections. The crossbar network is a well-known and widely used interconnection network. From the analysis given by Franklin (1981), one can see that the crossbar network, especially in VLSI, is more suitable for asynchronous timing-control systems than its synchronous counterpart. But most practical real-time digital processing systems require synchronous (clocked) timing control (cf. radar and sonar signal processing, digital image processing, digital speech processing, etc.; Kung et al. 1985). Furthermore, the primary condition for the existence of fast computation algorithms in such systems is the 'symmetry and/or antisymmetry of the operand', which always stems from the 'dynamic permutations' of the data flows. Hence, only synchronized interconnection network design is considered in this paper.

The Effect of Thickness-Based Dynamic Matching Mechanism on a Hyperledger Fabric-Based TimeBank System

Future Internet, Mar 6, 2021

In a community with an aging population, mutual help is an essential social function, and the lack of mutual trust puts the need for a fair and transparent service-exchange platform at the top of the public service administration's list. In this work, we present an efficient blockchain-based TimeBank realization with a newly proposed dynamic service matching algorithm (DSMA). Hyperledger Fabric (Fabric for short), one of the well-known consortium blockchains, is chosen as our realization platform: it provides an identity certification mechanism and has an extendable network structure. The performance of a DSMA is measured by the waiting time for a service to get a match, called the service-matching waiting time (SMWT). In our DSMA, whether a service is matched immediately or waits for a later chance depends dynamically on the total number of contemporaneously available services (i.e., the thickness of the service market). To further improve the proposed TimeBank system's service quality, a Dynamic Tuning Strategy (DTS) is designed to thicken the market. Experimental results show that a thicker market gives on-chain nodes more links and, in turn, lets them find a match more easily (i.e., with a shorter SMWT).
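
The abstract describes the key decision rule but not its code; the sketch below shows one plausible reading, assuming a simple thickness threshold: services accumulate until the market is thick enough, and only then are compatible offer/demand pairs matched. The field names and threshold rule are illustrative assumptions, not the paper's exact algorithm.

```python
import random

def dsma_step(pending: list[dict], thickness_threshold: int) -> list[tuple]:
    """One round of a thickness-based matching rule (a sketch): match only
    when the market is thick enough; otherwise wait so more candidates
    accumulate and better matches become possible."""
    if len(pending) < thickness_threshold:
        return []                                  # thin market: defer matching
    offers  = [s for s in pending if s["kind"] == "offer"]
    demands = [s for s in pending if s["kind"] == "demand"]
    matched = []
    for d in demands:
        candidates = [o for o in offers if o["category"] == d["category"]]
        if candidates:
            o = random.choice(candidates)          # pick one compatible offer
            offers.remove(o)
            matched.append((o["id"], d["id"]))
    return matched

pool = [{"id": 1, "kind": "offer", "category": "care"},
        {"id": 2, "kind": "demand", "category": "care"}]
print(dsma_step(pool, thickness_threshold=2))      # [(1, 2)]
```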

Comments on “fixed-point error analysis of fast Hartley transform”

Signal Processing, Aug 1, 1993

This brief contribution contains a completion of, and a correction to, a paper recently published in Signal Processing.

An efficient and effective Decentralized Anonymous Voting System

arXiv (Cornell University), Apr 18, 2018

A trusted electronic election system requires that all the involved information go public; that is, it must address not only transparency but also privacy. In other words, each ballot should be counted anonymously, correctly, and efficiently. In this work, a lightweight e-voting system is proposed that minimizes voters' trust in the authority or government. We ensure the transparency of the election by putting all messages on the Ethereum blockchain, while the privacy of individual voters is protected via an efficient and effective ring-signature mechanism. Besides, the attractive self-tallying feature is built into our system, which guarantees that everyone with access to the blockchain network can tally the result alone; no third party is required after the voting phase. More importantly, we ensure the correctness of the voting results while keeping each participant's Ethereum gas cost as low as possible. These characteristics make our system suitable for large-scale elections.
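
Self-tallying means anyone reading the chain can recompute the result. A minimal sketch, assuming ballots are visible as on-chain events; verify_ring_signature is a hypothetical placeholder for the actual ring-signature check that keeps ballots unlinkable to voters.

```python
from collections import Counter

def verify_ring_signature(ev: dict) -> bool:
    return True  # placeholder for the real ring-signature verification

def self_tally(ballot_events: list[dict], candidates: set[str]) -> Counter:
    """Anyone with chain access can recompute the result: read every ballot
    event, keep those whose (ring) signature verifies, and count."""
    tally = Counter()
    for ev in ballot_events:
        if ev["choice"] in candidates and verify_ring_signature(ev):
            tally[ev["choice"]] += 1
    return tally

events = [{"choice": "A"}, {"choice": "B"}, {"choice": "A"}]
print(self_tally(events, {"A", "B"}))  # Counter({'A': 2, 'B': 1})
```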

A CAI System for Voronoi Diagrams

A Fully Decentralized Time-Lock Encryption System on Blockchain

2019 IEEE International Conference on Blockchain (Blockchain), 2019

To make a time capsule on the Internet, which will be opened at a planned time in the future, wit... more To make a time capsule on the Internet, which will be opened at a planned time in the future, without third parties' involvement has always been a difficult problem. Although there are many researches worked on various time-lock systems, they may have some shortcomings like uncertainty in decryption time, not fully decentralized, hard to estimate the required computing resources. In this paper, we proposed a protocol and a reliable encryption scheme to make time-sensitive message be opened on time at a fully decentralized environment, which is then integrated with the blockchain to adapt to different computing power situations. The method also provides the capability of incorporating with appropriate incentives for encouraging participants to contribute their computing resources, which makes our system more suitable for real world applications.
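
The abstract does not spell out the encryption scheme, so the sketch below shows the classic Rivest-Shamir-Wagner time-lock puzzle, which illustrates the core idea such systems build on: recovering the key requires roughly t sequential squarings, which cannot be parallelized, so the opening time is tied to sequential computing effort. This is the textbook construction, not necessarily the authors' scheme.

```python
def make_puzzle(p: int, q: int, t: int, key: int):
    """Hide `key` so that recovering it takes ~t sequential squarings.
    Only the creator, who knows phi(n), can take the fast shortcut."""
    n = p * q
    a = 2
    phi = (p - 1) * (q - 1)
    e = pow(2, t, phi)              # creator's shortcut: 2^t mod phi
    b = pow(a, e, n)                # b = a^(2^t) mod n, computed cheaply
    return n, a, t, (key + b) % n   # puzzle = (n, a, t, masked key)

def solve_puzzle(n: int, a: int, t: int, masked: int) -> int:
    b = a
    for _ in range(t):              # t unavoidable sequential squarings
        b = pow(b, 2, n)
    return (masked - b) % n

p, q = 1000003, 1000033             # toy primes; real use needs large ones
n, a, t, masked = make_puzzle(p, q, t=100000, key=42)
print(solve_puzzle(n, a, t, masked))  # 42
```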

Perceptually lossless video re-encoding for cloud transcoding

2014 IEEE China Summit & International Conference on Signal and Information Processing (ChinaSIP), 2014

In this paper, we present a perceptually lossless video re-encoding approach to cloud transcoding based on a just-noticeable-distortion (JND) model. The bitrate is minimized through adaptive step sizes and dynamic rounding-offset adjustment. On average, the proposed approach achieves a 9.7% overall bitrate reduction compared with the H.264/AVC JM reference encoder under the same coding conditions. The performance of the resulting cloud transcoding is further verified by a subjective test.
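
A minimal sketch of JND-guided re-quantization, assuming a per-coefficient JND threshold array: the quantizer step is doubled for as long as the error it introduces stays below the threshold, trading invisible distortion for bitrate. The doubling schedule is an illustrative assumption; the paper's exact step-size and rounding-offset rules are not given in the abstract.

```python
import numpy as np

def jnd_quantize(coeffs: np.ndarray, jnd: np.ndarray, base_step: float) -> np.ndarray:
    """Per coefficient, keep doubling the quantizer step while the coarser
    reconstruction error stays below that coefficient's JND threshold."""
    c, t = coeffs.ravel(), jnd.ravel()
    out = np.empty_like(c)
    for i in range(c.size):
        step = base_step
        # cap the step so coefficients smaller than their JND just become 0
        while step < 2 * abs(c[i]) and \
              abs(np.round(c[i] / (2 * step)) * (2 * step) - c[i]) < t[i]:
            step *= 2                  # a coarser step is still invisible
        out[i] = np.round(c[i] / step) * step
    return out.reshape(coeffs.shape)

coeffs = np.array([[120.0, 9.5], [-31.0, 2.0]])
jnd = np.array([[4.0, 3.0], [6.0, 5.0]])
print(jnd_quantize(coeffs, jnd, base_step=1.0))
```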

Genetic Algorithm in Pattern Matching Problems of Video Coding

Adaptive Multi-Dictionary Model for Data Compression

Proceedings of the IEEE International Symposium on Information Theory

The main purpose of data compression is to represent source data in a compact form by applying coding techniques, substituting a smaller amount of data for the original large volume of information. Compression can be applied to data storage and data transmission: it saves the space needed to store enormous data sets and reduces the time spent transmitting data over communication channels, and it therefore plays a very important role in modern information systems. By its result, data compression can be classified into two categories, lossless and lossy. Lossless compression ensures the original data can be recovered exactly, without any distortion; we focus on the lossless case in the following discussion. Common lossless techniques include run-length coding, Huffman coding, arithmetic coding, Lempel-Ziv coding, and BSTW coding. The two major fundamental models are the probabilistic model and the dictionary model. One obvious redundancy in many data sets is the repeated occurrence of substrings or patterns, and techniques that factor out common substrings are known as dictionary techniques. A dictionary of common substrings can be constructed either on the fly or in a separate pass, and one may use the same dictionary for all input data sets (static) or construct a different dictionary for each data file (adaptive or semi-adaptive). Lempel-Ziv coding is one of the adaptive dictionary techniques.

Cache memories are high-speed buffers inserted between the processor and main memory to capture those portions of main memory currently in use. Well-known cache management policies include block placement, block identification, block replacement, and the write strategy. The idea of fast cache access can be applied to data compression: if we collect frequently occurring substrings (patterns) in a small cache-like dictionary and encode these patterns with fewer bits, the overall compression performance should improve. For dictionary techniques, the policies that maintain the dictionary contents can be adopted from cache management. We propose a new adaptive multi-dictionary model that describes the behavior of compression coding through dictionary management policies. The parameters defined in the model are: the number of dictionaries; the sizes of the dictionaries; the generate policy, which defines new words during encoding; the codeword representation mapping, which specifies the output bit pattern of each dictionary entry; the flag-bit representation mapping, which specifies the flag bit pattern identifying the currently used dictionary; the placement policy, which decides where a dictionary word should be placed; the replacement policy, which throws away old entries when a dictionary fills; the update policy, which controls the exchange of words among dictionaries; and the adjustment policy, which modifies the codeword mapping after each coding step. Under the proposed model, the dictionary-based coding process can be viewed as the construction, insertion, deletion, and modification of dictionary contents. The characteristics of Lempel-Ziv-type methods such as LZ77, LZ78, and LZW can be described exactly by the specified management policies, and some non-dictionary techniques can also be included in the model.

By relating the coding procedures to dictionary management actions, we have also interpreted Huffman coding and arithmetic coding as special cases of the proposed model. The model describes the operational behavior of dictionary-based coding through nine parameters, and compression efficiency is greatly affected by these factors. The features of the proposed model include multiple dictionaries, a time-variant codeword mapping mechanism, adaptive vocabulary exchange between dictionaries, and the placement, replacement, and update policies for the dictionary vocabulary. Possible applications of the proposed coding model are threefold: first, it provides a unified framework for interpreting existing techniques; second, it points out possible directions for improving current techniques; third, new coding systems can be developed easily by choosing suitable management policies. The influence of the different parameters on compression is a topic for future research. *This work was supported by the National Science Council, Taipei, Taiwan, Republic of China, under contract No. NSC-0408-E-002-232.
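
A toy instance of the model, shown below, uses one dictionary, a fixed word length, literal/index codewords, and an LRU replacement policy borrowed from cache management; all of these parameter choices are illustrative, not the paper's.

```python
from collections import OrderedDict

def lru_dict_encode(text: str, dict_size: int = 16, word_len: int = 4):
    """Cache-like dictionary coder: words found in the dictionary are
    emitted as short indices; misses emit the literal word and insert it,
    evicting the least recently used entry when the dictionary is full."""
    d = OrderedDict()                      # insertion order tracks recency
    out = []
    for i in range(0, len(text), word_len):
        w = text[i:i + word_len]
        if w in d:
            d.move_to_end(w)               # cache hit: refresh recency
            out.append(("IDX", list(d).index(w)))
        else:
            if len(d) >= dict_size:
                d.popitem(last=False)      # evict least recently used word
            d[w] = None
            out.append(("LIT", w))
    return out

print(lru_dict_encode("abcdabcdabcdxyz!abcd"))
# [('LIT', 'abcd'), ('IDX', 0), ('IDX', 0), ('LIT', 'xyz!'), ('IDX', 1)]
```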

Adaptive beam forming without signal cancellation in the presence of coherent jammers

IEE Proceedings F Radar and Signal Processing, 1989

Adaptive beamforming using spatial smoothing has been proposed to combat coherent jammers. Recently, it has been found that this adaptive beamforming technique cannot avoid signal cancellation while also rejecting coherent jammers. In this paper, an approach is presented that eliminates the interaction between the desired signal and coherent jammers during the adaptation of spatial smoothing. As a result, the proposed beamformer can effectively null coherent jammers without signal cancellation. Moreover, the resulting output signal-to-noise ratio provides information about the presence of the desired signal. Computer simulations confirm the theoretical work.
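
For reference, the sketch below shows standard spatial smoothing, the technique the paper builds on: the sample covariance matrices of overlapping subarrays are averaged, which restores the covariance rank that is lost when jammers are coherent with the signal.

```python
import numpy as np

def spatially_smoothed_covariance(snapshots: np.ndarray, subarray_size: int) -> np.ndarray:
    """Average the sample covariances of overlapping subarrays.
    `snapshots` is (num_sensors, num_snapshots), complex-valued."""
    m, n = snapshots.shape
    p = m - subarray_size + 1               # number of overlapping subarrays
    r = np.zeros((subarray_size, subarray_size), dtype=complex)
    for k in range(p):
        sub = snapshots[k:k + subarray_size, :]
        r += sub @ sub.conj().T / n         # sample covariance of subarray k
    return r / p

# Example: 8-sensor array, 200 snapshots, 6-sensor subarrays
x = (np.random.randn(8, 200) + 1j * np.random.randn(8, 200)) / np.sqrt(2)
print(spatially_smoothed_covariance(x, 6).shape)   # (6, 6)
```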

Music Cut and Paste: A Personalized Musical Medley Generating System

International Symposium/Conference on Music Information Retrieval, 2013

A musical medley is a piece of music composed of parts of existing pieces. Manually creating a medley is time consuming because it is not easy to find proper clips to put in succession and to connect them seamlessly. In this work, we propose a framework for creating personalized music medleys from a user's music collection. Unlike existing works, in which only low-level features are used to select candidate clips and locate possible transition points among them, we take song structure and phrasing into account during medley creation. Inspired by the musical dice game, we treat medley generation as an audio version of that game: once the songs in the user's collection have been analyzed, the system can generate various medleys with different probabilities. This flexibility makes it possible to create medleys according to user-specified conditions, such as the medley structure or some must-use clips. Preliminary subjective evaluations showed that the proposed system is effective in selecting connectable clips that preserve the chord-progression structure; moreover, connecting clips at phrase boundaries won more user preference than previous works did.
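
A minimal sketch of the musical-dice-game idea: given precomputed clip-to-clip compatibility scores (which the paper derives from chord and phrase analysis), the next clip is drawn with probability proportional to its score, so repeated runs produce different medleys. The clip names and scores below are invented for illustration.

```python
import random

def generate_medley(clips: list[str], compat: dict, length: int, start: str) -> list[str]:
    """Random walk over a clip-compatibility graph: at each step, sample the
    next clip with probability proportional to its compatibility score."""
    medley = [start]
    for _ in range(length - 1):
        cur = medley[-1]
        others = [c for c in clips if c != cur]
        weights = [compat.get((cur, c), 0.0) for c in others]
        if sum(weights) == 0:
            break                                  # no compatible continuation
        medley.append(random.choices(others, weights=weights)[0])
    return medley

compat = {("A", "B"): 0.9, ("B", "C"): 0.7, ("C", "A"): 0.5, ("A", "C"): 0.1}
print(generate_medley(["A", "B", "C"], compat, length=4, start="A"))
```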

Two-Bit Embedding Histogram-Prediction-Error Based Reversible Data Hiding for Medical Images with Smooth Area

Computers, Nov 12, 2021

Medical treatment involves personal privacy, which must be protected: healthcare institutions have to keep medical images and health information secret unless the data owner permits disclosure. Reversible data hiding (RDH) is a technique that embeds metadata into an image in such a way that the image can be recovered without any distortion after the hidden data have been extracted. This work develops a fully reversible two-bit-embedding RDH algorithm with a large hiding capacity for medical images. Medical images can be partitioned into regions of interest (ROI) and regions of non-interest (RONI); the ROI carries the semantic content essential for clinical applications and diagnosis and cannot tolerate even subtle changes. We therefore use histogram shifting and prediction errors to embed metadata into the RONI, and our embedding algorithm minimizes side effects on the ROI as much as possible. To verify the effectiveness of the proposed approach, we benchmarked three types of medical images in DICOM format: X-ray photography (X-ray), computed tomography (CT), and magnetic resonance imaging (MRI). Experimental results show that most of the hidden data are embedded in the RONI and that the method achieves high capacity while leaving little visible distortion in the ROI.
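
For context, the sketch below shows classic single-pair histogram shifting, the building block the work combines with prediction errors and two-bit embedding; it embeds one bit per peak-bin pixel of a RONI-like region. It is simplified: it assumes the peak bin is below 255 and that a near-empty bin exists above it.

```python
import numpy as np

def hs_embed(region: np.ndarray, bits: list[int]):
    """Single-pair histogram-shifting embed: pixels between the peak and
    zero bins shift up by one to free the slot next to the peak; each
    peak-bin pixel then carries one bit (0 -> stay, 1 -> peak+1).
    Extraction reads the peak/peak+1 pixels and reverses the shift."""
    hist = np.bincount(region.ravel(), minlength=256)
    peak = int(hist.argmax())
    zero = int(hist[peak + 1:].argmin()) + peak + 1  # emptiest bin above peak
    out = region.copy()
    out[(region > peak) & (region < zero)] += 1      # open up bin peak+1
    flat, it = out.ravel(), iter(bits)
    for i in np.flatnonzero(region.ravel() == peak):
        b = next(it, None)
        if b is None:
            break
        flat[i] = peak + b
    return out, peak, zero

img = np.random.randint(100, 110, size=(64, 64), dtype=np.uint8)
stego, peak, zero = hs_embed(img, bits=[1, 0, 1, 1])
```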

Automatic facial feature extraction by genetic algorithms

Proceedings of SPIE, Dec 28, 1998

The same-geometry implementations of the discrete rectangular wave transform

Since the discrete rectangular wave transform (DRWT) was proposed, various applications have been found for it, such as real-time DFT implementation and image recognition. Although the DRWT possesses the property of easy computation, there has been no cost-effective algorithm for its implementation. Most of the architectures proposed to implement this transform are systolic arrays, wavefront arrays, or algorithm-based implementations with butterfly-like structures, and these architectures offer either the modular property or the high-throughput-rate property, but not both. Therefore, a same-geometry implementation is proposed in this paper: a cost-effective algorithm for implementing the DRWT, called the same-geometry DRWT. This newly proposed algorithm provides a better procedure not merely for fast computation but also from the standpoint of practical implementation. In the same-geometry DRWT, a power-of-two length is the constraint. There are 2log2(N) - 1 stages in the implementation of the DRWT with length N; furthermore, log2(N) of the stages are identical, and therefore only log2(N) stages demand circuit design in the implementation phase. The delay of each stage is merely one addition, and the cells required to construct the stages are simple. Therefore, this algorithm is suitable for VLSI implementation.

Real-number DFT codes for estimating a dispersive channel

IEEE Signal Processing Letters, Nov 1, 1995

The utilization of real-number DFT codes for channel equalization is studied in this letter. As shown below, real-number DFT codes make it possible to deterministically calculate the dispersive parameters of a channel by introducing some redundancy into the transmitted data.

Feature extraction capability of some discrete transforms

Feature extraction is a fundamental operation in classification and pattern recognition, and there are various strategies for one- and multi-dimensional feature extraction. Transform-domain features are very effective when the patterns are characterized by their spectral properties; a well-known successful example is speech recognition. In this paper, the feature extraction capabilities of the discrete cosine transform (DCT), the Walsh-Hadamard transform (WHT), the discrete Hartley transform (DHT), and their sign transforms are investigated and compared for the recognition of two-dimensional binary patterns. It is shown that the noise immunity of transform-based feature extraction is rather promising.
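
A minimal sketch of transform-domain feature extraction of the kind compared in the paper, assuming SciPy is available: take the 2-D DCT of a binary pattern and keep the low-frequency corner as the feature vector (WHT or DHT features would be computed the same way with the respective transform).

```python
import numpy as np
from scipy.fft import dctn

def dct_features(pattern: np.ndarray, k: int = 4) -> np.ndarray:
    """2-D DCT features of a binary pattern: keep the k x k low-frequency
    corner, where most of the pattern's energy concentrates."""
    c = dctn(pattern.astype(float), norm="ortho")
    return c[:k, :k].ravel()

pattern = np.zeros((16, 16))
pattern[4:12, 6:10] = 1          # a binary "bar" pattern
print(dct_features(pattern))     # 16-dimensional feature vector
```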

MMX-based DCT and MC algorithms for real-time pure software MPEG decoding

To overcome the difficulties of computation-intensive multimedia applications, the development groups of major CPU manufacturers, such as Intel™ and Digital™, decided to add new instruction sets to their CPU families to increase their multimedia-handling ability. The newly introduced instructions are basically of the Single Instruction Multiple Data (SIMD) stream operation type. For practical reasons (e.g., the trade-off between the complexity of the hardware implementation and the performance improvement so obtained), a reduced SIMD instruction set is used instead of a full one. Taking Intel as an example, the new instruction set is composed of 57 operations and is called the MultiMedia extension (MMX) instruction set. How to fully utilize the power of such embedded instruction sets for various multimedia applications has become an interesting and important issue. In this paper, we demonstrate an efficient realization, based on the new MMX instruction set, of the block inverse discrete cosine transform (IDCT) and motion compensation (MC), which are kernel components of block-based decoding standards such as MPEG-1, H.261, and H.263. The convincing results show that, with the aid of a proper SIMD instruction set, a pure software solution for complicated multimedia applications (such as real-time MPEG video decoding) becomes feasible.
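
MMX gains its speed by packing eight 8-bit samples into one 64-bit register and processing them with a single instruction. The NumPy sketch below mirrors that data-parallel pattern for motion compensation, adding a residual block to a motion-shifted reference block for all pixels at once with saturating 8-bit arithmetic; it illustrates the idea only and is not the paper's MMX assembly.

```python
import numpy as np

def motion_compensate(ref: np.ndarray, mv: tuple, residual: np.ndarray,
                      top_left: tuple) -> np.ndarray:
    """Fetch the motion-shifted reference block and add the residual for
    all pixels at once, saturating to 8 bits (the whole-array operations
    play the role of MMX's packed adds)."""
    (y, x), (dy, dx) = top_left, mv
    h, w = residual.shape
    block = ref[y + dy:y + dy + h, x + dx:x + dx + w].astype(np.int16)
    return np.clip(block + residual, 0, 255).astype(np.uint8)  # saturate

ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
res = np.random.randint(-16, 16, (8, 8), dtype=np.int16)
print(motion_compensate(ref, (1, -2), res, (8, 8)).shape)       # (8, 8)
```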

A refined fast 2-D discrete cosine transform algorithm

IEEE Transactions on Signal Processing, Mar 1, 1999

In this correspondence, an index-permutation-based fast two-dimensional discrete cosine transform (2-D DCT) algorithm is presented. It is shown that the N × N 2-D DCT, where N = 2^m, can be computed using only N 1-D DCTs and some post-additions.
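
For contrast, the sketch below computes the 2-D DCT by the standard row-column method, which needs 2N length-N 1-D DCTs; the correspondence's index-permutation algorithm reaches the same result with only N 1-D DCTs plus post-additions. SciPy is assumed.

```python
import numpy as np
from scipy.fft import dct, dctn

def dct2_row_column(x: np.ndarray) -> np.ndarray:
    """Baseline separable 2-D DCT: N 1-D DCTs over columns, then N over
    rows (2N in total) -- the count the paper's algorithm halves to N."""
    return dct(dct(x, axis=0, norm="ortho"), axis=1, norm="ortho")

x = np.random.rand(8, 8)
assert np.allclose(dct2_row_column(x), dctn(x, norm="ortho"))
```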
