Kanoksak Wattanachote | Mahidol University
Papers by Kanoksak Wattanachote
IEEE Transactions on Circuits and Systems for Video Technology, Dec 31, 2022
Lecture Notes in Networks and Systems, Dec 31, 2022
IET Image Processing, Aug 8, 2022
ACM Transactions on Multimedia Computing, Communications, and Applications, Aug 20, 2019
arXiv (Cornell University), Dec 2, 2021
Salient object detection (SOD) is a fundamental research direction in computer vision whose main task is to simulate the human visual system by building a visual-awareness model. However, most existing models suffer from two challenges. First, edge-aware models use edge features only to improve segmentation features and do not exploit the complementarity between the two. Second, information lost while extracting features in convolutional neural networks (CNNs) leads to incomplete final results. In this paper, we propose a criss-cross refined salient object detection method based on hybrid attention to address these problems. Specifically, we apply a Criss-Cross Attention (CCA) module to effectively capture useful contextual information from long-range dependencies among surrounding pixels. Fully exploiting the complementarity between saliency detection and edge information, our model simultaneously refines multilevel SOD and edge-detection features through a Cross Refinement Unit (CRU), and finally a U-Net outputs the refined result. Extensive experiments on five benchmark datasets demonstrate that our method significantly improves performance in low-contrast scenes and reduces partial omission and false detection of salient objects. Moreover, it achieves state-of-the-art performance on three different metrics.
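As a rough illustration of the criss-cross attention idea mentioned in this abstract, the sketch below restricts each pixel's attention to its own row and column. It is a simplified, assumed implementation (channel sizes, the reduction factor, the learnable residual weight gamma, and the handling of the center pixel are illustrative choices), not the authors' exact module.

```python
# Minimal PyTorch sketch of a criss-cross attention block (assumed simplification).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrissCrossAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, 1)
        self.key = nn.Conv2d(channels, channels // reduction, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q, k, v = self.query(x), self.key(x), self.value(x)

        # Row-wise energies: each pixel attends to the w positions in its row.
        q_r = q.permute(0, 2, 3, 1).reshape(b * h, w, -1)
        k_r = k.permute(0, 2, 3, 1).reshape(b * h, w, -1)
        v_r = v.permute(0, 2, 3, 1).reshape(b * h, w, c)
        e_r = torch.bmm(q_r, k_r.transpose(1, 2))            # (b*h, w, w)

        # Column-wise energies: each pixel attends to the h positions in its column.
        q_c = q.permute(0, 3, 2, 1).reshape(b * w, h, -1)
        k_c = k.permute(0, 3, 2, 1).reshape(b * w, h, -1)
        v_c = v.permute(0, 3, 2, 1).reshape(b * w, h, c)
        e_c = torch.bmm(q_c, k_c.transpose(1, 2))            # (b*w, h, h)

        # Joint softmax over the w row positions and h column positions.
        e_r = e_r.reshape(b, h, w, w)
        e_c = e_c.reshape(b, w, h, h).permute(0, 2, 1, 3)    # (b, h, w, h)
        attn = F.softmax(torch.cat([e_r, e_c], dim=-1), dim=-1)
        a_r, a_c = attn[..., :w], attn[..., w:]

        out_r = torch.bmm(a_r.reshape(b * h, w, w), v_r)
        out_c = torch.bmm(a_c.permute(0, 2, 1, 3).reshape(b * w, h, h), v_c)
        out = out_r.reshape(b, h, w, c) + out_c.reshape(b, w, h, c).permute(0, 2, 1, 3)
        return self.gamma * out.permute(0, 3, 1, 2) + x      # residual connection

# Usage on a dummy feature map:
cca = CrissCrossAttention(64)
y = cca(torch.randn(1, 64, 32, 32))   # same shape as the input
```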
To implement vertical retargeting for stereoscopic images, this paper proposes an occlusion-guided stereoscopic image retargeting method based on a pixel-fusion technique. Traditional seam-searching methods cannot construct valid horizontal seam pairs because of the occluded regions in stereoscopic images and thus fail at vertical retargeting. To solve this issue, we propose a novel horizontal seam coupling strategy guided by the occlusion regions that appear in both views of a stereoscopic image pair, so that horizontal seams can be laid across the occluded and occluding regions while their geometric consistency is maintained. Another important contribution of our method is the incorporation of occlusion masks into the energy optimization. The experimental results show that our method achieves promising performance in both visual experience and depth preservation.
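For context, the sketch below shows the basic operation behind seam-based retargeting: finding one minimum-energy horizontal seam by dynamic programming over an energy map. The occlusion-guided coupling of left/right seams described in the abstract is not reproduced; the gradient-magnitude energy and single-image setting are assumed placeholders.

```python
# Single-image horizontal seam search by dynamic programming (illustrative only).
import numpy as np

def find_horizontal_seam(energy):
    """Return, for every column x, the row index of the minimum-energy
    horizontal seam (one pixel per column, 8-connected between columns)."""
    h, w = energy.shape
    cost = energy.astype(np.float64).copy()
    back = np.zeros((h, w), dtype=np.int64)
    for x in range(1, w):
        for y in range(h):
            lo, hi = max(0, y - 1), min(h, y + 2)
            prev = int(np.argmin(cost[lo:hi, x - 1])) + lo
            back[y, x] = prev
            cost[y, x] += cost[prev, x - 1]
    seam = np.empty(w, dtype=np.int64)
    seam[-1] = int(np.argmin(cost[:, -1]))
    for x in range(w - 1, 0, -1):
        seam[x - 1] = back[seam[x], x]
    return seam

def gradient_energy(gray):
    gy, gx = np.gradient(gray.astype(np.float64))
    return np.abs(gx) + np.abs(gy)

# Usage: remove one row from a grayscale image by deleting the seam pixels.
img = np.random.rand(120, 160)
seam = find_horizontal_seam(gradient_energy(img))
shrunk = np.array([np.delete(img[:, x], seam[x]) for x in range(img.shape[1])]).T
```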
2D barcodes have become a widely used technology for information security over the past decade. We propose a method for strengthening their security mechanism by applying an error correction with random segmentation (ECRS) technique. In this paper, a private-key array is used to segment the original text before the segments, each encoded with an error correction code, are merged. Our proposed method has two advantages over the traditional Reed-Solomon error correction coding algorithm. On one hand, ECRS performs random segmentation with multiple redundant codes to protect the data; on the other hand, ECRS uses encryption keys to protect the code from being falsified. Our experiments show that the proposed method significantly improves security while sacrificing only a small amount of coding time.
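A rough sketch of the key-driven segmentation idea described in this abstract is given below: a private key seeds a pseudo-random split (and shuffle) of the plaintext before each segment is error-correction encoded. The segment count, the shuffle, and the framing are illustrative assumptions rather than the ECRS specification, and the Reed-Solomon step is left as a placeholder that a library such as reedsolo could supply.

```python
# Key-seeded random segmentation of a payload (illustrative sketch, not ECRS itself).
import random

def segment_with_key(text: bytes, key: int, n_segments: int = 4):
    """Split `text` into n_segments chunks at key-derived cut points and
    return the chunks in a key-derived order."""
    rng = random.Random(key)                       # private key seeds the PRNG
    cuts = sorted(rng.sample(range(1, len(text)), n_segments - 1))
    bounds = [0] + cuts + [len(text)]
    chunks = [text[a:b] for a, b in zip(bounds, bounds[1:])]
    order = list(range(n_segments))
    rng.shuffle(order)                             # key-dependent permutation
    return [chunks[i] for i in order], order

def desegment_with_key(chunks, order):
    """Invert the key-derived permutation and concatenate the chunks."""
    restored = [None] * len(chunks)
    for pos, idx in enumerate(order):
        restored[idx] = chunks[pos]
    return b"".join(restored)

text = b"payload to be embedded in the 2D barcode"
segments, order = segment_with_key(text, key=0xC0FFEE)
# ...each segment would be Reed-Solomon encoded here before merging...
assert desegment_with_key(segments, order) == text
```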
Dynamic texture describes an image sequence that continuously exhibits temporal patterns of pixel-intensity change. Two motion features, the average radius and the motion coherence index, are extracted to capture the different characteristics of fire and smoke: the average radius of the motion vectors describes how rapidly the intensity changes, while the motion coherence index assesses the coherence of that motion. We propose a periodic series analysis to characterize fire and smoke dynamic textures. Our experimental results demonstrate the periodic motion patterns of fire and smoke, and the periodicity index computed by the proposed method characterizes fire and smoke dynamic textures more efficiently than the traditional approach.
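Below is a small sketch of the two per-frame motion statistics named in this abstract, computed from a dense optical-flow field. The exact definitions used by the authors are not reproduced; the 3x3 neighborhood and the cosine-similarity form of the coherence index are assumptions for illustration.

```python
# Per-frame motion statistics from a dense optical-flow field (assumed definitions).
import numpy as np

def average_radius(flow):
    """Mean magnitude of the motion vectors in an (H, W, 2) flow field."""
    return float(np.linalg.norm(flow, axis=-1).mean())

def motion_coherence_index(flow, eps=1e-8):
    """Mean cosine similarity between each vector and the average of its
    3x3 neighborhood; close to 1 when nearby vectors point the same way."""
    h, w, _ = flow.shape
    padded = np.pad(flow, ((1, 1), (1, 1), (0, 0)), mode="edge")
    neigh = np.zeros_like(flow)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            neigh += padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    neigh /= 8.0
    num = (flow * neigh).sum(axis=-1)
    den = np.linalg.norm(flow, axis=-1) * np.linalg.norm(neigh, axis=-1) + eps
    return float((num / den).mean())

flow = np.random.randn(64, 64, 2)   # stand-in for the optical flow of one frame
print(average_radius(flow), motion_coherence_index(flow))
```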
Applied Intelligence, Mar 7, 2022
IGI Global eBooks, Jun 19, 2017
Lecture Notes in Computer Science, 2017
Dynamic texture has been described as an image sequence that exhibits continuous temporal patterns of pixel-intensity change. We consider the motion features of smoke and fire dynamic textures, which are important for a fire-calamity surveillance system to analyze a fire situation. We propose a method for understanding the motion of intensity change, with the objective not only of classification but also of characterizing the motion patterns of fire and smoke dynamic textures. The radius of a motion vector describes how fast the intensity changes, while the motion coherence index assesses the motion coherence between an observed vector and its neighborhood. We apply a motion coherence analysis to determine the motion coherence index of the motion vector field in each video frame. In practice, the covariance stationarity of both the average radius and the motion coherence index is used, together with a periodicity index, to investigate fire and smoke characteristics.
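To make the periodicity-index idea concrete, the sketch below turns a per-frame feature series (for example, the average radius or motion coherence index over time) into a single periodicity score. The paper's exact index is not reproduced; here the score is simply the height of the strongest non-zero autocorrelation peak of the demeaned series, one common way to quantify how periodic a signal is.

```python
# Autocorrelation-based periodicity score for a per-frame feature series (assumed form).
import numpy as np

def periodicity_index(series, min_lag=2):
    """Return (score, lag): the largest normalized autocorrelation at lag >=
    min_lag and the lag where it occurs. A score near 1 means strongly periodic."""
    x = np.asarray(series, dtype=np.float64)
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0..N-1
    acf /= acf[0] + 1e-12                                # normalize by lag-0 energy
    lag = int(np.argmax(acf[min_lag:]) + min_lag)
    return float(acf[lag]), lag

# Usage on a synthetic, roughly periodic series (stand-in for a per-frame feature).
t = np.arange(200)
series = np.sin(2 * np.pi * t / 25) + 0.2 * np.random.randn(200)
score, lag = periodicity_index(series)
print(f"periodicity score {score:.2f} at lag {lag} frames")
```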
Applied Intelligence, 2022
2020 8th International Conference on Digital Home (ICDH), 2020
Advances in Intelligent Systems and Computing, 2018
This article proposes significant motion features for fire video detection based on dynamic fire texture. We focus on motion characteristics rather than color schemes, because the colors of fire textures observed in video today can be arbitrary, arising not only from natural chemical phenomena but also from special-effects technologies in the video industry. We propose four data series of motion features obtained from motion vector fields or optical flow estimation: the series of average radius, the series of motion coherence index, the covariance-stationary series of average radius, and the covariance-stationary series of motion coherence index. The extracted data form the training and test sets for video classification with a support vector machine. The four proposed data series improve fire video detection, and our experimental results demonstrate that the detection accuracy for fire texture is high while data acquisition takes only a few seconds.
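The sketch below illustrates only the classification stage described in this abstract: summary statistics of the four per-frame series form one feature vector per video clip, which is fed to a support vector machine. The choice of summary statistics, the RBF kernel, and the synthetic data are illustrative assumptions, not the paper's experimental protocol.

```python
# Clip-level feature vectors from four per-frame series, classified with an SVM (sketch).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def clip_features(series_list):
    """Flatten a list of per-frame series into one fixed-length feature vector
    using simple summary statistics (mean, std, min, max of each series)."""
    stats = []
    for s in series_list:
        s = np.asarray(s, dtype=np.float64)
        stats.extend([s.mean(), s.std(), s.min(), s.max()])
    return np.array(stats)

# Synthetic stand-in data: 40 clips, each with four series of 100 frames.
rng = np.random.default_rng(0)
X = np.array([clip_features(rng.standard_normal((4, 100))) for _ in range(40)])
y = rng.integers(0, 2, size=40)          # 1 = fire texture, 0 = non-fire

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X[:30], y[:30])                  # simple train/test split
print("test accuracy:", clf.score(X[30:], y[30:]))
```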