Seungyong Lee - Academia.edu
Papers by Seungyong Lee
IEEE Computer Graphics and Applications, 1998
This paper presents Polymorph, a novel algorithm for morphing among multiple images. Traditional image morphing generates a sequence of images depicting an evolution from one image into another. We extend this approach to permit morphed images to be derived from more than two images at once. We formulate each input image to be a vertex of a simplex. An inbetween ...
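One way to read the simplex formulation is that an inbetween image is specified by barycentric (convex) weights over the input images. The sketch below models only that blending stage, not the warping that precedes it in the paper, and all names are illustrative rather than taken from the original work.

```python
import numpy as np

def barycentric_blend(images, weights):
    """Blend images with barycentric (convex) weights.

    `images` is a list of equally sized float arrays; `weights` is a
    sequence of non-negative numbers summing to one.  This only models
    the blending stage; the paper also warps each image before blending.
    """
    weights = np.asarray(weights, dtype=float)
    if weights.min() < 0 or not np.isclose(weights.sum(), 1.0):
        raise ValueError("weights must be non-negative and sum to 1")
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    return np.tensordot(weights, stack, axes=1)

# Example: three 2x2 "images" blended at the simplex centroid.
imgs = [np.full((2, 2), v) for v in (0.0, 0.5, 1.0)]
print(barycentric_blend(imgs, [1/3, 1/3, 1/3]))
```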
Proceedings. 2nd International Symposium on 3D Data Processing, Visualization and Transmission, 2004. 3DPVT 2004., 2004
In the context of surface reconstruction, two main problems arise. The first is finding an efficient way to average meshes with different connectivity, and the second is tuning the parameters for surface reconstruction to maximize the performance of the ensemble. We solve the first problem by voxelizing all the meshes on the same regular grid and taking a majority vote on each voxel. We tune the parameters experimentally, borrowing ideas from weak learning methods.
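A minimal sketch of the per-voxel majority vote described above, assuming the meshes have already been voxelized into boolean occupancy grids on a shared regular grid (the voxelization itself and the parameter tuning are not shown):

```python
import numpy as np

def majority_vote(occupancies):
    """Combine binary occupancy grids by per-voxel majority vote.

    `occupancies` is a list of boolean arrays of identical shape, one per
    reconstructed mesh voxelized on a shared regular grid.
    """
    stack = np.stack([np.asarray(o, dtype=bool) for o in occupancies])
    votes = stack.sum(axis=0)
    return votes * 2 > len(occupancies)   # strict majority

# Example with three tiny 2x2x1 grids.
grids = [np.array([[[1], [0]], [[1], [1]]], dtype=bool),
         np.array([[[1], [1]], [[0], [1]]], dtype=bool),
         np.array([[[0], [0]], [[1], [1]]], dtype=bool)]
print(majority_vote(grids).astype(int))
```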
International Conference on Shape Modeling and Applications 2005 (SMI' 05), 2005
Feature sensitive mesh segmentation is important for many computer graphics and geometric modeling applications. In this paper, we develop a mesh segmentation method which is capable of producing high-quality shape partitioning. It respects fine shape features and works ...
Lecture Notes in Computer Science, 2006
The majority of the existing techniques for surface reconstruction and the closely related problem of normal estimation are deterministic. Their main advantages are the speed and, given a reasonably good initial input, the high quality of the reconstructed surfaces. Nevertheless, their deterministic nature may hinder them from effectively handling incomplete data with noise and outliers. In our previous work [1], ...
Lecture Notes in Computer Science, 2005
Current intrusion detection systems generate false alerts and large volumes of alerts, which makes it difficult for system administrators to analyze an intrusion in real time. Previous work addresses this problem by analyzing the intrusion situation or alert correlation. However, existing situation analysis methods group alerts by the similarity of their measures, which makes it hard to respond appropriately to an elaborate attack. Moreover, their results are so abstract that the raw information before reduction must still be examined to understand the intrusion. In this paper, we reduce the number of alerts using aggregation and correlation and classify the alerts by IP address and attack type. With this method, our tool can detect a cunningly cloaked attack flow as well as the general attack situation, and both are visualized, so an administrator can easily understand the actual attack flow.
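A toy illustration of the aggregation step described above, assuming alerts arrive as records with hypothetical src_ip/dst_ip/type fields; real correlation across the stages of a multi-step attack would also need timestamps and additional rules:

```python
from collections import defaultdict

def aggregate_alerts(alerts):
    """Group raw IDS alerts by (src_ip, dst_ip, attack_type).

    `alerts` is an iterable of dicts with hypothetical keys 'src_ip',
    'dst_ip' and 'type'; the return value maps each key to the number of
    raw alerts it aggregates.
    """
    groups = defaultdict(int)
    for a in alerts:
        groups[(a['src_ip'], a['dst_ip'], a['type'])] += 1
    return dict(groups)

raw = [{'src_ip': '10.0.0.5', 'dst_ip': '10.0.0.9', 'type': 'portscan'},
       {'src_ip': '10.0.0.5', 'dst_ip': '10.0.0.9', 'type': 'portscan'},
       {'src_ip': '10.0.0.7', 'dst_ip': '10.0.0.9', 'type': 'bufferoverflow'}]
print(aggregate_alerts(raw))
```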
Lecture Notes in Computer Science, 2006
In this paper, we propose a new security architecture for adapting multiple access control models to operating systems. By adding a virtual access control system to the proposed security architecture, various access control models such as MAC, DAC, and RBAC can be applied to secure operating systems easily. The proposed architecture is also designed to overcome the deficiencies of access control in standard operating systems, makes a secure OS more usable by combining access control models, and applies them to the secure OS at runtime.
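One way such a virtual access control layer could be structured is as a common policy interface with pluggable MAC/DAC/RBAC-style modules. The sketch below is an assumption about the design, not the paper's implementation; the class names and the "every policy must allow" combination rule are illustrative choices.

```python
from abc import ABC, abstractmethod

class AccessPolicy(ABC):
    """One access control model plugged into a virtual access-control layer."""
    @abstractmethod
    def allows(self, subject, obj, action) -> bool: ...

class DacPolicy(AccessPolicy):
    def __init__(self, acl):                     # acl: {(subject, obj): {actions}}
        self.acl = acl
    def allows(self, subject, obj, action):
        return action in self.acl.get((subject, obj), set())

class MacPolicy(AccessPolicy):
    def __init__(self, levels):                  # levels: name -> clearance/label
        self.levels = levels
    def allows(self, subject, obj, action):
        # Simplified "no read up": subject clearance must dominate the object label.
        return self.levels[subject] >= self.levels[obj]

class VirtualAccessControl:
    """Combines several models; here a request must pass every active policy."""
    def __init__(self, policies):
        self.policies = policies
    def check(self, subject, obj, action):
        return all(p.allows(subject, obj, action) for p in self.policies)

vac = VirtualAccessControl([
    DacPolicy({('alice', '/etc/shadow'): {'read'}}),
    MacPolicy({'alice': 2, '/etc/shadow': 3}),
])
print(vac.check('alice', '/etc/shadow', 'read'))   # False: the MAC module denies
```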
Proceedings. Seventh Pacific Conference on Computer Graphics and Applications (Cat. No.PR00293), 1999
... uniform cubic B-spline functions. We propose a geometric interpretation of the local injectivity of a uniform cubic B-spline function, with which 2D and 3D cases can be handled in a similar way. Based on the geometric interpretation, we present sufficient conditions for the local injectivity that are represented in terms of control point displacements. These sufficient conditions are simple and easy to check and will be useful to guarantee the injectivity of mapping functions in application areas.
Graphical Models, 2000
Uniform cubic B-spline functions have been used for mapping functions in various areas such as image warping and morphing, 3D deformation, and volume morphing. The injectivity (one-to-one property) of a mapping function is crucial to obtaining desirable results in these areas. This paper considers the injectivity conditions of 2D and 3D uniform cubic B-spline functions. We propose a geometric interpretation of the injectivity of a uniform cubic B-spline function, with which 2D and 3D cases can be handled in a similar way. Based on our geometric interpretation, we present sufficient conditions for injectivity which are represented in terms of control point displacements. These sufficient conditions can be easily tested and will be useful in guaranteeing the injectivity of mapping functions in application areas.
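Conditions of this kind can be pictured as a bound on the control point displacements relative to the control-lattice spacing. The checker below is a hedged sketch: the admissible ratio derived in the paper is left as a hypothetical parameter bound_ratio rather than hard-coded, and the function only tests the stated inequality.

```python
import numpy as np

def displacement_bound_ok(displacements, spacing, bound_ratio):
    """Check a sufficient condition of the form max|dc| < bound_ratio * spacing.

    `displacements` is an array of control-point displacement vectors and
    `spacing` the control-lattice spacing.  The exact admissible ratio for
    2D and 3D comes from the paper and is not reproduced here.
    """
    d = np.asarray(displacements, dtype=float)
    max_norm = np.linalg.norm(d, axis=-1).max()
    return max_norm < bound_ratio * spacing

# Example: 2D displacements on a lattice with unit spacing.
disp = np.array([[0.1, 0.2], [-0.15, 0.05], [0.0, 0.3]])
print(displacement_bound_ok(disp, spacing=1.0, bound_ratio=0.4))
```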
Lecture Notes in Computer Science, 2006
Mesh chartification is an important tool for processing meshes in various applications. In this paper, we present a novel feature sensitive mesh chartification technique that can handle huge meshes with limited main memory. Our technique adapts the mesh chartification approach using Lloyd-Max quantization to out-of-core processing. While the previous approach updates chartification globally at each iteration of Lloyd-Max quantization, we propose a local update algorithm where only a part of the chartification is processed at a time. By repeating the local updates, we can obtain a chartification of a huge mesh that cannot fit into the main memory. We verify the accuracy of the serialized local updates by comparing the results with the global update approach. We demonstrate that our technique can successfully process huge meshes for applications, such as mesh compression, shape approximation, and remeshing.
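The local update idea can be illustrated on plain point data: one Lloyd/k-means-style pass is restricted to the block of data currently in memory, and the blocks are processed in sequence. This is only an analogy under simplifying assumptions; the paper works on mesh faces with a feature-sensitive cost, not Euclidean k-means.

```python
import numpy as np

def local_lloyd_pass(points, labels, centers, block):
    """One Lloyd/k-means-style update restricted to one in-core block.

    Only the points indexed by `block` are reassigned; the affected cluster
    centers are then recomputed from all points carrying those labels.
    """
    dists = np.linalg.norm(points[block, None, :] - centers[None, :, :], axis=2)
    labels[block] = dists.argmin(axis=1)
    for k in np.unique(labels[block]):
        centers[k] = points[labels == k].mean(axis=0)
    return labels, centers

rng = np.random.default_rng(0)
pts = rng.random((200, 2))
lab = rng.integers(0, 3, size=200)
ctr = rng.random((3, 2))
for blk in np.array_split(np.arange(200), 4):      # serialized local updates
    lab, ctr = local_lloyd_pass(pts, lab, ctr, blk)
print(np.linalg.norm(pts - ctr[lab], axis=1).mean())  # average distortion
```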
Proceedings of the 22nd annual conference on Computer graphics and interactive techniques - SIGGRAPH '95, 1995
This paper presents new solutions to the following three problems in image morphing: feature specification, warp generation, and transition control. To reduce the burden of feature specification, we first adopt a computer vision technique called snakes. We next propose the use of multilevel free-form deformations (MFFD) to achieve C²-continuous and one-to-one warps among feature point pairs.
2007 IEEE 11th International Conference on Computer Vision, 2007
We propose a method for removing non-uniform motion blur from multiple blurry images. Traditional methods focus on estimating a single motion blur kernel for the entire image. In contrast, we aim to restore images blurred by unknown, spatially varying motion blur kernels caused by different relative motions between the camera and the scene. Our algorithm simultaneously estimates multiple motions, motion blur kernels, and the associated image segments. We formulate the problem as a regularized energy function and solve it using an alternating optimization technique. Real-world experiments demonstrate the effectiveness of the proposed method.
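The alternating optimization can be sketched generically: each step minimizes the energy over one block of unknowns (e.g. the blur kernels) while the other blocks (e.g. segments and latent image) stay fixed. The toy below shows only that control flow on a made-up quadratic energy, not the paper's actual energy function.

```python
def alternating_minimization(update_a, update_b, a, b, iters=20):
    """Generic alternating (block-coordinate) minimization skeleton.

    Each closure updates one block of variables with the other held fixed,
    mirroring the alternating scheme described above on whatever toy
    energy the closures implement.
    """
    for _ in range(iters):
        a = update_a(a, b)
        b = update_b(a, b)
    return a, b

# Toy energy E(a, b) = (a - 3)^2 + (b - a)^2, minimized one block at a time.
upd_a = lambda a, b: (3 + b) / 2.0      # argmin over a for fixed b
upd_b = lambda a, b: a                  # argmin over b for fixed a
print(alternating_minimization(upd_a, upd_b, 0.0, 0.0))   # converges near (3, 3)
```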
IEEE International Conference on Sensor Networks, Ubiquitous, and Trustworthy Computing - Vol 2 - Workshops, 2006
This study deals with one of the most important elements of ubiquitous computing: a toolkit to acquire, express, and safely use context information. To this end, we introduce CAST (Context-Awareness Simulation Toolkit) and show how it works. CAST generates users and devices in a virtual home domain, designates their relationships, and creates virtual context information. The created context information is reused at the request of an application and put to use for context learning. In particular, we give consideration to security in the process of context creation and consumption: we apply SPKI/SDSI to test whether the created context information is valid and whether the application that requested the context has legitimate authority to do so. CAST not only captures virtual context information but also guarantees the safe sharing of the context information requested by applications.
ACM Transactions on Graphics, 2014
ABSTRACT
This paper describes a fast algorithm for scattered data interpolation and approximation. Multilevel B-splines are introduced to compute a C²-continuous surface through a set of irregularly spaced points. The algorithm makes use of a coarse-to-fine hierarchy of control lattices to generate a sequence of bicubic B-spline functions whose sum approaches the desired interpolation function. Large performance gains are realized by using B-spline refinement to reduce the sum of these functions into one equivalent B-spline function. Experimental results demonstrate that high-fidelity reconstruction is possible from a selected set of sparse and irregular samples.
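The coarse-to-fine idea is that each level approximates the residual left by the coarser levels, and the final function is the sum of all levels. In the sketch below a piecewise-constant cell average stands in for the bicubic B-spline approximation, so it illustrates the multilevel residual scheme rather than the actual B-spline algorithm; grid resolutions and level count are arbitrary choices.

```python
import numpy as np

def multilevel_fit(x, y, values, levels=4):
    """Coarse-to-fine residual fitting on scattered 2D data in [0,1)^2.

    Each level fits the residual left by the previous levels; the final
    approximation is the sum over levels.  A per-cell average replaces the
    bicubic B-spline fit used in the actual algorithm.
    """
    approx = np.zeros_like(values, dtype=float)
    for level in range(levels):
        n = 2 ** (level + 1)                         # lattice resolution doubles
        ix, iy = (x * n).astype(int), (y * n).astype(int)
        grid = np.zeros((n, n))
        count = np.zeros((n, n))
        np.add.at(grid, (ix, iy), values - approx)   # fit the residual
        np.add.at(count, (ix, iy), 1)
        grid = np.divide(grid, count, out=np.zeros_like(grid), where=count > 0)
        approx += grid[ix, iy]
    return approx

rng = np.random.default_rng(1)
xs, ys = rng.random(500), rng.random(500)
vals = np.sin(6 * xs) * np.cos(6 * ys)
fit = multilevel_fit(xs, ys, vals)
print(float(np.abs(fit - vals).mean()))              # residual shrinks with levels
```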
This paper presents an intelligent scissoring operator for meshes. Unlike common approaches that segment a mesh using clustering schemes, we introduce a method that concentrates on the contours for cutting. Our approach is based on the minima rule and part salience theory from cognitive theory. The minima rule states that human perception usually divides a surface into parts along the concave discontinuity of the tangent plane. The part salience theory provides factors that determine the salience of segments. Our method first extracts features to find candidate contours based on the minima rule. Subsequently, these open contours are prioritized to select the most salient one. Then, the selected open contour is automatically completed to form a loop around a specific part of the mesh. This loop is used as the initial position of a 3D geometric snake. Before applying a snake, we measure the part salience of the segments obtained by the completed contour. If conditions for the salience are not met, the contour is rejected. Otherwise, the snake moves by relaxation until it settles to define the final scissoring position. In this paper, we focus on a fully automatic scissoring scheme; nevertheless, we also report on semi-automatic user interfaces for intelligent scissoring which are easy to use and intuitive.
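The prioritization and rejection step can be sketched independently of the geometry: candidate contours are ranked by a salience score and the best one is kept only if it clears a threshold. The scoring function and threshold below are placeholders, and the snake relaxation that follows in the pipeline is not modeled.

```python
def select_contour(candidates, salience, threshold):
    """Pick the most salient candidate contour, or None if all are rejected.

    `candidates` is any list of contour objects and `salience` a scoring
    function standing in for the part-salience measure.
    """
    ranked = sorted(candidates, key=salience, reverse=True)
    best = ranked[0] if ranked else None
    return best if best is not None and salience(best) >= threshold else None

# Toy contours scored by a made-up salience value stored with each candidate.
contours = [{'id': 'neck', 'salience': 0.72}, {'id': 'ear', 'salience': 0.31}]
print(select_contour(contours, lambda c: c['salience'], threshold=0.5))
```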
12th Pacific Conference on Computer Graphics and Applications, 2004. PG 2004. Proceedings., 2004
... Snakes were presented as active contour models for semi-automatic detection of features in an image [13, 26]. ... is presented in [2]. We follow the approach proposed in [15] where the snake's updated position is determined by energy minimization on a 2D embedding plane, ...
Bioscience, biotechnology, and biochemistry, Jan 6, 2015
In this study, dual-cylindrical anaerobic digesters were designed and built at pilot-plant scale to improve anaerobic digestion efficiency. The removal efficiency of organics, biogas productivity, yield, and microbial communities were evaluated as performance parameters of the digester. During the stable operational period in continuous mode, the removal efficiencies of chemical oxygen demand and total solids were 74.1% and 65.1%, respectively. Biogas productivities of 63.9 m³/m³-FWW and 1.3 m³/kg-VS removed were measured. The hydrogenotrophic methanogen orders, Methanomicrobiales and Methanobacteriales, were predominant over the aceticlastic methanogen family, Methanosarcinaceae, probably due to the tolerance of the hydrogenotrophs to environmental perturbation in the field and their faster growth rate compared with that of the aceticlastics.
Planta medica, Jan 26, 2015
The adverse effects of anticancer drugs can prompt patients to end their treatment despite its efficacy. Cisplatin is a platinum-based molecule widely used to treat various forms of cancer, but frequent and long-term use of cisplatin is limited due to severe nephrotoxicity. In the present study, we investigated the protective effect and mechanism of tetrahydrocurcumin on cisplatin-induced kidney damage, oxidative stress, and inflammation to evaluate its possible use in renal damage. Cisplatin-induced LLC-PK1 renal cell damage was significantly reduced by tetrahydrocurcumin treatment. Additionally, the protective effect of tetrahydrocurcumin on cisplatin-induced oxidative renal damage was investigated in rats. Tetrahydrocurcumin was orally administered every day at a dose of 80 mg/kg body weight for ten days, and a single dose of cisplatin was administered intraperitoneally (7.5 mg/kg body weight) in 0.9% saline on day four. The creatinine clearance levels, which were markers of ren...