S. Simske - Academia.edu

Papers by S. Simske

DIAL 2004 Working Group Report on Acquisition Quality Control

Second International Conference on Document Image Analysis for Libraries (DIAL'06), 2006

This report summarizes the discussions of the Working Group on Acquisition Quality at the International Workshop on Document Image Analysis for Libraries, Palo Alto, CA, 23-24 January 2004. Acquisition of the image is one of the most time-intensive components of forming a digital library, and the quality of the acquisition affects all later stages of the digital library project. The current state of the art in acquisition is analyzed. Problems and suggested improvements for image acquisition and storage formats, along with the special problems associated with acquisition from microfilm, are then discussed. A list of general suggestions was developed, complemented by a wish list of topics the Working Group would like to see addressed in future acquisition discussions.

Fast Single Image Super-Resolution by Self-trained Filtering

Lecture Notes in Computer Science, 2012

Keywords: super-resolution; PSNR; filter; image restoration; image enhancement

This paper introduces an algorithm to super-resolve an image based on a self-training filter (STF). As in other methods, we first increase the resolution by interpolation. The interpolated image has higher resolution but is blurry because of the interpolation. Then, unlike other methods, we simply filter this interpolated image with the STF to recover some of the missing high-frequency detail. The input image is first downsized at the same ratio used in super-resolution, then upsized. The super-resolution filters are obtained by minimizing the mean square error between the upsized image and the input image at different levels of the image pyramid. The best STF is chosen as the one with minimal error in the training phase. We show that the STF is more effective than a generic unsharp mask filter. By combining interpolation and filtering, we achieve results competitive with support vector regression and kernel regression methods.
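The training step described above (downsize the input, upsize it back, then fit a filter that minimizes the mean square error against the original) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the 2x scale factor, 3x3 filter size, average-pooling downscaler, and nearest-neighbor upsizing are all assumptions:

```python
import numpy as np

def downscale2(img):
    """Downscale by 2x via 2x2 average pooling."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upscale2(img):
    """Upscale by 2x via nearest-neighbor replication."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def train_stf(img, k=3):
    """Fit a k x k linear filter that maps the blurry down-up image
    back toward the original, by least squares (minimum MSE)."""
    blurry = upscale2(downscale2(img))  # self-generated blurry proxy
    pad = k // 2
    bp = np.pad(blurry, pad, mode='edge')
    # each row of A is one k x k neighborhood of the blurry image
    cols = [bp[dy:dy + img.shape[0], dx:dx + img.shape[1]].ravel()
            for dy in range(k) for dx in range(k)]
    A = np.stack(cols, axis=1)
    w, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    return w.reshape(k, k)

def apply_filter(img, w):
    """Convolve img with the trained filter (edge-padded)."""
    k = w.shape[0]
    pad = k // 2
    ip = np.pad(img, pad, mode='edge')
    out = np.zeros(img.shape)
    for dy in range(k):
        for dx in range(k):
            out += w[dy, dx] * ip[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out
```

Super-resolving is then `apply_filter(upscale2(img), train_stf(img))`. A single pyramid level is shown here; the paper selects the best filter across several levels.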

Automating the analysis of voting systems

14th International Symposium on Software Reliability Engineering, 2003. ISSRE 2003., 2003

Voting is a well-known technique to combine the decisions of peer experts. It is used in fault-tolerant applications to mask errors from one or more experts using n-modular redundancy (NMR) and n-version programming. Voting strategies include majority, weighted voting, plurality, instant runoff voting, threshold voting, and the more general weighted k-out-of-n systems. Before selecting a voting schema for a particular application, we have to understand the various tradeoffs and parameters and how they impact the correctness, reliability, and confidence in the final decision made by a voting system. In this paper, we propose an enumerated simulation approach to automate the behavior analysis of voting schemas, with application to majority and plurality voting. We conduct synthetic studies using a simulator that we developed to analyze results from each expert, apply a voting mechanism, and analyze the voting results. The simulator builds a decision tree and uses a depth-first traversal algorithm to obtain the system reliability among other factors. We define and study the following behaviors: 1) the probability of reaching a consensus, "Pc"; 2) the reliability of the voting system, "R"; 3) the certainty index, "T"; and 4) the confidence index, "C". The parameters controlling the analysis are the number of participating experts, the number of possible output symbols that can be produced by an expert, the probability distribution of each expert's output, and the voting schema. The paper presents an enumerated simulation approach for analyzing voting systems that can be used when theoretical models are challenged by dependencies between experts or uncommon probability distributions of the experts' outputs.
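The enumeration idea can be illustrated with a small sketch: walk every combination of expert outputs (the leaves of the paper's decision tree), and accumulate the probability mass on which a consensus is reached (Pc) and the mass on which the voted answer is also correct (R). The two-schema setup below and the convention that symbol 0 is the correct answer are assumptions for illustration:

```python
import itertools
from collections import Counter

def enumerate_votes(expert_dists, schema="majority"):
    """Enumerate every combination of expert outputs.
    expert_dists: one dict {symbol: probability} per expert.
    Returns (p_consensus, reliability), where reliability is the
    probability that the vote decides on the correct symbol 0."""
    n = len(expert_dists)
    p_cons = 0.0
    rel = 0.0
    for combo in itertools.product(*[d.keys() for d in expert_dists]):
        # independent experts: the leaf probability is the product
        p = 1.0
        for d, s in zip(expert_dists, combo):
            p *= d[s]
        counts = Counter(combo)
        top, top_n = counts.most_common(1)[0]
        if schema == "majority":
            decided = top_n > n / 2                 # strict majority required
        else:                                        # plurality: unique maximum
            decided = sum(1 for c in counts.values() if c == top_n) == 1
        if decided:
            p_cons += p
            if top == 0:
                rel += p
    return p_cons, rel
```

With three experts that each output the correct symbol with probability 0.8, majority voting always reaches a consensus (Pc = 1) and is correct with probability 0.8³ + 3·0.8²·0.2 = 0.896.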

Blind image deconvolution using constrained variance maximization

Conference Record of the Thirty-Eighth Asilomar Conference on Signals, Systems and Computers, 2004., 2004

This paper describes an algorithm based on constrained variance maximization for the restoration of a blurred image. Blurring is, by definition, a smoothing process. Accordingly, the deblurring filter should act as a high-pass filter, which increases the variance. We therefore formulate a variance-maximization objective function for the deconvolution filter. Using Principal Component Analysis (PCA), we find the filter that maximizes the objective function. PCA is more than just a high-pass filter; by maximizing the variances, it also performs decorrelation, by which the original image is extracted from the mixture (the blurred image). Our approach was experimentally compared with the adaptive Lucy-Richardson maximum likelihood (ML) algorithm. Comparative results on both synthesized and real blurred images are included.

Blind Image Deconvolution Using Support Vector Regression

Proceedings. (ICASSP '05). IEEE International Conference on Acoustics, Speech, and Signal Processing, 2005., 2005

This paper describes an algorithm for the restoration of a noisy blurred image based on support vector regression. Blind image deconvolution is formulated as a machine learning problem: from the training set, the mapping between the noisy blurred image and the original image is learned by support vector regression (SVR). With the learned mapping, the degraded image can be restored. Our approach was experimentally compared with the adaptive Lucy-Richardson maximum likelihood (ML) algorithm. In terms of ISNR (Improvement in Signal-to-Noise Ratio), SVR outperforms ML in blind deblurring tests in which the blur type, point spread function (PSF) support, and noise energy are all unknown.

A novel combination of methods to assess sarcopenia and muscle performance in mice

Biomedical sciences instrumentation, 2005

A novel combination of assays was developed to assess sarcopenia and muscle performance. Three techniques were tested to assess muscle function both during and upon termination of treatments designed to induce sarcopenia. In unsuspended (US) and hindlimb-suspended (HS) mice, a Hindlimb Exertion Force Test (HEFT), cage wheel running, and in vitro muscle electrophysiology were performed. Twelve-week-old, mature male C57BL/6J mice were HS (n = 24) for two weeks or served as US controls (n = 26). Both groups were subjected to a HEFT on day 13; that is, the maximum force exerted against a beam force transducer (2 lb linear range, Transducer Techniques, Temecula, CA) following an applied tail-shock stimulus (0.15 mA, 300 msec) was measured. This test primarily evaluated the hindlimb muscles used for an escape response (i.e., hamstrings, quadriceps, and calf muscles). Mice (n = 10-11/group) were given voluntary access to running wheels for 7 days post-treatment to evaluate muscle endurance. On day 13, HS ...

Training Set Compression by Incremental Clustering

Journal of Pattern Recognition Research, 2011

Example Based Single-Frame Image Super-Resolution by Support Vector Regression

Journal of Pattern Recognition Research, 2010

As with many other inverse problems, single-frame image super-resolution is an ill-posed problem. The problem has been approached in the context of machine learning. However, the method proposed in this paper differs from other learning-based methods in how the input and output are formulated, as well as in how the learning is done. The assumption behind example-based methods is the local similarity across seemingly different images.

A study of the interaction of paper substrates on printed forensic imaging

Proceedings of the 11th ACM symposium on Document engineering - DocEng '11, 2011

2D Barcode Sub-Coding Density Limits

Paper type classification employing a 3D DrCID

Paper substrate classification based on 3D surface micro-geometry

Blur identification based on kurtosis minimization

IEEE International Conference on Image Processing 2005, 2005

In this paper, we describe an algorithm for identifying a parametrically described blur based on kurtosis minimization. Using different choices for the parameters of the blur, the noisy blurred image is restored using a Wiener filter. We use the kurtosis as a measure of the quality of the restored image: from the set of candidate deblurred images, the one with the minimum kurtosis is selected. The proposed technique is tested in a simulated experiment on a variety of blurs, including atmospheric turbulence blurs, Gaussian blurs, and out-of-focus blurs. The proposed approach is also tested on real blurred images. Moreover, we test the performance when a wrong blur model is given. Our experiments show that the kurtosis minimization measurements match well with methods that maximize PSNR.
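A minimal numpy sketch of the selection loop described above: restore with a Wiener filter under each candidate blur parameter and keep the restoration with the lowest kurtosis. The Gaussian blur family, the fixed noise-to-signal ratio, and circular (FFT) convolution are simplifying assumptions, not the paper's exact setup:

```python
import numpy as np

def gaussian_psf(sigma, shape):
    """Gaussian PSF centered at (0, 0) with wraparound (for FFT convolution)."""
    ys = np.fft.fftfreq(shape[0]) * shape[0]
    xs = np.fft.fftfreq(shape[1]) * shape[1]
    yy, xx = np.meshgrid(ys, xs, indexing='ij')
    psf = np.exp(-(yy ** 2 + xx ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def wiener_restore(blurred, psf, nsr=1e-3):
    """Frequency-domain Wiener filter with a fixed noise-to-signal ratio."""
    H = np.fft.fft2(psf)
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F))

def kurtosis(x):
    x = x.ravel() - x.mean()
    return np.mean(x ** 4) / (np.mean(x ** 2) ** 2 + 1e-12)

def identify_blur(blurred, sigmas):
    """Restore under each candidate sigma; keep the minimum-kurtosis result."""
    restored = [wiener_restore(blurred, gaussian_psf(s, blurred.shape))
                for s in sigmas]
    ks = [kurtosis(r) for r in restored]
    i = int(np.argmin(ks))
    return sigmas[i], restored[i]
```

The same loop generalizes to other parametric families (e.g. out-of-focus disks) by swapping the PSF constructor.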

Mouse Tail-Suspension as a Model of Microgravity: Effects on Skeletal, Neural and Muscular Systems

SAE Technical Paper Series, 1989

Image Classification to Improve Printing Quality of Mixed-Type Documents

2009 10th International Conference on Document Analysis and Recognition, 2009

Functional image classification is the assignment of different image types to separate classes to optimize their rendering for reading or another specific end task, and is an important area of research in the publishing and multimedia industries. This paper presents recent research on optimizing the simultaneous classification of documents, photos, and logos. Each of these is handled during printing with a class-specific pipeline of image transformation algorithms, and misclassification results in pejorative imaging effects. This paper reports on replacing an existing classifier with a Weka-based classifier that simultaneously improves accuracy (from 85.3% to 90.8%) and performance (from 1458 msec to 418 msec/image). Generic subsampling of the images further improved the performance (to 199 msec/image) with only a modest impact on accuracy (to 90.4%). A staggered subsampling approach, finally, improved both accuracy (to 96.4%) and performance (to 147 msec/image) for the Weka-based classifier. This approach did not appreciably benefit the HP classifier (85.4% accuracy, 497 msec/image). These data indicate that staggered subsampling using the optimized Weka classifier substantially improves classification accuracy and performance without introducing additional "egregious" misclassifications (assigning photos or logos to the "document" class).

Image Denoising Through Support Vector Regression

2007 IEEE International Conference on Image Processing, 2007

In this paper, an example-based image denoising algorithm is introduced. Image denoising is formulated as a regression problem, which is then solved using support vector regression (SVR). Using noisy images as training sets, SVR models are developed. The models can then be used to denoise different images corrupted by random noise at different levels. Initial experiments show that SVR can achieve a higher peak signal-to-noise ratio (PSNR) than the multiple-wavelet-domain Besov ball projection method on document images.
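The regression formulation can be sketched with scikit-learn's SVR standing in for the authors' exact setup: each 3x3 neighborhood of a noisy image is a feature vector, and the clean center pixel is the regression target. The patch size, RBF kernel, and hyperparameters are assumptions for illustration:

```python
import numpy as np
from sklearn.svm import SVR

def patches(img, k=3):
    """Stack every k x k neighborhood of img as one row vector."""
    pad = k // 2
    ip = np.pad(img, pad, mode='edge')
    cols = [ip[dy:dy + img.shape[0], dx:dx + img.shape[1]].ravel()
            for dy in range(k) for dx in range(k)]
    return np.stack(cols, axis=1)

def train_denoiser(noisy, clean):
    """Fit one SVR mapping a noisy neighborhood to the clean center pixel."""
    model = SVR(kernel='rbf', C=1.0, epsilon=0.01)
    model.fit(patches(noisy), clean.ravel())
    return model

def denoise(model, noisy):
    return model.predict(patches(noisy)).reshape(noisy.shape)
```

Once trained on one noisy/clean pair, the same model is applied to other noisy images, which is the reusability the abstract describes.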

Design of high capacity 3D print codes aiming for robustness to the PS channel and external distortions

2009 16th IEEE International Conference on Image Processing (ICIP), 2009

Adding high-density information to printed materials enables and improves interesting hardcopy document applications involving security, authentication, physical-electronic round-tripping, item-level tagging, and consumer/product interaction. This investigation of robust, high-capacity print codes aims to maximize the information payload in a given printed page area, subject to robustness to channel errors, including distortions introduced by the printing and scanning processes ...

Design of high capacity 3D print codes with visual cues aiming for robustness to the PS channel and external distortions

2009 IEEE International Workshop on Multimedia Signal Processing, 2009

The process of adding high-density information onto printed material enables and improves interesting hardcopy document applications, such as security, authentication, physical-electronic round-tripping, item-level tagging, and consumer/product interaction. This investigation of robust, high-capacity print codes aims to maximize the information payload in a given printed page area, subject to robustness to distortions originating from printing and scanning ...

Effects of combined insulin-like growth factor 1 and macrophage colony-stimulating factor on the skeletal properties of mice

In vivo (Athens, Greece)

Insulin-like growth factor-1 (IGF-1) and macrophage colony-stimulating factor (MCSF) are critical to skeletal homeostasis. We investigated the effects of combined IGF-1 plus MCSF on mice. C57BL/6J mice, aged 7 weeks, were assigned to baseline, vehicle, IGF-1, MCSF, or combined IGF-1 plus MCSF groups (1 mg/kg/day each, n = 12-13/group, 28-day duration). IGF-1 or MCSF alone had no effect on bone formation rate; however, IGF-1 plus MCSF produced a 169% increase in periosteal bone formation rate. Combined therapy increased femoral mechanical properties (+25% elastic force), while IGF-1 and MCSF alone did not. Combined therapy affected trabecular bone volume fraction (+40%), number (+13%), and spacing (-13%). MCSF produced similar trabecular changes, while IGF-1 had no effect. Combined therapy and MCSF alone increased bone mineral content. We have demonstrated the superior effects of combined IGF-1 and MCSF; together, these agents may promote bone modeling to a greater extent than either therapy alone.

Selected contribution: skeletal muscle capillarity and enzyme activity in rats selectively bred for running endurance

Journal of applied physiology (Bethesda, Md. : 1985), 2003

To attempt to explain the difference in intrinsic (untrained) endurance running capacity in rats selectively bred over seven generations for either low (LCR) or high (HCR) running capacity, the relationships among skeletal muscle capillarity, fiber composition, enzyme activity, and O(2) transport were studied. Ten females from each group [body wt: 228 g (HCR), 247 g (LCR); P = 0.03] were studied at 25 wk of age. Peak normoxic maximum O(2) consumption and muscle O(2) conductance were previously reported to be 12% and 33% higher, respectively, in HCR, despite similar ventilation, arterial O(2) saturation, and a cardiac output that was <10% greater in HCR compared with LCR. Total capillary and fiber numbers in the medial gastrocnemius were similar in HCR and LCR, but, because fiber area was 37% lower in HCR, the number of capillaries per unit area (or mass) of muscle was higher in HCR by 32% (P < 0.001). A positive correlation (r = 0.92) was seen between capillary density and muscle ...
