Gerard Meyer | Johns Hopkins University
Papers by Gerard Meyer
SIAM J. Control Optim., 1977
J Math Anal Appl, 1974
… presented. The concept of wastefulness of the algorithm on a class V of constrained optimization problems is introduced.
J Math Anal Appl, 1975
In this paper a canonical structure for multistep finite memory algorithms is presented. The concept of characteristic set is introduced and some of the finite and limit properties of the structure are proved. Then it is shown that the use of the theory developed in the paper's first part greatly simplifies the analysis of complicated iterative procedures.
SIAM J. Control Optim., 1977
SIAM Journal on Control, Jul 18, 2006
1981 20th IEEE Conference on Decision and Control including the Symposium on Adaptive Processes, 1981
1977 IEEE Conference on Decision and Control including the 16th Symposium on Adaptive Processes and A Special Symposium on Fuzzy Set Theory and Applications, 1977
1977 IEEE Conference on Decision and Control including the 16th Symposium on Adaptive Processes and A Special Symposium on Fuzzy Set Theory and Applications, 1977
J Optimiz Theor Appl, 1979
SIAM Journal on Control and Optimization, Feb 17, 2012
Mathematical Programming, 1988
Conventional wisdom says that incorporating more training data is the surest way to reduce the error rate of a speech recognition system. This, in turn, guarantees that speech recognition systems are expensive to train, because of the high cost of annotating training data. In this paper, we propose an iterative training algorithm that seeks to improve the error rate of a speech recognizer without incurring additional transcription cost, by selecting a subset of the already available transcribed training data. We apply the proposed algorithm to an alphadigit recognition problem and reduce the error rate from 10.3% to 9.4% on a particular test set.
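The abstract does not specify the selection criterion, so the following is only a toy illustration of the general idea it describes: iteratively choosing a subset of already-transcribed training data so as to reduce a development-set error. All names, the greedy strategy, and the error proxy below are invented for the sketch, not taken from the paper.

```python
def dev_error(subset, noisy_ids):
    """Toy proxy for a recognizer's dev-set error: the fraction of
    selected training examples whose transcripts are (pretend) wrong."""
    if not subset:
        return 1.0
    bad = sum(1 for i in subset if i in noisy_ids)
    return bad / len(subset)

def iterative_selection(train_ids, noisy_ids, rounds=5):
    """Each round, drop the one training example whose removal most
    reduces the (toy) dev error; stop when no removal helps."""
    selected = list(train_ids)
    for _ in range(rounds):
        base = dev_error(selected, noisy_ids)
        best_id, best_err = None, base
        for i in selected:
            trial = [j for j in selected if j != i]
            err = dev_error(trial, noisy_ids)
            if err < best_err:
                best_id, best_err = i, err
        if best_id is None:
            break  # no single removal improves the error
        selected.remove(best_id)
    return selected

train_ids = list(range(10))
noisy = {2, 7}  # pretend these two transcripts are wrong
kept = iterative_selection(train_ids, noisy)
print(len(kept), dev_error(kept, noisy))  # the two noisy examples are dropped
```

In a real system the proxy would be replaced by retraining the recognizer on the candidate subset and measuring word error on held-out data, which is far more expensive per step than this sketch suggests.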