Analysis of first-order rate constant spectra with regularized least-squares and expectation maximization. 1. Theory and numerical characterization
1993, Analytical Chemistry
Analysis of parallel, first-order rate processes by deconvolution of single-exponential kernels from experimental data is performed with regularized least squares and the method of expectation maximization (EM). These methods may be used in general for the unbiased numerical analysis of linear Fredholm integrals of the first kind with optimal results. Regularized least squares is performed using a smoothing regularizer with an adaptive choice for the regularization parameter (CONTIN) and by ridge regression using the generalized cross-validation choice for the regularization parameter (GCV). The resolution and performance of the methods are studied as a function of data type (continuous or discrete distributions of single exponentials), data sampling, and superimposed noise. All three methods are able to yield high-resolution estimates and are statistically valid. However, subtle differences dependent on the data exist that suggest that the most probabilistic estimate, or maximum likelihood estimate, is dependent on the ultimate validity of the specific model used to describe the data. Therefore, qualitative comparison of the three methods in terms of maximum entropy is considered for "worst case" limiting data. For discrete distributions comprising data of high signal-to-noise ratio (SNR), the order EM > CONTIN > GCV is observed for the entropy of the solutions. For continuous distributions of high SNR, the order EM > GCV > CONTIN is observed. For either type of underlying distribution and low SNR, the three methods converge to comparable performance while breaking down in terms of the quality and accuracy of the estimations. The EM algorithm is suggested as the maximum likelihood (or maximum entropy) method when a high response to model error is not desired. The GCV algorithm yields a maximum likelihood estimate highly dependent on the model validity.
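The ridge-regression/GCV branch described in the abstract can be sketched numerically. The following is a minimal illustration, not the paper's CONTIN or GCV implementation: the rate-constant grid, noise level, and regularization-parameter search grid are all invented for the example. Ridge regression solves x = (AᵀA + λI)⁻¹Aᵀy for each candidate λ, and GCV picks the λ minimizing n‖y − Ax‖² / (n − tr H)², where H is the influence (hat) matrix.

```python
import numpy as np

def ridge_gcv(A, y, lambdas):
    """Ridge regression with the regularization parameter chosen by
    generalized cross-validation (GCV)."""
    n = len(y)
    best = (np.inf, None, None)
    AtA = A.T @ A
    for lam in lambdas:
        reg = AtA + lam * np.eye(A.shape[1])
        x = np.linalg.solve(reg, A.T @ y)        # ridge solution
        H = A @ np.linalg.solve(reg, A.T)        # hat matrix for this lambda
        resid = y - A @ x
        gcv = n * (resid @ resid) / (n - np.trace(H)) ** 2
        if gcv < best[0]:
            best = (gcv, lam, x)
    return best

# Two-component discrete rate spectrum: amplitudes 1.0 and 0.5
# at rates 2 and 8 s^-1 (illustrative values)
t = np.linspace(0.01, 3.0, 100)
rates = np.logspace(-1, 2, 60)           # candidate rate-constant grid
A = np.exp(-np.outer(t, rates))          # single-exponential kernel matrix
y_true = 1.0 * np.exp(-2.0 * t) + 0.5 * np.exp(-8.0 * t)
rng = np.random.default_rng(0)
y = y_true + rng.normal(0, 1e-3, t.size)

gcv, lam, x = ridge_gcv(A, y, np.logspace(-8, 0, 40))
print("GCV-selected lambda:", lam)
```

The recovered vector x is an (unconstrained) estimate of the rate spectrum on the chosen grid; without a nonnegativity or smoothing constraint it can oscillate, which is exactly the model-sensitivity issue the abstract discusses.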
Related papers
Journal of Geophysical Research, 2006
An ERA (entropy regularization algorithm) has been developed to compute the wave-energy density from electromagnetic field measurements. It is based on the WDF (wave distribution function) concept. To assess its suitability and efficiency, the algorithm is applied to experimental data that has already been analyzed using other inversion techniques. The FREJA satellite data that is used consists of six spectral matrices corresponding to six time-frequency points of an ELF hiss-event spectrogram. The WDF analysis is performed on these six points and the results are compared with those obtained previously. A statistical stability analysis confirms the stability of the solutions. The WDF computation is fast and without any pre-specified parameters. The regularization parameter has been chosen in accordance with Morozov's discrepancy principle. The Generalized Cross Validation and L-curve criteria are then tentatively used to provide a fully data-driven method. However, these criteria fail to determine a suitable value of the regularization parameter. Although the entropy regularization leads to solutions that agree fairly well with those already published, some differences are observed, and these are discussed in detail. The main advantage of the ERA is to return the WDF that exhibits the largest entropy and to avoid the use of a priori models, which sometimes seem to be more accurate but without any justification.
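Morozov's discrepancy principle mentioned above can be illustrated with a minimal Tikhonov example: choose the largest λ whose residual norm does not exceed the noise level δ, which the principle assumes is known. The kernel, test function, and noise level below are invented for the sketch.

```python
import numpy as np

def tikhonov(A, y, lam):
    """Tikhonov (ridge) solution x = argmin ||Ax - y||^2 + lam ||x||^2."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

def morozov_lambda(A, y, delta, lambdas):
    """Largest lambda whose residual norm stays within the noise level
    delta (Morozov's discrepancy principle)."""
    for lam in sorted(lambdas, reverse=True):
        x = tikhonov(A, y, lam)
        if np.linalg.norm(A @ x - y) <= delta:
            return lam, x
    lam = min(lambdas)
    return lam, tikhonov(A, y, lam)

# Ill-conditioned smoothing-kernel test problem with known noise level
s = np.linspace(0, 1, 50)
A = np.exp(-np.abs(np.subtract.outer(s, s)) / 0.05)
x_true = np.exp(-((s - 0.5) ** 2) / 0.01)
rng = np.random.default_rng(1)
noise = rng.normal(0, 1e-3, 50)
y = A @ x_true + noise
delta = np.linalg.norm(noise)            # assumed known, as Morozov requires

lam, x = morozov_lambda(A, y, delta, np.logspace(-10, 2, 60))
print("discrepancy-selected lambda:", lam)
```

The abstract's observation that GCV and the L-curve can fail where the discrepancy principle works reflects this asymmetry: Morozov uses the known δ directly, while the data-driven criteria must infer it.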
Statistical Error of Suboptimum Spectrum Analysis: Review of Estimates
IEEE Transactions on Instrumentation and Measurement, 1975
Statistical error estimates of spectrum analysis are given for any number of degrees of freedom and any relative passband width for continuous, line, and mixed spectra. The continuous component is assumed to be normal but not necessarily white.
On the performance of Fisher Information Measure and Shannon entropy estimators
Physica A
Highlights:
• Two estimation methods (discretization and kernel-based approach) are applied to FIM and SE.
• FIM (SE) estimated by the discrete approach is nearly constant with σ.
• FIM (SE) estimated by the discrete approach decreases (increases) with the bin number.
• FIM (SE) estimated by the kernel-based approach is close to the theoretical value for any σ.
Abstract: The performance of two estimators of the Fisher Information Measure (FIM) and Shannon entropy (SE) is investigated: one based on the discretization of the FIM and SE formulae (discrete-based approach) and the other on kernel-based estimation of the probability density function (pdf) (kernel-based approach). The two approaches are employed to estimate the FIM and SE of Gaussian processes (with different values of σ and size N), whose theoretical FIM and SE depend on the standard deviation σ. The FIM (SE) estimated using the discrete-based approach is approximately constant with σ but decreases (increases) with the bin number L; in particular, the discrete-based approach furnishes a rather correct estimation of FIM (SE) for L ∝ σ. Furthermore, for small values of σ, the larger the size N of the series, the smaller the mean relative error; for large values of σ, the larger the size N, the larger the mean relative error. The FIM (SE) estimated using the kernel-based approach is very close to the theoretical value for any σ, and its mean relative error decreases as the length of the series increases. Comparing the two approaches, the kernel-based estimates of FIM and SE are much closer to the theoretical values for any σ and any N and are to be preferred to the discrete-based estimates.
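The two estimation routes compared above can be sketched for a Gaussian sample, whose differential entropy is known in closed form: h = ½ ln(2πeσ²). The bin count, Silverman bandwidth rule, and sample size below are illustrative choices, not the paper's settings.

```python
import numpy as np

sigma, N = 2.0, 2000
rng = np.random.default_rng(3)
x = rng.normal(0, sigma, N)
h_true = 0.5 * np.log(2 * np.pi * np.e * sigma ** 2)

# Discrete approach: Riemann sum of -f ln f over L histogram bins
L = 40
dens, edges = np.histogram(x, bins=L, density=True)
width = edges[1] - edges[0]
p = dens[dens > 0]
h_hist = -np.sum(p * np.log(p)) * width

# Kernel approach: resubstitution estimate with a Gaussian kernel
# and Silverman's rule-of-thumb bandwidth
bw = 1.06 * x.std() * N ** -0.2
f = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bw) ** 2).mean(axis=1) / (
    bw * np.sqrt(2 * np.pi))
h_kde = -np.mean(np.log(f))

print(f"theory {h_true:.3f}  histogram {h_hist:.3f}  kernel {h_kde:.3f}")
```

The histogram estimate shifts with L (the bin-number dependence noted in the highlights), while the kernel estimate tracks the theoretical value more closely.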
Journal of Chemometrics, 1991
A simple algorithm for deconvolution and regression of shot-noise-limited data is illustrated in this paper. The algorithm is easily adapted to almost any model and converges to the global optimum. Multiple-component spectrum regression, spectrum deconvolution, and smoothing examples are used to illustrate the algorithm. The algorithm and a method for determining uncertainties in the parameters based on the Fisher information matrix are given and illustrated with three examples. An experimental example of spectrograph grating order compensation of a diode array solar spectroradiometer is given to illustrate the use of this technique in environmental analysis. The major advantages of the EM algorithm are found to be its stability, simplicity, conservation of data magnitude and guaranteed convergence.
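For shot-noise-limited (Poisson) data, the EM iteration for linear deconvolution reduces to the multiplicative Richardson-Lucy update. The sketch below (kernel, counts, and iteration budget are invented for illustration) demonstrates the conservation-of-magnitude and nonnegativity properties the abstract highlights.

```python
import numpy as np

def em_deconvolve(A, y, n_iter=500):
    """EM (Richardson-Lucy) iteration for Poisson data y ~ Poisson(A x):
    a multiplicative update that preserves nonnegativity and, for a
    column-normalized kernel, the total counts."""
    x = np.full(A.shape[1], y.sum() / A.sum())   # flat positive start
    col = A.sum(axis=0)
    for _ in range(n_iter):
        x *= (A.T @ (y / (A @ x))) / col
    return x

# Blur a two-peak spectrum with a Gaussian kernel and add shot noise
n = 64
i = np.arange(n)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0) ** 2)
A /= A.sum(axis=0)                               # column-normalized kernel
x_true = np.zeros(n)
x_true[20], x_true[42] = 800.0, 400.0
rng = np.random.default_rng(4)
y = rng.poisson(A @ x_true).astype(float)

x_hat = em_deconvolve(A, y)
print("total counts in :", y.sum())
print("total counts out:", (A @ x_hat).sum())
```

The estimate never goes negative and the refolded total matches the data total, which is what "conservation of data magnitude" refers to; stopping the iteration early acts as implicit regularization.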
Signal Processing, 1981
The Maximum Entropy Spectral Analysis technique is applied to signals with spectral peaks of finite width. The Burg and Least Squares algorithms are used, and in each case the performance is compared to that of a conventional Fourier method.
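Burg's recursion and the resulting all-pole (maximum-entropy) spectrum can be sketched briefly; the test signal and model order below are illustrative, not taken from the paper.

```python
import numpy as np

def burg(x, order):
    """Burg's method: AR coefficients a (a[0] = 1) and residual power E,
    found by minimizing summed forward + backward prediction error."""
    f, b = x.astype(float).copy(), x.astype(float).copy()
    a = np.array([1.0])
    E = np.mean(x ** 2)
    for _ in range(order):
        ff, bb = f[1:], b[:-1]
        k = -2.0 * np.dot(ff, bb) / (np.dot(ff, ff) + np.dot(bb, bb))
        f, b = ff + k * bb, bb + k * ff
        # Levinson update of the AR polynomial
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
        E *= 1.0 - k ** 2
    return a, E

def mem_psd(a, E, freqs):
    """Maximum-entropy (all-pole) spectrum of the fitted AR model."""
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(len(a))))
    return E / np.abs(z @ a) ** 2

# Short record of a sinusoid at normalized frequency 0.2 in white noise
rng = np.random.default_rng(5)
n = np.arange(64)
x = np.sin(2 * np.pi * 0.2 * n) + 0.1 * rng.normal(size=64)
a, E = burg(x, order=8)
freqs = np.linspace(0.0, 0.5, 501)
psd = mem_psd(a, E, freqs)
print("MEM peak at f =", freqs[psd.argmax()])
```

On short records the all-pole model resolves narrow peaks more sharply than a 64-point periodogram, which is the comparison the abstract draws; for peaks of finite width the model order becomes the critical tuning choice.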