Elvira Mayordomo - Academia.edu
Papers by Elvira Mayordomo
Lecture Notes in Computer Science, 2004
We study constructive and resource-bounded scaled dimension as an information content measure and obtain several results that parallel previous work on unscaled dimension. Scaled dimension for finite strings is developed and shown to be closely related to Kolmogorov complexity. The scaled dimension of an infinite sequence is characterized by the scaled dimensions of its prefixes. We obtain an exact Kolmogorov complexity characterization of scaled dimension. Juedes and Lutz (1996) established a small span theorem for P/poly-Turing reductions which asserts that for any problem A in ESPACE, either the class of problems reducible to A (the lower span) or the class of problems to which A is reducible (the upper span) has measure 0 in ESPACE. We apply our Kolmogorov complexity characterization to improve this to (−3)rd-order scaled dimension 0 in ESPACE. As a consequence we obtain a new upper bound on the Kolmogorov complexity of Turing-hard sets for ESPACE.
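For orientation only, the unscaled analogue of such a Kolmogorov complexity characterization (constructive dimension) can be stated as below; the scaled versions studied in the paper adjust the normalization according to the scale, so this is a sketch of the background, not the paper's exact statement.

```latex
\[
  \dim(S) \;=\; \liminf_{n\to\infty} \frac{K(S \upharpoonright n)}{n}
\]
% K: prefix-free Kolmogorov complexity; S \upharpoonright n: the length-n prefix of S.
```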
Jornada de Jóvenes Investigadores del I3A
The aim of this work is to study the effect of normalization on the Spasov2019 classification system [2] for the problem of stable versus progressive mild cognitive impairment (sMCI vs pMCI), when this normalization comes from a deep-learning algorithm. VoxelMorph [8] was chosen because it is one of the most widely used benchmark algorithms since the introduction of deep learning in deformable medical image registration. We compare traditional diffeomorphic registration methods with VoxelMorph on this problem.
Theory of Computing Systems
We prove three results on the dimension structure of complexity classes. 1. The Point-to-Set Principle, which has recently been used to prove several new theorems in fractal geometry, has resource-bounded instances. These instances characterize the resource-bounded dimension of a set X of languages in terms of the relativized resource-bounded dimensions of the individual elements of X, provided that the former resource bound is large enough to parameterize the latter. Thus, for example, the dimension of a class X of languages in EXP is characterized in terms of the relativized p-dimensions of the individual elements of X. 2. Every language that is ≤P_m-reducible to a p-selective set has p-dimension 0, and this fact holds relative to arbitrary oracles. Combined with a resource-bounded instance of the Point-to-Set Principle, this implies that if NP has positive dimension in EXP, then no quasipolynomial-time selective language is ≤P_m-hard for NP. 3. If the set of all disjoint pairs of NP languages has dimension 1 in the set of all disjoint pairs of EXP languages, then NP has positive dimension in EXP.
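For reference, the classical (unrelativized) Point-to-Set Principle of J. Lutz and N. Lutz, of which the paper develops resource-bounded instances, reads as follows in the standard Euclidean setting; the display is included only as orientation and is not the paper's resource-bounded formulation.

```latex
\[
  \dim_H(E) \;=\; \min_{A \subseteq \mathbb{N}} \, \sup_{x \in E} \dim^A(x)
  \qquad \text{for every } E \subseteq \mathbb{R}^n,
\]
% \dim^A(x): constructive (algorithmic) dimension of the point x relative to the
% oracle A; \dim_H: Hausdorff dimension.
```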
Bulletin of Symbolic Logic
Theory and Applications of Computability, 2021
The development of algorithmic fractal dimensions in this century has had many fruitful interactions with geometric measure theory, especially fractal geometry in Euclidean spaces. We survey these developments, with emphasis on connections with computable functions on the reals, recent uses of algorithmic dimensions in proving new theorems in classical (non-algorithmic) fractal geometry, and directions for future research.
Research on the notions of information and randomness has drawn on methods and ideas from computability theory and computational complexity, as well as core mathematical subjects like measure theory and information theory. The Dagstuhl seminar 12021 “Computability, Complexity and Randomness” aimed to bring together people and ideas from these areas to share new results and discuss open problems. This report collects the material presented during the course of the seminar.
The zeta-dimension of a set A of positive integers is Dim_ζ(A) = inf{s | ζ_A(s) < ∞}, where ζ_A(s) = Σ_{n∈A} n^{−s}. Zeta-dimension serves as a fractal dimension on Z+ that extends naturally and usefully to discrete lattices such as Z^d, where d is a positive integer. This paper...
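As a quick illustration of the definition (not taken from the paper), the zeta-dimension is the abscissa of convergence of ζ_A, which can be estimated numerically through the counting ratio log |A ∩ {1,…,n}| / log n; the sketch below, with placeholder choices, does this for the set of perfect squares, whose zeta-dimension is 1/2.

```python
# Illustrative sketch only: estimating the zeta-dimension of a set A of positive
# integers via the counting ratio log|A ∩ {1,...,n}| / log n, which tracks the
# abscissa of convergence of zeta_A.
import math

def zeta_dimension_estimate(indicator, n_max):
    """Finite-n estimate of Dim_zeta(A), with A given by a membership predicate."""
    count = sum(1 for n in range(1, n_max + 1) if indicator(n))
    return math.log(count) / math.log(n_max) if count > 0 else 0.0

# The perfect squares have zeta-dimension 1/2; the estimate approaches 0.5.
is_square = lambda n: math.isqrt(n) ** 2 == n
print(zeta_dimension_estimate(is_square, 10**6))
```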
2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 2018
Genome Wide Association Studies (GWAS) of Alzheimer's disease with quantitative phenotypes are an active field of research that has pointed out a number of SNPs potentially related to the disease. Initiatives such as the Alzheimer’s Disease Neuroimaging Initiative (ADNI) have been established in order to test whether anatomical or biological markers (from magnetic resonance imaging (MRI) or positron emission tomography (PET)), genetic information, and clinical and neuropsychological assessments can be combined to measure the progression of mild cognitive impairment (MCI) and early Alzheimer’s disease (AD). In the last decade, the ADNI database has been progressively augmented, including a considerable number of patients and giving rise to the subsequent projects and cohorts ADNI1, ADNIGO, and ADNI2. Although some GWAS have been carried out with subpopulations of ADNI1, to the best of our knowledge, these works have not been replicated with the whole ADNI1 or the subsequent cohorts. In this...
We compare the weak completeness notions for E in the sense of Lutz's resource-bounded measure theory [11] with respect to the standard polynomial-time reducibilities. Our results parallel results for classical completeness by Watanabe [17] and others. We show that the weak completeness notions for 1-query reductions coincide: a set is weakly complete for E under 1-truth-table reducibility iff it is weakly complete for length-increasing one-one reducibility. For most of the other polynomial reducibilities, however, we obtain separations of the weak completeness notions where these reducibilities differ on E (Ladner et al. [10]). In fact, our separations simultaneously hold for the corresponding weak completeness notions for E and E2, for the classical completeness notions, and for the weak completeness notions in the sense of the resource-bounded Baire category concepts of Ambos-Spies et al. [2] and Ambos-Spies [1].
Information and Computation, 2021
A real number x is absolutely normal if, for every base b ≥ 2, every two equally long strings of digits appear with equal asymptotic frequency in the base-b expansion of x. This paper presents an explicit algorithm that generates the binary expansion of an absolutely normal number x, with the nth bit of x appearing after n polylog(n) computation steps. This speed is achieved by simultaneously computing and diagonalizing against a martingale that incorporates Lempel-Ziv parsing algorithms in all bases.
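As a small illustration of the notion of normality used above (not of the paper's algorithm), the sketch below tallies the relative frequencies of all length-k digit blocks in a finite prefix of a base-b expansion; for a number normal in base b these frequencies approach b^(−k). The digits shown are placeholders.

```python
# Illustrative sketch only: empirical length-k block frequencies in a prefix of a
# base-b digit expansion. For a number normal in base b, each of the b**k blocks
# has asymptotic frequency b**(-k).
from collections import Counter

def block_frequencies(digits, k):
    """Relative frequencies of the length-k blocks occurring in `digits`."""
    blocks = [tuple(digits[i:i + k]) for i in range(len(digits) - k + 1)]
    total = len(blocks)
    return {block: count / total for block, count in Counter(blocks).items()}

# Placeholder base-10 prefix; for a normal number each length-2 block's
# frequency would tend to 1/100 as the prefix grows.
prefix = [1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3, 2, 3, 8, 4, 6]
print(block_frequencies(prefix, 2))
```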
2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), 2020
We study the effect of the selection of diffeomorphic normalization on the performance of Spasov's deep-learning system for the problem of progressive MCI vs stable MCI discrimination. We considered different degrees of normalization (none, affine, and non-rigid) and two diffeomorphic registration methods (ANTS and BL PDE-LDDMM) with different image similarity metrics (SSD, NCC, and lNCC), yielding qualitatively different deformation models and quantitatively different degrees of registration accuracy. BL PDE-LDDMM with NCC achieved the best accuracy, with median values of 89%. Surprisingly, the accuracy with no normalization and with affine normalization was also among the highest, indicating that the deep-learning system is powerful enough to learn accurate models for pMCI vs sMCI discrimination without the need for normalization. However, the best sensitivity values were obtained by BL PDE-LDDMM with SSD and NCC, with median values of 97% and 94%, while the sensitivity of the remaining methods stayed under 88%.
Complexity, Logic, and Recursion Theory, 2019
We survey recent results on resource-bounded measure and randomness in structural complexity theory. In particular, we discuss applications of these concepts to the exponential time complexity classes E and E2. Moreover, we treat time-bounded genericity and stochasticity concepts which are weaker than time-bounded randomness but which suffice for many of the applications in complexity theory.
IEEE Transactions on Information Theory, 2021
The Schnorr-Stimm dichotomy theorem [31] concerns finite-state gamblers that bet on infinite sequences of symbols taken from a finite alphabet Σ. The theorem asserts that, for any such sequence S, the following two things are true. (1) If S is not normal in the sense of Borel (meaning that every two strings of equal length appear with equal asymptotic frequency in S), then there is a finite-state gambler that wins money at an infinitely-often exponential rate betting on S. (2) If S is normal, then any finite-state gambler betting on S loses money at an exponential rate betting on S. In this paper we use the Kullback-Leibler divergence to formulate the lower asymptotic divergence div(S||α) of a probability measure α on Σ from a sequence S over Σ and the upper asymptotic divergence Div(S||α) of α from S in such a way that a sequence S is α-normal (meaning that every string w has asymptotic frequency α(w) in S) if and only if Div(S||α) = 0. We also use the Kullback-Leibler divergence...
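The quantity underlying these divergence notions can be illustrated on a finite prefix. The sketch below is only a rough finite proxy with placeholder inputs, not the paper's asymptotic definitions: it computes the Kullback-Leibler divergence between the empirical distribution of length-k blocks in a prefix of S and the product distribution induced by α.

```python
# Rough finite-prefix proxy (not the paper's definition): KL divergence of the
# empirical length-k block distribution of a prefix of S from the product
# distribution induced by a per-symbol probability measure alpha.
import math
from collections import Counter

def empirical_kl(prefix, alpha, k):
    """KL divergence D(empirical_k || alpha^k) over length-k blocks of `prefix`."""
    blocks = [prefix[i:i + k] for i in range(len(prefix) - k + 1)]
    total = len(blocks)
    divergence = 0.0
    for block, count in Counter(blocks).items():
        p = count / total
        q = math.prod(alpha[symbol] for symbol in block)  # product measure on the block
        divergence += p * math.log(p / q)
    return divergence

# Placeholder binary prefix and uniform alpha; a value near 0 is consistent with
# (but of course does not prove) alpha-normality.
print(empirical_kl("0110100110010110", {"0": 0.5, "1": 0.5}, 2))
```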
Jornada de Jóvenes Investigadores del I3A, 1970
Phylogenetics is the science that studies the relationships between organisms on the basis of their evolutionary proximity. The most visual and convenient way to represent these relationships is through phylogenetic trees. Phylogenetic analysis is a process made up of several stages whose goal is to reconstruct such trees. These stages may include: the study of evolutionary models (mathematical models that try to explain as faithfully as possible the real evolution of organisms), statistical analysis, sequence alignment, ... Currently, computational cost places practical limits both on carrying out extensive phylogenies (handling thousands and tens of thousands of sequences) and on applying evolutionary models that are more general, interesting, and explanatory than the uniform model (used for small problem sizes). Moreover, the sequencing processes for biological strings are not free of errors, which can appear in any...
Theory of Computing Systems, 2018
We introduce the concept of effective dimension for a wide class of metric spaces whose metric is not necessarily based on a measure. Effective dimension was defined by Lutz (Lutz 2003) for Cantor space and has also been extended to Euclidean space. Lutz's effectivization uses gambling, in particular the concepts of gale and supergale. Our extension of Hausdorff dimension to other metric spaces is also based on a supergale characterization of dimension, which in practice avoids an extra quantifier present in the classical definition of dimension based on Hausdorff measure and therefore allows effectivization for small time bounds. We present the concept of constructive dimension and its characterization in terms of Kolmogorov complexity, for which we extend the concept of Kolmogorov complexity to any metric space by defining the Kolmogorov complexity of a point at a certain precision. Further research directions are indicated.
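For orientation, in the already-studied Euclidean case the characterization mentioned above takes the following familiar form; the paper's contribution is to make sense of this in general metric spaces, so the display below is only the known special case, not the paper's general statement.

```latex
\[
  \dim(x) \;=\; \liminf_{r\to\infty} \frac{K_r(x)}{r},
  \qquad
  K_r(x) \;=\; \min\{\, K(q) : q \in \mathbb{Q}^n,\ \|q - x\| \le 2^{-r} \,\},
\]
% K_r(x): Kolmogorov complexity of the point x \in \mathbb{R}^n at precision r.
```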
BMC bioinformatics, Jan 7, 2017
Several methods have been developed to predict the pathogenicity of missense mutations, but none has been specifically designed for the classification of variants in mtDNA-encoded polypeptides. Moreover, there is no curated dataset of neutral and damaging mtDNA missense variants available to test the accuracy of predictors. Because mtDNA sequencing of patients suffering from mitochondrial diseases is revealing many missense mutations, candidate substitutions need to be prioritized for further confirmation. Predictors can be useful as screening tools, but their performance must be improved. We have developed an SVM classifier (Mitoclass.1) specific for mtDNA missense variants. Training and validation of the model were carried out with 2,835 mtDNA damaging and neutral amino acid substitutions, previously curated by a set of rigorous pathogenicity criteria with high specificity. Each instance is described by a set of three attributes based on evolutionary conservation in Eukaryota of wildtype and...
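As a generic illustration of this kind of pipeline (an SVM over a small table of per-variant conservation features), the sketch below uses scikit-learn with entirely synthetic placeholder data; it is not the published Mitoclass.1 model, and its features, labels, and hyperparameters are assumptions for illustration only.

```python
# Generic SVM-on-tabular-features sketch with synthetic data; NOT the published
# Mitoclass.1 pipeline. Feature values, labels, and hyperparameters are placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))       # three conservation-based attributes per variant (synthetic)
y = rng.integers(0, 2, size=200)    # 0 = neutral, 1 = damaging (synthetic labels)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("5-fold CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```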
Under the hypothesis that NP does not have p-measure 0 (roughly, that NP contains more than a negligible subset of exponential time), it is shown that there is a language that is ≤P_T-complete ("Cook complete"), but not ≤P_m-complete ("Karp-Levin complete"), for NP. This conclusion, widely believed to be true, is not known to follow from P ≠ NP or other traditional complexity-theoretic hypotheses. Evidence is presented that "NP does not have p-measure 0" is a reasonable hypothesis with many credible consequences. Additional such consequences proven here include the separation of many truth-table reducibilities in NP (e.g., k queries versus k+1 queries), the class separation E ≠ NE, and the existence of NP search problems that are not reducible to the corresponding decision problems.