Juliano Mota | Universidade Estadual do Paraná - Campus de Campo Mourão
Papers by Juliano Mota
ANNs (Artificial Neural Networks) are a family of mathematical models that simulate the workings of the human nervous system. These models have been used by scientists in many fields because of their adaptability to a wide range of problems. One such problem is time series forecasting, which consists of, given a series of values of some variable observed at regular intervals, predicting that variable's future values from its past values. This work implemented an RBFNN (Radial Basis Function Neural Network) to forecast the dollar exchange rate in Brazilian reais, and also built a dollar trading system: after the network computed its forecast for the time series, it simulated purchases and sales of dollars, buying dollars when the forecast pointed to a rise in the rate and selling them when the forecast pointed to a fall. We ran an experiment varying from 2 to 40 both the number of neurons in the hidden layer…
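The buy/sell rule described in this abstract can be sketched as follows. All names, the toy exchange-rate series, and the all-in/all-out trading policy are illustrative assumptions, not details taken from the original work:

```python
import numpy as np

def simulate_trading(rates, forecasts, brl=1000.0, usd=0.0):
    """Trade on each day t using the network's forecast for day t+1:
    buy dollars on a predicted rise, sell on a predicted fall."""
    for t in range(len(forecasts)):
        rate = rates[t]
        if forecasts[t] > rate and brl > 0:      # predicted rise: buy dollars
            usd += brl / rate
            brl = 0.0
        elif forecasts[t] < rate and usd > 0:    # predicted fall: sell dollars
            brl += usd * rate
            usd = 0.0
    return brl + usd * rates[-1]                 # final wealth in BRL

rates = np.array([2.00, 2.10, 2.05, 2.20])       # observed BRL/USD rates (toy)
forecasts = np.array([2.10, 2.05, 2.20, 2.15])   # hypothetical next-day predictions
print(round(simulate_trading(rates, forecasts), 2))  # prints 1126.83
```

Under this rule the simulated trader's profit depends entirely on how often the forecast gets the direction of the next move right, which is what the experiment evaluates.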
International Journal of Pure and Applied Mathematics
In this paper we compared two GA (Genetic Algorithm) based approaches for computing the weight matrix of an RBFNN (Radial Basis Function Neural Network). The first, named GA, was based on Michalewicz's operators for continuous genetic algorithms; the other, named modGA, extended these operators to matricial individuals, thereby proposing new operators. The main objective was to verify whether the new approach could reduce the number of iterations (generations) needed to compute the weight matrix. Six datasets were tested, and in 50% of them that hypothesis was confirmed.
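Michalewicz's non-uniform mutation, one of the operators this abstract refers to, can be sketched and then applied entrywise to a weight-matrix individual, roughly in the spirit of modGA. This is a sketch under assumed bounds and parameters, not the authors' exact operator set:

```python
import numpy as np

rng = np.random.default_rng(0)

def nonuniform_mutation(x, t, T, lo=-1.0, hi=1.0, b=2.0):
    """Michalewicz's non-uniform mutation on one gene: the perturbation
    shrinks as generation t approaches the horizon T."""
    r, u = rng.random(), rng.random()
    delta = lambda y: y * (1.0 - r ** ((1.0 - t / T) ** b))
    return x + delta(hi - x) if u < 0.5 else x - delta(x - lo)

def mutate_matrix(W, t, T, p=0.1):
    """modGA-style extension (illustrative): apply the scalar operator
    entrywise to a matrix individual instead of to a flat vector."""
    W = W.copy()
    mask = rng.random(W.shape) < p           # each entry mutates with prob. p
    for i, j in zip(*np.nonzero(mask)):
        W[i, j] = nonuniform_mutation(W[i, j], t, T)
    return W

W = rng.standard_normal((5, 3)) * 0.1        # a 5x3 weight-matrix individual
W_new = mutate_matrix(W, t=10, T=100)
```

Because the mutation step is bounded by the distance to `lo`/`hi`, mutated entries stay inside the search box, which is the property that makes the operator attractive for continuous weight search.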
Advisor: Maria Teresinha Arns Steiner. Master's dissertation, Universidade Federal do Paraná, Sector of Exact Sciences and Sector of Technology, Graduate Program in Numerical Methods in Engineering. Defense: Curitiba, 2007. Includes bibliography. Area of concentration: mathematical programming.
One of the issues in modeling an RBFNN (Radial Basis Function Neural Network) is determining the weights of the output layer, usually represented by a rectangular matrix. The inconvenient step at this stage is computing the pseudo-inverse of the matrix of activation values. This operation may become computationally expensive and introduce rounding errors when the number of variables is large or the activation values form an ill-conditioned matrix, so that the model misclassifies patterns. In our research, a Genetic Algorithm for continuous variables determines the weights of the output layer of an RBFNN, and we compared it with the traditional pseudo-inversion method. The proposed approach generates matrices of normally distributed random weights, which are the individuals of the population, and applies Michalewicz's genetic operators until a stopping criterion is reached. We tested four pattern classification databases; the overall mean accuracy lies in the range 91-98% in the best case and 58-63% in the worst case.
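The traditional pseudo-inversion step that this abstract criticizes can be sketched with NumPy. The centers, widths, and toy data below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((50, 2))                                  # 50 patterns, 2 features
Y = (X.sum(axis=1) > 1.0).astype(float).reshape(-1, 1)   # toy binary labels

centers = X[rng.choice(50, size=8, replace=False)]       # 8 RBF centers (assumed)
sigma = 0.5                                              # common width (assumed)

# Gaussian activations: Phi[i, j] = exp(-||x_i - c_j||^2 / (2 sigma^2))
d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
Phi = np.exp(-d2 / (2 * sigma ** 2))

# The criticized step: output weights via the pseudo-inverse of Phi
W = np.linalg.pinv(Phi) @ Y

acc = ((Phi @ W > 0.5) == (Y > 0.5)).mean()              # training accuracy
```

When `Phi` is large or ill-conditioned, this pseudo-inversion is exactly where cost and rounding error accumulate, which motivates searching for `W` with a GA instead.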
Conference Presentations by Juliano Mota
Proceedings of the 4th International Conference on Agents and Artificial Intelligence, 2012
One of the issues in modeling an RBFNN (Radial Basis Function Neural Network) is determining the weights of the output layer, usually represented by a rectangular matrix. The inconvenient step at this stage is computing the pseudo-inverse of the matrix of activation values. This operation may become computationally expensive and introduce rounding errors when the number of variables is large or the activation values form an ill-conditioned matrix, so that the model misclassifies patterns. In our research, a Genetic Algorithm for continuous variables determines the weights of the output layer of an RBFNN, and we compared it with the traditional pseudo-inversion method. The proposed approach generates matrices of normally distributed random weights, which are the individuals of the population, and applies Michalewicz's genetic operators until a stopping criterion is reached. We tested four pattern classification databases; the overall mean accuracy lies in the range 91-98% in the best case and 58-63% in the worst case.
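The GA alternative described here, evolving the output-weight matrix directly against the activation matrix, can be sketched as follows. The data is synthetic and the operator set is a simplified stand-in (arithmetic crossover plus Gaussian mutation) for Michalewicz's full suite:

```python
import numpy as np

rng = np.random.default_rng(2)

Phi = rng.random((30, 6))          # activation values (30 patterns, 6 RBFs), toy
Y = rng.random((30, 1))            # toy targets

def fitness(W):
    """Negative mean squared error of the candidate weight matrix."""
    return -np.mean((Phi @ W - Y) ** 2)

# Individuals are random normally distributed weight matrices, as in the paper.
pop = [rng.standard_normal((6, 1)) for _ in range(20)]

for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                                       # keep the best half
    children = []
    for _ in range(10):
        a, b = rng.choice(10, size=2, replace=False)
        alpha = rng.random()
        child = alpha * elite[a] + (1 - alpha) * elite[b]  # arithmetic crossover
        child += rng.standard_normal(child.shape) * 0.05   # simple mutation
        children.append(child)
    pop = elite + children

best = max(pop, key=fitness)
```

The point of the comparison is that this search never forms a pseudo-inverse, so its per-step cost and numerical behavior do not depend on the conditioning of `Phi`.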