Michael Kimwele | Jomo Kenyatta University of Agriculture and Technology
Papers by Michael Kimwele
International Journal of Speech Technology, Dec 20, 2023
International Journal of Software Engineering & Applications, May 31, 2021
The success of any software product line development project is closely tied to its domain variability management. Whereas a lot of effort has been put into functional variability management by the SPL community, non-functional variability is considered implicit. The result has been dissatisfaction among clients due to the resulting poor-quality systems. This work presents an integrated requirement specification template for quality and functional requirements at software product line variation points. Implementing this approach at the analytical description phase increases the visibility of quality requirements, obliging developers to implement them. The approach proposes the use of decision tree classification techniques to support the weaving of functional quality attributes at their respective variation points. This work therefore promotes software product line variability management objectives by proposing new functional quality artifacts during the requirements specification phase. The approach is illustrated with a case study of data storage requirements for an exemplar mobile phone family.
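The abstract does not give implementation details for the decision tree step; the sketch below is only a hypothetical illustration, using scikit-learn's DecisionTreeClassifier to map variation-point characteristics to a suggested quality attribute. The feature encoding, sample data, and attribute labels are invented for the example, not taken from the paper.

```python
# Hypothetical sketch: suggesting which quality attribute to weave at a
# variation point. Feature encoding and training data are invented for
# illustration and are not taken from the paper.
from sklearn.tree import DecisionTreeClassifier
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import make_pipeline

# Each row describes a variation point: (binding time, variability type, criticality)
variation_points = [
    ("compile-time", "alternative", "high"),
    ("runtime",      "optional",    "low"),
    ("runtime",      "alternative", "high"),
    ("compile-time", "optional",    "medium"),
]
# Quality attribute an analyst attached to each variation point
quality_attribute = ["performance", "usability", "security", "maintainability"]

model = make_pipeline(OneHotEncoder(handle_unknown="ignore"),
                      DecisionTreeClassifier(max_depth=3, random_state=0))
model.fit(variation_points, quality_attribute)

# Suggest a quality attribute for a new variation point
print(model.predict([("runtime", "alternative", "medium")]))
```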
Deep Convolutional Neural Networks (DCNNs) have undergone numerous modifications to enhance their capabilities in image processing, including image restoration, but there is still room for improvement. They have been improved in a number of ways, including reducing information loss, increasing feature utilization, and reducing computational complexity. This study presents a network structure known as CNN with attention (CNWATT2) that can preserve detail and edge information while also making the denoised image easier to view. Multiple features from the input image are extracted and fed into a forward network structure by the CNWATT2 using convolutional kernels of varying sizes. It is made up of two CNNs, and an attention module is added to the output of each CNN so that it can choose the features that affect the model before the concatenation operation (attention-guided concatenation), which combines these features into the final feature map. The feature selection mechanism is enhan...
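The exact CNWATT2 architecture is not given in this excerpt; the sketch below is only a guess at the general idea of attention-guided concatenation, using two small convolutional branches and a simple sigmoid attention gate in PyTorch. Layer widths and kernel sizes are arbitrary choices.

```python
# Hypothetical sketch of attention-guided concatenation for a two-branch
# denoising CNN. Layer widths and kernel sizes are arbitrary, not the
# CNWATT2 configuration from the paper.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Per-channel gate that rescales features before concatenation."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)

class TwoBranchDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        # Two branches with different kernel sizes extract multiple features
        self.branch3 = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU())
        self.branch5 = nn.Sequential(nn.Conv2d(1, 32, 5, padding=2), nn.ReLU())
        self.att3 = AttentionGate(32)
        self.att5 = AttentionGate(32)
        # Fuse the gated features and map back to a single-channel image
        self.fuse = nn.Conv2d(64, 1, kernel_size=3, padding=1)

    def forward(self, x):
        f3 = self.att3(self.branch3(x))
        f5 = self.att5(self.branch5(x))
        return self.fuse(torch.cat([f3, f5], dim=1))

noisy = torch.randn(1, 1, 64, 64)
print(TwoBranchDenoiser()(noisy).shape)  # torch.Size([1, 1, 64, 64])
```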
2023 IEEE 27th International Conference on Intelligent Engineering Systems (INES)
Engineering Reports, Nov 28, 2022
The selection of layers in the transfer learning fine-tuning process ensures a pre-trained model's accuracy and adaptation in a new target domain. However, the selection process is still manual and without clearly defined criteria. If the wrong layers in a neural network are selected and used, this can lead to poor accuracy and model generalization in the target domain. This paper introduces the use of Kullback–Leibler divergence on the weight correlations of the model's convolutional neural network layers. The approach identifies the positive and negative weights in the ImageNet initial weights, selecting the best-suited layers of the network depending on the correlation divergence. We experiment on four publicly available datasets and six ImageNet pre-trained models used in past studies for comparison of results. The proposed approach yields better accuracies than the standard fine-tuning baselines by a margin of 10.8%–24%, thereby leading to better model adaptation for target transfer learning tasks.
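The paper's exact formulation of the weight correlation divergence is not spelled out in this summary; as a rough, assumed reading, the sketch below histograms each convolutional layer's ImageNet weights, splits positive and negative values, and ranks layers by the Kullback–Leibler divergence between the two distributions using SciPy. The backbone (Keras VGG16) and the ranking rule are illustrative choices.

```python
# Hypothetical sketch: ranking pre-trained convolutional layers by the
# KL divergence between the distributions of their positive and negative
# ImageNet weights. The paper's exact criterion may differ.
import numpy as np
from scipy.stats import entropy
from tensorflow.keras.applications import VGG16

def weight_kl(weights, bins=50):
    """KL divergence between the positive and |negative| weight histograms."""
    pos = weights[weights > 0]
    neg = -weights[weights < 0]
    lo, hi = 0.0, max(pos.max(), neg.max())
    p, _ = np.histogram(pos, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(neg, bins=bins, range=(lo, hi), density=True)
    eps = 1e-10  # avoid zero bins in the KL computation
    return entropy(p + eps, q + eps)

model = VGG16(weights="imagenet", include_top=False)
scores = {
    layer.name: weight_kl(layer.get_weights()[0].ravel())
    for layer in model.layers
    if "conv" in layer.name
}
# Under this reading, layers are ranked by divergence before fine-tuning
for name, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{name}: {score:.4f}")
```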
Zenodo (CERN European Organization for Nuclear Research), Jun 18, 2021
The success of any software product line development project is closely tied to its domain variability management. Whereas a lot of effort has been put into functional variability management by the SPL community, non-functional variability is considered implicit. The result has been dissatisfaction among clients due to the resulting poor-quality systems. This work presents an integrated requirement specification template for quality and functional requirements at software product line variation points. Implementing this approach at the analytical description phase increases the visibility of quality requirements, obliging developers to implement them. The approach proposes the use of decision tree classification techniques to support the weaving of functional quality attributes at their respective variation points. This work therefore promotes software product line variability management objectives by proposing new functional quality artifacts during the requirements specification phase. The approach is illustrated with a case study of data storage requirements for an exemplar mobile phone family.
Frontiers in Genetics
Accurate diagnosis is the key to providing prompt and explicit treatment and disease management. The recognized biological method for the molecular diagnosis of infectious pathogens is the polymerase chain reaction (PCR). Recently, deep learning approaches have been playing a vital role in accurately identifying disease-related genes for diagnosis, prognosis, and treatment. These models reduce the time and cost required by wet-lab experimental procedures. Consequently, sophisticated computational approaches have been developed to facilitate the detection of cancer, a leading cause of death globally, and other complex diseases. In this review, we systematically evaluate the recent trends in multi-omics data analysis based on deep learning techniques and their application in disease prediction. We highlight the current challenges in the field and discuss how advances in deep learning methods and their optimization for application are vital in overcoming them. Ultimately, this review promotes the devel...
Epilepsy is a condition that disrupts normal brain function and sometimes leads to seizures, unusual sensations, and temporary loss of awareness. Electroencephalogram (EEG) records are commonly used for diagnosing epilepsy, but traditional analysis is subjective and prone to misclassification. Previous studies have applied Deep Learning (DL) techniques to improve EEG classification, but their performance has been limited by the dynamic and non-stationary nature of EEG signals. In this paper, we propose a multi-channel EEG classification model called LConvNet, which combines Convolutional Neural Networks (CNN) for spatial feature extraction and Long Short-Term Memory (LSTM) for capturing temporal dependencies. The model is trained using open-source secondary EEG data from Temple University Hospital (TUH) to distinguish between epileptic and healthy EEG signals. Our model achieved an impressive accuracy of 97%, surpassing existing EEG classification models used in similar tasks such as ...
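The LConvNet layer configuration is not given in this excerpt; the sketch below is a generic CNN-plus-LSTM arrangement in PyTorch for multi-channel EEG windows, with the channel count, window length, and layer sizes chosen arbitrarily rather than taken from the paper.

```python
# Hypothetical sketch of a CNN + LSTM EEG classifier in the spirit of
# LConvNet. Channel count, window length, and layer sizes are assumptions.
import torch
import torch.nn as nn

class CnnLstmEEG(nn.Module):
    def __init__(self, n_channels=22, n_classes=2):
        super().__init__()
        # 1-D convolutions extract spatial features per time step
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # LSTM captures temporal dependencies across the pooled sequence
        self.lstm = nn.LSTM(input_size=64, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):              # x: (batch, channels, time)
        feats = self.cnn(x)            # (batch, 64, time/4)
        feats = feats.transpose(1, 2)  # (batch, time/4, 64) for the LSTM
        _, (h, _) = self.lstm(feats)
        return self.head(h[-1])        # logits: epileptic vs. healthy

eeg_window = torch.randn(8, 22, 256)   # 8 windows, 22 channels, 256 samples
print(CnnLstmEEG()(eeg_window).shape)  # torch.Size([8, 2])
```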
International Journal of Ambient Computing and Intelligence
Convolutional neural networks (CNNs) are deep learning methods that are utilized in image processing tasks such as image classification and recognition. They have achieved excellent results in various sectors; however, they still lack rotation invariance and spatial information. To establish whether two images are rotational versions of one another, one can rotate them exhaustively to see if they compare favorably at some angle. Because current algorithms fail to handle rotated images or provide spatial information, this study proposes transforming color spaces and using the Gabor filter to address the issue. To gather spatial information, the HSV and CieLab color spaces are used, and the Gabor filter is used to orient images at various orientations. The experiments show that the HSV and CieLab color spaces combined with a Gabor convolutional neural network (GCNN) improve image retrieval, with accuracies of 98.72% and 98.67% on the CIFAR-10 dataset.
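Details of the GCNN pipeline are not in this summary; the sketch below only illustrates the pre-processing idea as read from the abstract: converting an image to HSV and CIELAB with OpenCV and filtering it with a bank of Gabor kernels at several orientations before feeding the result to a CNN. The filter parameters and the file path are placeholders.

```python
# Hypothetical pre-processing sketch: HSV/CIELAB conversion plus a Gabor
# filter bank at several orientations. The downstream GCNN itself is omitted.
import cv2
import numpy as np

def gabor_bank(gray, n_orientations=4, ksize=21):
    """Filter a grayscale image with Gabor kernels at several orientations."""
    responses = []
    for i in range(n_orientations):
        theta = i * np.pi / n_orientations
        kernel = cv2.getGaborKernel((ksize, ksize), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5, psi=0.0)
        responses.append(cv2.filter2D(gray, cv2.CV_32F, kernel))
    return np.stack(responses, axis=-1)      # (H, W, n_orientations)

bgr = cv2.imread("example.png")              # placeholder path
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)   # colour/spatial information
lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
gabor = gabor_bank(gray)

# Stack colour-space channels and Gabor responses as CNN input planes
cnn_input = np.concatenate(
    [hsv.astype(np.float32), lab.astype(np.float32), gabor], axis=-1)
print(cnn_input.shape)
```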
Concurrency and Computation: Practice and Experience
Biometric systems have been used extensively in the identification and verification of persons. Fingerprint biometrics stands out as the most effective due to its characteristics of permanence, uniqueness, ergonomics, throughput, low cost, and lifelong usability. By reducing the number of comparisons, biometric recognition systems can effectively deal with large-scale databases. Fingerprint classification is an important task used to reduce the number of comparisons by dividing fingerprints into classes. Deep learning models have demonstrated impressive performance in fingerprint classification tasks. However, the high-level features of deep learning models can affect transfer learning, and they involve high computational costs that can make the applications difficult to deploy. This work proposes an improved system for fingerprint classification through the truncation of layers and transfer learning. Our ...
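The excerpt does not say which backbone is used or how many layers are truncated; as an assumed illustration, the sketch below truncates a Keras MobileNetV2 at an intermediate block and attaches a small classification head for five commonly used fingerprint classes. The truncation point and class labels are assumptions.

```python
# Hypothetical sketch: truncating a pre-trained backbone and adding a
# lightweight head for fingerprint classification. Backbone, truncation
# point, and the five-class labelling are illustrative assumptions.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV2

N_CLASSES = 5  # e.g. arch, tented arch, left loop, right loop, whorl

base = MobileNetV2(weights="imagenet", include_top=False,
                   input_shape=(224, 224, 3))
# Truncate: keep layers only up to an intermediate block to drop the
# high-level (and computationally heavy) features
truncated = models.Model(inputs=base.input,
                         outputs=base.get_layer("block_6_expand_relu").output)
truncated.trainable = False  # transfer learning: freeze the kept layers

model = models.Sequential([
    truncated,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```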
Measurement of maintainability early in the software development life cycle, especially during the design phase, may aid designers in incorporating necessary improvements and adjustments to enhance the maintainability of the completed product. In order to demonstrate the importance and necessity of software maintainability during the design phase, this paper expands the MEMOOD metrics model, which estimates the maintainability of class diagrams in terms of their understandability and modifiability, into a multivariate linear model called the "Maintainability Estimation Framework and Metrics for Object Oriented Software (MEFOOS)" for the design phase. In the developed model, class diagrams' maintainability is thus estimated in terms of their understandability, modifiability, and analyzability. This study attempts, as a first effort, to establish a relationship between object-oriented design features and the maintainability elements analyzability, understandability, and m...
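Neither the MEFOOS coefficients nor its training data are included here; the sketch below only shows the general form of such a multivariate linear model, fitted with scikit-learn on made-up design-metric values. All numbers are fabricated for illustration.

```python
# Hypothetical sketch of a MEFOOS-style multivariate linear model:
# Maintainability ~ b0 + b1*Understandability + b2*Modifiability + b3*Analyzability.
# The sample values and the fitted coefficients are fabricated.
import numpy as np
from sklearn.linear_model import LinearRegression

# Rows: class diagrams; columns: understandability, modifiability, analyzability
X = np.array([
    [0.72, 0.65, 0.70],
    [0.55, 0.60, 0.58],
    [0.80, 0.78, 0.82],
    [0.40, 0.45, 0.38],
    [0.66, 0.70, 0.64],
])
y = np.array([0.69, 0.57, 0.81, 0.41, 0.67])  # expert-rated maintainability

model = LinearRegression().fit(X, y)
print("intercept:", model.intercept_)
print("coefficients:", model.coef_)
print("predicted maintainability:", model.predict([[0.6, 0.62, 0.59]]))
```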
Computer Science & Information Technology (CS & IT), 2021
There are many calls from software engineering scholars to incorporate non-functional requirements as first-class citizens in the software development process. In Software Product Line Engineering, the emphasis is on the explicit definition of functional requirements using feature models, while non-functional requirements are considered implicit. In this paper we present an integrated requirements specification template for common quality attributes alongside functional requirements at software product line variation points. This approach, implemented at the analytical description phase, increases the visibility of quality requirements, obliging developers to consider them in subsequent phases. The approach achieves the weaving of quality requirements into associated functional requirements through a higher-level feature abstraction method. This work therefore promotes the achievement of system quality by elevating non-functional requirement specification. The approach is illustrated with an exemplar mobile ph...
IJARCCE, 2015
The aim of this paper is to study the m-learning literature in order to propose and develop a privacy-preserving framework which can be used to foster the sustainable deployment of mobile learning within open and distance education in Kenya. Location-based privacy in mobile learning is essential to retain users' trust, which is key to influencing usage intention. Any risk to privacy can negatively affect users' perceptions of a system's reliability and trustworthiness. While extant studies have proposed frameworks for the adoption of mobile technologies into learning, few have integrated privacy aspects and their influence on m-learning implementation. The framework would provide university management with an informed approach for considering privacy-preserving aspects in m-learning implementation. It could also provide guidance to mobile learning application developers on the need to cater for learners' privacy aspects.
A fuzzy logic based mean filter (FLBMF) is presented for impulse noise reduction in mammogram images degraded with additive impulse noise. FLBMF removes both low- and high-density impulsive noise from mammogram images in three major phases. In phase one, noisy pixels are detected. In phase two, an adaptive threshold is determined by examining the neighboring pixels. In phase three, fuzzy membership functions and fuzzy rules are used to decide whether the current pixel is noise-free, or whether the noisy pixel lies in a smooth or detailed region. All these phases are based on fuzzy rules making use of membership functions. FLBMF can be applied iteratively to effectively reduce impulsive noise. In particular, the membership function's shape is adapted according to the remaining noise level after each iteration, making use of the distribution of the homogeneity in the image. In this approach, the mammogram images are selected from the mini-MIAS dat...
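The membership functions and rule base of FLBMF are not reproduced in this excerpt; the sketch below is only a simplified stand-in for the three phases described (noise detection, neighbourhood-based adaptive thresholding, and fuzzy-weighted mean correction), using a single triangular membership function. The real filter's rules differ.

```python
# Simplified, hypothetical stand-in for the three FLBMF phases. The actual
# filter's membership functions and fuzzy rules are not reproduced here.
import numpy as np

def triangular(x, a, b):
    """Triangular membership rising from 0 at a to 1 at b (degree of 'noisy')."""
    return np.clip((x - a) / (b - a + 1e-9), 0.0, 1.0)

def fuzzy_mean_filter(image, window=3):
    pad = window // 2
    padded = np.pad(image.astype(np.float64), pad, mode="reflect")
    out = image.astype(np.float64).copy()
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + window, j:j + window]
            centre = patch[pad, pad]
            # Phase 2: adaptive threshold from the neighbouring pixels
            thresh = patch.std() + 1e-9
            # Phase 1: how strongly the centre deviates from its neighbours
            deviation = abs(centre - np.median(patch))
            # Phase 3: the fuzzy degree of noisiness decides how much of the
            # local mean replaces the original pixel value
            mu_noisy = triangular(deviation, thresh, 3 * thresh)
            out[i, j] = (1 - mu_noisy) * centre + mu_noisy * patch.mean()
    return out.astype(image.dtype)

noisy = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # placeholder image
denoised = fuzzy_mean_filter(noisy)   # can be applied iteratively
print(denoised.shape, denoised.dtype)
```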
Adapting the target dataset for a pre-trained model is still challenging. These adaptation problems result from a lack of adequate transfer of traits from the source dataset; this often leads to poor model performance and trial and error in selecting the best-performing pre-trained model. This paper introduces the conflation of source-domain low-level textural features extracted using the first layer of the pre-trained model. The extracted features are compared to the conflated low-level features of the target dataset to select a higher-quality target dataset for improved pre-trained model performance and adaptation. From a comparison of various probability distance metrics, Kullback-Leibler divergence is adopted to compare the samples from both domains. We experiment on three publicly available datasets and two ImageNet pre-trained models used in past studies for comparison of results. The proposed approach yields two categories of target samples, with those with lower Kullback...
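How the conflation of low-level features is computed is not stated in this excerpt; as a loose, assumed interpretation, the sketch below averages first-convolutional-layer activation histograms over a source batch and splits target samples by their Kullback-Leibler divergence from that reference distribution. The backbone (torchvision ResNet-18), the histogramming, and the median split are all illustrative assumptions.

```python
# Hypothetical sketch: splitting target samples by the KL divergence of their
# first-layer activation distribution from a conflated (here: averaged)
# source-domain distribution. The paper's actual conflation step may differ.
import numpy as np
import torch
from scipy.stats import entropy
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()
first_conv = model.conv1  # first layer: low-level textural features

def activation_hist(batch, bins=64):
    with torch.no_grad():
        acts = first_conv(batch).abs().flatten().numpy()
    hist, _ = np.histogram(acts, bins=bins, range=(0, acts.max() + 1e-9),
                           density=True)
    return hist + 1e-10  # avoid zero bins in the KL computation

# Placeholder tensors standing in for real source and target images
source_batch = torch.randn(16, 3, 224, 224)
target_samples = [torch.randn(1, 3, 224, 224) for _ in range(8)]

reference = activation_hist(source_batch)          # "conflated" source profile
divergences = [entropy(activation_hist(x), reference) for x in target_samples]

threshold = np.median(divergences)
keep = [i for i, d in enumerate(divergences) if d <= threshold]
print("lower-divergence target samples:", keep)
```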