ali mahdi - Academia.edu

Papers by ali mahdi

Line Profile Based Segmentation Algorithm for Touching Corn Kernels

Cornell University - arXiv, Jun 1, 2017

Image segmentation of touching objects plays a key role in providing accurate classification for computer vision technologies. A new line profile based image segmentation algorithm has been developed to provide robust and accurate segmentation of groups of touching corn kernels. The performance of the line profile based algorithm is compared to that of a watershed based image segmentation algorithm. Both algorithms are tested on three different image patterns: isolated kernels, single lines, and randomly distributed formations. The experimental results show that the algorithm can segment a large number of touching corn kernels efficiently and accurately.
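
The line-profile idea can be illustrated with a minimal sketch (a hypothetical illustration, not the authors' implementation): sample pixel intensities along a line crossing two touching kernels and split at the interior intensity minimum marking the contact seam.

```python
import numpy as np

def split_point_on_profile(profile, min_depth=10):
    """Return the index of the deepest interior intensity minimum along a
    1-D line profile, or None if no clear valley exists. Two bright
    kernels touching over a darker seam produce such a valley."""
    profile = np.asarray(profile, dtype=float)
    interior = profile[1:-1]
    idx = int(np.argmin(interior)) + 1
    # Require the valley to be clearly darker than the line endpoints.
    if min(profile[0], profile[-1]) - profile[idx] >= min_depth:
        return idx
    return None

# Two bright kernels (~200) touching at a darker seam (~120).
profile = [200, 205, 198, 150, 120, 148, 199, 204, 201]
print(split_point_on_profile(profile))  # → 4
```

A full segmenter would cast many such lines across a connected component and cut along the chain of detected minima; the threshold `min_depth` is an assumed tuning parameter.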

Bottom up saliency evaluation via deep features of state-of-the-art convolutional neural networks

2018 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI), 2018

Recent trends in saliency modeling suggest that deep learning based saliency models are viable tools, as they achieve outstanding results. However, they require long training times and expensive hardware. In this study, we present a bottom-up saliency model that exploits deep features from a pre-trained deep convolutional neural network. We evaluate deep features from a variety of activations of seven state-of-the-art deep convolutional neural networks, yielding 35 implementations of the proposed bottom-up saliency model. All implementations are evaluated over a popular dataset using three evaluation metrics. The experimental results suggest that deep features of a network trained for object classification can be used for saliency modeling. While the ranking of the saliency models varies across scores, the majority of the GoogLeNet based implementations outperformed all other implementations.
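
A rough sketch of such a bottom-up pipeline follows; the pooling and smoothing choices are illustrative assumptions, and a random tensor stands in for real DCNN activations.

```python
import numpy as np

def saliency_from_features(feature_maps, blur_sigma=2.0):
    """Collapse a (channels, H, W) activation tensor into one saliency
    map: average across channels, smooth, and normalize to [0, 1].
    A separable box filter stands in for Gaussian smoothing."""
    sal = feature_maps.mean(axis=0)
    k = max(1, int(blur_sigma))
    kernel = np.ones(2 * k + 1) / (2 * k + 1)
    sal = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, sal)
    sal = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, sal)
    span = sal.max() - sal.min()
    return (sal - sal.min()) / span if span > 0 else np.zeros_like(sal)

rng = np.random.default_rng(0)
features = rng.random((64, 28, 28))   # stand-in for conv-layer activations
sal_map = saliency_from_features(features)
print(sal_map.shape)  # → (28, 28)
```

In the actual study, `features` would come from an intermediate layer of a pretrained network such as GoogLeNet, and different layer selections yield the different implementations compared.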

Where Do Consumers Look When Viewing Fashion Advertisements? Saliency Based Models for Visual Attention Prediction

Pivoting for the Pandemic, 2020

Conceptual Framework. Advertising is a key marketing tool and is costly to the company, so creating ads that attract potential customers as effectively as possible is essential for business success (Simmonds et al., 2020). Many industries, including the fashion industry, use magazine ads to deliver brand images to consumers, increase brand reputation, and encourage consumers to buy their brand products. An understanding of consumers' visual attention to advertising is vital because there is a robust relationship between consumers' visual attention and memory (Pieters et al., 2007). Most marketing communication devices work through the sense of sight. Advertisement content, including brand logos, colors, packaging, letters, and fonts, attracts visual attention to create consumer awareness of a brand (Simmonds et al., 2020). Human beings tend to conserve their neural resources by using eye, head, and body movements to shift their visual attention and gaze toward more informative spatial locations in an image (Mahdi et al., 2017). In computer vision, a saliency map is a 2D topological map that indicates visual attention priorities on a numerical scale. A higher visual attention priority indicates that the object of interest is irregular or rare in its surroundings. Recently, a diversity of saliency models has made saliency modeling a popular approach for studying human visual attention in fields such as communications and electronic engineering (e.g., Garnett et al., 2014; Mahdi et al., 2019; Tsiami et al., 2019). However, little research applies saliency modeling to the fashion area. Thus, it is meaningful to investigate whether saliency modeling can be applied to the fashion area.
Purposes of the study. The purposes of this multidisciplinary study were 1) to explore where consumers look at fashion advertisements using saliency based models, and 2) to investigate which saliency model(s) are the most effective tools for predicting consumers' visual attention when viewing fashion advertisements.
Method. Seventy college students were recruited to participate in the study at a mid-western university in the US. Participants came from different majors, such as fashion and electronic engineering. One hundred fashion images were selected from fashion magazines (e.g., Vogue) and divided into four image groups (25 images in each group). Each participant was seated about 1 meter away from the screen, viewed 25 fashion images for 5 seconds each, and selected the locations on each image that most attracted their attention by clicking a mouse on the screen. The mouse clicking points were saved as human attention data on the 100 fashion images. The experimental setup is shown in Figure 1. Moreover, six saliency models (Ground-truth, AW5,

Can consumers’ visual attention be predictable? A saliency modelling-based approach on fashion advertisements

International Journal of Fashion Design, Technology and Education, 2021

ABSTRACT As collaborative research between engineering and fashion, the purpose of this study was to investigate whether saliency models can be applied to predict consumers’ visual attention to fashion images such as fashion advertisements. A human subject study was conducted to record human visual fixations on 10 colour fashion advertisement images, randomly selected from fashion magazines. The participants included 67 college students (26 males and 41 females). All mouse-tracking locations on the images were recorded and saved using Psychtoolbox-3 with MATLAB. The locations represent the human fixation points on the images and are used to generate fixation maps. This collaborative research is an innovative and pioneering approach to predicting consumers’ visual attention toward fashion images using saliency models. The results of this study show that saliency models from the engineering field are effective tools for predicting fashion consumers’ visual attention when looking at fashion images such as advertisements.
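
A common way to turn discrete fixation or click points into a continuous fixation map is to place a Gaussian blob at each point; the sketch below assumes that approach (the study does not specify its exact map-generation parameters).

```python
import numpy as np

def fixation_map(points, height, width, sigma=15.0):
    """Build a continuous fixation map from discrete (x, y) click or
    fixation points: sum a Gaussian blob per point, then normalize the
    result so the strongest location equals 1."""
    ys, xs = np.mgrid[0:height, 0:width]
    fmap = np.zeros((height, width), dtype=float)
    for x, y in points:
        fmap += np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return fmap / fmap.max() if fmap.max() > 0 else fmap

clicks = [(40, 30), (120, 80)]            # hypothetical mouse-click points
fmap = fixation_map(clicks, height=160, width=240)
print(fmap.shape)  # → (160, 240)
```

The blur width `sigma` is typically chosen to approximate one degree of visual angle at the viewing distance (about 1 meter here); the value above is only a placeholder.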

An extensive evaluation of deep features of convolutional neural networks for saliency prediction of human visual attention

Journal of Visual Communication and Image Representation, 2019

Abstract Based on transfer learning, feature maps of deep convolutional neural networks (DCNNs) have been used to predict human visual attention. In this paper, we conduct extensive comparisons to investigate the effects of feature maps on predictions of human visual attention using a deep features based saliency model framework. The feature maps of seven pretrained DCNNs are investigated using classical and class activation map approaches. The performances of various saliency implementations are evaluated over four datasets using three metrics. The results demonstrate that deep feature maps of the pretrained DCNNs can be used to create saliency maps for the prediction of human visual attention. The incorporation of multiple levels of blurred and multi-scale feature maps improves the extraction of salient regions. Moreover, DCNNs pretrained on the Places dataset provide more localized objects, which can be beneficial to top-down saliency maps.

Evaluation of bottom-up saliency model using deep features pretrained by deep convolutional neural networks

Journal of Electronic Imaging, 2019

Abstract. We present extensive evaluations of deep features pretrained by state-of-the-art deep convolutional neural networks (DCNNs) for the prediction of human fixations. The evaluations are conducted using a bottom-up saliency model that utilizes deep features of DCNNs pretrained for object classification. Using various selections of deep feature maps, 35 implementations of the bottom-up saliency model are computed, evaluated, and compared over three publicly available datasets using four evaluation metrics. The experimental results demonstrate that the pretrained deep features are strong predictors of human fixations, that the incorporation of multiscale deep feature maps benefits saliency prediction, and that the depth of DCNNs has a negative effect on saliency prediction. Moreover, we also compare the performance of the proposed deep features based bottom-up saliency model with eight other bottom-up saliency models. The comparison results show that our saliency model can outperform conventional bottom-up saliency models.
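
One standard fixation-prediction metric, Normalized Scanpath Saliency (NSS), can be sketched as follows; the abstract does not name its four metrics, so NSS is shown only as a representative example.

```python
import numpy as np

def nss(saliency_map, fixation_points):
    """Normalized Scanpath Saliency: z-score the saliency map, then
    average its values at the human fixation locations. Zero means
    chance-level prediction; higher is better."""
    s = np.asarray(saliency_map, dtype=float)
    s = (s - s.mean()) / s.std()
    return float(np.mean([s[y, x] for x, y in fixation_points]))

# A map that peaks exactly where the single fixation landed scores high.
sal = np.zeros((10, 10))
sal[5, 5] = 1.0
score = nss(sal, [(5, 5)])   # fixation given as (x, y)
print(score > 9.0)  # → True
```

Other common choices in this literature include AUC variants, similarity, and correlation coefficient; each compares a predicted saliency map against recorded fixations in a different way.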

DeepFeat: A Bottom-Up and Top-Down Saliency Model Based on Deep Features of Convolutional Neural Networks

IEEE Transactions on Cognitive and Developmental Systems, 2019

A deep feature based saliency model (DeepFeat) is developed to leverage understanding of the prediction of human fixations. Traditional saliency models often predict human visual attention relying on a few low-level image cues. Although such models predict fixations across a variety of image complexities, their approaches are limited to the incorporated features. In this study, we aim to provide an intuitive interpretation of deep features of convolutional neural networks by combining low- and high-level visual factors. We use four evaluation metrics to evaluate the correspondence between the proposed framework and the ground-truth fixations. The key finding is that the DeepFeat algorithm, which incorporates both bottom-up and top-down saliency maps, outperforms the individual bottom-up and top-down approaches. Moreover, in comparison to nine state-of-the-art saliency models, our proposed DeepFeat model achieves satisfactory performance on all four evaluation metrics.
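
A minimal sketch of fusing bottom-up and top-down maps follows; the convex weighting here is an illustrative assumption, not the published DeepFeat fusion rule.

```python
import numpy as np

def combine_saliency(bottom_up, top_down, alpha=0.5):
    """Fuse bottom-up and top-down saliency maps with a weighted sum
    (alpha weights the bottom-up map), then renormalize to [0, 1]."""
    fused = alpha * bottom_up + (1.0 - alpha) * top_down
    span = fused.max() - fused.min()
    return (fused - fused.min()) / span if span > 0 else fused

bu = np.array([[0.0, 1.0], [0.2, 0.4]])   # toy bottom-up map
td = np.array([[0.0, 0.0], [1.0, 0.6]])   # toy top-down map
fused = combine_saliency(bu, td)
```

The point of the fusion is that locations supported by both cue types keep high saliency, which is one plausible reading of why the combined model outperforms either component alone.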

A Comparison Study of Saliency Models for Fixation Prediction on Infants and Adults

IEEE Transactions on Cognitive and Developmental Systems, 2017

Various saliency models have been developed over the years. The performance of saliency models is typically evaluated against databases of experimentally recorded adult eye fixations. Although studies on infant gaze patterns have attracted much attention recently, saliency based models have not been widely applied to the prediction of infant gaze patterns. In this study, we conduct a comprehensive comparison of eight state-of-the-art saliency models on predictions of experimentally captured fixations from infants and adults. Seven evaluation metrics are used to evaluate and compare the performance of the saliency models. The results demonstrate that saliency models consistently predict adult fixations better than infant fixations in terms of overlap, center fitting, intersection, information loss of approximation, and spatial distance between the distributions of the saliency map and the fixation map. In the performance ranking of saliency and baseline models, the results show that the GBVS and Itti models are among the top three contenders, that infants and adults have a bias toward the centers of images, and that all models and the center baseline model outperform the chance baseline model.
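
The center baseline mentioned above is typically a Gaussian prior on the image center; a minimal sketch (the width parameter is illustrative):

```python
import numpy as np

def center_baseline(height, width, sigma_frac=0.25):
    """Center-prior baseline map: an isotropic Gaussian at the image
    center, normalized so the center equals 1. It captures the center
    bias that both infants and adults exhibit."""
    ys, xs = np.mgrid[0:height, 0:width]
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    sigma = sigma_frac * min(height, width)
    g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
    return g / g.max()

prior = center_baseline(9, 9)
```

Because viewers fixate near image centers regardless of content, any saliency model must beat this content-free prior to demonstrate that it captures genuine image-driven attention.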

Infants gaze pattern analyzing using contrast entropy minimization

2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob), 2015

Statistical relationships between human vision and natural scene statistics have been addressed. Contrast is a fundamental image property. We propose that infant visual exploration and learning development can be analyzed using contrast entropy minimization. Ten infants viewed 16 naturalistic images, and the fixations of the infants were recorded. Infant gaze patterns were compared between individual infants, and infant fixations were compared to the fixations predicted by contrast entropy minimization. The results show that infants tend to have different gaze patterns from one another, and that infant entropy drops gradually as fixations are made.
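
Entropy over contrast values can be sketched as a histogram-based Shannon entropy; this is an illustrative reading of the method, not the authors' exact formulation.

```python
import numpy as np

def contrast_entropy(contrast_values, bins=16):
    """Shannon entropy (in bits) of a histogram of local-contrast values.
    Lower entropy suggests gaze concentrating on regions of similar
    contrast, consistent with entropy dropping as fixations are made."""
    hist, _ = np.histogram(contrast_values, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                     # drop empty bins before taking logs
    return float(-(p * np.log2(p)).sum())
```

For example, contrasts sampled at early, widely scattered fixations spread across many histogram bins (high entropy), while later fixations clustered on similar-contrast regions collapse into few bins (low entropy).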
