Mohammad Nematbakhsh | University of Isfahan
Papers by Mohammad Nematbakhsh
Computer networks for Picture Archiving and Communication Systems (PACS) have evolved over the last few years. Twisted-pair and coaxial cable networks have been used for image transfer at low data rates. Second-generation PACS networks use fiber optic communications at speeds up to 140 Mbps. In this paper, the need for integrated voice, data, and image communications and for network rates over 200 Mbps is presented. A high-speed fiber optic network for integrated PACS services has been designed and simulated in the Computer Engineering Research Laboratory at the University of Arizona. This paper summarizes the characteristics and protocols of this network. The network represents a high-performance application for local PACS environments, and its implementation is technically feasible now.
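To give a feel for the bandwidth argument in this abstract, the sketch below compares image transfer times at 140 Mbps and 200 Mbps. The image sizes and the protocol-efficiency factor are illustrative assumptions, not figures from the paper.

```python
# Back-of-the-envelope transfer times for PACS image traffic.
# Image sizes and the efficiency factor are illustrative assumptions.
IMAGE_SIZES_MBIT = {
    "digitized chest film (2048x2048x12 bit)": 2048 * 2048 * 12 / 1e6,
    "CT slice (512x512x12 bit)": 512 * 512 * 12 / 1e6,
}

def transfer_time_s(size_mbit: float, rate_mbps: float, efficiency: float = 0.7) -> float:
    """Seconds to move one image, assuming a fixed protocol efficiency."""
    return size_mbit / (rate_mbps * efficiency)

for name, size in IMAGE_SIZES_MBIT.items():
    for rate in (140, 200):
        print(f"{name} at {rate} Mbps: {transfer_time_s(size, rate):.2f} s")
```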
European Online Journal of Natural and Social Sciences, Aug 19, 2014
2011 Second International Conference on Intelligent Systems, Modelling and Simulation, 2011
Abstract: The significant feature of a social networking website is the primary reason it is made: connecting people and friends via the Internet. Friend recommender systems are designed to find people, most of whom tend to have the same interests and ...
International Journal of Web Engineering and Technology, 2013
In the Web of Data, linked datasets change over time. These changes include updates to the features and addresses of entities. An address change in an RDF entity causes its corresponding links to be broken. Broken links are one of the major obstacles the Web of Data is facing. Most approaches to this problem attempt to fix broken links at the destination point. These approaches have two major problems: a single point of failure and reliance on the destination data source. In this paper, we introduce a method for fixing broken links that works from the source point of the links and discovers the new address of the detached entity. To this end, we introduce two datasets, which we call "Superior" and "Inferior". Through these datasets, our method creates an exclusive graph structure for each entity that needs to be observed over time. This graph is used to identify and discover the new address of the detached entity. Afterward, the most similar entity, which is a candidate for the detached entity, is deduced and suggested by the algorithm. The proposed model is evaluated with the DBpedia dataset within the domain of "person" entities. The results show that most of the broken links that had referred to a "person" entity in DBpedia were fixed correctly.
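A minimal sketch of the final step described above: the graph recorded for an observed entity is flattened here to a set of (property, value) pairs, and the candidate with the highest similarity is suggested as the new address. Jaccard similarity and the threshold are illustrative assumptions; the paper's actual similarity measure may differ.

```python
# Suggest a new address for a detached entity by comparing its recorded
# description against candidate entities. Jaccard similarity over
# (property, value) pairs is an assumption made for illustration.
from typing import Dict, Optional, Set, Tuple

Pair = Tuple[str, str]  # (property, value)

def jaccard(a: Set[Pair], b: Set[Pair]) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0

def suggest_new_address(observed: Set[Pair],
                        candidates: Dict[str, Set[Pair]],
                        threshold: float = 0.5) -> Optional[str]:
    """Return the URI of the most similar candidate entity, or None below the threshold."""
    best_uri, best_score = None, 0.0
    for uri, pairs in candidates.items():
        score = jaccard(observed, pairs)
        if score > best_score:
            best_uri, best_score = uri, score
    return best_uri if best_score >= threshold else None
```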
Clinica Chimica Acta, 2004
Cardiovascular disease (CVD) is less prevalent in premenopausal women and women receiving estrogen replacement therapy (ERT) than in postmenopausal women or men. It has been proposed that the cardiovascular effects of estrogen are mediated, at least in part, through the ability of estrogen to increase nitric oxide (NO) synthesis. This study investigated the effect of estrogen on serum NO concentrations in normotensive and deoxycorticosterone acetate (DOCA)-salt hypertensive ovariectomized rats. Forty-eight female rats were ovariectomized and randomly divided into six groups. Hypertension was induced by the DOCA-salt method: DOCA was injected subcutaneously at 30 mg/kg of body weight twice a week, with 1% NaCl instead of tap water for drinking throughout the treatment period. Estradiol valerate (Es) was injected i.m. once a week. The groups were as follows: (I) DOCA (4 weeks) and DOCA+Es (6 weeks), (II) DOCA (10 weeks), (III) normal saline (N/S) (4 weeks)+Es (6 weeks), (IV) N/S (10 weeks), (V) DOCA (4 weeks), and (VI) N/S (4 weeks). Serum NO concentrations were measured in groups 1, 3, and 4 before and after treatment. The other groups were used as controls. Results showed that in normotensive animals, serum NO concentrations increased significantly after estrogen treatment (90.20 +/- 18.67 vs. 19.11 +/- 1.78 micromol/l) (p < 0.05). Estrogen also increased serum NO concentrations in DOCA-salt hypertensive rats (73.54 +/- 22.55 vs. 36.94 +/- 10.73 micromol/l) (p = 0.06). Estrogen can increase serum NO concentrations in normotensive and DOCA-salt hypertensive animals, and this may be important in the cardiovascular effects of estrogen.
Biological Psychiatry, 1997
Many previous prevalence studies of polydipsia (PD) have utilized single and often non-biologic measures. In this study we estimated prevalence using specific gravity of urine (SPGU), normalized diurnal weight gain (NDWG), and staff identification (staff ID). Agreement between these two biologic measures and one behavioral measure was assessed. A total of 572 psychiatric inpatients were assessed for SPGU and NDWG, and unit staff were asked to identify PD patients. Positive and negative PD groups were formed separately based on the SPGU, NDWG, and staff ID data. All three measures were collected on the same day. Prevalence estimates for the biologic measures varied: the estimate for PD by SPGU (<1.009 cutoff) was higher (43.4% of the sample) than that by NDWG (>2.5%; 25.4%) or staff ID (21.4%). These prevalence rates did not change substantially after exclusion of medical causes of polyuria. Agreement, assessed by the kappa statistic, was uniformly low among the measures. The weak association between the measures reflects their multidetermined, nonspecific nature and highlights the lack of a diagnostic standard in the field. The observed prevalence rates must therefore be considered rough approximations. Associations between the measures and certain subject characteristics suggest the measures may identify different types of potential PD patients. These different types of patients are discussed, as are other issues in the measurement of PD. The data suggest that estimates of PD are a function of the type of measure used, as even biologic measures vary greatly.
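For readers unfamiliar with the agreement statistic used above, here is a small, self-contained sketch of Cohen's kappa between two binary PD classifications. The example labels are made up for demonstration and are not study data.

```python
# Cohen's kappa for two binary ratings coded 0/1 (e.g., SPGU-positive vs.
# staff-identified). The rating vectors below are hypothetical.
from typing import Sequence

def cohens_kappa(x: Sequence[int], y: Sequence[int]) -> float:
    n = len(x)
    p_observed = sum(a == b for a, b in zip(x, y)) / n
    p_x1, p_y1 = sum(x) / n, sum(y) / n
    p_expected = p_x1 * p_y1 + (1 - p_x1) * (1 - p_y1)
    return (p_observed - p_expected) / (1 - p_expected)

spgu_positive = [1, 1, 0, 0, 1, 0, 0, 1]     # hypothetical
staff_identified = [1, 0, 0, 0, 0, 0, 1, 1]  # hypothetical
print(round(cohens_kappa(spgu_positive, staff_identified), 2))  # low agreement
```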
International Journal of Information Science and Management, Jul 16, 2012
ABSTRACT Service is the core concept of the future Internet, which is named the Internet of Services. This concept refers to software components as well as processing capacity and any other resources that can be offered through the Internet. There is a need for a third party that makes registering and searching for services easy and feasible. Web Service Technology (WST) is one of the current solutions for the Internet of Services that has attracted much attention. Registering and searching for services are made feasible by use of a service directory. The current structure of WST, which is mainly based on UDDI, suffers from some structural shortcomings. From a structural point of view, the service directory is implemented as a centralized node, which is a performance bottleneck and a single point of failure. In this paper, the shortcomings of the current UDDI structure are analyzed through quantitative experiments. Then, in order to address these shortcomings, a structure is presented in which services are interlinked using semantic relations that can be established between WST entities. Different aspects of the structure and its performance enhancement are shown through experiments.
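A simplified sketch of the interlinked alternative described above: each service description keeps semantic links to related services, and a query is answered by following those links from any entry point instead of consulting a single central registry. The class, relation names, and categories are illustrative assumptions, not the paper's data model.

```python
# Service descriptions interlinked by semantic relations; discovery is a
# traversal over the links rather than a lookup in one central UDDI node.
from collections import deque
from dataclasses import dataclass, field
from typing import Dict, List, Set

@dataclass
class Service:
    name: str
    category: str
    links: Dict[str, List[str]] = field(default_factory=dict)  # relation -> service names

def find_services(start: str, category: str, registry: Dict[str, Service]) -> List[str]:
    """Breadth-first traversal over semantic links from one entry point."""
    seen: Set[str] = {start}
    queue, hits = deque([start]), []
    while queue:
        svc = registry.get(queue.popleft())
        if svc is None:
            continue
        if svc.category == category:
            hits.append(svc.name)
        for targets in svc.links.values():
            for t in targets:
                if t not in seen:
                    seen.add(t)
                    queue.append(t)
    return hits

registry = {
    "weather": Service("weather", "forecast", {"relatedTo": ["geo"]}),
    "geo": Service("geo", "geocoding", {"relatedTo": ["weather", "maps"]}),
    "maps": Service("maps", "mapping"),
}
print(find_services("weather", "mapping", registry))  # -> ['maps']
```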
Journal of Theoretical and Applied Electronic Commerce Research, 2007
In this paper, we propose a market model based on reputation and reinforcement learning algorithms for buying and selling agents. Three important factors are considered in the model: quality, price, and delivery time. We take into account the fact that buying agents can have different priorities on the quality, price, and delivery time of their goods, and selling agents adjust their bids according to buying agents' preferences. We also assume that multiple selling agents may offer the same goods with different qualities, prices, and delivery times. In our model, selling agents learn to maximize their expected profits by using reinforcement learning to adjust product quality, price, and delivery time. Each selling agent also models the reputation of buying agents based on their profits for that seller and uses this reputation to offer discounts to reputable buying agents. Buying agents learn to model the reputation of selling agents based on different features of goods (reputation on quality, reputation on price, and reputation on delivery time) to avoid interaction with disreputable selling agents. The model has been implemented with Aglet and tested in a large-sized marketplace. The results show that selling/buying agents that model the reputation of buying/selling agents obtain more satisfaction than selling/buying agents who use only reinforcement learning.
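A toy sketch of the two seller-side mechanisms described above: the seller adjusts its price with a simple Q-learning-style update over discrete price levels, and grants a discount to buyers whose accumulated profit-based reputation exceeds a threshold. The price levels, learning rate, and thresholds are illustrative assumptions, not the paper's parameters.

```python
# Seller agent: reinforcement learning over price levels plus a
# profit-based buyer-reputation discount. All numbers are illustrative.
import random
from collections import defaultdict

class Seller:
    def __init__(self, prices=(8.0, 9.0, 10.0), alpha=0.1, epsilon=0.2):
        self.q = defaultdict(float)                 # expected profit per price level
        self.prices = prices
        self.alpha, self.epsilon = alpha, epsilon
        self.buyer_reputation = defaultdict(float)  # profit earned from each buyer

    def quote(self, buyer_id: str):
        """Pick a price level (epsilon-greedy) and apply a discount to reputable buyers."""
        level = (random.choice(self.prices) if random.random() < self.epsilon
                 else max(self.prices, key=lambda p: self.q[p]))
        discount = 0.95 if self.buyer_reputation[buyer_id] > 50.0 else 1.0
        return level, level * discount              # learn() is keyed on the level

    def learn(self, level: float, buyer_id: str, profit: float) -> None:
        """Update the expected profit of the chosen level and the buyer's reputation."""
        self.q[level] += self.alpha * (profit - self.q[level])
        self.buyer_reputation[buyer_id] += profit
```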
... Behrooz Tork Ladani, Faculty Member, Computer Engineering Department, University of Isfahan, Isfahan, Iran, ladani@eng.ui.ac.ir ... Studies in Informatics and Control, Vol. 11, No. 3. [8] Valentin Robu and Han La Poutré, 2005. ...
Amc, 2007
Many exact and approximate methods have so far been proposed for the construction of an optimal binary search tree. One such method is the use of evolutionary algorithms, with satisfactorily improved cost efficiency. This paper proposes a new genetic algorithm for constructing a near-optimal binary search tree. In this algorithm, a new greedy method is used for the crossover of chromosomes, while a new way of inducing mutation in them is also developed. Practical results show rapid and desirable convergence towards the near-optimal solution. The use of a heuristic to create low-cost chromosomes as the initial offspring, the greediness of the crossover, and the application of elitism in the selection of future-generation chromosomes are the most important factors leading the algorithm to near-optimal solutions at desirably high speed. According to the practical results, increasing the problem size does not cause any considerable difference between the solution obtained from the algorithm and the exact solution.
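A short sketch of the cost such a genetic algorithm minimizes: the expected (probability-weighted) comparison count of a binary search tree for given key-access probabilities. The nested-tuple tree encoding here is for illustration only and is not the chromosome representation used in the paper.

```python
# Expected search cost of a BST: sum over keys of access probability times
# node depth. This is the natural fitness function for the GA described above.
from typing import Optional, Tuple

Tree = Optional[Tuple["Tree", str, "Tree"]]  # (left, key, right) or None

def expected_cost(tree: Tree, prob: dict, depth: int = 1) -> float:
    if tree is None:
        return 0.0
    left, key, right = tree
    return (depth * prob.get(key, 0.0)
            + expected_cost(left, prob, depth + 1)
            + expected_cost(right, prob, depth + 1))

# Example: under skewed access probabilities, a skewed tree can beat a balanced one.
prob = {"a": 0.6, "b": 0.1, "c": 0.3}
balanced = ((None, "a", None), "b", (None, "c", None))
skewed = (None, "a", (None, "b", (None, "c", None)))
print(expected_cost(balanced, prob), expected_cost(skewed, prob))  # 1.9 vs. 1.7
```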
AIP Conference Proceedings, 2009
A Smarter Look to Nature: GENetically Adapted VErsatile Heterogonous Ant Colony System. AIP Conference Proceedings 1117, 180 (2009). Ahmad Zaeri, Kamran Zamanifar, Mohammad Ali Nematbakhsh, Afsaneh Fatemi. Abstract ...
Expert Systems with Applications, 2015
The profiling of background knowledge is essential in scholars' recommender systems. Existing ontology-based profiling approaches employ a pre-built reference ontology as a backbone structure for representing a scholar's preferences. However, such singular reference ontologies lack sufficient ontological concepts and are unable to represent the hierarchical structure of scholars' knowledge. They rather encompass general-purpose topics of the domain and are inaccurate in representing scholars' knowledge. This paper proposes a method for integrating multiple domain taxonomies to build a reference ontology, and exploits this reference ontology for profiling scholars' background knowledge. In our approach, various topics of the Computer Science domain from Web taxonomies are selected, transformed with DBpedia, and merged to construct a reference ontology. We demonstrate the effectiveness of our approach by measuring five quality-based metrics as well as through an application-based evaluation against the developed reference ontology. The empirical results show an improvement over existing reference ontologies in terms of completeness, richness, and coverage.
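A rough sketch of the integration step: topic taxonomies gathered from several Web sources are merged into a single reference hierarchy by unifying nodes with the same normalized label. The DBpedia-based transformation is abstracted away here, and the exact alignment rules in the paper may be richer than simple label matching.

```python
# Merge several parent->child topic taxonomies into one hierarchy by
# normalizing labels and taking the union of edges. Source edge lists
# below are hypothetical examples.
from collections import defaultdict
from typing import Dict, List, Set, Tuple

Edge = Tuple[str, str]  # (parent topic, child topic)

def merge_taxonomies(taxonomies: List[List[Edge]]) -> Dict[str, Set[str]]:
    merged: Dict[str, Set[str]] = defaultdict(set)
    for edges in taxonomies:
        for parent, child in edges:
            merged[parent.strip().lower()].add(child.strip().lower())
    return merged

acm_like = [("computer science", "artificial intelligence"),
            ("artificial intelligence", "machine learning")]
wiki_like = [("computer science", "databases"),
             ("Artificial Intelligence", "Machine Learning")]
print(dict(merge_taxonomies([acm_like, wiki_like])))
```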
Communications in Computer and Information Science, 2014