Anushree Raj - Visvesvaraya Technological University
Papers by Anushree Raj
American Journal of Neuroradiology, 2013
BACKGROUND AND PURPOSE: There is a desire within many institutions to reduce the radiation dose in CTP examinations. The purpose of this study was to simulate dose reduction through the addition of noise in brain CT perfusion examinations and to determine the subsequent effects on quality and quantitative interpretation. MATERIALS AND METHODS: A total of 22 consecutive reference CTP scans were identified from an institutional review board-approved prospective clinical trial, all performed at 80 keV and 190 mAs. Lower-dose scans at 188, 177, 167, 127, and 44 mAs were generated through the addition of spatially correlated noise to the reference scans. A standard software package was used to generate CBF, CBV, and MTT maps. Six blinded radiologists determined quality scores of simulated scans on a Likert scale. Quantitative differences were calculated.
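The dose-reduction simulation described above can be sketched in a few lines. This is a minimal illustration, not the study's method: it assumes quantum noise variance scales inversely with mAs, so emulating a lower-dose scan means adding zero-mean Gaussian noise with standard deviation sigma_ref * sqrt(mAs_ref / mAs_low - 1); the function name, the uncorrelated (rather than spatially correlated) noise, and the example values are all illustrative.

```python
import math
import random

def simulate_lower_dose(image, mas_ref, mas_low, sigma_ref, seed=0):
    """Add zero-mean Gaussian noise to emulate a lower-dose scan.

    Assumes quantum noise variance is inversely proportional to the
    tube current-time product (mAs), so the extra noise needed has
    sigma_add = sigma_ref * sqrt(mas_ref / mas_low - 1).
    (Pixel-wise uncorrelated noise here; the study used spatially
    correlated noise to better match CT noise texture.)
    """
    rng = random.Random(seed)
    sigma_add = sigma_ref * math.sqrt(mas_ref / mas_low - 1.0)
    return [[px + rng.gauss(0.0, sigma_add) for px in row] for row in image]

# Emulate a 44 mAs scan from a hypothetical 190 mAs reference slice
# (uniform 40 HU patch, reference noise sigma of 5 HU assumed).
reference = [[40.0] * 4 for _ in range(4)]
low_dose = simulate_lower_dose(reference, mas_ref=190, mas_low=44, sigma_ref=5.0)
```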
International Journal for Digital Society, 2010
CAPTCHAs are employed on web systems to differentiate between human users and automated programs which indulge in spamming and other fraudulent activities. CAPTCHAs currently in use have been broken and rendered ineffective as a result of continuous evolution in CAPTCHA breaking. Thus, there is a need to employ stronger CAPTCHAs to keep these breaking attacks at bay while retaining ease of implementation on websites and ease of use for humans. In this paper, we introduce Sequenced Picture Captcha (SPC). Each CAPTCHA round comprises object pictures, each of which may be accompanied by a Tag. The user is required to determine the logical sequence of the displayed object pictures based on the Tags. We identify two generation schemes: one in which object pictures indicate an inherent sequencing, and one in which explicit Tags are displayed for determining the sequencing. We also analyze both schemes. The advantages of high user convenience and simplicity of operation are retained in both generation types.
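The round structure described above (shuffled tagged pictures, user must recover the logical order) can be sketched as follows. This is an illustrative mock-up, not the paper's implementation: the butterfly life-cycle pictures and integer tags are invented stand-ins for the object pictures and Tags.

```python
import random

# Hypothetical tagged object pictures; the Tag (here an integer) carries
# the logical ordering, e.g. stages of a butterfly's life cycle.
TAGGED_PICTURES = [("egg", 1), ("caterpillar", 2), ("chrysalis", 3), ("butterfly", 4)]

def generate_round(seed=None):
    """Shuffle the pictures for display; the expected answer is the tag order."""
    shown = TAGGED_PICTURES[:]
    random.Random(seed).shuffle(shown)
    answer = [name for name, _ in sorted(shown, key=lambda p: p[1])]
    return [name for name, _ in shown], answer

def verify(user_sequence, answer):
    """A round passes only if the user reproduced the full logical sequence."""
    return user_sequence == answer

shown, answer = generate_round(seed=42)
```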
Scalable Kernel Methods via Doubly Stochastic Gradients
The general perception is that kernel methods are not scalable, so neural nets become the choice for large-scale nonlinear learning problems. Have we tried hard enough for kernel methods? In this paper, we propose an approach that scales up kernel methods using a novel concept called "doubly stochastic functional gradients". Based on the fact that many kernel methods can be expressed as convex optimization problems, our approach solves the optimization problems by making two unbiased stochastic approximations to the functional gradient, one using random training points and another using random features associated with the kernel, and performing descent steps with this noisy functional gradient. Our algorithm is simple, requires no commitment to a preset number of random features, and allows the flexibility of the function class to grow as we see more incoming data in the streaming setting. We demonstrate that a function learned by this procedure after t iterations converges to the...
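The "doubly stochastic" idea, one random training point plus one random kernel feature per descent step, can be sketched for 1-D regression with the Gaussian kernel. This is a simplified, unregularized toy version under stated assumptions (random Fourier features cos(wx + b) for the RBF kernel, step size decaying as 1/sqrt(t)), not the paper's exact algorithm or its convergence-rate construction.

```python
import math
import random

def doubly_sgd(data, iters=2000, step=0.5, bandwidth=1.0, seed=0):
    """Doubly stochastic functional gradient sketch for 1-D RBF regression.

    Each iteration draws one random training point AND one random Fourier
    feature of the Gaussian kernel, then takes a descent step on the
    squared error; the model is the growing list of per-iteration
    coefficients alpha_t (no preset number of random features).
    """
    rng = random.Random(seed)
    feats, alphas = [], []

    def predict(x):
        return sum(a * math.sqrt(2.0) * math.cos(w * x + b)
                   for a, (w, b) in zip(alphas, feats))

    for t in range(1, iters + 1):
        x, y = rng.choice(data)                 # random training point
        w = rng.gauss(0.0, 1.0 / bandwidth)     # random Fourier frequency
        b = rng.uniform(0.0, 2.0 * math.pi)     # random phase
        gamma = step / math.sqrt(t)             # decaying step size
        err = predict(x) - y
        feats.append((w, b))
        alphas.append(-gamma * err * math.sqrt(2.0) * math.cos(w * x + b))
    return predict

# Toy usage: learn sin(x) on [-3, 3] in a streaming fashion.
data = [(x / 10.0, math.sin(x / 10.0)) for x in range(-30, 31)]
f = doubly_sgd(data)
```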
Genes & Development, 2013
Recently, researchers have uncovered the presence of many long noncoding RNAs (lncRNAs) in embryonic stem cells and believe they are important regulators of the differentiation process. However, there are only a few examples explicitly linking lncRNA activity to transcriptional regulation. Here, we used transcript counting and spatial localization to characterize a lncRNA (dubbed linc-HOXA1) located ∼50 kb from the Hoxa gene cluster in mouse embryonic stem cells. Single-cell transcript counting revealed that linc-HOXA1 and Hoxa1 RNA are highly variable at the single-cell level and that whenever linc-HOXA1 RNA abundance was high, Hoxa1 mRNA abundance was low and vice versa. Knockdown analysis revealed that depletion of linc-HOXA1 RNA at its site of transcription increased transcription of the Hoxa1 gene cis to the chromosome and that exposure of cells to retinoic acid can disrupt this interaction. We further showed that linc-HOXA1 RNA represses Hoxa1 by recruiting the protein PURB as...
Journal of Global Research in Computer Science, 2012
Abstract: This electronic document gives the study of a bidirectional model of smell as a medium and its applications in the real world. This paper presents a prototype system, which uses smell as a medium to communicate information bi-directionally in the ...
Proceedings of the National Academy of Sciences, 2005
The mechanism of transport of mRNA-protein (mRNP) complexes from transcription sites to nuclear pores has been the subject of many studies. Using molecular beacons to track single mRNA molecules in living cells, we have characterized the diffusion of mRNP complexes in the nucleus. The mRNP complexes move freely by Brownian diffusion at a rate that assures their dispersion throughout the nucleus before they exit into the cytoplasm, even when the transcription site is located near the nuclear periphery. The diffusion of mRNP complexes is restricted to the extranucleolar, interchromatin spaces. When mRNP complexes wander into dense chromatin, they tend to become stalled. Although the movement of mRNP complexes occurs without the expenditure of metabolic energy, ATP is required for the complexes to resume their motion after they become stalled. This finding provides an explanation for a number of observations in which mRNA transport appeared to be an enzymatically facilitated process.
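The hallmark of the free Brownian diffusion described above is that mean squared displacement (MSD) grows linearly with time. A minimal random-walk simulation illustrates this signature; the parameters are arbitrary and the model deliberately omits the stalling in dense chromatin that the study observed (stalling would appear as sub-linear MSD growth).

```python
import random

def random_walk_msd(steps, walkers=2000, step_sigma=1.0, seed=0):
    """Mean squared displacement of free 2-D Brownian walkers.

    Each walker takes `steps` Gaussian steps from the origin; for
    unobstructed diffusion the ensemble MSD is ~ 2 * steps * sigma^2,
    i.e. linear in time.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(walkers):
        x = y = 0.0
        for _ in range(steps):
            x += rng.gauss(0.0, step_sigma)
            y += rng.gauss(0.0, step_sigma)
        total += x * x + y * y
    return total / walkers

msd_10 = random_walk_msd(10)
msd_40 = random_walk_msd(40)   # 4x the time -> roughly 4x the MSD
```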
Development of Big data anonymization framework using DNA Computing
2022 International Conference on Artificial Intelligence and Data Engineering (AIDE)
International Journal of Recent Technology and Engineering (IJRTE), 2020
ETL stands for extraction, transformation and loading: extraction retrieves active data from the source; transformation involves data cleansing, data filtering, data validation and finally the application of certain rules; and loading stores the data back to the destination repository where it has to finally reside. Pig is one of the most important tools that can be applied in the Extract, Transform and Load (ETL) process. It helps in applying the ETL approach to large sets of data. Initially Pig loads the data, and further is able to perform predictions, repetitions, expected conversions and further transformations. UDFs can be used to perform more complex algorithms during the transformation phase. The huge data processed by Pig could be stored back in HDFS. In this paper we demonstrate the ETL process using Pig in Hadoop. Here we demonstrate how the files in HDFS are extracted, transformed and loaded back to HDFS using Pig. We extend the functionality of Pig Latin with Python UD...
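The LOAD → FILTER/FOREACH (with a UDF) → STORE flow that a Pig script expresses can be mirrored in plain Python to make the three phases concrete. This is an analogy only, with invented field names and data: in the paper's setting the extract and load steps would read from and write to HDFS, and `clean_age` stands in for a registered Python UDF.

```python
import csv
import io

def clean_age(value):
    """A UDF-style helper: validate and normalize one field (hypothetical rule)."""
    try:
        age = int(value)
    except ValueError:
        return None
    return age if 0 < age < 120 else None

def etl(source_csv):
    """Extract rows, transform (cleanse/filter/validate), load to a new CSV.

    Mirrors a Pig script's LOAD -> FOREACH/FILTER (applying a UDF) -> STORE
    pipeline; here plain strings replace HDFS files for illustration.
    """
    rows = list(csv.reader(io.StringIO(source_csv)))           # extract (LOAD)
    cleaned = [(name.strip().title(), clean_age(age))          # transform (FOREACH + UDF)
               for name, age in rows]
    valid = [r for r in cleaned if r[1] is not None]           # validate (FILTER)
    out = io.StringIO()                                        # load (STORE)
    csv.writer(out).writerows(valid)
    return out.getvalue()

result = etl("alice ,34\nbob,abc\n carol,29\n")
```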
CERN European Organization for Nuclear Research - Zenodo, Oct 27, 2022
Speech is the most effective means for humans to communicate their ideas and emotions across a variety of languages. Every language has a different set of speech characteristics. The tempo and dialect vary from person to person even when speaking the same language. For some listeners, this makes it difficult to understand the message being delivered. Long speeches can be challenging to follow at times because of factors such as inconsistent pronunciation and tempo. The development of technology that enables the recognition and transcription of voice into text is aided by speech recognition, an interdisciplinary area of computational linguistics. The most crucial information is taken from a text source and adequately summarized by text summarization.
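The summarization step mentioned above, taking the most crucial information from a text source, can be illustrated with a minimal extractive summarizer. This is a generic frequency-based sketch, not the paper's system: it scores each sentence by the document-wide frequency of its non-stopword tokens and keeps the top-scoring ones.

```python
import re
from collections import Counter

# A tiny illustrative stopword list; a real system would use a fuller one.
STOPWORDS = {"the", "a", "an", "is", "of", "to", "and", "in", "it", "for"}

def summarize(text, n_sentences=1):
    """Pick the n highest-scoring sentences by content-word frequency."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)
    scored = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())
                          if w not in STOPWORDS),
        reverse=True)
    top = set(scored[:n_sentences])
    return " ".join(s for s in sentences if s in top)  # keep original order

doc = ("Speech recognition converts speech to text. "
       "Speech varies in tempo and dialect. "
       "Summarization keeps the most important text.")
summary = summarize(doc, n_sentences=1)
```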
CERN European Organization for Nuclear Research - Zenodo, Oct 27, 2022
International Journal for Research in Applied Science and Engineering Technology, 2020
Speech recognition is the method of translating spoken words into text. The speech recognition process digitizes the sound waves into basic language units. Speech recognition is one of the most used technologies in today's life. This technology can be seen everywhere around a person, for example in phones, games, etc. The main purpose of this paper is to present the knowledge and the technology behind this superb invention.
Novel DNA Cryptosystem using Genetic Operators
Design Engineering, Aug 11, 2021
Privacy-preserving data mining has numerous applications which are naturally supposed to be "privacy-violating" applications. The key is to design methods which continue to be effective without compromising security. Data mining is the process of analyzing data. Data privacy concerns the collection and distribution of data. Privacy issues arise in different areas such as health care, intellectual property, biological data, financial transactions, etc. Protection of data is a very challenging task during data transfer. Sensitive information needs protection. There are two major kinds of attacks against privacy, namely record linkage and attribute linkage attacks. Researchers have proposed methods such as k-anonymity, l-diversity, and t-closeness for data privacy. The k-anonymity method preserves privacy against the record linkage attack alone. Index Terms: Anonymization, Privacy Preserving, k-anonymity, PPDM, PPDP
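The k-anonymity property mentioned above has a simple operational check: every combination of quasi-identifier values must appear at least k times in the released table. A minimal sketch, with invented example records and already-generalized values:

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """True if every quasi-identifier combination occurs at least k times.

    k-anonymity defends against record linkage: each released record is
    indistinguishable from at least k-1 others on the quasi-identifiers
    (it does not, by itself, prevent attribute linkage).
    """
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in combos.values())

# Hypothetical released rows with generalized zip codes and age ranges.
rows = [
    {"zip": "575*", "age": "20-30", "disease": "flu"},
    {"zip": "575*", "age": "20-30", "disease": "cold"},
    {"zip": "576*", "age": "30-40", "disease": "flu"},
]
ok_2 = is_k_anonymous(rows, ["zip", "age"], k=2)  # third row is unique
```

Note that the first two rows share one sensitive value pattern only by construction; achieving protection against attribute linkage as well is what motivates l-diversity and t-closeness.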
International Journal for Research in Applied Science and Engineering Technology, 2019
The technology revolution has been facilitating millions of people by generating tremendous data, resulting in big data. It is well known that massive amounts of data have been generated continuously at extraordinary and ever increasing scales. Big Data is a new term used to identify datasets that are hard to manage due to their large size and complexity. Big Data is now rapidly expanding in all science and engineering domains, including the physical, biological and biomedical sciences. Big Data mining is the capability of extracting useful information from these large datasets or streams of data, which, due to their volume, variability, and velocity, was not possible before. The Big Data challenge is becoming one of the most exciting opportunities for the coming years. This survey paper includes information about what big data is, big data sources, data mining, Big Data mining and the challenges.
International Journal for Research in Applied Science and Engineering Technology, 2019
In this information age, a huge amount of data is generated every moment through various sources. This enormous data is beyond the processing capability of traditional data management systems to manage and analyse within a specified time span. This huge amount of data is referred to as Big Data. Big Data faces numerous challenges in various operations on data such as capturing, analysis, searching, sharing, filtering, etc. Hadoop has shown the way for various enterprises in big data management. Big Data Hadoop deals with the implementation of various industry use cases. The Hadoop framework has emerged as the most effective and widely adopted framework for Big Data processing. In this paper we discuss the implementation analysis of the MapReduce, Pig and Hive approaches.
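The MapReduce model that Hadoop implements can be illustrated with the classic word-count example: a map phase emits key-value pairs, a shuffle groups them by key, and a reduce phase aggregates each group. This is a single-process Python sketch of the programming model, not Hadoop itself; in a real cluster the framework distributes each phase across nodes.

```python
from collections import defaultdict

def map_phase(line):
    """Mapper: emit (word, 1) pairs for one input line."""
    return [(word.lower(), 1) for word in line.split()]

def shuffle(pairs):
    """Group intermediate pairs by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reducer: sum the counts for each word."""
    return {word: sum(vals) for word, vals in groups.items()}

lines = ["big data needs Hadoop", "Hadoop processes big data"]
pairs = [kv for line in lines for kv in map_phase(line)]
counts = reduce_phase(shuffle(pairs))
```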
Encoding and decoding algorithms to store Big data into DNA
Abstract - The latest outburst of digital data has led to an urge to create superior data storage architectures with the capacity to store huge amounts of data. DNA as data storage is one of the promising solutions for the massive amount of information generated and the need to store data for long-lasting periods of time. DNA as a bio-molecule based memory device, along with big data storage in DNA, has shown a new direction towards DNA computing to solve computational problems. This paper critically analyzes the various methods used for encoding data onto DNA through FASTA format files, which undergo biological procedures to save data in DNA and are stored in the NCBI database. The same FASTA file is retrieved by tracking the GI accession number of the file to be accessed and further decoded back into its original text format. The results show that our implementations can successfully transform and recover the original data to and from DNA after undergoing the encoding and decoding procedure.
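One common encoding family the kind of analysis above covers maps two bits to each nucleotide, so every byte becomes four bases. The sketch below shows such a round trip; the particular bit-to-base table is one conventional choice, not necessarily the mapping used in the paper, and real schemes add constraints (e.g. limiting homopolymer runs) that are omitted here.

```python
# One conventional 2-bits-per-base mapping (illustrative choice).
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {v: k for k, v in BASE_FOR_BITS.items()}

def encode_to_dna(text):
    """Map each byte of the UTF-8 text to four bases, two bits per base."""
    bits = "".join(f"{b:08b}" for b in text.encode("utf-8"))
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode_from_dna(dna):
    """Invert the mapping: every four bases decode back to one byte."""
    bits = "".join(BITS_FOR_BASE[base] for base in dna)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

dna = encode_to_dna("Big data")
round_trip = decode_from_dna(dna)
```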
International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2019
Anonymization techniques are enforced to provide privacy protection for data published on the cloud. These techniques include various algorithms to generalize or suppress the data. Top Down Specification in k-anonymity is the best generalization algorithm for data anonymization. As data on the cloud increases, data analysis becomes very tedious. The MapReduce framework can be adapted to process these huge amounts of Big Data. We implement a generalized method using the Map phase and Reduce phase for data anonymization on the cloud in the two different phases of Top Down Specification.
International Journal for Research in Applied Science and Engineering Technology, 2019
The reduction of emissions from deforestation and forest degradation (REDD) constitutes part of the international climate agreements and contributes to the Sustainable Development Goals. This research is motivated by the risks associated with future CO2 price uncertainty in the context of the offsetting of carbon emissions by regulated entities. The research asked whether it is possible to reduce these financial risks. In this study, we consider the bilateral interaction of a REDD supplier and a greenhouse gas (GHG)-emitting energy producer in an incomplete emission offsets market. Within this setting, we explore an innovative financial instrument, flobsion, a flexible option with benefit-sharing. For the quantitative assessment, we used a research method based on a two-stage stochastic technological portfolio optimization model established in earlier studies. First, we obtain an important result: the availability of REDD offsets does not increase the optimal emissions of the electricity producer under any future CO2 price realization. Moreover, addressing concerns about a possible "crowding-out" effect of REDD-based offsets, we demonstrate that the emissions and offsetting cost will decrease and increase, respectively. Second, we demonstrate the flexibility of the proposed instrument by analyzing flobsion contracts with respect to the benefit-sharing ratio and strike price within the risk-adjusted supply and demand framework. Finally, we perform a sensitivity analysis with respect to CO2 price distributions and the opportunity costs of the forest owner supplying REDD offsets. Our results show that flobsion's flexibility has advantages compared to a standard option, which can help GHG-emitting energy producers with managing their compliance risks, while at the same time facilitating the development of REDD programs.
In this study we limited our analysis to the case of the same CO2 price distributions foreseen by both parties; flobsion pricing under asymmetric information could be considered in the future.
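The role of the benefit-sharing ratio and strike price can be illustrated with a Monte Carlo valuation of a stylized contract. This is a heavily simplified, hypothetical payoff (holder keeps a `share` fraction of the option benefit when the realized CO2 price exceeds the strike); the paper's actual flobsion contract and its two-stage portfolio model are richer than this, and the price distribution below is invented.

```python
import random

def expected_flobsion_value(strike, share, price_draws):
    """Monte Carlo value of a stylized 'flobsion' for the option holder.

    Assumed payoff (illustrative, not the paper's exact contract): exercise
    when the realized CO2 price exceeds the strike, with the holder keeping
    a `share` fraction of the benefit and the REDD supplier the remainder.
    """
    payoffs = [share * max(p - strike, 0.0) for p in price_draws]
    return sum(payoffs) / len(payoffs)

# Hypothetical CO2 price scenarios (truncated Gaussian, mean 40, sd 15).
rng = random.Random(1)
draws = [max(0.0, rng.gauss(40.0, 15.0)) for _ in range(10000)]
value_half_share = expected_flobsion_value(strike=30.0, share=0.5, price_draws=draws)
value_full_share = expected_flobsion_value(strike=30.0, share=1.0, price_draws=draws)
```

With this simplified payoff the holder's value is linear in the benefit-sharing ratio, which is why the ratio and strike can be traded off against each other when the two parties negotiate.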
Adaptation and innovation are extremely important to the manufacturing industry. This development should lead to sustainable manufacturing using new technologies. To promote sustainability, smart production requires global perspectives of smart production application technology. In this regard, thanks to intensive research efforts in the field of artificial intelligence (AI), a number of AI-based techniques, such as machine learning, have already been established in the industry to achieve sustainable manufacturing. Thus, the aim of the present research was to systematically analyze the scientific literature relating to the application of artificial intelligence and machine learning (ML) in industry. In fact, with the introduction of Industry 4.0, artificial intelligence and machine learning are considered the driving force of the smart factory revolution. The purpose of this review was to classify the literature by publication year, authors, scientific sector, country, institution, and keywords. The analysis was done using the Web of Science and SCOPUS databases, and UCINET and NVivo 12 software were used to complete it. A literature review of ML and AI empirical studies was carried out to highlight the evolution of the topic before and after the introduction of Industry 4.0, from 1999 to now. Eighty-two articles were reviewed and classified. A first interesting result is the greater number of works published by the USA and the increasing interest after the birth of Industry 4.0.
American Journal of Neuroradiology, 2013
BACKGROUND AND PURPOSE: There is a desire within many institutions to reduce the radiation dose i... more BACKGROUND AND PURPOSE: There is a desire within many institutions to reduce the radiation dose in CTP examinations. The purpose of this study was to simulate dose reduction through the addition of noise in brain CT perfusion examinations and to determine the subsequent effects on quality and quantitative interpretation. MATERIALS AND METHODS: A total of 22 consecutive reference CTP scans were identified from an institutional review board-approved prospective clinical trial, all performed at 80 keV and 190 mAs. Lower-dose scans at 188, 177, 167, 127, and 44 mAs were generated through the addition of spatially correlated noise to the reference scans. A standard software package was used to generate CBF, CBV, and MTT maps. Six blinded radiologists determined quality scores of simulated scans on a Likert scale. Quantitative differences were calculated.
International Journal for Digital Society, 2010
CAPTCHAs are employed on web systems to differentiate between human users and automated programs ... more CAPTCHAs are employed on web systems to differentiate between human users and automated programs which indulge in spamming and other fraudulent activities. CAPTCHAs currently in use have been broken and rendered ineffective as a result of continuous evolution in CAPTCHA breaking. Thus, there is a need to employ stronger CAPTCHAs to keep these breaking attacks at bay while retaining ease of implementation on websites and ease of use for humans. In this paper, we introduce Sequenced Picture Captcha (SPC). Each CAPTCHA round comprises of object pictures, each of which may be accompanied by a Tag. The user is required to determine the logical sequence of the displayed object pictures based on the Tags. We identify two generation schemes-one in which object pictures indicate an inherent sequencing and one in which explicit Tags are displayed for determining the sequencing. We also analyze all these schemes. The advantages of high user convenience and simplicity of operation are retained in both generation types.
Scalable Kernel Methods via Doubly Stochastic Gradients
The general perception is that kernel methods are not scalable, so neural nets become the choice ... more The general perception is that kernel methods are not scalable, so neural nets become the choice for large-scale nonlinear learning problems. Have we tried hard enough for kernel methods? In this paper, we propose an approach that scales up kernel methods using a novel concept called doubly stochastic functional gradients''. Based on the fact that many kernel methods can be expressed as convex optimization problems, our approach solves the optimization problems by making two unbiased stochastic approximations to the functional gradient---one using random training points and another using random features associated with the kernel---and performing descent steps with this noisy functional gradient. Our algorithm is simple, need no commit to a preset number of random features, and allows the flexibility of the function class to grow as we see more incoming data in the streaming setting. We demonstrate that a function learned by this procedure after t iterations converges to the...
Genes & Development, 2013
Recently, researchers have uncovered the presence of many long noncoding RNAs (lncRNAs) in embryo... more Recently, researchers have uncovered the presence of many long noncoding RNAs (lncRNAs) in embryonic stem cells and believe they are important regulators of the differentiation process. However, there are only a few examples explicitly linking lncRNA activity to transcriptional regulation. Here, we used transcript counting and spatial localization to characterize a lncRNA (dubbed linc-HOXA1) located ∼50 kb from the Hoxa gene cluster in mouse embryonic stem cells. Single-cell transcript counting revealed that linc-HOXA1 and Hoxa1 RNA are highly variable at the single-cell level and that whenever linc-HOXA1 RNA abundance was high, Hoxa1 mRNA abundance was low and vice versa. Knockdown analysis revealed that depletion of linc-HOXA1 RNA at its site of transcription increased transcription of the Hoxa1 gene cis to the chromosome and that exposure of cells to retinoic acid can disrupt this interaction. We further showed that linc-HOXA1 RNA represses Hoxa1 by recruiting the protein PURB as...
Journal of Global Research in Computer Science, 2012
Abstract: This electronic document gives the study of a bidirectional model, of smell as a media ... more Abstract: This electronic document gives the study of a bidirectional model, of smell as a media and it‟ s applications in the real world. This paper presents a prototype system, which uses smell as a medium to communicate information bi-directionally in the ...
Proceedings of the National Academy of Sciences, 2005
The mechanism of transport of mRNA-protein (mRNP) complexes from transcription sites to nuclear p... more The mechanism of transport of mRNA-protein (mRNP) complexes from transcription sites to nuclear pores has been the subject of many studies. Using molecular beacons to track single mRNA molecules in living cells, we have characterized the diffusion of mRNP complexes in the nucleus. The mRNP complexes move freely by Brownian diffusion at a rate that assures their dispersion throughout the nucleus before they exit into the cytoplasm, even when the transcription site is located near the nuclear periphery. The diffusion of mRNP complexes is restricted to the extranucleolar, interchromatin spaces. When mRNP complexes wander into dense chromatin, they tend to become stalled. Although the movement of mRNP complexes occurs without the expenditure of metabolic energy, ATP is required for the complexes to resume their motion after they become stalled. This finding provides an explanation for a number of observations in which mRNA transport appeared to be an enzymatically facilitated process.
Development of Big data anonymization framework using DNA Computing
2022 International Conference on Artificial Intelligence and Data Engineering (AIDE)
International Journal of Recent Technology and Engineering (IJRTE), 2020
ETL stands for extraction, transformation and loading, where extraction is done to active data fr... more ETL stands for extraction, transformation and loading, where extraction is done to active data from the source, transformation involve data cleansing, data filtering, data validation and finally application of certain rules and loading stores back the data to the destination repository where it has to finally reside. Pig is one of the most important to which could be applied in Extract, Transform and Load (ETL) process. It helps in applying the ETL approach to the large set of data. Initially Pig loads the data, and further is able to perform predictions, repetitions, expected conversions and further transformations. UDFs can be used to perform more complex algorithms during the transformation phase. The huge data processed by Pig, could be stored back in HDFS. In this paper we demonstrate the ETL process using Pig in Hadoop. Here we demonstrate how the files in HDFS are extracted, transformed and loaded back to HDFS using Pig. We extend the functionality of Pig Latin with Python UD...
CERN European Organization for Nuclear Research - Zenodo, Oct 27, 2022
Speech is the most effective means for humans to communicate their ideas and emotions across a va... more Speech is the most effective means for humans to communicate their ideas and emotions across a variety of languages. Every language has a different set of speech characteristics. The tempo and dialect vary from person to person even when speaking the same language. For some folks, this makes it difficult to understand the message being delivered. Long speeches can be challenging to follow at times because of things like inconsistent pronunciation, tempo, and other factors. The development of technology that enables the recognition and transcription of voice into text is aided by speech recognition, an interdisciplinary area of computational linguistics. The most crucial information is taken from a text source and adequately summarized by text summarization.
CERN European Organization for Nuclear Research - Zenodo, Oct 27, 2022
International Journal for Research in Applied Science and Engineering Technology, 2020
Speech recognition is the method of translating spoken words into text. The speech recognition pr... more Speech recognition is the method of translating spoken words into text. The speech recognition process digitizes the sound waves into basic language units. Speech recognition is one of the most used technologies in today's life. This technology can be seen everywhere around a person, for example in phones, games, etc. The main purpose of the paper is to know the knowledge and the technology behind this superb invention.
Novel DNA Cryptosystem using Genetic Operators
Design Engineering, Aug 11, 2021
Privacy-preserving data mining has numerous applications which are naturally supposed to be “priv... more Privacy-preserving data mining has numerous applications which are naturally supposed to be “privacyviolating” applications. The key is to design methods which continue to be effective, without compromising security. Data mining is the process of analyzing data. Data Privacy is collection of data and distribution of data. Privacy issues arise in different area such as health care, intellectual property, biological data, financial transaction etc. Protection of data is a very challenging task while data transfer. Sensitive information needs protection. There are two kinds of major attacks against privacy namely record linkage and attribute linkage attacks. Research have proposed some methods namely kanonymity, l-diversity, t-closeness for data privacy. k-anonymity method preserves the privacy against record linkage attack alone. IndexTerms: Anonymization, Privacy Preserving, k-anonymity, PPDM, PPDP
International Journal for Research in Applied Science and Engineering Technology, 2019
Technology revolution has been facilitating millions of people by generating tremendous data, res... more Technology revolution has been facilitating millions of people by generating tremendous data, resulting in big data. It has been a distinct knowledge that massive amount of data have been generated continuously at extraordinary and ever increasing scales. Big Data is a new term used to identify the datasets that due to their large size and complexity. Big Data are now rapidly expanding in all science and engineering domains, including physical, biological and biomedical sciences. Big Data mining is the capability of extracting useful information from these large datasets or streams of data, that due to its volume, variability, and velocity, it was not possible before to do it. The Big Data challenge is becoming one of the most exciting opportunities for the next years. This survey paper includes the information about what is big data, big data sources, Data mining, Big data mining and the challenges.
International Journal for Research in Applied Science and Engineering Technology, 2019
In this era of information age, a huge amount of data generates every moment through various sour... more In this era of information age, a huge amount of data generates every moment through various sources. This enormous data is beyond the processing capability of traditional data management system to manage and analyse the data in a specified time span. This huge amount of data refers to Big Data. Big Data faces numerous challenges in various operations on data such as capturing data, data analysis, data searching, data sharing, data filtering etc. HADOOP has showed a big way of various enterprises for big data management. Big data hadoop deals with the implementation of various industry use cases. Hadoop framework has been emerged as the most effective and widely adopted framework for Big Data processing. In this paper we discuss the implementation analysis of MapReduce, Pig and Hive approaches.
Encoding and decoding algorithms to store Big data into DNA
<strong><em>Abstract</em> - The latest outburst of digital data has always led ... more <strong><em>Abstract</em> - The latest outburst of digital data has always led an urge to create superior data storage architectures with the capacity to store huge amounts of data. DNA as data storage is one of the promising solutions for massive amount of information generated and the need to store data for long-lasting period of time. DNA as a bio-molecule based memory device beside with big data storage in DNA has shown a new direction towards DNA computing to solve computational problems. This paper critically analyzes the various methods used for encoding data onto DNA through the FASTA format files which undergoes biological procedures to save data in DNA and stored in NCBI database. The same FASTA file is retrieved back tracking the GI accession number of the file to be accessed and further decoded back into its original text format. The results show that our implementations can successfully transform and recover the original data to and from the DNA after undergoing the encoding and decoding procedure</strong>
International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2019
Anonymization techniques are enforced to provide privacy protection for the data published on clo... more Anonymization techniques are enforced to provide privacy protection for the data published on cloud. These techniques include various algorithms to generalize or suppress the data. Top Down Specification in k anonymity is the best generalization algorithm for data anonymization. As the data increases on cloud, data analysis becomes very tedious. Map reduce framework can be adapted to process on these huge amount of Big Data. We implement generalized method using Map phase and Reduce Phase for data anonymization on cloud in two different phases of Top Down Specification.
International Journal for Research in Applied Science and Engineering Technology, 2019
The reduction of emissions from deforestation and forest degradation (REDD) constitutes part of the international climate agreements and contributes to the Sustainable Development Goals. This research is motivated by the risks associated with future CO2 price uncertainty in the context of the offsetting of carbon emissions by regulated entities. The research asked whether it is possible to reduce these financial risks. In this study, we consider the bilateral interaction of a REDD supplier and a greenhouse gas (GHG)-emitting energy producer in an incomplete emission offsets market. Within this setting, we explore an innovative financial instrument, the flobsion, a flexible option with benefit-sharing. For the quantitative assessment, we used a research method based on a two-stage stochastic technological portfolio optimization model established in earlier studies. First, we obtain an important result that the availability of REDD offsets does not increase the optimal emissions of the electricity producer under any future CO2 price realization. Moreover, addressing concerns about a possible "crowding-out" effect of REDD-based offsets, we demonstrate that the emissions and offsetting cost will decrease and increase, respectively. Second, we demonstrate the flexibility of the proposed instrument by analyzing flobsion contracts with respect to the benefit-sharing ratio and strike price within the risk-adjusted supply and demand framework. Finally, we perform a sensitivity analysis with respect to CO2 price distributions and the opportunity costs of the forest owner supplying REDD offsets. Our results show that flobsion's flexibility has advantages compared to a standard option, which can help GHG-emitting energy producers with managing their compliance risks, while at the same time facilitating the development of REDD programs.
In this study, we limited our analysis to the case in which both parties foresee the same CO2 price distributions; flobsion pricing under asymmetric information could be considered in future work.
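The contrast between a standard option and a benefit-sharing contract can be illustrated with a small Monte Carlo sketch. This is a heavily simplified, hypothetical construction, not the paper's two-stage portfolio model: it assumes a uniform CO2 price distribution, a standard-option payoff of max(price - strike, 0), and a flobsion payoff equal to a fixed benefit-sharing fraction of that amount, with the remainder accruing to the REDD supplier.

```python
import random

def expected_payoffs(strike, share, n_draws=100_000, seed=42):
    # Monte Carlo over an assumed uniform CO2 price on [0, 100] $/t.
    # Standard option: the holder keeps the full payoff max(price - strike, 0).
    # Flobsion (as assumed here): the holder keeps only the benefit-sharing
    # fraction `share` of that payoff; the REDD supplier keeps the rest.
    rng = random.Random(seed)
    total_option = total_flobsion = 0.0
    for _ in range(n_draws):
        price = rng.uniform(0.0, 100.0)
        payoff = max(price - strike, 0.0)
        total_option += payoff
        total_flobsion += share * payoff
    return total_option / n_draws, total_flobsion / n_draws

opt, flob = expected_payoffs(strike=40.0, share=0.5)
```

Under these assumptions the flobsion's expected value to the holder is the sharing fraction times the standard option's, which is what lets the parties trade off the upfront price against the retained share; the paper's actual analysis embeds this trade-off in a risk-adjusted two-stage optimization rather than a bare expectation.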
Adaptation and innovation are extremely important to the manufacturing industry, and this development should lead to sustainable manufacturing using new technologies. To promote sustainability, smart production requires global perspectives on smart production application technology. In this regard, thanks to intensive research efforts in the field of artificial intelligence (AI), a number of AI-based techniques, such as machine learning (ML), have already been established in industry to achieve sustainable manufacturing. Thus, the aim of the present research was to systematically analyze the scientific literature on the application of artificial intelligence and machine learning in industry. Indeed, with the introduction of Industry 4.0, artificial intelligence and machine learning are considered the driving force of the smart factory revolution. The purpose of this review was to classify the literature by publication year, authors, scientific sector, country, institution, and keywords. The analysis was carried out using the Web of Science and SCOPUS databases, and UCINET and NVivo 12 software were used to complete it. A literature review of ML and AI empirical studies was conducted to highlight the evolution of the topic before and after the introduction of Industry 4.0, from 1999 to the present. Eighty-two articles were reviewed and classified. A first interesting result is the greater number of works published by the USA and the increasing interest after the birth of Industry 4.0.