Objectifying the Subjective: Fundaments and Applications of Soft Metrology
Related papers
Soft metrology based on machine learning: A review
Measurement Science and Technology
Soft metrology has been defined as a set of measurement techniques and models that allow the objective quantification of properties usually determined by human perception, such as smell, sound or taste. The development of a soft metrology system requires the measurement of physical parameters and the construction of a model to correlate them with the variables that need to be quantified. This paper presents a review of indirect measurement with the aim of understanding the state of development in this area, as well as the current challenges and opportunities, and proposes to gather all the different designations under the term soft metrology, broadening its definition. For this purpose, the literature on indirect measurement techniques and systems has been reviewed, encompassing recent as well as a few older key documents, to present a timeline of development and map out application contexts and designations. As machine learning techniques have been extensively used in indirect measurement strategies, this review highlights them and also makes an effort to describe the state of the art regarding the determination of uncertainty. This study does not delve into developments and applications for human and social sciences, although the proposed definition considers the use that this term has had in these areas.
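As a rough sketch of how such a system might be put together (not taken from the paper; the data, feature names and model choice below are all hypothetical), physically measured features can be mapped to a perception-based reference value by a regression model and assessed by cross-validation:

```python
# Minimal soft-metrology sketch: sensor features -> predicted perceptual score.
# All data are simulated; the model choice is illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Hypothetical physical measurements (rows: samples; columns: sensor features,
# e.g. roughness, compressibility, friction) and a simulated panel score.
X = rng.normal(size=(60, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.3, size=60)

# Any regression model can play the role of the correlation model.
model = RandomForestRegressor(n_estimators=200, random_state=0)

# Cross-validated predictions give a first, purely statistical view of how well
# the indirect (soft) measurement reproduces the human reference values.
y_hat = cross_val_predict(model, X, y, cv=5)
rmse = float(np.sqrt(np.mean((y - y_hat) ** 2)))
print(f"cross-validated RMSE against the panel score: {rmse:.3f}")
```

A full soft-metrology system would additionally need a metrologically sound uncertainty statement for the predicted values, which is the part of the state of the art the review pays particular attention to.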
Relationship between human perception of softness and instrument measurements
BioResources, 2018
Softness, as a subjective perception, is difficult to define and quantify. For decades, panel tests have been used to judge differences in the softness of hygiene tissue samples. Panel tests can be a time-consuming and expensive process. A number of protocols have been developed to quantify the physical properties of tissues associated with softness. The Tissue Softness Analyzer (TSA) by Emtec has gained popularity in characterizing the physical properties of tissues associated with softness. The instrument was designed with softness in mind and attempts to simulate the touch of the human hand. There is currently no comprehensive study that compares the results from a TSA and human panel. In this work, panel tests were used to validate the performance of the TSA with bath tissue. It was determined that one component of the TSA measurements (TS7) linearly correlated with the panel results. Among all of the algorithms available for use with the TSA, the TP2 algorithm most accurately p...
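As an illustration of the kind of check reported above (the numbers are invented; only the procedure of correlating an instrument reading such as TS7 with a mean panel score is the point), a linear fit and its correlation coefficient can be computed as follows:

```python
# Illustrative linear correlation between an instrument reading and panel softness.
# Both arrays below are made up for the sketch.
import numpy as np
from scipy import stats

ts7 = np.array([9.8, 10.5, 11.2, 12.0, 13.1, 14.4])   # instrument readings
panel = np.array([8.2, 7.9, 7.1, 6.4, 5.5, 4.8])       # mean panel scores

slope, intercept, r, p, stderr = stats.linregress(ts7, panel)
print(f"panel ~ {slope:.2f}*TS7 + {intercept:.2f}  (r = {r:.3f}, p = {p:.3g})")
```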
IEEE Instrumentation & Measurement Magazine, 2016
A few years ago I joined a team of experts writing a revised standard for phasor measurement units (PMUs), devices that seemed to perform amazing feats of measurement in electric power systems. Not long after I joined the group, I began to be troubled by what we were writing about the measurement of frequency. In particular, it seemed to me that the term frequency had not been defined if it was changing, and since the PMU was expected to return a value for the rate of change of frequency (ROCOF), it was obviously expected to be changing. If the term is not defined precisely, I thought, how would you know if the measurement was being made accurately? The group invited me to find a definition for the term. Answering that challenge took me on a journey of discovery in metrology. It may be that some of the ideas presented here are not exactly new-but they were new to me, even though I have been involved in measurements all my life. I would like to share my new view of measurement.
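For orientation (this is one common way to make the notion precise, quoted here as background rather than as the definition the standard finally adopted), a time-varying frequency can be defined through the derivative of the signal phase, with ROCOF as the next derivative:

```latex
% Illustrative definitions, assuming a signal x(t) with instantaneous phase \varphi(t)
x(t) = A\cos\bigl(\varphi(t)\bigr), \qquad
f(t) = \frac{1}{2\pi}\,\frac{\mathrm{d}\varphi(t)}{\mathrm{d}t}, \qquad
\mathrm{ROCOF}(t) = \frac{\mathrm{d}f(t)}{\mathrm{d}t}
```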
Toward a harmonized treatment of nominal properties in metrology
Metrologia
This paper explores in a metrological perspective the basic characteristics of an (i) experimental process that (ii) provides publicly trustworthy information (iii) on the property of an object as a value of that property (iv) through the comparison of the property and a reference set of properties of the same kind, at the same time not requiring that the property is quantitative. The conclusion is that such a process, called here a nominal property evaluation, is not only both logically and operatively possible, but actually shares most of the fundamental features of measurement, and in particular the possibility to provide publicly trustworthy information. Hence the proposed conceptual framework paves the way toward a harmonized treatment of nominal properties in metrology.
Accounting for systematic effects in metrology and testing, namely in comparisons
Replication of measurements and the combination of observations are standard and essential practices in metrology. A metrological (or testing) process of evaluating the uncertainty of the measurement results consists, in each laboratory, of basically three steps, which use different methods to fulfil distinct purposes: (a) when performed on the same standard, to obtain the statistical features of the observations that allow one to assess the repeatability of the value of the standard; (b) when performed on the same standard, to obtain a measure of the effect on the total uncertainty of the variability of the influence parameters affecting the standard, including dependence on time, i.e. to assess the reproducibility of the value of the standard; (c) when performed on several standards of the laboratory, to check whether they have the same value or to establish the differences between their values, and to evaluate the associated uncertainty, i.e. to evaluate the accuracy of the values of the laboratory standards. This exercise can be called an intra-laboratory comparison. When the exercise is performed for purpose (c) by comparing one (or more) standards provided by different laboratories, it is called an inter-laboratory comparison. Past experience suggests that one should assume, as a priori knowledge, that the comparisons are performed to detect bias. Bias originates from the influence quantities, whose variability can have a non-zero mean and is also the source of the generally higher uncertainty obtained under 'reproducibility conditions' compared with 'repeatability conditions'. The paper first briefly recalls the basic terms used in several written standards and international documents, which are not always fully consistent with one another and which also show some evolution of the concepts over the past decade, with the consequent risk of confusion arising from the fact that not everybody is talking about the same things when they are assumed to be. It then compares several data models and discusses their merits in taking (or not taking) into account systematic effects, which are the prevailing cause of systematic errors in most metrology and testing measurements.
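A minimal numerical sketch of steps (a) to (c), on synthetic data and with invented values, might look as follows (the figures and the replicate counts are purely illustrative):

```python
# Repeatability, reproducibility and an intra-laboratory comparison on synthetic data.
import numpy as np

rng = np.random.default_rng(1)

# (a) Repeatability: replicated readings of the same standard under fixed conditions.
same_conditions = rng.normal(loc=100.002, scale=0.004, size=10)
s_repeatability = same_conditions.std(ddof=1)

# (b) Reproducibility: daily means of the same standard while influence
#     quantities (temperature, operator, time, ...) are allowed to vary.
daily_means = rng.normal(loc=100.002, scale=0.010, size=8)
s_reproducibility = daily_means.std(ddof=1)

# (c) Intra-laboratory comparison: difference between two laboratory standards
#     and the standard uncertainty of that difference.
std_a = rng.normal(loc=100.002, scale=0.004, size=10)
std_b = rng.normal(loc=100.015, scale=0.004, size=10)
difference = std_a.mean() - std_b.mean()
u_difference = np.sqrt(std_a.var(ddof=1) / std_a.size + std_b.var(ddof=1) / std_b.size)

print(f"(a) s_repeatability   = {s_repeatability:.4f}")
print(f"(b) s_reproducibility = {s_reproducibility:.4f}")
print(f"(c) A - B = {difference:+.4f} with standard uncertainty {u_difference:.4f}")
```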
Metrology in industry. The mean of “validation” in different measurements
Journal of Physics: Conference Series, 2018
Metrology is the science of measurement. It is an essential part of scientific research, industry (including manufacturing), trade and safety (environmental protection, medicine, new/smart technologies) and, realistically, of all areas of daily human life. Modern society could not function fully without it. Metrology helps to ensure the high-accuracy, low-uncertainty measurements that are needed now and in the future. Still, many industrial areas need a deeper understanding of legal and industrial metrology requirements. A common approach to the evaluation of final results or to process characterization is also missing across the technical, social and natural sciences (chemistry, pharmaceutics and medicine). The great variety of measurement areas impedes a common "language" and understanding. This research highlights these problems and offers some considerations on the term "validation" as an example.
About the treatment of systematic effects in metrology
Measurement, 2009
A comparison of the text of the recent third edition of the VIM with that of the GUM and of its contemporary second edition of the VIM highlights significant differences in the definition of basic measurement terms in the two documents, and with respect to the basic written standards in the field of testing, ISO 5725 and ISO 3534. This paper introduces the author's interpretation of these (and companion) texts, concerning specifically the terminology and the statistical treatment of the influence quantities and of the effects of their variability (in time and from standard to standard), related either to replicated measurements performed on a single standard (standard 'reproducibility') or to comparisons of different standards, thus involving the concept of 'accuracy' and its estimate, and consequently directly relevant to traceability. Another question that arose a few years ago was whether different types of measurands could be the consequence of the different intrinsic nature of different types of standards. It prompted an analysis that resulted in the proposal to consider two distinct 'classes' of standards. These classes, more recently also labelled 'kinds', require different answers to the issue of the treatment of systematic effects. The distinction is relevant, in particular, to the statistical treatment of comparison data, which form the basis of the traceability assessment. This paper presents a discussion of the implications of the above distinction, concentrating on cases where systematic effects dominate the experimental results, a common situation in several metrology fields, and on ways to tackle the problem of the correction required by the GUM for standards of class 2 (standards whose values are accurate measures of a common measurand), a class often not recognised in the general literature.
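For reference, the GUM-style correction alluded to above amounts to subtracting the estimated bias and propagating its uncertainty; a minimal illustration (not a formula quoted from the paper, and assuming uncorrelated estimates) is:

```latex
% x: uncorrected result, b: estimated bias (systematic effect), u(.): standard uncertainties
y = x - b, \qquad u^{2}(y) = u^{2}(x) + u^{2}(b)
```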
A better understanding of how to characterise human response is essential to improved person-centred care and other situations where human factors are crucial. Challenges to introducing classical metrological concepts such as measurement uncertainty and traceability when characterising Man as a Measurement Instrument include the failure of many statistical tools when applied to ordinal measurement scales and a lack of metrological references in, for instance, healthcare. The present work attempts to link metrological and psychometric (Rasch) characterisation of Man as a Measurement Instrument in a study of elementary tasks, such as counting dots, where one knows independently the expected value because the measurement object (collection of dots) is prepared in advance. The analysis is compared and contrasted with recent approaches to this problem by others, for instance using signal error fidelity.
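For readers unfamiliar with the psychometric side, the dichotomous Rasch model referred to above expresses the probability that person p succeeds on task i in terms of a person ability \theta_p and a task difficulty \delta_i (a standard formulation, given here only for orientation):

```latex
% Dichotomous Rasch model: probability of success as a function of ability minus difficulty
P(X_{pi} = 1 \mid \theta_p, \delta_i) = \frac{e^{\theta_p - \delta_i}}{1 + e^{\theta_p - \delta_i}}
```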