Software Verification and Validation Research Papers

Formal methods have traditionally been used for the specification and development of software. However, there are potential benefits for the testing stage as well. The panel session associated with this paper explores the usefulness or otherwise of formal methods in various contexts for improving software testing. A number of different possibilities for the use of formal methods are explored and questions are raised. The contributors are all members of the UK FORTEST Network on formal methods and testing. Although the authors generally believe that formal methods are useful in aiding the testing process, this paper is intended to provoke discussion. Dissenters are encouraged to put their views to the panel or individually to the authors.

The formal analysis described here detects two so far undetected real deadlock situations per thousand C source files or million lines of code in the open source Linux operating system kernel, and three undetected accesses to freed memory, at a few seconds per file. That is notable because the code has been continuously under scrutiny from thousands of developers' pairs of eyes. In distinction to model-checking techniques, which also use symbolic logic, the analysis uses a "3-phase" compositional Hoare-style programming logic combined with abstract interpretation. The result is a customisable post-hoc semantic analysis of C code that is capable of several different analyses at once.
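The deadlock and use-after-free checks described above can be illustrated, very roughly, by an abstract interpreter that tracks lock and heap state over a trace of events. This is a didactic Python sketch with invented event names, not the authors' 3-phase Hoare-logic analysis:

```python
def analyze(trace):
    """Return (index, warning) pairs for a straight-line trace of
    ('lock', x), ('unlock', x), ('free', p) and ('use', p) events."""
    held = set()      # abstract lock state
    freed = set()     # abstract heap state
    warnings = []
    for i, (op, obj) in enumerate(trace):
        if op == 'lock':
            if obj in held:                      # re-acquiring a held lock
                warnings.append((i, f'double lock on {obj}'))
            held.add(obj)
        elif op == 'unlock':
            held.discard(obj)
        elif op == 'free':
            freed.add(obj)
        elif op == 'use':
            if obj in freed:                     # access to freed memory
                warnings.append((i, f'use after free of {obj}'))
    return warnings
```

The real analysis is compositional over whole C functions and files; this sketch only conveys the flavor of the abstract state being tracked.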

Dynamic analysis of safety-critical software is an important phase of the independent verification and validation of a system. EN 50128 details the methods that shall be used for this phase of the verification life cycle. This phase is so critical to the project output that it demands meticulous planning and organization. Here we discuss the dynamic analysis methods suggested by the CENELEC standards for SIL 4 software.

Debugging large-scale parallel applications is challenging. In most HPC applications, parallel tasks progress in a coordinated fashion, and thus a fault in one task can quickly propagate to other tasks, making it difficult to debug. Finding the least-progressed tasks can significantly reduce the effort to identify the task where the fault originated. However, existing approaches for detecting them suffer from low accuracy and large overheads; either they use imprecise static analysis or they are unable to infer progress dependence inside loops. We present a loop-aware progress-dependence analysis tool, Prodometer, which determines relative progress among parallel tasks via dynamic analysis. Our fault-injection experiments suggest that its accuracy and precision are over 90% for most cases and that it scales well up to 16,384 MPI tasks. Further, our case study shows that it significantly helped diagnose a perplexing error in MPI that only manifested at large scale.
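The final step of the approach described above, picking out the least-progressed tasks once a progress ordering is known, can be sketched as follows. The ordering and task names here are invented for illustration; the hard part of Prodometer, inferring that ordering dynamically and inside loops, is not modeled:

```python
def least_progressed(order, task_points):
    """order: code points from earliest to latest.
    task_points: dict mapping task id -> last code point it reached.
    Returns the set of task ids stuck at the earliest point."""
    rank = {p: i for i, p in enumerate(order)}   # position of each code point
    lowest = min(rank[p] for p in task_points.values())
    return {t for t, p in task_points.items() if rank[p] == lowest}
```

For example, if tasks 0 and 2 have reached a reduction while task 1 is still in the preceding loop, task 1 is the one to inspect first.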

Literature on the modeling and simulation of complex adaptive systems (cas) has primarily advanced vertically in different scientific domains, with scientists developing a variety of domain-specific approaches and applications. However, while cas researchers are inherently interested in an interdisciplinary comparison of models, to the best of our knowledge there is currently no single unified framework for facilitating the development, comparison, communication and validation of models across different scientific domains. In this thesis, we propose first steps towards such a unified framework using a combination of agent-based and complex network-based modeling approaches, together with guidelines formulated as a set of four levels of usage. These levels allow multidisciplinary researchers to adopt a suitable framework level on the basis of available data types, their research objectives and expected outcomes, and thus to better plan and conduct their respective research case studies. Firstly, the complex network modeling level of the proposed framework entails the development of appropriate complex network models for the case where interaction data of cas components is available, with the aim of detecting emergent patterns in the cas under study. The exploratory agent-based modeling level allows for the development of proof-of-concept models for the cas, primarily to explore the feasibility of further research. The descriptive agent-based modeling level allows for the use of a formal step-by-step approach for developing agent-based models, coupled with a quantitative complex network and pseudocode-based specification of the model, which in turn facilitates interdisciplinary cas model comparison and knowledge transfer.
Finally, the validated agent-based modeling level of the proposed framework is concerned with the building of in-simulation verification and validation of agent-based models using a proposed Virtual Overlay Multiagent System approach for use in a systematic team-oriented approach to developing models. The proposed framework is evaluated and validated using seven detailed case study examples selected from various scientific domains including ecology, social sciences and a range of complex adaptive communication networks. The successful case studies demonstrate the potential of the framework in appealing to multidisciplinary researchers as a methodological approach to the modeling and simulation of cas by facilitating effective communication and knowledge transfer across scientific disciplines without the requirement of extensive learning curves.

Quality assurance is one of the important issues for software companies, because the delivery of high-quality software is essential for customer satisfaction. Software quality is a relatively complex concept, but many companies have standards for quality assurance. In agile methods, developers are responsible for quality assurance, and agile methods involve many practices that support it. In the waterfall model, by contrast, quality assurance is performed at each stage using different practices for validation. The parameters used to control software quality are also discussed. Traditional quality assurance is carried out periodically at various stages, whereas agile quality assurance is carried out constantly by the team on a daily basis. In this paper, we describe ways to improve quality assurance in agile development.

The main objective of this work is to describe online bus pass generation and ticket booking using QR codes. Online bus pass generation is helpful to people who face difficulties with the present technique for bus pass generation and renewal. The project consists of two login pages, one for user registration and the other for the admin. Users register by submitting their details online. Once registration is complete, a security code called a One Time Password (OTP) is sent to the user's registered email. The system is used for ticket generation, bus pass creation and renewal of users' bus passes. A user can log in with an ID number and password to perform pass booking and renewal, and a bus ticket checker can scan the user's QR code to check the validity of the bus pass.
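The OTP step mentioned above can be sketched as follows; the function names, the 6-digit code length and the 300-second validity window are illustrative assumptions, not details taken from the system described:

```python
import secrets
import time

_pending = {}  # email -> (code, issued_at); a real system would persist this

def issue_otp(email, now=None):
    """Generate a 6-digit one-time password and remember when it was issued."""
    code = f"{secrets.randbelow(10**6):06d}"
    _pending[email] = (code, now if now is not None else time.time())
    return code

def verify_otp(email, code, ttl=300, now=None):
    """Accept a code only once, and only within `ttl` seconds of issue."""
    now = now if now is not None else time.time()
    entry = _pending.get(email)
    if entry is None:
        return False
    stored, issued = entry
    if now - issued > ttl or not secrets.compare_digest(stored, code):
        return False
    del _pending[email]                          # single use
    return True
```

Using `secrets` rather than `random` matters here: OTP codes must be unpredictable, and `compare_digest` avoids timing side channels in the comparison.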

A simple, robust, precise HPLC method was developed and validated for the simultaneous estimation of Valsartan and Nebivolol in combined dosage forms as well as individual formulations. Both compounds are well resolved in an isocratic method using a mobile phase of acetonitrile: methanol: pH 4.0 0.02 M potassium hydrogen phosphate buffer in the ratio of 50:20:30 v/v, at a flow rate of 1.0 mL/min, on a C-18 column. Detection was carried out at 210 nm. The retention times of Nebivolol and Valsartan were 2.5 min and 4.3 min, respectively. The developed method was validated for specificity, linearity, precision, accuracy, limit of detection (LOD), limit of quantification (LOQ) and robustness as per ICH guidelines. Linearity for Nebivolol and Valsartan was found in the ranges of 48-112 µg/mL and 3.0-7.0 µg/mL, respectively. The percentage recoveries for Nebivolol and Valsartan ranged from 98.9-100.8% and 98.9-101.1%, respectively. The proposed method could be used for routine analysis of Nebivolol and Valsartan in their combined dosage forms.

The benefits of effective verification and validation activities in the medical device domain include increased usability and reliability, decreased failure rates and recalls, and reduced risks to patients and users. Though there is guidance on verification and validation in multiple standards in the medical device domain, it is difficult for manufacturers to implement, as there is no consolidated information on how it can be successfully achieved. This paper highlights three major areas for improvement in the medical device software development domain. The research is based on an analysis of the available literature on verification and validation in the generic software development, safety-critical and medical device software domains. Additionally, we performed a review of the standards and process improvement models available in these domains.

Microservice architecture is a cloud application design pattern in which the application is divided into a number of small independent services, each responsible for implementing a certain feature. The need for continuous integration of developed and/or modified microservices into the existing system requires comprehensive validation of individual microservices and of their cooperation as an ensemble with other microservices. In this paper, we provide an analysis of existing methods of cloud application testing and identify features that are specific to the microservice architecture. Based on this analysis, we propose a validation methodology for microservice systems.

The quality and correctness of software are of great concern in computer systems. Formal verification tools can be used to provide confidence that a software design is free from certain errors. This paper surveys tools that accomplish automatic software verification to detect programming errors or prove their absence. The two tools considered are based on Hoare logic, namely KeY-Hoare and the Hoare Advanced Homework Assistant (HAHA). A detailed example using these tools is provided, underlining their differences when applied to practical problems.
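The kind of reasoning these Hoare-logic tools mechanize can be illustrated with a minimal weakest-precondition sketch. KeY-Hoare and HAHA work symbolically; this didactic version instead checks a triple {P} x := e {Q} over a small finite set of states:

```python
def wp_assign(var, expr, post):
    """Weakest precondition of `var := expr` w.r.t. postcondition `post`:
    wp(x := e, Q)(s) = Q(s[x := e(s)]).  States are dicts; predicates and
    expressions are functions of a state."""
    return lambda s: post({**s, var: expr(s)})

def valid_triple(pre, var, expr, post, states):
    """Check the Hoare triple {pre} var := expr {post} by testing
    pre(s) -> wp(s) over a finite set of sample states."""
    wp = wp_assign(var, expr, post)
    return all(wp(s) for s in states if pre(s))
```

For instance, {x >= 0} y := x + 1 {y > 0} holds, while {true} y := x {y > 0} does not, and the checker reports exactly that on a small state space.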

A new rapid, precise and sensitive reverse-phase high performance liquid chromatographic (RP-HPLC) method has been developed and validated for the simultaneous estimation of Perindopril and Amlodipine in combined dosage form. The two components were well resolved by an isocratic method on a C18 column, using a mobile phase of acetonitrile: methanol: a mixed buffer of 0.02 M potassium dihydrogen phosphate and 0.02 M sodium dihydrogen phosphate with 1 mL triethylamine (40:20:40 v/v, pH 5.0), at a flow rate of 1.0 mL/min with UV detection at 226 nm. The retention times of Perindopril and Amlodipine were 2.9 min and 4.9 min, respectively. The developed method was validated for specificity, linearity, precision, accuracy, limit of detection (LOD), limit of quantification (LOQ) and robustness as per ICH guidelines. Linearity for Perindopril and Amlodipine was found in the ranges of 48-112 µg/mL and 60.0-140.0 µg/mL, respectively. The percentage recoveries for Perindopril and Amlodipine ranged from 98.9-100.4% and 98.9-100.7%, respectively. The proposed method could be used for routine analysis of Perindopril and Amlodipine in their combined dosage forms.

Effective verification and validation are central to medical device software development and are essential for regulatory approval. Although guidance is available in multiple standards in the medical device software domain, it is difficult for manufacturers to implement, as there is no consolidated view of this information. Likewise, the standards and guidance documents do not consider process improvement initiatives. This paper addresses both of these aspects and introduces the development of processes for verification and validation in the medical device domain.

The formal methods community is in general very good at undertaking research into the mathematical aspects of formal methods, but not so good at promulgating the use of formal methods in an engineering environment and at an industrial scale. Technology transfer is an extremely important part of the overall effort necessary in the acceptance of formal techniques. This paper explores some of the more informal aspects of applying formal methods and presents some maxims with associated discussion that may help in the application of formal methods in an industrial setting.

This paper reports the results of a code-to-code validation of an in-house building simulation model called DETECt. The tool was developed for research purposes, in order to carry out dynamic building energy performance and parametric analyses taking into account new building-envelope-integrated technologies, novel construction materials and innovative energy saving strategies. The reliability and accuracy of DETECt were tested by means of the standard BESTEST validation procedure, and details of this validation process are described. Good agreement between the obtained results and all the reference data of the BESTEST qualification cases is achieved; in particular, the obtained results vs. the standard BESTEST outputs always fall within the provided confidence ranges. In addition, the outputs of several test cases obtained by DETECt (e.g., dynamic profiles of indoor air and building surface temperatures and heat fluxes, and spatial trends of temperature across walls) are provided.
Keywords: building energy performance simulation; BESTEST validation procedure; building heating and cooling demand and load

The proliferation of mobile devices and applications has seen an unprecedented rise in recent years. Application domains of mobile systems range from personal assistants to point-of-care health informatics systems. Software development for such diverse application domains requires a stringent and well-defined development process. Software testing is a type of verification that is required to achieve a more reliable system. Although the software engineering literature contains many research studies that address challenging issues in mobile application development, we could not identify a comprehensive literature review on this subject. In this paper, we present a systematic mapping of software verification in the field of mobile applications. We provide definitive metrics and publications about mobile application testing, which we believe will allow fellow researchers to identify gaps and research opportunities in this field.

This book, ‘Software Engineering’, has been designed to help Computer Engineering students studying at TU, KU, PU and Purbanchal University gain knowledge of software engineering. The book follows a simple bulleted structure that gives students clarity and helps them understand the essence of each topic. It includes concepts and diagrams that help students gain a deep-seated understanding of the key elements intrinsic to the subject. The pedagogical elements prepare students for their examinations with ease.

Verification and Validation (V&V) is a series of activities, technical and managerial, performed by the system tester rather than the system developer, in order to improve system quality and reliability and to assure that the product satisfies the users' operational needs. Verification is the assurance that the products of a particular development phase are consistent with the requirements of that phase and the preceding phase(s), while validation is the assurance that the final product meets the system requirements. An outside agency can be used to perform V&V, referred to as Independent V&V (IV&V); alternatively, V&V can be performed by a group within the organization other than the developer, referred to as Internal V&V. V&V often accompanies testing, can improve quality assurance, and can reduce risk. This paper puts forward guidelines for performing V&V of Multi-Agent Systems (MAS).
KEYWORDS: verification and validation life cycle, multi-agent system, verification and validation techniques

In this paper, we propose to base the verification of a GUI application on a reference model used in black-box testing. The reference model is a formal model for the behavior of the GUI application. It is derived by dynamic analysis (hence “black-box”). Thus, it can be used to account for the graphical interface even when the GUI toolkit is not amenable to formal analysis or its source code is not available. We have implemented our approach; a preliminary case study indicates its feasibility in principle.
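A reference model of the kind described above can be as simple as a finite-state machine over GUI events, with conformance meaning that every observed event is enabled in the model's current state. The states and events below are invented for illustration; the paper's contribution is deriving such a model automatically by dynamic analysis:

```python
# A black-box reference model: state -> {enabled event: next state}.
MODEL = {
    'login':    {'submit': 'home'},
    'home':     {'open_settings': 'settings', 'logout': 'login'},
    'settings': {'back': 'home'},
}

def conforms(trace, start='login'):
    """Check whether an observed GUI event trace is allowed by the model."""
    state = start
    for event in trace:
        enabled = MODEL.get(state, {})
        if event not in enabled:     # implementation did something the model forbids
            return False
        state = enabled[event]
    return True
```

Because the model is derived from observed behavior, a non-conforming trace flags a divergence between the implementation under test and the reference behavior, regardless of the GUI toolkit involved.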


A simple text processing tool which allows positioning of lines within a document is presented using the formal specification language Z. Implementation details such as the use of tab characters and newline sequences are covered. The program has been implemented under the UNIX operating system. It is hoped that the use of similar techniques will become widespread in the field of software engineering.

The protocols of the EMVCo organisation (i.e. MasterCard, Visa, etc.; EMV® is a registered trademark or trademark of EMVCo, LLC in the US and other countries) facilitate worldwide interoperability of secure electronic payments. Despite recent advances, it has proved difficult for academia to provide an acceptable solution to the construction of secure applications within industry's constraints. In this paper, we describe a methodology we have applied to EMV1. It involves domain-specific languages and verification tools targeting different analyses of interest. We are currently collaborating with EMVCo on their upcoming EMV® 2nd Generation (EMV2) specifications.

One of the important components of software quality assurance in software development is testing, because testing consumes over 50% of the development effort. To minimize development time, porting software applications has become a necessity, although this has not really improved testing activities. Software porting is the process of making the changes necessary for an existing application, running on one operating system and hardware architecture, to run on another operating system and hardware architecture. The testing activities carried out during software development or porting are often ad hoc, without well-established guidelines. Several testing models have therefore been established by researchers and organizations such as IEEE and ISO to guide and ease testing activities. This paper reviews seven (7) different testing models and approaches used during software development or porting. The strengths and weaknesses of these testing approaches are enumerated and evaluated, and the limitations of each testing technique are pointed out with the aim of overcoming them in future work.

Testing is the primary approach for detecting software defects. A major challenge faced by testers lies in crafting efficient test suites, able to detect a maximum number of bugs with manageable effort. To do so, they rely on coverage criteria, which define some precise test objectives to be covered. However, many common criteria specify a significant number of objectives that turn out to be infeasible or redundant in practice, like covering dead code or semantically equal mutants. Such objectives are well known to be harmful to the design of test suites, impacting both the efficiency and precision of the tester's effort. This work introduces a sound and scalable technique to prune out a significant part of the infeasible and redundant objectives produced by a panel of white-box criteria. In a nutshell, we reduce this task to proving the validity of logical assertions in the code under test. The technique is implemented in a tool that relies on weakest-precondition calculus and SMT solving for proving the assertions. The tool is built on top of the Frama-C verification platform, which we carefully tune for our specific scalability needs. The experiments reveal that the pruning capabilities of the tool can reduce the number of targeted test objectives in a program by up to 27% and scale to real programs of 200K lines, making it possible to automate a painstaking part of their current testing process.
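The pruning idea can be sketched as follows: treat each test objective as a label that is either reachable or not, and discard objectives that no input can cover. The paper proves infeasibility with weakest-precondition calculus and SMT solving on top of Frama-C; this toy version, with an invented example program, brute-forces a small input domain instead:

```python
def objective_feasible(program, objective, inputs):
    """program: fn(x) -> set of coverage labels reached on input x.
    The objective is feasible iff some input covers its label."""
    return any(objective in program(x) for x in inputs)

def classify(x):
    """Hypothetical program under test, instrumented with coverage labels."""
    labels = set()
    if x > 0:
        labels.add('pos')
    if x > 0 and x < 0:          # dead branch: its objective is infeasible
        labels.add('dead')
    return labels
```

A tester armed with this classification stops wasting effort trying to cover `dead`, which is exactly the benefit the paper quantifies at up to 27% of objectives on real programs.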

The paper suggests a method for the formal proof of safety of railway software systems which includes validation and verification stages and considers software and hardware as one integrated piece. The method is based both on experience of software correctness proofs for railway devices and on worldwide theory and practice for safety-critical systems. The method can perform safety analysis of an arbitrary railway hardware-software complex with software of up to 10 KLOC, and can be used to prove functional safety and information security.


The features of software correctness proofs for multiprocessor railway systems are considered using the example of the remote control units 8Б and 16Б of the microprocessor railway signalling system "Iput". Analysis of the verification experience made it possible to distinguish the categories of safety functions to be proved, to determine the sequence of proof steps, and to establish criteria for assessing the amount of proof work. It is shown that determining an invariant and proving it are necessary to avoid the accumulation of software errors.
It was also established that some features are difficult to formalize precisely before the proof of correctness takes place, so the verification result must then be checked by the end user, for instance at the monitor of the diagnostic subsystem.

This chapter presents a provably correct compilation scheme that converts a program into a network of abstract components that interact with each other by exchanging request and acknowledgement signals. We provide a systematic and modular technique for correctly realizing the abstract components in hardware, and use a standard programming language to describe both algorithms and circuits. The resulting circuitry, which behaves according to the program, has the same structure as the program. The circuit logic is asynchronous, with no global clock.

This paper presents a framework for evaluating and comparing different defect taxonomies. Six well-known taxonomies are evaluated with it and the results are shown. We found deficiencies in every taxonomy once it was evaluated with the framework.

This paper describes Mull, an open-source tool for mutation testing based on the LLVM framework. Mull works with LLVM IR, a low-level intermediate representation, to perform mutations, and uses LLVM JIT for just-in-time compilation. This design choice gives Mull two capabilities: language independence and fine-grained control over compilation and execution of a tested program and its mutations. Mull can work with code written in any programming language that supports compilation to LLVM IR, such as C, C++, Rust, or Swift. Direct manipulation of LLVM IR allows Mull to do less work to generate mutations: only modified fragments of IR code are recompiled, which results in faster processing of mutated programs. To our knowledge, no existing mutation testing tool provides these capabilities for compiled programming languages. We describe the algorithm and implementation details of Mull, highlight its current limitations, and present the results of our evaluation of Mull on real-world projects such as RODOS, OpenSSL, and LLVM.

Requirements engineering is a crucial phase of the software development cycle, during which different sub-activities are performed. Previous research shows that there are various techniques for requirements verification and validation, and that these techniques can have different relative importance in different situations. Typically, requirements engineers do not have proper knowledge of the nature of each technique. Existing requirements verification and validation techniques differ in their ability to effectively cover the system requirements. It is therefore desirable to conduct an empirical study of requirements verification and validation techniques to determine their relative importance in the opinion of different experts. In this study we developed a questionnaire based on requirements verification and validation techniques and their relative importance. Analysis of the results indicates that, in the experts' view (based on their experience and judgment), ambiguous requirements are the biggest cause of requirements verification and validation issues. The experts believe that review and inspection are the best mitigation strategies to remove or minimize requirements verification and validation challenges.

This study was designed to verify evidence of validity through: 1) a literature review of the ushiro-ukemi pedagogical progression; 2) recording an expert performing the movement in order to develop the animation; and 3) an evaluation of the 3D character animations by other experts. Thirteen experts evaluated three statements on a Likert scale,
and answered one question, also on an ordinal scale, about the quality of the animation. The calculated medians for the first, second and third statements were 5, 5 and 5 (fully agree); only for the third statement, about sitting ushiro-ukemi, was the median 4 (agree). The percentage of answers scoring higher than "agree" varied from 84.6% to 100%. The median rating of the 3D character animation was "very good" (4) and the percentage of acceptance was 100% (above "good"). The definitional evidence of validity found for these animations supports their application in learning material.
Keywords: 3D character animation, Judo, sport skill, validation.

Agent-based models are very popular in a number of different areas. For example, they have been used in domains ranging from modeling of tumor growth, immune systems, and molecules to models of social networks, crowds, and computer and mobile self-organizing networks. One reason for their success is their intuitiveness and similarity to human cognition. However, despite this power of abstraction and their applicability to such a wide range of domains, agent-based models are hard to validate. Building valid and credible simulations is not just a challenging task but also a crucial exercise to ensure that what we are modeling is, at some level of abstraction, a model of our conceptual system; the system that we have in mind. In this paper, we address validation of agent-based models by presenting a novel technique with broad applicability: VOMAS, a Virtual Overlay Multi-agent System. The overlay multi-agent system can comprise various types of agents, which form an overlay on top of the agent-based simulation model that needs to be validated. Beyond watching and logging, each of these agents contains clearly defined constraints which, if violated, can be logged in real time. To demonstrate its effectiveness, we show its broad applicability in a wide variety of simulation models ranging from social sciences to computer networks, in both spatial and non-spatial conceptual models.
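The overlay idea can be sketched in a few lines: a monitor agent sits on top of the simulation, carries an explicit constraint, and logs every violation in real time. The class names and the stay-inside-bounds invariant below are illustrative assumptions, not the paper's VOMAS implementation.

```python
# Minimal sketch of an overlay validation agent (illustrative; not the
# paper's VOMAS implementation). A monitor agent observes simulation
# agents and logs, in real time, any violation of its constraint.

class SimAgent:
    def __init__(self, position):
        self.position = position

    def step(self, velocity):
        self.position += velocity

class OverlayMonitor:
    """Watches simulation agents and checks an invariant every tick."""
    def __init__(self, constraint, description):
        self.constraint = constraint
        self.description = description
        self.violations = []       # real-time log of violations

    def observe(self, tick, agents):
        for agent in agents:
            if not self.constraint(agent):
                self.violations.append((tick, self.description, agent.position))

# Invariant (assumed for illustration): agents must stay inside [0, 100].
monitor = OverlayMonitor(lambda a: 0 <= a.position <= 100,
                         "agent left the simulation area")
agents = [SimAgent(10), SimAgent(95)]
for tick in range(3):
    for a in agents:
        a.step(velocity=4)         # each agent drifts right every tick
    monitor.observe(tick, agents)

print(len(monitor.violations))     # prints 2 (second agent leaves at tick 1)
```

Because the monitor is a separate overlay agent rather than code woven into the simulation, the same pattern applies unchanged to spatial and non-spatial models alike.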

The EMVCo (EMV® is a registered trademark or trademark of EMVCo, LLC in the US and other countries.) organisation (i.e. MasterCard, Visa, etc.) protocols facilitate worldwide interoperability of secure electronic payments. Despite recent advances, it has proved difficult for academia to provide an acceptable solution to the construction of secure applications within industry's constraints. In this paper, we describe a methodology we have applied to EMV1. It involves domain-specific languages and verification tools targeting different analyses of interest. We are currently collaborating with EMVCo on their upcoming EMV® 2nd Generation (EMV2) specifications.

Context: Genistein and daidzein, the major isoflavone aglycone compounds contained in tempeh, have become a subject of interest owing to their activity as cancer-prevention, cardiovascular, and wound-healing agents. It is important to develop an efficient method to extract genistein and daidzein from tempeh. Aims: To optimize the extraction process of the two compounds from dried tempeh. Methods: A full factorial design was used to analyze the combined effect of the factors affecting the extraction process. Genistein and daidzein analysis was performed using a Phenomenex® C18 column (250 x 4.6 mm, 5 µm) and a mobile phase of methanol:distilled water (70:30). Detection was carried out at 261 nm with a flow rate of 0.6 mL/min in an isocratic reverse-phase HPLC system. Results: The analytical method was validated according to AOAC guidelines, including the parameters of selectivity, linearity, accuracy, precision, limit of detection, and limit of quantification. Conclusions: The optimal extraction condition was achieved at an ethanol concentration of 96%, a particle size of 0.6 mm, and an extraction time of 270 minutes. The total amounts obtained with this extraction method were 26.03 mg% and 19.42 mg% for genistein and daidzein, respectively.
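The full factorial design mentioned in Methods simply evaluates every combination of factor levels and picks the best-performing run. The sketch below shows the enumeration; the alternative factor levels and the placeholder response function are hypothetical, and only the stated optimum (96% ethanol, 0.6 mm particle size, 270 min) comes from the abstract.

```python
# Sketch of a full factorial design: enumerate every combination of
# factor levels and select the combination with the best response.
# Alternative levels are hypothetical; only the stated optimum
# (96% ethanol, 0.6 mm, 270 min) is taken from the study.
from itertools import product

factors = {
    "ethanol_pct": [70, 96],
    "particle_mm": [0.6, 1.2],
    "time_min": [90, 270],
}

def run_experiment(ethanol_pct, particle_mm, time_min):
    """Placeholder response (mg% yield); a real study measures this by HPLC."""
    return (ethanol_pct / 96) * (0.6 / particle_mm) * (time_min / 270) * 26.03

runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
best = max(runs, key=lambda r: run_experiment(**r))
print(len(runs), best)   # 2^3 = 8 runs; best matches the stated optimum
```

With two levels per factor and three factors, the design requires 2^3 = 8 runs, which is what makes the combined (interaction) effects of the factors analyzable.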