Computer Software Research Papers - Academia.edu

Context: Automotive software architectures describe distributed functionality as an interaction of software components. One drawback of today's architectures is their strong integration into the onboard communication network, based on dependencies predefined at design time. The idea is to reduce this rigid integration and these technological dependencies. To this end, service-oriented architecture offers a suitable methodology, since network communication is established dynamically at run time. Aim: We aim to provide a methodology for analysing hardware resources and synthesising automotive service-oriented architectures based on platform-independent service models. Subsequently, we focus on transforming these models into a platform-specific architecture realisation process following AUTOSAR Adaptive. Approach: For the platform-independent part, we apply the concepts of design space exploration and simulation to analyse and synthesise deployment configurations, i.e., mapping services to ha...
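
As an illustration of the platform-independent analysis step, the following is a minimal sketch of design space exploration, assuming a hypothetical set of services, ECUs, and message rates (none of which come from the paper): feasible service-to-ECU mappings are enumerated and the one with the lowest inter-ECU bus load is kept.

```python
# Minimal sketch (not the paper's tool): brute-force design space exploration that
# maps services to ECUs subject to CPU-capacity constraints and keeps the deployment
# with the lowest total inter-ECU communication.
from itertools import product

services = {"camera": 30, "fusion": 50, "planner": 40}        # CPU demand (hypothetical units)
ecus = {"ecu_a": 100, "ecu_b": 80}                             # CPU capacity (hypothetical)
traffic = {("camera", "fusion"): 5, ("fusion", "planner"): 3}  # message rates between services

def bus_load(mapping):
    # Only service pairs placed on different ECUs generate network traffic.
    return sum(rate for (s1, s2), rate in traffic.items() if mapping[s1] != mapping[s2])

def feasible(mapping):
    load = {e: 0 for e in ecus}
    for svc, ecu in mapping.items():
        load[ecu] += services[svc]
    return all(load[e] <= ecus[e] for e in ecus)

candidates = [dict(zip(services, combo)) for combo in product(ecus, repeat=len(services))]
best = min((m for m in candidates if feasible(m)), key=bus_load)
print(best, "bus load:", bus_load(best))
```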

The main aim of this article is to discuss how the functional and object-oriented views can be interplayed to represent the various modeling perspectives of embedded systems. We discuss whether the object-oriented modeling paradigm, the predominant paradigm for developing software today, is also adequate for modeling embedded software and how it can be used together with the functional paradigm. More specifically, we present how the main modeling tool of the traditional structured methods, the data flow diagram, can be integrated into an object-oriented development strategy based on the Unified Modeling Language. The rationale behind the approach is that both views are important for modeling purposes in embedded systems environments, and thus a combined and integrated model is not only useful but also fundamental for developing complex systems. The approach was integrated into a model-driven engineering process in which tool support for the models was provided. In addition, model transformations have been specified and implemented to automate the process. We exemplify the approach with an IPv6 router case study.
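
A hypothetical sketch of the kind of model transformation described (the names DfdProcess, UmlClass, and dfd_to_uml are illustrative, not the paper's tooling): a DFD process and its data flows are mapped onto a UML-like class stub.

```python
# Hypothetical model-to-model transformation in the spirit of the approach:
# DFD processes become UML-like class stubs, preserving the functional view.
from dataclasses import dataclass, field

@dataclass
class DfdProcess:
    name: str
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)

@dataclass
class UmlClass:
    name: str
    operations: list

def dfd_to_uml(process: DfdProcess) -> UmlClass:
    # Each data flow entering the process becomes an operation of the class.
    ops = [f"handle_{flow}()" for flow in process.inputs]
    return UmlClass(name=process.name.title().replace(" ", ""), operations=ops)

route = DfdProcess("route packet", inputs=["ipv6_packet"], outputs=["forwarded_packet"])
print(dfd_to_uml(route))  # UmlClass(name='RoutePacket', operations=['handle_ipv6_packet()'])
```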

The Shadow Semantics (Morgan, Math Prog Construction, vol 4014, pp 359–378, 2006 ; Morgan, Sci Comput Program 74(8):629–653, 2009 ) is a possibilistic (qualitative) model for noninterference security. Subsequent work (McIver et al., Proceedings of the 37th international colloquium conference on Automata, languages and programming: Part II, 2010 ) presents a similar but more general quantitative model that treats probabilistic information flow. Whilst the latter provides a framework to reason about quantitative security risks, that extra detail entails a significant overhead in the verification effort needed to achieve it. Our first contribution in this paper is to study the relationship between those two models (qualitative and quantitative) in order to understand when qualitative Shadow proofs can be “promoted” to quantitative versions, i.e. in a probabilistic context. In particular we identify a subset of the Shadow’s refinement theorems that, when interpreted in the quantitative ...
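
A toy illustration, not the Shadow calculus itself, of the qualitative/quantitative gap the paper studies: for a secret bit combined with a coin flip, the possibilistic view only records which secrets remain possible, while the probabilistic view assigns them posterior probabilities.

```python
# Toy example: secret bit h, observable l = h XOR coin. The "shadow" is the set of
# secrets consistent with the observation; the quantitative model is their posterior.
from fractions import Fraction

secrets = [0, 1]

def shadow(observed_l):
    # Possibilistic view: every h that could have produced the observation.
    return {h for h in secrets for coin in (0, 1) if h ^ coin == observed_l}

def posterior(observed_l, p_coin_is_1=Fraction(1, 4)):
    # Quantitative view: Bayes over a uniform prior on h and a biased coin.
    prior = {h: Fraction(1, 2) for h in secrets}
    likelihood = {h: (p_coin_is_1 if h ^ 1 == observed_l else 1 - p_coin_is_1)
                  for h in secrets}
    joint = {h: prior[h] * likelihood[h] for h in secrets}
    total = sum(joint.values())
    return {h: joint[h] / total for h in secrets}

print(shadow(0))     # {0, 1}: qualitatively nothing is ruled out
print(posterior(0))  # quantitatively, h=0 is more likely when the coin is biased
```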

Critical Infrastructures, such as energy, banking, and transport, are an essential pillar of the well-being of the national and international economy, security, and quality of life. These infrastructures depend on a spectrum of highly interconnected information infrastructures for their smooth, reliable, and continuous operation. The field of protecting such Critical Information Infrastructures, or CIIP, faces numerous challenges, such as managing the secure interaction between peers and assuring the resilience and robustness of ...

Scenarios have been advocated as a means of improving requirements engineering, yet few methods or tools exist to support scenario-based RE. The paper reports a method and a software assistant tool for scenario-based RE that integrates with use case approaches to ...

Emerging pervasive information and computational environments require a content-based middleware infrastructure that is scalable, self-managing, and asynchronous. In this paper, we propose associative rendezvous (AR) as a paradigm for content-based decoupled interactions for pervasive ...
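
A minimal sketch, under the assumption of simple attribute-equality matching (not the authors' middleware), of an associative-rendezvous-style interaction where producers and consumers never address each other directly:

```python
# Associative-rendezvous-style matcher (illustrative assumptions): consumers register
# interest profiles, producers post data profiles; delivery happens on attribute match,
# so the interaction stays content-based and decoupled.
rendezvous = []   # pending interests: (constraints, callback)

def register_interest(constraints, callback):
    rendezvous.append((constraints, callback))

def post(profile, payload):
    for constraints, callback in rendezvous:
        if all(profile.get(k) == v for k, v in constraints.items()):
            callback(payload)   # producer never names the consumer

register_interest({"type": "temperature", "room": "lab"}, lambda msg: print("got", msg))
post({"type": "temperature", "room": "lab"}, {"value_c": 21.5})
```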

We propose a steganalytic algorithm for triangle meshes, based on the supervised training of a classifier by discriminative feature vectors. After a normalization step, the triangle mesh is calibrated by one step of Laplacian smoothing and then a feature vector is computed, encoding geometric information corresponding to vertices, edges and faces. For a given steganographic or watermarking algorithm, we create a training set containing unmarked meshes and meshes marked by that algorithm, and train a classifier using Quadratic Discriminant Analysis. The performance of the proposed method was evaluated on six well-known watermarking/steganographic schemes with satisfactory accuracy rates.
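
The following sketch reproduces only the shape of the pipeline: the feature extraction is a simplified stand-in (the real method uses Laplacian-smoothing calibration and vertex/edge/face geometry), while the Quadratic Discriminant Analysis classifier matches the one named in the abstract.

```python
# Sketch of the pipeline shape only: simplified features from vertex positions stand in
# for the calibrated geometric features; classification uses QDA as in the abstract.
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

def toy_features(vertices: np.ndarray) -> np.ndarray:
    # Placeholder features: summarize residuals of vertices around their mean,
    # standing in for the smoothed-vs-original geometric descriptors.
    residual = vertices - vertices.mean(axis=0)
    return np.array([residual.std(), np.abs(residual).mean(), residual.max()])

rng = np.random.default_rng(0)
clean = [toy_features(rng.normal(size=(100, 3))) for _ in range(50)]
marked = [toy_features(rng.normal(size=(100, 3)) + rng.normal(scale=0.05, size=(100, 3)))
          for _ in range(50)]

X = np.vstack(clean + marked)
y = np.array([0] * 50 + [1] * 50)        # 0 = unmarked, 1 = marked
clf = QuadraticDiscriminantAnalysis().fit(X, y)
print("training accuracy:", clf.score(X, y))
```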

Conceptual models are used for understanding and communicating the domain of interest during the analysis phase of system development. Because they are used in early phases, errors and omissions may propagate to later phases and may be very costly to correct. This paper proposes a framework for evaluating conceptual models represented in a domain-specific language based on UML constructs.

The interest rate is the most important financial variable in determining the macroeconomic and microeconomic policy of a country. Predicting any change in interest rates therefore becomes crucial for the government and various other financial institutions. This research aims to forecast interest rates for India and to analyze the various factors affecting changes in interest rates for better forecasting. A novel approach is used: sentiment analysis of users' tweets about the economic situation is combined with various financial and economic variables that help in predicting interest rates. Four machine learning models, namely the vector autoregressive model, long short-term memory, a sequential neural network, and a multilayer perceptron neural network, have been compared based on evaluation metrics such as root mean square error, mean absolute error, and mean absolute percentage error. The analysis shows that the machine learning model with input as sentime...
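
A minimal sketch of the evaluation metrics named in the abstract (root mean square error, mean absolute error, mean absolute percentage error) on illustrative numbers; the forecasting models themselves are not shown.

```python
# Evaluation metrics from the abstract, applied to hypothetical forecasts.
import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    return float(np.mean(np.abs(y_true - y_pred)))

def mape(y_true, y_pred):
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

actual = np.array([6.50, 6.25, 6.00, 5.75])      # hypothetical policy rates (%)
forecast = np.array([6.40, 6.30, 6.10, 5.60])
print(rmse(actual, forecast), mae(actual, forecast), mape(actual, forecast))
```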

Dynamic Adaptive Streaming over HTTP (DASH) is a recently proposed standard that offers different versions of the same media content, so that delivery over the Internet can adapt to dynamic bandwidth fluctuations and different user device capabilities. The peer-to-peer (P2P) paradigm for video streaming allows us to leverage cooperation among peers, guaranteeing the service of video requests with increased scalability and reduced cost. We propose to combine these two approaches in a P2P-DASH architecture that exploits the potential of both. The new platform is made of several swarms, and a different DASH representation is streamed within each of them; unlike client-server DASH architectures, where each client autonomously selects which version to download according to current network conditions and to its device resources, we put forth a new rate control strategy, implemented at the peer site, to maintain good viewing quality for the local user and to simultaneously guarantee the s...
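
A hedged sketch of a peer-side rate selection rule in this spirit (the bitrate ladder, thresholds, and logic are illustrative assumptions, not the paper's controller):

```python
# Peer-side rate selection (illustrative): pick the highest DASH representation the
# measured throughput can sustain, and drop one level when the playback buffer runs low.
REPRESENTATIONS_KBPS = [500, 1000, 2500, 5000]   # hypothetical bitrate ladder

def select_representation(throughput_kbps, buffer_s, min_buffer_s=10.0, safety=0.8):
    sustainable = [r for r in REPRESENTATIONS_KBPS if r <= throughput_kbps * safety]
    level = len(sustainable) - 1 if sustainable else 0
    if buffer_s < min_buffer_s and level > 0:
        level -= 1                     # back off before the buffer drains
    return REPRESENTATIONS_KBPS[level]

print(select_representation(throughput_kbps=4000, buffer_s=25))  # 2500
print(select_representation(throughput_kbps=4000, buffer_s=5))   # 1000
```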

Ubiquitous Computing promises seamless access to a wide range of applications and Internet-based services from anywhere, at any time, and using any device. In this scenario, new challenges for the practice of software development arise: applications and services must keep a coherent behavior and a proper appearance, and must adapt to a wide range of contextual usage requirements and hardware aspects. In particular, due to its interactive nature, the interface content of Web applications must adapt to a large diversity of devices and contexts. In order to overcome such obstacles, this work introduces a methodology for content adaptation of Web 2.0 interfaces. The basis of our work is to combine static adaptation (the implementation of static Web interfaces) with dynamic adaptation (the alteration, at run time, of static interfaces so that they adapt to different contexts of use). In this hybrid fashion, our methodology benefits from the advantages of both adaptation strategies ...
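
An illustrative sketch, under assumed device classes and context attributes (not the paper's methodology), of the hybrid idea: a statically prepared interface variant is chosen first and then adjusted dynamically at run time.

```python
# Hybrid adaptation step (illustrative): static choice of an interface variant per
# device class, followed by dynamic, context-driven adjustments at run time.
STATIC_VARIANTS = {"desktop": "full_layout.html", "mobile": "compact_layout.html"}

def adapt_interface(device_class, context):
    page = {"template": STATIC_VARIANTS.get(device_class, "compact_layout.html"),
            "images": "high_res", "widgets": ["comments", "related", "share"]}
    # Dynamic adaptation: alter the statically chosen interface at run time.
    if context.get("bandwidth_kbps", 0) < 500:
        page["images"] = "low_res"
    if context.get("screen_width_px", 1024) < 480:
        page["widgets"] = [w for w in page["widgets"] if w != "related"]
    return page

print(adapt_interface("mobile", {"bandwidth_kbps": 300, "screen_width_px": 360}))
```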

[Notation excerpt, Suprasad V. Amari, Member, IEEE: component labels; component failure and repair rates; system-failure frequency, i.e. the mean number of system failures per unit time; Boolean variables indicating whether a component is working or failed, and their complements; steady-state quantities ...]
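
For context, the standard steady-state identities for a single repairable component with failure rate \lambda and repair rate \mu (textbook results, not taken from this paper) are:

```latex
% Steady-state availability, unavailability, and failure frequency of one
% repairable component with failure rate \lambda and repair rate \mu.
\begin{align*}
  A   &= \frac{\mu}{\lambda + \mu}                         && \text{steady-state availability} \\
  U   &= 1 - A = \frac{\lambda}{\lambda + \mu}             && \text{steady-state unavailability} \\
  \nu &= A\,\lambda = \frac{\lambda\,\mu}{\lambda + \mu}   && \text{failure frequency (expected failures per unit time)}
\end{align*}
```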

Much textual engineering knowledge is captured in tables, particularly in spreadsheets and in documents such as equipment manuals. To leverage the benefits of artificial intelligence, industry must find ways to extract the data and relationships captured in these tables. This paper demonstrates the application of an ontological approach to make the classes and relations held in spreadsheet tables explicit. Ontologies offer a pathway because they provide formal, machine-interpretable definitions of shared concepts and of the relations between those concepts. We illustrate this with two case studies on a failure modes and effects analysis (FMEA) table. Our examples demonstrate how the relationship between rows and columns in a table can be represented in logic for FMEA entries, thereby allowing the same ontology to ingest instance data from the IEC 60812:2006 FMEA Standard and from a real industrial FMEA. We give relationships in the FMEA and asset hierarchy spreadsheets an explicit r...
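
A hedged sketch using rdflib, with a hypothetical namespace and property names (not the paper's ontology), showing how one FMEA row and its columns can be made explicit as triples:

```python
# Each FMEA table row becomes an individual; each column becomes a property,
# making the row/column relationships machine-interpretable. Names are illustrative.
from rdflib import Graph, Literal, Namespace, RDF

FMEA = Namespace("http://example.org/fmea#")   # hypothetical namespace
g = Graph()

row = FMEA.FailureMode_001                     # one table row as an individual
g.add((row, RDF.type, FMEA.FailureMode))
g.add((row, FMEA.affectsComponent, FMEA.Pump_A))               # 'Component' column
g.add((row, FMEA.hasEffect, Literal("Loss of coolant flow")))  # 'Effect' column
g.add((row, FMEA.hasSeverity, Literal(8)))                     # 'Severity' column

for s, p, o in g.triples((row, None, None)):
    print(p, o)
```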

Restoration of rainy images is considered one of the most important aspects of image restoration for improving outdoor vision. Many fields use this kind of restoration, such as driving assistance, environment monitoring, animal monitoring, computer vision, face recognition, object recognition, and personal photos. Image restoration simply means removing noise from an image, and most images contain some noise from the environment. Moreover, image quality assessment plays an important role in the evaluation of image enhancement algorithms. In this research, we use a total variation method to remove rain streaks from a single image. It shows good performance compared to other methods, using the measurements MSE, PSNR, and VIF for images with references and BRISQUE for images without references.
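
A hedged sketch, not the paper's exact method: total-variation denoising (via scikit-image) as a stand-in for rain-streak removal, evaluated with PSNR against a clean reference; the "rain" here is crude synthetic noise.

```python
# Total-variation denoising as a stand-in for rain-streak removal, scored with PSNR.
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_tv_chambolle
from skimage.metrics import peak_signal_noise_ratio

clean = img_as_float(data.camera())
rng = np.random.default_rng(0)
# Crude sparse noise standing in for rain streaks.
rainy = np.clip(clean + 0.3 * (rng.random(clean.shape) > 0.97), 0, 1)

restored = denoise_tv_chambolle(rainy, weight=0.1)
print("PSNR rainy:   ", peak_signal_noise_ratio(clean, rainy, data_range=1.0))
print("PSNR restored:", peak_signal_noise_ratio(clean, restored, data_range=1.0))
```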

In type-theoretic research on object-oriented programming, the issue of “covariance versus contravariance” is a topic of continuing debate. In this short note we argue that covariance and contravariance appropriately characterize two distinct and independent mechanisms. The so-called contravariance rule correctly captures the subtyping relation (that relation which establishes which sets of functions can replace another given set in every context). A covariant relation, instead, characterizes the specialization of code (i.e., the definition of new code which replaces old definitions in some particular cases). Therefore, covariance and contravariance are not opposing views, but distinct concepts that each have their place in object-oriented systems. Both can (and should) be integrated in a type-safe manner in object-oriented languages. We also show that the independence of the two mechanisms is not characteristic of a particular model but is valid in general, since covariant specia...
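
A sketch of the distinction in Python's type notation (not the paper's calculus): function subtyping is contravariant in the argument and covariant in the result, whereas covariant overriding is a specialization of code rather than an instance of subtyping.

```python
# Function subtyping vs. covariant code specialization, illustrated with Python types.
from typing import Callable

class Animal: ...
class Dog(Animal): ...

def handle_any_animal(a: Animal) -> Dog: ...
def expects_dog_handler(f: Callable[[Dog], Animal]) -> None:
    f(Dog())

# Safe substitution (subtyping): a function accepting the *wider* argument type (Animal)
# and returning the *narrower* result type (Dog) can stand in for Callable[[Dog], Animal].
expects_dog_handler(handle_any_animal)

class Shelter:
    def adopt(self, a: Animal) -> Animal: ...

class DogShelter(Shelter):
    # Covariant specialization of code: the override handles a special case (dogs only).
    # This is useful, but it is not the subtyping rule; the two mechanisms are distinct
    # and must be kept apart to stay type-safe, as the paper argues.
    def adopt(self, a: Dog) -> Dog: ...
```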