Resource management for embedded systems
Related papers
Heterogeneous memory management for embedded systems
Proceedings of the international conference on Compilers, architecture, and synthesis for embedded systems - CASES '01, 2001
Abstract This paper presents a technique for the efficient compiler management of software-exposed heterogeneous memory. In many lower-end embedded chips, often used in micro-controllers and DSP processors, heterogeneous memory units such as scratch-pad SRAM, internal DRAM, external DRAM and ROM are visible directly to the software, without automatic management by a hardware caching mechanism. Instead the memory units are mapped to different portions of the address space. Caches are avoided because of their ...
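On chips like those the abstract describes, software-exposed memories are typically selected through linker sections rather than by hardware caching. A minimal sketch of that idea, assuming GCC-style section attributes; the section names (".spm", ".extrom") and the FIR example are illustrative, not taken from the paper:

```cpp
#include <cstdint>
#include <cstddef>

// On many MCUs the linker script maps named sections to physical memories.
// Hot data goes to scratch-pad SRAM, cold constants to external ROM.
__attribute__((section(".spm")))        // illustrative scratch-pad section
static std::int32_t fir_state[4] = {0, 0, 0, 0};

__attribute__((section(".extrom")))     // illustrative external-ROM section
static const std::int32_t coeffs[4] = {1, 2, 3, 4};

// A tiny FIR step that touches both memories each sample.
std::int32_t fir_step(std::int32_t sample) {
    for (std::size_t i = 3; i > 0; --i) fir_state[i] = fir_state[i - 1];
    fir_state[0] = sample;
    std::int32_t acc = 0;
    for (std::size_t i = 0; i < 4; ++i) acc += coeffs[i] * fir_state[i];
    return acc;
}
```

A compiler-managed scheme like the paper's would decide such placements automatically from access profiles instead of relying on the programmer's annotations.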
π: Effective Use of Metacomputation for Structuring Operating Systems
1993
To respond to continuing hardware advances and changing application demands, operating systems must be flexible. Recently developed metacomputation techniques can provide the desired flexibility. The π architecture investigates the use of metaobject protocols to tailor subsystems of operating systems. It allows effective utilization of events and resources through interfaces for tailorable object implementations. Its main contributions are flexible event management, scope reification and change management. The challenges and possible solutions are described in the context of a distributed shared object subsystem. Keywords: operating systems, distribution, resources, events, interfaces, reification, metacomputation, scope, distributed shared objects
Exploiting template metaprogramming to customize an object-oriented operating system
IEEE International Symposium on Industrial Electronics, 2013
Nowadays, the growing complexity of embedded systems demands configurability, variability and reuse. Conditional compilation and object orientation are two of the most widely applied approaches to managing system variability. While the former increases code-management complexity, the latter provides the modularity and adaptability needed to simplify the development of reusable and customizable software, at the expense of performance and memory penalties. This paper shows how C++ TMP (Template Metaprogramming) can be applied to manage the variability of an object-oriented operating system while eliminating the performance and memory-footprint overhead. In doing so, only the desired functionality is statically generated, ensuring that the code is optimized and adjusted to application requirements and hardware resources.
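The core TMP technique the abstract refers to can be sketched as compile-time policy selection: the variant is a template parameter, so unselected variants generate no code at all. The policy and class names below are illustrative, not the paper's OS:

```cpp
#include <cstddef>

// Two scheduling policies; only the one instantiated is compiled in.
struct RoundRobin {
    static constexpr std::size_t next(std::size_t cur, std::size_t n) {
        return (cur + 1) % n;
    }
};
struct Fixed {  // degenerate policy: always run task 0
    static constexpr std::size_t next(std::size_t, std::size_t) { return 0; }
};

// The scheduler's variability is resolved statically at instantiation time,
// with no virtual dispatch and no dead code for unused policies.
template <typename Policy>
class Scheduler {
    std::size_t cur_ = 0;
    std::size_t n_;
public:
    explicit Scheduler(std::size_t n) : n_(n) {}
    std::size_t pick() { return cur_ = Policy::next(cur_, n_); }
};
```

Compared with `#ifdef`-based conditional compilation, the variant set stays type-checked in one code base; compared with virtual functions, there is no runtime or memory overhead, which is the trade-off the abstract highlights.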
Memory-access-aware data structure transformations for embedded software with dynamic data accesses
IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2000
Embedded systems are evolving from traditional, stand-alone devices to devices that participate in Internet activity. The days of simple, manifest embedded software [e.g., a simple finite-impulse response (FIR) algorithm on a digital signal processor (DSP)] are over. Complex, nonmanifest code, executed on a variety of embedded platforms in a distributed manner, characterizes next-generation embedded software. One dominant niche, which we concentrate on, is embedded multimedia software. The need is present to map large-scale, dynamic multimedia software onto an embedded system in a systematic and highly optimized manner. The objective of this paper is to introduce high-level, systematically applicable data structure transformations and to show in detail the practical feasibility of our optimizations on three real-life multimedia case studies. We derive Pareto tradeoff points in terms of accesses versus memory footprint and obtain significant gains in execution time and power consumption with respect to the initial implementation choices. Our approach is a first step to systematically applying high-level data structure transformations in the context of memory-efficient and low-power multimedia systems.
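One classic example of the access-versus-footprint trade-off such transformations explore is splitting a record so that a frequently scanned field is stored contiguously. A minimal sketch, with field names invented for illustration rather than drawn from the paper's case studies:

```cpp
#include <vector>
#include <cstdint>

// Original layout: each record mixes a hot field with cold bulk data,
// so scanning the hot field drags whole records through the cache.
struct PacketAoS {
    std::uint32_t len;
    std::uint8_t payload[60];
};

// Transformed layout: the hot field is split into its own dense array,
// reducing memory accesses per scan at the cost of extra bookkeeping.
struct PacketsSoA {
    std::vector<std::uint32_t> len;                   // hot: scanned often
    std::vector<std::vector<std::uint8_t>> payload;   // cold: touched rarely
};

std::uint64_t total_len(const PacketsSoA& p) {
    std::uint64_t sum = 0;
    for (auto l : p.len) sum += l;  // streams through a contiguous array
    return sum;
}
```

A systematic methodology like the paper's would evaluate layouts such as these as points on the access-count-versus-footprint Pareto curve rather than picking one by hand.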
2010
The MOLEN programming paradigm was proposed to offer general, function-like execution of the computation-intensive parts of programs on the reconfigurable fabric of polymorphic computing platforms. Within the MOLEN programming paradigm, the MOLEN SET and EXECUTE primitives are employed to map an arbitrary function onto the reconfigurable hardware. However, these instructions in their current form are intended for a single-application execution scenario. In this paper, we extend the semantics of MOLEN SET and EXECUTE to take a more generalized approach and support multi-application, multitasking scenarios. This way, the new SET and EXECUTE become APIs added to the operating system runtime. We use these APIs to abstract the concept of a task from its actual implementation. Our experiments show that the proposed approach has negligible overhead on overall application execution.
Efficient cross-domain mechanisms for building kernel-less operating systems
1996
We describe a set of efficient cross-domain mechanisms that allow operating systems to be implemented as cooperating applications, eliminating the need for a monolithic kernel. Our implementation, called SPACE [1, 2], can achieve higher performance than kernel-based systems by allowing applications to build customized system services and tailor system interfaces for performance. On the SPARC architecture we have measured minimal application-to-application system service calls that are 5 times faster than Solaris getpid(), and customized thread creation that is 50 times faster than minimal Solaris threads.
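The shape of such application-to-application service calls can be sketched as a registry of direct entry points invoked without a kernel trap. This is a toy illustration of the idea only; the class and service names are invented, protection-domain switching is elided, and nothing here reflects SPACE's actual interface:

```cpp
#include <cstdint>
#include <functional>
#include <string>
#include <unordered_map>

// Services register entry points in a shared table; callers invoke them
// directly (an ordinary indirect call) instead of trapping into a kernel.
class ServiceTable {
    std::unordered_map<std::string, std::function<std::int64_t()>> entries_;
public:
    void register_service(const std::string& name,
                          std::function<std::int64_t()> fn) {
        entries_[name] = std::move(fn);
    }
    std::int64_t call(const std::string& name) {
        return entries_.at(name)();  // direct dispatch, no mode switch
    }
};
```

The performance numbers in the abstract come from exactly this contrast: a cross-domain call that costs roughly an indirect call plus a domain switch, versus a full kernel trap for something as small as getpid().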