Andre Goforth - Academia.edu

Papers by Andre Goforth

Summary Real-Time Design with Peer Tasks

We introduce a real-time design methodology for large scale, distributed, parallel architecture, real-time systems (LDPARTS) that approaches system scheduling analysis in a way different from those methods that use a scalar metric of urgency such as found in rate (or deadline) monotonic theories. The latter assume the place for scheduling prioritization to be at the functional level of run-time processes; in the Ada programming language, for example, this refers to task scheduling. In our method, the fundamental units of prioritization, which we call work items, are system level or domain specific objects with timing requirements (deadlines) associated with them in the requirements specification. For LDPARTS, a work item consists of a collection of tasks. No priorities are assigned to tasks or, equivalently, tasks have equal priorities.

Ada in AI or AI in Ada. On developing a rationale for integration

The use of Ada as an Artificial Intelligence (AI) language is gaining interest in the NASA community, i.e., among parties who need to deploy Knowledge-Based Systems (KBS) compatible with the use of Ada as the software standard for the Space Station. A fair number of KBS and pseudo-KBS implementations in Ada exist today. Currently, no widely used guidelines exist to compare and evaluate these with one another. The lack of guidelines illustrates a fundamental problem inherent in trying to compare and evaluate implementations of any sort in languages that are procedural or imperative in style, such as Ada, with those in languages that are functional in style, such as Lisp. The strengths and weaknesses of using Ada as an AI language are discussed, and a preliminary analysis is provided of factors needed for the development of criteria for the integration of these two families of languages and the environments in which they are implemented. The intent for developing such criteria is to hav...

Advanced data management design for autonomous telerobotic systems in space using spaceborne symbolic processors

The use of computers in autonomous telerobots is reaching the point where advanced distributed processing concepts and techniques are needed to support the functioning of Space Station era telerobotic systems. Three major issues that have impact on the design of data management functions in a telerobot are covered. The paper also presents a design concept that incorporates an intelligent systems manager (ISM), running on a spaceborne symbolic processor (SSP), to address these issues. The first issue is the support of a system-wide control architecture or control philosophy. Salient features of two candidates are presented that impose constraints on data management design. The second issue is the role of data management in terms of system integration. This refers to providing shared or coordinated data processing and storage resources to a variety of telerobotic components such as vision, mechanical sensing, real-time coordinated multiple limb and end effector control, and planning and reas...
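The shared-resource role described here lends itself to a small illustration. The Python sketch below (all names are hypothetical, not from the paper) shows a data manager granting subsystems such as vision and limb control coordinated access to shared processing and storage resources.

```python
# Illustrative sketch of a telerobot data manager coordinating shared
# resources among subsystems (names are hypothetical, not from the paper).
from threading import Semaphore

class DataManager:
    def __init__(self, processors: int, storage_slots: int):
        self._cpu = Semaphore(processors)        # shared processing resources
        self._store = Semaphore(storage_slots)   # shared storage resources

    def run(self, subsystem: str, job):
        """Grant a subsystem coordinated access to one CPU and one storage slot."""
        with self._cpu, self._store:
            print(f"[{subsystem}] running with shared resources")
            return job()

dm = DataManager(processors=2, storage_slots=4)
dm.run("vision", lambda: "frame processed")
dm.run("limb-control", lambda: "trajectory updated")
```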

Real-Time Design with Peer Tasks

We introduce a real-time design methodology for large scale, distributed, parallel architecture, real-time systems (LDPARTS), as an alternative to those methods using rate or deadline monotonic analysis. In our method the fundamental units of prioritization, work items, are domain specific objects with timing requirements (deadlines) found in the user's specification. A work item consists of a collection of tasks of equal priority. Current scheduling theories are applied with artifact deadlines introduced by the designer, whereas our method schedules work items to meet the user's specification deadlines (sometimes called end-to-end deadlines). Our method supports the following scheduling properties. First, work item scheduling is based on domain specific importance instead of task level urgency and still meets as many user specification deadlines as can be met by scheduling tasks with respect to urgency. Second, the minimum (closest) on-line deadline that can be guaranteed for a work item of highest importance, scheduled at run time, is approximately the inverse of the throughput, measured in work items per second. Third, throughput is not degraded during overload, and instead of resorting to task shedding during overload, the designer can specify which work items to shed. We prove these properties in a mathematical model.
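As a concrete reading of these properties, the sketch below (Python; a simplification with invented names, not the authors' implementation) orders work items by domain-specific importance rather than task urgency, runs each item's peer tasks without relative priorities, and under overload sheds only the work items the designer marked as sheddable.

```python
# Minimal sketch of importance-based work-item scheduling (illustrative,
# not the paper's implementation; names and numbers are invented).
from dataclasses import dataclass, field

@dataclass(order=True)
class WorkItem:
    importance: int                 # domain-specific importance (higher = more important)
    name: str = field(compare=False)
    tasks: list = field(compare=False, default_factory=list)  # peer tasks, equal priority

def schedule(items, capacity, sheddable):
    """Run the most important work items first; under overload, shed only
    the work items the designer marked as sheddable."""
    queue = sorted(items, reverse=True)
    if len(queue) > capacity:                      # overload detected
        queue = [w for w in queue if w.name not in sheddable][:capacity]
    for w in queue:
        for task in w.tasks:                       # peer tasks: no relative priorities
            task()

items = [WorkItem(3, "attitude-update", [lambda: None]),
         WorkItem(1, "telemetry-log", [lambda: None])]
schedule(items, capacity=1, sheddable={"telemetry-log"})
```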

Real-Time Design with Peer Tasks, October 1995

We introduce a real-time design methodology for large scale, distributed, parallel architecture, real-time systems (LDPARTS) that approaches system scheduling analysis in a way different from those methods that use a scalar metric of urgency such as found in rate (or deadline) monotonic theories. The latter assume the place for scheduling prioritization to be at the functional level of run-time processes; in the Ada programming language, for example, this refers to task scheduling. In our method, the fundamental units of prioritization, which we call work items, are system level or domain specific objects with timing requirements (deadlines) associated with them in the requirements specification. For LDPARTS, a work item consists of a collection of tasks. No priorities are assigned to tasks or, equivalently, tasks have equal priorities. Such a collection of tasks is referred to as peer tasks. Current scheduling theories are applied with artifact deadlines introduced by the designer, whereas our method schedules work items to meet the user's specification deadlines ...

Ada as a parallel language for high performance computers

Proceedings of the TRI-Ada '90 Conference, 1990

This paper reports on experimental results which demonstrate the potential of Ada as a parallel programming language for large scale, scientific applications on high performance multiprocessors. Reported performance results show a linear speed-up by a factor of 10 over 10 processors. Linear speed-up over a larger number of processors is indicated, given the availability of higher performance configurations and larger data sets.
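For reference, linear speed-up as reported here is simply S(p) = T(1)/T(p) ≈ p. A throwaway Python check (the timing values are invented placeholders, not the paper's measurements):

```python
# Speed-up and parallel efficiency from wall-clock timings.
# The timing values below are illustrative placeholders, not measured data.
def speedup(t1: float, tp: float) -> float:
    return t1 / tp

def efficiency(t1: float, tp: float, p: int) -> float:
    return speedup(t1, tp) / p

t1, t10 = 100.0, 10.0   # seconds on 1 and 10 processors (hypothetical)
print(speedup(t1, t10))          # 10.0 -> linear speed-up over 10 processors
print(efficiency(t1, t10, 10))   # 1.0  -> ideal efficiency
```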

The R-Shell approach - Using scheduling agents in complex distributed real-time systems

9th Computing in Aerospace Conference, 1993

Large, complex real-time systems such as space and avionics systems are extremely demanding in their scheduling requirements. Current OS design approaches are quite limited in the capabilities they provide for task scheduling. Typically, they simply implement a particular uniprocessor scheduling strategy and do not provide any special support for network scheduling, overload handling, fault tolerance, distributed processing, etc. Our design of the R-Shell real-time environment facilitates the implementation of a variety of sophisticated but efficient scheduling strategies, incorporating all of these capabilities. This is accomplished through scheduling agents which reside in the application run-time environment and are responsible for coordinating the scheduling of the application.
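To make the agent idea concrete, here is a minimal sketch (hypothetical API, not R-Shell's actual interface) of a scheduling agent living in the application's run-time environment and switching strategy as conditions such as overload change:

```python
# Sketch of a per-application scheduling agent (hypothetical interface;
# R-Shell's real design is not reproduced here).
class SchedulingAgent:
    def __init__(self, strategies):
        self.strategies = strategies     # e.g. {"normal": ..., "overload": ...}

    def dispatch(self, tasks, load: float):
        mode = "overload" if load > 0.9 else "normal"
        return self.strategies[mode](tasks)

agent = SchedulingAgent({
    # Normal mode: earliest-deadline-first ordering.
    "normal":   lambda ts: sorted(ts, key=lambda t: t["deadline"]),
    # Overload mode: keep only the two most valuable tasks.
    "overload": lambda ts: sorted(ts, key=lambda t: t["value"], reverse=True)[:2],
})
tasks = [{"name": "nav", "deadline": 5, "value": 9},
         {"name": "log", "deadline": 2, "value": 1}]
print(agent.dispatch(tasks, load=0.5))
```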

Attaining Program Affordability Through Integration of Logistical Operations and Health Maintenance

1st Space Exploration Conference: Continuing the Voyage of Discovery, 2005

Program affordability needs to be built in at the initial concept formulation stage. For NASA's space exploration vision this is critical for long range sustainability of human presence in space. What is often overlooked in the initial concept formulation of a large scale system endeavor such as NASA's Constellation program is the hidden cost of maintaining requisite operational safety margins and redundancy through adequate supply chain logistics. Ensuring adequate supply chain logistics necessitates the integration of operations and maintenance cycles. What enables this integration is the coordination and reconciliation across multiple equipment types of system health features and logistical information such as: (1) prognostic drivers from Integrated Vehicle Health Monitoring (IVHM) systems producing proactive condition-based "maintain me" demands, (2) maintenance management systems tracking usage and producing scheduled maintenance demands, (3) unscheduled maintenance demands resulting from any trouble reports entered by human observers of conditions missed by the IVHM system, and (4) implicit maintenance demands resulting from mission plans which require assignment of vehicular/robotic assets and consequently require assurance of the assigned assets' fitness for the intended tasks. In this paper we discuss how Coordinated Multi-source Maintenance on Demand (CMMD) technology, which is being transitioned to the USMC Coherent Analytical Computing Environment (CACE) program and the Joint Strike Fighter Program, can be applied to the NASA domain, and its benefits in terms of mission affordability, operations efficiency and system health effectiveness. Using concepts derived from CMMD, we discuss the kind of IVHM capabilities needed to optimize multiple, parallel, yet inter-linked, operations-maintenance cycles, thereby optimizing program affordability while meeting specific mission supportability requirements across a broad range of mission scenarios.
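The four demand sources enumerated above amount to merging heterogeneous streams into one prioritized maintenance queue. A toy Python sketch of that reconciliation (the source names come from the abstract; the record fields and urgency values are invented):

```python
# Toy reconciliation of maintenance demands from four sources into one
# prioritized queue (source names from the paper; fields are hypothetical).
import heapq

def merge_demands(ivhm, scheduled, trouble_reports, mission_implied):
    """Merge demand streams; lower 'urgency' value = serviced sooner."""
    queue = []
    for source, demands in [("IVHM", ivhm), ("scheduled", scheduled),
                            ("trouble-report", trouble_reports),
                            ("mission", mission_implied)]:
        for d in demands:
            heapq.heappush(queue, (d["urgency"], d["asset"], source))
    return [heapq.heappop(queue) for _ in range(len(queue))]

print(merge_demands(
    ivhm=[{"urgency": 1, "asset": "pump-A"}],            # prognostic "maintain me"
    scheduled=[{"urgency": 3, "asset": "filter-B"}],     # usage-based schedule
    trouble_reports=[{"urgency": 2, "asset": "seal-C"}], # human-observed condition
    mission_implied=[{"urgency": 1, "asset": "rover-1"}],  # fitness for assigned task
))
```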

Knowledge based systems and Ada: an overview of the issues

ACM SIGAda Ada Letters, 1988

The goal of this paper is to present an overview of the issues of using Ada in Artificial Intelligence (AI). The purpose of this paper is to act as a catalyst and a focus for these ongoing discussions. Our perspective is one from a cross-cultural vantage point. On one side is the Ada community that represents the engineer's "how to" culture; and, on the other side, the AI community that represents the scientist's "what" and "why" culture. Differences in communication media, i.e., programming languages, are the direct result of the different foci of the two cultures. We discuss some of the obstacles to a marriage of the two and assess the current promising development paths for overcoming these.

Secured Advanced Federated Environment (SAFE): a NASA solution for secure cross-organization collaboration

Proceedings of the Twelfth IEEE International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises (WET ICE 2003), 2003

This paper discusses the challenges and security issues inherent in building complex cross-organizational collaborative projects and software systems within NASA. By applying the design principles of compartmentalization, organizational hierarchy and inter-organizational federation, the Secured Advanced Federated Environment (SAFE) is laying the foundation for a collaborative virtual infrastructure for the NASA community. A key element of SAFE is the Micro Security Domain (MSD) concept, which balances the need to collaborate and the need to enforce enterprise and local security rules. With the SAFE approach, security is an integral component of enterprise software and network design, not an afterthought. I. Introduction: Like many federal agencies, National Aeronautics and Space Administration (NASA) field centers are distributed across the United States. NASA contractors and partners are located throughout the world. Many NASA projects and missions involve geographically distributed teams at NASA centers, industry, and universities. In 1999 the NASA Collaborative Engineering Environment (CEE) project developed a proof-of-concept "CEE room" to allow engineering teams from different sites to see and hear each other, share design data, and view and manipulate CAD drawings together in real time over ISDN. Prototypes were deployed in all ten major NASA installations within one year. One deep space mission required collaboration between a NASA center and a NASA contractor site. Using CEE rooms to collaborate saved the spacecraft design team 70 person-trips in the first two months, recouping the $70K equipment investment in 60 days. The CEE room concept was ...
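The Micro Security Domain idea of balancing enterprise-wide rules against local collaboration rules can be caricatured in a few lines (purely illustrative; this is not SAFE's actual policy model, and all names are invented):

```python
# Caricature of a Micro Security Domain check: access is granted only when
# both the enterprise-wide and the local (domain) rules allow it.
def msd_allows(user, action, enterprise_rules, local_rules):
    return (enterprise_rules.get((user["org"], action), False)
            and local_rules.get((user["id"], action), False))

enterprise = {("nasa-center", "read-cad"): True}   # enterprise-level rule
local      = {("alice", "read-cad"): True}         # local, per-domain rule
print(msd_allows({"id": "alice", "org": "nasa-center"}, "read-cad",
                 enterprise, local))   # True: both layers agree
```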

Communications for Integrated Modular Avionics

2007 IEEE Aerospace Conference, 2007

The aerospace industry has been adopting avionics architectures to take advantage of advances in computer engineering. Integrated Modular Avionics (IMA), as described in ARINC 653, distributes functional modules into a robust configuration interconnected with a "virtual backplane" data communications network. Each avionics module's function is defined in software compliant with the APEX Application Program Interface. The Avionics Full-Duplex Ethernet (AFDX) network replaces the point-to-point connections used in previous distributed systems with "virtual links". This network creates a command and data path between avionics modules, with the software and network defining the active virtual links over an integrated physical network. In the event of failures, the software and network can perform complex reconfigurations very quickly, resulting in a very robust system. In this paper, suitable architectures, standards and conceptual designs for IMA computational modules and the virtual backplane are defined and analyzed for applicability to spacecraft. The AFDX network standard is examined in detail and compared with IEEE 802.3 Ethernet. A reference design for the "Ancillary Sensor Network" (ASN) is outlined based on the IEEE 1451 "Standard for a Smart Transducer Interface for Sensors and Actuators" using real-time operating systems, time-deterministic AFDX and wireless LAN technology. Strategies for flight test and operational data collection related to Systems Health Management are developed, facilitating vehicle ground processing. Finally, a laboratory evaluation defines performance metrics and test protocols and summarizes the results of AFDX network tests, allowing identification of design issues and determination of ASN subsystem scalability, from a few to potentially thousands of smart and legacy sensors.
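One quantitative handle on AFDX virtual links is the bandwidth bound each link's configuration implies: a virtual link may transmit at most one frame of its maximum frame size per Bandwidth Allocation Gap (BAG). A quick check (the example virtual link names and parameters are invented):

```python
# Per-virtual-link bandwidth bound in AFDX: at most one frame of l_max
# bytes per Bandwidth Allocation Gap (BAG). Example values are invented.
def vl_bandwidth_bps(l_max_bytes: int, bag_ms: float) -> float:
    return l_max_bytes * 8 / (bag_ms / 1000.0)

vls = {"vl-nav":    (1518, 2),    # (l_max in bytes, BAG in ms)
       "vl-sensor": (256, 32)}
for name, (l_max, bag) in vls.items():
    print(f"{name}: {vl_bandwidth_bps(l_max, bag) / 1e6:.3f} Mbps")
```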

The Role and Impact of Software Coding Standards On System Integrity

AIAA Infotech@Aerospace (I@A) Conference, 2013

Coding standards are an integral part of today's safety-critical computer systems. Software verification and validation (V&V) practices significantly impact the cost of achieving human-rated levels of system integrity. The choices of software used to meet real-time, hard-deadline requirements in onboard flight critical systems are relatively narrow. The stringent technical demands and expertise of the domain and the limited, specialized availability of industrial grade software products are the primary limiting factors. Robustness of software is influenced by the choice of programming languages, coding standards and testing, the latter being the primary form of V&V practice. As a result, the state-of-the-art software in these systems uses less current programming features and techniques than those found, on the whole, in the software industry. Application of coding standards is central to high integrity systems software development. Their goal is to make the software robust and safe and thereby contribute to overall system integrity. This paper examines the role and impact of interactions between the use of the C++ programming language, the organization's own coding standards, and how these fit within V&V processes and procedures, on robustness and testing, as recently observed in NASA's EFT-1 project. These observations will help future flight software managers and engineers better apply coding standards throughout the V&V life cycle and optimize their impact on robustness and affordability.

Benchmarking Ada tasking on tightly coupled multiprocessor architectures

7th Computers in Aerospace Conference, 1989

Embedded Data Processor and Portable Computer Technology testbeds

9th Computing in Aerospace Conference, 1993

Attention is given to current activities in the Embedded Data Processor and Portable Computer Technology testbed configurations that are part of the Advanced Data Systems Architectures Testbed at the Information Sciences Division at NASA Ames Research Center. The Embedded Data Processor Testbed evaluates advanced microprocessors for potential use in mission and payload applications within the Space Station Freedom Program. The Portable Computer Technology (PCT) Testbed integrates and demonstrates advanced portable computing devices and data system architectures. The PCT Testbed uses both commercial and custom-developed devices to demonstrate the feasibility of functional expansion and networking for portable computers in flight missions.

Certification of COTS Software in NASA Human Rated Flight Systems

Infotech@Aerospace 2012, 2012

Adoption of commercial off-the-shelf (COTS) products in safety critical systems has been seen as a promising acquisition strategy to improve mission affordability, yet has come with significant barriers and challenges. Attempts to integrate COTS software components into NASA human rated flight systems have been, for the most part, complicated by verification and validation (V&V) requirements necessary for flight certification per NASA's own standards. For software from COTS sources, and in general from 3rd party sources, whether commercial, government, modified or open source, the expectation is that it meets the same certification criteria as those used for in-house software, and that it does so as if it were built in-house. The latter is a critical and hidden issue. This paper examines the longstanding barriers and challenges in the use of 3rd party software in safety critical systems and covers recent efforts to use COTS software in NASA's Multi-Purpose Crew Vehicle (MPCV) project. It identifies some core artifacts without which the use of COTS and 3rd party software is, for all practical purposes, a nonstarter for affordable and timely insertion into flight critical systems. The paper covers the first use by NASA in a flight critical system of COTS software with prior FAA certification heritage, shown to meet the RTCA DO-178B standard, and how this certification may, in some cases, be leveraged to allow the use of analysis in lieu of testing. Finally, the paper proposes the establishment of an open source forum for development of safety critical 3rd party software.

Intelligent Information Fusion in the Aviation Domain: A Semantic-Web Based Approach

AIAA 1st Intelligent Systems Technical Conference, 2004

Information fusion from multiple sources is a critical requirement for System Wide Information Management in the National Airspace System (NAS). NASA and the FAA envision creating an "integrated pool" of information originally coming from different sources, which users, intelligent agents and NAS decision support tools can tap into. In this paper we present the results of our initial investigations into the requirements and prototype development of such an integrated information pool for the NAS. We have attempted to ascertain key requirements for such an integrated pool based on a survey of DSS tools that will benefit from this integrated pool. We then advocate key technologies from computer science research areas such as the semantic web, information integration, and intelligent agents that we believe are well suited to achieving the envisioned system wide information management capabilities.
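As a toy rendering of the "integrated pool" idea, the sketch below (record shapes and feed names are invented for illustration) merges per-flight records from multiple sources under one key, producing a pool that agents and DSS tools could then query:

```python
# Toy "integrated information pool": merge per-flight records from multiple
# sources under one key. Record shapes and feeds are invented.
from collections import defaultdict

def integrate(*sources):
    pool = defaultdict(dict)
    for source in sources:
        for record in source:
            pool[record["flight_id"]].update(record)
    return dict(pool)

faa_feed = [{"flight_id": "UA123", "route": "SFO-ORD"}]
wx_feed  = [{"flight_id": "UA123", "turbulence": "moderate"}]
print(integrate(faa_feed, wx_feed))
# {'UA123': {'flight_id': 'UA123', 'route': 'SFO-ORD', 'turbulence': 'moderate'}}
```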

AI, Automation and the Flight Telerobotic Servicer

Space Station Automation IV, 1988

NASA has recently completed a study for the preliminary definition of a teleoperated robotic device. The Flight Telerobotic Servicer (FTS) will be used to assist astronauts in many of the on-board tasks of assembly, maintenance, servicing and inspection of the Space Station. This paper makes an assessment of the role that Artificial Intelligence (AI) may have in furthering the automation capabilities of the FTS and, hence, extending the FTS capacity for growth and evolution. Relevant system engineering issues are identified, and an approach for insertion of AI technology is presented in terms of the NASA/NBS Standard Reference Model (NASREM) control architecture.

Performance measurement of parallel Ada

Proceedings of the Working Group on Ada Performance Issues, 1990

This paper reports on the development of benchmarks and performance measures for parallel Ada tasking. The focus is on the macroscopic behavior of the benchmarks across a set of load parameters because parallel processing of Ada tasks involves complex run-time behavior and side effects. An Ada program of an application with parallel processes was implemented and its tasks' execution on a multiprocessor system was studied. The chosen application was the NASREM model developed by the National Bureau of Standards (NBS). The purpose of the model is to serve as a standard reference control architecture for intelligent, autonomous telerobotic systems. The control architectures of these systems have significant communication requirements as well as computational requirements. A preliminary load model of communication and computation characteristics has been made. Experiments were run on a Sequent Balance 8000, which has a tightly coupled, shared memory multiprocessor architecture and hosts a proprietary version of UNIX. The number of processors varied from 1 to 16. The software environment was a Verdix Ada compiler. A proprietary Ada run-time environment automatically scheduled Ada tasks for parallel execution on available processors. Most results show lowered communication response time as more processors were made available. However, in some cases communication response time increased as more processors were added; this appears to be due to system overhead.
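The macroscopic measurement style described, response time swept across a load parameter such as processor count, looks roughly like the harness below (a Python stand-in for the Ada benchmarks; all names and workloads are invented):

```python
# Stand-in benchmark harness: sweep a load parameter (here, worker count)
# and record per-message communication response time. All names/workloads
# are invented; the originals were Ada tasks on a Sequent Balance 8000.
import time
from concurrent.futures import ThreadPoolExecutor

def rendezvous(_):
    time.sleep(0.001)        # stands in for one task communication

def response_time(workers: int, messages: int = 64) -> float:
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(rendezvous, range(messages)))
    return (time.perf_counter() - start) / messages

for p in (1, 2, 4, 8, 16):
    print(f"{p:2d} workers: {response_time(p) * 1e3:.2f} ms/msg")
```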

Artificial intelligence, automation, and the flight telerobotic servicer

Planning to explore: Using a coordinated multisource infrastructure to overcome present and future space flight planning challenges

Few human endeavors present as much of a planning and scheduling challenge as space flight, particularly manned space flight. On the operational side alone, the efforts of thousands of people across hundreds of organizations need to be coordinated. Numerous tasks of varying complexity and nature, from scientific to construction, need to be accomplished within limited mission time frames. Resources need to be carefully managed and contingencies worked out, often on very short notice. From the beginning of the NASA space program, planning has been done by large teams of domain experts working months, sometimes years, to put together a single mission. This approach, while proven very reliable up to now, is becoming increasingly harder to sustain. Elevated levels of NASA space activities, from deployment of the new Crew Exploration Vehicle (CEV) and completion of the International Space Station (ISS) to the planned lunar missions and permanent lunar bases, will put an even greater strain on this largely manual process. While several attempts to automate it have been made in the past, none have fully succeeded. In this paper we describe the current NASA planning methods, outline their advantages and disadvantages, discuss the planning challenges of upcoming missions, and propose a distributed planning/scheduling framework (CMMD) aimed at unifying and optimizing the planning effort. CMMD will not attempt to make the process completely automated, but rather serve in a decision support capacity for human managers and planners. It will help manage information gathering, creation of partial and consolidated schedules, inter-team negotiations, contingencies investigation, and rapid re-planning when the situation demands it. The first area of CMMD application will be planning for Extravehicular Activities (EVA) and associated logistics. Other potential applications, not only in the space flight domain, and future research efforts are discussed as well.
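The decision-support loop described, gather inputs, consolidate partial schedules, re-plan rapidly on demand, can be outlined as below (a hypothetical shape, not the actual CMMD design; a "partial schedule" here is just a map from activity to start hour):

```python
# Hypothetical outline of a consolidate-and-replan loop for partial
# schedules; not the actual CMMD design.
def consolidate(partials):
    """Merge team schedules, keeping the earliest proposed start per activity."""
    merged = {}
    for schedule in partials:
        for activity, start in schedule.items():
            merged[activity] = min(start, merged.get(activity, start))
    return merged

def replan(schedule, delayed_activity, delay_hours):
    """Rapid re-plan: push one activity (e.g. an EVA) and everything after it."""
    cutoff = schedule[delayed_activity]
    return {a: s + delay_hours if s >= cutoff else s for a, s in schedule.items()}

teams = [{"eva-prep": 1, "eva": 3}, {"eva": 4, "stow": 6}]
plan = consolidate(teams)          # {'eva-prep': 1, 'eva': 3, 'stow': 6}
print(replan(plan, "eva", 2))      # {'eva-prep': 1, 'eva': 5, 'stow': 8}
```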
