Optimized Upload of Telemetry Data
Related papers
FACET: PC-parallel architecture for cost-efficient telemetry processing
1998
NASA's Mission to Planet Earth (MTPE) is planning to launch the Earth Observing System (EOS) starting in 1998. The large number of planned remote sensing satellites will bring 500 gigabytes of information per day. The EOS Data and Information System (EOSDIS) is responsible for ingesting and archiving this data. One important component of the EOSDIS system is the data operation, which involves extracting the packets and reconstructing and archiving the original remotely sensed data products. Due to transmission errors and the way data from the different sensors is sampled and encoded, packets typically arrive out of order, possibly with some of them missing or repeated. Many special hardware solutions have been proposed to solve this real-time problem. In this paper, we demonstrate a commercial off-the-shelf (COTS) solution. The hardware capitalizes on the progress made in the area of networks of workstations (NOW), particularly PC clusters. The software and algorithm exploit the data characteristics and parallelism in the telemetry stream to make use of load balancing and efficient parallel processing. It will be shown that this solution can provide a high performance-to-cost ratio together with programmability.
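The abstract does not spell out the reconstruction algorithm, so the sketch below is only a rough illustration of the kind of processing involved: packets are routed to workers by hashing a stream identifier so that each sensor's stream stays on one node, and each worker then sorts by sequence count, drops repeats, and records gaps. The field names, the worker count, and the routing rule are assumptions, not details from the FACET paper.

```python
# Hypothetical sketch of cluster-style load balancing for telemetry packets.
# Field names (apid, seq) and the worker count are assumptions.
from collections import defaultdict

NUM_WORKERS = 4

def route(packets):
    """Assign each packet to a worker by hashing its application ID,
    so all packets of one instrument land on the same node."""
    queues = defaultdict(list)
    for pkt in packets:
        queues[hash(pkt["apid"]) % NUM_WORKERS].append(pkt)
    return queues

def reconstruct(worker_packets):
    """Per-worker step: sort by sequence count, drop duplicates,
    and report gaps left by missing packets."""
    seen, ordered, gaps = set(), [], []
    for pkt in sorted(worker_packets, key=lambda p: p["seq"]):
        if pkt["seq"] in seen:
            continue                                      # repeated packet
        if seen and pkt["seq"] != max(seen) + 1:
            gaps.append((max(seen) + 1, pkt["seq"] - 1))  # missing range
        seen.add(pkt["seq"])
        ordered.append(pkt)
    return ordered, gaps
```

Hashing on a per-stream identifier is what makes the per-worker reordering independent, which is one plausible source of the load balancing and parallelism the abstract mentions.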
Optimization of an Earth Observation Data Processing and Distribution System
Multi-purposeful Application of Geospatial Data, 2018
Conventional Earth Observation Payload Data Ground Segments (PDGS) continuously receive variable requests for data processing and distribution. However, their architecture was conceived to run on the premises of satellite operators and therefore has intrinsic limitations in offering variable services. In this chapter, we introduce cloud computing technology as an alternative for offering variable services. For that purpose, a cloud infrastructure based on OpenNebula and the PDGS used in the Deimos-2 mission was adapted with the objective of optimizing it using the ENTICE open source middleware. Preliminary results with a realistic satellite recording scenario are presented.
Design of a Continuous Media Data Transport Service and Protocol
Applications with real-time data transport requirements fall into two categories: those which require transmission of data units at regular intervals, which we call continuous media (CM) clients, e.g. video conferencing, voice communication, high-quality digital sound; and those which generate data for transmission at relatively arbitrary times, which we call real-time message-oriented clients. Because CM clients are better able to characterize their future behavior than message-oriented clients, a data transport service dedicated to CM clients can use this a priori knowledge to more accurately predict their future resource demands. Therefore, a separate transport service can potentially provide a more cost-effective service along with additional functionality to support CM clients. The design of such a data transport service for CM clients and its underlying protocol (within the BLANCA gigabit testbed project) is presented in this document. This service provides unreliable, in-sequence transfer (simplex, periodic) of so-called stream data units (STDUs) between a sending and a receiving client, with performance guarantees on loss, delay, and throughput.
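The service is described as unreliable but in-sequence, which a receiver can enforce without retransmission simply by discarding anything older than the newest sequence number already delivered. The sketch below shows that rule with an assumed STDU layout; it is not the BLANCA protocol itself.

```python
# Minimal sketch of unreliable, in-sequence delivery of stream data units
# (STDUs): late or duplicate units are dropped rather than retransmitted.
# The STDU layout (seq, payload) is an assumption for illustration.

class InSequenceReceiver:
    def __init__(self):
        self.next_expected = 0

    def deliver(self, stdu):
        seq = stdu["seq"]
        if seq < self.next_expected:
            return None                  # late or duplicate: drop (unreliable)
        self.next_expected = seq + 1     # gaps count as loss, never retransmitted
        return stdu["payload"]

rx = InSequenceReceiver()
for unit in [{"seq": 0, "payload": "a"},
             {"seq": 2, "payload": "c"},   # seq 1 was lost in the network
             {"seq": 1, "payload": "b"}]:  # arrives late -> dropped
    print(rx.deliver(unit))
```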
Reordering Packet Based Data in Real-Time Data Acquisition Systems
2007
Ubiquitous Internet Protocol (IP) hardware has reached performance and capability levels that allow its use in data collection and real-time processing applications. Recent development experience with IP-based airborne data acquisition systems has shown that the open, pre-existing IP tools, standards, and capabilities support this form of distribution and sharing of data quite nicely, especially when combined with IP multicast. Unfortunately, the packet-based nature of our approach also posed some problems that required special handling to achieve performance requirements. We have developed methods and algorithms for the filtering, selection, and retiming problems associated with packet-based systems and present our approach in this paper.
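The paper's filtering, selection, and retiming algorithms are not given in the abstract; a common building block for the retiming problem it names is a bounded reorder buffer that holds packets for a short time and releases them in timestamp order. The sketch below shows that idea; the packet format and hold time are illustrative assumptions.

```python
# Sketch of a bounded reorder buffer for packet-based acquisition data:
# packets are held for at most HOLD seconds, then released in timestamp order.
# The packet format (t, data) and the hold time are illustrative assumptions.
import heapq
import itertools

HOLD = 0.050  # seconds a packet may wait for earlier ones to arrive

class ReorderBuffer:
    def __init__(self):
        self.heap = []
        self._tie = itertools.count()   # breaks ties between equal timestamps

    def push(self, pkt):
        heapq.heappush(self.heap, (pkt["t"], next(self._tie), pkt))

    def pop_ready(self, now):
        """Emit packets whose timestamp is older than now - HOLD,
        i.e. packets we no longer expect to be overtaken."""
        out = []
        while self.heap and self.heap[0][0] <= now - HOLD:
            out.append(heapq.heappop(self.heap)[2])
        return out
```

The hold time trades latency for tolerance of reordering: the longer a packet may wait, the later a straggler can still be slotted into its correct position.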
Mind the cost of telemetry data analysis
Proceedings of the SIGCOMM '22 Poster and Demo Sessions
Data Stream Processing engines are emerging as a promising solution to efficiently process a continuous amount of telemetry information. In this poster, we compare four of them: Storm, Flink, Spark and WindFlow. The aim is to shed some light on the best streaming engine for network traffic analysis.
Network Telemetry Link Throughput Maximization Approaches
2009
The use of Ethernet and Internet Protocol (IP) networking technologies in flight test instrumentation and telemetry systems is rapidly increasing, driven by the ubiquity, scalability, and flexibility of networking technologies. Networks first made a positive impact in ground station infrastructure and have recently been emerging in test article data acquisition infrastructure in programs such as the A380, 787, P-8A, and Future Combat Systems. The next logical step is to provide a two-way network telemetry link to fully extend the flexibility of the network between the test articles and ground station. The United States Department of Defense (DoD) integrated Network-Enhanced Telemetry (iNET) program is currently working to build a standardized network telemetry link for exactly this purpose. When developing a network telemetry link, the limited availability of telemetry spectrum must be considered and thus it is critical to choose system-level approaches to maximize the throughput ac...
Improved transport service for remote sensing and control over wireless networks
IEEE International Conference on Mobile Adhoc and Sensor Systems Conference, 2005., 2005
In a bilateral teleoperated system, the signal transmissions between the operator and the slave manipulators have different QoS requirements in comparison to traditional network traffic. Running teleoperated systems over wireless networks poses more challenges in comparison to wired networks. The media streams involved differentiate themselves from other media types in that they require both reliable and smooth delivery. Reliable delivery requires the transport service to have TCP-style semantics. To be smooth, the transport service should be able to deliver the control and sensing data with bounded and reduced latency and latency variation. For example, we have conducted numerous teleoperated experiments using our system. We have found in some of our applications that if the end-to-end latency variance becomes larger than 0.3 second, the operator has difficulty maintaining smooth control of the slave manipulator. However, our simulations show that using TCP, the end-to-end latency variance can be as much as 2.5 seconds in an ad hoc wireless network. This paper proposes an improved Transport service for Remote Sensing and Control (TRSC). The service reduces the end-to-end latency and latency variance (jitter) for real-time reliable media in mobile ad hoc networks by using forward error correction encoding and multiple network paths. Simulations using NS2 show that the approach performs well under different wireless scenarios.
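The abstract names the two mechanisms, forward error correction and multiple network paths, but not their exact form. Purely as an illustration, the sketch below shows the simplest variant: one XOR parity packet per block of data packets, with the resulting packets spread round-robin over the available routes so a single loss on any one path can be repaired without a retransmission round trip. The block size, path count, and packet contents are assumptions, not the TRSC design.

```python
# Illustrative sketch: XOR-parity FEC over blocks of K packets, with the
# resulting packets spread round-robin across several network paths.
# Block size, path count, and packet contents are assumptions.
from functools import reduce

K = 4  # data packets per FEC block

def xor_parity(block):
    """One parity packet lets the receiver rebuild any single lost packet."""
    size = max(len(p) for p in block)
    padded = [p.ljust(size, b"\x00") for p in block]
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), padded)

def send_over_paths(packets, paths):
    """Round-robin the data + parity packets over the available paths."""
    for i, pkt in enumerate(packets):
        paths[i % len(paths)].append(pkt)

block = [b"cmd1", b"cmd2", b"cmd3", b"cmd4"]
paths = [[], []]                       # two ad hoc routes, modeled as lists
send_over_paths(block + [xor_parity(block)], paths)
```

Avoiding retransmission is what bounds the latency variation: recovery happens from redundancy already in flight instead of waiting a round trip for a resend.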
Efficient and Selective Upload of Data from Connected Vehicles
Proceedings of the 6th International Conference on Vehicle Technology and Intelligent Transport Systems, 2020
Vehicles are evolving into a connected sensing platform, generating enormous amounts of data about themselves and their surroundings. In this work, we focus on efficient data collection for connected vehicles, exploiting the fact that the context data of cars on the same road is often redundant. This is, for instance, relevant for applications which need roadside data for map updating. We propose a vehicular data dissemination architecture with a central coordination scheme to avoid redundant uploads. It also uses roadside WiFi hotspots opportunistically. To evaluate the benefits, we use the SUMO simulator to benchmark our results against a baseline solution, showing improvements by a factor of 10 to 20.
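The coordination scheme itself is not detailed in the abstract; as one plausible illustration only, the sketch below shows a central coordinator that grants an upload only when no sufficiently recent observation of the same road segment exists. The class, segment identifiers, and freshness window are assumptions, not taken from the paper.

```python
# Hypothetical sketch of a central coordinator suppressing redundant uploads:
# a vehicle may upload data for a road segment only if no other vehicle has
# covered that segment within the freshness window. Names and the window
# length are assumptions.
import time

FRESH_FOR = 300  # seconds a segment observation is considered up to date

class UploadCoordinator:
    def __init__(self):
        self.last_seen = {}   # segment id -> timestamp of newest upload

    def request_upload(self, segment_id, now=None):
        now = time.time() if now is None else now
        if now - self.last_seen.get(segment_id, 0.0) < FRESH_FOR:
            return False                  # redundant: someone just covered it
        self.last_seen[segment_id] = now  # grant and record the new coverage
        return True

coord = UploadCoordinator()
print(coord.request_upload("A8-km12", now=1000.0))  # True: first observation
print(coord.request_upload("A8-km12", now=1100.0))  # False: still fresh
```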
Stream processing optimizations for mobile sensing applications
Figure 5.6: Simulation of three domains. Domains 1 and 2 use the CPU and have delays of 10 and 20, respectively; Domain 3 uses the network. Domains 1-2 and Domain 3 execute in parallel since they use different hardware resources, and Domains 1 and 2 share the CPU fairly. Figure 5.7: The energy-delay trade-off for SI and AR when using static sensing. Batching significantly improves energy efficiency; combining batching with scheduled concurrency provides no additional benefit.
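The captions credit batching as the main energy saver; a toy model of why is sketched below, assuming each sensor wake-up carries a fixed energy overhead that larger batches amortize at the price of added buffering delay. All constants are made-up illustrations, not measurements from the thesis.

```python
# Toy model of the energy-delay trade-off from batching sensor samples:
# each wake-up has a fixed overhead, so larger batches cost less energy per
# sample but add buffering delay. All constants are illustrative assumptions.
WAKE_COST = 5.0      # energy units per wake-up of CPU/radio
PER_SAMPLE = 0.1     # energy units to process one sample
PERIOD = 0.02        # seconds between sensor samples

def energy_per_sample(batch_size):
    return WAKE_COST / batch_size + PER_SAMPLE

def worst_case_delay(batch_size):
    return batch_size * PERIOD          # oldest sample waits a full batch

for n in (1, 10, 100):
    print(n, round(energy_per_sample(n), 3), round(worst_case_delay(n), 3))
```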