NETCONF Interoperability Testing

A Formal Validation Model for the Netconf Protocol

Utility Computing, 2004

Netconf is a protocol proposed by the IETF that defines a set of operations for network configuration. One of the main open issues for Netconf is the definition of operations such as validate and commit, which currently lack a clear description and an information model. In this paper we propose a model for validation based on XML schema trees. Using an existing logical formalism called TQL, we express important dependencies between the parameters that appear in those information models, and automatically check these dependencies on sample XML trees in reasonable time. We illustrate our claim by presenting several rules and an example of validation on a Virtual Private Network.
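
For context, the validate and commit operations the paper formalizes are exposed by NETCONF servers as RPCs. Below is a minimal sketch of driving them from a client using the ncclient Python library; the library choice, host address, and credentials are illustrative assumptions and not part of the paper, which models validation formally rather than prescribing a client.

```python
# Minimal sketch of invoking NETCONF <validate> and <commit> from a client,
# using the ncclient library. Host, port, and credentials are placeholders.
from ncclient import manager

with manager.connect(
    host="192.0.2.1",          # placeholder device address
    port=830,                  # default NETCONF-over-SSH port
    username="admin",
    password="admin",
    hostkey_verify=False,
) as m:
    # Ask the device to validate the candidate datastore; servers that
    # implement the :validate capability check syntax and dependencies here.
    m.validate(source="candidate")
    # Commit the candidate configuration only after validation succeeds.
    m.commit()
```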

Testing and verification of network management and design tools

Bell Labs Technical Journal, 2003

Testing large-scale software applications has been an area of activity for a long time. Today, practices such as design patterns promote good habits that aim to ease the burden of testing. In practice, however, testing is often done in an ad hoc manner. Large-scale software systems like network management systems are tested against very specific system requirements, which tend to be inadequate. Such large-scale systems, comprising concurrent multi-threaded processes interacting with each other, often need to be tested for long periods of time (sometimes longer than the time taken to develop the system) before they are suitable for deployment on a live network. Despite such stringent system requirements, testing often takes a back seat, and sophisticated software tools, though available, are rarely employed in system testing. Here we describe an engineering approach to the testing and verification of network management and design tools. We describe a set of handy principles, algorithms, and testing architectures that have worked very well in practice and have achieved remarkable results. The paper draws heavily on our experience; we have developed several products for Lucent Technologies' Optical Networks organization over the last three years.

Simplifying network testing: techniques and approaches towards automating and simplifying the testing process

2009

The dramatic increase in the number of companies and consumers that depend heavily on networks mandates the creation of reliable network devices. Such reliability can be achieved by testing both the conformance of an implementation's individual protocols to their corresponding specifications and the interaction between different protocols. Given the increase in computing power and the advances in network testing research, one would expect efficient approaches for testing network implementations to be available. However, such approaches are not available, for reasons such as the complexity of network protocols, the need for different protocols to interoperate, the limited information available about implementations because of proprietary code, and the potentially unbounded size of the network to be tested. To address these issues, a novel technique is proposed that improves the quality of the test while reducing the time and effort network testing requires. The proposed approach achieves these goals by ...

Automating Network System Configurations for Vendor-Specific Network Elements

2017

Telecommunications stakeholders have not yet fully automated hardware configuration. Network configuration and reconfiguration is a repetitive, time-consuming, and error-prone process. To address this problem, this bachelor thesis sheds light on the benefits of an automated configuration and topology verification process. To this end, a proof-of-concept system, Enna, has been developed in a case study together with an ISP stakeholder. Enna reads the current network state, applies predetermined configurations loaded from text files, and automatically verifies the network state. The goals of this thesis are as follows: to develop Enna and illustrate the simplicity of the implementation, to compare automated network reconfiguration with a fully manual one, and finally to discuss potential benefits and problems of switching to automated network configuration. Since this thesis is carried out in collaboration with an ISP working with Cisco IOS XR devices, Enna...
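
The abstract does not include Enna's code; the following is a minimal sketch of the read-apply-verify cycle it describes, written with the netmiko Python library against a Cisco IOS XR device. The device details, file name, and verification check are assumptions for illustration, not Enna's actual implementation.

```python
# Minimal sketch of a read -> apply -> verify cycle like the one Enna
# performs, using the netmiko library. All device details are placeholders.
from netmiko import ConnectHandler

device = {
    "device_type": "cisco_xr",
    "host": "192.0.2.10",      # placeholder router address
    "username": "admin",
    "password": "admin",
}

with ConnectHandler(**device) as conn:
    before = conn.send_command("show running-config")    # read current state
    conn.send_config_from_file("intended_config.txt")    # apply predetermined config
    conn.commit()                                        # IOS XR two-stage commit
    after = conn.send_command("show running-config")

# Verify: a real tool would parse structured state; this sketch just checks
# that the config changed and that an (assumed) intended marker line is present.
print("changed:", before != after)
print("verified:", "router bgp 65000" in after)          # hypothetical intended line
```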

Network integration testing: concepts, test specifications and tools for automatic telecommunication services verification

Computer Networks, 2000

The purpose of this tutorial is to provide concepts and historical background of the "network integration testing" (NIT) methodology. NIT is a "grey box" testing technique aimed at verifying the correct behaviour of interconnected networks (operated by different operators) in provisioning services to end users, or the behaviour of a complex network operated by a single operator. The main technical concepts behind this technique are presented along with the history of some international projects that have contributed to its early definition and application. The European Institute for Research and Strategic Studies in Telecommunication (EURESCOM) has been very active, with many projects, in defining the basic NIT methodology and providing actual NIT specifications (for narrow-band and broad-band services, covering both voice and data). EURESCOM has also acted as a focal point in the area, e.g., encouraging industry to develop commercial tools supporting NIT. In particular, the EURESCOM P412 project (1994-1996) first explicitly defined the NIT methodology (the methodological aspects include test notation, test implementation, test processes, distributed testing and related coordination aspects). P412 applied the methodology to ISDN, whilst another project, P410, applied NIT to data services. The P613 project (1997-1999) extended the basic NIT methodology to broadband and GSM. In more detail, the areas currently covered by NIT test specifications developed by EURESCOM projects include N-ISDN, N-ISUP, POTS, B-ISDN, B-ISUP, IP over ATM, ATM/FR, and GSM, focusing also on their "inter-working" cases (e.g., ISDN/ISDN, ISDN/GSM, etc.). ETSI, the European Telecommunication Standards Institute, also contributed to NIT development (e.g., the definition of the TSP1+ protocol, used for the functional coordination and timing synchronisation of all tools involved in a distributed testing session). The paper also discusses NIT in relation to recent major changes (processes) within the telecommunication (TLC) community. Beyond the new needs arising from purely technical aspects (integration of voice and data, fixed-mobile convergence, etc.), the full deregulation of the TLC sector has already generated new processes and new testing needs (e.g., interconnection testing) that have had a significant influence on the methodology. NIT is likely to continue to develop in the future according to the needs of telecom operators, authorities, users' associations and suppliers.

Networking Software Studies with the Structured Testing Methodology

Computer Science and Information Systems, 2005

The results of systematic software analyses with McCabe's and Halstead's metrics are presented for designing and testing three networking systems: the Carrier Internetworking switched routing solution, which allows managing Internet-based virtual private networks over a multiservice asynchronous transfer mode infrastructure; the Carrier Networks Support system, which provides both the services of conventional Layer-2 switches and the routing and control services of Layer-3 devices; and a system for providing different networking services (IP-VPNs, Firewalls, Network Address Translations, IP Quality-of-Service, and Web steering). The graph-based metrics (cyclomatic complexity, essential complexity, module design complexity, system design complexity, and system integration complexity) have been applied to study the decision-structure complexity of code modules, code quality (unstructured logic), the amount of interaction between modules, and the estimated number of integration tests necessary to guard against errors. Nine protocol-based areas of the code (2,447 modules written in 149,094 lines of C code) have been analyzed for the BGP, Frame Relay, IGMP, IP, ISIS, OSPF, PPP, RIP, and SNMP networking protocols. It has been found that 511 modules (19.4% of the protocol-based code) are both unreliable and unmaintainable, including 27% of the BGP, IP, and OSPF code modules. Only the Frame Relay part of the code is well designed and programmed, with few possible errors. The number of unreliable code modules (29%) correlates well with the number of customer requests, error-fixing submits, and the number of possible errors (1,473) estimated with Halstead's metrics. Following McCabe's approach of structured testing, 14,401 unit tests and 11,963 module integration tests have been developed to cover the protocol-based code areas. Comparing different code releases, it is shown that reducing code complexity leads to a significant reduction in errors and maintenance effort. Test and code coverage issues for embedded networking systems are also discussed.
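
As a reminder of the core metric behind structured testing, McCabe's cyclomatic complexity of a control-flow graph is V(G) = E - N + 2P (edges minus nodes plus twice the number of connected components), and it bounds the number of basis-path tests a module needs. A small self-contained sketch, with an invented example graph:

```python
# Self-contained sketch of McCabe's cyclomatic complexity V(G) = E - N + 2P.
def cyclomatic_complexity(cfg: dict[str, list[str]], components: int = 1) -> int:
    nodes = set(cfg) | {dst for dsts in cfg.values() for dst in dsts}
    edges = sum(len(dsts) for dsts in cfg.values())
    return edges - len(nodes) + 2 * components

# Control-flow graph of a module with one if/else and one loop (illustrative).
cfg = {
    "entry": ["cond"],
    "cond": ["then", "else"],     # branch: one decision point
    "then": ["loop"],
    "else": ["loop"],
    "loop": ["cond2"],
    "cond2": ["loop", "exit"],    # loop back-edge: a second decision point
}
print(cyclomatic_complexity(cfg))  # 3 -> at least 3 basis-path tests
```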

NETCDL: The Network Certification Description Language

International Journal of Computational Science, Information Technology and Control Engineering (IJCSITCE), 2018

Modern IP networks are complex entities that require constant maintenance and care. Similarly, constructing a new network involves substantial upfront cost, planning, and risk. Unlike their counterparts in software and hardware engineering, networking and IT professionals lack an expressive and useful certification language they can use to verify that their work is correct. When installing and maintaining networks without a standard for describing their behavior, teams are prone to making configuration mistakes. These mistakes carry real monetary and operational-efficiency costs for organizations that maintain large networks.

Microsoft's protocol documentation program: interoperability testing at scale

Communications of The ACM, 2011

This case study discusses interoperability testing at scale. In 2002, Microsoft began the difficult process of verifying much of the technical documentation for its Windows communication protocols. The undertaking came about as a consequence of a consent decree Microsoft entered into with the U.S. Department of Justice and several state attorneys general that called for the company to make certain client-server communication protocols available to third-party licensees. A series of RFC-like technical documents were then written for the relevant Windows client-server and server-server communication protocols, but to ensure interoperability Microsoft needed to verify the accuracy and completeness of those documents.

From the start, it was clear this wouldn't be a typical QA (quality assurance) project. First and foremost, a team would be required to test documentation, not software, which is an inversion of the normal QA process; and the documentation in question was extensive, consisting of more than 250 documents (30,000 pages in all). In addition, the compliance deadlines were tight. To succeed, the Microsoft team would have to find an efficient testing methodology, identify the appropriate technology, and train an army of testers, all within a very short period of time. This case study considers how the team arrived at an approach to that enormous testing challenge. More specifically, it focuses on one of the testing methodologies used, model-based testing, and the primary challenges that have emerged in adopting that approach for a very large-scale project. Two lead engineers from the Microsoft team and an engineer who played a role in reviewing the Microsoft effort tell the story.

Now with Google, Wolfgang Grieskamp at the time of this project was part of Microsoft's Windows Server and Cloud Interoperability Group (Winterop), the group charged with testing Microsoft's protocol documentation and, more generally, with ensuring that Microsoft's platforms are interoperable with software from the world beyond Microsoft. Previously, Grieskamp was a researcher at Microsoft Research, where he was involved in efforts to develop model-based testing capabilities. Nico Kicillof, who worked with Grieskamp at Microsoft Research to develop a model-based testing tool called Spec Explorer, continues to guide testing efforts as part of the Winterop group. Bob Binder is an expert on matters related to the testing of communication protocols. He too has been involved with the Microsoft testing project, having served as a test methodology consultant who also reviewed work performed by teams of testers in China and India. For this case study, Binder spoke with Kicillof and Grieskamp regarding some of the key challenges they've faced over the course of their large-scale testing effort.

BOB BINDER: When you first got involved with the Winterop Team [the group responsible for driving the creation, publication, and QA of the Windows communication protocols], what were some of the key challenges?
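
For readers unfamiliar with the methodology at the center of this case study, model-based testing encodes the documented protocol behavior as a state machine and derives test sequences by exploring it. The toy sketch below illustrates only the core idea; Spec Explorer, the tool named above, works with far richer symbolic models, and the tiny protocol here is invented for illustration.

```python
# Minimal illustration of model-based testing: documented protocol behavior
# is encoded as a finite state machine, and test sequences are derived by
# exploring it. The protocol itself is an invented toy example.
from itertools import product

# Transition model of a tiny connect/auth/disconnect protocol.
MODEL = {
    ("closed", "connect"): "connected",
    ("connected", "auth"): "ready",
    ("connected", "disconnect"): "closed",
    ("ready", "disconnect"): "closed",
}
ACTIONS = ["connect", "auth", "disconnect"]

def valid_sequences(length):
    """Yield action sequences the model accepts, starting from 'closed'."""
    for seq in product(ACTIONS, repeat=length):
        state = "closed"
        for action in seq:
            state = MODEL.get((state, action))
            if state is None:
                break
        else:
            yield seq  # every step was legal: this sequence is a test case

for case in valid_sequences(3):
    print(" -> ".join(case))
```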

Interoperability Testing System of TCP/IP Based Systems in Operational Environment

Proceedings of the Ifip Tc6 Wg6 1 13th International Conference on Testing Communicating Systems Tools and Techniques, 2000

TCP/IP protocols are now in widespread use. Although these protocols are implemented inside operating systems and users of communication systems pay no attention to their details, system errors are nevertheless reported. Since these errors occur only in specific communication situations, interoperability testing in which a testing system passively observes communication is an appropriate way to detect them. This interoperability testing has the following requirements. Firstly, since the system containing the errors is not known in advance, the testing system needs to check all communication systems attached to a network. Secondly, for a similar reason, all protocols in the TCP/IP protocol stack need to be checked. Thirdly, the testing system needs to discriminate operational failures, such as a server being down or misconfigured parameters in clients, from system errors when it detects a problem in the communication. Based on these considerations, we have designed an interoperability testing system for TCP/IP-based communication systems applied in an operational environment. This paper describes the detailed design of our testing system and the testing algorithm for DHCP (Dynamic Host Configuration Protocol), for which some system errors are reported.
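
The paper's DHCP testing algorithm is not reproduced in this abstract, but its key discrimination step can be illustrated: given an observed message exchange, decide whether a problem looks like an operational failure (e.g., the server never responds) or a genuine system error (e.g., a reply out of protocol order). A minimal sketch follows, with packet capture omitted and message types abstracted to strings, as an assumption about how such a check might look:

```python
# Minimal sketch of passively checking an observed DHCP exchange against the
# expected DISCOVER -> OFFER -> REQUEST -> ACK sequence, distinguishing an
# operational failure (server never replies) from a system error (reply out
# of protocol order). Capture and parsing are omitted; this is illustrative.
EXPECTED = ["DISCOVER", "OFFER", "REQUEST", "ACK"]
SERVER_MSGS = {"OFFER", "ACK", "NAK"}   # messages only a server sends

def diagnose(observed):
    if "DISCOVER" in observed and not any(m in SERVER_MSGS for m in observed):
        # The client asked but the server never answered: likely a server
        # outage or misconfiguration (operational failure), not a code bug.
        return "operational failure: no server response"
    for seen, expected in zip(observed, EXPECTED):
        if seen != expected:
            # A reply arrived, but out of protocol order: a system error.
            return f"system error: got {seen}, expected {expected}"
    return "exchange OK" if observed == EXPECTED else "incomplete exchange"

print(diagnose(["DISCOVER"]))                             # operational failure
print(diagnose(["DISCOVER", "ACK"]))                      # system error
print(diagnose(["DISCOVER", "OFFER", "REQUEST", "ACK"]))  # exchange OK
```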