Common functional testing types explained, with examples

Functional testing is the largest piece of the software testing puzzle. The QA process is essential, and it should result in a higher-quality product that customers feel good about.

Functional testing's goal is to confirm that software performs as described in requirements or acceptance criteria. This type of testing also validates that the software meets customer expectations.

While functional testing is the most frequently executed type of software quality assurance, it comprises several distinct parts, each with its own focal point. Each type of functional test aims to determine whether an application meets user needs -- on the customer-facing front end and in the back-end processing engines.

Let's discuss functional testing types, their objectives and their order of occurrence. We'll also use functional testing examples to guide you through the process, all through the lens of a sample application. This article covers, and includes test cases for, feature, unit and user story tests; integration tests; interface tests; smoke tests; regression tests; and user acceptance tests.

The fictional example application is called Return Me to Work, and it includes a web portal and a mobile app. Human resources personnel use the web portal application to manage employees who are returning to work after an extended leave due to a contagious illness, and to track their exposure to other employees. HR can use the web portal to view each employee's daily updates and monitor any issues, as well as assist employees with medical, financial and other care needs. Employees use the mobile app version, through which they can fill out a daily survey form to check in and upload any necessary medical documents. The survey responses automatically upload to the web portal for HR use. Employees also use the app to keep their address and contact information up to date.

Feature, unit and user story tests

The majority of functional testing occurs when the QA tester evaluates a new application feature. Three primary types of functional tests evaluate application functionality in this manner: unit tests, feature tests and user story tests.

These three types of functional tests all verify the app's functional adherence to requirements or acceptance criteria. QA engineers check each individual piece of a system under test as a single story or a single feature; they don't worry about how one component interacts with others or the whole.

Here's a functional test example for the Return Me to Work app. The feature under development might be uploading exam or lab results when the employee submits the daily survey via the mobile app.

This functional testing example goes through the process of uploading a PDF document while filling out the daily health survey. If both actions succeed and no errors appear, the test passes.
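As a rough sketch of how that check could be automated, the pytest example below submits a survey and then attaches a PDF through a hypothetical REST API. The base URL, endpoint paths, field names and the auth_token fixture are all illustrative assumptions, not the real application's interface.

import io

import pytest
import requests

BASE_URL = "https://returnmetowork.example/api"  # hypothetical test environment

@pytest.fixture
def auth_token():
    # Placeholder: a real suite would log in as a test employee here.
    return "test-employee-token"

def test_daily_survey_with_pdf_upload(auth_token):
    headers = {"Authorization": f"Bearer {auth_token}"}

    # Step 1: fill out and submit the daily health survey (fields are illustrative).
    survey = {"employee_id": "E1001", "temperature_f": 98.6, "symptoms": []}
    submit = requests.post(f"{BASE_URL}/surveys/daily", json=survey, headers=headers)
    assert submit.status_code == 201, "Survey submission should succeed"
    survey_id = submit.json()["id"]

    # Step 2: upload a PDF lab result attached to that survey.
    pdf = io.BytesIO(b"%PDF-1.4 fake lab result used only for testing")
    files = {"document": ("lab_result.pdf", pdf, "application/pdf")}
    upload = requests.post(f"{BASE_URL}/surveys/{survey_id}/documents",
                           files=files, headers=headers)
    assert upload.status_code == 201, "PDF upload should succeed with no errors"

If either assertion fails, the feature does not meet its acceptance criteria and the test reports a failure.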

Integration tests

This testing technique is a way to assess how one feature interacts with others within the application workflow. With integration testing, the QA engineer checks that the feature affects other components as expected.

Integration points within an application vary, and they can be complex. It's not good enough for app functionality to work in isolation. Applications with complex functionality, or complex integration scenarios, can make these tests difficult to execute. For example, many healthcare-related applications have both healthcare and financial functionality. An application might send a prescription to a patient and generate a billing charge by connecting a diagnosis with a financial application code. All of these individual components must synchronize and pass data between each other for the application to work.

For Return Me to Work, an integration test case might include scheduling an appointment, following up afterward and sending a medical bill to the patient's insurer. In this test scenario, an employee has tested positive for a virus and has seen a doctor for a follow-up consultation or exam. Now the employee needs medication, and the app must bill the employee's health insurance for both the exam and the prescription.

Integration testing is complex. No single step in this test case determines whether the test passes. Instead, this testing technique ensures that each action, such as the employee uploading the medical update, triggers the application to perform the appropriate follow-on steps. As a tester, I not only check this workflow, but also the functional aspects of each interaction along the way.

This scenario includes testing connected APIs that send data outbound, as well as financial statements and calculations. This testing effort is integral to delivering a quality, fully functional software system.
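Sketched as an automated check, that scenario chains the steps and asserts that each one triggers the expected follow-on behavior. The endpoints, field names and billing details below are assumptions used only to illustrate the shape of an integration test.

import requests

BASE_URL = "https://returnmetowork.example/api"  # hypothetical test environment
HEADERS = {"Authorization": "Bearer test-employee-token"}  # placeholder credentials

def test_positive_result_triggers_follow_up_and_billing():
    # Step 1: the employee uploads a positive test result through the mobile app.
    result = requests.post(f"{BASE_URL}/employees/E1001/results",
                           json={"test": "viral_panel", "outcome": "positive"},
                           headers=HEADERS)
    assert result.status_code == 201

    # Step 2: the application should schedule a follow-up exam automatically.
    appointments = requests.get(f"{BASE_URL}/employees/E1001/appointments",
                                headers=HEADERS).json()
    assert any(a["reason"] == "follow_up" for a in appointments)

    # Step 3: after the exam, a prescription order and an insurance claim should
    # exist, with the diagnosis mapped to a billing code.
    claims = requests.get(f"{BASE_URL}/employees/E1001/claims", headers=HEADERS).json()
    assert claims, "The exam and prescription should generate an insurance claim"
    assert all(claim.get("billing_code") for claim in claims)

The test passes only when every downstream component reacts correctly to the initial action, which is the essence of integration testing.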

Interface tests

With interface testing, QA engineers evaluate APIs or other back-end data exchange connections. Some experts equate interface testing with integration testing, but it is distinct: interface testing focuses on the connections themselves and the data that passes through them. Modern web and mobile applications use APIs to exchange inbound and outbound data, and numerous open source and proprietary tools test REST and SOAP API functions.

For interface testing, don't simply verify that API endpoints are functional; also test whether they can receive and send data securely. Confirm that API security is configured to allow connections only from authorized vendors or partners.

Interface testing should also examine the data that flows through the connection. Confirm that the correct data structure is received and that the data is valid. Loop in the development and security teams to help set up meaningful interface tests.
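One way to check the structure of the exchanged data is to validate responses against a schema, as in the sketch below. It uses the third-party jsonschema package, and the endpoint and required fields are assumptions standing in for whatever contract the teams agree on.

import requests
from jsonschema import validate  # third-party package: pip install jsonschema

# Assumed contract for an outbound prescription order.
PRESCRIPTION_SCHEMA = {
    "type": "object",
    "required": ["patient_name", "address", "provider", "medication"],
    "properties": {
        "patient_name": {"type": "string"},
        "address": {"type": "string"},
        "provider": {"type": "string"},
        "medication": {"type": "string"},
    },
}

def test_prescription_payload_matches_contract():
    response = requests.get("https://returnmetowork.example/api/prescriptions/latest",
                            headers={"Authorization": "Bearer test-hr-token"})
    assert response.status_code == 200
    # Confirm the data structure is correct and valid, not just that the endpoint answers.
    validate(instance=response.json(), schema=PRESCRIPTION_SCHEMA)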

As an interface testing example for Return Me to Work, look at API endpoint security in the feature that sends prescription orders to the pharmacy.

The tester should view the API requests and responses to verify the security token was passed when the app connected to the pharmacy. Confirm that the prescription information -- including patient name, address and medical provider -- is accurate throughout the exchange.

To create a good negative test scenario, introduce a fake pharmacy that doesn't have secure credentials. Let that pharmacy try to connect to the API to receive prescription data. The application should not send information. See what happens. Does the system send an alert to the IT group? Does it lock down the entire application? Or, worse, does it allow the connection?
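Both the positive check of the security token and the negative fake-pharmacy scenario could look something like this sketch, in which the endpoint, tokens and expected status codes are assumptions for illustration.

import requests

PHARMACY_ENDPOINT = "https://returnmetowork.example/api/pharmacy/prescriptions"

def test_authorized_pharmacy_receives_prescriptions():
    # Positive case: the authorized partner's security token is accepted.
    ok = requests.get(PHARMACY_ENDPOINT,
                      headers={"Authorization": "Bearer valid-partner-token"})
    assert ok.status_code == 200
    assert ok.json(), "Prescription data should be returned to the authorized pharmacy"

def test_fake_pharmacy_is_rejected():
    # Negative case: a fake pharmacy without valid credentials must be refused,
    # and no prescription data should leak into the response.
    denied = requests.get(PHARMACY_ENDPOINT,
                          headers={"Authorization": "Bearer forged-token"})
    assert denied.status_code in (401, 403)
    assert "patient_name" not in denied.text

A rejected connection that also raises an alert to the IT group is the ideal outcome; an accepted connection is a critical security defect.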

Smoke tests

Smoke testing, also called sanity testing, aims to ensure the application is in a proper state before more formal testing begins. Through smoke tests, the QA engineer checks whether the application functions as expected. Smoke tests are useful when a small or emergency code release occurs. For example, suppose a customer reports a critical error that developers fix as quickly as possible. QA must confirm the fix did not break any other existing functionality -- in a short time frame. When critical defects lead to unplanned development work, smoke or sanity testing makes sure the application functionality is not adversely affected.

QA engineers often use automated smoke test scripts, but they can also be short, manual test suites. Most smoke test suites have an execution time between one and two hours.

Here's a smoke testing example for Return Me to Work.

The tester must determine which functions are critical to the customer when devising a smoke test. In our application, the HR personnel must monitor employees' medical evaluations, schedule appointments and ensure billing statements or pharmacy orders get processed in a timely manner. The HR role also ensures the employee can return to an active work status.
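A smoke suite for those critical paths could be a handful of fast checks like the sketch below. The endpoints, payloads and HR credentials are hypothetical; a real suite would target whatever the web portal actually exposes.

import requests

BASE_URL = "https://returnmetowork.example/api"  # hypothetical test environment
HR_HEADERS = {"Authorization": "Bearer test-hr-token"}  # placeholder HR credentials

def test_hr_can_view_employee_medical_updates():
    response = requests.get(f"{BASE_URL}/employees/E1001/updates", headers=HR_HEADERS)
    assert response.status_code == 200

def test_hr_can_schedule_an_appointment():
    response = requests.post(f"{BASE_URL}/appointments",
                             json={"employee_id": "E1001", "type": "medical_evaluation"},
                             headers=HR_HEADERS)
    assert response.status_code == 201

def test_billing_and_pharmacy_orders_are_processing():
    response = requests.get(f"{BASE_URL}/billing/pending", headers=HR_HEADERS)
    assert response.status_code == 200

def test_employee_can_be_returned_to_active_status():
    response = requests.post(f"{BASE_URL}/employees/E1001/status",
                             json={"status": "active"}, headers=HR_HEADERS)
    assert response.status_code in (200, 201)

Because the goal is a quick health check rather than deep coverage, each test touches one critical function and nothing more, which keeps the whole suite well inside the one-to-two-hour window.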

Regression tests

Regression testing occurs just prior to a planned release. QA engineers rerun tests to make sure the changes haven't adversely affected the application.

Regression testing varies in execution time depending on the development methodology; it can be as short as one day or as long as two weeks. The regression test suite includes all of the functional test scenarios, including integration, interface and smoke tests.

The time allotted for execution and test suite size determine how much regression testing a team can accomplish. As the application ages, more and more functionality tests accumulate. There comes a point where the test execution is too much for the QA resources and time allowed. When that happens, it's critical to plan your regression tests. Base a regression test plan on which functional areas experienced changes within the release, or which are the most critical to the customer base. One method is to create multiple suites of regression tests that cover all the functionality, and rotate their execution. For example, if you split the regression test suites into a few sets, execute one set per release.
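One simple way to implement that rotation, assuming the regression tests are already grouped into named suites, is to pick the suite from the release number, as in this sketch:

# Rotate regression suites so the full set of functionality is covered over a cycle
# of releases. The suite names are placeholders for however your tests are grouped.
REGRESSION_SUITES = ["core_workflows", "billing_and_claims", "integrations_and_apis"]

def suite_for_release(release_number: int) -> str:
    """Return the regression suite to execute for a given release."""
    return REGRESSION_SUITES[release_number % len(REGRESSION_SUITES)]

# Example: three consecutive releases each exercise a different suite,
# so all functionality gets regression tested every three releases.
for release in (12, 13, 14):
    print(release, suite_for_release(release))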

In theory, you want to test everything, all the time, but that's not realistic for many application development teams. If you can't execute all regression tests at once, then plan to continuously execute them, spreading out the task to ensure coverage.

User acceptance tests

User acceptance testing (UAT) is less about finding defects or failed requirements, and more about meeting customer expectations and improving user experience. For that reason, with UAT, the testers aren't QA professionals. Rather, this testing approach typically relies on the product team or representatives from the user base who validate that the software performs as they expected.

UAT frequently uncovers missing functional requirements or pieces of a full workflow. UAT reveals when requirements or acceptance criteria are poorly understood or communicated between the customer, the product team, development and QA. If the functional requirements fail to fully account for customer expectations, it's likely the final product will fall short for the user. For this reason, the customer and product team must fully understand the business workflows and objectives involved with the software project and ensure that the requirement specification, acceptance criteria and development stories are created accurately.

UAT often occurs at the end of a sprint, in the form of a demo. Sprint demos to customers and product teams help ensure no requirements were changed or missed during the software development lifecycle.

If possible, perform UAT either before or after regression testing. UAT can also occur during regression testing to save time, but that approach can mean defects get reported multiple times. If you execute regression and user acceptance tests at the same time, make sure QA and UAT testers collaborate to avoid creating duplicative work.

UAT suites typically assess real user scenarios. For the Return Me to Work app, a UAT scenario might have an HR representative walk through an employee's complete return-to-work process, from daily survey check-ins and document uploads to the final change back to active work status, to confirm the workflow matches how HR actually operates.

Functional testing covers a lot of ground, with each type designed to confirm an application performs as expected once it reaches the customer. There are even subsets of the functional testing types described above. For example, boundary value testing can fit into both feature testing and integration testing.

The beauty of functional testing lies in flexible test coverage. You can test endlessly or check something quickly. Both approaches can provide quality test coverage depending on the needs of the application and its customers. However you approach functional testing, make sure to assess the critical functions that keep the application secure and performing as expected, release after release.

More than functional tests

Non-functional testing is also a major component of software quality success. QA engineers perform non-functional testing via load, stress and other forms of performance testing. Non-functional tests also include compliance testing, security testing and, in some cases, accessibility testing.