Overview of Performance Testing Framework

The performance test interface leverages the script, function, and class-based unit testing interfaces. You can perform qualifications within your performance tests to ensure correct functional behavior while measuring code performance. You can also run your performance tests as standard regression tests to confirm that code changes do not break them.
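For example, a minimal class-based performance test (the class and test names here are hypothetical) both measures the code under test and verifies its result:

```matlab
% PreallocationTest.m -- hypothetical class-based performance test.
% Run it with runperf('PreallocationTest') to measure each Test method.
classdef PreallocationTest < matlab.unittest.TestCase
    methods (Test)
        function testOnes(testCase)
            x = ones(1, 1e6);                 % code whose execution is timed
            testCase.verifySize(x, [1 1e6]);  % qualification: functional check
        end
    end
end
```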

Determine Bounds of Measured Code

This table indicates what code is measured for the different types of tests.

| Type of Test | What Is Measured | What Is Excluded |
| --- | --- | --- |
| Script-based | Code in each section of the script | Code in the shared variables section; measured estimate of the framework overhead |
| Function-based | Code in each test function | Code in the setup, setupOnce, teardown, and teardownOnce functions; measured estimate of the framework overhead |
| Class-based | Code in each method tagged with the Test attribute | Code in methods with the TestMethodSetup, TestMethodTeardown, TestClassSetup, and TestClassTeardown attributes; shared fixture setup and teardown; measured estimate of the framework overhead |
| Class-based deriving from matlab.perftest.TestCase and using the startMeasuring and stopMeasuring methods | Code between calls to startMeasuring and stopMeasuring in each method tagged with the Test attribute | Code outside the startMeasuring/stopMeasuring boundary; measured estimate of the framework overhead |
| Class-based deriving from matlab.perftest.TestCase and using the keepMeasuring method | Code inside each keepMeasuring-while loop in each method tagged with the Test attribute | Code outside the keepMeasuring-while boundary; measured estimate of the framework overhead |
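For instance, in a script-based performance test (a minimal sketch; the file name scriptTest.m and its contents are illustrative), only the code inside each section is measured:

```matlab
% scriptTest.m -- hypothetical script-based performance test.
% Code here, in the shared variables section, is excluded from measurement.
n = 1e6;

%% Measure preallocating a vector
x = zeros(1, n);      % measured

%% Measure growing a vector
y = [];
for k = 1:100
    y(end+1) = k;     % measured, including the loop overhead
end
```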

Types of Time Experiments

You can create two types of time experiments: a frequentist time experiment, which collects a variable number of measurements to meet a statistical target, and a fixed time experiment, which collects a specified number of measurements.

This table summarizes the differences between the frequentist and fixed time experiments.

| | Frequentist time experiment | Fixed time experiment |
| --- | --- | --- |
| Warm-up measurements | 5 by default, but configurable through TimeExperiment.limitingSamplingError | 0 by default, but configurable through TimeExperiment.withFixedSampleSize |
| Number of samples | Between 4 and 256 by default, but configurable through TimeExperiment.limitingSamplingError | Defined during experiment construction |
| Relative margin of error | 5% by default, but configurable through TimeExperiment.limitingSamplingError | Not applicable |
| Confidence level | 95% by default, but configurable through TimeExperiment.limitingSamplingError | Not applicable |
| Framework behavior for invalid test result | Stops measuring a test and moves to the next one | Collects the specified number of samples |
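As a sketch of how these options map onto code (the suite name and parameter values are illustrative, reusing the hypothetical PreallocationTest class):

```matlab
% Sketch: constructing and running both experiment types against a suite.
import matlab.perftest.TimeExperiment

suite = testsuite('PreallocationTest');

% Frequentist experiment: override the defaults listed in the table above.
expFreq = TimeExperiment.limitingSamplingError('NumWarmups', 2, ...
    'RelativeMarginOfError', 0.03, 'ConfidenceLevel', 0.97);
freqResults = run(expFreq, suite);

% Fixed time experiment: collect exactly 10 samples, no warm-up by default.
expFixed = TimeExperiment.withFixedSampleSize(10);
fixedResults = run(expFixed, suite);
```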

Write Performance Tests with Measurement Boundaries

If your class-based tests derive from matlab.perftest.TestCase instead of matlab.unittest.TestCase, then you can use the startMeasuring and stopMeasuring methods or the keepMeasuring method multiple times to define boundaries for performance test measurements. If a test method contains multiple calls to startMeasuring, stopMeasuring, and keepMeasuring, then the performance testing framework accumulates and sums the measurements. The framework does not support nested measurement boundaries. If you use these methods incorrectly in a Test method and run the test as a TimeExperiment, then the framework marks the measurement as invalid. You can still run these performance tests as unit tests. For more information, see Test Performance Using Classes.
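A sketch of both boundary styles, using sort and min purely as stand-ins for the code under test:

```matlab
% BoundaryTest.m -- hypothetical test class with measurement boundaries.
classdef BoundaryTest < matlab.perftest.TestCase
    methods (Test)
        function testSort(testCase)
            x = rand(1, 1e6);             % setup: excluded from the measurement
            testCase.startMeasuring();
            y = sort(x);                  % only this code is timed
            testCase.stopMeasuring();
            testCase.verifyTrue(issorted(y));
        end
        function testMin(testCase)
            x = rand(1, 1e6);
            while testCase.keepMeasuring  % loop until a reliable estimate
                m = min(x);               % fast code measured in aggregate
            end
            testCase.verifyEqual(m, min(x));
        end
    end
end
```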

Run Performance Tests

There are two ways to run performance tests:

- Call the runperf function, which runs the tests using a frequentist time experiment with default options.
- Construct a TimeExperiment explicitly and pass a test suite to its run method, which gives you control over the options in the table above.

You can also run your performance tests as standard regression tests, as shown in the sketch below. For more information, see Test Performance Using Classes.
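A minimal sketch of both modes of running, again reusing the hypothetical PreallocationTest class:

```matlab
% Sketch: run as a performance test, then as a standard regression test.
perfResults = runperf('PreallocationTest');
sampleSummary(perfResults)          % table of min, mean, and max sample times

unitResults = runtests('PreallocationTest');   % pass/fail only, no timing
```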

Understand Invalid Test Results

In some situations, the performance testing framework marks the MeasurementResult for a test as invalid by setting its Valid property to false. This invalidation occurs if the test fails or is filtered. It also occurs if the test incorrectly uses the startMeasuring and stopMeasuring methods (or the keepMeasuring method) of matlab.perftest.TestCase.

When the performance testing framework encounters an invalid test result, its behavior depends on the type of time experiment:

- A frequentist time experiment stops measuring the test and moves to the next one.
- A fixed time experiment still collects the specified number of samples.
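One way to locate invalid results after a run (assuming the hypothetical test class from earlier):

```matlab
% Sketch: find invalid measurements in the results array.
results = runperf('PreallocationTest');
validMask = [results.Valid];          % one logical per MeasurementResult
invalidResults = results(~validMask)  % inspect any invalidated measurements
```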

See Also

runperf | testsuite | matlab.perftest.TimeExperiment | matlab.perftest.TimeResult | matlab.unittest.measurement.MeasurementResult
