An Introduction to the Performance API

The Performance API measures the responsiveness of your live web application on real user devices and network connections. It can help identify bottlenecks in your client-side and server-side code with:

  1. user timing: custom measurement of client-side JavaScript function performance
  2. paint timing: browser rendering metrics
  3. resource timing: loading performance of assets and Ajax calls
  4. navigation timing: page loading metrics

The API addresses several problems associated with typical performance assessment:

  1. Developers often test applications on high-end PCs connected to a fast network. DevTools can emulate slower devices, but it won’t always highlight real-world issues when the majority of clients are running a two-year-old mobile connected to airport WiFi.
  2. Third-party options such as Google Analytics are often blocked, leading to skewed results and assumptions. You may also encounter privacy implications in some countries.
  3. The Performance API can gauge various metrics more accurately than methods such as Date().

The following sections describe ways you can use the Performance API. Some knowledge of JavaScript and page loading metrics is recommended.

Performance API Availability

Most modern browsers support the Performance API – including IE10 and IE11 (even IE9 has limited support). You can detect the API’s presence using:

if ('performance' in window) {
  // use Performance API
}

It’s not possible to fully polyfill the API, so be wary about missing browsers. If 90% of your users are happily browsing with Internet Explorer 8, you’d only be measuring the 10% of clients using more capable browsers.

The API can be used in Web Workers, which provide a way to execute complex calculations in a background thread without halting browser operations.
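
For example, a minimal sketch of timing work inside a worker (the file name, message format, and runLongCalculation() function are illustrative):

// worker.js: time an expensive calculation in a background thread
self.addEventListener('message', e => {

  const timeStart = performance.now();
  const result = runLongCalculation(e.data); // some expensive function
  const timeTaken = performance.now() - timeStart;

  self.postMessage({ result, timeTaken });

});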

Most API methods can be used in server-side Node.js with the standard perf_hooks module:

// Node.js performance
import { performance } from 'node:perf_hooks';
// or in CommonJS: const { performance } = require('node:perf_hooks');

console.log( performance.now() );

Deno provides the standard Performance API:

// Deno performance
console.log( performance.now() );

You will need to run scripts with the --allow-hrtime permission to enable high-resolution time measurement:

deno run --allow-hrtime index.js

Server-side performance is usually easier to assess and manage because it’s dependent on load, CPUs, RAM, hard disks, and cloud service limits. Hardware upgrades or process management options such as PM2, clustering, and Kubernetes can be more effective than refactoring code.

The following sections concentrate on client-side performance for this reason.

Custom Performance Measurement

The Performance API can be used to time the execution of your application functions. You may have used or encountered timing code based on Date():

const timeStart = new Date();
runMyCode();
const timeTaken = new Date() - timeStart;

console.log(`runMyCode() executed in ${ timeTaken }ms`);

The Performance API offers two primary benefits:

  1. Better accuracy: Date() measures to the nearest millisecond, but the Performance API can measure fractions of a millisecond (depending on the browser).
  2. Better reliability: The user or OS can change the system time so Date()-based metrics will not always be accurate. This means your functions could appear particularly slow when clocks move forward!

The Date() equivalent is performance.now(), which returns a high-resolution timestamp set at zero when the process responsible for creating the document starts (effectively, the moment the page begins to load):

const timeStart = performance.now();
runMyCode();
const timeTaken = performance.now() - timeStart;

console.log(`runMyCode() executed in ${ timeTaken }ms`);

The performance.timeOrigin property returns the absolute timestamp of that zero point, measured from 1 January 1970, although it’s not available in IE and Deno.
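
Where it’s supported, a sketch converting a performance.now() reading to an absolute time:

// absolute time (in ms since 1 January 1970) of an event
const absoluteTime = performance.timeOrigin + performance.now();
console.log( new Date(absoluteTime) );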

performance.now() becomes impractical when making more than a few measurements. The Performance API provides a buffer where you can record events for later analysis by passing a label name to performance.mark():

performance.mark('start:app');
performance.mark('start:init');

init(); // run initialization functions

performance.mark('end:init');
performance.mark('start:funcX');

funcX(); // run another function

performance.mark('end:funcX');
performance.mark('end:app');

An array of all mark objects in the Performance buffer can be extracted using:

const mark = performance.getEntriesByType('mark');

Example result:

[
  {
    detail: null
    duration: 0
    entryType: "mark"
    name: "start:app"
    startTime: 1000
  },
  {
    detail: null
    duration: 0
    entryType: "mark"
    name: "start:init"
    startTime: 1001
  },
  {
    detail: null
    duration: 0
    entryType: "mark"
    name: "end:init"
    startTime: 1100
  },
...
]

The performance.measure() method calculates the time between two marks and also stores it in the Performance buffer. You pass a new measure name, the starting mark name (omit it to measure from the moment the page started to load), and the ending mark name (omit it to measure to the current time):

performance.measure('init', 'start:init', 'end:init');
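
For example (the measure names used here are illustrative):

// measure from the 'start:app' mark to the current time
performance.measure('appActive', 'start:app');

// measure from the start of page loading to the current time
performance.measure('pageActive');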

A PerformanceMeasure object is appended to the buffer with the calculated time duration. To obtain this value, you can either request an array of all measures:

const measure = performance.getEntriesByType('measure');

or request a measure by its name:

performance.getEntriesByName('init');

Example result:

[
  {
    detail: null
    duration: 99
    entryType: "measure"
    name: "init"
    startTime: 1001
  }
]

Using the Performance Buffer

As well as marks and measures, the Performance buffer is used to automatically record navigation timing, resource timing, and paint timing (which we’ll discuss later). You can obtain an array of all entries in the buffer:

performance.getEntries();

By default, most browsers provide a buffer that stores up to 150 resource metrics. This should be enough for most assessments, but you can increase or decrease the buffer limit if needed:

// record 500 metrics
performance.setResourceTimingBufferSize(500);

Marks can be cleared by name or you can specify an empty value to clear all marks:

performance.clearMarks('start:init');

Similarly, measures can be cleared by name, or with an empty value to clear all:

performance.clearMeasures();
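
Resource timings recorded by the browser (covered below) have their own clearing method:

// clear all resource entries from the buffer
performance.clearResourceTimings();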

Monitoring Performance Buffer Updates

A PerformanceObserver can monitor changes to the Performance buffer and run a function when specific events occur. The syntax will be familiar if you’ve used MutationObserver to respond to DOM updates or IntersectionObserver to detect when elements are scrolled into the viewport.

You must define an observer function with two parameters:

  1. an array of observer entries which have been detected, and
  2. the observer object. If necessary, its disconnect() method can be called to stop the observer.

function performanceCallback(list, observer) {

  list.getEntries().forEach(entry => {
    console.log(`name    : ${ entry.name }`);
    console.log(`type    : ${ entry.entryType }`);
    console.log(`start   : ${ entry.startTime }`);
    console.log(`duration: ${ entry.duration }`);
  });

}

The function is passed to a new PerformanceObserver object. Its observe() method is passed an array of Performance buffer entryTypes to observe:

let observer = new PerformanceObserver( performanceCallback );
observer.observe({ entryTypes: ['mark', 'measure'] });

In this example, adding a new mark or measure runs the performanceCallback() function. While it only logs messages here, it could be used to trigger a data upload or make further calculations.

Measuring Paint Performance

The Paint Timing API is only available in client-side JavaScript and automatically records two metrics that are useful when assessing Core Web Vitals:

  1. first-paint: The browser has started to draw the page.
  2. first-contentful-paint: The browser has painted the first significant item of DOM content, such as a heading or an image.

These can be extracted from the Performance buffer to an array:

const paintTimes = performance.getEntriesByType('paint');

Be wary of running this before the paint events have occurred; the values will not be ready. Either wait for the window load event or use a PerformanceObserver to monitor paint entryTypes.
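
For example, a sketch using a buffered observer so paints that occurred before the observer started are still reported:

// log paint metrics as they are recorded
new PerformanceObserver( list => {

  list.getEntries().forEach( entry => {
    console.log(`${ entry.name }: ${ entry.startTime }ms`);
  });

}).observe({ type: 'paint', buffered: true });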

Example result:

[
  {
    "name": "first-paint",
    "entryType": "paint",
    "startTime": 812,
    "duration": 0
  },
  {
    "name": "first-contentful-paint",
    "entryType": "paint",
    "startTime": 856,
    "duration": 0
  }
]

A slow first-paint is often caused by render-blocking CSS or JavaScript. The gap to the first-contentful-paint could be large if the browser has to download a large image or render complex elements.
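
A quick sketch to examine that gap (Safari does not record first-paint, hence the existence check):

const fp  = performance.getEntriesByName('first-paint')[0],
      fcp = performance.getEntriesByName('first-contentful-paint')[0];

if (fp && fcp) {
  console.log(`paint gap: ${ fcp.startTime - fp.startTime }ms`);
}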

Resource Performance Measurement

Network timings for resources such as images, stylesheets, and JavaScript files are automatically recorded to the Performance buffer. While there is little you can do to solve network speed issues (other than reducing file sizes), the data can help highlight issues with larger assets, slow Ajax responses, or badly performing third-party scripts.

An array of PerformanceResourceTiming metrics can be extracted from the buffer using:

const resources = performance.getEntriesByType('resource');

Alternatively, you can fetch metrics for an asset by passing its full URL:

const resource = performance.getEntriesByName('https://test.com/script.js');

Example result:

[
  {
    connectEnd: 195,
    connectStart: 195,
    decodedBodySize: 0,
    domainLookupEnd: 195,
    domainLookupStart: 195,
    duration: 2,
    encodedBodySize: 0,
    entryType: "resource",
    fetchStart: 195,
    initiatorType: "script",
    name: "https://test.com/script.js",
    nextHopProtocol: "h3",
    redirectEnd: 0,
    redirectStart: 0,
    requestStart: 195,
    responseEnd: 197,
    responseStart: 197,
    secureConnectionStart: 195,
    serverTiming: [],
    startTime: 195,
    transferSize: 0,
    workerStart: 195
  }
]

The following properties can be examined:

  1. name: the resource URL
  2. entryType: "resource"
  3. initiatorType: how the resource was initiated, such as "script", "link", "img", or "fetch"
  4. nextHopProtocol: the network protocol, such as "h2" or "h3"
  5. serverTiming: an array of server-provided PerformanceServerTiming metrics
  6. startTime and duration: when the fetch started and how long it took
  7. transferSize, encodedBodySize, and decodedBodySize: the size over the network, as delivered, and after decompression
  8. timing milestones such as domainLookupStart, connectStart, requestStart, responseStart, and responseEnd

This example script retrieves all Ajax requests initiated by the Fetch API and returns the total transfer size and duration:

const fetchAll = performance.getEntriesByType('resource')
  .filter( r => r.initiatorType === 'fetch')
  .reduce( (sum, current) => {
    return {
      transferSize: sum.transferSize + current.transferSize,
      duration: sum.duration + current.duration
    };
  },
  { transferSize: 0, duration: 0 }
);
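
A similar approach can rank the slowest assets:

// the five slowest resource loads
const slowest = performance.getEntriesByType('resource')
  .sort( (a, b) => b.duration - a.duration )
  .slice(0, 5);

console.log(slowest);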

Page Navigation Performance Measurement

Network timings for unloading the previous page and loading the current page are automatically recorded to the Performance buffer as a single PerformanceNavigationTiming object.

Extract it to an array using:

const pageTime = performance.getEntriesByType('navigation');

…or by passing the page URL to .getEntriesByName():

const pageTiming = performance.getEntriesByName(window.location.href);

The metrics are identical to those for resources but also include page-specific values:

  1. domInteractive: when DOM parsing completed
  2. domContentLoadedEventStart and domContentLoadedEventEnd: when the DOMContentLoaded event handlers started and completed
  3. domComplete: when the document and all its sub-resources were ready
  4. loadEventStart and loadEventEnd: when the window load event started and completed
  5. type: the navigation type, such as "navigate", "reload", or "back_forward"
  6. redirectCount: the number of redirects that occurred
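
For example, a minimal sketch reporting overall load progress:

// overall page loading metrics
const [ nav ] = performance.getEntriesByType('navigation');

if (nav) {
  console.log(`DOM ready in ${ nav.domContentLoadedEventEnd }ms`);
  console.log(`page loaded in ${ nav.duration }ms`);
}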

Typical issues include excessive redirects (a high redirectCount), slow server responses (a large gap between requestStart and responseStart), and long-running load event handlers (a large gap between loadEventStart and loadEventEnd).

Performance Recording and Analysis

The Performance API allows you to collate real-world usage data and upload it to a server for further analysis. You could use a third-party service such as Google Analytics to store the data, but there’s a risk the third-party script could be blocked or introduce new performance problems. Your own solution can be customized to your requirements to ensure monitoring does not impact other functionality.
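
As a minimal sketch, collected metrics could be posted to your own analytics endpoint (the /analytics URL is hypothetical) when the user leaves the page:

// upload recorded measures when the page is hidden
addEventListener('visibilitychange', () => {

  if (document.visibilityState === 'hidden') {

    const metrics = performance.getEntriesByType('measure')
      .map( m => ({ name: m.name, duration: m.duration }) );

    navigator.sendBeacon('/analytics', JSON.stringify(metrics));

  }

});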

Be wary of situations in which statistics cannot be determined — perhaps because users are on old browsers, blocking JavaScript, or behind a corporate proxy. Understanding what data is missing can be more fruitful than making assumptions based on incomplete information.

Ideally, your analysis scripts won’t negatively impact performance by running complex calculations or uploading large quantities of data. Consider utilizing web workers and minimizing the use of synchronous localStorage calls. It’s always possible to batch process raw data later.

Finally, be wary of outliers such as very fast or very slow devices and connections that adversely affect statistics. For example, if nine users load a page in two seconds but the tenth experiences a 60 second download, the average latency comes out to nearly 8 seconds. A more realistic metric is the median figure (2 seconds) or the 90th percentile (9 in every 10 users experience a load time of 2 seconds or less).
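
A quick sketch of the nearest-rank calculation behind those figures:

// nearest-rank percentile of an array of load times (in seconds)
function percentile(values, p) {
  const sorted = [...values].sort( (a, b) => a - b );
  return sorted[ Math.ceil(p * sorted.length) - 1 ];
}

const load = [2, 2, 2, 2, 2, 2, 2, 2, 2, 60];

console.log( percentile(load, 0.5) ); // median: 2
console.log( percentile(load, 0.9) ); // 90th percentile: 2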

Summary

The Performance API provides accurate, real-world measurements of your own functions, paint events, asset loading, and page navigations, all recorded to a buffer you can query, observe, and upload for analysis. It takes more effort than dropping in a third-party script, but the results reflect what your users actually experience.

Craig Buckler

Freelance UK web developer, writer, and speaker. Has been around a long time and rants about standards and performance.