
QuickPizza

QuickPizza screenshot

What is QuickPizza? 🍕🍕🍕

QuickPizza is a web application, used for demonstrations and workshops, that generates new and exciting pizza combinations!

The app is built using SvelteKit for the frontend and Go for the backend.

The tests written for QuickPizza demonstrate the basic and advanced functionality of k6, ranging from a basic load test to using different modules and extensions. QuickPizza is used in the k6-oss-workshop.

Requirements

To run QuickPizza locally, you need Docker (and Docker Compose for the telemetry setups).

Run locally with Docker

To run the app locally with Docker, run the command:

docker run --rm -it -p 3333:3333 ghcr.io/grafana/quickpizza-local:latest

or build image from the repo:

docker run --rm -it -p 3333:3333 $(docker build -q .)

That's it!

Now you can go to localhost:3333 and get some pizza recommendations!

Use k6 to test QuickPizza

All tests live in the k6 folder, which is organized into subfolders by topic (such as foundations and extensions).

To run tests in the foundations folder, use the following commands:

cd k6/foundations
k6 run 01.basic.js

If you want to run one iteration with one virtual user, you can use the following command:

k6 run --iterations 1 --vus 1 01.basic.js

If QuickPizza is deployed remotely, then pass the hostname and port through the BASE_URL environment variable as follows:

k6 run -e BASE_URL=https://acmecorp.dev:3333 01.basic.js
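For reference, here is a minimal sketch of such a basic test (illustrative only, not the actual contents of 01.basic.js), picking up BASE_URL with a localhost fallback:

import http from "k6/http";
import { check, sleep } from "k6";

// Fall back to the local Docker setup when BASE_URL is not set.
const BASE_URL = __ENV.BASE_URL || "http://localhost:3333";

export default function () {
  // Fetch the QuickPizza frontend and verify it responds.
  const res = http.get(BASE_URL);
  check(res, { "status is 200": (r) => r.status === 200 });
  sleep(1);
}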

Using k6 extensions

If the test uses an extension, you need to build a k6 binary that includes the required extension(s). For detailed instructions, refer to the k6 docs:

cd k6/extensions

xk6 build --with xk6-internal=../internal

To run the test that uses the k6/x/internal module, use the previously created k6 binary in the k6/extensions folder:

./k6 run 01.basic-internal.js

Using the k6 Docker image

If you want to use the k6 Docker image (https://hub.docker.com/r/grafana/k6) to run k6, you need to run the QuickPizza and k6 containers within the same network.

First, create a Docker network. Then, run QuickPizza, assigning a hostname and connecting it to the created network.

docker network create quickpizza_network
docker run --network=quickpizza_network --hostname=quickpizza --rm -it -p 3333:3333 ghcr.io/grafana/quickpizza-local:latest

Next, you can use the k6 Docker image to execute the k6 test. Run the k6 Docker container within the same network (quickpizza_network) and pass the BASE_URL environment variable set to the QuickPizza container's hostname as follows:

docker run -i --network=quickpizza_network -e BASE_URL=http://quickpizza:3333 grafana/k6 run - <01.basic.js

Collect Telemetry (Docker Compose)

Testing something you can't observe is only half the fun. QuickPizza is instrumented using best practices to emit logs, metrics, and traces, and to allow profiling. You can either collect and store this data locally or send it to Grafana Cloud.

Enabling Debug Logging

If you encounter any issues during operation, you can enable debug logging by setting the following environment variable:

export QUICKPIZZA_LOG_LEVEL=debug


Local Setup

The docker-compose-local.yaml file is set up to run and orchestrate the QuickPizza, Grafana, Tempo, Loki, Prometheus, Pyroscope, and Grafana Agent containers.

The Grafana Agent collects traces, metrics, logs, and profiling data from the QuickPizza app and forwards them to Tempo, Prometheus, and Loki. Finally, you can visualize and correlate the data stored in these containers with the locally running Grafana instance.

To start the local environment, use the following command:

docker compose -f docker-compose-local.yaml up -d

Like before, QuickPizza is available at localhost:3333. It's time to discover some fancy pizzas!

Then, you can visit the Grafana instance running at localhost:3000 to access QuickPizza data.

Please refer to agent-local.river and docker-compose-local.yaml to find the labels applied to the telemetry data.

Correlate Pyroscope Data with k6 Tests

Whenever a Pyroscope endpoint is provided via the QUICKPIZZA_PYROSCOPE_ENDPOINT environment variable, the QuickPizza app emits and pushes profiling data to Pyroscope. You can visualize the profiling data with the Pyroscope data source in Grafana and the Explore Profiles Grafana plugin.

Tip

To send data to Pyroscope directly in the local setup, set the QUICKPIZZA_PYROSCOPE_ENDPOINT environment variable to http://pyroscope:4040.

To correlate the profiling data with the k6 test results, use the k6 Pyroscope library.
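As a rough sketch, instrumenting a script's HTTP calls with that library looks like the following (the jslib URL and version here are assumptions; check the library's documentation for the current ones):

import { instrumentHTTP } from "https://jslib.k6.io/http-instrumentation-pyroscope/1.0.0/index.js";
import http from "k6/http";

// Adds correlation (baggage) headers to outgoing requests so that
// profiles in Pyroscope can be matched to the k6 test that caused them.
instrumentHTTP();

export default function () {
  http.get(__ENV.BASE_URL || "http://localhost:3333");
}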

Pyroscope Data Source

Additional environment variables are available to configure the tags/labels for the profiles.

Send k6 Test Results to Prometheus and visualize them in Grafana with prebuilt dashboards

To send k6 results to the Prometheus instance, execute the k6 run command with the value of the output flag set to experimental-prometheus-rw as follows:

k6 run -o experimental-prometheus-rw 01.basic.js
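All metrics produced by the script, including custom ones, are forwarded with this output. As a sketch (the metric name below is illustrative), a custom trend defined in a script lands in Prometheus alongside k6's built-in metrics:

import http from "k6/http";
import { Trend } from "k6/metrics";

// Illustrative custom metric; it is remote-written to Prometheus
// together with k6's built-in metrics.
const recommendationLatency = new Trend("quickpizza_recommendation_latency");

export default function () {
  const res = http.post(
    `${__ENV.BASE_URL || "http://localhost:3333"}/api/pizza`,
    "{}",
    { headers: { "Content-Type": "application/json", Authorization: "abcdef0123456789" } }
  );
  recommendationLatency.add(res.timings.duration);
}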

The local Grafana instance includes the k6 Prometheus and k6 Prometheus (Native Histogram) dashboards to help visualize, query, and correlate k6 results with telemetry data.

k6 provisioned dashboards

For detailed instructions about the different options of the k6 Prometheus output, refer to the k6 output guide for Prometheus remote write.

Grafana Cloud

The docker-compose-cloud.yaml file is set up to run the QuickPizza and Grafana Agent containers.

In this setup, the Grafana Agent collects observability data from the QuickPizza app and forwards it to Grafana Cloud.

You will need the following settings:

  1. The name of the Grafana Cloud Stack where the telemetry data will be stored.
  2. An Access Policy Token that includes the following scopes for the selected Grafana Cloud Stack: stacks:read, metrics:write, logs:write, traces:write, and profiles:write.
  3. The User ID and Endpoint to use for Grafana Cloud Profiles (Pyroscope) for your Grafana Cloud Stack.

Then, create a .env file with the following environment variables, filled in with the values of the previous settings:

# Your Grafana Cloud Stack Name (Slug)
GRAFANA_CLOUD_STACK=name

# Your Grafana Cloud Access Policy Token
GRAFANA_CLOUD_TOKEN=

# The endpoint to send your profiling data to
QUICKPIZZA_PYROSCOPE_ENDPOINT=

# Your Grafana Cloud Profiles username for your Grafana Cloud Stack
QUICKPIZZA_GRAFANA_CLOUD_USER=

# Your Grafana Cloud Profiles password for your Grafana Cloud Stack
# (typically your Grafana Cloud Access Policy Token)
QUICKPIZZA_GRAFANA_CLOUD_PASSWORD=${GRAFANA_CLOUD_TOKEN}

Finally, execute the Docker Compose command using the docker-compose-cloud.yaml file, just as in the local setup:

docker compose -f docker-compose-cloud.yaml up -d

QuickPizza is available at localhost:3333. Click the Pizza, Please! button and discover some awesome pizzas!

Now, you can log in to Grafana Cloud and explore QuickPizza's telemetry data on the Prometheus, Tempo, Loki, and Pyroscope instances of your Grafana Cloud Stack. Refer to agent-cloud.river and docker-compose-cloud.yaml to find the labels applied to the telemetry data.

Enable Profiling (Send profiles to Grafana Cloud Profiles / Pyroscope)

Whenever a Pyroscope endpoint is provided via the QUICKPIZZA_PYROSCOPE_ENDPOINT environment variable, along with suitable authentication credentials, the QuickPizza app emits and pushes profiling data to Grafana Cloud Profiles. You can visualize the profiling data with the Grafana Cloud Profiles data source in Grafana Cloud and the Explore Profiles feature.

To enable Grafana Cloud Profiling, ensure the QUICKPIZZA_PYROSCOPE_ENDPOINT, QUICKPIZZA_GRAFANA_CLOUD_USER, and QUICKPIZZA_GRAFANA_CLOUD_PASSWORD environment variables are set in the .env file, with values matching your Grafana Cloud Stack.

Additional environment variables are available to configure the tags/labels for the profiles and to support some Grafana Cloud Profiling integrations.

Enable Frontend Observability (Grafana Faro)

Frontend Observability is available exclusively in Grafana Cloud. To enable Grafana Cloud Frontend Observability for QuickPizza, add the QUICKPIZZA_CONF_FARO_URL variable to the .env file, setting its value to your Faro web URL:

QUICKPIZZA_CONF_FARO_URL=

Restart the docker-compose-cloud.yaml environment.

Frontend Observability

Send k6 test results to Grafana Cloud Prometheus and visualize them with prebuilt Grafana dashboards

Just like in the local setup, you can output k6 result metrics to a Prometheus instance; in this case, one provided by your Grafana Cloud Stack.

K6_PROMETHEUS_RW_USERNAME=USERNAME \
K6_PROMETHEUS_RW_PASSWORD=API_KEY \
K6_PROMETHEUS_RW_SERVER_URL=REMOTE_WRITE_ENDPOINT \
k6 run -o experimental-prometheus-rw script.js

For detailed instructions, refer to the k6 output guide for Grafana Cloud Prometheus.


Deploy QuickPizza Docker image

The Dockerfile contains the setup for running QuickPizza without collecting data with the Grafana Agent.

You can use the Dockerfile or build a Docker image to deploy the QuickPizza app on any cloud provider that supports Docker deployments. For simplicity, here are the Fly.io instructions:

  1. Authenticate using the fly CLI.
  2. Then, run the CLI to deploy the application and set up the internal port 3333 that the server listens to.
    fly launch --internal-port 3333 --now

For deployments on remote servers, you need to pass the BASE_URL environment variable when running the k6 tests as follows:

k6 run -e BASE_URL=https://acmecorp.dev:3333 01.basic.js

Use an external database

By default, QuickPizza stores all its data in an in-memory SQLite database. This allows for a quick start while still closely resembling a real-world application. If you want to use an external database instead, set the QUICKPIZZA_DB environment variable to a supported connection string. Currently, only PostgreSQL and SQLite are supported.

Example connection strings:

# a remote PostgreSQL instance
export QUICKPIZZA_DB="postgres://user:password@localhost:5432/database?sslmode=disable"

# a local sqlite3 database
export QUICKPIZZA_DB="quickpizza.db"

Deploy the application to Kubernetes

If you want to run a test that uses xk6-disruptor, or want to experiment with distributed tracing, you will need to deploy QuickPizza to Kubernetes.

For detailed setup instructions, see the QuickPizza Kubernetes guide.

Injecting Errors from Client via Headers

You can introduce errors from the client side using custom headers, for example x-error-record-recommendation and x-error-record-recommendation-percentage, which are used in the example below.

Example of header usage:

curl -X POST http://localhost:3333/api/pizza \
  -H "Content-Type: application/json" \
  -H "Authorization: abcdef0123456789" \
  -H "x-error-record-recommendation: internal-error" \
  -H "x-error-record-recommendation-percentage: 20" \
  -d '{}'
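
The same headers can be sent from a k6 script. Here is a minimal sketch reusing the header names and token from the curl example above:

import http from "k6/http";

export default function () {
  const headers = {
    "Content-Type": "application/json",
    Authorization: "abcdef0123456789",
    // Ask the server to fail roughly 20% of recommendation requests
    // with an internal error.
    "x-error-record-recommendation": "internal-error",
    "x-error-record-recommendation-percentage": "20",
  };
  http.post("http://localhost:3333/api/pizza", "{}", { headers });
}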