Issues with configuring backup against S3 storage
November 12, 2024, 8:37pm 1
Description:
We are having trouble configuring backups against a local MinIO instance. The cluster is created successfully, but the operator reports ERROR unable to create stanza, and when we try to run an on-demand backup nothing happens; the status of the backup resource stays at Starting.
Currently running OpenShift 4.15.
Steps to Reproduce:
- Create cluster resource with the pgbackrest parameters against local MinIO instance.
- Create a pg-backup resource to trigger an on-demand backup.
- Status for the pg-backup resource is Starting forever.
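For reference, a minimal on-demand backup resource looks roughly like this (a sketch; the apiVersion and kind match the controllerGroup/controllerKind in the logs below, while the metadata/spec names here are placeholders):

```yaml
apiVersion: pgv2.percona.com/v2
kind: PerconaPGBackup
metadata:
  name: backup1              # placeholder name
spec:
  pgCluster: cluster1        # placeholder: name of the PerconaPGCluster
  repoName: repo1            # must match a repo defined under backups.pgbackrest.repos
```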
Version:
v2.4.0
Logs:
Log from operator:
2024-11-12T17:23:01.313Z INFO Waiting for backup to start {"controller": "perconapgbackup", "controllerGroup": "pgv2.percona.com", "controllerKind": "PerconaPGBackup", "PerconaPGBackup": {"name":"<REDACTED>","namespace":"<REDACTED>"}, "namespace": "<REDACTED>", "name": "<REDACTED>", "reconcileID": "436f40f4-34f1-4dff-b633-08f8fe31780a", "request": {"name":"<REDACTED>","namespace":"<REDACTED>"}}
2024-11-12T17:23:01.648Z ERROR get latest backup {"controller": "perconapgcluster", "controllerGroup": "pgv2.percona.com", "controllerKind": "PerconaPGCluster", "PerconaPGCluster": {"name":"<REDACTED>","namespace":"<REDACTED>"}, "namespace": "<REDACTED>", "name": "<REDACTED>", "reconcileID": "16ba3814-edfd-4046-a67d-df35ae4fbbd7", "error": "no completed backups found", "errorVerbose": "no completed backups found\ngithub.com/percona/percona-postgresql-operator/percona/watcher.getLatestBackup\n\t/go/src/github.com/percona/percona-postgresql-operator/percona/watcher/wal.go:129\ngithub.com/percona/percona-postgresql-operator/percona/watcher.WatchCommitTimestamps\n\t/go/src/github.com/percona/percona-postgresql-operator/percona/watcher/wal.go:65\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1695"}
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1695
2024-11-12T17:23:04.803Z ERROR unable to create stanza {"controller": "postgrescluster", "controllerGroup": "postgres-operator.crunchydata.com", "controllerKind": "PostgresCluster", "PostgresCluster": {"name":"<REDACTED>","namespace":"<REDACTED>"}, "namespace": "<REDACTED>", "name": "<REDACTED>", "reconcileID": "3264a2fa-dd1a-4272-b79a-cd9167f45de4", "reconciler": "pgBackRest", "error": "command terminated with exit code 32: ", "errorVerbose": "command terminated with exit code 32: \ngithub.com/percona/percona-postgresql-operator/internal/pgbackrest.Executor.StanzaCreateOrUpgrade\n\t/go/src/github.com/percona/percona-postgresql-operator/internal/pgbackrest/pgbackrest.go:96\ngithub.com/percona/percona-postgresql-operator/internal/controller/postgrescluster.(*Reconciler).reconcileStanzaCreate\n\t/go/src/github.com/percona/percona-postgresql-operator/internal/controller/postgrescluster/pgbackrest.go:2705\ngithub.com/percona/percona-postgresql-operator/internal/controller/postgrescluster.(*Reconciler).reconcilePGBackRest\n\t/go/src/github.com/percona/percona-postgresql-operator/internal/controller/postgrescluster/pgbackrest.go:1412\ngithub.com/percona/percona-postgresql-operator/internal/controller/postgrescluster.(*Reconciler).Reconcile\n\t/go/src/github.com/percona/percona-postgresql-operator/internal/controller/postgrescluster/controller.go:390\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:114\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:311\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:261\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/g
o/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:222\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1695\ngithub.com/percona/percona-postgresql-operator/internal/controller/postgrescluster.(*Reconciler).reconcileStanzaCreate\n\t/go/src/github.com/percona/percona-postgresql-operator/internal/controller/postgrescluster/pgbackrest.go:2712\ngithub.com/percona/percona-postgresql-operator/internal/controller/postgrescluster.(*Reconciler).reconcilePGBackRest\n\t/go/src/github.com/percona/percona-postgresql-operator/internal/controller/postgrescluster/pgbackrest.go:1412\ngithub.com/percona/percona-postgresql-operator/internal/controller/postgrescluster.(*Reconciler).Reconcile\n\t/go/src/github.com/percona/percona-postgresql-operator/internal/controller/postgrescluster/controller.go:390\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:114\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:311\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:261\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:222\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1695"}
github.com/percona/percona-postgresql-operator/internal/controller/postgrescluster.(*Reconciler).Reconcile
/go/src/github.com/percona/percona-postgresql-operator/internal/controller/postgrescluster/controller.go:390
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:114
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:311
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:261
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/controller/controller.go:222
Expected Result:
That the backup would succeed.
Actual Result:
The status of the pg-backup resource is stuck on Starting.
Additional Information:
Cluster resource:
...
configuration:
- secret:
name: <REDACTED>-pgbackrest-secrets
global:
repo1-retention-full: "14"
repo1-retention-full-type: time
repo1-retention-diff: "14"
repo1-retention-diff-type: time
manual:
repoName: repo1
options:
- --type=full
repos:
- name: repo1
s3:
endpoint: "http://<REDACTED>:9000" # yes, our internal MinIO instance is for some reason http only..
bucket: "<REDACTED>"
region: minio
...
<REDACTED>-pgbackrest-config ConfigMap:
...
data:
pgbackrest-server.conf: ""
pgbackrest_instance.conf: |
# Generated by postgres-operator. DO NOT EDIT.
# Your changes will not be saved.
[global]
log-path = /pgdata/pgbackrest/log
repo1-path = /pgbackrest/repo1
repo1-retention-diff = 14
repo1-retention-diff-type = time
repo1-retention-full = 14
repo1-retention-full-type = time
repo1-s3-bucket = <REDACTED>
repo1-s3-endpoint = http://<REDACTED>:9000
repo1-s3-region = minio
repo1-type = s3
[db]
pg1-path = /pgdata/pg16
pg1-port = 5432
pg1-socket-path = /tmp/postgres
...
mboncalo November 13, 2024, 9:18am 2
Did you create your -pgbackrest-secrets secret with the other required S3 credentials, repo1-s3-key and repo1-s3-key-secret?
And other configuration like repo1-s3-uri-style=path and repo1-storage-verify-tls=n?
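If it helps, that secret is typically shaped like this (a sketch based on the Percona docs; the secret name and credential values are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cluster1-pgbackrest-secrets  # placeholder; referenced from backups.pgbackrest.configuration
type: Opaque
stringData:
  s3.conf: |
    [global]
    repo1-s3-key=<ACCESS_KEY>
    repo1-s3-key-secret=<SECRET_KEY>
```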
92fnko November 13, 2024, 1:20pm 3
Yep!
I have also tried repo1-s3-uri-style=path (to be honest, I don’t know what this actually does) and repo1-storage-verify-tls=n (which shouldn’t be needed at all, since we are using HTTP at the moment).
@92fnko I will check this on Monday and get back to you. Anything specific about your Minio installation?
Ok, I will need more details from you.
I tried it today with minio and it worked fine.
How do you deploy MinIO? Any specific config parameters?
My MinIO installation is deployed with HTTPS (the default when you deploy with Helm). And my pgbackrest config map looks like this:
archive-async = y
log-path = /pgdata/pgbackrest/log
repo1-path = /pgbackrest/repo1
repo1-s3-bucket = sp-test
repo1-s3-endpoint = https://minio.default.svc.cluster.local
repo1-s3-region = us-east-1
repo1-s3-uri-style = path
repo1-storage-verify-tls = n
repo1-type = s3
spool-path = /pgdata/pgbackrest-spool
mboncalo November 19, 2024, 2:20pm 6
@92fnko for some reason, it only works for me with repo2-storage-verify-tls=y, even though we have an https endpoint… you can try it and see
92fnko November 19, 2024, 5:00pm 7
Thanks for the input! Apparently pgBackRest uses path as the S3 URI style by default, so repo1-s3-uri-style shouldn’t be needed.
@Sergey_Pronin We are currently using an external MinIO (outside the cluster), and it will soon be HTTPS only. But out of curiosity, would you mind setting up MinIO without HTTPS and testing the same scenario?
@92fnko please share with me how you deploy minio. I took the latest minio instruction and it automatically deployed it with HTTPs. I can figure it out, but then it might be a different version, different config, etc. So please share the helm values.yaml or any other manifest that you use.
I’ve just created this thread, Documenting my tests in using HTTP with pgbackrest and Minio, and Discourse suggested this one.
I had the same problem as the OP while running pgBackRest over HTTP; it doesn’t seem to be possible, though that was just my experience. Hope the link above helps!
92fnko January 13, 2025, 7:42am 10
I was wrong. pgBackRest uses host as the default URI style for S3 (pgBackRest - Configuration Reference).
I was also under the impression that I could use https://hostname:port under spec.backups.pgbackrest.repos.0.s3.endpoint like in many other apps, but the port needs to go in a separate parameter (repo1-storage-port, thanks @Superhammer).
We have now migrated to HTTPS, but I think @Superhammer is on to something… I would also appreciate it if the official documentation were more complete on these basic things…
The below works, but now the final boss: how do I import the necessary CAs to trust the connection (Include CAs for backup)?
spec:
backups:
pgbackrest:
...
global:
repo1-s3-uri-style: path
repo1-storage-port: "9000"
repo1-storage-verify-tls: "n"
...
...
repos:
- name: repo1
s3:
bucket: <REDACTED>
endpoint: s3.domain.com # this is just an example
region: minio
...
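On the CA question, pgBackRest has a repo1-storage-ca-file option, so one possible approach (an untested sketch) is to add the CA bundle to the same configuration secret and point that option at the mounted file. The ca.crt key name and the /etc/pgbackrest/conf.d mount path below are assumptions, not confirmed against the operator:

```yaml
spec:
  backups:
    pgbackrest:
      configuration:
        - secret:
            name: cluster1-pgbackrest-secrets  # placeholder; add a ca.crt key to this secret
      global:
        repo1-s3-uri-style: path
        repo1-storage-port: "9000"
        # assumption: files from the configuration secret are projected into
        # /etc/pgbackrest/conf.d, so the CA can be referenced from there
        repo1-storage-ca-file: /etc/pgbackrest/conf.d/ca.crt
        repo1-storage-verify-tls: "y"
```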
92fnko January 29, 2025, 10:56am 11
@Sergey_Pronin Do you have input on how to include CAs for pgBackRest to trust the connection?