Backup and Restore on Enterprise PKS with Velero
Stateless Application
I followed the step-by-step guide and wrote down the unexpected workarounds that were needed along the way.
The --plugins flag is required as of Velero v1.2.0 (this might change in newer releases):
velero install --provider aws --bucket velero \
  --plugins "velero/velero-plugin-for-aws:v1.0.0" \
  --secret-file /home/kubo/velero-test/velero-credentials \
  --use-volume-snapshots=false \
  --use-restic \
  --backup-location-config \
  region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000,publicUrl=http://192.168.160.112:9000
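For reference, the file passed via --secret-file uses the standard AWS credentials format. A minimal sketch, assuming the default access keys from the Velero MinIO example deployment (minio / minio123); substitute your own:

# Create the credentials file Velero will use for the object store
cat > /home/kubo/velero-test/velero-credentials <<'EOF'
[default]
aws_access_key_id = minio
aws_secret_access_key = minio123
EOF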
After installing Velero, you will find that the restic pods are crashing with events like the following:
Normal Scheduled 11m default-scheduler Successfully assigned velero/restic-mqfth to 9aeea30f-f08b-47b3-b7aa-ef45f3e800b0
Normal Pulled 10m (x5 over 11m) kubelet, 9aeea30f-f08b-47b3-b7aa-ef45f3e800b0 Container image "velero/velero:v1.2.0" already present on machine
Normal Created 10m (x5 over 11m) kubelet, 9aeea30f-f08b-47b3-b7aa-ef45f3e800b0 Created container
Warning Failed 10m (x5 over 11m) kubelet, 9aeea30f-f08b-47b3-b7aa-ef45f3e800b0 Error: failed to start container "restic": Error response from daemon: linux mounts: path /var/lib/kubelet/pods is mounted on / but it is not a shared or slave mount
Warning BackOff 87s (x44 over 11m) kubelet, 9aeea30f-f08b-47b3-b7aa-ef45f3e800b0 Back-off restarting failed container
That is because, on Enterprise PKS, the kubelet keeps pod data under /var/vcap/data/kubelet/pods instead of the default /var/lib/kubelet/pods that the restic DaemonSet tries to mount.
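You can confirm this on a worker VM directly. A quick check, assuming you have BOSH SSH access to the cluster (the service-instance deployment name and instance index below are placeholders for your environment):

# On a PKS worker the kubelet root dir lives under /var/vcap/data
bosh -d service-instance_<cluster-uuid> ssh worker/0 \
  -c 'sudo ls /var/vcap/data/kubelet/pods'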
To fix this, edit the restic DaemonSet:
kubectl edit ds restic -n velero
and change the host-pods volume definition from:
volumes:
- hostPath:
    path: /var/lib/kubelet/pods
    type: ""
  name: host-pods
to:
volumes:
- hostPath:
    path: /var/vcap/data/kubelet/pods
    type: ""
  name: host-pods
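Alternatively, the same change can be applied non-interactively with kubectl patch. A sketch, assuming host-pods is the first entry in the volumes list, as in the manifest above:

# Point the host-pods volume at the PKS kubelet directory
kubectl patch ds restic -n velero --type json -p '[
  {"op": "replace",
   "path": "/spec/template/spec/volumes/0/hostPath/path",
   "value": "/var/vcap/data/kubelet/pods"}
]'

Either way, the DaemonSet controller rolls out fresh restic pods; kubectl get pods -n velero should show them Running shortly after.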