Automatically reclaim k3s container volumes after a cluster is deleted
Thanks @zeerorg for pointing out the possible container volume leak. Without this fix, the k3s container volumes are left in the reclaimable state after a cluster is deleted. This experiment confirms it:

    $ docker system df
    TYPE                TOTAL   ACTIVE  SIZE      RECLAIMABLE
    Images              14      0       2.131GB   2.131GB (100%)
    Containers          0       0       0B        0B
    Local Volumes       0       0       0B        0B
    Build Cache         0       0       0B        0B

    $ bin/k3d create; sleep 5; bin/k3d delete

    $ docker system df
    TYPE                TOTAL   ACTIVE  SIZE      RECLAIMABLE
    Images              14      0       2.131GB   2.131GB (100%)
    Containers          0       0       0B        0B
    Local Volumes       3       0       2.366MB   2.366MB (100%)
    Build Cache         0       0       0B        0B

In this case, 2.366MB are left in the reclaimable state. This number can be larger with a larger cluster. With this fix, the output of "docker system df" no longer contains any reclaimable volumes:

    TYPE                TOTAL   ACTIVE  SIZE      RECLAIMABLE
    Images              14      0       2.131GB   2.131GB (100%)
    Containers          0       0       0B        0B
    Local Volumes       0       0       0B        0B
    Build Cache         0       0       0B        0B
parent dbc93f6818
commit cd2292ba3a
@@ -196,7 +196,13 @@ func removeContainer(ID string) error {
 	if err != nil {
 		return fmt.Errorf("ERROR: couldn't create docker client\n%+v", err)
 	}
-	if err := docker.ContainerRemove(ctx, ID, types.ContainerRemoveOptions{Force: true}); err != nil {
+
+	options := types.ContainerRemoveOptions{
+		RemoveVolumes: true,
+		Force:         true,
+	}
+
+	if err := docker.ContainerRemove(ctx, ID, options); err != nil {
 		return fmt.Errorf("FAILURE: couldn't delete container [%s] -> %+v", ID, err)
 	}
 	return nil