Disk space is the most limited resource on my laptop. I have a 256GB SSD that I struggle to keep under the 25% free space recommendation for drives of this type. Heavy Docker usage doesn't help: it's all too easy to accumulate many images over a short period of time.
I was really surprised to notice that the virtual drive (VHDX) that Docker uses in WSL2 had ballooned up to 17GB! That's when I decided to go through the official cleanup steps for Docker, including the final, lesser-known approach I used to finally reclaim most of that disk space.
Recommended Cleanup Commands
The first place to start is removing stopped containers. You can see them with the docker ps -a command. Unless you plan to restart any of these containers, they are hogging space for nothing. You can wipe them by running docker container prune. Docker will even tell you how much space it managed to reclaim. In my case, it wasn't very much at all.
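Put together, a minimal sketch of that step looks like this (the --filter flag is optional; it's only there to preview the candidates before deleting anything):

```shell
# Preview stopped containers before deleting anything
docker ps -a --filter "status=exited"

# Remove all stopped containers; Docker reports the space reclaimed.
# Add -f to skip the confirmation prompt.
docker container prune
```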

The next step is to look at the image list and remove the images you don't need anymore. You can list all the images on your computer with the docker image list command, and remove one you no longer need with docker image rm <image:tag>. Depending on how many images you've accumulated over time, this could save you a fair amount of disk space.
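For example (the image name and tag below are purely illustrative; substitute whatever docker image list shows on your machine):

```shell
# List every image along with its size
docker image list

# Remove a specific image by name and tag
docker image rm nginx:latest
```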
Now that you've removed all the images you don't need anymore, it's time to run docker image prune to remove dangling images. These are untagged image layers that are no longer referenced by any tagged image and are just taking up space.
Instead of executing each of the previous prune commands individually, you could use the docker system prune command, which prunes stopped containers, dangling images and a few other, less storage-hungry components within Docker, such as unused networks and the build cache.
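For example, paired with docker system df, which shows a breakdown of Docker's disk usage so you can compare before and after:

```shell
# See what Docker is using: images, containers, volumes, build cache
docker system df

# Prune stopped containers, dangling images, unused networks
# and the build cache in one go
docker system prune
```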
By this point, you should hopefully have reclaimed a fair amount of space. Have a look at the size of your VHDX files located in C:\Users\<username>\AppData\Local\Docker\wsl. If the displayed size is about what you'd expect relative to the number of images you have, then you're all done. But if you're like me, and the VHDX is still orders of magnitude bigger than your remaining images would justify, then it's time for a deeper cleanse.
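If you'd rather check from inside a WSL shell, something like this works, assuming the default Docker Desktop data location (the <username> placeholder is your Windows user name, which may differ from your WSL one):

```shell
# Windows drives are mounted under /mnt inside WSL.
# Replace <username> with your Windows user name.
ls -lh /mnt/c/Users/<username>/AppData/Local/Docker/wsl/data/ext4.vhdx
```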

The Nuclear Option
After a bit of Googling, I came across a thread on the Docker GitHub repository where others were having the same problem. It seems that no matter how few images you keep, the Docker VHDX refuses to hand back the disk space it once needed. Sadly, the only solution that works is to open Docker Desktop, go to the Troubleshoot screen, and hit the Clean / Purge data button.

It did the trick for me, at the cost of losing all the images I had pulled. It’s a small price to pay considering that Docker was using more than 5% of my disk space.
Cleaning Up Is A Chore
There are many ways to clean up the disk space used by Docker on WSL2. But it seems that even with all those commands, sometimes you need to start with a clean slate.
Let me know in the comments below if you’ve found another, less drastic solution to this problem.
Have you tried an elevated PowerShell optimize command?
optimize-vhd -Path "C:\Users\\AppData\Local\Docker\wsl\data\ext4.vhdx" -Mode full
Ooh, that sounds promising. I had not, but I'll definitely try it next time my VHDX grows to a ridiculous size.
I tried it and I got the following error:
+ optimize-vhd -Path "C:\ProgramData\DockerDesktop\vm-data\DockerDeskto …
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : PermissionDenied: (:) [Optimize-VHD], VirtualizationException
+ FullyQualifiedErrorId : AccessDenied,Microsoft.Vhd.PowerShell.Cmdlets.OptimizeVhd
I should clarify that I was running in admin mode …
But a simple restart of Docker Desktop after the prune seems to resolve the problem.
That “Nuclear Option” got me my disk back and my convoluted project up and running in just under 10 minutes. Thank you! That said, I work with the mindset that nothing, NOTHING AT ALL (including volumes), is irreplaceable or kept solely on the Docker system.
Thank you! The nuclear option did the trick. I recovered 113GB that was being used by Docker.