When I ran the test for debugging, I got the following message.
No space left on device
I had recently expanded my EBS volume, so why was there no space left?
To find out, I first checked the overall volume usage with the df command.
Check Volume Usage
$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 484M 0 484M 0% /dev
tmpfs 492M 0 492M 0% /dev/shm
tmpfs 492M 464K 491M 1% /run
tmpfs 492M 0 492M 0% /sys/fs/cgroup
/dev/xvda1 35G 34G 0G 99% /
tmpfs 99M 0 99M 0% /run/user/1000
The file system /dev/xvda1 was 99% full, with almost no free space. Expanding the EBS volume again would solve the problem, but instead I decided to free up space by deleting unnecessary and temporary files.
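For reference, if you do choose to expand the EBS volume instead, enlarging it in the AWS console is not enough on its own; you also have to grow the partition and the filesystem from inside the instance. A rough sketch, assuming the root device is /dev/xvda with partition 1 (use the command that matches your filesystem type):
$ sudo growpart /dev/xvda 1     # grow partition 1 to fill the enlarged volume
$ sudo xfs_growfs -d /          # grow an XFS root filesystem
$ sudo resize2fs /dev/xvda1     # alternative for an ext4 root filesystem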
Check for large files
Suspecting that log files were the culprit, I searched /var/log for files larger than 500MB with the following command.
$ sudo find /var/log -type f -size +500M | xargs ls -l | sort -k5 -rn
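If the find command turns up nothing, a per-directory summary of /var/log can also help narrow down where the space is going; for example:
$ sudo du -sh /var/log/* | sort -rh | head    # largest log files and directories first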
Delete the journal log
The journal files were fairly large, so I set a maximum journal file size in journald.conf.
$ sudo vi /etc/systemd/journald.conf
SystemMaxFileSize=300M
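SystemMaxFileSize limits the size of each individual journal file. If you would rather cap the total disk space the journal can use, journald also has a SystemMaxUse setting, for example:
SystemMaxUse=500M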
Editing the conf file alone does not take effect, so restart the service and then vacuum the existing journal logs.
$ sudo systemctl restart systemd-journald
$ sudo journalctl --vacuum-size=300M
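You can also check how much space the archived journal is still using, and vacuum by age instead of by size if that suits your logs better:
$ sudo journalctl --disk-usage
$ sudo journalctl --vacuum-time=7d    # keep only the last 7 days of journal logs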
When I checked the volume with the df command, it was a little better, but there was still very little free space.
Clear Yum Cache
Check the size of each directory under /var with the command below.
$ sudo du -sh /var/*
About 700MB was being used in /var/cache, so I decided to clear the cache. Large package files downloaded by yum (or APT on Debian-based systems) can accumulate there. Clear the yum cache with the following command.
$ sudo yum clean all
This command clears the cache while keeping the files yum still needs, so it is safer than removing things by hand with rm.
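On Debian- or Ubuntu-based systems that use APT instead of yum, the rough equivalent would be:
$ sudo apt-get clean         # remove downloaded package files from /var/cache/apt/archives
$ sudo apt-get autoremove    # remove packages installed as dependencies that are no longer needed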
This made the cache smaller, but there was still not enough free space.
Remove unused docker resources
$ sudo du -sh /var/*
$ sudo du -sh /var/lib/*
Checking inside /var/lib shows that /var/lib/docker takes up by far the most space. This directory holds Docker objects such as containers, volumes, and images.
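Before deleting anything, Docker itself can report how that space breaks down across images, containers, local volumes, and build cache:
$ docker system df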
Since the containers can simply be rebuilt the next time I run tests on Cloud9, I removed the stopped containers and other unused Docker resources with the following command.
$ docker system prune -a
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all images without at least one container associated to them
- all build cache
Are you sure you want to continue? [y/N]
Type y and press Enter to proceed.
The removed images are listed, and at the end the total reclaimed space is shown, as below.
Total reclaimed space: 22.108GB
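Note that docker system prune -a does not remove volumes unless you add the --volumes flag. If you prefer finer-grained cleanup instead of removing everything at once, Docker also provides separate prune subcommands; a quick sketch:
$ docker container prune    # remove only stopped containers
$ docker image prune -a     # remove images not used by any container
$ docker volume prune       # remove local volumes not used by any container
$ docker builder prune      # remove build cache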
Recheck volume usage
$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 484M 0 484M 0% /dev
tmpfs 492M 0 492M 0% /dev/shm
tmpfs 492M 520K 491M 1% /run
tmpfs 492M 0 492M 0% /sys/fs/cgroup
/dev/xvda1 35G 7.3G 28G 21% /
tmpfs 99M 0 99M 0% /run/user/1000
tmpfs 99M 0 99M 0% /run/user/0
I was able to free up plenty of space safely, and the test run now works.