7. At this point the Logical Volumes are configured but not quite yet ready to be mounted and used to store data. The Logical Volumes still need to be formatted with a file-system. This is easily accomplished with mkfs or gparted. This server is a CLI only server so the mkfs utility will be used to accomplish this task.
# mkfs.ext4 -L Music /dev/mapper/storage-Music
# mkfs.ext4 -L Documents /dev/mapper/storage-Documents
The two commands above will write an ext4 file system to each of the Logical Volumes created, as well as put a file system label (-L) on each Logical Volume's file system. The (-L) part isn't necessary, but it helps keep Logical Volumes straight by their file system labels.
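If desired, the new file systems and labels can be checked before moving on. This is just an optional sanity check, not part of the original steps; `lsblk -f` and `e2label` are standard utilities, though exact output will vary by system:

```shell
# Show file-system type and label for each Logical Volume
lsblk -f /dev/mapper/storage-Music /dev/mapper/storage-Documents

# Or query a single label directly; this prints the label set with -L
e2label /dev/mapper/storage-Music
```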
8. Now the Logical Volumes are ready to have data written to them. In order for Linux to access a volume, though, it must be mounted. This is accomplished with the mount utility, but first a location on the file system must be created to serve as the mount point for each new LV.
For the purposes of this tutorial, a new directory for each LV was created in the /mnt directory using the following commands:
# mkdir /mnt/Music
# mkdir /mnt/Documents
At this point, using the mount utility, the Logical Volumes can be “mounted” to the newly created directory. Use the following commands to accomplish this task:
# mount /dev/mapper/storage-Music /mnt/Music
# mount /dev/mapper/storage-Documents /mnt/Documents
Be sure to note that Linux is case sensitive and as such ‘Music‘ and ‘music‘ are not the same thing!
Pending any error messages, the Logical Volumes are now ready for use under '/mnt/Music' and '/mnt/Documents'. There are several different ways to confirm whether an LV is actually ready for use; the easiest is the lsblk command.
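For example (the output below is illustrative only; device names and sizes will differ from system to system):

```shell
lsblk
# NAME                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
# sdb                     8:16   0  465G  0 disk
# └─storage-Music       254:0    0  350G  0 lvm  /mnt/Music
# sdc                     8:32   0  465G  0 disk
# └─storage-Documents   254:1    0  100G  0 lvm  /mnt/Documents
```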
Do not worry that the output doesn’t match exactly here. This LVM setup was done on top of a set of raided disks (sdb and sdc). The point to take away here is that on the far right, the logical volumes are mounted to the newly created directories in /mnt. That’s it for a basic LVM setup!
While this isn't a necessary step, it does make the administration of these Logical Volumes easier. As things stand, if the system were to reboot, the Logical Volumes would not be automatically mounted to the configured mount points on startup.
For some people this is okay, and they have no problem typing the mount commands from step 8 every time; others prefer to have the storage areas ready on system startup.
The next couple of steps will explain how to configure the system to automatically boot and mount the Logical Volumes every time the system starts up (known as persistent mounting).
9. To accomplish this task, a disk identifier known as a Universally Unique Identifier (UUID) should be obtained for each of the Logical Volumes. This is accomplished with the blkid command.
The blkid command will return the UUID value as well as the file system type, both of which are needed to set up persistent mounting.
# blkid /dev/mapper/*
This output provides the UUID values for each of the Logical Volumes created. These values can be difficult to remember and, in a command-line interface, potentially non-selectable. Using the shell's built-in redirection, the output can be sent directly to the file where it will ultimately need to reside for the system to automatically mount the Logical Volumes at startup: /etc/fstab.
# blkid /dev/mapper/* >> /etc/fstab
WARNING!! – A note of caution here: be sure that this command uses DOUBLE greater-than symbols ( >> ). If a single greater-than symbol is used, the existing contents of the file will be OVERWRITTEN!
Now open the /etc/fstab file with a text editor. This tutorial will use the nano text editor.
# nano /etc/fstab
Now the entries need to be formatted to meet the requirements of the /etc/fstab file. The first thing to remove is the device prefix: "/dev/mapper/storage-Documents:" and "/dev/mapper/storage-Music:".
Once this is removed, enter the mount points set up in step 8. Be sure that the absolute path to the mount point is placed in the field after the UUID.
The third field needs the file system type. In this case, the file system created on each Logical Volume was 'ext4'.
The fourth field can be set to 'defaults', and the last two fields (<dump> and <pass>) can be set to zeros.
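When finished, each cleaned-up line in /etc/fstab should look something like the following. The UUID values shown are placeholders, not real identifiers; use the exact values blkid reported on your system:

```shell
# <file system>                             <mount point>   <type> <options> <dump> <pass>
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/Music      ext4   defaults  0      0
UUID=yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy  /mnt/Documents  ext4   defaults  0      0
```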
At this point the entries are ready to be persistently mounted across system reboots. Save the changes in nano by pressing Ctrl + 'o' (that's the letter oh, not zero). Nano will prompt for confirmation of the file name to save under.
Check that nano is asking to save the file as '/etc/fstab' and hit the Enter key. Then press Ctrl + 'x' to exit the nano text editor.
Once the file is saved, the mount points can be confirmed by using the mount command again with an argument that tells the mount utility to load all automatic mounts. That command is:
# mount -a
Again, all the commands used in this document have assumed that the user logged in is the root user.
After issuing mount -a the system will likely not provide any feedback that anything happened (unless something is wrong with the fstab file or the mount points themselves). To determine if everything worked, the lsblk utility can be used.
# lsblk
At this point the Logical Volumes are accessible via /mnt/Music and /mnt/Documents for the system or users to write files to. From here any number of tasks can be performed on the LVM volumes, such as re-sizing, migrating data, or adding more Logical Volumes, but that is for a different how-to.
For now, enjoy the new data storage locations attached to the Debian system and stay tuned for more Debian How-to’s.
A couple of things missing from this tutorial are 1) how to remove a disk or a partition from the array, and 2) how to reduce the amount of space used by the array without removing disks or partitions.
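For what it's worth, the shrinking case can be sketched roughly as follows. This is a hedged outline, not part of the tutorial: the volume group 'storage', the LV name, and the 40G target size are just examples, and shrinking a file system is risky, so back up the data first. The `--resizefs` flag of lvreduce shrinks the ext4 file system together with the Logical Volume:

```shell
# Shrink an ext4 Logical Volume and its file system together
umount /mnt/Music
e2fsck -f /dev/storage/Music              # check the fs before resizing
lvreduce --resizefs -L 40G /dev/storage/Music
mount /dev/storage/Music /mnt/Music
```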
Hi,
I have reduced the RAID-1 array (/dev/md0) to 48G in size, and it is configured under LVM. I am trying to reduce the corresponding disk partition (sda4) from 58G to 48G as well.
But I am not able to accomplish this; could you please help me with it?
Thank you so much for this how-to guide. I'm going to be doing my LFCS soon and was cracking my head trying to understand LVM, and it all clicked while reading this HOW TO. Thank you for the easy-to-understand write-up; you, sir, I salute.
Dibs,
You are quite welcome, and best of luck on the LFCS. It's a great test to challenge yourself with common Linux tasks!
I love LVM2 and software RAID in Linux. One tip if you want to try this before doing it for real: use USB memory sticks as disks. They work great as such, and are swell for testing disk crashes and rebuilding RAID-5 or RAID-6.
Anders, I prefer LVM on hardware RAID, but this little NAS box didn't support HW RAID. The USB drive option is a fantastic idea for testing a potential install! No reason to risk the real drives when USB media is so cheap.
Actually, hardware RAID has hardware dependencies. Linux software-RAIDed disks can be moved between machines, for instance when replacing a faulty motherboard or disk controller. That can't be done securely with hardware RAID unless it is replaced with the same brand and version of hardware.
By the way, it is dead easy to move a volume group and its logical volumes to new disks and remove old ones that are about to crash. Just add a physical volume to the volume group and then move all data off the bad one. Lastly, remove the bad one. No need to do any manual copying. I used this to move a degraded RAID out and a new, larger one in.
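That migration can be sketched with standard LVM2 commands. The device names below are examples only, and 'storage' is the volume group name from this tutorial:

```shell
pvcreate /dev/sdd1           # prepare the replacement disk as a PV
vgextend storage /dev/sdd1   # add it to the volume group
pvmove /dev/sdb1             # migrate all extents off the failing PV
vgreduce storage /dev/sdb1   # drop the old PV from the volume group
pvremove /dev/sdb1           # wipe its LVM label
```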
Lastly, I would have recommended mounting under /srv rather than /mnt, as /mnt is meant for temporary mounts and /srv is for server storage. That makes it easier to back up, like /home for user data. ;-)
Anders, to each their own. /mnt was only used for illustrative purposes. The box these drives are actually in does mount the LVs in a different location.
Please let me know if LVM is the same on other distros as well.
Satish, you're very welcome. LVM2 is very similar across most distributions. I can't speak for every distro, but most of the LVM work will be the same across distributions. The only real differences will likely be distro-specific things and perhaps the naming conventions of the LVM package.
Great article! Please post more like this.
Thanks
You're very welcome. There will be several more Debian-based articles coming down the pipe soon!
I am waiting since very long time for this.
Many Thanks @Rob Turner @Tecmint