Disk, RAID, and LVM Howtos

The following related topics are listed roughly in the order one would normally use them.

Device / Hard Drive Info

It is important to understand that Unix treats hard drives as 'devices'. Many things besides hard drives can be devices too. They are usually listed as special files located at:

  • /dev/ - device nodes for everything from RAM to partitions
  • /sys/block/ - block devices, such as hard drives and RAM disks
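
For example, a quick way to see what the kernel has detected (device names such as sda/sdb are examples and will differ per machine):

ls -l /dev/sd*        ; device nodes for SCSI/SATA disks and their partitions
ls /sys/block/        ; one entry per block device (sda, md0, ram0, ...)
cat /proc/partitions  ; the kernel's list of block devices and partitions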

Filesystem Info

fdisk /dev/sda  - plain-text partition interface
fdisk -l        - list all partitions on all devices
cfdisk /dev/sdb - ncurses partition interface

mke2fs /dev/sdb1    - makes an ext2 filesystem on partition sdb1
mkfs.ext3 /dev/sdb1 - same idea, but creates ext3 (equivalent to mke2fs -j)

mount       - attach a filesystem to a directory in the tree
umount      - detach (unmount) a filesystem
resize2fs   - resize an ext2/ext3 filesystem, grow or shrink (see the sketch below)
ext2resize  - older tool for resizing ext2/ext3 filesystems
e2fsadm     - LVM1-era helper that resized a logical volume and its filesystem together
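
As a hedged sketch, here is an offline resize of an ext3 filesystem after its partition has been enlarged (device and mount point are examples; back up first):

umount /dev/sdb1           ; the filesystem must be unmounted for an offline resize
e2fsck -f /dev/sdb1        ; resize2fs requires a forced check first
resize2fs /dev/sdb1        ; with no size argument it grows to fill the partition
mount /dev/sdb1 /mnt       ; remount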

Normal order when adding a single disk:
pvcreate, vgcreate, lvcreate, mkfs.ext3, mount, df
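
A hedged sketch of that sequence for a single new disk (device, volume group, and logical volume names are examples):

pvcreate /dev/sdb1                         ; mark the partition as an LVM physical volume
vgcreate vg_data /dev/sdb1                 ; create a volume group containing it
lvcreate -n lv_data -l 100%FREE vg_data    ; one logical volume using all free space
mkfs.ext3 /dev/vg_data/lv_data             ; put an ext3 filesystem on the logical volume
mkdir -p /media/data
mount /dev/vg_data/lv_data /media/data
df -h /media/data                          ; confirm it is mounted with the expected size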

See the following page if you run into boot problems after executing some of the above commands.

Examples

sudo mkfs -V -t ntfs /dev/sdd5     ; creates an NTFS filesystem on the existing partition sdd5

RAID5 Create New Howto

Creating a software RAID1 or RAID5 array works much the same way either way and involves the same pre- and post-processing. Below shows how to create a RAID5 array for a non-boot, initially empty, single-filesystem setup.

General Steps

  1. Determine installed devices (/dev/hd* or /dev/sd*)
  2. Format/Partition installed devices (cfdisk /dev/sda)
  3. Create RAID device with mdadm (mdadm --create ...)
  4. Create filesystem on RAID device (mkfs.ext3 ...)
  5. Mount filesystem (mount ...)
  6. Add to fstab file (nano /etc/fstab)

Commands

  • The following opens the ncurses interface to format/partition the devices (hard drives). Make sure you delete any partitions you don't want and create new empty ones with the 'fd' partition type, which is 'Linux raid autodetect'. This can also be done with the command-line fdisk.
    • cfdisk /dev/sdb
      cfdisk /dev/sdc
      cfdisk /dev/sdd
  • The following creates a RAID5 array device called md0 with 3 member devices
    • mdadm --verbose --create /dev/md0 --level=5 --raid-devices=3 --chunk=128 /dev/sd[bcd]1
      • --level=5 RAID level 5
      • --raid-devices=3 3 disks in the array
      • --chunk=128 chunk size in KiB; the smallest 'atomic' amount of data written to a single device
        • defaults to 64 KiB; 128 KiB is recommended for RAID-5
  • Shows status (Examine) of the listed devices for array stats
    • mdadm -E /dev/sd[bcd]1
  • See that the multi-device (md) node exists
    • ls /dev/md*
  • See the status/progress of the md devices
    • cat /proc/mdstat
  • Make an ext3 filesystem on the md0 device
    • mkfs.ext3 -v -m .1 -b 4096 -E stride=32,stripe-width=64 /dev/md0
      • -v verbose
      • -m .1 reserve 0.1% of the disk for root (so a full disk doesn't cause problems)
      • -b 4096 block size of 4 KiB (recommended on the linux-raid wiki)
      • Calculations:
        • chunk size = 128 KiB (set by mdadm; recommended for RAID-5 on the linux-raid wiki; try values up toward 512 or 2048)
        • block size = 4 KiB (highest setting; recommended for large files and most of the time)
        • stride = chunk / block = 128 KiB / 4 KiB = 32 blocks
        • stripe-width = stride * (data disks in RAID5) = 32 * (3 - 1) = 32 * 2 = 64 blocks
      • Note: you would want to change the stripe-width if you added disks to the array (see the worked sketch after this list).
        • tune2fs -E stride=n,stripe-width=m /dev/mdx
  • Make the mount directory
    • mkdir /media/documents
  • Mount the filesystem on md0 device to /media/documents
    • mount -t ext3 /dev/md0 /media/documents/
  • Edit the /etc/fstab file so that the filesystem will automatically mount on bootup
    • sudo nano -Bw /etc/fstab
      • -B Backup file
      • -w no wrap of lines
    • Add similar text as below. See https://help.ubuntu.com/community/Fstab for details.
      • #raid5 mdadm filesystem
        /dev/md0        /media/documents ext3   defaults        0       2
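
Worked sketch of re-tuning the filesystem after growing the array (referenced from the stripe-width note above): assuming a 4th disk is added and the chunk/block sizes stay 128 KiB / 4 KiB, the stride stays 32 and the stripe-width becomes 32 * (4 - 1) = 96. The exact extended-option spelling can vary between e2fsprogs versions, so check man tune2fs first.

sudo tune2fs -E stride=32,stripe-width=96 /dev/md0    ; hypothetical re-tune for a 4-disk array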
        

After it's up

  • Status of md arrays (sometimes idle, sometimes being checked, sometimes being rebuilt, etc.)
    • cat /proc/mdstat
    • tail /var/log/messages
  • Detailed information about setup of an md array
    • sudo mdadm --detail /dev/md0
  • Monitor an array
    • send errors to syslog
    • make it run in background (daemonize) (not hang up cmd prompt)
    • and send a test notification email
      • the email address is usually set with the MAILADDR line in /etc/mdadm.conf (or /etc/mdadm/mdadm.conf on Debian/Ubuntu); see the sketch after this list
      • an MTA (Mail Transfer Agent) like exim, postfix, or sendmail must be installed and configured
    • sudo mdadm --monitor --syslog --daemonise --test /dev/md0
  • Individual drives status in array
    • sudo mdadm -E /dev/sd[abc]1
  • Check the array for errors and see the check status (progress is better seen in the /proc/mdstat file above)
    • sudo /usr/share/mdadm/checkarray -h
      sudo /usr/share/mdadm/checkarray --status /dev/md0
      sudo /usr/share/mdadm/checkarray --all --status
      
  • Most mdadm setups automatically check the array on the first Sunday of the month
    • See cron job for this at /etc/cron.d/mdadm
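
A minimal sketch of the notification address setting, assuming the Debian/Ubuntu config path (the address below is a placeholder):

# in /etc/mdadm/mdadm.conf (plain /etc/mdadm.conf on some distributions)
MAILADDR admin@example.com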

Existing RAID5 Mount Howto

I had already created the array as described above and was moving it to a new Ubuntu Linux machine, so I just needed to get it going again. Most steps are the same as in the create howto above; mainly the first mdadm command is different.

  • Install mdadm if needed
    • apt-get install mdadm
  • The following assembles an existing array device as md0 (an alternative scan-based sketch follows this list)
    • mdadm --verbose --assemble /dev/md0 /dev/sd[bcd]1
  • Shows status (Examine) of the listed devices for array stats
    • mdadm -E /dev/sd[bcd]1
  • See that the multi-device (md) node exists
    • ls /dev/md*
  • See the status/progress of the md devices
    • cat /proc/mdstat
  • Make the mount directory
    • mkdir /media/documents
  • Mount the filesystem on md0 device to /media/documents
    • mount -t ext3 /dev/md0 /media/documents/
  • Edit the /etc/fstab file so that the filesystem will automatically mount on bootup
    • sudo nano -Bw /etc/fstab
      • -B Backup file
      • -w no wrap of lines
    • Add similar text as below. See https://help.ubuntu.com/community/Fstab for details.
      • #raid5 mdadm filesystem
        /dev/md0        /media/documents ext3   defaults        0       2
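
If the member devices are not known up front, a hedged alternative (referenced above) is letting mdadm scan for them; this relies on a correct mdadm.conf or on the array superblocks being found automatically.

sudo mdadm --assemble --scan   ; assemble every array mdadm can find
cat /proc/mdstat               ; confirm the array came up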
        

LVM Info

http://tldp.org/HOWTO/LVM-HOWTO/index.html

  • volume group (VG) - a pool of storage built from one or more physical volumes
  • physical volume (PV) - a disk or partition handed over to LVM
  • logical volume (LV) - a slice of a volume group, used like a partition (mkfs, mount)
  • physical extent (PE) - the fixed-size pieces a PV is divided into
  • logical extent (LE) - the corresponding pieces of an LV; each LE maps to a PE
+-- Volume Group --------------------------------+
|                                                |
|    +----------------------------------------+  |
| PV | PE |  PE | PE | PE | PE | PE | PE | PE |  |
|    +----------------------------------------+  |
|      .           .           .           .     |
|      .           .           .           .     |
|    +----------------------------------------+  |
| LV | LE |  LE | LE | LE | LE | LE | LE | LE |  |
|    +----------------------------------------+  |
|            .          .        .          .    |
|            .          .        .          .    |
|    +----------------------------------------+  |
| PV | PE |  PE | PE | PE | PE | PE | PE | PE |  |
|    +----------------------------------------+  |
|                                                |
+------------------------------------------------+

get the current layout of your disks

pvscan                     ; physical volume scan; shows the LVM physical volumes (your disks/partitions)
df -h                      ; disk free; shows usage of mounted filesystems; -h means human-readable
du -hs <dir>               ; disk usage of a directory; -h human-readable; -s summarize, don't list subdirectories
dd                         ; low-level copy tool; works with devices at the block level; can copy entire disks bitwise or create test/fake files
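
For example, two hedged uses of dd (file names, devices, and sizes are examples; double-check if= and of= before copying disks):

dd if=/dev/zero of=/tmp/testfile bs=1M count=100   ; create a 100 MiB test file of zeros
dd if=/dev/sdb of=/dev/sdc bs=1M                   ; bitwise copy of one whole disk to another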

common disk utilities

pvdisplay, vgdisplay, lvdisplay
pvcreate,  vgcreate,  lvcreate
pvchange,  vgchange,  lvchange
pvremove,  vgremove,  lvremove
pvmove,
           vgextend,  lvextend
           vgreduce,  lvreduce
           vgexport
           vgimport
           vgsplit
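
A hedged sketch chaining several of these to grow an existing volume group and logical volume onto a new disk (all names are examples):

pvcreate /dev/sdc1                           ; prepare the new partition for LVM
vgextend vg_data /dev/sdc1                   ; add it to the existing volume group
lvextend -l +100%FREE /dev/vg_data/lv_data   ; grow the logical volume into the new space
resize2fs /dev/vg_data/lv_data               ; grow the ext3 filesystem to match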
 