Linux LVM

In the past I have written lots of different little blurbs on LVM, but I have been using it a lot lately and needed to document all the processes. This post consolidates all my past posts into one:

Logical Volume Management provides a great deal of flexibility, allowing you to dynamically re-allocate blocks of space to different file systems.  Traditional volume management relies upon strict partition boundaries separating and containerizing each mount point.  Logical Volume Management in Linux takes control of the whole drive (or partition), carving it into equal chunks called physical extents (PEs).  Each PE is addressed by its logical extent (LE) address.  Groups of LEs are combined to form logical volumes (LVs), which are what you format and mount as file systems.  LVs, in turn, live inside volume groups (VGs), which pool one or more physical volumes together for management purposes.

All of the information below covers the command-line side of LVM management.  There are lots of quality GUI tools to manage LVM, but since I rarely run graphical Linux, the command line is my friend.

Display LVM Information

Display Physical Volume information – pvdisplay

This command will display information on the physical volumes.  Physical volumes fall along partition and traditional Linux storage boundaries.  This is how you will identify which physical disks are involved in your LVM setup:

[root@linuxmonkey2 ~]# pvdisplay
 --- Physical volume ---
 PV Name               /dev/sdb3
 VG Name               storagevg
 PV Size               456.83 GB / not usable 15.15 MB
 Allocatable           yes
 PE Size (KByte)       32768
 Total PE              14618
 Free PE               1
 Allocated PE          14617
 PV UUID               3bwFUg-N06S-yTr9-BkoS-nndD-XcXz-G308fy

As you can see, this displays a lot of information on the physical volume, plus some additional detail on PEs and the volume group associated with the physical volume.

Display logical volumes – lvdisplay

This command will display information on the logical volumes.  Logical volumes are mountable partitions made up of one or more extents.  A logical volume has to have a traditional file system placed upon it before it is usable.  This will help you identify what can be mounted:

lvdisplay
 --- Logical volume ---
 LV Name                /dev/storagevg/storagelv01
 VG Name                storagevg
 LV UUID                GMp2kU-kAMc-ju8o-NJRB-hQ9u-CQ1O-dBi2mX
 LV Write Access        read/write
 LV Status              available
 # open                 1
 LV Size                456.78 GB
 Current LE             14617
 Segments               1
 Allocation             inherit
 Read ahead sectors     auto
 - currently set to     256
 Block device           253:2

LV Name provides a persistent path that can be mounted via fstab; you could also mount it via the UUID provided by this command.
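For example, here is a minimal sketch of an fstab entry mounting the LV above by its persistent path (the /storage mount point and the ext3 options are assumptions, not taken from the output above):

```
# /etc/fstab -- mount the LV by its persistent device path
# /storage is a hypothetical mount point
/dev/storagevg/storagelv01  /storage  ext3  defaults  1 2
```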

Display Volume Group – vgdisplay

This command will display information on the volume group, which is a grouping of physical volumes on your system.  By adding multiple drives to a VG you can increase the size of an LV dynamically without buying larger hard drives.

[root@linuxmonkey2 ~]# vgdisplay
 --- Volume group ---
 VG Name               storagevg
 System ID
 Format                lvm2
 Metadata Areas        1
 Metadata Sequence No  2
 VG Access             read/write
 VG Status             resizable
 MAX LV                0
 Cur LV                1
 Open LV               1
 Max PV                0
 Cur PV                1
 Act PV                1
 VG Size               456.81 GB
 PE Size               32.00 MB
 Total PE              14618
 Alloc PE / Size       14617 / 456.78 GB
 Free  PE / Size       1 / 32.00 MB
 VG UUID               0csogH-bjz3-qP4z-JM63-YLpO-YpRy-Kqci71

This displays basic volume group information; to figure out which physical volumes are part of the VG, use the pvdisplay command.

Create a Linux LVM on RHEL command line

  1. Partition the disk – fdisk /dev/sde
  2. Create the physical volume on the new partition – pvcreate /dev/sde1
  3. Create a volume group named cheese on the physical volume, using a 4 MB extent size – vgcreate -s 4M cheese /dev/sde1
  4. Create an 11 GB logical volume named mouse on cheese – lvcreate -L 11G -n mouse cheese
  5. Format your new LV – mkfs.ext3 -b 4096 /dev/cheese/mouse
  6. Mount it – mount /dev/cheese/mouse /mount_point
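Strung together, the steps above look like this. It is a minimal sketch, not a tested procedure: the device name, sizes, and the cheese/mouse names are just the examples from the list, and it defaults to a dry run (printing each command) so it will not touch a disk unless you clear DRY_RUN:

```shell
#!/bin/bash
# Sketch of the LVM creation steps above. By default (DRY_RUN=1) it only
# prints the commands; run as root with DRY_RUN cleared to execute them.
set -e

DRY_RUN=${DRY_RUN:-1}
DEV=/dev/sde1          # assumed partition, created with fdisk first
VG=cheese
LV=mouse

run() {
  if [ -n "$DRY_RUN" ]; then
    echo "$@"
  else
    "$@"
  fi
}

run pvcreate "$DEV"                   # step 2: make the partition a PV
run vgcreate -s 4M "$VG" "$DEV"       # step 3: VG with 4 MB extents
run lvcreate -L 11G -n "$LV" "$VG"    # step 4: 11 GB LV
run mkfs.ext3 -b 4096 "/dev/$VG/$LV"  # step 5: put a file system on it
run mount "/dev/$VG/$LV" /mount_point # step 6: mount it
```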

Linux Remove an old sd device name

From time to time I remove storage from Linux servers and don't want to reboot the server, so I run the following command, which works in RHEL and SLES:

echo 1 >  /sys/block/device-name/device/delete

For example 

echo 1 >  /sys/block/sde/device/delete
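Wrapped in a small helper with a sanity check, it looks like this (remove_sd is my own name for the function, not a standard command; run as root):

```shell
#!/bin/bash
# remove_sd: delete a SCSI device (e.g. "sde") from the kernel without a
# reboot, via the sysfs delete interface shown above.
# usage: remove_sd sde
remove_sd() {
  local dev="$1"
  local path="/sys/block/${dev}/device/delete"
  if [ -w "$path" ]; then
    echo 1 > "$path"
  else
    echo "remove_sd: no such device: ${dev}" >&2
    return 1
  fi
}
```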

Find HBA WWID in RHEL

Finding information about your fibre channel cards in RHEL is pretty easy; with QLogic cards, look at:

ls /sys/class/fc_host/host*/

The files in here provide a lot of info; the port WWIDs (WWPNs) are stored in:

cat  /sys/class/fc_host/host*/port_name
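To print each HBA alongside its port name in one pass, a small loop works. A sketch that simply produces no output if no FC HBAs are present:

```shell
#!/bin/bash
# Print "hostN: WWPN" for every fibre channel HBA the kernel knows about.
list_wwpns() {
  local host
  for host in /sys/class/fc_host/host*; do
    [ -e "$host/port_name" ] || continue   # no FC HBAs present
    echo "$(basename "$host"): $(cat "$host/port_name")"
  done
}
list_wwpns
```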

Backup and restore your mbr (master boot record)

In Linux, making block-level copies of disk areas is easy with the magical dd utility.   To back up the MBR (the first 512 bytes), use this command (assuming your boot drive is hda):

dd if=/dev/hda of=/root/mbr_backup bs=512 count=1

Now /root/mbr_backup holds a complete copy of your MBR.   bs sets the block size and count=1 means copy only one block, i.e. just the first 512 bytes.

To wipe the boot code only (leaving the partition table, which lives in bytes 446–511, intact):

dd if=/dev/zero of=/dev/hda bs=446 count=1

To wipe the MBR and the partition table:

dd if=/dev/zero of=/dev/hda bs=512 count=1

Restore the mbr:

dd if=/root/mbr_backup of=/dev/hda bs=512 count=1
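You can rehearse the whole backup/wipe/restore cycle safely on a scratch file instead of a real disk. A minimal sketch (the file names are arbitrary):

```shell
#!/bin/bash
# Rehearse the MBR backup/wipe/restore cycle on a scratch file, not /dev/hda.
set -e
cd "$(mktemp -d)"

# Fake 512-byte "MBR" full of recognizable bytes
head -c 512 /dev/zero | tr '\0' 'A' > disk.img

dd if=disk.img   of=mbr_backup bs=512 count=1 2>/dev/null  # back it up
dd if=/dev/zero  of=disk.img   bs=512 count=1 2>/dev/null  # wipe the "disk"
dd if=mbr_backup of=disk.img   bs=512 count=1 2>/dev/null  # restore it

cmp -s disk.img mbr_backup && echo "restore verified"
```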


Find Linux WWIDs and Fiber Channel Storage

The very term "Enterprise Linux" usually implies some type of storage area network, normally fiber channel.   In all my experience I have not yet met a storage administrator who went to formal storage training, so it's no surprise that there are some weak spots when it comes to storage with Linux.  Here are some of the storage basics.    This article assumes you already have a working knowledge of WWIDs, WWNs, and fiber channel storage.

There are multiple WWIDs involved in the process; here are a few: the server HBA's WWN, the switch WWN, the storage port WWNs, and the LUN WWID itself.

(Image: How many WWIDs do we need)

As you can see, we need a lot of WWIDs to make fiber channel storage work.  This article will focus on getting the storage LUN WWID from the server.  This will allow us to tie our mount points to storage LUNs.

The first thing to identify is the SCSI WWID, which is used by Linux Native Multipathing (MPIO).  It is found with the following command (for device sda):

scsi_id -g -u -s /block/sda
38001438005dea3760000700002660000

This will return the SCSI device WWID, not to be confused with any of the other WWIDs.  If you want more information about the device, you can get the manufacturer's label by using:

scsi_id -g -u /dev/sda

A lot of this information is stored in /dev/disk in various directories:

/dev/disk/by-id    - persistent names built from device WWIDs and serial numbers
/dev/disk/by-label - file system labels
/dev/disk/by-path  - names built from the physical bus/SCSI path
/dev/disk/by-uuid  - file system UUIDs

The information we want is inside /dev/disk/by-id, which looks like this:

lrwxrwxrwx 1 root root   10 Jan 22 13:48 scsi-38001438005dea3760000700002660000 -> ../../sdfj

So in this case the SCSI WWID is 38001438005dea3760000700002660000, while the LUN WWID is:
8001-4380-05de-a376-0000-7000-0266-0000
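That conversion (drop the leading "3" that scsi_id prepends, then hyphenate every four characters) is easy to script. A small sketch, using a helper name (lun_wwid) of my own:

```shell
#!/bin/bash
# Turn a scsi_id-style WWID into the hyphenated LUN WWID form shown above:
# drop the leading "3", then insert a dash after every 4 hex characters.
lun_wwid() {
  echo "$1" | cut -c2- | sed 's/..../&-/g; s/-$//'
}

lun_wwid 38001438005dea3760000700002660000
# -> 8001-4380-05de-a376-0000-7000-0266-0000
```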

That’s about it. Now just tie that to your storage system.

Quick Script to identify WWIDs on New LUNs in Linux when using MPIO

Well, the other day I had to add a lot of LUNs to a new system, and one of the key steps is writing down the SCSI WWID as I add each LUN so I can tie it back to the storage.   So I wrote a simple script to scan the SCSI bus, identify new LUNs, and provide their WWIDs via multipath.   This will only work with some HBAs, and only if you are using Linux MPIO.

#!/bin/bash
# Rescan every SCSI host, then show the multipath WWID of the newest sd device
for host in /sys/class/scsi_host/host*/scan; do
    echo "- - -" > "$host"
done
newdev=$(ls -tr /dev/sd* | tail -n 1 | sed 's|/dev/||')
/sbin/multipath -v3 | grep "$newdev" | grep undef

Check Pagesize in Linux

Recently my DBAs were asking me what the page size was in Linux.  A page is a fixed-size block of physical or virtual memory; each type of memory is divided into blocks of this same size.  To find your system page size in Linux, type:

getconf PAGESIZE

or

getconf PAGE_SIZE

Brocade Zoning via scripting

Update: If you are looking for instructions for FOS 7 go here.

From time to time I have to handle some storage zoning.  I mostly use Brocade fiber channel switches.  They are pretty easy to zone via scripts, which leaves your whole zone documented and rebuildable at a moment's notice.  Before I get into the scripts, I should mention that I do end-to-end zoning via WWID, not port-based zoning.   In other words, I zone my server HBA to my storage system.  Port-based zoning means we zone the switch port the HBA is on to the port that contains the storage system; it requires that everything always stays plugged into the same port, and it can be hard to rebuild quickly without the correct documentation.  Comments in Brocade scripts are preceded by an exclamation mark (!).

!!NOTE: This is fabric A
!! Make all the aliases for systems
alicreate "Storage_HBA1_A", "50:01:43:81:02:45:DE:45"
alicreate "Storage_HBA2_A", "50:01:43:81:02:45:DE:47"
alicreate "Server_HBA1_A", "50:01:23:45:FE:34:52:12"
alicreate "Server2_HBA1_A", "50:01:23:45:FE:35:52:15"
alicreate "Server2_HBA2_A", "50:01:23:45:FE:35:52:17"

!! Make the zones

zonecreate "Z_server_to_Storage_HBA1_A", "Server_HBA1_A; Storage_HBA1_A"

zonecreate "Z_server_to_Storage_HBA2_A", "Server_HBA1_A; Storage_HBA2_A"

zonecreate "Z_server2_to_Storage_HBA1_A", "Server2_HBA1_A; Storage_HBA1_A"

zonecreate "Z_server2_to_Storage_HBA2_A", "Server2_HBA1_A; Storage_HBA2_A"

!!NOTE: effective config and zone members on SWITCHA_config Fabric
cfgcreate "SWITCHA_config", "Z_server_to_Storage_HBA1_A; Z_server_to_Storage_HBA2_A; Z_server2_to_Storage_HBA1_A; Z_server2_to_Storage_HBA2_A"

cfgsave
cfgenable "SWITCHA_config"

Load it into your switch and you're ready to go!