Wednesday 28 December 2011

ADVANCED FILESYSTEM MANAGEMENT
RAID:
SOFTWARE RAID:
Multiple disks are grouped together into "arrays" to provide better performance, redundancy, or
both.
mdadm provides the administration interface to software RAID.
Many RAID levels are supported, including RAID 0, 1 and 5.
RAID devices are named /dev/md0, /dev/md1, /dev/md2, /dev/md3 and so on.
RAID 0 or STRIPING:
Two or more disks are used to create a single large high-performance volume. Performance is
better if drives of equal size are used. There is no redundancy, so the chance of failure is high.
Array size equals the sum of all disks in the array.
RAID 0 is simply data striped over several disks. This gives a performance advantage, as it is
possible to read parts of a file in parallel. However, not only is there no data protection, it is
actually less reliable than a single disk, as all the data is lost if a single disk in the stripe
fails.
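For comparison with the RAID 1 configuration walkthrough later in these notes, a striped array could be created the same way with a different level (a sketch; /dev/sdb1 and /dev/sdc1 are assumed member partitions):
mdadm -C /dev/md1 --level=0 --raid-devices=2 --chunk=64 /dev/sdb1 /dev/sdc1   ==> --chunk sets the stripe size in KB
cat /proc/mdstat   ==> md1 should show the raid0 personality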
RAID 1 or MIRRORING:
Two disks contain the same data and are updated simultaneously. The redundancy offers good
protection against disk failure. Mirroring can slow write performance but tends to improve read
performance. It is the only RAID type that you can place the /boot partition on. Hot spare disks
can be used to improve fault tolerance. Array size equals the size of the smallest disk used.
RAID 1 is data mirroring. Two copies of the data are held on two physical disks, and the data
is always identical. RAID 1 has a performance advantage, as reads can come from either disk,
and is simple to implement. However, it is expensive, as twice as many disks are needed to
store the data.
RAID 5:
Three or more disks with zero or more hot spares. A good balance between performance and
reliability. Redundancy is achieved by distributing parity across all the disks, so one disk can
be lost without causing array failure. Both read and write speeds are usually improved, but in
certain cases write performance is dramatically decreased. For this reason RAID 5 is often not
a good choice for hosting databases.
RAID 5 data is written in blocks onto the data disks, and parity is generated and rotated around
the data disks. Good general performance, and reasonably cheap to implement. Used
extensively for general data.
If data on a RAID 5 array is updated, the old data block and the old parity block have to be read
back from the disks so that the new parity can be calculated before the new data and new parity
are written out. This means that a small RAID 5 write operation requires 4 I/Os (two reads and
two writes). The performance impact is usually masked by a large subsystem cache.
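To see these trade-offs in practice, a RAID 5 array with one hot spare could be created as follows (a sketch; the member partitions are assumed names, and -x is mdadm's standard option for spare devices):
mdadm -C /dev/md2 --level=5 --raid-devices=3 -x 1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
cat /proc/mdstat   ==> three active members plus one spare marked (S)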
PARITY:
Parity is a means of adding extra data so that, if one piece of data is lost,
it can be recreated from the parity.
The advantage of parity is that it is possible to recover data from errors.
The disadvantage is that more storage space is required.
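RAID 5's parity is a simple XOR of the data blocks, which shell arithmetic can illustrate (a toy example with made-up values, not part of the original notes):
d1=$((2#1011)); d2=$((2#0110))   ==> two "data disk" blocks
parity=$((d1 ^ d2))              ==> the parity block is d1 XOR d2
rebuilt=$((parity ^ d2))         ==> if d1 is lost, XOR the survivors to rebuild it
echo "d1=$d1 rebuilt=$rebuilt"   ==> prints 11 for both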
SOFTWARE RAID CONFIGURATION:
RAID 1:
Create 2 new partitions using the ID 'fd'
fdisk /dev/sda
p              ==> print the partition table
n              ==> new partition, size +300M
t              ==> change the partition type
partition no   ==> select the partition just created
l              ==> list the known type codes
fd             ==> Linux raid autodetect
w              ==> write the table and exit
partprobe ; sync
TO START THE RAID
mdadm -C /dev/md0 --level=1 --raid-devices=2 /dev/sda{x,y} or /dev/sdax /dev/sday
or
mdadm -C /dev/md0 -l 1 -n 2 /dev/sda{x,y} or /dev/sdax /dev/sday
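A quick way to confirm the array came up cleanly (standard mdadm commands, though not part of the original notes):
mdadm --detail /dev/md0                      ==> shows state, member devices and sync progress
mdadm --detail --scan >> /etc/mdadm.conf     ==> record the array so it is re-assembled at boot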
The RAID should be started first, and only then should the device be formatted.
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt
vim /etc/fstab
/dev/md0   /mnt   ext3   defaults   0 0
:wq
mount -a
mount
cd /mnt
touch a b c d
TO VIEW THE RAID STATUS
cat /proc/mdstat
TO STOP THE RAID
umount /dev/md0
mdadm --stop /dev/md0
TESTING AND RECOVERY
Stop the array first (see above), then mount the two mirror halves individually and verify that
both hold the same data:
mount /dev/sdax /mnt ; mount /dev/sday /opt
ls /mnt ; ls /opt
TO FAIL A PARTICULAR HARD DISK
The RAID must be running before failing a device; unmount the members and re-assemble the
array if it was stopped:
umount /mnt ; umount /opt
mdadm -A /dev/md0 /dev/sda{x,y} or /dev/sdax /dev/sday   ==> re-assemble the existing array (re-running -C would recreate it)
mdadm --manage /dev/md0 -f /dev/sdax/y
or
mdadm --manage /dev/md0 --fail /dev/sdax/y
TO REMOVE THE FAILED DEVICE
mdadm --manage /dev/md0 -r /dev/sdax/y
or
mdadm --manage /dev/md0 --remove /dev/sdax/y
TO ADD A NEW DEVICE
mdadm --manage /dev/md0 -a /dev/sdax/y
or
mdadm --manage /dev/md0 --add /dev/sdax/y
cat /proc/mdstat
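After adding the replacement device, the mirror rebuilds in the background; the resync can be watched live (a standard command, not in the original notes):
watch -n 1 cat /proc/mdstat   ==> shows recovery progress until the array is clean again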
LOGICAL VOLUME MANAGEMENT

LVM is a tool for logical volume management, covering tasks such as allocating disks and
striping, mirroring and resizing logical volumes.

Physical volumes are combined into volume groups, from which logical volumes are allocated;
everything except the /boot partition can live on LVM.

The /boot partition cannot be on a logical volume because the boot loader cannot read
it.

If the root (/) partition is on a logical volume, create a separate /boot partition which is not
part of any volume group.
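The resulting stack can be checked at each layer with the standard summary commands (not shown in the original notes):
pvs   ==> physical volumes and the volume group each one belongs to
vgs   ==> volume groups with their total and free space
lvs   ==> logical volumes carved out of each volume group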
LOGICAL VOLUME CONFIGURATION:
Create 2 new partitions using the ID '8e'
fdisk /dev/sda
p              ==> print the partition table
n              ==> new partition, size +1000M
t              ==> change the partition type
partition no   ==> select the partition just created
l              ==> list the known type codes
8e             ==> Linux LVM
w              ==> write the table and exit
partprobe ; sync
fdisk -l
pvcreate /dev/sdaX
pvdisplay
vgcreate <vgname> /dev/sdaX
vgdisplay
lvcreate -L 300M -n <lvname> <vgname>
lvdisplay
mkfs.ext3 /dev/<vgname>/<lvname>
vim /etc/fstab
/dev/<vgname>/<lvname>   /home   ext3   defaults   0 0
:wq
mount -a
mount
Store some data on the mount point
TO INCREASE THE LOGICAL VOLUME SIZE WITHOUT DATA LOSS
lvextend -L +700M /dev/<vgname>/<lvname>
lvdisplay
lvdisplay shows that the LV now has 1000 MB of space, but df -h still reports the old size,
because the filesystem has not been grown yet.
df -h
To grow the filesystem into the extended LV:
resize2fs /dev/<vgname>/<lvname>
df -h
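Newer LVM releases can combine both steps: lvextend's -r (--resizefs) option runs resize2fs automatically after growing the volume (same placeholders as above):
lvextend -r -L +700M /dev/<vgname>/<lvname>   ==> extend the LV and grow the filesystem in one step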
TO ADD ANOTHER PARTITION TO THE EXISTING VOLUME GROUP:
Create a new partition with the ID '8e'
pvcreate /dev/sdaY
pvdisplay
vgextend <vgname> /dev/sdaY
vgdisplay
Now the new partition is also added to the existing volume group.
TO REDUCE AN LVM VOLUME
Shrink the filesystem first, then reduce the LV to the same size. This example shrinks the
filesystem to 440 MB:
umount /home
fsck -f /dev/<vgname>/<lvname>
resize2fs /dev/<vgname>/<lvname> 440M
lvreduce -L 440M /dev/<vgname>/<lvname>   ==> reduce the LV to match the filesystem size
mount -a
df -h
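Likewise, lvreduce accepts -r (--resizefs), which shrinks the filesystem and the LV together and avoids getting the two sizes out of step (same placeholders as above):
lvreduce -r -L 440M /dev/<vgname>/<lvname>   ==> shrink the filesystem and LV in one step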
USER QUOTA ON AN LVM PARTITION:
Remount the above partition with the usrquota option
Temporary:
mount -o rw,remount,usrquota /dev/<vgname>/<lvname> /home
Permanent:
umount /home
vim /etc/fstab
/dev/<vgname>/<lvname>   /home   ext3   defaults,usrquota   0 0
:wq
mount -a
mount
Now the above partition is ready for allocating user quotas.
useradd test
passwd test
quotacheck -cu /home   ==> to create the quota file
repquota /home         ==> to display the quota report
quotaon /home
BLOCK LIMIT: To set soft limit = 5120 KB (5 MB) and hard limit = 10240 KB (10 MB)
setquota -u test 5120 10240 0 0 /home
or
edquota -u test   ==> set the block limits in the editor to: 5120 10240 (leave the inode limits at 0 0)
Note: Add the size of the blocks already used to the soft and hard limits given.
To check the above condition
su - test
quota
dd if=/dev/zero of=newfile bs=100k count=45    ==> should succeed
dd if=/dev/zero of=newfile bs=100k count=70    ==> should succeed with a warning
dd if=/dev/zero of=newfile bs=100k count=150   ==> should fail to write the whole file
FILE LIMIT: To set soft limit = 50 files and hard limit = 70 files
setquota -u test 0 0 50 70 /home
or
edquota -u test   ==> set the inode limits in the editor to: 0 0 50 70
Note: Add the number of files already in use to the soft and hard limits given.
To check the above condition
su - test
quota
Files up to the soft limit (50) are created successfully; beyond that, files are created with a
warning, and once the hard limit (70) is reached, file creation fails outright.
Note: If any condition does not work properly, turn the quota off, edit it as needed, and turn
it back on.
quotaoff /home
quotaon /home
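A soft limit is only advisory for a grace period, after which it is enforced like a hard limit; the grace periods can be viewed and changed with a standard quota tool (not covered in the original notes):
edquota -t   ==> edit the block and inode grace periods per filesystem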
LVM SNAPSHOTS:
Note: Don't try this with /home; use another mount point.
Create an LVM partition and mount it on /snapshot
Store some data in the mount point
ls /snapshot
lvcreate -L 16M -p r -s -n backups /dev/<vgname>/<lvname>
lvdisplay
mkdir -p /mybackup/backups
mount /dev/<vgname>/backups /mybackup/backups
mkdir /dump
dump -0u -f /dump/datadump /mybackup/backups
umount /mybackup/backups
lvremove /dev/<vgname>/backups
umount /snapshot
mkfs.ext3 /dev/<vgname>/<lvname>
mount -a
ls /snapshot
Now the directory contains no data.
cd /snapshot
restore -rf /dump/datadump
ls
Now the data is restored in the mount point.
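On newer LVM releases a snapshot can also be rolled back directly, without the dump/restore round trip, by merging it into its origin (a sketch; this assumes the snapshot was created writable, i.e. without -p r, and that the origin is not mounted during the merge):
lvconvert --merge /dev/<vgname>/backups   ==> the origin volume reverts to the snapshot's contents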
TO REMOVE LOGICAL VOLUMES:
umount /home
vim /etc/fstab
Remove the entries relating to the LVM volume
lvremove /dev/<vgname>/<lvname>
lvdisplay
vgremove <vgname>
vgdisplay
pvremove /dev/sdaX
pvdisplay
Finally, remove the partition using fdisk.
