Quick HOWTO: Flush DNS Cache in Linux

Posted by Parthiban Ponnusamy


nscd (Name Service Cache Daemon) provides a caching service for name service requests in Linux.

To configure the nscd caching service, edit /etc/nscd.conf

To Flush the DNS Cache in Linux server:

# /etc/init.d/nscd restart
   OR
# service nscd restart
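If you only want to clear the cached host lookups without restarting the whole daemon, nscd can also invalidate a single table. A minimal example, assuming nscd is running with the standard hosts table:

# nscd -i hosts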


Hope this helps..


Quick HOWTO: Change I/O Scheduler in Linux

Posted by Parthiban Ponnusamy



I/O schedulers in Linux


noop - Can be helpful for devices that do I/O scheduling themselves, such as intelligent storage arrays, or devices that do not depend on mechanical movement
cfq - A fairness-oriented scheduler. It tries to maintain system-wide fairness of I/O bandwidth.
deadline - A latency-oriented I/O scheduler. Each I/O request is assigned a deadline.
as (anticipatory) - Conceptually similar to deadline, but with more heuristics to improve performance (it may decrease performance in some cases)

Here is the procedure to change the default I/O scheduler in Linux.


Dynamically setting the I/O scheduler for a particular disk:

Example: 
echo "scheduler_name" > /sys/block/<Disk_Name>/queue/scheduler

To set the I/O scheduler for all the disk drives on the Linux server:

for disk in `ls -1 /sys/block |egrep '^emc|^sd'`;
do
echo "deadline" > /sys/block/$disk/queue/scheduler;
done

To verify the settings:
for dsk in `ls -1 /sys/block |egrep '^emc|^sd'`;
do
echo -e "$dsk\t\c";
cat /sys/block/${dsk}/queue/scheduler;
done
Permanently set the default I/O scheduler in Linux via Grub menu:

Make the setting permanent by adding "elevator=noop" (or your preferred scheduler) to the default kernel stanza in the /boot/grub/menu.lst file.

1. Create backup 
cp -p /boot/grub/menu.lst /boot/grub/menu.lst-backup
2. Update menu.lst 

Example:
kernel /vmlinuz-2.6.16.60-0.91.1-smp root=/dev/sysvg/root splash=silent splash=off showopts elevator=noop
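On newer distributions that use GRUB 2 instead of the legacy menu.lst, the same kernel parameter typically goes into GRUB_CMDLINE_LINUX in /etc/default/grub, after which the config is regenerated. A rough sketch (file paths and the grub.cfg location vary by distribution):

# grep GRUB_CMDLINE_LINUX /etc/default/grub
GRUB_CMDLINE_LINUX="... elevator=noop"
# grub2-mkconfig -o /boot/grub2/grub.cfg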

Quick HOWTO: Reduce SWAP Partition Online without reboot in Linux

Posted by Parthiban Ponnusamy

Recently I had a request to reduce the swap space and allocate that space to another LV on one of our servers. Below is what I followed, and it worked perfectly for me. :)

Make sure you have enough physical memory to hold the swap contents. 

Now, turn the swap off:
# sync
# swapoff <YOUR_SWAP_PARTITION>
Now check the status
# swapon -s 

Then, use the fdisk command:
# fdisk <YOUR_HARDDISK_Where_SWAP_Resides>
1. List partitions with the "p" command
2. Delete your swap partition with the "d" command
3. Create a smaller Linux swap partition with the "n" command
4. Make sure it is a Linux swap partition (type 82); change the type with the "t" command
5. Write the partition table with the "w" command


Run "partprobe" to update Filesystem table to kernel. (It is very important before proceeding further)

Then,
mkswap <YOUR_NEW_SWAP_PARTITION>
swapon <YOUR_NEW_SWAP_PARTITION> 
Check to make sure swap is turned on:
swapon -s 
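Also keep in mind that mkswap writes a new UUID to the partition, so if /etc/fstab references the swap area by UUID you will need to update that entry. A quick way to compare the two (device name is just a placeholder):

# blkid <YOUR_NEW_SWAP_PARTITION>
# grep swap /etc/fstab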
Now you can use the freed space to increase other logical volumes (LVs).

Use the fdisk command to create a new partition, then:

# partprobe
# pvcreate <NEW_PARTITION_YOU_CREATED>
# vgextend <VG_TO_INCREASE> <YOUR_NEW_PV>
# lvextend -L +<SIZE_TO_INCREASE> <LV_NAME>
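As a concrete sketch of those last steps, assuming the new partition is /dev/sda5, the volume group is vg00 and the logical volume is lv_data (all hypothetical names), and that the LV carries an ext3/ext4 filesystem that also needs to be grown afterwards:

# partprobe
# pvcreate /dev/sda5
# vgextend vg00 /dev/sda5
# lvextend -L +2G /dev/vg00/lv_data
# resize2fs /dev/vg00/lv_data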

Note: It is extremely important to sync and turn the swap off before you change any partitions. If you FORGET TO DO THIS, YOU WILL LOSE DATA!!


Create RAID Disk using hpacucli in Linux

Posted by Parthiban Ponnusamy


1. CHECK UNASSIGNED DRIVES THAT CAN BE USED 

server1:~ # hpacucli
HP Array Configuration Utility CLI 8.70-8.0
Detecting Controllers...Done.
Type "help" for a list of supported commands.
Type "exit" to close the console. 
 => ctrl all show config 

Smart Array P400 in Slot 9 (sn: P61XXXXXXXXXN)
 array A (SAS, Unused Space: 0 MB)
 logicaldrive 1 (68.3 GB, RAID 1+0, OK)
 physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 72 GB, OK)
 physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 72 GB, OK)
 unassigned      
 physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SAS, 146 GB, OK)  
 physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SAS, 146 GB, OK) 

Note: The last two drives in the output are unassigned, and we are going to use them to create the new RAID array.


2. NOW WE ARE GOING TO CREATE RAID 1+0 DISK ARRAY 

=> ctrl slot=9 create type=logicaldrive drives=1I:1:3,1I:1:4 raid=1
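Optionally, you can check the state of just the new logical drive from the hpacucli console before dumping the whole configuration (using the same slot number as above):

=> ctrl slot=9 ld all show status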

3. VERIFY THAT THE DISK ARRAY WAS CREATED 

=> ctrl all show config
 Smart Array P400 in Slot 9 (sn: P61XXXXXXXXXN)
 array A (SAS, Unused Space: 0 MB)
 logicaldrive 1 (68.3 GB, RAID 1+0, OK)
 physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 72 GB, OK)
 physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 72 GB, OK)
 array B (SAS, Unused Space: 0 MB)   <-- NEW DISK ARRAY
 logicaldrive 2 (136.7 GB, RAID 1+0, OK)
 physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SAS, 146 GB, OK)
 physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SAS, 146 GB, OK) 


4. VERIFY THE NEW DISK DRIVE WAS CREATED AT THE OS LEVEL (LINUX)

server1# cat /proc/driver/cciss/cciss0
cciss0: HP Smart Array P400 Controller
Board ID: 0x3234103c
Firmware Version: 5.20
IRQ: 74
Logical drives: 2
Sector size: 2048
Current Q depth: 0
Current # commands on controller: 0
Max Q depth since init: 9
Max # commands on controller since init: 331
Max SG entries since init: 31
Sequential access devices: 0
cciss/c0d0:       73.37GB       RAID 1(1+0)
cciss/c0d1:       146.7GB       RAID 1(1+0)



server1# fdisk -l | grep cciss
Disk /dev/cciss/c0d0: 73.3 GB, 73372631040 bytes
Disk /dev/cciss/c0d1: 146.7 GB, 146778685440 bytes

Solution for UNIX Error: Terminal too wide

Posted by Parthiban Ponnusamy


When you are working in a UNIX shell using the PuTTY tool, you may get this error.

Problem: 

When you try to open the vi editor, you may get the error message "Terminal too wide".

How to Fix this??

Enter the command below in the shell and try to open the vi editor again. It should work.


stty columns 120
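The value should match the actual width of your PuTTY window. If you are not sure what size the terminal is currently reporting, stty can show it (the output is rows followed by columns):

stty size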

Hope this helps someone.

File System Extension on a Live Linux VMware Guest after Extending the VM Disk Size

Posted by Parthiban Ponnusamy



Many thanks to RAM for this Article.

---

This article explains filesystem extension on a live Linux VMware guest where the VM disk size was extended, rather than a new disk being added.

We had a scenario as follows:

1. A file system extension was required on a live mounted file system, without a reboot.
2. It was a Linux guest on VMware that required an FS extension from 600 GB to 900 GB. The FS was on a single 600 GB disk, /dev/sdb.
3. While assigning storage, the team increased the underlying disk to 900 GB rather than adding a new disk.
4. Even after the extension, /dev/sdb was not picking up the additional 300 GB of space. [ rescan or partprobe did not help here ]

Note: This also applies to situations where the underlying partition has been changed (using fdisk).

Following are the steps taken to make the kernel recognize the new partition structure and to extend the filesystem

First we verified the disk sizes and allocations

# pvs
# vgs
# lvdisplay -m /dev/vg_name/lv_name  [ to get the underlying block devices ]
 
Now we had the partition table re-read for the underlying block device.

# blockdev --rereadpt /dev/sdb
   OR
# sfdisk -R /dev/sdb

Do note that if you are doing this on a physical machine where multipath is involved, you would need to re-read the partition tables for all the underlying disks.

Now that the partition table has been re-read, we need to resize the PV to the new disk size; otherwise it would still show the old size.

# pvresize /dev/sdb

Check pvs / vgs output to see whether the new size is detected:

# pvs
# vgs

Once you have the new size detected, you can use the standard procedure to extend the filesystems

# lvextend -L +300G /dev/vg/lv
# resize2fs /dev/vg/lv
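Note that resize2fs applies to ext2/ext3/ext4 filesystems and can grow a mounted filesystem online. If the LV carries an XFS filesystem instead, the equivalent step is xfs_growfs run against the mount point, for example (mount point is hypothetical):

# xfs_growfs /data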

Check whether the new file systems are showing the correct sizes:

# df -h


Following are the screenshots of the entire activity which I performed in a test VM. A test VG and LV were created for this activity.

Verify the current disk size of the mounted volume:



Check and verify on the available disk space on the underlying disk(s)



Increased the size of the VMware disk (rather than adding a new disk) in the virtual machine settings in vCenter.


Now,

Make the new sizes/partition visible on the system without reboot or taking the volume offline:



Extend the LV:



Resize FS:




Allow SSH and Web Connections in IP Tables in Linux

Posted by Parthiban Ponnusamy


To Allow web and ssh connections in IP Tables

SSH and web both require outgoing messages on established TCP connections.

iptables -A OUTPUT -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT

Then you need to allow incoming connections on ports 22 and 80, and possibly 443:
iptables -A INPUT -p tcp -i eth0 --dport 22 --sport 1024:65535 -m state --state NEW -j ACCEPT
iptables -A INPUT -p tcp -i eth0 --dport 80 --sport 1024:65535 -m state --state NEW -j ACCEPT
iptables -A INPUT -p tcp -i eth0 --dport 443 --sport 1024:65535 -m state --state NEW -j ACCEPT

To allow a DNS server to operate, use the following rules (assuming you are blocking inbound and outbound traffic in iptables).

DNS communicates to destination port 53 but can come from any port in the upper range, so these rules allow a large range of source ports as long as they talk to port 53.

iptables -A OUTPUT -p udp --dport 53 --sport 1024:65535 -j ACCEPT
iptables -A INPUT -p udp --dport 53 --sport 1024:65535 -j ACCEPT
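Rules added with iptables -A only live in memory, so to keep them across a reboot you can dump the current ruleset with iptables-save and restore it from your distribution's usual location. On Red Hat-style systems, for example (the persistence mechanism varies by distribution):

# iptables-save > /etc/sysconfig/iptables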


migratepv VS replacepv

Posted by Parthiban Ponnusamy


What is the difference between migratepv and replacepv?

The replacepv command simply moves all the logical partitions on one physical volume to another physical volume. The command is designed to make it easy to replace a disk in a mirrored configuration.
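For example, to replace a failing disk in a mirrored volume group, replacepv takes just the source and destination disks (disk names here are only examples):

replacepv hdisk1 hdisk2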

The migratepv command is very similar.

The biggest difference is that migratepv allows you to copy the LPs on a logical volume basis, not just on a physical volume basis. For example, if you have a disk that has two logical volumes on it and you want to reorganize and put each logical volume on a different disk, migratepv can do it.

migratepv -l lv01 hdisk1 hdisk2
migratepv -l lv02 hdisk1 hdisk3

In this case, the logical partitions from logical volume lv01 are moved from hdisk1 to hdisk2.
The logical partitions from logical volume lv02 are moved from hdisk1 to hdisk3.