Thursday, November 22, 2018

sshd[22107]: SSH Authentication Refused: Bad Ownership or Modes for Directory

Hello everyone,

Let's discuss how to troubleshoot the following error when a remote login using passwordless SSH fails:
"sshd[22107]: Authentication refused: bad ownership or modes for directory"

A user from the development team was trying to run a Perl script; while executing it he was getting "permission denied" and was prompted for a password, even though the environment is configured for passwordless authentication.

When the user executed the Perl script, he got the following message:

please authenticate for oracle|Authenticated with 
partial success|Permission denied (keyboard-interactive,password

So I decided to check the /var/log/authlog file for a clue, and found the following line:

sshd[20856]: Authentication refused: bad ownership or modes for directory /home/oracle

This clue indicates that the ownership or permissions on the home directory are not set correctly. When I checked, the ownership was correct but the permissions were not: /home/oracle was group writable, and that is what causes the error "sshd[22107]: Authentication refused: bad ownership or modes for directory".

Here are the steps I performed to correct the error:

#chmod g-w /home/oracle

#ls -ld /home/oracle

drwxr-x--- 2 oracle dba 4096 Nov  3  2017 /home/oracle

So the important point here is that the home directory must not be group writable.

After removing the write permission from the group, the development user was able to execute the Perl script and log in without a password.

Some other prerequisites for passwordless SSH configuration are:

1. User home directory: permission 755 and correct ownership
2. .ssh directory: 700 and correct ownership
3. authorized_keys: 600 and correct ownership
4. Correct public key on both the source and destination servers
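These fixes can be applied in one go (a minimal sketch; the /home/oracle path is from the example above, and sshd performs these checks because StrictModes is enabled by default):

```shell
# Tighten the permissions that sshd's StrictModes check requires
H=/home/oracle
chmod 755 "$H"                        # home must not be group/other writable
chmod 700 "$H/.ssh"                   # .ssh directory readable only by the owner
chmod 600 "$H/.ssh/authorized_keys"   # key file readable/writable only by the owner
ls -ld "$H" "$H/.ssh" "$H/.ssh/authorized_keys"
```

Alternatively StrictModes could be set to no in sshd_config, but fixing the permissions is the safer choice.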

Thanks !!!!!!!!!!!

Extend file system when max primary partition limit reached on Linux.


VG01 - /dev/sdb    // sdb already has 4 primary partitions.

VG01 contains the file system /oracle/log, which is 500 GB, and the Linux admin wants to increase it by 100 GB.
There is no free space available in VG01, so the admin decided to expand the existing disk /dev/sdb.
On analysis it turned out that /dev/sdb already has 4 primary partitions, so a 5th primary partition cannot be created.

Why is expanding the existing disk not possible?
Because the Linux admin has already created 4 primary partitions on the disk, and with an MBR partition table Linux will not allow a 5th primary partition on a single disk.

The solution in this situation is to add a new disk to the virtual machine, create the required partition on it, create a PV, and add that PV to the volume group where the file system resides. After adding the PV to VG01, the VG will have 100 GB of free space, and the Linux admin can then add 100 GB to /oracle/log.

How to detect the new disk on SUSE Linux?

Run a SCSI rescan (for example with rescan-scsi-bus.sh from the sg3_utils package); this adds new SCSI devices to the Linux virtual machine without a reboot.
Take lsscsi output before and after the rescan and compare; the difference is the new disk.

Output of lsscsi before the rescan:
[0:0:0:0]    disk    VMware   Virtual disk     1.0   /dev/sda

[0:0:1:0]    disk    VMware   Virtual disk     1.0   /dev/sdb

After the rescan:

 [0:0:0:0]    disk    VMware   Virtual disk     1.0   /dev/sda
 [0:0:1:0]    disk    VMware   Virtual disk     1.0   /dev/sdb    
 [0:0:2:0]    disk    VMware   Virtual disk     1.0   /dev/sdc       // /dev/sdc is new disk .
Afterwards, create a partition of the required size on it, add it to VG01, and expand the file system:
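The partition-to-VG steps before the lvextend can be sketched like this (an illustration only: the device name /dev/sdc comes from the lsscsi example above, and the sfdisk line assumes an MBR disk, where ',,8e' creates one full-disk partition of type 8e, Linux LVM):

```shell
# Turn the new disk into a PV and add it to VG01
echo ',,8e' | sfdisk /dev/sdc   # one primary partition covering the whole disk, type 8e (Linux LVM)
pvcreate /dev/sdc1              # initialize the new partition as an LVM physical volume
vgextend VG01 /dev/sdc1         # add the PV to VG01, giving the VG ~100 GB free
vgs VG01                        # confirm the free space before extending the LV
```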

#lvextend -L +100G /dev/mapper/VG01-lvora_log

#resize2fs /dev/mapper/VG01-lvora_log   // resize the ext3 filesystem

Check using df -hT /oracle/log whether the file system was resized.

Command for resizing an XFS file system:

#xfs_growfs /dev/mapper/VG01-lvora_log

An extended partition is the usual way to overcome the 4-primary-partition limit, but in a Linux environment where creating extended partitions is not allowed, this new-disk approach is useful.

Thanks !!!!

Understand the last two fields of /etc/fstab

Have you ever wondered what exactly the last two fields of /etc/fstab in Linux mean?
Let's see what they indicate.

The fstab syntax looks like this:

file-system                  mount-point  type  options           dump  pass

/dev/vg00/lvsysmgmt          /sysmgmt     ext3  defaults          1     2
NFS-server-hostname:/share   /mnt         nfs   defaults,bg,intr  0     0

Here I am going to discuss what exactly the last two fields, dump and pass, tell the operating system during boot.

dump - 0 or 1
pass - 0, 1 or 2

dump (backup of the filesystem):
0 - disabled
1 - enabled

pass (fsck on the filesystem):
0 - disable fsck
1 - fsck first (used for /)
2 - run fsck on this FS after the fsck of / is done; it is simply the fsck order, 1 for / and 2 for all other filesystems.

Dump - tells the OS whether to create a dump backup of this Linux file system.
Pass - tells the OS to run fsck on this file system after the fsck of / is done. It defines the fsck order; / generally has priority over other file systems and carries the value 1, as you can observe in the Linux /etc/fstab configuration file.

If you look at fstab you will find that / has pass value 1 and the other filesystems have 2; this is the order in which fsck runs on the filesystems while booting the Linux operating system.
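You can list the dump and pass values for every entry with a quick awk one-liner (a sketch; it assumes the standard six-field fstab layout, where dump and pass are fields 5 and 6):

```shell
# Print mount point, dump, and pass for each non-comment fstab entry
awk '!/^[[:space:]]*#/ && NF >= 6 {printf "%-20s dump=%s pass=%s\n", $2, $5, $6}' /etc/fstab
```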

Then what about NFS file systems, which also occupy a stanza in /etc/fstab?

For NFS file systems the dump and pass values are always 0, which disables both dump and fsck during boot.

Why do NFS entries always have dump and pass set to 0? Because the NFS file system resides on a remote server: its consistency is the remote server's responsibility, so a local fsck (or dump) at boot makes no sense.

Thanks !!!

Running fsck on an AIX filesystem

Hello Friends,
Today I am sharing how to run fsck on an AIX non-rootvg file system.

When an AIX LPAR is rebooted without a proper application stop and unmount of file systems, some non-rootvg file systems may fail to mount when the LPAR comes back up. The following error can be seen when the admin tries to mount such a file system:

#mount /oracle
mount: 0506-324 Cannot mount /dev/oralv01 on /oracle: The media is not formatted or the format is not correct.
0506-342 The superblock on /dev/oralv01 is dirty.  Run a full fsck to fix

The solution to this type of error is to run a full fsck:

#fsck /dev/oralv01

After successful fsck execution, mount the filesystem using the following commands.

#mount /oracle
#df -gt /oracle   // check whether it is mounted

Thanks !!!!!! 

Thursday, November 1, 2018

How to shrink a file system on an AIX cluster and assign the free space to another FS in the same volume group

Hello Friends, today I am going to discuss how to shrink a file system on an AIX HACMP cluster and assign the freed space to another FS in the same volume group.

Prerequisites before shrinking a file system

1. Make sure there is enough free space inside the FS you have decided to shrink.
2. Make sure there are no errors related to the file system you are shrinking.
3. Make sure both file systems are in the same volume group; after the shrink, the freed space can only be used to extend an FS in the same VG.

Let's say we have a scenario like this:

The AIX cluster has a file system /share/log of size 500 GB. We want to shrink it by 100 GB and assign the freed space to the FS /share/oracle.
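Prerequisite 1 can be checked from df before shrinking (a sketch; the free-GB column position assumes the usual AIX df -g output layout, where free space is the third field):

```shell
# Refuse to shrink unless /share/log has at least 100 GB free (AIX 'df -g' layout assumed)
FREE=$(df -g /share/log | awk 'NR==2 {print int($3)}')
if [ "$FREE" -ge 100 ]; then echo "OK to shrink by 100G"; else echo "not enough free space"; fi
```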

Step 1: Shrink /share/log by 100 GB

#cd /usr/sbin/cluster/sbin 
#./cl_chfs -a size=-100G /share/log

After the shrink with cl_chfs, verify that the FS was shrunk using the following command:

#df -gt /share/log 

Step 2:
Add the space to the file system /share/oracle using the following commands.

#cd /usr/sbin/cluster/sbin 
#./cl_chfs -a size=+100G /share/oracle

After the FS extend, confirm using the following command.

#df -gt /share/oracle

This trick helps when there is no free space in an AIX volume group and we need to immediately expand a file system on an AIX cluster or non-cluster setup.

Thanks !!!!!!!!!!!!!

0516-404 allocp: This system cannot fulfill the allocation request

How can an AIX admin resolve the following error?

root@aixnode1:/usr/sbin/cluster/sbin : ./cl_chfs -a size=+100G /oracle/log
cl_chfs: Error executing chfs  -a size="+209715200" /oracle/log on node aixnode1
Error detail:
    aixnode1: 0516-404 allocp: This system cannot fulfill the allocation request.
    aixnode1:        There are not enough free partitions or not enough physical volumes
    aixnode1:        to keep strictness and satisfy allocation requests.  The command
    aixnode1:        should be retried with different allocation characteristics.
    aixnode1: RETURN_CODE=1
    aixnode1: cdsh: cl_rsh: (RC=1) /usr/es/sbin/cluster/cspoc/cexec  chfs  -a size="+209715200" /oracle/log

This issue occurs when there are no free PPs available on the AIX cluster PVs where the LV resides.

In this case the filesystem /oracle/log resides on the LV oralv, and that LV has an upper bound value of 4. When we checked how many physical volumes the LV maps to, we found it spans 4 physical volumes, and none of them has any free PPs. The solution is to extend each physical volume by 50 GB and then increase the file system with ./cl_chfs -a size=+100G /oracle/log.

The reason for increasing each disk by 50 GB: the file system is mirrored and the user asked to increase the FS by 100 GB, so we need twice 100 GB, which is 200 GB; in this cluster setup we distribute that space equally among the 4 disks, which is why each disk is extended by 50 GB.

If you observe the output below, there are no free PPs available on hdisk5, hdisk6, hdisk7 and hdisk8:
root@aixnode1:/usr/sbin/cluster/sbin : lsvg -p vg0
hdisk5            active            33783       0                    00..00..00..00..00
hdisk6            active            33783       0                    00..00..00..00..00
hdisk7            active            33783       0                    00..00..00..00..00
hdisk8            active            33783       0                    00..00..00..00..00

So here we need to provide the LUN ID details to the storage team and specify by how much each disk needs to be extended:

root@aixnode1:/root : ssod
DISK         SIZE              ID                                                             VG

hdisk5      4000 GB XXXXXXXXXXXXXXXXXXXXXXA         vg0
hdisk6      4000 GB XXXXXXXXXXXXXXXXXXXXXXB         vg0
hdisk7      4000 GB XXXXXXXXXXXXXXXXXXXXXXC          vg0
hdisk8      4000 GB XXXXXXXXXXXXXXXXXXXXXXD          vg0

In this environment the LV is mirrored, so we need to increase the PV sizes based on the following calculation.

To extend the FS /oracle/log by 100 GB:

100 GB = 100 * 1024 * 1024 * 2 = 209715200 (size in 512-byte blocks, as passed to chfs)

The LV is mirrored (2 copies), so 2 * 100 GB = 200 GB of physical space is needed.

VG0 has 4 disks, so we extend each disk by an equal share: 200 GB / 4 = 50 GB per disk, which adds the required 200 GB to the VG.
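The arithmetic above can be checked in the shell (a sketch; it assumes 512-byte blocks as chfs uses, two mirror copies, and the four disks of VG0):

```shell
# Verify the block count and per-disk share for a mirrored 100 GB extend
GB=100
BLOCKS=$((GB * 1024 * 1024 * 2))   # 100 GB expressed in 512-byte blocks
PHYS=$((GB * 2))                   # mirrored LV: physical space needed in GB
PER_DISK=$((PHYS / 4))             # spread equally over 4 disks
echo "blocks=$BLOCKS physical=${PHYS}GB per_disk=${PER_DISK}GB"
```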

After the disks are extended on the storage side, execute chvg -g vg0 so AIX picks up the new disk sizes, then check the sizes again:

root@aixnode1:/root : ssod
DISK         SIZE              ID                                                             VG

hdisk5      4050 GB XXXXXXXXXXXXXXXXXXXXXXA         vg0
hdisk6      4050 GB XXXXXXXXXXXXXXXXXXXXXXB         vg0
hdisk7      4050 GB XXXXXXXXXXXXXXXXXXXXXXC          vg0
hdisk8      4050 GB XXXXXXXXXXXXXXXXXXXXXXD          vg0

Now the file system extend succeeds:

#./cl_chfs -a size=+100G /oracle/log

Thanks  !!!!!!!!!!