
MSA2040 multipath config

Hi,

We would appreciate your help; we have the following scenario:

  • Red Hat 6.9
  • BL460c Gen8 (c7000 chassis with 2 SAN switches, all properly interconnected)
  • MSA 2040 SAN
  • One 6 TB volume presented to this host (only)

And we are seeing the following issue:

SAN_SW1 / HBA_port1
  CTRL_A_A1   /dev/sdb --> 11 MB/s    prio 10
  CTRL_B_B1   /dev/sde --> 67 MB/s    prio 50 (owner)

SAN_SW2 / HBA_port2
  CTRL_A_A2   /dev/sdc --> 11 MB/s    prio 10
  CTRL_B_B2   /dev/sdd --> 67 MB/s    prio 50 (owner)
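
For reference, this is roughly how each /dev/sdX can be matched to a controller host port (just a sketch, assuming the lsscsi package is installed; the WWPNs printed by lsscsi -t can be compared with the WWPNs of the MSA host ports):

# list SCSI block devices together with the FC target port WWPN of each path
lsscsi -t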

 

As you can see, throughput over Controller A is very poor. The devices section of our multipath.conf is the following:

devices {
        device {
                vendor "HP"
                product "MSA 2040 SAN"
                path_grouping_policy group_by_prio
                getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
                prio alua
                path_selector "round-robin 0"
                path_checker tur
                hardware_handler "0"
                failback immediate
                rr_weight uniform
                rr_min_io_rq 1
                no_path_retry 18
        }
}
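
After changing the devices section we re-apply the configuration like this (a sketch of what we run on RHEL 6; mpathb is the multipath device shown further below):

# make multipathd re-read /etc/multipath.conf without rebooting
multipathd -k"reconfigure"

# verify the priority groups and path states afterwards
multipath -ll mpathb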

The commands used to generate I/O are:


fio --name=test_rand_read  --filename=/dev/sdb --ioengine=libaio --iodepth=1 --rw=read --bs=4k --direct=1 --size=512mb --numjobs=1 --runtime=240 --group_reporting

fio --name=test_rand_read  --filename=/dev/sdc --ioengine=libaio --iodepth=1 --rw=read --bs=4k --direct=1 --size=512mb --numjobs=1 --runtime=240 --group_reporting

fio --name=test_rand_read  --filename=/dev/sdd --ioengine=libaio --iodepth=1 --rw=read --bs=4k --direct=1 --size=512mb --numjobs=1 --runtime=240 --group_reporting

fio --name=test_rand_read  --filename=/dev/sde --ioengine=libaio --iodepth=1 --rw=read --bs=4k --direct=1 --size=512mb --numjobs=1 --runtime=240 --group_reporting
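
We can also run the same test against the multipath device itself rather than the individual paths (a sketch using the same fio options; /dev/mapper/mpathb is the device-mapper name from the output below):

fio --name=test_read_mpath --filename=/dev/mapper/mpathb --ioengine=libaio --iodepth=1 --rw=read --bs=4k --direct=1 --size=512mb --numjobs=1 --runtime=240 --group_reporting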

And the multipath output is:

[root@XXXXXX ~]# multipath -ll
mpathb (3600c0ff0002946fd82e0e45a01000000) dm-0 HP,MSA 2040 SAN
size=6.5T features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 4:0:1:0 sdd 8:48 active ready running
| `- 3:0:1:0 sde 8:64 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 3:0:0:0 sdb 8:16 active ready running
  `- 4:0:0:0 sdc 8:32 active ready running
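
To double-check how ALUA classifies each path, something like the following can be run (a sketch, assuming sg3_utils is installed; the device names are the ones from the output above):

# decode the ALUA target port group states on a path through Controller A
sg_rtpg -d /dev/sdb

# show per-path state and priority as seen by multipathd
multipathd -k"show paths"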

 

What could be the reason for this behaviour? We have detected that the whole mpathb device was degraded, and we are guessing it is due to this scenario.

Please share your comments.

Regards

Rafael

