Channel: All MSA Storage posts

Performance not as expected on MSA 2052


Hello All

We are refreshing the hardware for our main DB and I'm moving it to a new DL380 Gen10 with 4 x 10 Gbps interfaces connected to four 10 Gbps ports - two ports on each of the two controllers of the MSA 2052 - via an HPE FlexFabric 5700 40XG 2QSFP+ switch.

I have 9 x 800 GB SSDs and 11 x 1.8 TB SAS disks: 8 of the SSDs are in RAID 10 with one spare, and 10 of the SAS disks are in RAID 10 with one spare. They are part of one pool as the Performance and Archive tiers. I know it is recommended to balance the disk groups across more pools, but I need all the disk space accessible on this single server for the DBs.

I have two LUNs presented to the server - one with Performance affinity and one with Archive affinity.

I installed Windows 2012 R2 on a USB-connected Kingston 240 GB SSD to test the performance, and used ATTO to assess it with an 8K I/O size (this is the block size our DB uses).

I installed all the latest firmware and drivers from the Gen10 SPP and enabled multipath.

On both LUNs I get only 4.5k to 6.8k IO/s.

If I run it against the USB SSD that I have the OS on, I get more than 20k IO/s for reads and writes.

I would expect this storage, with this type of drives and this configuration, to deliver a lot more performance than a €100 USB SSD.

Most probably I'm doing something wrong, so I would appreciate any advice on how to improve the performance.
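For reference, a roughly equivalent command-line test to my ATTO run, using Microsoft's diskspd (the file path and exact parameters here are only an illustration, not what I actually ran), would be:

diskspd -c1G -b8K -d60 -o32 -t4 -r -w30 -Sh -L E:\test8k.dat

where -b8K matches our DB block size, -o32 and -t4 keep multiple I/Os outstanding, -Sh bypasses caching, and -L reports latency.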

 


Re: cannot login from any interface


When I can't get the page to load, I clear the cookies etc.; sometimes that works.

MSA 2040 - low diskspace - performance hit?


Hey

I have an MSA 2040 SAS running version GL225R003 - it has 3 x 8 disks in RAID 10.

My question is: when I get below 20% free space in Windows, will the system begin to slow itself down in order to protect itself from getting 100% full?

I am trying to find some online documentation about this, but have not found any.

Re: MSA 2040 - low diskspace - performance hit?


There are a few questions here:

1> Are you dealing with a linear array or a virtual array?

2> By 100% full, do you mean full by space, by CPU utilization, or something else?

If your question relates to 20% space left at the Windows host for the volume presented from the MSA: if this is a linear array, the space is already fully allocated, so from the array's perspective it doesn't matter. We only need to consider the impact at the Windows filesystem level. On any Windows system, for any drive, when little free space remains the system has to hunt for free space to write new I/Os, which takes time, and that is why it may hurt performance.

If your question relates to a virtual array, then the impact of 20% space left also depends on how much space has actually been allocated by the array out of the thin-provisioned size of that volume. The Windows filesystem-level impact is the same, but from the array's perspective less space has been allocated, so there is some relief at the block level.
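To see how much of a thin-provisioned volume the array has actually allocated, you can compare the sizes reported by the array CLI; a minimal sketch (output columns vary by firmware version):

# show pools
# show volumes

For a linear array, "show vdisks" reports the (fully allocated) vdisk sizes instead.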

 

Hope this helps!
Regards
Subhajit

I am an HPE employee

If you feel this was helpful please click the KUDOS! thumb below!

***********************************************************************

Storage Management Utility for Download


Good morning, Community,

I have one HP MSA2312fc in my environment and some performance issues. I researched a little and discovered that there is a tool called "HP Storage Management Utility".

Where can I download this tool, or otherwise check the performance issue?

Can someone help me, please?

Regards,

Marcos Onisto

 

Network / Switch configuration for MSA2052


Hi,

I am setting up a two-node Windows Server 2019 Hyper-V cluster. I am planning to use two 10 GbE links from each controller to the storage VLAN on my C9300 switches. What is unclear to me is whether I should set this up as one big LACP group or do something else. I plan to use Windows Server 2019 and MPIO, with a two-node cluster and two 10 GbE links from each server. Again, I am assuming I should set these up as an LACP group from both the Windows Server and the switch perspective.

Also, does the MSA 2052 support LACP groups, and does it support tagged VLAN traffic, or do I need to set the storage VLAN as the native VLAN on the switch ports?

I have tried to search for answers, but I have not found any document or article on networking best practices for the MSA 2052, Cisco Catalyst 9300, and a Windows Server 2019 Hyper-V cluster.

Any help is greatly appreciated.

Thx

Bryan

 

 

Re: Network / Switch configuration for MSA2052

Have a look here for Best Practices: https://h20195.www2.hpe.com/v2/getpdf.aspx/A00015961ENW.pdf

It seems to me that LACP and MPIO are somewhat mutually exclusive. In any case, LACP requires that the physical member links terminate on the same switching entity (a physical standalone switch, or a virtual switch made of two or more switches using IRF, VSS, VSF, VSX or MC-LAG technologies).
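On the Windows side, MPIO is the path to take; a minimal PowerShell sketch for enabling it for iSCSI (assuming the in-box MPIO feature, with round-robin chosen only as an example policy):

Enable-WindowsOptionalFeature -Online -FeatureName MultiPathIO
Enable-MSDSMAutomaticClaim -BusType iSCSI
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR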

Re: Network / Switch configuration for MSA2052

Unfortunately that document is basically silent on network configuration.


Re: Network / Switch configuration for MSA2052


The MSA 2052 (SAN) doesn't support LACP on its controllers' host ports facing the storage network(s), so you need to rely on MPIO or MCS implementations (references here, here, here, here and here <-- look for any reference to SAN iSCSI topology, SAN cabling and/or controllers' host-port configuration through the CLI/SMU). Topologically speaking, if two 10G switches are going to be used between your hosts and the MSA SAN controllers - whether for your storage network portion only or shared with the production network - you could deploy these two switches either as an IRF stack (i.e. the virtual-switching approach) or as two standalone separate devices. In both cases, LACP between the MSA host ports and those switches is not supported. Production network links can instead be deployed using LACP between your hosts and the above switches (if the collapsed approach is used), or to other switches if you segregate the storage and production networks onto totally different devices.

Edit: I think I found a document you can use as a generic baseline for your iSCSI deployment. It's not based on the MSA storage series, but it explains some concepts and provides some suggestions that can easily be generalized (network best practices, VLAN tagging, iSCSI MPIO, etc.). It also covers separate storage/production and collapsed storage+production network scenarios (clearly it is all biased toward HPE FlexNetwork/FlexFabric Ethernet switch series implementations - standalone or IRF - that support FCoE/iSCSI protocols on their Ethernet ports, and HPE ProLiant server hosts). Download it here. I admit I wasn't able to find a Validated Reference Design (VRD) for an HPE MSA 2052 (or just 2050) SAN using iSCSI with detailed networking scenarios explained.
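To make MPIO useful without LACP, each host then establishes one iSCSI session per path; a hedged PowerShell sketch with placeholder portal addresses and a placeholder target IQN:

New-IscsiTargetPortal -TargetPortalAddress 10.10.10.1
New-IscsiTargetPortal -TargetPortalAddress 10.10.20.1
Connect-IscsiTarget -NodeAddress iqn.1986-03.com.hp:example-msa2052 -TargetPortalAddress 10.10.10.1 -IsMultipathEnabled $true -IsPersistent $true
Connect-IscsiTarget -NodeAddress iqn.1986-03.com.hp:example-msa2052 -TargetPortalAddress 10.10.20.1 -IsMultipathEnabled $true -IsPersistent $true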

Re: Storage Management Utility for Download


Hello,

Please be aware that the MSA2312fc array went End of Support Life on January 31, 2018. Because the system is EOSL, there are very limited (or no) spare parts, service options, or support. If you have data on this system, please be sure you have a backup.

The Storage Management Utility (SMU) is the GUI web interface for managing the system. If you log into the system via its IP address in a web browser, you are using the SMU. You can find more details in the following guide: https://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-c01659237

Because this array is so old, your performance issue is likely due to newer components in your SAN (OS, server, HBA/NIC, switch) that are no longer compatible with the MSA2312. Depending on recent upgrades and patches, one of those may have caused the performance issue.

You should make sure all the drives are healthy, the RAID groups are fault tolerant, and pathing is correct.

Cheers,
Shawn

I work for Hewlett Packard Enterprise. The comments in this post are my own and do not represent an official reply from HPE. No warranty or guarantees of any kind are expressed in my reply.

Re: Performance not as expected on MSA 2052


Hi Stefan,

I understand that it's an old post and the issue might have been resolved already.

May I know what read and write latency (in ms) you see when an IOmeter test is run on the volume?

Have you tried disabling tier affinity (using the "No Affinity" option) to check whether it helps?

Please disable the ODX feature if it's enabled, as it's not supported by the MSA.
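If it is enabled, ODX can be switched off on the Windows host with the documented registry value (shown here as a sketch; check Microsoft's guidance for your OS build before applying):

reg add HKLM\SYSTEM\CurrentControlSet\Control\FileSystem /v FilterSupportedFeaturesMode /t REG_DWORD /d 1 /f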

Re: Performance not as expected on MSA 2052


Hello Stefan,

I hope the previously provided response has helped you.

Kindly let me know if you have further queries, or the current status.

HP MSA 2040 Degraded Status due to Disk Health


Hi guys,

 

I have an HP MSA 2040 storage array and an issue on one enclosure. One disk is showing LEFTOVR status; when I checked, the status says "the disk may contain invalid metadata", and that disk is showing solid amber.

 

Please help me resolve this issue.

Re: HP MSA 2040 Degraded Status due to Disk Health


Hello,

If a disk is showing leftover, there are several steps you need to take to confirm the overall health of your system.

First, are all the pools or vdisks fault-tolerant online (FTOL)? If you have a pool, disk group, or vdisk that is in a degraded or offline state, please contact HPE Support immediately for help with recovery.

If all your pools, disk groups, and vdisks are FTOL, you should review the logs for any disk errors. If the disk has had multiple disk errors, it needs to be replaced. If you see unrecoverable read errors (UREs), SMART trips, or other ASC/ASCQ errors, then the drive has gone into a leftover state due to bad drive media and should not be used again. Replace the disk and assign the new one as a global spare or a dedicated vdisk spare.

If the drive has gone into a leftover state due to some other error, you will need to carefully consider whether the drive is safe to use further. Depending on the age and use of the drive, it may be better to replace it in order to protect your data.

If the drive has timed out, or has just recently been placed into the system, you may clear the disk metadata in order to reuse the drive. For further information on how to use this command, please review the CLI Guide: https://support.hpe.com/hpesc/public/docDisplay?docId=c04957376
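For reference, the CLI command takes the disk ID; a sketch with a hypothetical disk in slot 12 of enclosure 1:

# clear disk-metadata 1.12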

Be aware that clearing disk metadata may cause data loss if not done properly. If you have questions or concerns please open a support case. This type of troubleshooting is not suitable for a forum.

Cheers,
Shawn

I work for Hewlett Packard Enterprise. The comments in this post are my own and do not represent an official reply from HPE. No warranty or guarantees of any kind are expressed in my reply.

 

Re: Performance not as expected on MSA 2052


Hello Arun

No, it does not make a difference.

I have now installed Red Hat 8 on a local 240 GB HP SSD.

I was told by our DBA that the command below should run in around 3 seconds, and that such a result indicates good filesystem speed. It writes 18432 blocks of 16k each.

dd if=/dev/zero of=./test.out bs=16k count=18432 oflag=dsync
301989888 bytes (302 MB, 288 MiB) copied, 26.4979 s, 11.4 MB/s

On the MSA, as you can see, I get more than 26 seconds, which is far off.

What I can see is that if I run multiple commands in parallel, I get the same performance on each of them as when I run only one. This suggests to me that something is limiting the storage performance for single-threaded access (for lack of a better description): whether I run it 4 times in parallel or once at a time, the result is similar for all commands.

If I run the command with big blocks - 16M, and 144 of them - I get:

dd if=/dev/zero of=./test.out bs=16M count=144 oflag=dsync
2415919104 bytes (2.4 GB, 2.2 GiB) copied, 2.79892 s, 863 MB/s

So the bandwidth of the iSCSI links is not the problem. To me, it looks like something is limiting the IOPS per single process.

If I run it against the root filesystem, which is on the local SSD, I get:

dd if=/dev/zero of=./test.out bs=16k count=18432 oflag=dsync
301989888 bytes (302 MB, 288 MiB) copied, 2.98083 s, 101 MB/s

Which is what a good result should look like.

Do you know if there is any logic on the MSA 2052 controllers that limits the IOPS a single host can utilize?
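For what it's worth, dd with oflag=dsync keeps only one I/O in flight, so its result is roughly 1/latency. A queued test with fio (path and sizes illustrative, assuming fio is installed) should show whether the array delivers more when more I/Os are outstanding:

fio --name=qd32 --filename=/mnt/msa/fio.test --size=1g --bs=16k --rw=write --ioengine=libaio --direct=1 --iodepth=32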


Re: P2000 G3 two vdisk with same name


I need your support. I have a P2000 G3, and after doing a storage and controller restart I can't see the storage or the vdisks. I also don't know how to insert a photo to show my error.

Re: Performance not as expected on MSA 2052


It's difficult to provide guidance on performance issues in a public forum. However, some advice:

1> You are comparing the performance of DAS with SAN, which is not a fair comparison. DAS is always faster than SAN, because many other factors come into play, of which networking/bandwidth is an important one.

2> Whenever it comes to performance, the first thing to understand is what you want to check or achieve: is throughput important to you, or IOPS? You can't get both at the same time. The size of the I/O to and from the SAN affects the measurable performance statistics of the SAN. Specifically, the smaller the I/O size, the more I/Os per second (IOPS) the SAN can process; the corollary is a decrease in throughput (as measured in MB/s). Conversely, as I/O size increases, IOPS decreases but throughput increases. When an I/O gets above a certain size, latency also increases, as the time required to transport each I/O grows to the point where the disk itself is no longer the major influence on latency. (See the worked numbers after this list.)

3> You need to check the queue depth value set at the host HBA or network card level.

4> You need to check what is installed on the LUNs created over the volumes presented from the MSA. The application matters too: a load balancer and a DB cannot be treated the same.

5> You need to check what kind of RAID you are dealing with.

Anyway, there are a lot of things to check when it comes to measuring performance.
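As the worked numbers for point 2, using figures from this thread: throughput = IOPS x I/O size. The 4.5k-6.8k IO/s you measured at 8 KiB is only about 37-56 MB/s, while the 16 MiB dd test reaching 863 MB/s corresponds to roughly 51 IOPS - the same array, measured at opposite ends of the size/IOPS trade-off.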

 

Hope this helps!
Regards
Subhajit

I am an HPE employee

If you feel this was helpful please click the KUDOS! thumb below!

**********************************************************************

MSA2050


On the MSA 2050, HPE best practices recommend 1 volume per controller in a vCenter clustered environment. I have 2 x RAID 10 and 1 x RAID 5 disk groups per controller, all 1.2 TB 10K SAS standard HDDs. I need 6 different datastores housing specific VM workloads. Is it correct to make 1 large volume per controller, or to have a 1:1 ratio between datastore and disk group, thus having multiple volumes per controller?

Re: MSA2050

$
0
0

In order to get the full potential of the array, it's suggested to configure it symmetrically, i.e. the same disk and disk-group configuration in both pools, with workloads distributed between two volumes, one in each pool. Furthermore, the disk configurations should follow the power-of-2 rule, which results in clean page writes across parity-protected virtual disk groups. Datastores should have a 1:1 relationship with a volume, and minimizing the number of datastores/volumes by implementing a small number of large datastores will give the best performance.

The best practice of not having too many volumes is less of a concern with virtual storage than it was with linear. Virtual storage introduces wide striping, so I/O for a single volume is distributed across all drives within the tier where the page being read or written resides (or will reside). Of course, if you only have a single virtual disk group within the pool this benefit is not realized, but once there are two or more it quickly shows a tangible improvement.
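As an illustrative CLI sketch only (disk IDs and RAID level are hypothetical, not a recommendation for your drives), a symmetric layout that also satisfies the power-of-2 rule - RAID 5 with 8 data + 1 parity disks per group - could be created like this:

# add disk-group type virtual disks 1.1-1.9 level raid5 pool a
# add disk-group type virtual disks 1.10-1.18 level raid5 pool b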

 

Hope this helps!
Regards
Subhajit

I am an HPE employee

If you feel this was helpful please click the KUDOS! thumb below!

**********************************************************************

Re: cannot login from any interface


Unfortunately, restarting both controllers did not help. I turned both controllers off and then on again - all the same: there is no access via telnet, SSH, or the web.

So now we are transferring the data to another storage system, and then we will look for the problem.
