
HPE MSA 2052 remote replication


Hello,

I am a newbie to replication technologies with HPE MSA 2052s. We are replacing 2 x P2000 arrays in remote geographical locations linked over a WAN connection. I am testing this setup at present with 2 arrays in the same cabinet and on the same Ethernet switch.

Migrating the existing workloads from the P2000 is easy enough. The primary site will run 2 ESXi hosts with the HPE MSA 2052 direct-attached via FC. The VMs on the hosts will be approx. 11 TB in total, generating approx. 10 GB of change daily. We have approx. 21 TB of usable space, with 2 x 800 GB SSDs used as a performance tier.

The intention is to use 2 x iSCSI ports for remote replication between the primary and secondary sites. I have been advised to use virtual replication, and as I do not want to affect the online day, I will use scheduled replications 3 times a day.

A number of parameters are confusing me in setting up the replication set.

Do I want to choose Discard or Queue Latest? (Queue latest seems to be the default).

Do I want a secondary snapshot history? If so, do I need a snapshot retention count greater than 1? Do I want to keep a primary snapshot history? And what should the retention priority be? I am guessing Never Delete would not be a good option if I want to preserve disk space!

Primarily the remote replication will be used for DR site recovery, not for recovery of VMs from a point in time.

Second-last question: I am guessing that the 'Enable overcommitment of pool' setting should be ON, otherwise internal snapshots may grow and we will run out of space.

Last question: in testing scheduled replication I continually get these errors.

Scheduler: The scheduler was unable to complete the replication. - Command is taking longer to complete than anticipated. (task: du-msa-01, task type: Replicate)

EVENT ID:#A2238

EVENT CODE:362

EVENT SEVERITY:Error

It normally happens about 20 seconds after the replication has completed successfully.

Firmware version is VL270R001-01

 

Many Thanks

 

Allan Clark

 

 

 


Re: MSA 2052 advice voor Hyper-v 2016 cluster


First of all, I want to correct some terminology; otherwise it will create confusion for others who are following your thread.

You have mentioned

"There are 4x disk pools:

- Controller A: 4x SSD RAID 10 and 8x SAS 15k RAID 10
- Controller B: 4x SSD RAID 10 and 8x SAS 15k RAID 10"

These are called virtual disk groups (VDGs), not pools.

By default the MSA 2052 has two pools: Pool A, owned by Controller A, and Pool B, owned by Controller B.

The four VDGs above will be part of these two pools: one SSD and one SAS VDG in Pool A, and the other SSD and SAS VDG in Pool B.

You have mentioned

"- There are 10 CSV's configured:
2x SSD and 3x SAS per Storage pool"

How do you know whether a volume's space came from the SSD VDG or the SAS VDG? At volume creation time you can only see the free space in the pool; you specify the size of the volume, but you never know from which VDG the space will be allocated. This is virtualization. Moreover, space is not allocated at volume creation time, because these are thin-provisioned volumes; space is allocated on demand as hosts issue write requests.

The MSA 2052 includes the Automated Tiering license, so based on volume usage, data will move automatically between the SAS and SSD tiers.

If you still want to define a preference or set quality of service (QoS), you can configure the "Volume Tier Affinity" feature on each volume.

You can distribute five volumes per pool, which means five volumes will be served by Controller A and the other five by Controller B. This is how load balancing happens.

Your configuration is fine and only needs small modifications to get good performance.

You can also refer to the best practices whitepaper for more information:

https://h20195.www2.hpe.com/v2/getdocument.aspx?docname=a00015961enw

 

 

Hope this helps!
Regards
Subhajit

I am an HPE employee

If you feel this was helpful please click the KUDOS! thumb below!

***********************************************************************************

Re: HP MSA 2040 Seperating VMDK storage from NSA (iSCSI) storage


Hi,

The MSA is a block-level device; it either provides blocks to a filesystem holding VMDKs or is used as direct iSCSI storage.

Therefore, from the MSA's perspective, you can use it either way.

Having one pool assigned to VMDK storage and the other to iSCSI storage is also fine; there is nothing wrong with doing that.

Regards

I am an HPE Employee

 

 

Re: MSA 2050 Change Pool Settings - Overcommit Flag cannot be disabled


You already have 3 drives at locations 1.2, 1.17, and 1.18. Of these, you only need to replace 1.2, as that one has failed.

Then you need another 11 drives of 600 GB each, so that you can create a virtual disk group of 7 x 600 GB = 4200 GB in each pool.
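
To make the arithmetic explicit, here is a quick sketch (plain Python, raw capacity only; it assumes one 7-drive disk group per pool and ignores RAID and formatting overhead, which will reduce the usable figure):

# Raw-capacity check for the suggested layout (illustrative; no RAID/formatting overhead).
existing_drives = 3                 # at 1.2, 1.17 and 1.18 (1.2 is replaced, not added)
additional_drives = 11              # extra 600 GB drives to obtain
drive_size_gb = 600

total_drives = existing_drives + additional_drives      # 14 drives in total
drives_per_pool = total_drives // 2                      # assume 7 per pool (Pool A / Pool B)
raw_per_pool_gb = drives_per_pool * drive_size_gb        # 7 x 600 GB = 4200 GB

print(f"{total_drives} drives -> {drives_per_pool} per pool -> {raw_per_pool_gb} GB raw per pool")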

Otherwise, you can take a data backup, delete all data and the existing disk groups, and then re-create the VDGs and volumes. After that you can convert them to thick volumes, i.e. disable the overcommit option by modifying the pool settings.

 

Hope this helps!
Regards
Subhajit

I am an HPE employee

If you feel this was helpful please click the KUDOS! thumb below!

***********************************************************************************

Re: MSA 2040 - extending Volume used as datastore in ESXi 6.7


Regarding compatibility: the MSA 2040 is not supported with ESXi 6.7 because HPE has not yet tested it. Any behaviour is therefore unpredictable and there is no official support from HPE for this setup. You can get the details from SPOCK:

https://h20272.www2.hpe.com/spock2/cont/configsetresultview?repositoryId=270867&navId=0&source=Constraint

I am still interested in what technology NetApp uses where the VMFS extend happens automatically. Do you have any document or link showing that, after expanding the LUN size at the NetApp block level, the corresponding VMFS datastore is extended automatically?

 

Hope this helps!
Regards
Subhajit

I am an HPE employee

If you feel this was helpful please click the KUDOS! thumb below!

***********************************************************************************

Re: HPE MSA 2052 remote replication


Do I want to choose Discard or Queue Latest? (Queue latest seems to be the default).

Ans: It's your call. When a new replication request arrives while a replication is already running, you need to decide whether to discard the new request or queue it for later. Either way, the queue limit is only 1.

Do I want a secondary snapshot history? If so, do I need a snapshot retention count greater than 1? Do I want to keep a primary snapshot history? And what should the retention priority be? I am guessing Never Delete would not be a good option if I want to preserve disk space!

Ans: Again, this is your call. You can set the retention count up to 16. You need to decide whether you want to keep a snapshot history on both primary and secondary, or on the secondary only. A retention priority of Never Delete is not good in terms of space management.

Second-last question: I am guessing that the 'Enable overcommitment of pool' setting should be ON, otherwise internal snapshots may grow and we will run out of space.

Ans: Yes, it's recommended to keep overcommit enabled, which means the volumes will be thin-provisioned. Otherwise space management becomes difficult, because with overcommitment disabled every object is fully allocated.
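
To illustrate what that means for space accounting, here is a small sketch (plain Python; the pool size and per-volume figures are hypothetical, not taken from your system) comparing pool usage with overcommit enabled versus disabled:

# Hypothetical figures only: pool usage with overcommit on (thin) vs off (thick).
pool_capacity_gb = 21000                                                     # assumed ~21 TB usable pool
provisioned_gb = {"vol1": 4000, "vol2": 4000, "vol3": 4000, "vol4": 4000}    # volume sizes
written_gb     = {"vol1": 1500, "vol2":  900, "vol3": 2200, "vol4":  600}    # data actually written

thick_used = sum(provisioned_gb.values())   # overcommit disabled: every volume fully allocated
thin_used = sum(written_gb.values())        # overcommit enabled: pages allocated only on write

print(f"Overcommit OFF (thick): {thick_used} GB reserved out of {pool_capacity_gb} GB")
print(f"Overcommit ON  (thin) : {thin_used} GB consumed out of {pool_capacity_gb} GB")
print(f"Space left for snapshots/replication with thin volumes: {pool_capacity_gb - thin_used} GB")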

Last question: in testing scheduled replication I continually get these errors.

Scheduler: The scheduler was unable to complete the replication. - Command is taking longer to complete than anticipated. (task: du-msa-01, task type: Replicate)

Ans: It depends on the size of the volume and the replication schedule. You need to allow enough time between consecutive replications so that the data copy can complete properly.
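
As a rough way to size the gap between runs, here is a small sketch (plain Python; the WAN bandwidth and efficiency figures are assumptions for illustration, and only the 10 GB/day change rate and 3 runs/day come from the original post) that estimates how long one scheduled replication takes to transfer:

# Rough estimate of how long one scheduled replication takes to transfer.
# Bandwidth and efficiency below are illustrative assumptions, not measured values.
daily_change_gb = 10          # ~10 GB of change per day (from the original post)
replications_per_day = 3      # scheduled 3 times a day
wan_mbps = 100                # assumed WAN link speed in megabits/s
efficiency = 0.7              # assumed usable fraction after protocol/WAN overhead

change_per_run_gb = daily_change_gb / replications_per_day
effective_mb_per_s = wan_mbps / 8 * efficiency             # megabytes per second
transfer_seconds = change_per_run_gb * 1024 / effective_mb_per_s

print(f"~{change_per_run_gb:.1f} GB per run -> ~{transfer_seconds / 60:.0f} minutes "
      f"on a {wan_mbps} Mb/s link at {efficiency:.0%} efficiency")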

 

Hope this helps!
Regards
Subhajit

I am an HPE employee

If you feel this was helpful please click the KUDOS! thumb below!

***********************************************************************************

Re: MSA 2052 advice voor Hyper-v 2016 cluster


Thanks for the explanation and advice.

Create 2 volume msa 2050


Hi guys,
I have an MSA 2050 and I connected 2 ports to my server for backup and redundancy. When I create a volume and map it to the initiator on my Windows server, I see 2 volumes created: one online and one offline. So where is the problem?
As you can see, I created 2 volumes, one for SQL and another for the DB, but 4 volumes are shown. Untitled.jpg


Re: HPE MSA 2052 remote replication


Many thanks for all your replies. That has put my mind at rest for this work.

On the last question: I am just a bit concerned, as I am currently testing with the 2 MSAs replicating on a local area network with only 4 x 1 TB volumes and not much rate of change. When this goes into production, the 2 MSAs will be replicating over a WAN and will have 4 x 4 TB LUNs.

The workload will be 12 VMs; the hosts will be VMware vSphere.

 

 

Re: HPE MSA 2052 remote replication


I would suggest you follow the guide below and refer to the section "Network requirements" (page 14 onwards):

https://h20195.www2.hpe.com/v2/getpdf.aspx/4aa1-0977enw.pdf

This will help you for sure.

Also, please mark the thread as resolved if there are no more outstanding queries from your end on this issue.

This will help everyone who is following your thread.

 

Hope this helps!
Regards
Subhajit

I am an HPE employee

If you feel this was helpful please click the KUDOS! thumb below!

***********************************************************************************

Re: Create 2 volume msa 2050


If you have created two volumes on the MSA and presented them to one server, then I would suggest you check which LUN IDs you used for those two volumes on the MSA.

In windows "Disk Management" you need to right click and check properties to figure out what LUN ID you have presented here. This way you can confirm if proper MSA right and expected volume only you are seeing or not.

Also keep in mind how many paths the MSA volumes have been presented through, as this is why you can see multiple entries for the same volume. To fix this, you should configure multipath software.

Check which Windows version you are using. Nowadays most operating systems have native multipathing, which you just need to enable. After that, Disk Management will show only a single entry for each MSA volume.

You can refer to the best practices whitepaper and look at "Multipath Software", page 26 onwards:

https://h20195.www2.hpe.com/v2/getdocument.aspx?docname=a00015961enw

Once you have followed the above and successfully recognised your MSA volumes, if any MSA volume still shows as "Offline", right-click on it and bring it online.

If any MSA volume shows "Not Initialized", right-click on it and initialize it. Keep in mind that this can be a destructive step if done by mistake on a production or existing volume, so be careful here.

 

 

Hope this helps!
Regards
Subhajit

I am an HPE employee

If you feel this was helpful please click the KUDOS! thumb below!

***********************************************************************************

 

Re: Disk Performance - VMWARE 6.7, MSA2040 SAS and software iSCSI


I checked your query and test output, but that testing was done at the VM level with the DiskSpd tool, which is Microsoft-specific. This forum is exclusively for MSA queries, and you need to check performance for the MSA at the block level.

Troubleshooting a performance issue involves many factors and is not a straightforward task. Some best practices to follow: there should be no hardware issues, the firmware needs to be up to date, and connected systems such as servers and SAN switches also need to be up to date with drivers and firmware.

You need to check the block size set at the host, and based on that decide whether you want high IOPS or high throughput. Specifically, the smaller the I/O size, the more I/Os per second (IOPS) the SAN can process. The corollary, however, is a decrease in throughput (as measured in MB/s). Conversely, as I/O size increases, IOPS decreases but throughput increases. When an I/O gets above a certain size, latency also increases, as the time required to transport each I/O grows to the point where the disk itself is no longer the major influence on latency.
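
As a simple illustration of that trade-off, here is a small sketch (plain Python; the IOPS figures per I/O size are made-up example numbers, not MSA measurements) showing how throughput in MB/s changes with I/O size:

# Illustrative only: example IOPS figures per I/O size, not MSA benchmark results.
example_workloads = {
    4:   50000,   # 4 KB I/Os   -> high IOPS, modest throughput
    64:  12000,   # 64 KB I/Os
    256:  3500,   # 256 KB I/Os -> fewer IOPS, higher throughput
}

for io_size_kb, iops in example_workloads.items():
    throughput_mb_s = iops * io_size_kb / 1024   # MB/s = IOPS * I/O size
    print(f"{io_size_kb:>4} KB I/O: {iops:>6} IOPS -> {throughput_mb_s:7.1f} MB/s")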

Typically, workloads can be defined by four categories—I/O size, reads vs. writes, sequential vs. random, and queue depth.
A typical application usually consists of a mix of reads and writes, and sequential and random.
For example, a Microsoft® SQL Server instance running an OLTP type workload might see disk IO that is 8k size, 80 percent read, and 100 percent random.
A disk backup target on the other hand might see disk IO that is 64k or 256K in size, with 90 percent writes and 100 percent sequential.

The type of workload will affect the results of the performance measurement.

Check the Customer Advisory below and disable "In-band SES":

https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-c05306564

You can check the Customer Advisory below as well; in many situations it has helped to improve performance:

https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-c03473698

If you have a specific requirement and want only SSD pages to serve your I/O, use "Tier Affinity" on the particular volume.

If you still face a performance issue, then while it is occurring capture the outputs below at least 10 to 15 times, along with the MSA logs, and log an HPE support case. They will help you.

# show controller-statistics
# show disk-statistics
# show host-port-statistics
# show vdisk-statistics
# show volume-statistics

 

Hope this helps!
Regards
Subhajit

I am an HPE employee

If you feel this was helpful please click the KUDOS! thumb below!

***********************************************************************************

 

Re: Disk Performance - VMWARE 6.7, MSA2040 SAS and software iSCSI


Hi Subhajit

Very kind of you to respond. The issue is this: I don't know whether I even have a performance issue. The MSA is not in production yet and I'm simply stress-testing it myself.

However, you have seen my other postings about extending the size of a pool (and extending a volume) and the problems it is causing me. You and a colleague kindly pointed out that the MSA 2040 firmware is not supported with VMware 6.7.

I'm following this up with my vendor and VMware (very, very slow).

If they do not get back to me in 24 hours I'll downgrade to ESXi 6.5 (which is supported) and run all my tests again (extending a Pool, extending a volume and performance). 

ta,

P.S. Please note my MSA 2040 was purchased with only 10 disks (6 SAS, 4 SSD), so it will no doubt be extended in the future.

Re: Create 2 volume msa 2050


Hello,

On your windows server make sure that MPIO is configured correctly and then rescan the discovered devices.

Cheers,
Shawn

I work for Hewlett Packard Enterprise. The comments in this post are my own and do not represent an official reply from HPE. No warranty or guarantees of any kind are expressed in my reply.

Re: Disk Performance - VMWARE 6.7, MSA2040 SAS and software iSCSI


Yes, you are correct: as of now the MSA 2040 has not been tested with ESXi 6.7, so any outcome will be unpredictable.

I appreciate your stress testing, but in this forum we understand and analyse the MSA only, so for application-level or OS-level testing only a Microsoft expert can help you.

If you downgrade to ESXi 6.5 and use it with the MSA 2040, then kindly follow the instructions I have suggested; this will definitely help when you use this MSA in production.

If you are still looking for more information, kindly mention it so that the MSA experts can help you; otherwise you can close this thread for now.

 

Hope this helps!
Regards
Subhajit

I am an HPE employee

If you feel this was helpful please click the KUDOS! thumb below!

***********************************************************************************

 

 


Re: Create 2 volume msa 2050


Thank you very much.
The problem was solved by MPIO.

P2000 MSA G3 Controller B network degraded (but still seems to be working)


Hi all,

I've got an odd issue on an old P2000 G3 MSA storage system. It has 2 controllers, both of which are working fine, and a number of expansion units (again, all working as expected). I noticed it was reporting that controller B's network management port was 'degraded'. Sure enough, 'show system' reports the system as degraded and I can see a red X against the controller B management port. 'show frus' however says all subcomponents are in an OK state, 'show controllers' is fine, 'show cache-parameters' is OK, and so on. I can also connect to the system over SSH on both controllers. What I can't seem to do is connect to the web UI on controller B. I've tried 'restart mc' to see if that helps. No. I've also run ping tests from both controllers.

Controller A can ping its gateway with no issues. It can also ping controller B. Controller B, however, reports that it cannot ping, or can only intermittently ping, the default gateway and its partner controller A.

 

So it's all a bit odd really. If it had failed, I'd expect no access at all, yet I can SSH onto it.

 

Both controllers are running TS230P006, which I am guessing is pretty old; however, we had not seen any issues until recently.

My next plan is probably to fail everything over onto controller A and then re-seat controller B to see if the issue clears. If not, I guess the next step is a new controller, but given the issues I've had in the past of being sent controllers with much newer firmware and then having the controllers refuse to talk to each other, that will be a last resort.

If anyone has seen this type of issue before or has any suggestions it would be most welcome. 

thanks

Ad.

 

Re: P2000 MSA G3 Controller B network degraded (but still seems to be working)

$
0
0

Your plan seems fine, but you could also try a full power cycle of the MSA by planning downtime for this activity.

If the issue still persists, you can try running on a single controller (keeping the other controller shut down) and check whether everything works fine.

Yes, the firmware is quite old, so unpredictable or odd behaviour can be expected.

Is this MSA still under HPE support? You can try upgrading to the latest firmware.

UPGRADE NOTE:
If upgrading from an older version of firmware, it may require a multi-step process to get to TS252.
If below TS230, upgrade to TS230P008 first, then TS240P004, and then TS250P003 before upgrading to TS252P005. If the firmware is between the mentioned versions, it is recommended to follow through the remaining steps as well.

 

Hope this helps!
Regards
Subhajit

I am an HPE employee

If you feel this was helpful please click the KUDOS! thumb below!

***********************************************************************************

New Failover/DR Setup with MSA


Hi guys, I have a client that has bought 4 DL380 servers (2 PR and 2 DR) as well as 2 MSA 2050 arrays (1 PR and 1 DR).

The environment is purely Windows, hence Hyper-V will be in use for virtualization. Any advice on how to achieve HA/Failover across the two sites?

Is MSA replication and clustering able to achieve this?

Re: MSA 2040 - extending Volume used as datastore in ESXi 6.7


Hi Subhajit,

Regarding your question about NetApp: NetApp was presenting the storage to ESXi as NFS (not iSCSI).

This is why there was nothing to do on the vSphere side in the NetApp environment. I was not the storage admin in that environment; the storage team simply increased the size of the LUN on their side, and that was all that was required. Apparently with NFS the "LUN" is actually a file on the storage side, so once the file size is increased on the storage side, vSphere sees the new size automatically!

Regards.

 
