Channel: All MSA Storage posts

Re: MSA 2050 direct attach


Thank you for your reply.

Yes, the ports are on the same network.

A1 iSCSI iqn.2015-11.com.hpe:storage.msa2050.18173c3ac3 Up 10Gb OK

Port Details
------------
IP Version: IPv4
IP Address: 172.16.1.1
Gateway: 172.16.1.10
Netmask: 255.255.255.0
MAC: 00:C0:FF:47:B2:F4

A2 iSCSI iqn.2015-11.com.hpe:storage.msa2050.18173c3ac3 Up 10Gb OK

Port Details
------------
IP Version: IPv4
IP Address: 172.16.2.1
Gateway: 172.16.2.10
Netmask: 255.255.255.0
MAC: 00:C0:FF:47:B2:F5

On the ESXi side, the vmk interfaces are also configured with IPs from the same subnets.
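For reference, a minimal way to double-check this from the ESXi shell (assuming vmk1 and vmk2 are the iSCSI vmkernel interfaces; the names may differ in your setup):

  # list vmkernel interfaces with their IPv4 addresses and netmasks
  esxcli network ip interface ipv4 get

  # ping each array port from the vmkernel interface on the matching subnet
  vmkping -I vmk1 172.16.1.1
  vmkping -I vmk2 172.16.2.1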


 


Re: MSA 2050 direct attach


The IP connectivity issue is solved, guys.

 

It appears the local technician did not connect the cables as described; controllers A and B were swapped.

After I asked him to keep only one cable connected, everything became clear.

Re: Hp p2000 g3 storage compatibility hard drive


Thanks for your reply.

Which drive models would be compatible nowadays with this storage?

Re: MSA 2050 direct attach


Hello,

Since you are using iSCSI, you will need to verify that the host ports and the array host ports are all configured the same way. With 10Gb iSCSI you can use either one-way or mutual CHAP. Make sure that everything is configured consistently and is using a valid subnet.

Everything being green indicates the links are active. Now all you need to do is get the protocol settings to match on both sides, and that should resolve your connectivity.
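As a sketch of that comparison (assuming an MSA 2040/2050-class CLI and a standard ESXi host; adjust names as needed), the relevant settings can be read on both sides like this:

  # on the MSA, over SSH
  show ports              # link state, IP, netmask and gateway per host port
  show iscsi-parameters   # CHAP on/off, jumbo frames, iSCSI IP version

  # on the ESXi host
  esxcli iscsi adapter list                 # software/hardware iSCSI adapters
  esxcli iscsi adapter target portal list   # target portals the host has discovered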


Cheers,

Shawn

MSA P2000 SAS and VMWare 5.5 - insufficient space in datastore


Hi, we are having some strange issues with an MSA P2000 SAS and VMware and wondered if somebody has seen similar problems before.

Our environment is as follows: 

HP MSA P2000 G3 SAS – firmware TS251P006-02

2 * HP servers used as ESX hosts (ESXi 5.5) directly attached via SAS to the MSA. Each server is connected to each controller. 

We initially had a single RAID-5 vdisk, with a single volume provisioned to ESX. This has worked fine for years. The hosting team use thin provisioning at the VMware level. Recently they have had problems, as it appeared they had run out of actual capacity due to the thin provisioning. This caused problems with the VMs, as you'd expect. It appeared that this was purely a capacity management issue.

The hosting team worked to free up some space and have managed to get the VMs back online. There is now around 400GB free in the datastore according to VMware; however, they are still seeing disk capacity issues when trying to provision VMs. We assumed this was a problem with VMware not freeing up the now-unused thin-provisioned space… however…

In the background, an additional 4 disks were purchased to allow us to create a new vdisk and present additional capacity to VMware. We took the decision to create a new RAID-5 vdisk rather than extend the existing one, as it seemed like the safer option. The vdisk was created and a single volume (roughly 1.3TB) presented to the ESX hosts. The ESX hosts detected the volume and a datastore was created, all without issue. We assumed our job was done. Unfortunately, even though the datastore was created without issue, the hosting team are unable to migrate any VMs to it. Migrations will start and then fail with an 'insufficient disk space' error.

Maybe coincidentally, on the day the majority of the issues started, the Compact flash card was replaced in one of the MSA controllers due to a failure. Since the Compact Flash card replacement, the array is up and healthy. 

We have tried the following:

Migrating VMs from the old datastore to the new one.

Creating new thick-provisioned VMs on the new datastore.

Creating new thin-provisioned VMs on the new datastore.

Rebooted each of the MSA controllers one at a time (rather than a whole-array reboot, to avoid an outage).

Rebooted both ESX hosts.

Disabled VAAI within ESX (the HP specific VAAI plug-in doesn’t seem to be installed)

Provisioned a smaller volume.

Created the datastore as VMFS3 rather than VMFS5.

Logged into the ESX hosts via SSH and tried to copy into the new datastore. 

During all testing we get some data copied and then the ‘insufficient disk space’ or similar error. The amount of data copied seems to vary but it’s never very much. 

We are at a loss to understand what could be happening, as the system has worked fine for a number of years. We're not even sure where the problem lies: VMware, the MSA, etc. Given that some data can be copied and that the datastore can be created, it feels like the volume is writable but perhaps the communication is being interrupted.
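A few standard host-side checks that can help narrow down this kind of 'insufficient disk space' symptom on ESXi 5.5 (the datastore name below is a placeholder):

  # capacity and free space of the VMFS volume as the host sees it
  vmkfstools -Ph /vmfs/volumes/<new-datastore>

  # confirm whether the VAAI primitives are actually disabled
  esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove
  esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit

  # check the device and path state behind the datastore
  esxcli storage core path list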

Any help/suggestions would be much appreciated!


Re: MSA P2000 SAS and VMWare 5.5 - insufficient space in datastore


Your issue seems to be VMware-specific.

You can browse your datastore and check which files, apart from the virtual machine VMDK files, are consuming all the space; there could be some large files in there for some reason. You may need to involve a VMware expert to troubleshoot this.
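For example, from the ESXi shell the space usage inside the datastore can be listed roughly like this (the datastore name is a placeholder):

  # per-directory usage inside the datastore
  du -sh /vmfs/volumes/<datastore-name>/*

  # overall capacity and free space of all mounted volumes
  df -h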

You can also search Google for the "insufficient disk space" error message; there is a lot of VMware guidance available for it.

You should also check from the VMware console or CLI whether the new datastore is getting locked for some reason. The article below may give you some idea:

https://www.petenetlive.com/KB/Article/0001292

From the MSA perspective, I would suggest upgrading the controller firmware to the latest version, TS252P005:

www.hpe.com/storage/MSAFirmware 

 

 

Please don't keep this thread open for a long time if you don't get any further help, as everyone here is an MSA expert only.

 

Hope this helps!
Regards
Subhajit


 


HP MSA Internal data copy/migration


Hi All,

I require support to copy/migrate data between MSA internal disks, from NL drives to new drives.

Attention: Immediate action required to maintain HPE Remote Support connectivity


Attention HPE Remote Support Users – Insight RS, OneView Remote Support, iLO Direct Connect, OA Direct Connect

Starting September 28, 2018, the HPE Remote Support solution must move to new security certificates issued by DigiCert. These certificates are required to ensure your secure communication with HPE Remote Support Data Centers.   

New certificates are available through recent software and firmware updates.

If you are running any of the following HPE Remote Support client versions, you may require an urgent upgrade as early as September 28, 2018. Review the Remote Support Certificate Update Guide for additional information on how to upgrade to supported versions and maintain your HPE remote support connectivity.

HPE Insight Remote Support

  • Insight RS versions prior to 7.9 require action

HPE OneView Remote Support

  • OneView Remote Support versions prior to 4.00.09 require action

HPE iLO Direct Connect

  • iLO 5 versions prior to version 1.30 require action
  • iLO 4 versions prior to 2.60 require action

HPE Onboard Administrator Direct Connect

  • OA versions prior to 4.8.0 require action

Review the Remote Support Certificate Update Guide for instructions on updating to versions that include these certificates. Act today to ensure you maintain your HPE remote support connectivity.

- Vajith V

Re: HP MSA Internal data copy/migration


You haven't mentioned which MSA model you have.

Anyway, irrespective of the model, there is no migration tool available inside the MSA.

MSA virtual arrays, models like the MSA 2040 or MSA 2050, have a feature called "Volume Tier Affinity", but this is not like a "Volume Pinning" feature where you can restrict data to a particular drive type, so it will not serve your purpose either.

So the final answer is that this is not possible within the MSA.

You can only move your data using the ESX Storage vMotion feature from one array to another. Then, after replacing the NL drives with new drives, you can create new vdisks and volumes and bring the VMs back to the new datastore, which is nothing but the new volume.

Another option would be to take a data backup to some external storage or HDD, then delete the existing volumes and vdisks, replace all NL drives with new drives, create new vdisks and volumes, and restore your data from the backup.

 

Hope this helps!
Regards
Subhajit


Re: HP MSA Internal data copy/migration


Hi Subhajit,

Thanks for your reply!!

The old disks and the new disks will both be available in the same storage. In that case, would any copy feature be useful for copying the data from the old disks to the new disks?

Re: HP MSA Internal data copy/migration


Unfortunately there is no such option available in the MSA.

The only option is to do it from outside the MSA.

You can have both sets of disks in the MSA and create a new vdisk and volume on the new drives. Then present that volume to the same host where the previous volume (created from the old NL drives) is already presented. There you can manually copy the data from one volume to the other at the host end, as sketched below.
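As a minimal sketch (the datastore and VM names are placeholders, and the VM should be powered off first), a per-VM disk copy between the two volumes can be done from the ESXi shell with vmkfstools:

  # create the destination folder on the new datastore
  mkdir /vmfs/volumes/<new-datastore>/<vm-name>

  # clone the virtual disk from the old datastore to the new one (thin format)
  vmkfstools -i /vmfs/volumes/<old-datastore>/<vm-name>/<vm-name>.vmdk \
                /vmfs/volumes/<new-datastore>/<vm-name>/<vm-name>.vmdk -d thin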

 

 

Hope this helps!
Regards
Subhajit


Re: Best iSCSI config for ESXi 5.5 to HP MSA 2040 for failover and dual controller support


We're looking to do something similar, I believe, in the next month.

How did you end up going about it, and would you have done anything differently in hindsight?

Greg

Re: MSA 1040/2040 iSCSI Direct Connect with VMware vSphere


Hi - 

I have the iSCSI HP MSA 1040 attached via dual 10Gb links to 2 ESXi 6.5 hosts (Dell R820s), giving me 2 paths per server to the two controllers on the MSA 1040. The MSA 1040 is half populated with 12x 900GB 10K disks in a RAID 5 array. I carved out one large LUN and presented the vDisk to my hosts, which see the ~7TB datastore for VMs.

I am able to configure VMware with an active/passive configuration, and when I pull one of the 10Gb links I do not lose connection to the back-end vDisk holding my VMs. However, it came to my attention that the servers are slow when users connecting to my terminal server kept getting disconnected periodically. After going into Resource Monitor within Windows Server 2016, I can see the C:\ drive at 100% activity time nearly the entire time, which would cause a lot of slowness and then disconnects when the OS became unresponsive. I checked my other 5 servers and they too had all their drives running at 100% activity time.

I then installed 600GB 10K disks in the local hosts in a RAID 10 configuration and copied each of my VMs to the local datastores. I booted up the VMs and it was night and day; the performance was great. Resource Monitor showed the disk activity time at fractions of a percent, no longer 100%. So I figured my RAID 5 array did not possess the performance necessary to run VMs from VMware. Since my MSA 1040 no longer had any VMs stored on it, I blew away the RAID 5 array and made a 10-disk RAID 10 with 2 global spares, thinking this would solve the problem.

I copied the servers back to the MSA 1040, now in a RAID 10 format, and performance is still slower than when they were on the local RAID 10 array. It's very noticeable: rebooting a server takes more than 2x as long compared to the local storage, and the activity time for the drives on my VMs runs at 100% for probably the first 4-5 minutes of each reboot. Server Manager does not pop up nearly as quickly as on the local datastore, so I'm totally at a loss.

I've enabled jumbo frames on the MSA 1040, but I really need to get my VMs off the local datastores and back onto my MSA 1040 iSCSI SAN. Having said all that, it appears that I either have a VMware configuration problem, or there is a setting within the 10Gb HBA that I didn't set correctly (I have left everything at defaults so far). I would think 10Gb pipes to a 10-disk RAID 10 would perform as well as, if not better than, local storage with a 6-disk RAID 10. Reboots on the local storage take maybe 20-30 seconds.
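If it helps, a couple of standard ESXi checks that can separate a multipathing/policy issue from an array-side performance issue:

  # path selection policy and active paths for each MSA LUN
  esxcli storage nmp device list

  # live per-device and per-adapter disk latency (press 'u' for devices, 'd' for adapters)
  esxtop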

Hopefully someone can offer some advice.


Re: MSA 1040/2040 iSCSI Direct Connect with VMware vSphere


Hi Jason,

How do you have your iSCSI networks configured? I'd suggest putting each pair of connected ports into its own IP subnet, and not enabling iSCSI vmk port binding. Example:

MSA-A port 1	10.1.10.10 /24
ESXi01 port 1	10.1.10.11 /24

MSA-B port 1	10.1.11.10 /24
ESXi01 port 2	10.1.11.11 /24

MSA-A port 2	10.1.12.10 /24
ESXi02 port 1	10.1.12.11 /24

MSA-B port 2	10.1.13.10 /24
ESXi02 port 2	10.1.13.11 /24

 

Also, have you tested your jumbo frames connectivity from the ESXi host to the MSA? Use the following command, replacing the x's as appropriate:

vmkping -I vmkX -d -s 8972 x.x.x.x
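If that vmkping with -s 8972 fails, it is worth confirming that MTU 9000 is actually set end to end on the host side, for example:

  # MTU configured on the standard vSwitches
  esxcli network vswitch standard list

  # MTU configured on the vmkernel interfaces
  esxcli network ip interface list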

 

Re: MSA 1040/2040 iSCSI Direct Connect with VMware vSphere


OK, so I see what you're doing: putting each pair of connected ports (one on each side) into its own subnet.

I see where to do the IP assignments on the MSA 1040 side; that's pretty easy.

On the VMware side, do I port-bind each HBA port to a vmnic like I've done, and create a port group and vSwitch with MTU 9000 for jumbo frames?
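For reference, a minimal sketch of one such iSCSI path on the host side, without vmk port binding and with jumbo frames enabled (the vSwitch, port group, vmkernel and vmnic names, and the IP, are placeholders):

  esxcli network vswitch standard add --vswitch-name=vSwitch-iSCSI-A
  esxcli network vswitch standard set --vswitch-name=vSwitch-iSCSI-A --mtu=9000
  esxcli network vswitch standard uplink add --vswitch-name=vSwitch-iSCSI-A --uplink-name=vmnic4
  esxcli network vswitch standard portgroup add --vswitch-name=vSwitch-iSCSI-A --portgroup-name=iSCSI-A
  esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-A --mtu=9000
  esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.1.10.11 --netmask=255.255.255.0 --type=static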

Re: MSA2012i firmware


There is no relation between a product reaching End of Life (EOL) and firmware downloads.

HPE policy changed, and they stopped providing firmware/drivers free of cost to people who don't have an active or valid contract with them.

When someone buys a product from HPE, they should have a valid contract along with it. This is not limited to the hardware alone; it applies to the software and driver/firmware as well.

 

Hope this helps!
Regards
Subhajit


HPE STOREEVER MSL2024 - Fan system


HPE STOREEVER MSL2024: How can I configure it, if it is possible, to have an MSL2024 with a hot-plug and/or swappable fan system, including redundancy? (hardware speaking)
