Channel: All MSA Storage posts

Re: MSA 2040 - extending Volume used as datastore in ESXi 6.7


MSA 2040 volume expansion is a block-level activity and an online process, so there is no need to stop host I/O. The volume is already presented to the ESXi host; the remaining question is how to extend the VMFS filesystem on the datastore.

I don't think the filesystem can be extended automatically. Please see the VMware ESXi 6.7 procedure for extending a VMFS filesystem:

https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.storage.doc/GUID-D57FEF5D-75F1-433D-B337-E760732282FC.html
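In rough outline (a sketch only, not the exact steps from that document; the device path below is a placeholder), the ESXi-side work is: rescan so the host sees the larger LUN, grow the partition (the vSphere Client wizard does this for you; from the shell it needs partedUtil, as shown in a later reply in this thread), then grow the filesystem:

esxcli storage core adapter rescan --all    (pick up the new LUN size)

vmkfstools --growfs "/vmfs/devices/disks/naa.xxxx:1" "/vmfs/devices/disks/naa.xxxx:1"    (grow VMFS into the enlarged partition)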

The article you shared describes a completely different activity: "Recreating a missing VMFS datastore partition in VMware vSphere 5.x and 6.x (2046610)".

 

Hope this helps!
Regards
Subhajit

I am an HPE employee

If you feel this was helpful please click the KUDOS! thumb below!

***********************************************************************************


Re: MSA 2050 Change Pool Settings - Overcommit Flag cannot be disabled


Hello,

All of your pools and disk-groups are seen as healthy and not in a degraded state, so the drive failure is not currently playing a part in your issue. That said, you should get a replacement drive into the system and assign it as a global spare.

You have 3594.9 GB of space allocated and 3594.9 GB provisioned. When you disable the overcommit feature there is not enough space to move from thinly provisioned to fully provisioned, which means there is likely some data that is not yet fully committed.

I suggest you add more capacity to your existing disk-groups and allow the overprovisioned space to move; then you will be able to switch from thinly provisioned to fully provisioned.
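If it helps to cross-check the numbers from the CLI as well as the SMU, the standard MSA listing commands show where the space has gone (exact output columns vary by firmware version):

# show pools          (total, allocated and available space per pool)
# show disk-groups    (size and free space per disk-group)
# show volumes        (volume size versus allocated size)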

Cheers,
Shawn

I work for Hewlett Packard Enterprise. The comments in this post are my own and do not represent an official reply from HPE. No warranty or guarantees of any kind are expressed in my reply.

Re: MSA 2040 - extending Volume used as datastore in ESXi 6.7


Hi Subhajit,

Yeah, you are correct, the link that I posted does "seem" to have no relationship to what I did. It was only part of the solution. This is exactly what I had to do to solve the problem:

0. Increase the volume size on the MSA using the SMU

Now log on to the ESXi host and issue the following commands (this only has to be done on one ESXi host in the VMware cluster):

1. partedUtil fixGpt "/vmfs/devices/disks/naa.600c0ff00029475d0e5a605b01000000"

2. partedUtil getUsableSectors /vmfs/devices/disks/naa.600c0ff00029475d0e5a605b01000000

Make a note of the large number (the last usable sector).


3. /etc/init.d/storageRM stop

Stops the storage monitor (this has no effect on the running ESXi system).

4. Run this command (it prints each device's partition table and checks for the VMFS LVM signature, 0xc001d00d, at the known offsets):

offset="128 2048"; for dev in `esxcfg-scsidevs -l | grep "Console Device:" | awk {'print $3'}`; do disk=$dev; echo $disk; partedUtil getptbl $disk; { for i in `echo $offset`; do echo "Checking offset found at $i:"; hexdump -n4 -s $((0x100000+(512*$i))) $disk; hexdump -n4 -s $((0x1300000+(512*$i))) $disk; hexdump -C -n 128 -s $((0x130001d + (512*$i))) $disk; done; } | grep -B 1 -A 5 d00d; echo "---------------------"; done

5. partedUtil setptbl /vmfs/devices/disks/naa.600c0ff00029475d0e5a605b01000000 gpt "1 2048 2539061214 AA31E02A400F11DB9590000C2911D1B8 0"

(based on information obtained from running command 4)

6. vmkfstools --growfs "/vmfs/devices/disks/naa.600c0ff00029475d0e5a605b01000000:1" "/vmfs/devices/disks/naa.600c0ff00029475d0e5a605b01000000:1"

7. /etc/init.d/storageRM start
(restarts the storage monitor)

You are also correct: no downtime was required. I was able to do all this with the VMs running on the datastore.

However, the point I'm trying to make is that in a NetApp environment you do not have to extend the VMFS filesystem; it happens automatically. You extend the volume on the array and VMFS is immediately the correct size (no need to extend it manually).

Thanks for taking the time to reply.

Re: MSA 2040 - Understanding Volume Tiering


Ah! I feel a bit daft now!  That makes much better sense.

Thanks for the assistance.

Re: MSA 2040 - Understanding Volume Tiering


Glad to know that you are happy and satisfied with the answer.

Please mark the thread as resolved if there are no more outstanding queries from your end on this issue.

This will help everyone who is following your thread.

 

Hope this helps!
Regards
Subhajit

I am an HPE employee

If you feel this was helpful please click the KUDOS! thumb below!

***********************************************************************************

Re: MSA 2050 Change Pool Settings - Overcommit Flag cannot be disabled


# show disks
Location Serial Number Vendor Rev Description Usage Jobs Speed (kr/min) Size Sec Fmt Disk Group Pool Tier Health
------------------------------------------------------------------------------------------------------------------------------------------------------
1.1 REMOVED HP HPD3 SAS VIRTUAL POOL 15 600.1GB 512n Group1 A Standard OK
1.2 REMOVED HP HPD3 SAS FAILED 15 600.1GB 512n N/A Fault
1.3 REMOVED HP HPD3 SAS VIRTUAL POOL 15 600.1GB 512n Group1 A Standard OK
1.4 REMOVED HP HPD3 SAS VIRTUAL POOL 15 600.1GB 512n Group1 A Standard OK
1.5 REMOVED HP HPD3 SAS VIRTUAL POOL 15 600.1GB 512n Group1 A Standard OK
1.6 REMOVED HP HPD3 SAS VIRTUAL POOL 15 600.1GB 512n Group1 A Standard OK
1.7 REMOVED HP HPD3 SAS VIRTUAL POOL 15 600.1GB 512n Group1 A Standard OK
1.8 REMOVED HP HPD3 SAS VIRTUAL POOL 15 600.1GB 512n Group2 B Standard OK
1.9 REMOVED HP HPD3 SAS VIRTUAL POOL 15 600.1GB 512n Group2 B Standard OK
1.10 REMOVED HP HPD3 SAS VIRTUAL POOL 15 600.1GB 512n Group2 B Standard OK
1.11 REMOVED HP HPD3 SAS VIRTUAL POOL 15 600.1GB 512n Group2 B Standard OK
1.12 REMOVED HP HPD3 SAS VIRTUAL POOL 15 600.1GB 512n Group2 B Standard OK
1.13 REMOVED HP HPD3 SAS VIRTUAL POOL 15 600.1GB 512n Group2 B Standard OK
1.14 REMOVED HP HPD3 SAS VIRTUAL POOL 15 600.1GB 512n Group2 B Standard OK
1.15 REMOVED HP HPD3 SAS VIRTUAL POOL 15 600.1GB 512n Group1 A Standard OK
1.16 REMOVED HP HPD3 SAS GLOBAL SP 15 600.1GB 512n N/A OK
1.17 REMOVED HP HPD3 SAS AVAIL 15 600.1GB 512n N/A OK
1.18 REMOVED HP HPD3 SAS AVAIL 15 600.1GB 512n N/A OK
------------------------------------------------------------------------------------------------------------------------------------------------------
Info: * Rates may vary. This is normal behavior. (2019-03-21 12:34:45)

I had no idea which pool the failed drive was part of.

It was Pool A, and it has now automatically failed over to the 15th disk.

"Are you facing the overcommit disable issue for both pools?"

Yes

Re: MSA 2050 Change Pool Settings - Overcommit Flag cannot be disabled


Will try that as soon as we get the disk replaced.

 

Re: MSA 2050 Change Pool Settings - Overcommit Flag cannot be disabled


Yes, as per the details received, it looks like 1.2 failed, which was part of disk-group Group1, and 1.15 was configured as a global spare. The moment 1.2 failed, 1.15 was used to reconstruct the data of 1.2, so 1.15 is now part of Group1 and therefore part of Pool A.

As mentioned earlier, it would be better to get an SMU v3 home screen screenshot to understand how much space is reserved (RAID parity and metadata), because this space is taken from the disk-group space.

You don't have sufficient space to convert the thin volume to a fully provisioned (thick) volume; that is why the overcommit disable option is not available.

You need to add more drives, create a new disk-group like Group1, and then add it to Pool A. It is always recommended to add the same number of drives, configured with the same RAID level as the existing disk-group in the same tier, to get better performance for that pool. Keep in mind that you can't extend a disk-group here the way you could expand a vdisk on a linear MSA array.
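For reference, adding a new virtual disk-group to Pool A from the CLI looks roughly like the sketch below. The disk IDs, RAID level and pool here are placeholders for illustration only, and the exact parameter syntax can vary by firmware, so check "help add disk-group" on your array first:

# add disk-group type virtual disks 1.20-1.27 level raid5 pool a

Using the same number of drives and the same RAID level as the existing Group1 keeps the stripe widths matched, which is what gives the pool balanced performance.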

 

Hope this helps!
Regards
Subhajit

I am an HPE employee

If you feel this was helpful please click the KUDOS! thumb below!

***********************************************************************************


HP MSA 2040 - Separating VMDK storage from NAS (iSCSI) storage


Hi,

I present storage from my 2040 to my ESXi hosts; they then mount it as a datastore.

I then build VMs on that datastore. The VMs' local disks live on that datastore as VMDKs (for example, the C: drive is a VMDK).

However, some of my VMs require their own iSCSI storage (for example, the S: drive is an iSCSI LUN).

I was thinking of doing the following: Pool A is reserved for VMDK storage (SSD disks),

and Pool B is reserved for the NAS (iSCSI) storage presented directly to the VMs (SSD disks).

Is that a good way to go about it?

ta.

 

 

Re: MSA 2040 - VMWARE 6.7 - ReadCache


Hi There and thanks for replying

The firmware mentioned, GL225R003, was released in December 2017 (we are using that version).

ESXi 6.7 was not released until April 2018, so it would have been impossible for the MSA firmware document to be aware of ESXi 6.7.

Having said that, would you think a firmware upgrade is required? GL225R003 is the latest version.

regards

 

Re: MSA 2050 Change Pool Settings - Overcommit Flag cannot be disabled


Thank you for sharing the Home Screen details.

According to that, the total virtual disk-group size is 8401 GB; of that, 4725 GB is allocated and 2462 GB is unallocated. That means the reserved space = 8401 - (4725 + 2462) = 1214 GB.

There are two pools and two disk-groups, so roughly 607 GB is reserved per disk-group. This is why the usable disk-group size is reduced to 3594 - 607 = 2987 GB. Your thin-provisioned volume size is 3594 GB, which is more than the usable disk-group size. That is the main reason the overcommit option is greyed out when you try to disable it; under the current conditions it is not possible.
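As a quick sanity check of those numbers (plain shell arithmetic using the values from the home screen, nothing array-specific):

total=8401; allocated=4725; unallocated=2462
echo $(( total - allocated - unallocated ))          # 1214 GB reserved across the two disk-groups
echo $(( (total - allocated - unallocated) / 2 ))    # ~607 GB reserved per disk-group
echo $(( 3594 - 607 ))                               # ~2987 GB usable per disk-group, less than the 3594 GB thin volume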

 

Hope this helps!
Regards
Subhajit

I am an HPE employee

If you feel this was helpful please click the KUDOS! thumb below!

***********************************************************************************

 

 

Re: MSA 2040 - VMWARE 6.7 - ReadCache


Status Update - I added a new disk group to Pool A.

Pool A already holds the two ESXi datastores. I rebooted one of my ESXi hosts and, bang, the datastores will no longer mount.

Looking in vmkernel.log I see the following messages:


2019-03-21T16:09:32.361Z cpu7:2097364)WARNING: Vol3: 3102: XXXXXX1/5c91215c-80529b43-df3a-5cb901cf9580: Invalid physDiskBlockSize 512

2019-03-21T16:09:22.301Z cpu2:2097364)FSS: 6092: No FS driver claimed device '5c91218e-9ca78e9c-4f5d-5cb901cf9580': No filesystem on the device

 

Is this problem caused by an incompatibility between the MSA firmware GL225R003 and ESXi 6.7?

Re: MSA 2040 - VMWARE 6.7 - ReadCache


 

I don't think this has anything to do with the MSA. I searched Google and found many threads reporting that a datastore goes missing or gets corrupted after a reboot of ESXi 6.7. It looks like a bug in ESXi 6.7:

https://communities.vmware.com/thread/597817

https://communities.vmware.com/thread/597513

https://www.reddit.com/r/vmware/comments/9j8k6o/esxi_vmfs6_datastore_corruption_after_host_reboot/

 

Hope this helps!
Regards
Subhajit

I am an HPE employee

If you feel this was helpful please click the KUDOS! thumb below!

***********************************************************************************


Re: MSA 2050 Change Pool Settings - Overcommit Flag cannot be disabled


Hi, apologies, but it is very hard to follow these calculation criteria.

I have one disk-group in RAID 5 assigned to Pool A, made of 8 x 600 GB disks, which totals ~4.2 TB net.

From this Pool A we publish a volume of 3593.9 GB as LUN 1 to VMware, and we use ~70% of that.

Disk Group 1 shows 1542.5 GB free.

Can you tell me why it is not possible to thick-provision this volume?

It is hard to follow the new way HP obliges us to work with the virtual arrays; the fact that the linear capability was removed is very annoying, and even more annoying is that we cannot add single disks to expand capacity.

 

 

 

Re: MSA 2040 - VMWARE 6.7 - ReadCache


Ta,

I'll raise a support call and see where it goes.

Re: MSA 2050 Change Pool Settings - Overcommit Flag cannot be disabled


Yes, there is a difference between a linear array and a virtual array.

In a linear array all volumes are thick volumes, but in a virtual array all volumes are thin volumes by default.

These days most people are interested in thin volumes, because space is allocated on demand from the back end rather than the whole capacity being committed on day one, as with a thick volume, which wastes a lot of space.

Virtualization helps save cost, along the lines of a pay-as-you-use model.

HPE brought this virtualization technology to entry-level storage like the MSA, which is really great.

Apologies for the earlier calculation, which was not correct; I have corrected it.

Now, coming to your setup: you effectively have 7 x 600 GB of usable drive capacity, which is approximately 4.2 TB of physical space, but when you create a RAID 5 disk-group some space is reserved for RAID parity and metadata, so the effective space you get is 4200 - 606 = 3594 GB.

Out of this virtual disk-group space, 2052 GB is already allocated and 1541 GB is unallocated.

Virtual volumes make use of a method of storing user data in virtualized pages. These pages may be spread throughout
the underlying physical storage in a random fashion and allocated on demand. Virtualized storage therefore has a
dynamic mapping between logical and physical blocks.

Linear volumes make use of a method of storing user data in sequential fully allocated physical blocks. These blocks have
a fixed (static) mapping between the logical data presented to hosts and the physical location where it is stored.

This is the main reason: when you try to disable the overcommit option, the system looks for sequential blocks or pages to allocate, but it cannot find them, and that is why disabling overcommit is not possible in your case.

So the solution is: if you want to convert the thin volume to a thick volume, you need the same amount of space available as sequential physical blocks, i.e. 3594 GB of sequential blocks.

 

Hope this helps!
Regards
Subhajit

I am an HPE employee

If you feel this was helpful please click the KUDOS! thumb below!

***********************************************************************************

Re: MSA 2040 - extending Volume used as datastore in ESXi 6.7


Hi There

This problem (extending a volume containing ESXi 6.7 datastores) might be an incompatibility between ESXi 6.7 and the firmware on the MSA 2040, GL225R003. (I have a similar issue when I add a new disk-group to a pool.)

Checking with VMware/HPE.

 

Re: MSA 2050 Change Pool Settings - Overcommit Flag cannot be disabled


Does this mean we have to add 8 x 600 GB disks to each pool in order to switch them to thick provisioning? Or fewer?

 
