Channel: All MSA Storage posts

Re: SAN MSA2040

Hi Raji,

Unfortunately, no event is recorded under the system Events for unsuccessful logins, so there is no way to configure an email alert for failed login attempts.
I am not sure whether this will be included in a future firmware release.

Regards,


PROVISIONING OF LUN FROM HP STORAGE TO ORACLE SERVERS

Hi,

I have two environments in my DC: one is HP and the other is Oracle. Each has its own separate storage setup and both are functional. My question is: is it possible for me to provision a LUN for the Oracle servers in my Oracle environment from the HP storage in my HP environment? If yes, what do I need to achieve that, and how best can I achieve it?

 

Thanks

 

Taiwo

 

Re: PROVISIONING OF LUN FROM HP STORAGE TO ORACLE SERVERS

Hello Taiwo,

You state you have two environments in your data center. Can one system in one environment talk to a system in the other environment? If yes, then you should be able to create a LUN on the array and map it to the servers running the Oracle database.

If the two environments cannot communicate, then you will have to enable that communication before you can have a LUN on the array available to Oracle.
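If it helps, the rough shape of doing that from the MSA CLI is below. This is only an illustrative sketch: the volume name, size, and initiator ID are placeholders, and the exact parameter syntax varies by MSA model and firmware, so check the CLI Reference Guide for your array before running anything.

# create volume pool A size 500GB OracleData01
# map volume OracleData01 access read-write lun 10 initiator 21:00:00:24:ff:4b:aa:bb

The Oracle hosts also need a physical path to the MSA (FC zoning, iSCSI, or SAS, depending on the model) and multipathing configured on the OS side before the new LUN will be visible to them.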

Cheers,
Shawn

I work for Hewlett Packard Enterprise. The comments in this post are my own and do not represent an official reply from HPE. No warranty or guarantees of any kind are expressed in my reply.

MSA 2040 - VMWARE 6.7 - ReadCache

Hi There

Just in case anyone else has this trouble. I added a read cache (SSD disk) to Pool A on my MSA 2040. Pool A held the datastore for my VMware 6.7 environment and had been up and running for months. I decided to add the read cache to improve read performance (as best practice suggests).

As a test, I rebooted one of the ESXi hosts after doing this. The datastore would no longer mount. The vmkernel log showed the error "Invalid physDiskBlockSize 512".

After much faffing about, I decided to remove the read cache, and the datastore mounted correctly.
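If anyone wants to compare what block sizes ESXi reports for the device with and without the read cache attached, something along these lines should show it (the naa ID below is a placeholder, and as far as I know the capacity listing is available on ESXi 6.5 and later, so verify on your build):

# esxcli storage core device capacity list | grep naa.xxxx
# grep physDiskBlockSize /var/log/vmkernel.log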

FYI.

MSA 1050 SAS connected to VMware 6.7 U1

VMware was installed using the latest HPE customized ISO.

Having an issue with the MSA 1050 and vSphere.

After configuring the MSA, I was able to create datastores in vCenter.

Created a new virtual machine and patched it with no issues.

When deploying a template, it hangs and never completes.

Storage vMotion between datastores hosted on the MSA has the same issue.

I am able to deploy the template to an iSCSI datastore on the old SAN and then Storage vMotion it to a datastore on the MSA.

Once on a datastore hosted by the MSA, no clone or Storage vMotion operations succeed.

No errors; they just hang there. A small VM with only a 100GB drive sat for 14 hours until I canceled it.

Going to the iSCSI store took 4 minutes.

I have a case open with VMware, but thought I'd ping here as well.

Been at a standstill for several days.

MSA 2052 advice for Hyper-V 2016 cluster

Hi All,

We have the following hardware for our virtualization:

2x HPE ProLiant DL380 Gen10
2x Cisco MDS9481s
1x MSA 2052 (8x 800 GB SSD and 16x 600 GB SAS 15k), two controllers

The current config of the MSA is:
The MSA is connected via FC to the Cisco fabric and then to the HBAs of the ProLiants.
There are 4 disk groups:

- Controller A: 4x SSD RAID10 and 8x SAS 15k RAID10
- Controller B: 4x SSD RAID10 and 8x SAS 15k RAID10
- There are 10 CSVs configured:
2x SSD and 3x SAS per storage pool

QUESTION:
- How do we get the most performance out of the MSA for our VMs and file servers?

We have tested and found that the SAS pool was faster than the SSD pool.
Is this normal?

- How about failover?
- Should I balance the volumes across both controllers?
- Is there a best practice for the CSV (volume) size? We use 500 GB for our SSD volumes and 750 GB for the SAS volumes.

MSA 2050 Change Pool Settings - Overcommit Flag cannot be disabled

Hi,

If I try to Change Pool Settings and clear the "Enable overcommitment of pool" flag, it is grayed out and I cannot disable it.

Running VL100P001 with 2 Pools  A and B, one disk group assigned to each pool and one LUN in each pool/group.

Is there any bug or any limitation why overcommit cannot be disabled?

Our current usage is well below the allocated storage.

Thank you

 

 

Re: MSA 1050 SAS connected to VMware 6.7 U1

Hello,

Latest firmware on the HBAs in the hosts fixed the clone VM issue.

Only issue now is that the SAN connections show as auto-negotiated at 6Gb.

Another MSA 1050 with the exact same hardware, cables, drives, hosts, HBAs and firmware levels is connecting at 12Gb.

I have powered off the hosts and restarted storage on the SAN, and after powering everything back up it's still at 6Gb.

Is there a magic setting somewhere? On the array controller maybe?

Array controllers are E208E-P SR Gen10 with latest firmware.

 


Re: MSA 2050 Change Pool Settings - Overcommit Flag cannot be disabled

Can you please provide a GUI screenshot of the Pool usage?

You need to provide size information at the Pool level and Disk Group level to get the full picture.

# show pools

# show disk-groups

# show snapshot-space

As per the SMU guide: "If you try to disable overcommitment and the total space allocated to thin-provisioned volumes exceeds the physical capacity of their pool, an error will state that there is insufficient free disk space to complete the operation and overcommitment will remain enabled. If your system has a replication set, the pool might be unexpectedly overcommitted because of the size of the internal snapshots of the replication set."

There is one known issue:

Issue: When downloading CSV data from the "Pools" table, the fifth column is incorrectly labeled as "Allocated Size", when it should be "Available Size".
Workaround: After downloading CSV data from the Pools table, change the label of the fifth column from "Allocated Size" to "Available Size".

 

You can try restarting both management controllers.

You can also try updating the controller firmware to the latest version, VL270R001-01.

 

Hope this helps!
Regards
Subhajit

I am an HPE employee

If you feel this was helpful please click the KUDOS! thumb below!

***********************************************************************************

Re: MSA 1050 SAS connected to VMware 6.7 U1

Can you please provide the command-line output of the command below from both MSA 1050 systems?

# show ports detail

 

Hope this helps!
Regards
Subhajit

I am an HPE employee

If you feel this was helpful please click the KUDOS! thumb below!

***********************************************************************************

Re: MSA 2050 Change Pool Settings - Overcommit Flag cannot be disabled

Hi,

here it is.

[Attached screenshot: poolusage.jpg]

 

# show pools
Name Serial Number Blocksize Total Size Avail Snap Size OverCommit Disk Groups Volumes Low Thresh Mid Thresh High Thresh Sec Fmt Health Reason Action
---------------------------------------------------------------------------------------------------------------------------
A 00c0ff3c1efc0000efb4345b01000000 512 3594.9GB 1543.3GB 0B Enabled 1 1 50.00 % 75.00 % 94.02 % 512n OK
B 00c0ff3c1fee00007db5345b01000000 512 3594.9GB 922.0GB 0B Enabled 1 1 50.00 % 75.00 % 94.02 % 512n OK
---------------------------------------------------------------------------------------------------------------------------
Success: Command completed successfully. (2019-03-20 08:42:35)


# show disk-groups
Name Blocksize Size Free Pool Tier % of Pool Own Pref RAID Disks Spr Chk Status Jobs Job% Serial Number Spin Down SD Delay Sec Fmt Health Reason Action
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Group1 512 3594.9GB 1543.3GB A Standard 100 A A RAID5 7 0 64k FTOL 00c0ff3c1efc0000eeb4345b00000000 Disabled 0 512n OK
Group2 512 3594.9GB 922.0GB B Standard 100 B B RAID5 7 0 64k FTOL 00c0ff3c1fee00007bb5345b00000000 Disabled 0 512n OK
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Success: Command completed successfully. (2019-03-20 08:44:01)

# show snapshot-space
Snapshot Space
--------------
Pool: A
Limit (%Pool): 10%
Limit Size: 359.4GB
Allocated (%Pool): 0.0%
Allocated (%Snapshot Space): 0.0%
Allocated Size: 0B
Low Threshold (%Snapshot Space): 75%
Middle Threshold (%Snapshot Space): 90%
High Threshold (%Snapshot Space): 99%
Limit Policy: Notify Only

Snapshot Space
--------------
Pool: B
Limit (%Pool): 10%
Limit Size: 359.4GB
Allocated (%Pool): 0.0%
Allocated (%Snapshot Space): 0.0%
Allocated Size: 0B
Low Threshold (%Snapshot Space): 75%
Middle Threshold (%Snapshot Space): 90%
High Threshold (%Snapshot Space): 99%
Limit Policy: Notify Only

Success: Command completed successfully. (2019-03-20 08:44:35)

 

Re: MSA 2050 Change Pool Settings - Overcommit Flag cannot be disabled

I forgot to ask for the output of the command below:

# show volumes

Also, please provide a screenshot of the error you are getting, and mention the steps you are following as well.

 

Hope this helps!
Regards
Subhajit

I am an HPE employee

If you feel this was helpful please click the KUDOS! thumb below!

***********************************************************************************

MSA 2040 - extending Volume used as datastore in ESXi 6.7

Hi All,

Recently I extended the size of a volume on the MSA 2040.  The volume is used as a DataStore in my ESXi 6.7 environment.  Using vCenter 6.7.

I assumed that this increase would automatically be seen in VMware. However, it was not.

On closer inspection, using the vSphere web client, I could see that the "device" showed the correct new increased size, but the datastore on this device was still the old size.

Eventually I got this solved using the steps in this post: https://kb.vmware.com/s/article/2046610 . Very tricky steps, and if you get them wrong they could have disastrous consequences.
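For anyone searching later, the commands in that KB are roughly of this shape (the naa ID, partition number and sector values below are placeholders; take the real values from the KB's steps and your own partedUtil output):

# vmkfstools -P "/vmfs/volumes/MyDatastore"
# partedUtil getptbl "/vmfs/devices/disks/naa.xxxx"
# partedUtil resize "/vmfs/devices/disks/naa.xxxx" 1 2048 <newEndSector>
# vmkfstools --growfs "/vmfs/devices/disks/naa.xxxx:1" "/vmfs/devices/disks/naa.xxxx:1"

As noted above, getting the sector numbers wrong can damage the partition, so double-check everything before the resize step.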

However, I come from a NetApp environment (an enterprise NetApp environment), and when a LUN was extended on the storage side the new size automatically appeared in VMware (and that was VMware 5.5!).

Is this (having to manually extend the datastore using the ESXi command line) a limitation of the MSA 2040, or did I get unlucky?

Ta.

Re: MSA 2050 Change Pool Settings - Overcommit Flag cannot be disabled

Hi, here it is

 

# show volumes
Pool Name Total Size Alloc Size Type Health Reason Action
-----------------------------------------------------------
A LUN1 3593.9GB 2051.5GB base OK
B LUN2 3593.9GB 2672.8GB base OK
-----------------------------------------------------------
Success: Command completed successfully. (2019-03-20 09:45:32)

[Attached screenshot: Clipboard01.jpg]

Could it be that the failed disk in the array is preventing the change?

[Attached screenshot: disks.jpg]

Disk Performance - VMWARE 6.7, MSA2040 SAS and software iSCSI

Hi There,

Is anyone else out there using an HPE c7000 and MSA 2040 in a software iSCSI implementation? (I have two HPE 5700 switches providing the storage network.) One 10Gb connection (with a standby) to the storage using SFP-DAC cables. I'm using VMware 6.7. The VMware datastore is on SAS 10K disks in a RAID 6 config. The blades are BL460c Gen9s. (Power is set to MAX in the BIOS.)

I've built a test VM: Windows 2012 server, 1 CPU, 4GB RAM, one 40GB disk.

I then run this test.

DiskSpd.exe -c15G -d300 -r -w40 -t8 -o32 -b64K -Sh -L c:\temp\testfile.dat
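For anyone not familiar with DiskSpd's switches, a quick breakdown of that command line (it matches the parsed input parameters shown in the results reply below):

-c15G   create a 15 GiB test file
-d300   run for 300 seconds
-r      random I/O
-w40    40% writes / 60% reads
-t8     8 threads against the file
-o32    32 outstanding I/Os per thread
-b64K   64 KiB block size
-Sh     disable software caching and hardware write caching
-L      measure latency
c:\temp\testfile.dat   the target file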

What performance are you getting? I'm not sure what is good for an MSA 2040.

While running the test above I also run Performance Monitor and specifically look at the counter "Average Disk sec/Transfer". The average is always around 0.262, when Microsoft says it should be around 0.005. That's what's really bugging me.
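For reference, that Perfmon counter is in seconds, so 0.262 s ≈ 262 ms average latency per transfer, which lines up with the ~263 ms AvgLat that DiskSpd reports in the results below; Microsoft's 0.005 guideline corresponds to about 5 ms.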

I'd appreciate it if you could run the test (as close to the config above as possible would be great).


Re: Disk Performance - VMWARE 6.7, MSA2040 SAS and software iSCSI

My results

 


Command Line: DiskSpd.exe -c15G -d300 -r -w40 -t8 -o32 -b64K -Sh -L c:\temp\testfile.dat

Input parameters:

timespan: 1
-------------
duration: 300s
warm up time: 5s
cool down time: 0s
measuring latency
random seed: 0
path: 'c:\temp\testfile.dat'
think time: 0ms
burst size: 0
software cache disabled
hardware write cache disabled, writethrough on
performing mix test (read/write ratio: 60/40)
block size: 65536
using random I/O (alignment: 65536)
number of outstanding I/O operations: 32
thread stride size: 0
threads per file: 8
using I/O Completion Ports
IO priority: normal

System information:

computer name:
start time: 2019/02/20 10:26:53 UTC

Results for timespan 1:
*******************************************************************************

actual test time: 300.00s
thread count: 8
proc count: 1

CPU | Usage | User | Kernel | Idle
-------------------------------------------
0| 3.91%| 0.74%| 3.17%| 96.09%
-------------------------------------------
avg.| 3.91%| 0.74%| 3.17%| 96.09%

Total IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 2384723968 | 36388 | 7.58 | 121.29 | 263.452 | 96.438 | c:\temp\testfile.dat (15GiB)
1 | 2387214336 | 36426 | 7.59 | 121.42 | 263.080 | 97.048 | c:\temp\testfile.dat (15GiB)
2 | 2381905920 | 36345 | 7.57 | 121.15 | 263.822 | 97.947 | c:\temp\testfile.dat (15GiB)
3 | 2390556672 | 36477 | 7.60 | 121.59 | 262.768 | 96.274 | c:\temp\testfile.dat (15GiB)
4 | 2384592896 | 36386 | 7.58 | 121.29 | 263.433 | 97.782 | c:\temp\testfile.dat (15GiB)
5 | 2380398592 | 36322 | 7.57 | 121.07 | 263.954 | 97.094 | c:\temp\testfile.dat (15GiB)
6 | 2386821120 | 36420 | 7.59 | 121.40 | 263.202 | 97.589 | c:\temp\testfile.dat (15GiB)
7 | 2378235904 | 36289 | 7.56 | 120.96 | 264.204 | 97.470 | c:\temp\testfile.dat (15GiB)
-----------------------------------------------------------------------------------------------------
total: 19074449408 | 291053 | 60.64 | 970.17 | 263.489 | 97.208

Read IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 1430781952 | 21832 | 4.55 | 72.77 | 248.969 | 90.696 | c:\temp\testfile.dat (15GiB)
1 | 1434255360 | 21885 | 4.56 | 72.95 | 248.666 | 93.539 | c:\temp\testfile.dat (15GiB)
2 | 1427243008 | 21778 | 4.54 | 72.59 | 248.341 | 92.967 | c:\temp\testfile.dat (15GiB)
3 | 1431175168 | 21838 | 4.55 | 72.79 | 248.458 | 92.297 | c:\temp\testfile.dat (15GiB)
4 | 1436876800 | 21925 | 4.57 | 73.08 | 248.435 | 93.917 | c:\temp\testfile.dat (15GiB)
5 | 1425080320 | 21745 | 4.53 | 72.48 | 248.747 | 91.595 | c:\temp\testfile.dat (15GiB)
6 | 1426587648 | 21768 | 4.53 | 72.56 | 248.319 | 94.191 | c:\temp\testfile.dat (15GiB)
7 | 1420689408 | 21678 | 4.52 | 72.26 | 248.630 | 90.786 | c:\temp\testfile.dat (15GiB)
-----------------------------------------------------------------------------------------------------
total: 11432689664 | 174449 | 36.34 | 581.49 | 248.571 | 92.510

Write IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 953942016 | 14556 | 3.03 | 48.52 | 285.175 | 100.626 | c:\temp\testfile.dat (15GiB)
1 | 952958976 | 14541 | 3.03 | 48.47 | 284.774 | 98.193 | c:\temp\testfile.dat (15GiB)
2 | 954662912 | 14567 | 3.03 | 48.56 | 286.967 | 100.603 | c:\temp\testfile.dat (15GiB)
3 | 959381504 | 14639 | 3.05 | 48.80 | 284.116 | 98.112 | c:\temp\testfile.dat (15GiB)
4 | 947716096 | 14461 | 3.01 | 48.20 | 286.173 | 99.128 | c:\temp\testfile.dat (15GiB)
5 | 955318272 | 14577 | 3.04 | 48.59 | 286.639 | 100.576 | c:\temp\testfile.dat (15GiB)
6 | 960233472 | 14652 | 3.05 | 48.84 | 285.314 | 98.356 | c:\temp\testfile.dat (15GiB)
7 | 957546496 | 14611 | 3.04 | 48.70 | 287.309 | 102.341 | c:\temp\testfile.dat (15GiB)
-----------------------------------------------------------------------------------------------------
total: 7641759744 | 116604 | 24.29 | 388.68 | 285.808 | 99.758

 

total:
%-ile | Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
min | 3.384 | 2.193 | 2.193
25th | 204.069 | 255.489 | 214.836
50th | 230.440 | 281.170 | 254.376
75th | 269.826 | 314.423 | 295.324
90th | 338.094 | 352.563 | 347.093
95th | 408.343 | 386.474 | 398.219
99th | 622.458 | 750.348 | 654.924
3-nines | 925.564 | 1079.021 | 1047.529
4-nines | 1345.373 | 1172.517 | 1276.759
5-nines | 1696.298 | 1193.568 | 1695.573
6-nines | 1822.909 | 1218.004 | 1822.909
7-nines | 1822.909 | 1218.004 | 1822.909
8-nines | 1822.909 | 1218.004 | 1822.909
9-nines | 1822.909 | 1218.004 | 1822.909
max | 1822.909 | 1218.004 | 1822.909

MSA 2040 - Understanding Volume Tiering

Hi There

I've been reading the best practice guide for the 2040 and I'm stumped by volume tiering.

The document seems to be suggesting that "busy" data will "AUTOMATICALLY" move to volumes with a higher tier, which as far as I can see is physically impossible.

Quote from the best practice guide: "As data is later accessed by the host application, data will be moved to the most appropriate tier."

Imagine there are two volumes: one is "No Affinity" and one is "Performance". These two volumes are presented to a Windows 2016 server and mounted as the D: and E: drives. Imagine that the data on the D: drive started to get very busy. Is the best practice document trying to tell me that the data will be moved from the D: drive to the E: drive? That to me seems impossible; it's actually something you'd never want to happen.

ta

 

 

Re: MSA 2040 - Understanding Volume Tiering

 

To use "Volume Tiering" would anyone know if the Pool that the volume sites on - does that Pool have to be configured with "Standard" and "Performance" disk groups? (forget archive for the moment)

If it does then "Volume Tiering" ....finally.....makes sence.

Re: MSA 2050 Change Pool Settings - Overcommit Flag cannot be disabled

It would be better to get an SMU V3 Home screen screenshot to understand how much Reserved space (RAID parity and metadata) there is, because this space is taken from the disk-group space only.

It would also be good to get the "show disks" command output, but don't share drive serial numbers in public; remove or hide all serial numbers first.

In your case, the total Pool A size is 3594.9GB because disk group Group1 is 3594.9GB, and the provisioned size of volume LUN1 is 3593.9GB. If you try to disable the overcommit option in this condition, you are asking to allocate the full space for the volume, which means it will no longer be a thin-provisioned volume and will become a thick (fully allocated) volume. That means LUN1 will require 3593.9GB of space from the pool. But if some space has already been eaten up by Reserved space, there will be a shortfall of the space LUN1 needs to convert from thin to thick.
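To make that concrete with the numbers from the outputs above (the Reserved space figure itself is not shown in those outputs, so treat this only as an illustration):

Pool A total size:                    3594.9 GB
LUN1 provisioned size:                3593.9 GB
Headroom if LUN1 is fully allocated:  3594.9 - 3593.9 = 1.0 GB

If the Reserved space mentioned above already consumes more than that 1.0 GB, disabling overcommit cannot succeed. The same logic applies to Pool B, where LUN2 is also provisioned at 3593.9 GB.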

Another reason could be the failure of one disk.

I do not know which pool the failed drive is part of.

Are you facing the overcommit disable issue for both pools?

 

Hope this helps!
Regards
Subhajit

I am an HPE employee

If you feel this was helpful please click the KUDOS! thumb below!

***********************************************************************************

Re: MSA 2040 - Understanding Volume Tiering

First of all, there is nothing called "Volume Tiering".

The concepts are called "Automated Tiering" and "Volume Tier Affinity".

Now, in MSA virtualization, if I assume you have all 3 types of drives (SSD, Enterprise SAS and Midline SAS), then you can have Performance, Standard and Archive tiers.

We create disk groups per drive type. In this example I assume we have created three different disk groups, one for each of the 3 drive types.

I assume all 3 of these disk groups are part of the same Pool.

Now, when we create a volume of a particular size, the MSA just creates a volume of that size by taking space from the Pool. You will not know from which tier or disk group the space or pages will be allocated. This is virtualization.

Now let's say you have created two volumes, like the D and E drives you described. With Automated Tiering or Volume Tier Affinity, data never moves from one volume to another volume. With Automated Tiering, pages are moved from one tier to another based on usage and how hot the pages are. You can go through the same Best Practice guide and read the details of "Automated Tiering" for more on this.

Now, coming to "Volume Tier Affinity": this is not like tier pinning, where pages must move to a particular tier. I would suggest going through the same Best Practice guide and reading "Mechanics of Volume Tier Affinity"; this will give a clear idea of how it works.

 

Hope this helps!
Regards
Subhajit

I am an HPE employee

If you feel this was helpful please click the KUDOS! thumb below!

***********************************************************************************
