Still, the MSA 2070 does not have 32Gb FC connectivity.
The Gen11 servers only list 32Gb FC HBAs.
The customer is asking: why are you using a 32Gb HBA when your storage doesn't support it?
Hi there,
I'd appreciate assistance with adding an HPE storage volume to a VMware ESXi 8 host on an HP ProLiant DL360 Gen11 via an iSCSI connection. I've created two volumes for each storage pool and allocated two host IPs to controllers A and B. Then I enabled iSCSI on the HPE storage.
On the ESXi host I created an iSCSI configuration and added the controller IP as a static target. Not sure what else to do.
Thanks
Hi,
You are right.
Server: DL380 Gen11.
Only 32Gb HBAs are supported. I installed an AJ764 8Gb HBA and a QW972 16Gb HBA, and neither worked.
These cards should be allowed to work on the server, because someone may have older 8Gb or 16Gb storage or SAN switches.
Of course, there is one point:
if you have a 32Gb HBA card, for example the SN1610 2-port 32Gb, it can be connected to a SAN switch or storage running at a lower speed. But HPE should include an update in the new version of the Gen11 SPP to allow 8Gb and 16Gb HBA cards to work on the server.
See: MSA Gen6: Basic Management with a 1060, 2060 and 2062
By the way, the best practice is to use only one pool unless you have a very big MSA.
Cali
Good day.
Running the verify vdisk-group command on your MSA 2040 will not directly change the state of the vdisk, but it is useful for validating the integrity of the vdisk group and checking for errors or inconsistencies. It verifies the consistency of the vdisks within the group and can help identify issues that might have been caused by the earlier controller swap or the FTOL state.
FYI, the verify vdisk-group command is useful for diagnostics, but it may not directly resolve the FTOL error or fix the capacity issue. It's safe to run, but proceed with caution and consider your backup and recovery options if necessary.
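For reference, a minimal invocation from an SSH session to the management IP might look like the following, assuming a vdisk named vd01 (on older linear firmware the command form is verify vdisk rather than verify vdisk-group; check your firmware's CLI reference):

# verify vdisk vd01
# show vdisks

The Jobs and Job% columns of the show vdisks output let you track the progress of the verify job.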
Hi Mr_Techie,
Thank you for your response.
How do I fix the problem of FTOL and vdisk groups showing 0 capacity?
I need guidance and help. HPE doesn't support me because the device is old.
Can you help me?
I did everything I could think of, but the vdisks are still not accessible.
I started the recovery about 14 days ago, but it is still ongoing and will take another 3 months. That is terrible.
I am doing volume recovery with recovery software. I don't know why my storage got corrupted like this; apparently the virtual pool structure is damaged. I wish someone would guide me here. All my data is on this storage, and I am in serious trouble.
Did you map the volumes to the iSCSI ports on the MSA?
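If not, a rough sequence would look like the following, assuming a volume named Vol1 and a software iSCSI adapter named vmhba64 (both hypothetical, and the exact map volume syntax varies by MSA firmware). On the MSA CLI, check the existing mappings and map the volume to the host's IQN:

# show maps
# map volume Vol1 access read-write lun 1 initiator <ESXi-host-IQN>

Then rescan the adapter on the ESXi host so the LUN is discovered:

esxcli storage core adapter rescan --adapter=vmhba64

After the rescan, the LUN should show up as a device that can be formatted as a VMFS datastore.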
An HP MSA 2040 array has failed, due to either a full disk or a firmware bug.
It was giving an FTOL error on Pool A.
Controller A was swapped into slot B; it made no difference.
The vdisk hard drives went into quarantine, and the controllers were returned to their original slots.
The dequarantine and trust commands were executed, and the capacity of 6 of the 7 vdisks dropped to zero.
But the volume capacity did not change.
The hard drives are healthy.
The volumes show their capacity correctly, but it is not possible to access the actual data. Is there a command to repair the vdisks so that they become operational again?
Currently I am recovering the data with recovery software, but it is extremely time-consuming; it will take about 3 months.
Hi,
I have an MSA 2050 that I need to configure for a 2-node Hyper-V cluster. How should I configure RAID on the MSA? It is connected to both servers through Fibre Channel.
What are the recommendations for creating disk groups? Should I create two disk groups with the configuration below?
1. RAID 1 Group for VM Operating Systems (Storage A)
2. RAID 5 or 6 Group for Data (Storage B)
Thanks.
Dear Team,
We have an MSA 2040 storage system with a capacity of around 30TB. Recently we observed that one of the disks had an issue, and we replaced it with a similar, compatible disk. After the replacement, the disk went into RCON mode and stayed there for more than 4 days with 0% progress. We have two pools, and the issue is with Pool B, which holds 29TB of data; the replaced disk belongs to Pool B. I observed that this pool went into a fault state and its disk drives are degraded. All the virtual disks went offline, and my critical applications are down because of this.
The error message says the virtual pool is offline and all the data is lost. I did not perform a controller restart, as it was recommended to take a backup before performing that task.
My questions are:
1. There are no global spares defined in our storage. Once the failed disk is replaced with a new one, do I need to configure the new disk as a spare to start reconstruction?
2. As per the message, the data is lost. Is there any chance we can retrieve the data or take a backup in the current condition? If so, how?
3. Since the data is already lost (per the error description), shall I perform a controller reboot, one controller at a time?
Please look into this as a priority, as there are critical applications running on this system.
I appreciate your quick response; let me know if you need any additional data.
Error messages (screenshots attached):
Pool Fault Message
Disk Degraded Message
Hi @ariapostmail,
The HPE chat support team should have reached out to you through email.
Please respond to the email and we will see how best we can assist you.
We will need an HPE support case, and logs to review, in order to assist you.
Please note that if the pools are already reporting a healthy state, there won't be much more we can do from our end. Using the trust command without reviewing logs could have caused corruption at the block or file-system level.
Hi,
I see two screenshots in the post.
One screenshot reports controller A as down, with firmware version TS251P006.
The other reports both controllers up, with the latest firmware, TS252P005, installed.
Basic troubleshooting for a down controller would be:
1. Try to restart the down storage controller by issuing the command below from a PuTTY SSH session to the management IP of the working controller:
restart sc a
2. If the controller still fails to come up, physically pull controller A out of its slot by 1-2 inches and reseat it.
3. If storage controller A still fails to come up, it will require replacement.
Controller replacement instructions:
https://support.hpe.com/hpesc/public/docDisplay?docId=c03793459&docLocale=en_US
As the controller firmware is not very old, you could enable partner firmware update (PFU) and then replace storage controller A.
This ensures that the firmware of controller B and the new controller A is synced automatically.
The partner firmware update sync may take around 30-45 minutes.
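For reference, PFU can be checked and enabled from the CLI; the parameter name below matches the P2000/MSA advanced settings, but verify it against your firmware's CLI reference:

# show advanced-settings
# set advanced-settings partner-firmware-upgrade enabled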
Controller replacement is an online activity and should be performed with the array powered on.
Scheduling downtime for the host servers that access the MSA volumes would still be a good idea during the replacement, as the storage device is quite old.
Please note that the P2000 G3 is past its end of support life, and the chances of obtaining a spare controller are quite low.
See: HPE MSA 1050/2050/2052 Best practices
In short: use only one disk group, and if you have a minimum of 12 disks, use RAID DP+.
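As a sketch, a single disk group spanning 12 disks could be created from the CLI as follows; the disk range 1.1-1.12, pool A, and the raid6 level are assumptions (substitute the level the best-practices guide recommends for your model and disk count):

# add disk-group type virtual disks 1.1-1.12 level raid6 pool a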
Cali
Hi, one of my vdisks is showing degraded status, with the message below:
Component ID: Vdisk vd01
Health: Degraded
Health Reason: The vdisk is not fault tolerant. Reconstruction cannot start because there is no spare disk available of the proper type and size.
Health Recommendation: - Replace the failed disk.
- Configure the new disk as a spare so the system can start reconstructing the vdisk.
- To prevent this problem in the future, configure one or more additional disks as spare disks.
I have checked, but I couldn't identify which disk has the issue.
Location Serial Number Vendor Rev How Used Type Size Rate*(Gb/s) SP Health Health Reason Health Recommendation
-----------------------------------------------------------------------------------------------------------------------------------------------
1.2 S0N0WWFD0000B418J7KF HP HPD3 VDISK SAS 900.1GB 6.0 OK
1.3 S0N0WMHQ0000B418J33P HP HPD3 VDISK SAS 900.1GB 6.0 OK
1.5 S0N0WFLW0000B418J523 HP HPD3 VDISK SAS 900.1GB 6.0 OK
1.6 S0N0TFJY0000B418HZYW HP HPD5 VDISK SAS 900.1GB 6.0 OK
1.7 S0N0W77Y0000B418FTSK HP HPD3 VDISK SAS 900.1GB 6.0 OK
1.9 S0N0KSWR0000N4111VY6 HP HPD3 VDISK SAS 900.1GB 6.0 OK
1.10 S0N0KR6N0000B411BZQX HP HPD3 VDISK SAS 900.1GB 6.0 OK
1.11 S0N234L60000K451A6TE HP HPD5 VDISK SAS 900.1GB 6.0 OK
1.12 KXH507GR HP HPDC VDISK SAS 900.1GB 6.0 OK
1.13 KXJU061X HP HPDC VDISK SAS 900.1GB 6.0 OK
1.14 KXJTY2MX HP HPDC VDISK SAS 900.1GB 6.0 OK
1.15 KXJTYHMX HP HPDC VDISK SAS 900.1GB 6.0 OK
1.16 KXJTWGEX HP HPDC VDISK SAS 900.1GB 3.0 OK
1.17 S400K76L0000M609KEGJ HP HPD4 VDISK SAS 900.1GB 6.0 OK
1.18 S400LM890000K609CQMB HP HPD4 VDISK SAS 900.1GB 6.0 OK
1.19 S400LPJ70000K609AWGG HP HPD4 VDISK SAS 900.1GB 6.0 OK
1.20 S400LMXC0000K609DGXB HP HPD4 VDISK SAS 900.1GB 6.0 OK
1.21 S0N1G7LH0000B439AGEK HP HPD5 VDISK SAS 900.1GB 6.0 OK
1.22 S0N1HAQ90000M438FUXE HP HPD5 VDISK SAS 900.1GB 6.0 OK
1.23 S0N1HS650000M439Q3YT HP HPD5 VDISK SAS 900.1GB 6.0 OK
1.24 S0N1HBDN0000B438BHKR HP HPD5 VDISK SP SAS 900.1GB 6.0 OK
-----------------------------------------------------------------------------------------------------------------------------------------------
Info: * Rates may vary. This is normal behavior. (2025-01-20 11:19:25)
Success: Command completed successfully. (2025-01-20 11:19:25)
# show vdisks
Name Size Free Own Pref RAID Disks Spr Chk Status Jobs Job% Serial Number Drive Spin Down Spin Down Delay Health
Health Reason
Health Recommendation
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
vd01 13.4TB 0B A A RAID5 16 0 64k CRIT 00c0ff1ad6990000187ede5200000000 Disabled 0 Degraded
The vdisk is not fault tolerant. Reconstruction cannot start because there is no spare disk available of the proper type and size.
- Replace the failed disk.
- Configure the new disk as a spare so the system can start reconstructing the vdisk.
- To prevent this problem in the future, configure one or more additional disks as spare disks.
vd02 3597.0GB 0B B B RAID5 5 1 64k FTOL 00c0ff1ad65f0000f9264c5b00000000 Disabled 0 OK
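Note that slots 1.1, 1.4 and 1.8 do not appear in the show disks output above, so the failed or missing disk is most likely in one of those bays; disk 1.24 is already a spare, but vd02 reports Spr 1, which suggests it is dedicated to vd02 and not usable by vd01. Once the failed disk is physically replaced, the new disk can be configured as a global spare so reconstruction starts, along the lines of the following (slot 1.4 is a hypothetical example):

# add spares 1.4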
I think your answer will be of great help to me.
I would like to ask you about firmware synchronization.
Assume controller A needs to be replaced.
Controller B has a lower firmware version than the replacement spare part.
Controller B holds all the storage configuration information.
The replacement part (with the higher firmware version) is used equipment.
So, with the PFU functionality, is there a risk that controller A becomes the master and destroys the existing storage configuration information?
Hi,
Once the failed disk is replaced, if the dynamic sparing option is enabled, the disk will automatically join the disk group and reconstruction will start.
If dynamic sparing is disabled, you will need to configure the new disk as a global spare for it to join the disk group.
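A minimal sketch from the CLI, assuming the replacement disk sits in slot 1.1 (a hypothetical location): check the dynamic sparing setting, and if it is disabled either enable it or add the disk as a global spare:

# show advanced-settings
# set advanced-settings dynamic-spares enabled
# add spares 1.1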
In the current state there is no option to take a data backup.
If the disk groups in Pool B are not reporting an offline or quarantined state, you could try a graceful power cycle after shutting down the storage controllers through the SMU/CLI.
If a pool-offline case doesn't resolve after a power cycle, it usually requires a detailed log review and engagement of the MSA engineering team through an HPE support case.
I understand that the MSA 2040 is an end-of-support-life product.
However, you could still try logging an HPE support case to check one-time support options.
Hi everyone! Here's the situation: there are two MSA2060-ENCL-SFF storage systems with MSA-12G-ENCL-LFF expansion shelves, one old and one new. The old one is currently powered off. Both storage systems have a Pool B with disk group dgB01. We needed to replace the fifth disk on an expansion shelf, so we replaced it with a disk taken from the old system's expansion shelf. The disk appeared in the system, but it is listed as an Unassociated Disk under Disk Groups (dgB01).
Here is the output from the CLI:
As I understand it, the system has read the metadata from the disk that came out of the old storage system (which had a disk group of the same name), and now it shows this disk group as quarantined. What can be done here?
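One possibility, to be validated with HPE support before running anything destructive: if the replacement disk is healthy but still carries disk-group metadata from the old system, that leftover metadata can usually be cleared so the disk becomes available to the new system, for example (0.5 is a placeholder for the actual enclosure.slot, and clear disk-metadata erases whatever is on that disk):

# show disks
# clear disk-metadata 0.5

The quarantined disk group may then recover on its own or may need to be dequarantined; given that real data is at stake, a log review by HPE support is the safer route.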
Data loss and storage issues are challenging, and given that this involves FTOL errors, quarantined vdisks, and vdisk groups showing 0 capacity, the virtual pool structure is likely severely damaged. I would suggest contacting the HPE support team for further assistance.