First, let's understand the vdisk condition you reported:
"Pool 1, name vd0001 is RAID 50, class Linear, status Critical, 12 Disk, 3 Disk Degraded (the disk may contain invalid metadata)"
RAID 50 is a combination of RAID 5 (striping with distributed parity) and RAID 0 (striping): multiple RAID 5 sub-arrays are striped together. If you lose two disks in the same RAID 5 sub-array, your data is lost; but if you lose only one disk in each sub-array, your data remains intact.
Now, in your case vd0001 is made up of 12 disks, so I am assuming it consists of two sub-vdisks of 6 drives each. Technically, each 6-drive sub-vdisk can tolerate one failed drive, so even with one failure per sub-vdisk you would still have data access and vd0001 would stay alive. You mentioned 3 degraded drives, yet vd0001 shows Critical rather than Offline or Quarantined Offline (QTOF). My assumption is that a dedicated spare was configured and took over, and that spare may itself now be in a degraded state. Clarity on your full setup is very important; otherwise troubleshooting will be difficult.
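To make the fault-tolerance logic above concrete, here is a small sketch (my own illustration, not an HPE tool) of the RAID 50 survival rule: the vdisk stays alive as long as no single RAID 5 sub-array has lost more than one disk. The two 6-disk sub-arrays below are an assumption about how vd0001 is laid out.

```python
def raid50_survives(failed_disks, sub_arrays):
    """Return True if a RAID 50 vdisk survives the given failures.

    failed_disks: set of failed disk indices.
    sub_arrays: list of sets, each holding the disk indices
                of one RAID 5 sub-array.
    """
    # RAID 5 tolerates at most one failed disk per sub-array.
    return all(len(failed_disks & sub) <= 1 for sub in sub_arrays)

# Assumed layout for vd0001: 12 disks as two 6-disk RAID 5 sub-arrays.
subs = [set(range(0, 6)), set(range(6, 12))]

print(raid50_survives({2, 8}, subs))   # one failure in each sub-array -> True
print(raid50_survives({2, 3}, subs))   # two failures in the same sub-array -> False
```

This is why the number of failed drives alone does not decide the outcome; which sub-array they belong to matters just as much.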
So you should verify why the 3 drives went into the LEFTOVER state and whether they have any hardware errors. If they do, you may have to replace them. If they don't have any hardware errors, you can clear their metadata and reuse them. Even so, I would recommend capturing an MSA log and verifying with HPE Support, since you have a multiple-drive-degraded situation.
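For reference, the checks above can be done from the MSA CLI along these lines (the disk IDs `1.4,1.7,1.9` are placeholders for your actual enclosure.slot numbers, and exact syntax can vary by firmware, so please confirm against the CLI Reference Guide for your model):

```
# Show per-disk state, usage (look for LEFTOVER) and health
show disks

# Only after confirming the disks have OK health and no hardware errors:
clear disk-metadata 1.4,1.7,1.9
```

Do not clear metadata on a disk that still shows hardware errors; replace it instead.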
Coming to your other query: yes, you can remove the dedicated spare from vd0002 and create a new dedicated spare for vd0001. Refer to the CLI Reference Guide for the "remove spares" and "add spares" commands.
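As a sketch, the spare change would look roughly like this (again, `1.10` is a placeholder disk ID and the exact parameters should be verified in the CLI Reference Guide for your firmware):

```
# Remove the dedicated spare currently assigned to vd0002
remove spares 1.10

# Assign that disk as a dedicated spare for vd0001
add spares vdisk vd0001 1.10
```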
My recommendation would be: do not add any spare to vd0001 before fixing the LEFTOVER drives; otherwise it will make the situation worse.
Hope this helps!
Regards
Subhajit
I am an HPE employee
If you feel this was helpful please click the KUDOS! thumb below!
***********************************************************************