Hello.
I need to disconnect a DS 2700 (with a LUN created on it) from an MSA 1040 and connect it to an MSA 2012 without losing the data on the DS 2700.
Is that possible? Will the MSA 2012 see this LUN, or will I lose it?
We've just been given the go-ahead to replace our existing virtualisation platform. The current one (which I implemented) has lasted around 5 years and we've learned that certain parts were over-provisioned and others under-provisioned. In particular, expansion of it is a bit of a dead-end so I'm keen to avoid this happening again.
Right now I'm looking at the following:
4x DL360 Gen9 - VMware hosts
1x E5-2630v4, 128GB, 10GbE LOM, 10GbE PCIE, no local storage
1x DL380 Gen9 - Veeam backup proxy and repository
2x E5-2630v4, 64GB, 10GbE LOM, 2x SFF for OS, 12x LFF MDL for data
1x Aruba 5406R zl2 - Core switch
3x 10GbE v3 modules
All of this should allow for easy future expansion - the hosts are less than half provisioned (CPU and memory), the Veeam box can be expanded with DAS if required, and more modules are available for the switch. I'm looking at direct attach 10GbE SFP+ for networking.
Storage is my big sticking point. We need around 10TB of fast VM storage (SAS at a minimum, ideally either tiered SAS + SSD or all flash) and around 25TB of archival storage, where MDL SAS/SATA would suffice. My options are MSA, 3PAR or VSA. Some pros and cons that I can think of:
MSA
+ Cheapest option
+ Easily expanded with disk enclosures
+ MSA 2040 supports performance tiering
- No Veeam storage snapshot capability
- Looks dated (G6/G7) versus our nice new Gen9 hosts (minor, but we like things to look uniform!)
3PAR
+ Best in class, incredible performance
+ Ultra reliable
+ Easily expanded with disk enclosures
- Pricing seems insanely expensive for add-ons, e.g. licences per added disk
- If using FC for storage, requires expensive switches/networking
VSA
+ Highly redundant (e.g. 4 hosts running VSA = network RAID10)
+ Cost efficient, as our servers already come with storage controllers etc.
- Not really expandable at all
- Large datastores (e.g. our 25TB one) mean hosts will need a lot of disks (probably need to move to DL380 for hosts)
I'm pretty much torn between 3PAR and the MSA here. If the 3PAR comes in at a decent price (e.g. I've heard of a 3PAR 8200 all-flash array for ~$20k) then I might be swayed by it. It would also depend on FC pricing. So, some questions for you:
- Have you got any comments/thoughts on the above specs? Hosts, networking etc.
- Is the 3PAR worth it? The MSA is hardly going to be poor when it comes to performance.
- Would storage be best on FC or is 10GbE ok? For the 3PAR, it seems like I have to go down an expensive add-on adapter route for 10GbE.
- Does FC support direct attach for cabling? $300 of transceivers and $80 of fibre seems like a lot for a 1m connection when direct attach SFP+ is more like $75 a pop.
Thanks!
I don't think this is possible.
MSA1040 is G4, MSA2012 is G1.
IIRC the DS2700 is not even supported with the G1.
Any reason for doing this?
I don't know the difference between Mixed Use-2 and Mixed Use-3, but this HPE SSD Data Sheet has tables with the figures for many different SSDs.
Please check your private message and get back to me.
-JD
Hi,
If you have already got IP addresses from DHCP, then try accessing the SMU (web interface) at the same IP.
If that fails, check whether they are pingable and let me know your findings.
-JD
Well, hello again! I can help you here or I can help you there! Or feel free to send me an email. calvin dot zito at hpe dot com.
I am looking to implement the MSA performance tier on an existing MSA 2040 to increase the performance of the storage.
Does anyone have exposure to and practical experience with the feature? Can you share the step-by-step procedure to follow, any guidelines or a personal opinion?
Hi, I tried to connect to an HP MSA 2040 via the SSH or Telnet interface. The HTTP and HTTPS interfaces work fine and I have the right user permission settings (manage role, CLI and SSH enabled).
I tried connecting from Win10 (CMD + TELNET), Win10 (PuTTY), Win2012 R2 (PuTTY and CMD + TELNET) and from Linux CentOS 7 (command line with the ssh command). Every time I receive this message:
"/home/appuser/appshell: line 34: mccli: Input/output error"
and the connection is closed by the remote host.
I think something is wrong or not set on the HP MSA 2040 storage. Could you help me with some advice, please?
Thank You
Regards Martin Kubeš
Hello,
Give me the output of the command below:
"show protocols"
-JD
Hi, here is the log for the Telnet connection. There is only the error message, nothing more :-(.
****************************************************
System Version: GL210R004
MC Version: GLM210R007-01
Serial Number: 00C0FF268F1A
10.0.20.118 login: adm
Password:
/home/appuser/appshell: line 34: mccli: Input/output error
*****************************************************
Regards Martin
Hi.
Do you know if it is possible to cascade an HP Storageworks D2700 off the back of an HP Storageworks MSA60 (connected to an HP Proliant DL 380 G5)?
If so, do you know what variant of mini-SAS cable is required (i.e. SFF-8088/8470, etc.)?
Thanks in advance.
Lawrence
Hi Fleischen,
How much snapshot pool capacity do you have?
http://www8.hp.com/h20195/v2/GetPDF.aspx%2F4AA4-6892ENW.pdf
Refer to page 21 on snap pool space section.
http://h10032.www1.hp.com/ctg/Manual/c04220794
Also, what's the size of your snap pool? Page 201 & 206.
Hello Lawrence,
That would not be a supported configuration.
-JD
I am using an HPE MSA 1040 and am trying to configure it via the CLI using a Perl script. I took the Perl code from the HPE MSA 1040/2040 CLI Reference Guide, page 17, but I have a problem with the SSL login:
use LWP::UserAgent;
use Digest::MD5 qw(md5_hex);
use XML::LibXML;
use IO::Socket::SSL qw(debug3);

my $md5_data = "manage_!manage";
my $md5_hash = md5_hex( $md5_data );

# Create a user agent for sending https requests and generate a request object.
$user_agent = LWP::UserAgent->new( );
$url = 'https://msa1040_ip_address/api/login/' . $md5_hash;
$request = HTTP::Request->new( GET => $url );

# Send the request object to the system. The response will be returned.
$response = $user_agent->request($request);
I couldn't establish an SSL connection, as can be seen from the debug messages:
DEBUG: .../IO/Socket/SSL.pm:2755: new ctx 48462496
DEBUG: .../IO/Socket/SSL.pm:624: socket not yet connected
DEBUG: .../IO/Socket/SSL.pm:626: socket connected
DEBUG: .../IO/Socket/SSL.pm:648: ssl handshake not started
DEBUG: .../IO/Socket/SSL.pm:684: not using SNI because hostname is unknown
DEBUG: .../IO/Socket/SSL.pm:716: request OCSP stapling
DEBUG: .../IO/Socket/SSL.pm:737: set socket to non-blocking to enforce timeout=180
DEBUG: .../IO/Socket/SSL.pm:750: call Net::SSLeay::connect
DEBUG: .../IO/Socket/SSL.pm:753: done Net::SSLeay::connect -> -1
DEBUG: .../IO/Socket/SSL.pm:763: ssl handshake in progress
DEBUG: .../IO/Socket/SSL.pm:773: waiting for fd to become ready: SSL wants a read first
DEBUG: .../IO/Socket/SSL.pm:793: socket ready, retrying connect
DEBUG: .../IO/Socket/SSL.pm:750: call Net::SSLeay::connect
DEBUG: .../IO/Socket/SSL.pm:2656: did not get stapled OCSP response
DEBUG: .../IO/Socket/SSL.pm:2609: ok=0 [0] /C=US/ST=CO/O=HP/OU=MSA-Storage/CN=172.22.1.190/C=US/ST=CO/O=HP/OU=MSA-Storage/CN=172.22.1.190
DEBUG: .../IO/Socket/SSL.pm:753: done Net::SSLeay::connect -> -1
DEBUG: .../IO/Socket/SSL.pm:756: SSL connect attempt failed
DEBUG: .../IO/Socket/SSL.pm:756: local error: SSL connect attempt failed error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
DEBUG: .../IO/Socket/SSL.pm:759: fatal SSL error: SSL connect attempt failed error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
DEBUG: ...18.2/Net/HTTPS.pm:69: ignoring less severe local error 'IO::Socket::IP configuration failed', keep 'SSL connect attempt failed error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed'
DEBUG: .../IO/Socket/SSL.pm:2777: free ctx 48462496 open=48462496
DEBUG: .../IO/Socket/SSL.pm:2782: free ctx 48462496 callback
DEBUG: .../IO/Socket/SSL.pm:2789: OK free ctx 48462496
Note the error:
SSL connect attempt failed error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
What did I do wrong here? It's the code that I took from the example provided by HPE, and yet it doesn't work. I am using Perl v5.18.2 (from a Debian Trusty).
We have a customer running Oracle on 2 x Red Hat 4.0 hosts, currently using an IBM DS4700 for storage via Fibre Channel.
To increase disk space on a temporary basis, I have presented, via the same Fibre Channel switches, an MSA 2324FC to the 2 x Red Hat 4.0 hosts (HP ProLiant DL380 G5) and also to 1 x Windows 2003 box.
I have successfully created a volume and presented it to the Windows 2003 box. However, I cannot seem to get the Red Hat boxes to see any volumes presented from the MSA.
The Red Hat OS sees the MSA host ports, but cannot see any new volumes after a rescan or even after rebooting.
The MSA host ports are set to point-to-point (PTP).
I have rescanned the hosts using
echo "- - -" > /sys/class/scsi_host/host0/scan
I have also rebooted the hosts.
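For completeness, here is a quick sketch that rescans every SCSI host rather than just host0 (paths assumed from the standard /sys/class/scsi_host layout on these hosts), with /proc/scsi/scsi checked afterwards for any new devices:

# Rescan all SCSI hosts (host0 and host1 here), not only host0.
# "- - -" means: all channels, all targets, all LUNs.
for scan in /sys/class/scsi_host/host*/scan; do
    echo "- - -" > "$scan"
done

# See whether any new devices appeared after the rescan.
cat /proc/scsi/scsi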
The host HBAs can see the MSA host ports:
[root@edi-ora02 qla2xxx]# cat 1
QLogic PCI to Fibre Channel Host Adapter for QLE2460:
Firmware version 4.00.23 [IP] , Driver version 8.01.06-fo
ISP: ISP2432
Request Queue = 0x257200000, Response Queue = 0x2579b0000
Request Queue count = 4096, Response Queue count = 512
Total number of active commands = 3
Total number of interrupts = 12830049
Device queue depth = 0x20
Number of free request entries = 2416
Number of mailbox timeouts = 0
Number of ISP aborts = 0
Number of loop resyncs = 0
Number of retries for empty slots = 0
Number of reqs in pending_q= 0, retry_q= 0, done_q= 0, scsi_retry_q= 0
Number of reqs in failover_q= 0
Host adapter:loop state = <READY>, flags = 0x1e03
Dpc flags = 0x4000000
MBX flags = 0x0
Link down Timeout = 030
Port down retry = 030
Login retry count = 030
Commands retried with dropped frame(s) = 0
Product ID = 0000 0000 0000 0000
SCSI Device Information:
scsi-qla0-adapter-node=200000e08b94ef01;
scsi-qla0-adapter-port=210000e08b94ef01;
scsi-qla0-target-0=200600a0b826cf50; (IBM DS4700)
scsi-qla0-target-2=257000c0ffd78137; (MSA Host port B2)
FC Port Information:
scsi-qla0-port-0=200600a0b826cf4e:200600a0b826cf50:010000:81;
scsi-qla0-port-1=200600a0b826cf4e:200700a0b826cf50:010100:82;
scsi-qla0-port-2=208000c0ffd78137:257000c0ffd78137:010600:83;
SCSI LUN Information:
(Id:Lun) * - indicates lun is not registered with the OS.
( 0: 0): Total reqs 1565425, Pending reqs 1, flags 0x2, 0:0:81 00
( 0: 1): Total reqs 688716, Pending reqs 0, flags 0x2, 0:0:81 00
( 0: 2): Total reqs 1371012, Pending reqs 0, flags 0x2, 0:0:81 00
( 0: 3): Total reqs 1330681, Pending reqs 0, flags 0x2, 0:0:81 00
( 0: 4): Total reqs 1664840, Pending reqs 2, flags 0x2, 0:0:81 00
( 0: 5): Total reqs 1568101, Pending reqs 0, flags 0x2, 0:0:81 00
( 0: 6): Total reqs 110, Pending reqs 0, flags 0x2, 0:0:81 00
( 0: 7): Total reqs 148576, Pending reqs 0, flags 0x2, 0:0:81 00
( 0:11): Total reqs 1646488, Pending reqs 0, flags 0x2, 0:0:81 00
( 0:12): Total reqs 1478518, Pending reqs 0, flags 0x2, 0:0:81 00
( 0:13): Total reqs 1564029, Pending reqs 0, flags 0x2, 0:0:81 00
[root@edi-ora02 qla2xxx]# cat 2
QLogic PCI to Fibre Channel Host Adapter for QLE2460:
Firmware version 4.00.23 [IP] , Driver version 8.01.06-fo
ISP: ISP2432
Request Queue = 0x257280000, Response Queue = 0x1200000
Request Queue count = 4096, Response Queue count = 512
Total number of active commands = 0
Total number of interrupts = 246
Device queue depth = 0x20
Number of free request entries = 4094
Number of mailbox timeouts = 0
Number of ISP aborts = 0
Number of loop resyncs = 0
Number of retries for empty slots = 0
Number of reqs in pending_q= 0, retry_q= 0, done_q= 0, scsi_retry_q= 0
Number of reqs in failover_q= 0
Host adapter:loop state = <READY>, flags = 0x1e03
Dpc flags = 0x4000000
MBX flags = 0x0
Link down Timeout = 030
Port down retry = 030
Login retry count = 030
Commands retried with dropped frame(s) = 0
Product ID = 0000 0000 0000 0000
SCSI Device Information:
scsi-qla1-adapter-node=200000e08b941a02;
scsi-qla1-adapter-port=210000e08b941a02;
scsi-qla1-target-2=207000c0ffd78137; (MSA Host Port A1)
FC Port Information:
scsi-qla1-port-0=200600a0b826cf4e:200600a0b826cf4f:010000:81;
scsi-qla1-port-1=200600a0b826cf4e:200700a0b826cf4f:010100:82;
scsi-qla1-port-2=208000c0ffd78137:207000c0ffd78137:010500:83;
SCSI LUN Information:
(Id:Lun) * - indicates lun is not registered with the OS.
There appears to be no multipathing on these hosts, and the original zoning is incorrect for the DS4700, as on SAN Switch 1 the controller A and B aliases have the same WWN.
Has anyone come across this issue before?
Cheers
OK, I think my problem is that the Perl library cannot find the authentication certificate. I have tried to specify the certificate, but it still doesn't work.
Since my Perl script will be used in the preparation/configuration phase, I don't really care about communication security, so I bypass the hostname verification as a workaround.
So all I need is one extra line:
$user_agent = LWP::UserAgent->new( );
$user_agent->ssl_opts(verify_hostname => 0); # BYPASS HOSTNAME VERIFICATION!
$url = 'https://msa1040_ip_address/api/login/' . $md5_hash;
$request = HTTP::Request->new( GET => $url );
$response = $user_agent->request($request);
... and BOOM! It works.
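For anyone who would rather keep certificate verification enabled, a minimal sketch of the alternative (assuming the controller's self-signed certificate has been exported to a local file, hypothetically named msa1040.pem here, with msa1040_ip_address as the usual placeholder) is to hand that file to LWP via ssl_opts instead of switching verification off:

use strict;
use warnings;
use LWP::UserAgent;
use Digest::MD5 qw(md5_hex);

# Hash of "username_password", as required by the MSA XML API login URL.
my $md5_hash = md5_hex('manage_!manage');

# msa1040.pem is a hypothetical filename for the controller's exported
# self-signed certificate; trust only that certificate rather than
# disabling verification altogether.
my $user_agent = LWP::UserAgent->new(
    ssl_opts => {
        verify_hostname => 1,
        SSL_ca_file     => 'msa1040.pem',
    },
);

my $url      = 'https://msa1040_ip_address/api/login/' . $md5_hash;
my $response = $user_agent->get($url);

if ($response->is_success) {
    print $response->decoded_content, "\n";
} else {
    print 'Login failed: ', $response->status_line, "\n";
}

Whether that effort is worth it for a one-off provisioning script is debatable; the verify_hostname => 0 workaround above is certainly the quicker route.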
Hello Community,
Had a question here... While I have extensive experience disassembling and cleaning servers, other MSA disk arrays, etc., I've been wanting to take my MSA 2040 offline and give it a good dusting/cleaning.
I've never actually disassembled an MSA 2040 before. I'm not concerned about the power supplies and controllers, but I do see a sticker on the top towards the front stating "Warranty void if seal broken". This sticker leads me to this post...
By removing the controllers, PSUs and disks, can I get access to every component and crevice to dust this puppy out, or am I limited? I'm getting the feeling that sticker may prevent me from getting access to the midplane part of the array.
As always, I know I'm not supposed to cause fans to rev from airflow.
Anyone have any advice?
Cheers
Hmmm,
Given that the unit is optimised for front-to-back airflow, I don't think you will go too far wrong with compressed air through the front (not that I'm endorsing it, but have you seen the YouTube videos using garden leaf blowers for this :-O ) and Henry or one of his industrial vacuum friends at the back to catch the dust, fluff and cruft that flies out.
There is a school of thought that says you shouldn't disassemble the components, as that potentially exposes connectors that are normally seated together and haven't been exposed to dust contamination. However, I too would be slightly wary of turning the exhaust fans into wind turbines, so I would at least remove those :-)
Let us know how you get on.
Cheers
Ian
(Hope that helps - please give kudos if it does)