Question about adding more sync connections

Windows iSCSI SAN, Linux iSCSI SAN, Mac OS X iSCSI SAN and Virtual Native SAN.
webguyz
Posts: 35
Joined: Fri Dec 07, 2012 5:50 am

Question about adding more sync connections

Post by webguyz » Mon Jan 20, 2014 7:44 pm

Successfully upgraded my production HA system to 3.10 and was thinking of adding an additional sync connection to avoid the split-brain scenario you describe. I currently have a 10G replication link, but I also have a spare 1G port on each server that I want to use as an additional sync. How does iStorage 3.10 handle multiple replication ports? Would it replicate across both the 10G and 1G in round-robin fashion, or does it go down the sync list, use the first link defined, and keep using it until it is no longer available?

Thanks!

pstoianov
Posts: 4
Joined: Wed Jan 22, 2014 6:01 am

Re: Question about adding more sync connections

Post by pstoianov » Wed Jan 22, 2014 6:35 am

@webguyz - I have the same question as you.
But I just wanted to add the following: if my 10GbE connection fails and the 1GbE takes over, what happens when the 10GbE comes back online? And what happens if the 10GbE goes up and down constantly (flip-flops)?

* I don't have a license yet; I'm just testing it and want to make sure the product is stable.
@webguyz, since you have personal experience with it, could you please share your performance results for iSCSI R/W on RAID 10 over 10GbE? What about replication performance and stability?

webguyz
Posts: 35
Joined: Fri Dec 07, 2012 5:50 am

Re: Question about adding more sync connections

Post by webguyz » Wed Jan 22, 2014 7:00 pm

We've been using iStorage HA with XenServer for over a year with great results. In our 2 iStorage servers we have dual 10G NICs: one connection is for data, and the other is connected point-to-point to the other iStorage server for replication. You need at least 2 switches; one server's 10G data link goes to one switch, and the other server's 10G data link goes to the other switch. That way, if you have to reboot one of the switches for an upgrade, your disks stay up and your iSCSI sessions aren't killed.

To be honest, the 10G is overkill for us. I have yet to see our network traffic on the 10G cards go over 7%, and that's with 6 host servers and about 40 VMs. All our host servers have dual 1Gb NICs for iSCSI and, like the storage servers, we connect one NIC to one switch and the other NIC to the other switch. Unless you have a LOT of servers, you really don't need 10G to start off with. iSCSI with jumbo frames and 1Gb NICs can handle a LOT of disk traffic.
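As a rough sanity check of the "1Gb is enough to start" claim, here is a back-of-the-envelope calculation using the ~7% peak figure above (an illustration, not a measurement):

```python
# Back-of-the-envelope: the observed peak on the 10 GbE link was ~7%
# of line rate. Integer math keeps the numbers exact.
line_rate_10g = 10_000   # 10 GbE line rate, Mbit/s
line_rate_1g = 1_000     # 1 GbE line rate, Mbit/s
peak_pct = 7             # ~7% observed peak utilization

peak_traffic = line_rate_10g * peak_pct // 100   # Mbit/s actually in use
headroom_on_1g = line_rate_1g - peak_traffic     # spare capacity on one 1 GbE port

print(f"observed peak: {peak_traffic} Mbit/s")
print(f"headroom left on a single 1 GbE port: {headroom_on_1g} Mbit/s")
```

In other words, the observed peak (~700 Mbit/s) would still fit on a single 1 GbE link with headroom, which is consistent with the advice above.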

We use LSI disk controllers and CacheCade with a 250 GB read-only SSD for performance. I don't have any hard numbers with me, but performance is good and we've never had any complaints about disk performance.

We tried RAID 5 a while back, but it's too slow. RAID 10 is the only way to go for hosting.

We were using Open-E DSS V6 software before, but iStorage is much more affordable, and I like that I can see traffic stats using standard Windows tools. I am a Windows guy and never felt comfortable running a Linux SAN, and iStorage makes it so simple to do iStorage upgrades or load Windows updates.

I was hesitant to try iStorage at first, but the cost of Open-E was about to double on us if we wanted an active/active SAN. So we set up a test iStorage network with a couple of cheap PCs and a Citrix XenServer, then started torturing it: creating VMs, yanking cables, and doing as much nasty stuff to the iStorage servers as we could, to make sure it would keep working when one of the servers died. We ran into a few hitches, but support resolved them all. After 3 months we removed our Open-E software, installed Windows, put the iStorage HA into production, and are very happy. Couple this SAN with the open-source XenServer 6.2 and you have a killer VM system for not a lot of money.

Not much traffic on the iStorage forums, but KernSafe support is good and I can usually get an answer the same day, even with a 14-hour time zone difference.

pstoianov
Posts: 4
Joined: Wed Jan 22, 2014 6:01 am

Re: Question about adding more sync connections

Post by pstoianov » Wed Jan 29, 2014 9:45 am

I was wondering about:
- Split-brain sync connections: how are they defined, apart from Windows bonding? How do I define more than one iSCSI sync interface?
- Configuration backup & restore: how can I make Bacula perform a configuration backup of all exported targets? Is it possible to export the configuration via PowerShell or a CLI so it can be backed up?
- For 3 iSCSI LUNs of 2 TB each, what is the best practice: standard file, VHD, or physical drive, letting ESXi manage it?
- Does iStorage Server get slower if it runs without a restart for a few months?
- I know how fast IET iSCSI is under Linux with 2 TB LUNs (almost native performance); will performance drop if iStorage is used as a replacement?
- What happens if NODE-A is down for maintenance for a few hours and then comes back? Will NODE-A start providing iSCSI service before the sync update completes?
- Does iStorage Server work with VMware iSCSI round-robin balancing, where one I/O goes to NODE-A and the next to NODE-B, so that both nodes are utilized?

It would be wonderful if someone could answer all of these questions...

Thank you!

Charles [PM]
Posts: 35
Joined: Sat Aug 14, 2010 3:00 am

Re: Question about adding more sync connections

Post by Charles [PM] » Fri Jan 31, 2014 6:04 am

Please see my answers below:

- Split-brain sync connections: how are they defined, apart from Windows bonding? How do I define more than one iSCSI sync interface?
You can set up more than one interface for sync and heartbeat; if one interface stops working, HA will continue working over another sync link.

- Configuration backup & restore: how can I make Bacula perform a configuration backup of all exported targets? Is it possible to export the configuration via PowerShell or a CLI so it can be backed up?
We don't support this feature at this time.

- For 3 iSCSI LUNs of 2 TB each, what is the best practice: standard file, VHD, or physical drive, letting ESXi manage it?

They are the same in performance. Since you want LUNs under 2 TB, you can choose VHD. If you use a physical drive, you need to be careful with it, because any software on the hosting Windows server that can access the drive directly poses some risk. Standard file is recommended.

- Does iStorage Server get slower if it runs without a restart for a few months?
It won't get slower over a long run, unless your storage becomes heavily fragmented.

- I know how fast IET iSCSI is under Linux with 2 TB LUNs (almost native performance); will performance drop if iStorage is used as a replacement?
In most cases the bottleneck is the hard disk; adding replication (such as HA) will reduce performance a little.

- What happens if NODE-A is down for maintenance for a few hours and then comes back? Will NODE-A start providing iSCSI service before the sync update completes?
NODE-A being down won't affect your business. Once it comes back, it will start providing iSCSI service right away, but for any data that is not yet up to date, it will serve the data from NODE-B.

- Does iStorage Server work with VMware iSCSI round-robin balancing, where one I/O goes to NODE-A and the next to NODE-B, so that both nodes are utilized?

It works with ESX/ESXi round-robin high-availability environments.
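For reference, enabling round robin on the ESXi side is standard VMware multipathing configuration rather than anything iStorage-specific. A minimal sketch using ESXi 5.x esxcli (the naa device identifier is a placeholder; substitute your LUN's actual ID):

```shell
# Show the current path selection policy (PSP) for one device.
# Replace the naa identifier with your LUN's actual device ID.
esxcli storage nmp device list -d naa.xxxxxxxxxxxxxxxx

# Switch the device to VMware's native Round Robin policy so that
# I/O alternates across the available paths (e.g. NODE-A and NODE-B).
esxcli storage nmp device set -d naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR
```

Note that ESXi will only aggregate the two paths into one device if both nodes present the LUN with the same identifier, which is exactly the multipathing issue discussed elsewhere in this thread.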

pstoianov
Posts: 4
Joined: Wed Jan 22, 2014 6:01 am

Re: Question about adding more sync connections

Post by pstoianov » Fri Feb 07, 2014 4:08 am

I ran several tests and have the following concerns/issues:

I now have 5 targets & 4 applications for HA.

- On NODE-B the standard image file was deleted to simulate a lost RAID. A new image file was created, and the re-sync started once the application mirroring (HA) was re-created. The status went from PENDING to RUNNING.
-- The issue: vSphere 5.5 now shows this target with a different "identifier". So I have the same iSCSI target twice, but with different identifiers, and no multipathing when "Paths" is checked.
--- Is there any way to change identifiers in iStorage? How do identifiers work in iStorage?

- SNMP support: are there any plans for iStorage to support SNMP performance counters? SNMP traps in case of failure?

- I've been working with DRBD for years, and I believe it would be really important to have much more detail about the sync and what is happening right now: which node is the source of the sync data and where it is going, the estimated time remaining, current speed, average speed, the NIC used, and so on.

pstoianov
Posts: 4
Joined: Wed Jan 22, 2014 6:01 am

Re: Question about adding more sync connections

Post by pstoianov » Fri Feb 07, 2014 3:17 pm

PERFORMANCE PROBLEM

It looks like iStorage Server is unable to achieve good performance; it is far from native.

iStorage v3.20 x64 / Windows 2012 R2 (the LSI drivers & iStorage Server are the only applications installed; otherwise a clean, fresh system)
This Supermicro has 2x Xeon E5620 CPUs (16 logical cores in total) & 16 GB RAM installed.

Tested with CrystalDiskMark 3.0.2 & HD Tune Pro 5.50
LUN size = 1 TB

Native performance (tested on the same drive where the image files are stored):
seq read 2300 MB/s
seq write 1900 MB/s

Over 10GbE:
Client Windows 2008 R2 Server running on vSphere 5.5
Performance:
seq read: 313.5 MB/s
seq write: 99.5 MB/s

Over localhost - mounted iSCSI on same server using 127.0.0.1
seq read: 325.5 MB/s
seq write: 160 MB/s


If LUN size is 2GB:
Over localhost - mounted iSCSI on same server using 127.0.0.1
seq read: 320 MB/s
seq write: 364 MB/s

RAM DRIVE - iSCSI - 2GB - via 127.0.0.1
seq read: 328 MB/s
seq write: 472 MB/s

Any idea why it is so slow?
CPU load during the tests is under 10%, and memory usage is under 20%.
BTW, I tested on two different servers and got the same result.
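For context, the 10GbE measurements above can be set against the theoretical line rate of the link (a back-of-the-envelope comparison that ignores TCP/IP and iSCSI protocol overhead, so real achievable numbers are somewhat lower):

```python
# 10 GbE line rate in MB/s: 10,000 Mbit/s divided by 8 bits per byte.
# Protocol overhead is ignored, so this is an upper bound.
line_rate_mb_s = 10_000 / 8   # 1250.0 MB/s

# Sequential results reported above for the 1 TB LUN over 10GbE.
measured = {
    "seq read over 10GbE": 313.5,
    "seq write over 10GbE": 99.5,
}

for name, value in measured.items():
    pct = 100 * value / line_rate_mb_s
    print(f"{name}: {value} MB/s ({pct:.1f}% of line rate)")
```

The reads reach only about a quarter of line rate and the writes under a tenth, which supports the observation that neither the network link nor the CPU appears to be the bottleneck here.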

Olivia (staff)
Posts: 203
Joined: Thu Nov 05, 2009 5:52 pm

Re: Question about adding more sync connections

Post by Olivia (staff) » Mon Feb 10, 2014 11:19 pm

pstoianov wrote: - On NODE-B the standard image file was deleted to simulate a lost RAID. A new image file was created, and the re-sync started once the application mirroring (HA) was re-created. The status went from PENDING to RUNNING.
-- The issue: vSphere 5.5 now shows this target with a different "identifier". So I have the same iSCSI target twice, but with different identifiers, and no multipathing when "Paths" is checked.
The identifier is created when the target is created, and it can be synchronized to the other server's target when the HA is created. If you re-create a target, the identifier will change, but you can synchronize it from the existing target.
pstoianov wrote: --- Is there any way to change identifiers in iStorage? How do identifiers work in iStorage?
We don't provide an official way to change it at this time.
pstoianov wrote: - SNMP support: are there any plans for iStorage to support SNMP performance counters? SNMP traps in case of failure?
We don't support this feature at this time, and we have no plans for it.
pstoianov wrote: - I've been working with DRBD for years, and I believe it would be really important to have much more detail about the sync and what is happening right now: which node is the source of the sync data and where it is going, the estimated time remaining, current speed, average speed, the NIC used, and so on.
Thank you for your suggestion; we will consider it and make further improvements in upcoming versions.
KernSafe Support Team
iSCSI SAN, iSCSI Target, iSCSI initiator and related technological support.
[email protected]
