Davind & Alex,
Here are the recommended settings for the VIC 1240. When I'm in the US next week, I'll check the status of VIC 1340 perf testing for you.
Begin forwarded message:
From: Samuel Kommu <skommu@vmware.com>
Subject: Re: [nsbu-se] Fortinet & NetX
Date: 21 Dec 2015 22:02:02 CET
To: Anthony Burke <aburke@vmware.com>
Cc: Scott Clinton <sclinton@vmware.com>, Ray Budavari <rbudavari@vmware.com>, Leena Merciline <lmerciline@vmware.com>, ask-nsx-pm <ask-nsx-pm@vmware.com>
Anthony,
Is it VIC 1240 or VIC 1340? We haven’t tested VIC 1340 yet. Ray (copied) has tested VIC 1240 and recommends the following tuning for performance:
Setting | Configured in | Recommended value | Notes
NetQueue | UCS Ethernet Adapter Policy & VMQ Connection Policy | 8 | Provides additional queues for traffic using different DST MACs (benefits when there is a mix of both VXLAN and VLAN traffic)
NIC interrupt timers & TCP LRO | UCS Ethernet Adapter Policy | 64us & Disabled | Reduce NIC adapter interrupt timers to enable faster processing of receive traffic
Multiple VTEPs using Load Balance - SRC ID policy | NSX VXLAN Configuration | 2 VTEPs | Multiple VTEPs enable balancing of network traffic processing across two CPU contexts
Network IO Control | VDS | Enabled | Provides additional TX contexts / CPU resources for transmit traffic
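For the host side of the NetQueue row, a minimal sketch of how the setting could be checked from the ESXi shell; the kernel option name netNetqueueEnabled and the vmnic number are assumptions and may differ by build and host:

    # Confirm NetQueue is enabled in the VMkernel (expected value: TRUE)
    esxcli system settings kernel list -o netNetqueueEnabled

    # Inspect the VIC uplink carrying the VXLAN/VLAN traffic (driver, version, link state)
    esxcli network nic get -n vmnic0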
Also,
- ESXi power management should be disabled
- UCS Firmware must be at a minimum version of 2.2(2c)
- ESXi hosts require ENIC driver 2.1.2.50 or newer
The tuning above is critical to improving performance.
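A rough sketch of how these prerequisites might be verified from the ESXi shell; vmnic0 is a placeholder, and /Power/CpuPolicy is assumed to be the advanced option behind the vSphere power-management setting:

    # ENIC driver version, from the installed VIB and from the NIC itself
    esxcli software vib list | grep -i enic
    esxcli network nic get -n vmnic0

    # Current CPU power policy ("static" usually corresponds to High Performance)
    esxcli system settings advanced list -o /Power/CpuPolicy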
Like all components within NSX, dvFilter performance is also influenced by hardware offloads and related factors. Check out the NSX Performance slides from VMworld that I sent earlier. Feel free to set up a quick sync-up call to discuss if you are still in doubt.
Regards,
Samuel.
From: Anthony Burke <aburke@vmware.com>
Date: Monday, December 21, 2015 at 12:00 PM
To: Samuel Kommu <skommu@vmware.com>
Cc: Leena Merciline <lmerciline@vmware.com>, ask-nsx-pm <ask-nsx-pm@vmware.com>, Scott Clinton <sclinton@vmware.com>
Subject: Re: [nsbu-se] Fortinet & NetX
Hi Samuel,
The setup is as follows:
vSphere 6.0
NSX 6.1.5 / NSX 6.2 (both have been tested)
UCS B200 M3 / UCS B200 M4
VIC1240/VIC1340 - Latest drivers.
Test VMs have VMXNET3 drivers running.
Please correct me if I am wrong, but hardware offloads and NICs should not be an issue when utilising dvFilter, as this is done purely in software. This is purely a performance requirement. The test bed of two workloads on a VLAN-backed port-group, on the same host or on different hosts, is not utilising VXLAN.
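To confirm which traffic is actually hitting the redirection path, the dvFilter slots bound to the test VMs can be listed on the host; this is only a sketch, and the VM name in the grep is hypothetical:

    # List dvFilter agents and the filters attached to each vNIC slot
    summarize-dvfilter

    # Narrow to a specific test VM (hypothetical name "rhel-iperf-01")
    summarize-dvfilter | grep -A 10 rhel-iperf-01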
The test criteria are outlined below:
- Same host for two VMs
- Same network (no routing)
- Any to Any with no FW rules = 22.x Gbit sustained.
- Any to Redirect
Having deployed PAN with NSX on UCS, I do believe this is a Fortinet issue. Given the sensitivity to the customer, I am pursuing this internally.
If I am missing something and we are leveraging our NIC cards, please let me know.
As an aside, I have a different customer (NSX-friendly, deployed in prod) who may give me access to a lab of M3 and M4 UCS, but I cannot guarantee access.
Regards,
Anthony Burke - Systems Engineer
Network Security Business Unit
VMware Australia & New Zealand
Level 7, 28 Freshwater Place, Southbank VIC 3006
+61 415 595 098
On 22 Dec 2015, at 6:09 AM, Samuel Kommu <skommu@vmware.com> wrote:
Anthony,
Haven't received any hardware setup details yet. If you have already sent them, could you please send them over again?
Note on NetX performance: close to line-rate throughput is achievable with the use of hardware offloads, jumbo MTU, etc. Check out the VMworld 2015 slides: https://vault.vmware.com/group/nsx/document-preview?fileId=16312906
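A minimal sketch, assuming the standard esxcli namespaces, of how MTU could be verified on the transport side; interface and switch names will differ per environment:

    # MTU configured on the distributed switch(es)
    esxcli network vswitch dvs vmware list

    # MTU on the vmkernel interfaces, including the VTEPs (1600+ needed for VXLAN, 9000 for jumbo frames)
    esxcli network ip interface list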
Regards,
Samuel.
From: <ask-nsx-pm-bounces@vmware.com> on behalf of Leena Merciline <lmerciline@vmware.com>
Date: Monday, December 21, 2015 at 10:55 AM
To: Anthony Burke <aburke@vmware.com>, ask-nsx-pm <ask-nsx-pm@vmware.com>
Cc: Scott Clinton <sclinton@vmware.com>
Subject: Re: [nsbu-se] Fortinet & NetX
Hello Anthony, a performance report on this is being done by Samuel K (TPM) using a sample service VM. This will be for internal use. We plan to publish this soon (by early Jan) on Vault.
Leena
From: Anthony Burke <aburke@vmware.com>
Date: Sunday, December 20, 2015 at 6:35 PM
To: ask-nsx-pm <ask-nsx-pm@vmware.com>
Cc: Scott Clinton <sclinton@vmware.com>
Subject: Re: [nsbu-se] Fortinet & NetX
Hi team,
Are there any comments around this? This is having an impact on a lighthouse customer for us in Australian federal government. Other customers are watching this situation closely to see which way this customer progresses.
I cannot provide the customer clear information about dvFilter, performance, and commentary around the NetX framework. Can anyone comment here? Has any testing been done? Can we please have an official comment?
I have sent Sam Kommu details on the hardware setup per a separate unicast request.
Regards,
Anthony Burke - Systems Engineer
Network Security Business Unit
VMware Australia & New Zealand
Level 7, 28 Freshwater Place, Southbank VIC 3006
+61 415 595 098
On 14 Dec 2015, at 9:28 AM, Anthony Burke <aburke@vmware.com> wrote:
Hi team,
I am in a familiar discussion about NetX again; this time it is with Fortinet. A customer of mine has raised a serious concern over the lack of throughput when leveraging NSX NetX and Fortinet VMX 2.0.
Fortinet were quick to blame our single-threaded dvFilter plugin. Whilst we are managing expectations with the partner and the customer, can we have an official comment around expected speeds of redirection alone (without 3rd-party features enabled)? Could we also have official communication to partners about this?
We’ve done this with Checkpoint locally and now Fortinet are piping up. I know we can do ~1.3Gbps with Palo Alto (customer is in production locally) and I heard rumours Fortinet could do a lot more.
Attached are the customer's rudimentary tests with iperf. Will raising an SR on mysids help progress this?
Performance Testing (iperf between RHEL client <> servers; a sample invocation is sketched after the table):
Test Scenario | Throughput | Comment
 | 22.6 Gbit / 23.0 Gbit / 23.4 Gbit / 69.0 Gbit | 
 | 322 Mbit / 282 Mbit / 355 Mbit / 959 Mbit -repeat- 326 Mbit / 312 Mbit / 358 Mbit / 996 Mbit -repeat- 352 Mbit / 354 Mbit / 372 Mbit / 1078 Mbit | 
 | 1.27 Gbit -repeat- 1.16 Gbit -repeat- 1.25 Gbit | 
 | 311 Mbit / 369 Mbit / 365 Mbit / 1045 Mbit -repeat- 338 Mbit / 370 Mbit / 381 Mbit / 1089 Mbit | 
 | 359 Mbit / 331 Mbit / 358 Mbit / 1048 Mbit | 
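For reproducibility, this is roughly how such a run could be driven with iperf2; the server address, stream count, and duration are assumptions, chosen because each row above looks like three parallel streams plus their SUM line:

    # On the receiving RHEL VM
    iperf -s

    # On the sending RHEL VM: 3 parallel TCP streams for 60 seconds, reporting every 10 s
    iperf -c 10.0.0.20 -P 3 -t 60 -i 10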
Regards,
Anthony Burke - Systems Engineer
Network Security Business Unit
VMware Australia & New Zealand
Level 7, 28 Freshwater Place, Southbank VIC 3006
+61 415 595 098