
Tuesday, January 26, 2016

VMware vSphere Profile-Driven Storage Service

To restart the Profile-Driven Storage service (vmware-sps) on the vCenter Server Appliance and check its status:

service vmware-sps restart
service vmware-sps status

Monday, January 25, 2016

Cisco UCS VIC 1240 and VIC 1340

Is it VIC 1240 or VIC 1340?  We haven’t tested VIC 1340 yet.  Ray (Ray Budavari <rbudavari@vmware.com>) has tested VIC 1240 and recommends the following tuning for performance:

NetQueue (UCS Ethernet Adapter Policy & VMQ Connection Policy): 8. Provides additional queues for traffic using different DST MACs (benefits when there is a mix of both VXLAN and VLAN traffic).
NIC interrupt timers & TCP LRO (UCS Ethernet Adapter Policy): 64us & Disabled. Reduces NIC adapter interrupt timers to enable faster processing of receive traffic.
Multiple VTEPs using the Load Balance - SRC ID policy (NSX VXLAN Configuration): 2 VTEPs. Multiple VTEPs enable balancing of network traffic processing across two CPU contexts.
Network IO Control (VDS): Enabled. Provides additional TX contexts / CPU resources for transmit traffic.

Also,
  1. ESXi power management should be disabled
  2. UCS Firmware must be at a minimum version of 2.2(2c)
  3. ESXi hosts require ENIC driver 2.1.2.50 or newer

The above tuning is critical for improving performance; a quick way to sanity-check the driver prerequisites from the ESXi shell is sketched below.
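A minimal sketch of verifying the driver-related prerequisites from an ESXi shell; vmnic0 is only an example uplink name and output details vary by ESXi build:

esxcli network nic list              # uplink inventory, driver name and link state
esxcli network nic get -n vmnic0     # driver and firmware details; confirm enic is 2.1.2.50 or newer
esxcli system module get -m enic     # details of the enic module actually loaded on the host

Power management and the UCS firmware level are typically confirmed in the UCS Manager service profile (BIOS/power policy and firmware packages) rather than from the ESXi shell.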


Like all components within NSX, dvFilter's performance is also influenced by hardware offloads, etc.  Check out the NSX Performance slides from VMworld that I sent earlier.  Feel free to set up a quick sync-up call to discuss if still in doubt.

Saturday, January 23, 2016

Networking stack to increase parallelism and improve performance for multi-processor systems

Cisco UCS, multi-queue NICs, and RSS

VMQ Deep Dive
http://blogs.technet.com/b/networking/archive/2013/09/10/vmq-deep-dive-1-of-3.aspx
http://blogs.technet.com/b/networking/archive/2013/09/24/vmq-deep-dive-2-of-3.aspx
http://blogs.technet.com/b/networking/archive/2013/09/24/vmq-deep-dive-3-of-3.aspx

RSS Deep Dive - Tech Talks

Introduction to Receive Side Scaling

Scaling in the Linux Networking Stack
This document describes a set of complementary techniques in the Linux
networking stack to increase parallelism and improve performance for
multi-processor systems.
The following technologies are described:
  RSS: Receive Side Scaling
  RPS: Receive Packet Steering
  RFS: Receive Flow Steering
  Accelerated Receive Flow Steering
  XPS: Transmit Packet Steering
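A minimal sketch of how these knobs can be inspected or set on a Linux host; the interface name eth0 and the CPU masks are only examples:

echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus   # RPS: spread RX queue 0 processing across CPUs 0-3
echo 3 > /sys/class/net/eth0/queues/tx-0/xps_cpus   # XPS: map TX queue 0 to CPUs 0-1
ethtool -x eth0                                     # RSS: show the NIC's RX flow hash indirection table
ethtool -X eth0 equal 4                             # RSS: spread flows evenly across 4 RX queues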

Monday, January 18, 2016

Password for VMware beta presentations and recordings.

According to the community pages, the password should be:
Recording Password: hostedbeta

UCS VIC Performance

David & Alex,
Here are the recommended settings for the VIC 1240. When I am in the US next week, I will check the status of VIC 1340 performance testing for you.

Begin forwarded message:
From: Samuel Kommu <skommu@vmware.com>
Subject: Re: [nsbu-se] Fortinet & NetX
Date: 21 Dec 2015 22:02:02 CET
To: Anthony Burke <aburke@vmware.com>
Cc: Scott Clinton <sclinton@vmware.com>, Ray Budavari <rbudavari@vmware.com>, Leena Merciline <lmerciline@vmware.com>, ask-nsx-pm <ask-nsx-pm@vmware.com>

Anthony,

Is it VIC 1240 or VIC 1340?  We haven’t tested VIC 1340 yet.  Ray (copied) has tested VIC 1240 and recommends the following tuning for performance:

NetQueue (UCS Ethernet Adapter Policy & VMQ Connection Policy): 8. Provides additional queues for traffic using different DST MACs (benefits when there is a mix of both VXLAN and VLAN traffic).
NIC interrupt timers & TCP LRO (UCS Ethernet Adapter Policy): 64us & Disabled. Reduces NIC adapter interrupt timers to enable faster processing of receive traffic.
Multiple VTEPs using the Load Balance - SRC ID policy (NSX VXLAN Configuration): 2 VTEPs. Multiple VTEPs enable balancing of network traffic processing across two CPU contexts.
Network IO Control (VDS): Enabled. Provides additional TX contexts / CPU resources for transmit traffic.

Also,
  1. ESXi power management should be disabled
  2. UCS Firmware must be at a minimum version of 2.2(2c)
  3. ESXi hosts require ENIC driver 2.1.2.50 or newer

The above tuning is critical for improving performance.

Like all components within NSX, dvFilter's performance is also influenced by hardware offloads, etc.  Check out the NSX Performance slides from VMworld that I sent earlier.  Feel free to set up a quick sync-up call to discuss if still in doubt.

Regards,
Samuel.

From: Anthony Burke <aburke@vmware.com>
Date: Monday, December 21, 2015 at 12:00 PM
To: Samuel Kommu <skommu@vmware.com>
Cc: Leena Merciline <lmerciline@vmware.com>, ask-nsx-pm <ask-nsx-pm@vmware.com>, Scott Clinton <sclinton@vmware.com>
Subject: Re: [nsbu-se] Fortinet & NetX

Hi Samuel,

The setup is as follows:

vSphere 6.0
NSX 6.1.5 / NSX 6.2 (both have been tested)
UCS B200 M3 / UCS B200 M4
VIC 1240 / VIC 1340 - latest drivers.

Test VMs have VMXNET3 drivers running.


Please correct me if I am wrong, but hardware offloads and NICs should not be an issue when utilising dvFilter, as this is purely done in software. This is purely a performance requirement. The test bed of two workloads on a VLAN-backed port-group, on the same host or different hosts, is not utilising VXLAN.

The test criteria below outline:
  • Same host for two VMs
  • Same network (no routing)
  • Any to Any with no FW rules = 22.x Gbit sustained.
  • Any to Redirect 

Having deployed PAN with NSX on UCS, I do believe this is a Fortinet issue. Given the sensitivity to the customer, I am pursuing this internally.

If I am missing something and we are leveraging our NIC cards, please let me know.

As an aside, I have a different customer (NSX-friendly, deployed in prod) who may give me access to a lab of M3 and M4 UCS, but I cannot guarantee access.

Regards,

Anthony Burke - Systems Engineer
Network Security Business Unit
VMware Australia & New Zealand
Level 7, 28 Freshwater Place, Southbank VIC 3006
+61 415 595 098

On 22 Dec 2015, at 6:09 AM, Samuel Kommu <skommu@vmware.com> wrote:
Anthony,

I haven’t received any hardware setup details yet.  If you have already sent them, could you please send them over again?

Note on NetX performance: close to line-rate throughput is achievable with the use of hardware offloads, jumbo MTU, etc.  Check out the VMworld 2015 slides:  https://vault.vmware.com/group/nsx/document-preview?fileId=16312906

Regards,
Samuel.

From: <ask-nsx-pm-bounces@vmware.com> on behalf of Leena Merciline <lmerciline@vmware.com>
Date: Monday, December 21, 2015 at 10:55 AM
To: Anthony Burke <aburke@vmware.com>, ask-nsx-pm <ask-nsx-pm@vmware.com>
Cc: Scott Clinton <sclinton@vmware.com>
Subject: Re: [nsbu-se] Fortinet & NetX

Hello Anthony, a performance report on this is being done by Samuel K (TPM) using a sample service VM. This will be for internal use. We plan to publish this soon (by early Jan) on Vault.
Leena

From: Anthony Burke <aburke@vmware.com>
Date: Sunday, December 20, 2015 at 6:35 PM
To: ask-nsx-pm <ask-nsx-pm@vmware.com>
Cc: Scott Clinton <sclinton@vmware.com>
Subject: Re: [nsbu-se] Fortinet & NetX

Hi team,

Are there any comments around this? This is having an impact on a lighthouse customer for us in the Australian federal government. There are other customers watching this situation closely to see which way this customer progresses.
I cannot provide the customer clear information about dvFilter, performance, and commentary around the NetX framework. Can anyone comment here? Has any testing been done? Can we please have an official comment?

I have sent Sam Kommu details on the hardware setup per a separate unicast request.


Regards,

Anthony Burke - Systems Engineer
Network Security Business Unit
VMware Australia & New Zealand
Level 7, 28 Freshwater Place, Southbank VIC 3006
+61 415 595 098

On 14 Dec 2015, at 9:28 AM, Anthony Burke <aburke@vmware.com> wrote:
Hi team,

I am in a familiar discussion about NetX again. This time it is with Fortinet. A customer of mine has raised a high concern over the lack of throughput when leveraging NSX NetX and Fortinet VMX 2.0.
Fortinet were quick to blame our single-threaded dvFilter plugin. Whilst we are managing expectations with the partner and customer, can we have an official comment around the expected speed of redirection alone (without 3rd-party features enabled)? Could we also have official communication to partners about this?

We’ve done this with Check Point locally and now Fortinet are piping up. I know we can do ~1.3 Gbps with Palo Alto (the customer is in production locally) and I heard rumours Fortinet could do a lot more.
Attached are the customer's rudimentary tests with iperf. Will raising an SR on mysids help progress this?
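For reference, a minimal sketch of the kind of iperf run described in the tests below; the server address is a placeholder, while the thread count and duration match the test descriptions:

iperf -s                            # on each RHEL server VM
iperf -c <server-ip> -P 10 -t 60    # on each client VM: 10 parallel threads, 1 minute test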

Performance Testing (iperf between RHEL client<>servers):
Test Scenario / Throughput / Comment

Test 1
  • Network Introspection = None
  • VMX FW = Not Applicable
  • VMX IPS = Not Applicable
  • DFW = Allow any<>any
  • ESX1 only // 3x client->server in parallel with 10 threads each, 1 min test
Throughput: 22.6 Gbit + 23.0 Gbit + 23.4 Gbit = 69.0 Gbit total
Comment: VM<>VM on the same ESX host eliminates any influence from the physical network and should represent ideal conditions for maximum throughput... As we can see...

Test 2
  • Network Introspection = Redirect traffic to VMX appliance
  • VMX FW = Allow any<>any
  • VMX IPS = No policy applied to traffic rule
  • DFW = Allow any<>any
  • ESX1 only // 3x client->server in parallel with 10 threads each, 1 min test
Throughput: 322 Mbit + 282 Mbit + 355 Mbit = 959 Mbit total
  -repeat- 326 Mbit + 312 Mbit + 358 Mbit = 996 Mbit total
  -repeat- 352 Mbit + 354 Mbit + 372 Mbit = 1078 Mbit total
Comment: Just forwarding through the VMX with no IPS or enforcement... Slow as hell.

Test 3
  • Network Introspection = Redirect traffic to VMX appliance
  • VMX FW = Allow any<>any
  • VMX IPS = No policy applied to traffic rule
  • DFW = Allow any<>any
  • ESX1 only // 1x client->server in parallel with 10 threads, 1 min test
Throughput: 1.27 Gbit, -repeat- 1.16 Gbit, -repeat- 1.25 Gbit
Comment: As above but with just a single client->server instance. Shows there is a bottleneck and it is not with the test VMs, as they were clearly fighting for bandwidth before.

Test 4
  • Network Introspection = Redirect traffic to VMX appliance
  • VMX FW = Allow any<>any
  • VMX IPS = Inspect across all signatures (~4700 or so, non-blocking)
  • DFW = Allow any<>any
  • ESX1 only // 3x client->server in parallel with 10 threads each, 1 min test
Throughput: 311 Mbit + 369 Mbit + 365 Mbit = 1045 Mbit total
  -repeat- 338 Mbit + 370 Mbit + 381 Mbit = 1089 Mbit total
Comment: 3x client->server instances with IPS detection enabled (pass mode, no blocking). Odd that this appears to beat the test before with no IPS mode in some cases. Then again, a handful of sessions may not strain the inspection engine and iperf cannot scale over a hundred sessions.

Test 5
  • Network Introspection = Redirect traffic to VMX appliance
  • VMX FW = Allow any<>any
  • VMX IPS = Inspect across all signatures (~4700 or so, blocking mode)
  • DFW = Allow any<>any
  • ESX1 only // 3x client->server in parallel with 10 threads each, 1 min test
Throughput: 359 Mbit + 331 Mbit + 358 Mbit = 1048 Mbit total
Comment: As above but in IPS blocking mode.

Tests between ESX hosts pending...
Assessment:
NSX network introspection seems to hit a ceiling around 1 Gbit for connectivity on the same ESX host, where conditions are predisposed towards maximum throughput. Obviously, a ~70-fold reduction in performance with open routing/forwarding through the appliance and no enforcement is hard to understand, and performance is comparable on lightly loaded and heavily loaded ESX hosts. These metrics suggest an issue with NSX network introspection itself or with the specific VMX reciprocation of this redirection.

Regards,

Anthony Burke - Systems Engineer
Network Security Business Unit
VMware Australia & New Zealand
Level 7, 28 Freshwater Place, Southbank VIC 3006
+61 415 595 098



Monday, January 11, 2016

VMware Distributed Virtual Switch (DVS) Test Plan

Category | Name | Description | Test Method | Expected Result | Pass/Fail
Operations | Implementation | Create a new virtual switch. | Create a new virtual switch within a specified vCenter and migrate hosts to it. Specific steps are required here based on environment-specific variables. | New virtual switch is created successfully and is available for use. |
Operations | Upgrade | Upgrade a virtual switch. | Upgrade the virtual switch to the latest version based on vCenter/ESXi host versions. | Virtual switch is upgraded with no impact to applications/users. |
Operations | Cross virtual switch vMotion | Migrate a VM from one virtual switch to another. | Dynamically migrate a VM from one virtual switch to another. | VM is migrated with no impact to applications/users. |
Operations | Config Backup | Back up configuration. | Back up and save the virtual switch configuration. | Configuration is exported and saved. |
Operations | Config Restore | Restore configuration. | Delete or change the virtual switch configuration, then restore to a previous version. | Configuration is restored successfully to a previous version. |
Operations | Network IO Control | Designate different network IO properties for different types of VM workloads. | Create Network Resource Pools to associate port groups with specific network SLAs. | VM traffic is treated differently depending on the configured SLAs. |
Operations | LACP | Ensure the virtual switch communicates properly across LACP-enabled uplinks. | Configure LAGs for host uplink ports to match upstream switch LACP configurations. | Network traffic successfully traverses the LAG. |
Operations | RBAC | Ensure appropriate operations resources are able to manage/configure/monitor the virtual switch. | Create a "network" specific role and apply permissions to the appropriate AD security group. | Operations resources have the proper access required. |
Operations | VLAN Updates via PowerCLI | Add additional VLANs to Port Groups. | Leverage PowerCLI script(s) to add one or more VLANs to an existing Port Group or create a new Port Group. | Port Group is successfully created or updated and is configured to leverage the specified VLAN(s). |
Operations | Maximum Transfer Unit (MTU) | Configure MTU per virtual switch. | Specify the required MTU per virtual switch to support network traffic requirements. | MTU is successfully configured and network traffic behaves properly. |
Failover | Host Failure | Validate VMs are successfully restarted via HA on another host in the cluster. | Power off a host with a test VM running on it. | VM is restarted on another host and network traffic resumes normal operation. An alert is also generated. |
Failover | vCenter Failure | Validate normal network operations continue without the vCenter server. | Power off vCenter. | No network traffic from ESXi hosts or VMs is impacted. Any virtual switch modifications will not be available until vCenter is available. An alert is also generated. |
Failover | Physical Switch Failure | Validate physical network redundancy. | Power off a physical upstream switch. | No network traffic from ESXi hosts or VMs is impacted because of redundant network uplink configuration and load balancing algorithms. An alert is also generated. |
Failover | Physical NIC Failure | Validate physical network redundancy. | Unplug a physical NIC from the blade/chassis or virtually disable one via blade virtualization (Virtual Connect/UCS Manager). | No network traffic from ESXi hosts or VMs is impacted because of redundant network uplink configuration and load balancing algorithms. An alert is also generated. |
Troubleshooting | NetFlow | Send NetFlow data to a collector for analysis purposes. | Configure and enable the virtual switch to send flows to a NetFlow collector. Specific steps required here based on environment-specific variables. | NetFlow collector receives and analyzes the configured object(s). Data is clean and usable. |
Troubleshooting | Port Mirroring | Mirror a VM vNIC to a Layer 3 IP address for analysis purposes. | Configure and enable port mirroring to send traffic to a designated IP address. Specific steps required here based on environment-specific variables. | Designated IP address receives specified network traffic from the mirrored port and it can be captured via 3rd-party tools. Data is clean and usable. |
Troubleshooting | Packet Capture | Capture network packets for specific objects for analysis purposes. | Configure a packet capture session for a specified workload and save/export the capture file in ".pcap" format. | Packet capture is successfully generated and can be opened in a 3rd-party packet capture analysis tool. |
Troubleshooting | Traffic Filtering | Allow or drop traffic from a specified object. | Configure and enable traffic filtering to allow or drop specific types of traffic from specific objects. | Designated traffic is allowed or dropped. |
Troubleshooting | Traffic Tagging | Tag specific traffic via CoS or DSCP standards. | Configure and enable traffic tagging to tag specific types of traffic from specific objects. | Designated traffic is tagged. |
Troubleshooting | Monitor Statistics | Connect via CLI to gather network statistics (dropped packets). | Connect to ESXi via SSH or vCenter via PowerCLI to gather virtual switch statistics. | Network statistics are viewed/gathered via CLI methods. |
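A minimal sketch of the CLI side of the last test (Monitor Statistics), run from an ESXi shell; vmnic0 is only an example uplink and command availability varies by ESXi version:

esxcli network vswitch dvs vmware list   # distributed switches the host participates in
esxcli network nic list                  # uplink inventory and link state
esxcli network nic stats get -n vmnic0   # per-uplink packet, error and drop counters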

Sunday, January 10, 2016

Shell Prompt with Username and Hostname

export PS1="[${LOGNAME}@$(hostname)]$ "

In .profile:

PS1="[${LOGNAME}@$(hostname)]$ ";       export PS1

SSH Bastion Host

Based on blog post here - http://blog.scottlowe.org/2015/11/21/using-ssh-bastion-host/

This works for me on FreeBSD (in ~/.ssh/config):

Host fbsd01.dc01
        Hostname 95.80.241.17
        Port 2222
        User cdave
        ForwardAgent yes

Host fbsd02.dc01
        User cdave
        Hostname fbsd02.dc01.uw.cz
        IdentityFile ~/.ssh/id_rsa
        #ProxyCommand ssh cdave@fbsd01.dc01 nc %h %p
        ProxyCommand ssh cdave@fbsd01.dc01 -W %h:%p
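With the above in ~/.ssh/config, a quick usage check (the host aliases come from the config above; the file name is just an example):

ssh fbsd02.dc01                     # transparently jumps through the bastion fbsd01.dc01
scp ./file.tgz fbsd02.dc01:/tmp/    # file copies follow the same proxied path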

Monday, January 4, 2016

Github / Git

// *********** github config
git config --global user.name "davidpasek"
git config --global user.email "david.pasek@gmail.com"

// *********** Create new git repository from directory
Create a directory to contain the project.
Go into the new directory.
Type git init
Write some code.
Type git add -A to add all the files from the current directory.
Type git commit
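The same steps as shell commands (the directory name is only an example):

mkdir myproject && cd myproject
git init
# ...write some code...
git add -A
git commit -m "Initial commit"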

// *********** Clone existing github repository
git clone https://github.com/davidpasek/math4kids

// *********** Add file to github

git status
git add file.html
git commit -m "Commit comment"

// push back to github
git push

// pull out from github
git pull

// *********** Add all files in local directory to github
git add -A
git commit -m "Initial add of files into the repository"
git push

// *********** Working with github - commit changes
git status
git pull
... working with files
git commit -a
git push
git status

GLOBAL CONFIG
git config --global user.name "David Pasek"
git config --global user.email david.pasek@gmail.com

Save credentials
$ git config credential.helper store
$ git push http://example.com/repo.git
Username: <type your username>
Password: <type your password>

[several days later]
$ git push http://example.com/repo.git
[your credentials are used automatically]

Q&A
Q: What is the difference between git clone and git checkout?
A: 
The man page for checkout: http://git-scm.com/docs/git-checkout
The man page for clone: http://git-scm.com/docs/git-clone
To sum it up: clone is for fetching repositories you don't have; checkout is for switching between branches in a repository you already have.

Links:
Visual GIT reference - http://marklodato.github.io/visual-git-guide/index-en.html
GIT Simple Guide - http://rogerdudler.github.io/git-guide/
How To Use Git Effectively -  https://www.digitalocean.com/community/tutorials/how-to-use-git-effectively