
Thursday, November 27, 2014

IOA and 40Gb Add-In Module Default Behavior

4x10G mode is the default.

You change the uplink speed by changing the opmode with an argument:

stack-unit 0 iom-mode standalone 40G
(Requires reload)


Verify the uplink speed with:

show system stack-unit unit-number iom-uplink-speed
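As a rough end-to-end sketch (assuming unit 0 and a generic Dell# prompt; adjust for your chassis, and the save before the reload is just a precaution):

Dell#configure
Dell(conf)#stack-unit 0 iom-mode standalone 40G
Dell(conf)#end
Dell#copy running-config startup-config
Dell#reload

After the reload, confirm the new mode:

Dell#show system stack-unit 0 iom-uplink-speed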

Thursday, November 20, 2014

Azure Pack vs vCAC

Hi Scott,

I will make some comments based on my personal experience. We have implemented both solutions for different customers in Australia; both have their strengths and weaknesses.

Windows Azure Pack
The good
-          Portal is great, same as Azure
-          Pretty simple to set up, basic implementation requires just Windows and Virtual Machine Manager.
-          Provides most of the private cloud functions customers are looking for
-          Great story for Azure public cloud integration, machine migration is seamless
-          With Hyper-V Recovery Manager you can replicate Hyper-V VMs directly to Azure and to a secondary data centre
-          WAP includes Azure Service Bus
-          The Scale-Out File Server architecture on the MS platform is pretty solid, and scalability is not bad
-          Licensing is simple: per processor for Windows and all System Center products.
The not so good
-          No multi-tenancy
-          No ability to customise the portal
-          The chargeback is very basic; you need to implement Service Manager for detailed reports (and SM is still pretty terrible)
-          Orchestration is fairly basic; you need SCO for custom orchestration.
-          Needs SCOM for monitoring and alerting; third-party ticketing integration is complex.
-          The MS virtual networking stack cannot do automated provisioning of multi-tier applications, virtual load balancers, or VLANs.
-          Locked into the MS cloud, with poor integration with other cloud vendors.
-          If the customer is an existing VMware customer, migrating virtual machines can require significant effort: P2V migration is no longer available in VMM 2012 R2, and VMware integration is limited.

vRealize Automation (AKA vCAC)
The good
-          True multi-tenancy
-          SDN integration is excellent; with NSX, vCAC can do very complex provisioning and management of network services
-          Integrates with vCenter Orchestrator, with a couple of hundred workflows available out of the box
-          Good chargeback functionality out of the box
-          Portal is somewhat customisable.
-          VMware have announced full support for OpenStack, and have an OpenStack distribution in beta
-          VMware have announced support for Docker, Jenkins and Kubernetes, so it is a good platform for open source cloud application development
The not so good
-          Complex to set up
-          vCloud Air public cloud availability is still fairly limited, and integration is currently rudimentary.
-          VSAN v1 is fairly basic at the moment; significant improvements will have to wait for vSphere v6
-          Needs vCOps for monitoring and alerting
-          Licensing is complex and pricing of the solution depends on the size and complexity of the implementation
-          DR options are more complex than MS; SRM is better for enterprise DR but is not cloud-ready.

Hope this helps.

Dean Gardiner
Practice Lead – Data Centre and Cloud
Australia and New Zealand
Dell | Global Infrastructure Consulting Services
mobile +61 409315591


Friday, November 14, 2014

Force10 - group command to create multiple VLANs

·         The “group” command can be used to create multiple VLANs and apply any common bulk configuration to all of them
·         The “range” command is used to apply bulk configuration to a range of existing VLANs (i.e., ones that are already created)

Sample:

Creating VLANs and adding an interface to them:
New_MXL_iSCSI_C1(conf)#interface group vlan 10 - 12
New_MXL_iSCSI_C1(conf-if-group-vl-10-12)#tag te 0/2

Adding an interface to existing VLANs:
New_MXL_iSCSI_C1(conf)#interface range vlan 10 - 15
New_MXL_iSCSI_C1(conf-if-range-vl-10-15)#tag te 0/2


Please note that a comma (“,”) can be used for non-consecutive VLANs, as in the sketch below.
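For example, a rough sketch tagging an interface into two non-consecutive VLAN blocks (the prompt string and exact range punctuation are illustrative and may vary slightly by FTOS release):

New_MXL_iSCSI_C1(conf)#interface range vlan 10 , vlan 20 - 22
New_MXL_iSCSI_C1(conf-if-range-vl-10,vl-20-22)#tag te 0/2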

Monday, November 10, 2014

ESX 5.x and Broadcom/Intel CNA - FCoE issue

Please note that we are currently seeing a problem with VMware ESX and FCoE deployments. Following are the details of the problem.

What is the problem
VMware ESX servers may fail to establish FCoE sessions with storage devices when the Software FCoE adapter capability is enabled on the servers. When CNA/NIC modules that support partial FCoE offload (Broadcom and Intel only) are used, the VMware ESX server’s Software FCoE adapter has to be enabled to access LUNs over FCoE. ESX’s Software FCoE adapter has a software defect that triggers the FCoE connectivity problems when connected to the S5000.
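To see which CNAs and software FCoE adapters a host actually has, here is a quick sketch using the standard esxcli namespaces on ESXi 5.x (output format varies by build):

~ # esxcli fcoe nic list
~ # esxcli fcoe adapter list
~ # esxcli storage core adapter list

The first two list FCoE-capable NICs and any activated software FCoE adapters; the last lists all vmhba adapters, where a software FCoE adapter shows up alongside the physical HBAs.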

How does it impact the customer environment
VMware ESX servers may take a long time to connect, or fail to connect, to storage devices after the S5000 or the server is rebooted, or after the interfaces between the server and the S5000 are disabled/enabled.

Who gets impacted by this problem
Any customer with the following environment will get impacted.
-          VMware ESX server with Broadcom or Intel CNA connecting to the S5000 either directly or through MXL/IOA (FSB). 

This issue does not affect VMware ESX servers deployed with QLogic or Emulex CNAs, which have hardware FCoE offload capability enabled by default.

What is being done
The Dell Networking engineering team is actively engaged with VMware to fix this issue. VMware support has already reproduced and acknowledged that this is a problem with ESX 5.x, and has forwarded it to VMware engineering for a fix. So far, VMware has not given us an expected time for the fix.

What is the recommendation
We are fully engaged with VMware to resolve this issue. However, until the issue is resolved by VMware, we will have to pursue the following options.

-          For any FCoE deployments using VMware ESX, please use QLogic or Emulex CNAs instead of Broadcom or Intel.
o   Also, please ensure that there is a case open for it with Dell support and VMware support.
-          If the customer does not have VMware ESX servers, then it is OK to use Broadcom or Intel CNAs in the environment.

Saleem Muhammad
Dell  |  Product Management
5480 Great America Parkway | Santa Clara, CA  95054

Desk:  (408) 571-3118 Saleem_Muhammad@dell.com

Tuesday, November 4, 2014

DELL and VMware VSAN

Midway through the year, VMware changed their storage controller certification by requiring everything to be processed through their lab, which is a bottleneck. PERC9 certification, including the H330, is in process but will not likely be approved before Q4. In addition to the H330 having slightly less than a 256 queue depth, VMware is not entirely ready for 12Gb SAS, so the testing / validation is taking more time than expected. Keep in mind 13G vSphere support requires v5.5 U2 at the minimum for VSAN (v5.1 U2 will also work, but does not support VSAN). Tom, I’d recommend syncing that customer up with the Solutions Center to do a 13G POC with the H730 if they want to test now. Until we get a successful engineering check on the configuration, I’d be reluctant to tell them what to purchase at present.

On pass-through, the thing you will run into from VMware is them pushing pass-through, since it enables single-drive replacement in the event of failure instead of having to take down an entire node to replace one drive, as would be the case for RAID0. Considering it is still difficult to identify the physical location of a failed drive in a VSAN environment without either OME or the OpenManage integration into vCenter, you can argue it either way when weighing the benefits of PERC.

We have a VSAN information guide posted to the documentation for ESXi, out at dell.com/virtualizationsolutions under VMware ESXi v5.x. Page 7 of the VSAN information guide lists the storage controllers we’ve tested, which include the H710, H710P, and the LSI 9207-8i.
For 11G servers, we have done NO certification of that generation as a “Ready Node”, meaning no Dell engineering has stood up an 11G cluster. The VSAN compatibility list only requires certification of the storage controller, HDDs, and SSDs, so as long as all of those components are there, and the server is certified for v5.5 U1 or higher (which most 11G are), VMware at least will support it. VSAN OEM will only be available on 12G and newer.
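If you want to sanity-check what a given host has actually claimed for VSAN, here is a minimal sketch using the standard esxcli vsan namespace on ESXi 5.5 U1 and later (output will vary by environment):

~ # esxcli vsan cluster get
~ # esxcli vsan storage list

The first confirms cluster membership at the host level; the second lists the SSDs and HDDs that host has claimed into its disk groups.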

And, since this is the Blades-Tech forum, I’d restate DAS still isn’t officially supported by VSAN (even if it works), so neither Blades nor VRTX are recommended VSAN targets at present.  The next major release of vSphere in 2015 will support JBOD, and we’ll look at certifications again in that time frame.


Damon Earley
Hypervisor Product Marketing
Dell | Product Group – Systems Management

Monday, November 3, 2014

Force10: restore factory-defaults

This command can be used to remove stack information. Yes, even the sticky stuff left in NVRAM. This makes it much, much easier for our customers to convert stacked units (especially those remote from the equipment).

Upgrade the stack to 9.5 or 9.6 and then abort BMP when prompted  

1)  Use the following command to set the switch to factory default, including the stacking ports     
    #restore factory-defaults stack-unit all clear-all
     Proceed: yes

2) When prompted about BMP, select A:

To continue with the standard manual interactive mode, it is necessary to abort BMP.
Press A to abort BMP now.
Press C to continue with BMP.
Press L to toggle BMP syslog and console messages.
Press S to display the BMP status.
[A/C/L/S]: A

3) Check to make sure that after the reboot the reload-type will be normal-reload

Dell#
Dell#show reload-type
Reload-Type                :   bmp [Next boot : normal-reload]
auto-save                  :   disable
config-scr-download        :   enable
dhcp-timeout               :   disable
vendor-class-identifier    :
retry-count                :   0

4) reload

Details on the command are included here (they can also be found in the most recent Program Status).

In OS 9.5, we introduced a new command to reset the switch to factory default mode:

Dell# restore factory-defaults stack-unit all clear-all

It does the following:
Deletes the startup configuration
Clears the NOVRAM and boot variables, depending on the arguments passed
Enables BMP
Resets the user ports to their default native modes (i.e., non-stacking, no 40G to 4x10G breakouts, etc.)
Removes all CLI users

The command then reloads the switch in a state similar to a brand-new device. Restore does not change the current OS images or the partition from which the switch will boot up. Likewise, restore does not delete any of the files you store on the SD card (except startup-config).
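Putting the steps above together, a rough sketch of converting a remote unit (the commands are the ones shown above; the prompt, and the final save/reload to pick up your new base configuration, are assumptions about a typical environment):

Dell#restore factory-defaults stack-unit all clear-all
Proceed: yes
(the unit reloads on its own; press A at the BMP prompt to abort BMP)
Dell#show reload-type
(apply your base configuration, then save it so the unit boots cleanly next time)
Dell#copy running-config startup-config
Dell#reload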