
Monday, December 29, 2014

HP H222 SAS Controller has AQLEN=600
Best is to just go with the embedded HP P420i, which has AQLEN=1020
***Enable HBA mode/passthrough on the P420i using HPSSACLI and the following ESXi commands:
-Make sure the disks are wiped clean and no RAID configuration exists
-Make sure the firmware is the latest, v5.42
-Make sure the ESXi hpsa device driver v5.5.0-44vmw.550.0.0.1331820 is installed: http://www.vibsdepot/hpq/feb2014-550/esxi-550-devicedrivers/hpsa-5.5.0-1487947.zip
-Put the host in maintenance mode, then from the iLO console of the ESXi host open the shell (Alt+F1) and execute the following:
To view the controller config using HPSSACLI with ESXCLI:
~ # esxcli hpssacli cmd -q "controller slot=0 show config detail"
To enable HBA mode on the P420i using HPSSACLI:
~ # esxcli hpssacli cmd -q "controller slot=0 modify hbamode=on forced"
Reboot the host and rescan storage, and voila ... the disks will show up in the vSphere Web Client under each host > Devices, before you enable VSAN.
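
To double-check before enabling VSAN, the adapter and the raw disks can also be listed from the same ESXi shell; a small sketch using standard esxcli storage listings (not from the original note):

~ # esxcli storage core adapter list          # the P420i should show up claimed by the hpsa driver
~ # esxcli storage core device list | more    # the individual disks behind it should appear as devices

The adapter queue depth (AQLEN) can then be confirmed in esxtop by pressing 'd' for the disk adapter view.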

Tuesday, December 16, 2014

MXL and Vmware dvS PVLAN

From: Jayson_Block [mailto:bounce-Jayson_Block@kmp.dell.com]
Sent: Friday, December 12, 2014 21:26
To: Cloud_Virtualization@kmp.dell.com
Subject: RE: MXL and Vmware dvS PVLAN

Dell Customer Communication
The feature you are actually looking for, to support VMware and PVLAN together, is PVLAN trunking. I'll get into why in just a second.

FTOS does indeed support this feature in the majority of the 10/40 lineup, which is actually a pretty significant thing, as many other vendors (like Brocade Ethernet, for example) do not support it or are only now introducing support for PVLAN trunking. Almost all vendors now support an implementation of PVLAN; that's not at issue. VMware specifically requires PVLAN trunking, and those trunks must support the ability to tag both normal VLAN IDs and PVLAN IDs.

Here is a link to the MXL FTOS 9.6.0.0 CLI reference guide – beware, it’s pretty big.


Details start at page 41.

We're all used to presenting trunks to ESX hosts, where the trunk switchports are configured to support multiple VLAN IDs that have been set to 'tagged' on those particular ports or port-channels. Private VLAN for VMware is handled the same way: you can configure those same trunks to support private-VLAN trunking and then tag both the primary PVLAN and the secondary (isolated, community, etc.) PVLAN IDs onto those trunks.

At the dvS top level, when you configure Private VLAN, it will ask for both the primary VLAN ID and the attached secondary IDs. Once configured at the top level, you can then create port groups for the primary (if desired) and secondary PVLAN IDs as necessary.

At the physical switch level you create VLAN IDs as normal but then go into each VLAN interface you want to be a PVLAN and start defining their modes.

Below is purely an example:

All 32 of the internal switchports.

- int range tengigabitethernet 0/0-31
- description ESXi-host-trunk-ports
- switchport
- portmode hybrid
- mtu 12000
- flowcontrol rx on tx off
- switchport mode private-vlan trunk

- int vlan 10
- description Just-a-regular-vlan
- mtu 12000
- tagged TenGigabitEthernet 0/0-31

- int vlan 450
- description PVLAN-primary
- mtu 12000
- private-vlan mode primary
- private-vlan mapping secondary-vlan 451
- tagged TenGigabitEthernet 0/0-31

- int vlan 451
- description PVLAN-secondary-isolated
- mtu 12000
- private-vlan mode isolated
- tagged TenGigabitEthernet 0/0-31

Note that vlan 10 above is still tagged on 0/0-31 in addition to the PVLAN primary and secondary VLANs. The addition of the line 'switchport mode private-vlan trunk' is what enables this feature: the ability to tag PVLAN IDs on a trunk.

Hope this helps!

--
Jayson Block
Senior Technical Design Architect
Dell | Datacenter, Cloud and Converged Infrastructure – C&SI
+1 443-876-3366 cell – Maryland – USA

From: Matteo_Mazzari [mailto:bounce-Matteo_Mazzari@kmp.dell.com]
Sent: Friday, December 12, 2014 1:27 PM
To: Cloud_Virtualization@kmp.dell.com
Subject: MXL and Vmware dvS PVLAN


Hi all,
Are there any guidelines for configuring FTOS and ESXi to use PVLAN? Any experience or suggestions?


Thanks a lot
Kind regards

Matteo Mazzari
Solution Architect
Dell | Global Storage Services

mobile +39 340 9312022

Thursday, November 27, 2014

IOA and 40Gb Add-In Module Default Behavior

4x10G mode is the default.

You change the uplink speed by changing the opmode with an argument (requires a reload):

stack-unit 0 iom-mode standalone 40G

To verify the uplink speed after the reload:

show system stack-unit unit-number iom-uplink-speed
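
Putting it together from the IOA CLI, a hedged end-to-end sketch (the prompts and the save step are assumptions; only the iom-mode command itself comes from the note above):

Dell#configure
Dell(conf)#stack-unit 0 iom-mode standalone 40G
Dell(conf)#end
Dell#copy running-config startup-config
Dell#reload
! after the IOA comes back up:
Dell#show system stack-unit 0 iom-uplink-speed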

Thursday, November 20, 2014

azure pack vs vcac

Hi Scott,

I will make some comments based on my personal experience. We have implemented both solutions for different customers in Australia, both have their strengths and weaknesses.

Windows Azure Pack
The good
-          Portal is great, same as Azure
-          Pretty simple to set up; a basic implementation requires just Windows and Virtual Machine Manager.
-          Provides most of the private cloud functions customers are looking for
-          Great story for Azure public cloud integration, machine migration is seamless
-          With Hyper-V Recovery Manager you can send Hyper-V replicas directly to Azure and to a secondary data centre
-          WAP includes Azure Service Bus
-          The Scale Out File Server architecture on the MS platform is pretty solid, and scalability is not bad
-          Licensing is simple: per processor for Windows and all System Centre products.
The not so good
-          No multi-tenancy
-          No ability to customise the portal
-          Chargeback is very basic; you need to implement Service Manager for detailed reports (and SM is still pretty terrible)
-          Orchestration is fairly basic, you need SCO for custom orchestration.
-          Needs SCOM for monitoring and alerting; third-party ticketing integration is complex.
-          The MS virtual networking stack cannot do automated provisioning of multi-tier applications, virtual load balancers, or VLANs.
-          Locked into the MS cloud, with poor integration with other cloud vendors.
-          If the customer is an existing VMware customer, then migration of virtual machines can require significant effort. P2V migration functionality is no longer available in VMM 2012 R2, and VMware integration is limited.

vRealize Automation (AKA vCAC)
The good
-          True multi-tenancy
-          SDN integration is excellent; with NSX, vCAC is able to do very complex provisioning and management of network services
-          Integrates with vCentre Orchestrator, with a couple of hundred workflows available out of the box
-          Good chargeback functionality out of the box
-          Portal is somewhat customisable.
-          VMware have announced full support for OpenStack, and have an OpenStack distribution in beta
-          VMware have announced support for Docker, Jenkins and Kubernetes, so is a good platform for open source cloud application development
The not so good
-          Complex to set up
-          vCloud Air public cloud availability is still fairly limited, and integration is currently rudimentary.
-          VSAN v1 is fairly basic at the moment; we will need to wait for vSphere v6 for significant improvements
-          Needs vCOps for monitoring and alerting
-          Licensing is complex and pricing of the solution depends on the size and complexity of the implementation
-          DR options are more complex than with MS; SRM is better for Enterprise DR but is not cloud-ready.

Hope this helps.

Dean Gardiner
Practice Lead – Data Centre and Cloud
Australia and New Zealand
Dell | Global Infrastructure Consulting Services
mobile +61 409315591


Friday, November 14, 2014

Force10 - group command to create multiple vlans

·         The "group" command can be used to create multiple VLANs and apply any common bulk configuration to all of them
·         The "range" command is used to apply bulk configuration to a range of existing VLANs (they must already be created)

Examples:

Creating VLANs and adding an interface to them:
New_MXL_iSCSI_C1(conf)#interface group vlan 10 - 12
New_MXL_iSCSI_C1(conf-if-group-vl-10-12)#tag te 0/2

Adding an interface to existing VLANs:
New_MXL_iSCSI_C1(conf)#interface range vlan 10 - 15
New_MXL_iSCSI_C1(conf-if-range-vl-10-15)#tag te 0/2


Please note that "," (comma) can be used for non-consecutive VLANs, as in the sketch below.
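
For example, combining a range with the comma separator might look like the following (a sketch based on the FTOS interface range convention; verify the exact syntax against the CLI reference for your release):

New_MXL_iSCSI_C1(conf)#interface range vlan 10 - 12 , vlan 20 - 22

The common configuration (for example tag te 0/2) is then applied exactly as in the examples above.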

Monday, November 10, 2014

ESX 5.x and Broadcom/Intel CNA - FCoE issue

Please note that we are currently seeing a problem with VMware ESX and FCoE deployments. Following are the details of the problem.

What is the problem
VMWare ESX servers may fail to establish FCoE sessions with storage devices when Software FCoE adapter capability is enabled on the servers. When CNA/NIC modules that support partial FCoE offload (Broadcom and Intel only) are used, VMware ESX server’s Software FCoE adapter has to be enabled to access LUNs over FCoE.  ESX’s Software FCoE adapter has a software defect that triggers the FCoE connectivity problems when connected to the S5000. 

How does it impact the customer environment
VMware ESX servers may take a long time or fail to connect to storage devices after rebooting the S5000, the server, or disabling/enabling the interfaces between the server and the S5000.

Who gets impacted by this problem
Any customer with the following environment will get impacted.
-          VMware ESX server with Broadcom or Intel CNA connecting to the S5000 either directly or through MXL/IOA (FSB). 

This issue does not affect VMWare ESX servers deployed with QLogic or Emulex CNAs, which have hardware FCoE offload capability enabled by default.
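
To check from the ESXi shell whether a given host is actually using the software FCoE adapter (and is therefore in the affected configuration), the standard esxcli FCoE listings can be used; a small sketch, not part of the original advisory:

~ # esxcli fcoe nic list        # FCoE-capable NICs (partial-offload Broadcom/Intel CNAs appear here)
~ # esxcli fcoe adapter list    # software FCoE adapters that have been activated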

What is being done
The Dell Networking engineering team is actively engaged with VMware to fix this issue. VMware support has already reproduced the issue and acknowledged that it is a problem with ESX 5.x. Furthermore, they have forwarded the problem to VMware engineering for a fix. So far VMware has not given us an expected time frame for the fix.

What is the recommendation
We are fully engaged with VMware to resolve this issue. However, until the issue is resolved by VMware, we will have to pursue the following options.

-          Any FCoE deployments using VMware ESX, please use QLogic or Emulex CNA instead of Broadcom or Intel.
o   Also, please ensure that there is a case open for it with Dell support and VMware support.
-          If the customer does not have VMware ESX servers then it is ok to use Broadcom or Intel CNAs in the environment.

Saleem Muhammad
Dell  |  Product Management
5480 Great America Parkway | Santa Clara, CA  95054

Desk:  (408) 571-3118 Saleem_Muhammad@dell.com

Tuesday, November 4, 2014

DELL and VMware VSAN

Midway through the year, VMware changed their storage controller certification by requiring all of it to process through their lab, which is a bottleneck.  PERC9 certification, including the H330, is in process but will not likely be approved before Q4.  In addition to the H330 having slightly less than a 256 queue depth, VMware is not entirely ready for 12Gb SAS, so the testing/validation is taking more time than expected.  Keep in mind 13G vSphere support requires v5.5 U2 at a minimum for VSAN (v5.1 U2 will also work, but does not support VSAN).  Tom, I'd recommend syncing that customer up with the Solutions Center to do a 13G POC with the H730 if they want to test now.  Until we get a successful engineering check on the configuration, I'd be reluctant to tell them what to purchase at present.

On pass-through, the thing you will run into from VMware is them pushing pass-through, since it enables single-drive replacement in the event of failure, instead of having to take down an entire node to replace one drive as would be the case with RAID0.  Considering it is still difficult to identify the physical location of a failed drive in a VSAN environment without either OME or the OpenManage integration into vCenter, you can argue the benefits of PERC either way.

We have a VSAN information guide posted to the documentation for ESXi, out at dell.com/virtualizationsolutions under VMware ESXi v5.x.  Page 7 of the VSAN information guide lists the storage controllers we’ve tested, which includes the H710, H710P, and the LSI 9207-8i.
For 11G servers, we have done NO certification of that generation as a "Ready Node", meaning no Dell engineering has stood up an 11G cluster.  The VSAN compatibility list only requires certification of the storage controller, HDDs, and SSDs, so as long as all of those components are there, and the server is v5.5 U1 or higher certified (which most 11G are), VMware at least will support it.  VSAN OEM will only be available on 12G and newer.

And, since this is the Blades-Tech forum, I'd restate that DAS still isn't officially supported by VSAN (even if it works), so neither Blades nor VRTX are recommended VSAN targets at present.  The next major release of vSphere in 2015 will support JBOD, and we'll look at certifications again in that time frame.


Damon Earley
Hypervisor Product Marketing
Dell | Product Group – Systems Management

Monday, November 3, 2014

Force10: restore factory-defaults

This command can be used to remove stack information, even the sticky stuff left in NVRAM. This makes it much, much easier for our customers to convert stacked units (especially when they are remote to the equipment).

Upgrade the stack to 9.5 or 9.6 and then abort BMP when prompted  

1)  Use the following command to set the switch to factory default, including the stacking ports     
    #restore factory-defaults stack-unit all clear-all
     Proceed: yes

2) When prompted about BMP, select A:

To continue with the standard manual interactive mode, it is necessary to abort BMP.
Press A to abort BMP now.
Press C to continue with BMP.
Press L to toggle BMP syslog and console messages.
Press S to display the BMP status.
[A/C/L/S]: A

3) Check to make sure that after the reboot the reload-type will be normal-reload

Dell#
Dell#show reload-type
Reload-Type                :   bmp [Next boot : normal-reload]
auto-save                  :   disable
config-scr-download        :   enable
dhcp-timeout               :   disable
vendor-class-identifier    :
retry-count                :   0

4) reload

Details on the command are included below (they can also be found in the most recent Program Status).

In OS 9.5, we introduced a new command to reset the switch to factory default mode:

Dell# restore factory-defaults stack-unit all clear-all

It does the following:
Deletes the startup configuration
Clears the NOVRAM and boot variables, depending on the arguments passed
Enables BMP
Resets the user ports to their default native modes (i.e., non-stacking, no 40G to 4x10G breakouts, etc.)
Removes all CLI users

The command then reloads the switch in a state similar to a brand new device. Restore does not change the current OS images or the partition from which the switch boots. Likewise, restore does not delete any of the files you store on the SD card (except startup-config).

Monday, October 27, 2014

Force10: 40GB to 4 X 10GB breakout cable

From: Bean, Bob
Sent: Thursday, October 23, 2014 09:08 AM Central Standard Time
To: Cassels, George; Beck, J; Pereira, Jacobo; WW Networking Domain
Subject: RE: 40GB to 4 X 10GB breakout cable

On the FTOS side use:

intf-type cr4 autoneg
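
As a related aside (not from this thread): before the breakout will pass traffic, the Z9000 40G port also has to be split into 4x10G interfaces. On the FTOS side that is the quad port mode command followed by a save and reload; a sketch assuming port 0:

Dell(conf)#stack-unit 0 port 0 portmode quad
Dell(conf)#end
Dell#copy running-config startup-config
Dell#reload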


-----Original Message-----
From: Cassels, George
Sent: Thursday, October 23, 2014 08:28 AM Central Standard Time
To: Beck, J; Pereira, Jacobo; WW Networking Domain
Subject: RE: 40GB to 4 X 10GB breakout cable

So far, we've used the following commands...

service unsupported-transceiver
no errdisable detect cause gbic-invalid

Now it doesn't errdisable, but still goes down/down with the same error as mentioned below.

________________________________________
From: Beck, J
Sent: Thursday, October 23, 2014 9:20 AM
To: Cassels, George; Pereira, Jacobo; WW Networking Domain
Subject: RE: 40GB to 4 X 10GB breakout cable

Have you set the command on the Cisco side to support non-certified transceivers?


Excuse any misspelled words as this is sent from a smart phone.

John Beck | Dell
Office of Technology and Architecture | CTO

-----Original Message-----
From: Cassels, George
Sent: Thursday, October 23, 2014 08:17 AM Central Standard Time
To: Pereira, Jacobo; WW Networking Domain
Subject: RE: 40GB to 4 X 10GB breakout cable


Jacobo,
     It is Option A below...
________________________________________
From: Pereira, Jacobo
Sent: Thursday, October 23, 2014 9:09 AM
To: Cassels, George; WW Networking Domain
Subject: RE: 40GB to 4 X 10GB breakout cable

What type of breakout are you using?

a) QSFP+ to 4xSFP+ ?
b) QSFP+ Transceiver with MTP to 4xLC cable?

-----Original Message-----
From: Cassels, George
Sent: Thursday, October 23, 2014 07:59 AM Central Standard Time
To: WW Networking Domain
Subject: 40GB to 4 X 10GB breakout cable


I am doing some testing at a customer site with a Z9000 connected to a Cisco 10Gb switch.  When we try to use the 40Gb to 4x10Gb breakout cable, we get the following error, which disables the ports on the Cisco side.

Duplicate vendor-id and serial number

The setup is a two-port connection in a LAG using LACP.

Are there any known fixes for this issue?  Also, there is no issue if you plug in just one of the ports on the 10Gb side.

Thanks,
George

Thursday, October 23, 2014

USB serial adapter in FreeBSD

Last modified: Jun. 13, 2009

Contents
1 - Summary
2 - Kernel options
3 - Plug in USB serial adapter
4 - Connect to router


1 - Summary

This guide explains how to use a USB serial adapter in FreeBSD. It also
explains how to connect to a device like a router over a serial connection.
As an example we will connect to a Cisco router. This has been tested in
FreeBSD 7.0 and 7.1.


2 - Kernel options

You will need to have the following options in your kernel.
  device          uhci                    # UHCI PCI->USB interface
  device          ohci                    # OHCI PCI->USB interface
  device          ehci                    # EHCI PCI->USB interface (USB 2.0)
  device          usb                     # USB Bus (required)
  device          ugen                    # Generic
  device          ucom                    # USB serial support
  device          uplcom                  # USB support for Prolific PL-2303 serial adapters
If you didn't already have them in your kernel, you will need to rebuild the kernel and reboot before using the USB serial adapter.
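
If the drivers were instead built as modules (the default for a stock GENERIC-based build), they can usually be loaded without a kernel rebuild; a sketch, assuming the module files exist under /boot/kernel:

# kldload uplcom                      # pulls in ucom as a dependency
# kldstat | grep -E 'ucom|uplcom'     # confirm both modules are loaded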


3 - Plug in USB serial adapter

Log in with a normal user account. Plug the USB serial adapter into the
computer and check to make sure it was detected properly.
# dmesg | tail -n 1
ucom0: Prolific Technology Inc. USB-Serial Controller, class 0/0, rev 1.10/3.00,
addr 2 on uhub0

Find what the actual device is listed as.
# ls -l /dev/cuaU*
crw-rw----  1 uucp  dialer    0, 116 Mar  2 18:54 /dev/cuaU0
crw-rw----  1 uucp  dialer    0, 117 Mar  2 18:54 /dev/cuaU0.init
crw-rw----  1 uucp  dialer    0, 118 Mar  2 18:54 /dev/cuaU0.lock
In our example it's listed as /dev/cuaU0.


4 - Connect to router

Connect a serial cable from the USB serial adapter to the console port on
the back of the Cisco router. Type the following and press [Enter] to connect.
# sudo cu -l /dev/cuaU0 -s 9600
Connected

User Access Verification

Username: xxx
Password: xxx
Welcome to router.test.com!
router>

When you are done type exit.

router>exit

router con0 is now available
Press RETURN to get started.

To exit cu, type '~.' (press 'Shift+~', then a period).

~
[EOT]

Wednesday, October 15, 2014

Force10 Internal Firmware repository

PoE max power loss

This is how you would calculate the max power loss on a 100m Cat6 Cable:

Typical DC power resistance loss in CAT6
Typical Cat6 UTP has a 7 ohm/100 m conductor resistance, resulting in a 7 ohm/100 m loop resistance when power is carried over two pairs (the two conductors in each direction are in parallel). This is about 1/3 of the worst-case loop resistance the 802.3af standard will accept.

Voltage drop in a typical data cable:
2 × 0.175 A × 7 ohms ≈ 2.45 V

Power dissipated (Pd) in a typical data cable:
Pd per wire is (0.175 A)² × 7 ohms ≈ 0.214 W per wire

Power dissipated on the 2 wires of each of the 2 pairs is:
4 × 0.214 W ≈ 0.858 W maximum typical power dissipated per data cable

Note that the 802.3af standard tolerates 2.45 W of cable loss, but typical Cat6 UTP cable will result in only about 0.858 W of DC power loss over 100 m.

Thursday, October 9, 2014

iDRAC 8

iDRAC8 with Lifecycle Controller – summary

iDRAC8 with Lifecycle Controller delivers revolutionary systems management capabilities:
-          Quick Sync bezel provides at-the-server management through NFC-enabled Android devices using the free DELL OpenManage Mobile app.  Configure a server and collect server inventory with a simple “tap” between the server bezel and mobile device.
-          Zero-Touch Auto Configuration can deploy a server out of the box with no intervention required; reducing server configuration time by as much as 99%.  Just rack, cable, and walk away.
-          iDRAC Direct lets customers use a USB cable or a USB key to provide configuration information to the iDRAC.  No more crash cart!
-          Simplify motherboard replacement with Easy Restore: key settings, such as BIOS, NIC, and iDRAC as well as licenses are automatically restored from the front panel.
-          Agent-free, real-time RAID management and configuration: use iDRAC to create and manage virtual disks, without reboots! (See the RACADM sketch after this list.)
-          Increase datacenter security: support for UEFI Secure Boot, new System Erase capabilities for server repurposing/retirement, and new SNMPv3 trap support.
-          Built-in Tech Support Report replaces the need for downloaded support tools; health reports are built right into iDRAC and can be uploaded to Dell Support.
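
For the agent-free RAID bullet above, the same inventory is also exposed through RACADM, which makes it easy to script; a brief sketch of read-only checks (commands from the RACADM storage namespace, arguments omitted):

racadm storage get controllers
racadm storage get vdisks
racadm storage get pdisks

Configuration changes (for example racadm storage createvd) are staged and then applied through the job queue, so virtual disks can be created remotely without booting an OS agent.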

The BEST PLACE TO START for Technical Papers/blogs/videos
www.delltechcenter.com/idrac  - updated with latest iDRAC and LC information

Customer facing presentation – on SalesEdge

iDRAC8 Quick Sync with OpenManage Mobile
note – OMM 1.1 is now available on the Google Play store
also – this video is available on www.delltechcenter.com/idrac

Sketch videos on YouTube – as well as on www.delltechcenter.com/idrac
http://youtu.be/ayEZXCL6Zdw - Freedom (OpenManage Mobile and iDRAC8 Quick Sync)
http://youtu.be/deNJDD3mLkY - Staying above the flood (Big Data)
http://youtu.be/ru-3Gc-t_UM - Simplified Management at the box (iDRAC Direct)


Tech Papers to support Dell 13G Systems Management claims – as well as on www.delltechcenter.com/idrac

Support Docs on www.dell.com/support
Here you will find
·         iDRAC8 User Guide
·         iDRAC8 Release Notes
·         Lifecycle Controller User Guide
·         Racadm User Guide
·         iDRAC Service Module (iSM) Install Guide
·         SNMP and EEMI Guides

iDRAC – CMC – OME Trial/Evaluation Licenses are NOW ON SALESEDGE
·         30 day eval for iDRAC7 Enterprise
·         30 day eval for iDRAC8 Enterprise
·         30 day eval for CMC Enterprise for FX2
·         30 day eval for CMC Enterprise for VRTX
·         90 day eval for OME Server Configuration Management
·         See 411 for more details http://salesedge.dell.com/doc?id=0901bc82808a7078&ll=sr
·         Yes, you can send these to your customer



INTERNAL

Train the trainer deck – on SalesEdge

Dell internal only SourceBook - on SalesEdge


M630 NIC Options

4x1Gb – Broadcom & Intel (new for 13G)
2x10Gb – QLogic 57810, Intel X520, and Emulex (same as available on the M620)
4x10Gb – QLogic 57840 (same as on the M620)
In Q1 CY15 we add new Intel "Fortville" X710 controllers in 2x10Gb and 4x10Gb variants.

Tuesday, October 7, 2014

DELL Storage SC4020 800GB Tier 1 SSD


We are pleased to announce the availability of a new drive – 800GB Tier 1 (Mixed Use) SSD – the first of its kind in an SC system. This drive type is to be used as a Tier 1 SSD similar to the existing 200GB/400GB write intensive (WI) drives. The SCOS will identify this drive with the same WI classification and will use the same tier as the 200GB and 400GB WI SSDs.  The industry is referring to these drives types as “mixed use (MU)” drives but from a Dell Storage perspective, these are used and tiered the same way as the Write Intensive (WI) SSDs.

Dell Storage is shifting to mixed use drives for a number of reasons:
1.       As new generations of SSDs are released, WI and MU drives will offer similar write performance.
2.      As capacity grows, MU drives offer similar endurance to the smaller WI drives when comparing total petabytes written over the drive's life. 
3.      Field and customer data has helped determine that MU drives offer sufficient write endurance for even the most write intensive environments. 
4.      Mixed use drives offer higher capacity at a lower $/GB than comparable WI drives.
5.      The broader SSD market is making a shift to MU drives.

Table 1: Comparison of WI/MU/RI Drives for Dell Compellent

Market Terminology              Write Intensive (WI)       Mixed Use (MU)         Read Intensive (RI)
Dell Storage Use                Write Intensive            Write Intensive        Read Intensive
Workload                        Mainstream Applications    Any usage              Mostly Read (90/10 R/W mix)
Used with Compellent            Yes                        Yes                    Yes
Capacities                      200/400 GB                 800 GB                 1.6 TB
Endurance (Full writes/Day)*    10-30                      10-30                  <3
Endurance (written PBs)*        Up to 30 PB                Up to 30 PB            8 PB
Random Read IOPS*               Up to 20K+                 Up to 20K+             14K+
Random Write IOPS*              11K+                       8K+                    4K+
Sustained Write Bandwidth*      200-250 MB/s               150-225 MB/s           50-100 MB/s
List $/GB                       Up to $31                  $16.60                 $5.25
* These performance values are for individual drives during benchmark testing. These values do not reflect actual system performance values. Values are expected to differ once drives are managed in the system with RAID virtualization and other system functions.

It is important to note that we recently moved to a new warranty policy that protects SSDs in Compellent Systems for the full length of a system’s warranty, regardless of wear or maximum rated life.