Simplify and Scale Virtual Networking
Get highly secure, multitenant services by adding virtualization intelligence to your data center network with the Cisco Nexus 1000V Switch for VMware vSphere. This switch:
- Extends the network edge to the hypervisor and virtual machines
- Is built to scale for cloud networks
Get the Cisco Nexus 1000V Essential Edition at no cost.
Statement of Direction for Cisco Nexus 1000V Platform
The Cisco Nexus 1000V will continue to be supported on the VMware hypervisor beyond VMware vSphere Release 6.0 as well as on other major hypervisors. Customers should upgrade their Cisco Nexus 1000V software to the latest version as they upgrade their VMware or other hypervisor environments.
Features and Capabilities
Important differentiators for the Cisco Nexus 1000V for VMware vSphere include:
- Extensive virtual network services built on Cisco advanced service insertion and routing technology
- Support for vCloud Director and vSphere hypervisor
- Feature and management consistency for easy integration with the physical infrastructure
- Exceptional policy and control features for comprehensive networking functionality
- Policy management and control by the networking team instead of the server virtualization team (separation of duties)
Use Virtual Networking Services
The Cisco Nexus 1000V Switch optimizes the use of Layer 4-7 virtual networking services in virtual machine and cloud environments through the Cisco vPath architecture.
Cisco vPath 2.0 supports service chaining so you can use multiple virtual network services as part of a single traffic flow. For example, you can simply specify the network policy, and vPath 2.0 can direct traffic:
- First, through the Cisco ASA1000V Cloud Firewall for tenant edge security
- Then, through the Cisco Virtual Security Gateway for Nexus 1000V Switch for a zoning firewall
In addition, Cisco vPath works on VXLAN to support movement between servers in different Layer 2 domains. Together, these features promote highly secure policy, application, and service delivery in the cloud.
Expand these offerings by building highly secure hybrid clouds with Cisco InterCloud.
Cisco Nexus 1000V Switch for VMware vSphere Editions
Cisco Nexus 1000V Switch for VMware vSphere is available in two editions. The Essential Edition includes all the basic switching features. The Advanced Edition adds advanced security capabilities and the Cisco Virtual Security Gateway for Nexus 1000V to the base functionality of the Essential Edition. Cisco TAC support is optional for the Essential Edition, but highly recommended.
Specifications at a Glance
Features | Essential Edition | Advanced Edition |
---|---|---|
Layer 2 switching: VLANs, private VLANs, VXLAN, loop prevention, multicast, virtual PortChannels, LACP, ACLs | Yes | Yes |
Network management: SPAN, ERSPAN, NetFlow v9, vTracker, vCenter Server plug-in | Yes | Yes |
Enhanced QoS features | Yes | Yes |
Cisco vPath | Yes | Yes |
DHCP Snooping | No | Yes |
IP Source Guard | No | Yes |
Dynamic ARP Inspection | No | Yes |
Cisco TrustSec SGA Support | No | Yes |
Cisco Virtual Security Gateway | Supported¹ | Included |
Other Virtual Services (Cisco ASA1000V, Cisco vWAAS, etc.) | Available separately | Available separately |
¹ Shipping Virtual Security Gateway (VSG) versions are supported on Essential and Advanced editions. VSG is no longer available as a standalone product.
Over the last 12 months I’ve been doing a lot of work that has involved the Cisco Nexus 1000v, and during this time I came to realise that there wasn’t a huge amount of recent information available online about it.
Because of this I’m going to put together a short post covering what the 1000v is, and a few points around its deployment.
What is the Nexus 1000v?
The blurb on the VMware website defines the 1000v as “a software switch implementation that provides an extensible architectural platform for virtual machines and cloud networking”, while the Cisco website says, “This switch: Extends the network edge to the hypervisor and virtual machines, is built to scale for cloud networks, forms the foundation of virtual network overlays for the Cisco Open Network Environment and Software Defined Networking (SDN)”.
So that’s all fine and good, but what does this mean for us? Well, the 1000v is a software-only switch that sits inside the ESXi hypervisor (or KVM and Hyper-V, if they’re your poison) and leverages VMware’s built-in distributed vSwitch functionality.
It utilizes the same NX-OS codebase and CLI as any of the hardware Nexus switches, so if you’re familiar with the Nexus 5k, you can manage a 1000v easily enough, too.
This offers some compelling features over the normal dvSwitch, such as LACP link aggregation, QoS, traffic shaping, vPath, and Enhanced VXLAN, and it also allows an administrative boundary between servers and networking, right down to the VM level. Obviously all the bonuses of a standard dvSwitch around centralised management still apply.
Components of the 1000v
The 1000v is made up of two main components: the VSMs (Virtual Supervisor Modules) and the VEMs (Virtual Ethernet Modules). The VSMs equate to the supervisor modules in a physical modular switch, and the VEMs are like the I/O line cards that provide access to the network.
Virtual Supervisor Modules
The VSM runs as a VM within the cluster, with a second VSM running as standby on another ESXi host. Good practice would be to set a DRS anti-affinity rule to prevent both VSMs from living on the same host, so that a single host failure can’t take out the pair.
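Once both VSMs are deployed, a quick sanity check from the active VSM confirms the HA pairing (a minimal example; the exact wording of the output varies between releases):
- 1000v# show system redundancy status
The active VSM should report an internal state along the lines of “Active with HA standby”, and the other supervisor should show as “HA standby”.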
Virtual Ethernet Modules
The VEM is the software module that embeds in the ESXi kernel and ties the server into the 1000v.
1000v Deployment
There are two ways of deploying the 1000v: Layer 2 mode (which is deprecated) and Layer 3 mode, which allows the VSMs to sit on a different subnet to the ESXi hosts.
Deploying the 1000v is relatively straightforward, and this post is not designed to be a step-by-step guide to installing it (Cisco’s documentation can be found here). The later versions of the 1000v have a GUI installer which makes initial deployment simple.
Once the VSM pair has been deployed you need to:
Create an L3 SVS domain (the SVS config defines how the VEMs connect to the VSMs) and set your L3 control interface:
- 1000v(config)# svs-domain
- 1000v(config-svs-domain)# domain id 10
- 1000v(config-svs-domain)# no packet vlan
- 1000v(config-svs-domain)# no control vlan
- 1000v(config-svs-domain)# svs mode L3 interface mgmt0
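At this point it’s worth confirming the domain settings took (a quick check; the output layout varies by release):
- 1000v# show svs domain
You’re looking for the domain ID you configured, an L3 control mode, and mgmt0 listed as the control interface.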
Create an SVS connection to link the VSM with vCenter:
- 1000v(config)# svs connection vcenter
- 1000v(config-svs-conn)# protocol vmware-vim
- 1000v(config-svs-conn)# remote ip address 192.168.1.50
- 1000v(config-svs-conn)# vmware dvs datacenter-name London
- 1000v(config-svs-conn)# connect
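As another sanity check (again, output varies slightly between releases), verify that the VSM has actually connected to vCenter:
- 1000v# show svs connections
You want an operational status of “Connected” and, once the DVS has been pushed, a sync status of “Complete”. The datacenter name shown should match what you configured above.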
Create your Ethernet port-profiles (physical uplinks) and vethernet port-profiles (VM-facing), and add L3 capability to your ESXi management vmk port-profile:
- 1000v(config)# port-profile type vethernet esxi-mgmt
- 1000v(config-port-prof)# capability l3control
- Warning: Port-profile 'esxi-mgmt' is configured with 'capability l3control'. Also configure the corresponding access vlan as a system vlan in:
- * Port-profile 'esxi-mgmt'.
- * Uplink port-profiles that are configured to carry the vlan
- 1000v(config-port-prof)# vmware port-group
- 1000v(config-port-prof)# switchport mode access
- 1000v(config-port-prof)# switchport access vlan 5
- 1000v(config-port-prof)# no shut
- 1000v(config-port-prof)# system vlan 5
- 1000v(config-port-prof)# state enabled
- 1000v(config-port-prof)# port-profile type ethernet InfrUplink_DvS
- 1000v(config-port-prof)# vmware port-group
- 1000v(config-port-prof)# switchport mode trunk
- 1000v(config-port-prof)# switchport trunk allowed vlan 5
- 1000v(config-port-prof)# channel-group auto
- 1000v(config-port-prof)# no shut
- 1000v(config-port-prof)# system vlan 5
- 1000v(config-port-prof)# state enabled
Note the point above where you have to put a “system vlan” on your l3control interface. This ensures that traffic on that VLAN always remains in the forwarding state, even before a VEM has been programmed by the VSM, which is especially important for the control interface.
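To double-check that the l3control capability and the system VLAN actually made it into the profile, inspect it on the VSM (a quick verification step rather than anything from Cisco’s guide):
- 1000v# show port-profile name esxi-mgmt
Confirm that “capability l3control” is set and that VLAN 5 appears under the system VLANs.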
Deploy the VEMs to the ESXi hosts (this can be done from the GUI)
Once the VEMs are on the ESXi hosts, you need to migrate the ESXi management vmk into the 1000v. The VEMs and their ESXi hosts will then show up in the 1000v when you run the ‘show module’ command.
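As a quick check (the module numbers here are illustrative and the layout varies by release):
- 1000v# show module
Expect the two VSMs as modules 1 and 2 (one “active”, one “ha-standby”), with each VEM appearing from module 3 onwards against its ESXi host’s name. If a host is missing, the vmk migration or the l3control/system vlan config above is the usual suspect.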
At this point we have communication between the VSM and the VEMs within ESXi, and we can start configuring port-profiles for our non-management traffic.
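As an example, a basic vethernet port-profile for VM traffic might look like the below (the profile name VM-Data and VLAN 100 are placeholders, substitute your own values):
- 1000v(config)# port-profile type vethernet VM-Data
- 1000v(config-port-prof)# vmware port-group
- 1000v(config-port-prof)# switchport mode access
- 1000v(config-port-prof)# switchport access vlan 100
- 1000v(config-port-prof)# no shut
- 1000v(config-port-prof)# state enabled
Don’t forget to add VLAN 100 to the allowed list on your uplink port-profile (switchport trunk allowed vlan add 100), otherwise the VM traffic has no way out of the host.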
Simple, right?