Building Your Ultimate CCIE Data Center v3.1 Lab: A Complete Guide
By Brian McGahan, 4 x CCIE #8593 (R&S/SP/SC/DC), CCDE #2013::13
Passing the CCIE Lab Exam - and becoming an Expert - requires countless hours of hands-on lab practice. Originally, this also meant spending countless hours researching, buying, and building a physical practice lab environment.
With the advent of network virtualization technologies, most of the topics covered in the CCIE exams can now be learned and tested in virtual environments. One notable exception is the CCIE Data Center certification, which still requires at least some physical equipment to learn and practice its topics.
What Can You Virtualize vs. What Requires Hardware?
The good news? You can practice significant portions of the CCIE Data Center v3.1 Lab Exam Blueprint using virtualization, and hands-on labs for these topics are already included in INE’s CCIE Data Center v3.1 Learning Path.
The challenging news? Some critical technologies require physical hardware for proper preparation. In this guide, I’ll discuss which topics covered in CCIE DC v3.1 can be effectively practiced in virtual labs, and which ones still require physical hardware. This breakdown will help you allocate your time and budget effectively as you plan your study approach and lab investments.
Exam Topics That Can Be Fully Virtualized
First, let’s talk about topics that can be covered using virtualization. The first section of the CCIE Data Center v3.1 blueprint, Data Center L2/L3 Connectivity, can be completely covered using virtual Nexus 9000v switches. These topics include:
Section 1.0 - Data Center L2/L3 Connectivity
VLAN & Trunking
Port-Channels
Virtual Port-Channels (vPCs)
Spanning-Tree Protocol (PVST / RSTP / MSTP)
IPv4 / IPv6 Routing Protocols (OSPFv2 / OSPFv3 / ISIS / BGP)
Bidirectional Forwarding Detection (BFD)
First-Hop Redundancy Protocols (HSRP / VRRP)
Multicast Protocols (PIM, IGMP)
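To give a feel for what practicing these topics looks like on a Nexus 9000v, below is a minimal sketch of a vPC configuration on one member of a vPC pair. The domain ID, interface numbers, and keepalive addresses are arbitrary lab values chosen for illustration (the second switch would mirror this with the keepalive source and destination swapped), and exact syntax can vary slightly between NX-OS releases.

! Minimal vPC sketch for the first switch of a pair (placeholder values)
feature lacp
feature vpc
vpc domain 10
  peer-keepalive destination 192.168.0.2 source 192.168.0.1 vrf management
! Peer-link between the two vPC peers
interface Ethernet1/1-2
  channel-group 1 mode active
interface port-channel1
  switchport mode trunk
  vpc peer-link
! Member port-channel toward a dual-homed downstream switch or server
interface Ethernet1/3
  channel-group 20 mode active
interface port-channel20
  switchport mode trunk
  vpc 20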
INE’s CCIE Data Center v3.1 Learning Path includes hands-on labs for all of these topics in our cloud-based virtual environment, like the one shown below. Labs on these topics can be found in the following courses:
Next, we jump to section 3 of the blueprint, Data Center Fabric Connectivity. With the exception of the ACI topics, everything in this section can be fully practiced using virtualization. These topics include:
Section 3.0 - Data Center Fabric Connectivity
VXLAN EVPN Overlay Fabrics
VXLAN EVPN External Connectivity with VRF Lite, OSPF, & BGP
VXLAN EVPN Multi-Site
VXLAN EVPN Transit Routing
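To illustrate the scope of this section, here is a minimal sketch of a VXLAN EVPN leaf configuration on a Nexus 9000v, mapping one VLAN to an L2 VNI with a BGP EVPN control plane and ingress replication. The VLAN/VNI numbers, AS number, and route-reflector address are arbitrary values for this sketch; a multicast underlay could be used instead of ingress replication, and the underlay IGP and loopback addressing are omitted for brevity.

! Minimal VXLAN EVPN leaf sketch (placeholder VLAN/VNI/AS values)
nv overlay evpn
feature bgp
feature nv overlay
feature vn-segment-vlan-based
vlan 10
  vn-segment 10010
interface nve1
  no shutdown
  host-reachability protocol bgp
  source-interface loopback0
  member vni 10010
    ingress-replication protocol bgp
router bgp 65001
  neighbor 10.255.255.1 remote-as 65001
    update-source loopback0
    address-family l2vpn evpn
      send-community extended
evpn
  vni 10010 l2
    rd auto
    route-target import auto
    route-target export auto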
These topics are split between two courses in INE’s CCIE Data Center v3.1 Learning Path, and both include hands-on labs, similar to the Nexus Dashboard Fabric Controller (NDFC) topology seen below:
Next, we jump to section 6 of the blueprint, Data Center Security and Network Services. Most security features - with the exception of ACI contracts - along with various network services, can be practiced virtually using the Nexus 9000v switch. These topics include:
Section 6.0 - Data Center Security and Network Services
ACLs
RBAC
AAA / RADIUS / TACACS+
First Hop Security
Port Security
Private VLANs
Policy-Based Routing
Network services insertion and redirection
Other network services (SPAN / ERSPAN / SNMP / DHCP / Netflow, etc.)
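As one small example from this section, here is a sketch of an isolated Private VLAN on NX-OS, with one host port and one promiscuous port. The VLAN and interface numbers are arbitrary values used only for illustration.

! Private VLAN sketch: primary VLAN 100 with isolated secondary VLAN 101
feature private-vlan
vlan 101
  private-vlan isolated
vlan 100
  private-vlan primary
  private-vlan association 101
! Host port placed in the isolated VLAN
interface Ethernet1/10
  switchport mode private-vlan host
  switchport private-vlan host-association 100 101
! Promiscuous port (e.g. toward the default gateway)
interface Ethernet1/11
  switchport mode private-vlan promiscuous
  switchport private-vlan mapping 100 101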
These topics are split between three courses in INE’s CCIE Data Center v3.1 Learning Path as follows:
Exam Topics Requiring Physical Hardware
Three major topic domains in CCIE DC v3.1 still require hardware: Application Centric Infrastructure (ACI), the Cisco Unified Computing System (UCS), and Storage Area Networking (SAN Switching).
Application Centric Infrastructure (ACI) - Partial Virtual Support
In section 2 of the blueprint, Data Center Fabric Infrastructure, Application Centric Infrastructure (ACI) presents the most significant virtual lab limitations. Although Cisco offers an ACI Simulator that you can download as a virtual machine from https://software.cisco.com - or use as a free hosted version at https://devnetsandbox.cisco.com - without real Nexus 9000 (ACI) switches there is no way to test data-plane connectivity (e.g. ping) or perform proper verifications using the NX-OS CLI (e.g. show endpoint).
ACI Simulator Capabilities:
APIC GUI configuration practice
REST API and Python automation development
Policy creation and management concepts
ACI Simulator Critical Limitations:
No data-plane connectivity testing
No NX-OS CLI access on ACI switches
Cannot validate network policies with actual traffic
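To make that limitation concrete, these are the kinds of checks you can only run against a physical fabric. The tenant/VRF name and address below are hypothetical placeholders, and exact command options vary by ACI release.

! On a physical ACI leaf (not possible in the ACI Simulator)
show endpoint vrf Prod:VRF1
! Data-plane reachability test sourced from the leaf itself
iping -V Prod:VRF1 10.1.1.10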
In INE’s CCIE Data Center v3.1 Learning Path, these topics are covered in Implementing Cisco Application Centric Infrastructure (ACI) v5 using physical hardware for demonstrations.
Unified Computing System (UCS) - Limited Virtual Support
In section 4 of the blueprint, Data Center Compute, the Unified Computing System (UCS) presents another challenge. As with ACI, Cisco provides a free UCS Platform Emulator (UCSPE) that lets you learn how to configure the system using the UCS Manager (UCSM) GUI and practice automating it with APIs, Python, and other tools, but it lacks critical validation capabilities. Without real UCS hardware, there is no way to test LAN & SAN connectivity, or to perform proper verifications using the NX-OS CLI of the UCS Fabric Interconnects (UCS FIs).
UCS Platform Emulator Capabilities:
UCS Manager GUI navigation and configuration
Python automation and API scripting practice
Policy and template concept understanding
Critical Limitations:
Cannot test LAN and SAN connectivity functionality
No NX-OS CLI access on UCS Fabric Interconnects for verification
Cannot validate physical server integration scenarios
No troubleshooting of actual connectivity issues
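For context, these are the sorts of verifications the emulator cannot provide because they require the NX-OS shell of a real Fabric Interconnect. The commands below are a sketch; availability and output depend on the FI model and UCSM release.

! From the UCS Manager CLI, attach to the NX-OS shell of Fabric Interconnect A
connect nxos a
! Verify uplink / server port state and the FC logins proxied in NPV (end-host) mode
show interface brief
show port-channel summary
show npv flogi-table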
In INE’s CCIE Data Center v3.1 Learning Path, these topics are covered in Implementing Unified Computing System (UCS) using physical hardware for demonstrations.
Storage Area Networking (SAN Switching) - No Virtual Support
Section 5 of the blueprint, Data Center Storage Protocols and Features, cannot be practiced in a virtual environment at all; it requires physical hardware for every component. You'll need real equipment to practice:
Fibre Channel (FC) and Fibre Channel over Ethernet (FCoE)
VSANs, Trunking, and SAN Port-Channels
Zoning
NPV/NPIV
QoS with DCQCN congestion control (PFC, ECN)
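For a sense of what this hardware buys you, here is a minimal zoning sketch on a Nexus switch with Unified Ports, pairing one initiator with one target in a dedicated VSAN. The VSAN number, interface, zone names, and WWPNs are placeholder values, and this assumes the unified port has already been converted to FC mode.

! Minimal VSAN and zoning sketch (placeholder WWPNs and names)
vsan database
  vsan 10
  vsan 10 interface fc1/1
zone name INIT1_TGT1 vsan 10
  member pwwn 20:00:00:25:b5:00:00:01
  member pwwn 21:00:00:24:ff:00:00:01
zoneset name ZS_VSAN10 vsan 10
  member INIT1_TGT1
zoneset activate name ZS_VSAN10 vsan 10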
In INE’s CCIE Data Center v3.1 Learning Path, these topics are covered in Storage Area Networking (SAN) Switching on NX-OS, which uses physical hardware for demonstrations.
Recommended Hardware Specification
Now that we know which topics can be covered virtually and which require hardware, let’s outline how to combine virtual devices and hardware platforms to build the most cost-effective CCIE Data Center v3.1 practice lab topology.
Nexus Switching
Nexus switches can be either virtual machines running the Nexus 9000v or physical devices. The Nexus Dashboard should run as a virtual machine, hosting both the Nexus Dashboard Fabric Controller (NDFC) for NX-OS-based automation and the Nexus Dashboard Orchestrator (NDO) for Multi-Site ACI automation. Specifically, devices for this section should include:
Virtual Nexus 9000v switches or physical Nexus 9000 switches
NX-OS 9000v in CML / EVE-NG / GNS3
Physical Nexus 9000 switches in NX-OS mode (e.g., Nexus 93180YC-EX)
Nexus Dashboard Virtual Machine with NDFC (version 3.2.1i or later)
Application Centric Infrastructure (ACI)
ACI requires hardware for the APIC controllers, Leaf switches, and Spine switches. The Nexus Dashboard can be the same VM from above, running the Nexus Dashboard Orchestrator (NDO) for ACI Multi-Site. Specifically, hardware for this section should include:
Minimum 2 APIC controllers (for Multi-Site support)
Minimum 2 ACI Spines (Gen 3 Cloud-Scale or later, e.g., Nexus 9332C)
Minimum 4 ACI Leafs (Gen 2 Cloud-Scale or later, e.g., Nexus 93180YC-EX)
Minimum 2 UCS blade or rack servers to attach to ACI Leafs
Nexus Dashboard Virtual Machine with NDO (version 4.2 or later)
Unified Computing System (UCS)
UCS requires hardware for the Fabric Interconnects (FIs), along with Blade & Rack Servers. Specifically, hardware for this section should include:
2 UCS Fabric Interconnects (e.g., UCS-FI-6248UP or later)
UCS Blade Chassis (e.g., UCSB-5108-AC2 or later)
Minimum 2 UCS Blade Servers (e.g., UCS B200 M4 or later)
Minimum 2 UCS Rack Servers (e.g., UCS C220 M4 or later)
Storage Area Networking (SAN Switching)
SAN Switching requires physical Nexus Switches, servers to act as storage initiators (e.g. the UCS blade & rack servers), and a storage array to act as the target. Specifically, hardware for this section should include:
2 Nexus 5500/6000 with Unified Ports (e.g., Nexus 5672UP)
Server with 2 or 4 port Native Fibre Channel HBA (e.g. UCS C220 M4 with Emulex LPe12002-M6, QLogic QLE2462/QLE2564, etc.)
Fibre Channel Target Software (for example, https://www.esos-project.com)
Support Infrastructure
Last, but not least, we need to physically wire all the devices together and be able to manage the devices through both the serial console port and the dedicated MGMT0 port. To support this, you must also include:
Access/Terminal Server (e.g. any modular Cisco router with HWIC-16A & CAB-HD8-ASYNC octal cables)
Management Switch (any managed 1Gbps switch)
Proper cabling (40G Twinax, 10G Twinax, Fibre Channel SFPs, Multi-Mode Fiber (MMF), Ethernet Patch Cables, Console Cables)
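As a sketch of how the access server ties in, the snippet below shows the classic reverse-telnet setup on an IOS router with async lines. The line range, loopback address, and TCP ports (2000 + absolute line number) depend on your router model and where the HWIC-16A sits, so treat these values as placeholders.

! Reverse-telnet sketch on the terminal server (placeholder values)
interface Loopback0
 ip address 10.0.0.254 255.255.255.255
line 0/0/0 0/0/15
 transport input telnet
 no exec
! Map friendly names to console lines (TCP port = 2000 + absolute line number)
ip host N9K-1 2002 10.0.0.254
ip host N9K-2 2003 10.0.0.254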
Example Wiring Diagram
The diagram below is an example CCIE Data Center v3.1 Lab Exam Topology with physical devices used for Multi-Site ACI and UCS LAN & SAN Connectivity. All devices are wired to a Management Switch (MGMT-SW), which then connects to the “Cloud” via an Ethernet Trunk.
The “Cloud” is a physical server that runs the Nexus 9000v switches in a virtual topology, along with the Nexus Dashboard. The server can run any hypervisor or network emulation platform of your choice, such as VMware ESXi, CML, EVE-NG, or GNS3. The server should connect to the MGMT-SW via an Ethernet Trunk, which allows you to insert the Nexus 9000v switches anywhere in the topology by adding additional 1/10G links from the ACI Leafs or Nexus 5Ks to the MGMT-SW and using VLANs to separate the traffic.
Finally, the Nexus Dashboard should run both the Nexus Dashboard Fabric Controller (NDFC), to automate CLI-based NX-OS switches, and the Nexus Dashboard Orchestrator (NDO), to automate ACI Multi-Site. For this automation to work, the Nexus Dashboard's management interface must be able to ping the MGMT0 interfaces of the NX-OS switches, as well as the APIC controllers' MGMT0 interfaces for ACI. To simplify these requirements, the Nexus Dashboard VM should typically be placed in the same VLAN/subnet as the MGMT0 interfaces of the Nexus switches and APIC controllers.
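A minimal sketch of that out-of-band management setup is shown below, assuming an arbitrary management VLAN and addressing. Each NX-OS switch points its MGMT0 interface and management VRF default route at the subnet where the Nexus Dashboard VM lives, and the MGMT-SW port toward the virtualization server trunks that VLAN plus any VLANs used to stitch Nexus 9000v links into the physical topology (trunk syntax will vary with whatever managed switch you use).

! On each physical NX-OS switch: out-of-band management addressing (placeholder values)
interface mgmt0
  vrf member management
  ip address 192.168.10.11/24
vrf context management
  ip route 0.0.0.0/0 192.168.10.1
! On the MGMT-SW port facing the virtualization server: trunk the management VLAN
! plus the VLANs used to extend Nexus 9000v links into the physical topology
interface Ethernet1/48
  switchport mode trunk
  switchport trunk allowed vlan 10,100-110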
Final Thoughts
Preparing for the CCIE Data Center v3.1 lab exam requires a significant investment in both time and resources. While virtualization can help you master many of these topics, there's no substitute for hands-on experience with physical hardware for ACI, UCS, and SAN Switching. Remember that your goal isn't just to pass the exam but to develop genuine expertise that will serve you throughout your career.
Ready to build your lab? Have questions? Leave a comment below!