Cisco Application Centric Infrastructure (ACI)
Cisco ACI is Cisco’s turnkey SDN solution. Currently based on the Nexus 9000 series of switches, ACI abstracts the underlying Data Center network to a level where it can almost be seen as a service provider network for attached client systems. The solution works by using a central controller to dictate policies that are enforced by the physical switches in the LAN.
At a glance, the ACI solution provides:
- A VXLAN overlay network for multi-tenancy services.
- Device-agnostic policies that can be applied to hypervisor-hosted or bare-metal devices.
- An abstracted way to organize your Data Center devices by their characteristics.
- Multiple ways to manage and program the environment.
- A pure Layer 3-based Data Center fabric.
- Zero-touch fabric discovery.
Like most SDN solutions, the network fabric is based on a ‘Clos’ or ‘leaf-and-spine’ architecture, where all devices, including the controllers, only ever connect to the leaf switches. The network fabric is entirely Layer 3-based, allowing ECMP routing between any two endpoints, and leverages VXLAN to stretch Layer 2 tenants throughout the environment.
The solution runs on Nexus 9000 Series hardware, but requires a new ACI-based OS to be installed in order to use the SDN features. It should also be noted that only a subset of the Nexus 9500 Series switches are ACI capable; a full list of the available hardware will be covered in a future post.
The core element of the solution is the controller, which acts as a policy orchestrator for all of the switches within the fabric, so let’s take a deeper look at it.
Application Policy Infrastructure Controller (APIC)
The APIC is the ACI controller. What’s interesting about the APIC servers is that they are not required for the network to operate: they simply provide a mechanism to apply and edit the policies on the network. Should all of the controllers fail during operation, the network will continue to run, with the only limitation being that no policies can be adjusted while they are out of service.
As you may have noticed, I have referenced APICs, not APIC: a minimum of three controllers is required within the solution to eliminate any split-brain failure issues. Currently the hardware appliance is based on the UCS C220 M3.
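Everything the APIC does is exposed over a REST API, which is how the policies discussed below are applied in practice. As a minimal sketch, assuming a placeholder hostname and credentials (the `aaaLogin` endpoint and class-query URL format follow the documented APIC REST conventions), the request bodies can be built like this:

```python
import json

# Placeholder controller address -- substitute your own APIC.
APIC_HOST = "https://apic.example.com"

def login_payload(user: str, password: str) -> str:
    """Build the JSON body for POST /api/aaaLogin.json."""
    return json.dumps({"aaaUser": {"attributes": {"name": user, "pwd": password}}})

def class_query_url(apic_class: str) -> str:
    """Build a class-level query URL, e.g. all tenants via the fvTenant class."""
    return f"{APIC_HOST}/api/class/{apic_class}.json"

# With a real controller you would POST the login payload (for example with
# the requests library) and reuse the returned session cookie for queries:
#   requests.post(f"{APIC_HOST}/api/aaaLogin.json", data=login_payload("admin", "secret"))
print(class_query_url("fvTenant"))
```

The same pattern applies to any object class in the fabric, which is what makes the controller scriptable from the automation tools mentioned later.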
The policy model is built from a handful of logical constructs:
- End Point: Every system attached to the fabric is classified as an ‘End Point’, a definition worth stating if only to make the next component clearer.
- End Point Group (EPG): As the name suggests, an EPG is a group of end points, or hosts, with similar characteristics, much like a group in Active Directory, where you wouldn’t apply policies to an individual user or host but to a group. What makes EPGs more interesting is that systems can be grouped on almost anything: network interface card (NIC), virtual NIC (vNIC), IP address, VLAN, VXLAN, or Domain Name System (DNS) name.
- Application Network Profile: A logical container for grouping EPGs.
- Contracts: Provide a way to define policy relationships between EPGs, where you can filter, redirect, or apply QoS as necessary. In addition, service chaining is possible should you need to insert connected services such as load balancers or firewalls.
- Context: A unique Layer 3 forwarding and application policy domain; what many would consider a VRF instance.
- Bridge Domain: Represents a Layer 2 forwarding construct within the fabric.
- Tenant: A logical container for application policies that enables an administrator to exercise domain-based access control. Tenants can be isolated from one another or can share resources.
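To see how these constructs fit together, the sketch below nests them the way the APIC’s object model does in JSON. The class names (fvTenant, fvCtx, fvBD, fvAp, fvAEPg) correspond to the tenant, context, bridge domain, application network profile, and EPG constructs above; the tenant and EPG names themselves are invented for illustration:

```python
import json

def build_tenant_tree(tenant: str) -> dict:
    """Assemble a tenant policy tree mirroring ACI's object-model nesting."""
    return {
        "fvTenant": {
            "attributes": {"name": tenant},
            "children": [
                # Context: the tenant's private Layer 3 domain (VRF)
                {"fvCtx": {"attributes": {"name": f"{tenant}-vrf"}}},
                # Bridge domain: the Layer 2 forwarding construct
                {"fvBD": {"attributes": {"name": f"{tenant}-bd"}}},
                # Application network profile grouping the EPGs
                {"fvAp": {
                    "attributes": {"name": "webapp"},
                    "children": [
                        {"fvAEPg": {"attributes": {"name": "web-servers"}}},
                        {"fvAEPg": {"attributes": {"name": "db-servers"}}},
                    ],
                }},
            ],
        }
    }

print(json.dumps(build_tenant_tree("acme"), indent=2))
```

A tree like this is what an administrator ultimately pushes to the controller; the fabric switches then enforce the resulting policy without any per-device configuration.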
ACI supports multivendor hypervisor environments, including:
- VMware (ESX/vCenter/vShield)
- Microsoft (Hyper-V/SCVMM/Azure Pack)
- Red Hat Enterprise Linux OS (KVM OVS/OpenStack)
On the orchestration side, ACI integrates with:
- Microsoft System Center
- Cisco UCS Director
ACI can also be driven from most of the common automation tools, such as Puppet, Chef, and CFEngine.
Primary Use Case
ACI is a strong fit for any large multi-tenant Data Center that is happy to lock into a single hardware vendor but wants the flexibility of multiple hypervisors, as well as connectivity for both physical and virtual machines.