
The Cisco Nexus series switches are modular and fixed port network switches designed for the data center. Cisco Systems introduced the Nexus Series of switches on January 28, 2008. The first chassis in the Nexus 7000 family is a 10-slot chassis with two supervisor engine slots and eight I/O module slots at the front, as well as five crossbar switch fabric modules at the rear. Besides the Nexus 7000, there are also other models in the Nexus range.

All switches in the Nexus range run the modular NX-OS firmware/operating system. NX-OS offers high-availability features not found in the well-known Cisco IOS. The platform is optimized for high-density 10 Gigabit Ethernet.



The Nexus switching range[edit]

The Nexus 7000 is the high-end model in the Nexus range of datacenter switches. Other models are:[1]

  • Nexus 1000v virtual switch
  • Nexus 4001 IBM Blade Center switch
  • Nexus 7000 series modular datacenter switches

Nexus 1000v[edit]

The 1000v is a virtual switch for use in virtual environments, including both VMware vSphere and Microsoft Hyper-V.[2] It is not a physical box but a software application that interacts with the hypervisor, virtualizing the networking environment: the system can be configured as if all virtual servers were connected to a physical switch, with the capabilities a physical switch offers, such as multiple VLANs per virtual interface, layer-3 options and security features. Per infrastructure/cluster, one VM runs the Nexus 1000v as a virtual appliance: the Virtual Supervisor Module (VSM). Each node then runs a 'client', the Virtual Ethernet Module (VEM), a vSwitch which replaces the standard vSwitch.

The VEM uses the vDS API, which VMware and Cisco developed together.[3] In May 2017 VMware announced that vDS API support would be removed from vSphere 6.5 Update 2 and later, so the Nexus 1000v can no longer be used on those releases. (VMware KB: https://kb.vmware.com/s/article/2149722; https://www.theregister.co.uk/2017/03/31/vmware_to_end_support_for_thirdparty_virtual_switches/)

Besides offering the NX-OS interface to configure, manage and monitor the virtual switch, it also supports LACP link aggregation, whereas the standard virtual switches only support static LAGs.[4]

The configuration of VEMs is done via the VSM NX-OS Command-line interface.
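As a hedged illustration of that VSM-side workflow (the VLAN ID, profile name and port-group name below are invented for this sketch, not taken from any real deployment), a Nexus 1000v port-profile that VEMs inherit might look roughly like this on the NX-OS CLI:

```
! Sketch only: VLAN ID and names are illustrative placeholders
vlan 100
  name vm-data
port-profile type vethernet VM-DATA
  switchport mode access
  switchport access vlan 100
  no shutdown
  state enabled
  vmware port-group
```

Once a profile is enabled, it appears in vCenter as a port-group that virtual machines can be attached to, so the network team defines policy centrally on the VSM rather than per host.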

Nexus 1010 / 1010x / 1100x[edit]

The Virtual Supervisor Module (VSM) would normally run as a virtual appliance in an ESX/ESXi cluster, but it is possible to run the VSM on dedicated hardware: the Nexus 1010, 1010x and 1100. For organisations with a very strict boundary between network management and server management, this lets network administrators avoid the dependency on the VSM running as a virtual machine within the ESX cluster. The capabilities and limitations of a VSM running on a Nexus 1010 are the same as those of a VSM running as a virtual appliance under ESX. A Nexus 1100 can host up to 14 VSMs and also allows additional services, such as a Network Analysis Module, to be run.

Nexus 2000 series[edit]

The Nexus 2000 series are fabric extenders (FEX): 'top of rack' 1U systems used in combination with higher-end Nexus switches such as the 5000, 6000 or 7000 series. The 2000 series is not a stand-alone switch but must be connected to a parent switch; it should be seen as a 'module' or 'remote line card' installed in a 19' rack instead of in the main switch enclosure. The interconnection between this remote line card and the 5000 or 7000 parent switch uses either proprietary interfaces (CX-1 for copper, or the short- and long-range Cisco Fabric Extender Transceiver (FET) interfaces) or standard interfaces (Cisco SFP+ SR and LR fibre interface modules or SFP+ Twinax cables). In combination with the 5000/6000/7000 parent switch this creates a so-called Distributed Modular System.
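Because the FEX has no local control plane, all configuration happens on the parent switch. A minimal sketch of how a parent Nexus switch might be told to adopt a 2000-series FEX (the interface and FEX numbers here are assumed for illustration):

```
! Sketch: associating a FEX with a parent Nexus switch (numbers illustrative)
feature fex
interface Ethernet1/1
  switchport mode fex-fabric
  fex associate 100
```

After association, the FEX's host-facing ports show up on the parent as interfaces such as Ethernet100/1/1, which is what makes the FEX behave like a remote line card.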

The 2000 series comprises several models. Three models offer 24 or 48 copper interfaces running at Fast Ethernet or 1 Gbit/s and up to four 10 Gigabit uplink interfaces on copper or fibre. The Nexus 2232PP offers thirty-two 1/10 Gbit/s ethernet and FCoE interfaces.[5] The Nexus 2248PQ offers forty-eight 1/10 Gbit/s ethernet and FCoE interfaces.

For the HP BladeSystem C3000 and C7000 server blade chassis, the Cisco Nexus B22HP fabric extender exists. (October 2011)[6]

The Fujitsu PRIMERGY BX400 and BX900 blade server chassis uses the B22F fabric extender. (July 2012)[7]
For the Dell M1000e blade server chassis, the Cisco Nexus B22Dell fabric extender was released in January 2013, 2.5 years after the initially planned release. Due to a disagreement between Dell and Cisco, Cisco had stopped development of the FEX for the M1000e in 2010.[8]

The Nexus B22 FEX offers 16 internal 10GBASE-KR 10 Gbit/s links, one to each blade server, and up to 8 SFP+ ports for uplink to a Nexus 5010, 5548 or 5596 switch. The maximum distance between the FEX and the parent switch is 3 kilometres when used for TCP/IP traffic only, and 300 metres when also carrying FCoE traffic.[9]

Nexus 3000 series[edit]

The model 3064 is currently the only Nexus switch in the 3000 series utilizing merchant silicon. This 1U rack switch with 1, 10 and 40 Gbit/s ethernet interfaces is designed for use in colocation centers. It offers layer-2 and layer-3 capabilities at wire speed on all 64 interfaces running at 10 Gbit/s. Supported layer-3 routing protocols include static routes, RIP v2, OSPF and BGP-4. The switch fabric can switch 2.28 Tbit/s and forward up to 950 million packets per second. The switch can build a route table with up to 16,000 prefixes, 8,000 host entries and 4,000 multicast routes, and up to 4,096 VLANs are supported. On top of that, a high number of ingress or egress ACLs can be configured.

The 3064 has a single fan tray, two replaceable power supplies and two separate out-of-band management interfaces. To connect the 3064 to the rest of the network, proprietary EtherChannel or industry-standard LACP (IEEE 802.3ad) link aggregation is supported, with up to 32 port-channels of up to 16 physical interfaces each.
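A hedged sketch of such an LACP port-channel on the NX-OS CLI (the interface numbers and port-channel ID are illustrative, not from any real configuration):

```
! Sketch: an LACP (IEEE 802.3ad) port-channel on NX-OS; numbers illustrative
feature lacp
interface Ethernet1/1-2
  channel-group 10 mode active
interface port-channel 10
  switchport mode trunk
```

Mode 'active' makes the switch initiate LACP negotiation; 'on' would instead create a static EtherChannel without LACP.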

The switch holds 48 SFP+ slots[10] for 1 or 10 Gbit/s ethernet interfaces and four QSFP+ slots,[11] each handling 4 x 10 Gbit/s interfaces and allowing 40 Gbit/s over a single fibre pair.[12]

Nexus 4000 series[edit]

The Nexus 4000 series consists of only one model, the 4001: a blade-switch module for IBM BladeCenter with all 10 Gbit Fibre Channel over Ethernet (FCoE) interfaces. This blade-switch has 14 server-facing downlinks running at 1 or 10 Gbit/s and six uplinks using 10 Gbit/s SFP+ modules. For out-of-band management three ethernet interfaces are available: one external 10/100/1000 bit/s copper interface, one internal management interface for the AMM or Advanced Management Module, and one in-band interface using the VLAN interface option. The blade-switch also has a serial console port for direct access to the CLI.[13]
At present only switches for the IBM blade systems are available. When the Nexus 4000 series was announced in 2009, it was expected that there would be Nexus 4001 models for IBM and Dell (but not HP),[14] but in February 2010 it became clear that Cisco had canceled the Nexus 4001d for the Dell M1000e.[8]
For the HP blade system Cisco released a Fabric Extender, comparable to the Nexus 2000 top-of-rack devices but in a blade form factor.[6] The FEX developed for the Dell blade system, due to be released in the summer of 2010, was dropped at the same time as the Nexus 4001d in February of that year.[8]

Nexus 5000 series[edit]

The Nexus 5000 series is a range of five 1U or 2U rack switches offering 20 to 96 interfaces running at 1 or 10 Gbit/s ethernet, plus 10 Gbit/s FCoE interfaces. They can be used with the above-mentioned Nexus 2000 series fabric extenders. The 5000 series offers carrier-grade layer-2 and layer-3 switching as well as the mentioned FCoE capabilities.[15]

The Nexus 5000 has 5 models:

Nexus 5010[edit]

  • A one-rack-unit-high switch with 20 fixed 10 Gbit/s interfaces supporting ethernet, FCoE and DCB, and one expansion slot offering one of the following modules:
    • 8 ports with 1, 2 or 4 Gbit/s Fibre Channel
    • 6 ports with 1, 2, 4 or 8 Gbit/s Fibre Channel
    • 4 ports with 10 Gbit/s FCoE or DCB and 4 ports offering 1, 2 or 4 Gbit/s Fibre Channel
    • 6 ports offering 10 Gbit/s FCoE or DCB
  • The Nexus 5010 is End of Life: http://www.cisco.com/c/en/us/products/collateral/switches/nexus-5000-series-switches/eol_c51-709037.html

Nexus 5020[edit]

A two-rack-unit-high switch with 40 fixed 10 Gbit/s interfaces supporting ethernet, FCoE and DCB, and two expansion slots, each offering one of the following modules:

  • 8 ports with 1, 2 or 4 Gbit/s Fibre Channel
  • 6 ports with 1, 2, 4 or 8 Gbit/s Fibre Channel
  • 4 ports with 10 Gbit/s FCoE or DCB and 4 ports offering 1, 2 or 4 Gbit/s Fibre Channel
  • 6 ports offering 10 Gbit/s FCoE or DCB
  • Nexus 5020 is End of Life - http://www.cisco.com/c/en/us/products/collateral/switches/nexus-5000-series-switches/eol_c51-709037.html

Nexus 5548[edit]

The 5548 comes in two sub-models: the 5548P and 5548UP

  • Nexus 5548P switch: 1U chassis with 32 fixed non-unified ports and up to 16 additional ports using the expansion slot. The 5548 chassis can be the main fabric for the Nexus 2000 series fabric extenders. The interfaces in the expansion slot are:
  • 16 unified ports offering 1/10 Gbit/s SFP+ slots for ethernet and FCoE, or 1, 2, 4 or 8 Gbit/s native Fibre Channel
  • 16 SFP+ ports with 10 Gbit/s ethernet and FCoE
  • 8 SFP+ ports with 10 Gbit/s ethernet and FCoE plus 8 ports with 1, 2, 4 or 8 Gbit/s native Fibre Channel.[16]
  • Nexus 5548UP: also a 1U chassis with 32 fixed unified ports and up to 16 additional ports using the expansion slot. The difference between the 5548P and 5548UP is that the 5548P's fixed (on-board) SFP+ slots are non-unified, whereas the same SFP+ slots on the UP chassis are unified.[16]

Nexus 5596[edit]

The 5596 comes in two sub-models the UP and the T:

  • Nexus 5596UP: a two-RU chassis with 48 fixed unified ports and up to 48 additional interfaces in three expansion slots. The capabilities of the 5596UP are the same as those of the 5548UP, but this switch is two RU high and supports three expansion slots.[16]
  • Nexus 5596T: a two-RU chassis with 48 fixed ports (32 x 10GBASE-T + 16 SFP+) and up to 48 additional interfaces in three expansion slots. The 5596T supports 10GBASE-T ports on the fixed as well as the expansion slots, along with any other generic expansion modules supported on the 5596UP.[16]

Next to the expansion modules, all three Nexus 55xx switches offer the capability to insert a 160 Gbit/s layer-3 routing engine.

Nexus 6000 series[edit]

The Cisco Nexus 6000 range contains two models, the 6001 and the 6004.[17] They can be used as layer-2 and layer-3 switches and can aggregate traffic from the Fabric Extenders (FEX) for different blade-server systems. Both models support either front-to-back or back-to-front airflow, and they support Fibre Channel over Ethernet in combination with a 'full' FCoE switch (e.g. a Nexus 5500 or a Brocade 8000 switch, which is the same as the Dell PowerConnect 8000e or the blade version PCM 8428-k).

Nexus 6001[edit]

The Nexus 6001 is a fixed 1 RU switch with 48 x 10 Gbit/s interfaces and 4 x 40 Gbit/s interfaces for uplinks. It can operate as both a layer-2 and a layer-3 switch, and in combination with fabric extenders (FEX) it can aggregate up to 1152 ports at 1 or 10 Gbit/s. The system operates at wire speed at layer 2 and at 1.28 Tbit/s for layer-3 operation.

Nexus 6004 & 5696Q[edit]

The second model in the Nexus 6000 series is a modular chassis, 4 rack units high. The basic chassis offers 48 fixed QSFP+ ports at 40 Gbit/s each; each port can be split into 4 x 10 Gbit/s SFP+ ports. Besides the 48 QSFP+ ports, the chassis can hold up to 4 expansion modules, each offering 12 additional 40 Gbit/s QSFP+ ports, for a total of up to 96 QSFP+ ports or 384 SFP+/10 Gbit/s ports; when aggregating FEX, up to 1536 (blade) server ports at 1 or 10 Gbit/s. As with the 6001, layer-2/layer-3 operation is at line rate, and the total switching capacity of a chassis is 7.68 Tbit/s. The Nexus 6004-EF switch is a modular device which provides the same features as the 6004 but with the use of expansion modules in all slots of the switch. The base configuration of the 6004-EF must have 2 x 12-port 40GbE expansion modules, delivering 24 ports of 40GbE or 96 ports of 10GbE. Additional capacity can be provided by installing further expansion modules.
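The 40-to-4x10 Gbit/s split mentioned above is configured per port. As a heavily hedged sketch, on later Nexus 3000/9000 NX-OS releases breakout is configured roughly as follows; the exact command form on a 6004 may differ, so the module/port numbers and syntax here are assumptions for illustration only:

```
! Sketch: splitting one 40 Gbit/s QSFP+ port into 4 x 10 Gbit/s interfaces
! (syntax as on later Nexus 3000/9000 NX-OS; may differ on a 6004)
interface breakout module 1 port 1 map 10g-4x
```

After the breakout, the single 40 Gbit/s interface is replaced by four 10 Gbit/s subinterfaces, typically numbered Ethernet1/1/1 through Ethernet1/1/4.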

For layer-3 and FCoE operation, additional licences are required.[18]
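Licensed functionality in NX-OS is switched on explicitly with `feature` commands once the licence is installed. A minimal sketch (the licence file name is a placeholder, and which features a given licence unlocks varies by platform):

```
! Sketch: installing a licence and enabling licensed features in NX-OS
! (file name is a placeholder; licence-to-feature mapping varies by platform)
install license bootflash:enterprise.lic
feature ospf
feature fcoe
```

Until the corresponding `feature` command is issued, the related configuration commands are simply not available at the CLI.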

Cisco released the Nexus 6004X switch and later renamed it the Nexus 5696Q. Previously, the Nexus 6000 series was focused on Cisco's 40G aggregation products, and the 5500 and 5600 series on 10G. However, these switches largely shared common hardware components, ASICs and software images, so Cisco decided to merge the product portfolios.

Nexus 7000 series[edit]

Although the Nexus 5000 has some modular capabilities and the Nexus 2000 fabric extender can be attached to the 5500 range, the Nexus 7000 is the true modular switch in the Nexus family, with six versions: one 4-slot, one 9-slot, two 10-slot and two 18-slot switches.[19] Unlike the other Nexus models, the 7000 series switches are modular switches for the campus core and for data-center access, aggregation and core. Some details on the models are given below. As with the Nexus 5000 series, the Nexus 2000 Fabric Extenders can act as remote line cards on the 7000 series. 70xx and 77xx line cards and supervisor modules are not compatible.

Nexus 7004[edit]

  • 4 slots: 3-4 are line card slots, 1-2 are supervisor slots
  • 7 RU height
  • Supports 96 1 or 10 Gbit/s ports (48 per slot), 12 40 Gbit/s ports (6 per slot) or 4 100 Gbit/s ports (2 per slot), all non-blocking ports
  • 1.92 Tbit/s system bandwidth
  • 440 Gbit/s, 720 million pps (720 Mpps) per slot
  • Air flow is side to rear (input on right)
  • The chassis does not have fabric modules, the I/O modules connect directly through the backplane
  • Up to 4 power supplies.

Nexus 7009[edit]

  • 9 slots: 3-9 are line card slots, 1-2 are supervisor slots
  • 14 RU height
  • Supports 336 1 or 10 Gbit/s ports (48 per slot), 42 40 Gbit/s ports (6 per slot) or 14 100 Gbit/s ports (2 per slot), all non-blocking ports
  • 8.8 Tbit/s system bandwidth
  • 550 Gbit/s, 720 Mpps per slot
  • Air flow is side to side (right to left)
  • Up to 5 Crossbar Fabric Modules
  • Up to 2 power supplies

Nexus 7010[edit]

  • 10 slots: 1-4 and 7-10 are line card slots, 5-6 are supervisor slots
  • 21 RU height
  • Supports 384 1 or 10 Gbit/s ports (48 per slot), 48 40 Gbit/s ports (6 per slot) or 16 100 Gbit/s ports (2 per slot), all non-blocking ports
  • 550 Gbit/s, 720 Mpps per slot
  • Air flow is front to back
  • Up to 5 Crossbar Fabric Modules
  • Up to 3 power supplies

Nexus 7018[edit]

  • 18 slots: 1-8 and 11-18 are line card slots, 9-10 are supervisor slots
  • 25 RU height
  • Supports 768 1 or 10 Gbit/s ports (48 per slot), 96 40 Gbit/s ports (6 per slot) or 32 100 Gbit/s ports (2 per slot), all non-blocking ports
  • 18.7 Tbit/s system bandwidth
  • 550 Gbit/s, 720 Mpps per slot
  • Air flow is side to side (right to left)
  • Up to 5 Crossbar Fabric Modules
  • Up to 4 power supplies

Nexus 7710[edit]

  • 10 slots: 1-4 and 7-10 are line card slots, 5-6 are supervisor slots
  • 14 RU height
  • Supports up to 384 10 Gbit/s ports, 192 40 Gbit/s ports or 96 100 Gbit/s ports, all non-blocking ports
  • 42 Tbit/s system bandwidth (21 Tbit/s full duplex)
  • 1.32 Tbit/s per slot
  • Air flow is front to back
  • Up to 6 switch fabric modules
  • Up to 8 power supplies (3 kW each)
  • 6-microsecond latency

Nexus 7718[edit]

  • 18 slots: 1-8 and 11-18 are line card slots, 9-10 are supervisor slots
  • 26 RU height
  • Supports up to 768 10 Gbit/s ports, 384 40 Gbit/s ports or 192 100 Gbit/s ports, all non-blocking ports
  • 83 Tbit/s system bandwidth (42 Tbit/s full duplex)
  • 1.32 Tbit/s per slot
  • Air flow is front to back
  • Up to 6 switch fabric modules
  • Up to 16 power supplies (3 kW each)

Nexus 9000 series[edit]

The Nexus 9000 series is a range of 2U to 21U rack switches offering 60 to 2304 interfaces running at 100 Mbit/s and 1, 10, 25, 40 and 100 Gbit/s Ethernet, plus 10/25/40 Gbit/s FCoE interfaces. They can be used with the above-mentioned Nexus 2000 series fabric extenders.

The Nexus 9000 has many models:

Nexus 9396PX[edit]

  • A two-rack-unit-high switch with 48 SFP+ 10 Gbit/s interfaces supporting ethernet, FCoE and DCB, and one expansion slot offering the module:
    • 12 ports with 40 Gbit/s supporting ethernet, FCoE or DCB

Nexus 93128TX[edit]

  • A three-rack-unit-high switch with 96 fixed 1/10 Gbit/s interfaces supporting ethernet, FCoE and DCB, and one expansion slot offering the module:
    • 8 ports with 40 Gbit/s supporting ethernet, FCoE or DCB

Nexus 9504[edit]

  • 4 line card slots
  • 7 RU height
  • Supports 576 10 Gbit/s and/or 1 Gbit/s, all non-blocking ports
  • 15 Tbit/s system bandwidth

Nexus 9508[edit]

  • 8 line card slots
  • 13 RU height
  • Supports 1152 10 Gbit/s and/or 1 Gbit/s, all non-blocking ports
  • 30 Tbit/s system bandwidth

Nexus 9516[edit]

  • 16 line card slots
  • 21 RU height
  • Supports 2304 10 Gbit/s and/or 1 Gbit/s, all non-blocking ports
  • 60 Tbit/s system bandwidth



Cisco Nexus 9516 Switch

  • Form factor: 21 RU
  • Line card slots: 16
  • Supervisor slots: 2
  • Fabric module slots: 6
  • ACI support: Yes
  • Bandwidth per slot: 3.84 Tbit/s
  • Bandwidth per system: 60 Tbit/s
  • Maximum number of 1/10GBASE-T ports: 768
  • Maximum number of 10 GE ports: 2304
  • Maximum number of 40 GE ports: 576
  • Maximum number of 100 GE ports: 576
  • Airflow: front to back
  • Power supplies (3 kW AC/DC): up to 10
  • Fan trays: 3


End-of-Life Switches[edit]

Table columns: Base model; Form factor; Variants; Available ports/modules; Number of power supplies; Number/type of supervisors; Expansion type; Sync; End-of-life (only major notices listed); Comments.

Current Switches[edit]

Nexus 2000 Series[20]
  • Form factor: Fixed
  • Variants: 2348, 2332, 2248, 2232, 2224, 2148[21]
  • Available ports/modules: 24 8P8C/2 SFP, 48 8P8C/4 SFP,[22] 32 8P8C/4 SFP, 48 8P8C/6 SFP, 32 8P8C (1/10G)/8 SFP+,[23] 48 SFP+/2 to 6 SFP+, 32 SFP+/8 SFP+[24]
  • Supervisors: None; expansion type: None
  • End-of-life (major notices only): none announced to date
  • Comments: the series only behaves as a FEX and cannot be used standalone

Nexus 3000 Series[25]
  • Form factor: Fixed
  • Variants: 3112, 3548, 3524, 3264, 3232, 3172, 3164, 3132, 3064, 3048, 3016[26]
  • Available ports/modules: 48 SFP+/4 QSFP+, 32 8P8C/4 QSFP+, 48 8P8C/4 QSFP+, 16 QSFP+, 48 8P8C/4 SFP+,[27] 32 QSFP+, 64 QSFP+, 48 SFP+/4 QSFP+, 48 8P8C/6 QSFP+, 96 SFP+/8 QSFP+,[28] 24 SFP+, 48 SFP+[29]
  • Power supplies: up to 2
  • Supervisors: None; expansion type: None
  • End-of-life (major notices only): announced 2012 (3064PQ only),[30] announced 2015 (3016 only)[31]

Nexus 4000 Series[32]
  • Form factor: Module
  • Variants: 4001i[33]
  • Supervisors: None; expansion type: None
  • End-of-life (major notices only): none announced to date
  • Comments: blade module for IBM servers

Nexus 5000 Series[34]
  • Form factor: Hybrid
  • Variants: 56128, 5696, 5672, 5648, 5624, 5596, 5548,[35] 5020 (EoSale), 5010 (EoSale)[36]
  • Available ports/modules: 48 SFP+/6 QSFP+, 48 SFP+/4 QSFP+, 12 QSFP+, 24 QSFP+, nothing fixed/expansion,[37] 32 10GBASE-T/16 SFP+/expansion,[38] 48 SFP+/expansion,[39] 32 SFP+/expansion,[40] 40 SFP+/expansion[41]
  • Power supplies: up to 2
  • Supervisors: None
  • Expansion type: 24 SFP+/2 QSFP+, 8 x 1/2/4 Gbit/s FC, 6 x 1/2/4/8 Gbit/s FC, 4 x 10 Gbit/s + 4 x 1/2/4 Gbit/s FC, 6 x 10 Gbit/s FCoE or DCB
  • Can use Nexus 2000 series as FEX
  • End-of-life (major notices only): announced 2012 (5010 and 5020),[42] announced 2015 (5548P only),[43] announced 2018 (5548UP and 5596)[44]
  • Comments: several models have airflow direction options; various unified-port options

Nexus 7000 Series
  • Form factor: Module
  • Variants: 7004, 7009, 7010, 7018, 7702, 7706, 7710, 7718
  • Available ports/modules: 96 1/10 GE, 24 40 GE, 12 100 GE
  • Supervisors: up to 2
  • Expansion type: SFP, SFP+, QSFP+
References[edit]

  1. ^Cisco product overview Datacenter switches: Nexus, visited 28 May 2011
  2. ^http://www.cisco.com/en/US/solutions/collateral/ns340/ns517/ns224/ns955/ns963/solution_overview_c22-687087.html
  3. ^Overview of the Nexus 1000v virtual switch, visited 8 July 2012
  4. ^Cisco brochure Cisco1000v Virtual Switch, PDF, retrieved 28 May 2011
  5. ^Cisco brochure Nexus 2000, PDF, retrieved 28 May 2011
  6. ^ abIT KnowledgeExchange website: Cisco FEX finally available for the HP blade-system, 18 October 2011. Visited: 27 August 2012
  7. ^Fujitsu Press Release
  8. ^ abcTheRegister website: Cisco cuts Nexus 4001d blade switch, 16 February 2010. Visited: 10 March 2013
  9. ^Cisco datasheet: Cisco Nexus B22 Blade Fabric Extender, July 2012. Visited: 27 August 2012
  10. ^Cisco documentation on Cisco 10 Gigabit modules, visited 28 May 2011
  11. ^Cisco documentation on Cisco 40 Gigabit modules, visited 28 May 2011
  12. ^Cisco brochure Nexus 3000, PDF, retrieved 28 May 2011
  13. ^Cisco brochure Nexus 4001 At a Glance, PDF, retrieved 28 May 2011
  14. ^Bladesmadesimple.com: Cisco announces Nexus 4000 for blades, 29 September 2009. Visited: 26 August 2012
  15. ^Cisco brochure Nexus 5000 series, PDF, retrieved 28 May 2011
  16. ^ abcdCisco website on the Nexus 5500 chassis, visited 28 May 2011
  17. ^Cisco website: Cisco Nexus 6000 series, visited: 14 April 2013
  18. ^Cisco Nexus 6004 datasheet, 2013, downloaded: 14 April 2013
  19. ^Cisco Nexus 7000 Series Switches
  20. ^Cisco Nexus 2000 Series Product Line
  21. ^Cisco Nexus 2000 model list
  22. ^Cisco Nexus 2000 model types 1 GE
  23. ^Cisco Nexus 2000 model types 10G SFP
  24. ^Cisco Nexus 2000 model types 10G-Base
  25. ^Cisco Nexus 3000 Series Product Line
  26. ^Cisco Nexus 3000 model list
  27. ^Cisco Nexus 3000 model types
  28. ^Cisco Nexus 3100 model types
  29. ^Cisco Nexus 3500 model types
  30. ^Cisco Nexus 3064 EoL announcement
  31. ^Cisco Nexus 3016 EoL announcement
  32. ^Cisco Nexus 4000 Series Product Line
  33. ^Cisco Nexus 4000 model list
  34. ^Cisco Nexus 5000 Series Product Line
  35. ^Cisco Nexus 5000 model list
  36. ^Cisco Nexus 5000 EoS list
  37. ^Cisco Nexus 5000 model comparison
  38. ^Cisco Nexus 5596
  39. ^Cisco Nexus 5548P
  40. ^Cisco 5548UP
  41. ^Cisco Nexus 5020
  42. ^Cisco Nexus 5010/5020 EoL announcement
  43. ^Cisco Nexus 5548 EoL announcement
  44. ^Cisco Nexus 5500 series EoL announcement
Example of a PCI digital I/O expansion card
PCI expansion slot

In computing, the expansion card, expansion board, adapter card or accessory card is a printed circuit board that can be inserted into an electrical connector, or expansion slot, on a computer motherboard, backplane or riser card to add functionality to a computer system via the expansion bus.

An expansion bus is a computer bus which moves information between the internal hardware of a computer system (including the CPU and RAM) and peripheral devices. It is a collection of wires and protocols that allows for the expansion of a computer.[1]


History[edit]

Even vacuum-tube based computers had modular construction, but individual functions for peripheral devices filled a cabinet, not just a printed circuit board. Processor, memory and I/O cards became feasible with the development of integrated circuits. Expansion cards allowed a processor system to be adapted to the needs of the user, allowing variations in the type of devices connected, additions to memory, or optional features to the central processor (such as a floating point unit). Minicomputers, starting with the PDP-8, were made of multiple cards, all powered by and communicating through a passive backplane.

The first commercial microcomputer to feature expansion slots was the Micral N, in 1973. The first company to establish a de facto standard was Altair with the Altair 8800, developed in 1974-1975, which later became a multi-manufacturer standard, the S-100 bus. Many of these computers were also passive-backplane designs, where all elements of the computer (processor, memory, and I/O) plugged into a card cage which passively distributed signals and power between the cards.

Proprietary bus implementations for systems such as the Apple II co-existed with multi-manufacturer standards.

IBM PC and descendants[edit]

IBM introduced what would retroactively be called the Industry Standard Architecture (ISA) bus with the IBM PC in 1981. At that time, the technology was called the PC bus. The IBM XT, introduced in 1983, used the same bus (with slight exception). The 8-bit PC and XT bus was extended with the introduction of the IBM AT in 1984. This used a second connector for extending the address and data bus over the XT, but was backward compatible; 8-bit cards were still usable in the AT 16-bit slots. Industry Standard Architecture (ISA) became the designation for the IBM AT bus after other types were developed. Users of the ISA bus had to have in-depth knowledge of the hardware they were adding to properly connect the devices, since memory addresses, I/O port addresses, and DMA channels had to be configured by switches or jumpers on the card to match the settings in driver software.

IBM's MCA bus, developed for the PS/2 in 1987, was a competitor to ISA, also their design, but fell out of favor due to the ISA's industry-wide acceptance and IBM's licensing of MCA. EISA, the 32-bit extended version of ISA championed by Compaq, was used on some PC motherboards until 1997, when Microsoft declared it a 'legacy' subsystem in the PC 97 industry white-paper. Proprietary local buses (q.v. Compaq) and then the VESA Local Bus Standard, were late 1980s expansion buses that were tied but not exclusive to the 80386 and 80486 CPU bus.[2][3][4] The PC/104 bus is an embedded bus that copies the ISA bus.

Intel launched their PCI bus chipsets along with the P5-based Pentium CPUs in 1993. The PCI bus had been introduced in 1991 as a replacement for ISA. The standard (now at version 3.0) is found on PC motherboards to this day. The PCI standard supports bus bridging: as many as ten daisy-chained PCI buses have been tested. CardBus, using the PCMCIA connector, is a PCI format that attaches peripherals to the host PCI bus via a PCI-to-PCI bridge. CardBus has been supplanted by the ExpressCard format.

Intel introduced the AGP bus in 1997 as a dedicated video-acceleration solution. AGP devices are logically attached to the PCI bus over a PCI-to-PCI bridge. Though termed a bus, AGP usually supports only a single card at a time. From 2005 PCI Express has been replacing both PCI and AGP. This standard, approved in 2004, implements the logical PCI protocol over a serial communication interface. PC/104(-Plus) or Mini PCI are often added for expansion on small-form-factor boards such as Mini-ITX.

For their 1000 EX and 1000 HX models, Tandy Computer designed the PLUS expansion interface, an adaptation of the XT-bus supporting cards of a smaller form factor. Because it is electrically compatible with the XT bus (a.k.a. 8-bit ISA or XT-ISA), a passive adapter can be made to connect XT cards to a PLUS expansion connector. Another feature of PLUS cards is that they are stackable. Another bus that offered stackable expansion modules was the 'sidecar' bus used by the IBM PCjr. This may have been electrically comparable to the XT bus; it most certainly had some similarities since both essentially exposed the 8088 CPU's address and data buses, with some buffering and latching, the addition of interrupts and DMA provided by Intel add-on chips, and a few system fault detection lines (Power Good, Memory Check, I/O Channel Check). Again, PCjr sidecars are not technically expansion cards, but expansion modules, with the only difference being that the sidecar is an expansion card enclosed in a plastic box (with holes exposing the connectors).

Other families[edit]

Most other computer lines, including those from Apple Inc. (Apple II, Macintosh), Tandy, Commodore, Amiga, and Atari, offered their own expansion buses. The Amiga used Zorro II. Apple used a proprietary system with seven 50-pin slots for Apple II peripheral cards, then later used the NuBus for its Macintosh series until 1995, when they switched to a PCI bus. Generally, PCI expansion cards will function on any CPU platform if there is a software driver for that card type. PCI video cards and other cards that contain a BIOS are problematic, although video cards conforming to VESA standards may be used for secondary monitors. DEC Alpha, IBM PowerPC, and NEC MIPS workstations used PCI bus connectors.[5] Both Zorro II and NuBus were plug and play, requiring no hardware configuration by the user.

Even many video game consoles, such as the Sega Genesis, included expansion buses; at least in the case of the Genesis, the expansion bus was proprietary, and in fact the cartridge slots of many cartridge based consoles (not including the Atari 2600) would qualify as expansion buses, as they exposed both read and write capabilities of the system's internal bus. However, the expansion modules attached to these interfaces, though functionally the same as expansion cards, are not technically expansion cards, due to their physical form.

Other computer buses were used for industrial control, instruments, and scientific systems. Some of these standards were VMEbus, STD Bus, and others.

External expansion buses[edit]

Laptops generally are unable to accept most expansion cards. Several compact expansion standards were developed. The original PC Card expansion card standard is essentially a compact version of the ISA bus. The CardBus expansion card standard is an evolution of the PC card standard to make it into a compact version of the PCI bus. The original ExpressCard standard acts like it is either a USB 2.0 peripheral or a PCI Express 1.x x1 device. ExpressCard 2.0 adds SuperSpeed USB as another type of interface the card can use. Unfortunately, CardBus and ExpressCard are vulnerable to DMA attack unless the laptop has an IOMMU that is configured to thwart these attacks.

Applications[edit]

The primary purpose of an expansion card is to provide or expand on features not offered by the motherboard. For example, the original IBM PC did not have on-board graphics or hard drive capability. In that case, a graphics card and an ST-506 hard disk controller card provided graphics capability and hard drive interface respectively. Some single-board computers made no provision for expansion cards, and may only have provided IC sockets on the board for limited changes or customization. Since reliable multi-pin connectors are relatively costly, some mass-market systems such as home computers had no expansion slots and instead used a card-edge connector at the edge of the main board, putting the costly matching socket into the cost of the peripheral device.

In the case of expansion of on-board capability, a motherboard may provide a single serial RS232 port or Ethernet port. An expansion card can be installed to offer multiple RS232 ports or multiple and higher bandwidth Ethernet ports. In this case, the motherboard provides basic functionality but the expansion card offers additional or enhanced ports.

Physical construction[edit]

One edge of the expansion card holds the contacts (the edge connector or pin header) that fit into the slot. They establish the electrical contact between the electronics on the card and on the motherboard. Peripheral expansion cards generally have connectors for external cables. In the PC-compatible personal computer, these connectors were located in the support bracket at the back of the cabinet. Industrial backplane systems had connectors mounted on the top edge of the card, opposite to the backplane pins.

Depending on the form factor of the motherboard and case, around one to seven expansion cards can be added to a computer system; backplane systems can accept 19 or more. When many expansion cards are added to a system, total power consumption and heat dissipation become limiting factors. Some expansion cards occupy more than one slot: for example, many graphics cards on the market as of 2010 are dual-slot cards, using the second slot's space for an active heat sink with a fan.

Some cards are 'low-profile' cards, meaning that they are shorter than standard cards and will fit in a lower height computer chassis. (There is a 'low profile PCI card' standard[6] that specifies a much smaller bracket and board area). The group of expansion cards that are used for external connectivity, such as network, SAN or modem cards, are commonly referred to as input/output cards (or I/O cards).

Daughterboard[edit]

A sound card with a MIDI daughterboard attached
A daughterboard for Inventec server platform that acts as a RAID controller based on LSI 1078 chipset

A daughterboard, daughtercard, mezzanine board or piggyback board is an expansion card that attaches to a system directly.[7] Daughterboards often have plugs, sockets, pins or other attachments for other boards. They often have only internal connections within a computer or other electronic device, and usually access the motherboard directly rather than through a computer bus.

Daughterboards are sometimes used in computers to allow expansion cards to fit parallel to the motherboard, usually to maintain a small form factor. This form is also called a riser card, or riser. Daughterboards are also sometimes used to expand the basic functionality of an electronic device, such as when a certain model has features added and is released as a new or separate model. Rather than redesigning the first model completely, a daughterboard may be added to a special connector on the main board. These usually fit on top of and parallel to the board, separated by spacers or standoffs, and are sometimes called mezzanine cards due to being stacked like the mezzanine of a theatre. Wavetable cards (sample-based synthesis cards) are often mounted on sound cards in this manner.

Some mezzanine card interface standards include:

  • the 400-pin FPGA Mezzanine Card (FMC)
  • the 172-pin High Speed Mezzanine Card (HSMC)[8][9]
  • the PCI Mezzanine Card (PMC)
  • XMC mezzanines
  • the Advanced Mezzanine Card
  • IndustryPacks (VITA 4), the GreenSpring Computers Mezzanine modules

Examples of daughterboard-style expansion cards include:

  • Enhanced Graphics Adapter piggyback board, adds memory beyond 64 KB, up to 256 KB[10]
  • Expanded memory piggyback board, adds additional memory to some EMS and EEMS boards[11]
  • ADD daughterboard
  • RAID daughterboard
  • Network interface controller (NIC) daughterboard
  • CPU Socket daughterboard
  • Bluetooth daughterboard
  • Modem daughterboard
  • AD/DA/DIO daughter-card
  • Communication daughterboard (CDC)
  • Server Management daughterboard (SMDC)
  • Serial ATA connector daughterboard
  • Robotic daughterboard
  • Access control List daughterboard
  • Arduino 'shield' daughterboards
  • Beaglebone 'cape' daughterboard
  • Raspberry Pi 'HAT' daughterboard.
  • Network Daughterboard (NDB). Commonly integrates bus interface logic, LLC, PHY, and magnetics onto a single board.

Standards[edit]

  • PCI Extended (PCI-X)
  • PCI Express (PCIe)
  • Accelerated Graphics Port (AGP)
  • Conventional PCI (PCI)
  • Industry Standard Architecture (ISA)
  • Micro Channel Architecture (MCA)
  • VESA Local Bus (VLB)
  • CardBus/PC Card/PCMCIA (for notebook computers)
  • ExpressCard (for notebook computers)
  • Audio/modem riser (AMR)
  • Communications and networking riser (CNR)
  • CompactFlash (for handheld computers and high speed cameras and camcorders)
  • SBus (1990s SPARC-based Sun computers)
  • Zorro (Commodore Amiga)
  • NuBus (Apple Macintosh)

See also[edit]

  • M-Module, an industrial mezzanine standard for modular I/O

References[edit]

  1. ^ 'What is expansion bus'. Webopedia.
  2. ^ 'MB-54VP'. ArtOfHacking.com. Retrieved 2012-11-17.
  3. ^ 'NX586'. ArtOfHacking.com. Retrieved 2012-11-17.
  4. ^ 'LEOPARD 486SLC2 REV. B'. ArtOfHacking.com. Retrieved 2012-11-17.
  5. ^ 'Motherboards'. ArtOfHacking.com. Retrieved 2012-11-17.
  6. ^ 'PCI Mechanical Working Group ECN: Low Profile PCI Card' (PDF). Pcisig.com. Retrieved 2012-11-17.
  7. ^ IEEE Std. 100 Authoritative Dictionary of IEEE Standards Terms, Seventh Edition, IEEE, 2000, ISBN 0-7381-2601-2, page 284.
  8. ^ Jens Kröger. 'Data Transmission at High Rates via Kapton Flexprints for the Mu3e Experiment'. 2014. pp. 43–44.
  9. ^ Altera. 'High Speed Mezzanine Card (HSMC) Specification'. pp. 2–3.
  10. ^ Market Looks to EGA as De Facto Standard, InfoWorld, Aug 19, 1985.
  11. ^ Product Comparison: 16-Bit EMS Memory, InfoWorld, Sep 7, 1987.
