
IRVINE, Calif., Sept. 24, 2014 /PRNewswire/ —

News Highlights:

  • First to deliver 32 ports of 100GE, 64 ports of 40GE/50GE or 128 ports of 25GE on a single chip
  • Significantly improves efficiency of cloud-scale networks using high-density 25/50GE data center link protocols
  • In-field configurable flow processing and instrumentation engines enrich network control and visibility
  • Leverages broad ecosystem of network software, hardware, OEM, operator and application partners

Broadcom Corporation (NASDAQ: BRCM), a global innovation leader in semiconductor solutions for wired and wireless communications, today announced the immediate availability of a new line of switches optimized for cloud-scale data centers. Building on its widely deployed StrataXGS® Trident and StrataDNX™ products, the new StrataXGS® Tomahawk™ Switch Series is the industry’s highest performance Ethernet switch, delivering 3.2 Terabits per second (Tbps) switching capacity, unparalleled port density and SDN-optimized engines in a single chip. For more news, visit Broadcom’s Newsroom.

With more than 7 billion integrated transistors, the StrataXGS Tomahawk Series enables the transformation of next-generation cloud fabrics to all-25Gbps per-lane interconnect, increasing link performance by 2.5X. With dense 100GE connectivity and authoritative support for new 25GE and 50GE protocol standards, the StrataXGS Tomahawk Series significantly bolsters the bandwidth capacity, scalability, and cost efficiency of today’s mega data centers and high performance computing (HPC) environments.

“Our StrataXGS Tomahawk Series will usher in the next wave of data centers running 25G and 100G Ethernet, while delivering the network visibility required to operate large-scale cloud computing, storage and HPC fabrics,” said Rajiv Ramaswami, Broadcom Executive Vice President, Infrastructure & Networking Group. “This is the culmination of a multi-year cooperative effort with our partners and customers to prepare for this transition. We are pleased to see significant industry investment in the Tomahawk 32x100GE form factor as well as the 25G/50G Ethernet specification, which Broadcom defined and co-founded as an industry standard.”

Transforming Leaf-Spine Networks to 25/100GE for Maximum Efficiency and Scale-Out

By deploying StrataXGS Tomahawk based switches, data center networks currently running 10GE at the top-of-rack (leaf) level and 40GE at the end-of-row (spine) level can upgrade to 25GE and 100GE interconnect, respectively, to accommodate growth in distributed server/storage workloads without increasing network equipment footprint or cabling complexity. A three-tier data center fabric of StrataXGS Tomahawk switches, using standard, compact, CAPEX-efficient form factors, can deliver over 15X higher network bandwidth capacity.

In lieu of upgrading server-to-switch connections to 40GE, a StrataXGS Tomahawk based network driving 25GE to the server reduces cabling elements within the rack by as much as 75 percent, while quadrupling the number of server and storage nodes that can be interconnected in a leaf-spine topology. This dual-pronged improvement in bandwidth efficiency and port density compared to existing 40GE solutions gives modern data centers unprecedented ability to scale out their networks and achieve significant return on investment.

Comprehensive Visibility and Control for Software-Defined Data Centers

Optimized for Software Defined Network (SDN) application ecosystems, Broadcom’s new BroadView™ instrumentation feature set enables data center operators to have full visibility of network and switch-level analytics. With extensive application flow and debug statistics, link health and utilization monitors, streaming network congestion detection and packet tracing capabilities, the StrataXGS Tomahawk Series provides operators the telemetry to troubleshoot large-scale networks, apply controls for optimal performance, respond to potential problems before they happen and drive down OPEX.

Featuring new FleXGS™ packet processing engines, the StrataXGS Tomahawk Series enables operators to adapt to changing workloads and control their networks, with an extensive suite of user configurable functions for flow processing, security, network virtualization, measurement/monitoring, congestion management and traffic engineering. Among other benefits, FleXGS engines provide in-field configurable forwarding and classification database profiles, more than 12X greater application policy scale compared to previous generation switches, increased flexibility of packet lookups and key generation, and rich load balancing and traffic redirection controls. All these configurable capabilities are accessible to the network control plane via industry-proven software APIs and come without sacrificing network data plane throughput or latency.

StrataXGS Tomahawk Key Features

  • 3.2 Tbps multilayer Ethernet switching
  • Integrated low-power 25GHz SerDes
  • Authoritative support for 25G and 50G Ethernet Consortium specification
  • Configurable pipeline latency enabling sub-400ns port-to-port operation
  • Supports high performance storage/RDMA protocols including RoCE and RoCEv2
  • BroadView instrumentation: provides switch- and network-level telemetry
  • High-density FleXGS flow processing for configurable forwarding/match/action capabilities
  • OpenFlow 1.3+ support using Broadcom OF-DPA™
  • Comprehensive overlay and tunneling support including VXLAN, NVGRE, MPLS, SPB
  • Flexible policy enforcement for existing and new virtualization protocols
  • Enhanced Smart-Hash™ load balancing modes for leaf-spine congestion avoidance
  • Integrated Smart-Buffer™ technology with 5X greater performance versus static buffering
  • Single-chip and multi-chip HiGig™ solutions for top-of-rack and scalable chassis applications

Availability

The Broadcom StrataXGS BCM56960 Tomahawk Switch Series is now sampling.


Engineers have been rapidly increasing chip-to-chip I/O speeds in an effort to keep pace with the bandwidth needs of increasingly integrated silicon. Consequently, a variety of parallel and serial options at speeds up to 10 Gbits/second are becoming available that a designer would do well to evaluate carefully.

When selecting a chip-to-chip interface, the factors to consider include size, power, the number of required package signal balls and latency. When comparing a 10-Gbit/s serializer/deserializer (serdes), the 10-Gbit attachment unit interface (XAUI) and system packet interface 4 (SPI-4) in dynamic mode, however, a 10-Gbit/s serdes will typically have the advantage in size, power and signal pins.

With respect to power consumption, a 10-Gbit/s serdes can draw about two-thirds the power of a XAUI interface and about one-third that of an SPI-4 interface. The physical size of a 10-Gbit/s serdes core can be about half that of a XAUI interface and one-eighth that of an SPI-4 interface.

The number of signal pins can vary depending on the package, but in general, the interface with fewer high-speed signal I/Os will have the advantage. For example, a 10-Gbit/s serdes uses four high-speed signal lines while XAUI uses 16 and SPI-4 uses 72.
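
For a rough side-by-side view, the relative power, core-area and signal-line figures quoted above can be tabulated. The sketch below (Python) simply restates those approximate ratios, normalized to the single-lane 10-Gbit/s serdes; it is illustrative only, not vendor data.

```python
# Rough comparison of 10-Gbit/s chip-to-chip interface options, using the
# approximate figures quoted in this article, normalized to a single-lane
# 10-Gbit/s serdes. Illustrative only, not vendor data.
INTERFACES = {
    #              relative power, relative core area, high-speed signal lines
    "10G serdes": {"power": 1.0, "area": 1.0, "signal_lines": 4},
    "XAUI":       {"power": 1.5, "area": 2.0, "signal_lines": 16},  # serdes is ~2/3 the power, ~1/2 the area
    "SPI-4":      {"power": 3.0, "area": 8.0, "signal_lines": 72},  # serdes is ~1/3 the power, ~1/8 the area
}

print(f"{'Interface':<12}{'Rel. power':>12}{'Rel. area':>12}{'Signal lines':>14}")
for name, figures in INTERFACES.items():
    print(f"{name:<12}{figures['power']:>12.1f}{figures['area']:>12.1f}{figures['signal_lines']:>14}")
```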

Interfaces with more data lines generally provide lower latency at the PHY layer. At a high level, transmitting and receiving at a higher speed over a single line means more serialization and deserialization must be performed on the data before it is transmitted and after it is received.

A 10-Gbit/s serdes, for example, will perform 64-to-1 or 32-to-1 serialization and deserialization. Each lane of a XAUI interface, on the other hand, only needs to perform 10-to-1 serialization and deserialization. So, for applications where link latency is critical, an SPI-4-type physical-layer (PHY) interface or a XAUI interface will generally provide superior performance.
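
To put the gearing side by side, the per-lane serialization ratios just described imply the parallel-side clock rates below. This is simple arithmetic on the figures in this article, not a vendor specification; deeper gearing means more bits are gathered on each side of the link per parallel word, which is the basis of the latency argument above.

```python
# Per-lane gearing for the interfaces discussed in this article.
# Parallel-side clock = per-lane line rate / serialization ratio
# (derived arithmetic, not a vendor specification).
LANES = {
    #               per-lane rate (bit/s), serialization ratio
    "10G serdes":  (10.0e9,  64),   # 64:1 (a 32:1 gearbox is also an option)
    "XAUI lane":   (3.125e9, 10),   # 10:1, one 8B/10B code group per cycle
    "SPI-4 line":  (800.0e6, 8),    # 8:1 per LVDS data line (see below)
}

for name, (line_rate, ratio) in LANES.items():
    parallel_clock_mhz = line_rate / ratio / 1e6
    print(f"{name:<11} {ratio:>2}:1 gearing -> ~{parallel_clock_mhz:.2f} MHz parallel clock")
```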

Depending on the application, other items to consider when selecting an interface include the capability to be backward compatible with legacy interfaces, packaging options and channel characteristics.

To dig deeper into the high-speed interconnect issue, let's take a look at an example of a 10-Gbit/s interface and how it could be implemented in an ASIC.

In this example, the ASIC logic transmits 32- or 64-bit parallel data. Depending on the application, this data can be 64B/66B encoded or SONET-scrambled. The data is serialized and sent using an NRZ (non-return-to-zero) signal over a single differential pair at 9.95 Gbits/s to 11.1 Gbits/s.

On the receive side, the differential serial data stream is received, the clock is recovered and the data is deserialized. The data is then provided to the ASIC logic at 32 or 64 bits wide with respect to the recovered clock. The core employs on-chip termination (100 ohms, differential) and for most applications transmits with a nominal signal amplitude of 500 mV differential, peak-to-peak.
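
To make the datapath arithmetic concrete, the parallel width multiplied by the parallel clock must equal the serial line rate. The sketch below checks a few combinations consistent with the 9.95 to 11.1-Gbit/s range above; the specific clock values are back-of-the-envelope figures, not requirements of any particular core.

```python
# Back-of-the-envelope check that parallel datapath width x clock matches
# the serial line rates discussed in this article.
def serial_rate_gbps(width_bits: int, parallel_clock_mhz: float) -> float:
    """Serial line rate (Gbit/s) produced by a gearbox of the given width and clock."""
    return width_bits * parallel_clock_mhz * 1e6 / 1e9

print(serial_rate_gbps(64, 156.25))       # 10.0     - plain 10-Gbit/s payload
print(serial_rate_gbps(64, 161.1328125))  # 10.3125  - 10 Gbit/s with 64B/66B overhead (x 66/64)
print(serial_rate_gbps(32, 311.04))       # 9.95328  - SONET OC-192 line rate
```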

Such a 10-Gbit/s serdes core may be optimized for low power and small size by targeting applications that require driving up to approximately 20 cm of FR4-type PC board and a single connector. These applications include chip-to-chip links where both chips are on the same board, or where one chip is on a daughter card.

By designing the serdes core so that it supports the XFP MSA (10 Gigabit Small Form Factor Pluggable Multi Source Agreement), it could then also be used to interface with XFP optical modules. These modules do not perform serialization and deserialization, and they have significantly smaller form factors than modules that employ parallel interfaces.

Rather than pushing 10 Gbits/s over a single differential pair, an alternative is to use an XAUI interface. XAUI employs four serdes, or lanes, each transmitting and receiving differential 8B/10B-encoded data at 3.125 Gbits/s. The effective bandwidth after the overhead for the 8B/10B coding is removed is 10 Gigabits of data per second.
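
The 8B/10B bookkeeping works out as follows; this is just the arithmetic behind the figures above.

```python
# XAUI raw and effective bandwidth, from the per-lane figures above.
lanes = 4
lane_rate_gbps = 3.125        # per-lane signaling rate
coding_efficiency = 8 / 10    # 8B/10B carries 8 data bits in every 10 line bits

raw_gbps = lanes * lane_rate_gbps              # 12.5 Gbit/s on the wire
effective_gbps = raw_gbps * coding_efficiency  # 10.0 Gbit/s of data
print(raw_gbps, effective_gbps)
```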

XAUI interfaces may be used to transfer data between chips, over a backplane or to an optical module such as a Xenpak or XPAK (two 10-Gbit MSAs). When the XAUI interface is integrated into an ASIC, it typically provides 10-to-1 serialization and deserialization and interfaces to an XGXS (10-Gbit Ethernet extended sublayer) block that performs 8B/10B encoding/decoding, lane alignment, character substitution, and resynchronization of the received data stream to the local clock.
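
Of the XGXS functions listed, lane alignment is perhaps the easiest to picture. As a purely conceptual sketch (not the actual IEEE 802.3 Clause 48 state machine), deskew amounts to locating a known alignment marker on each lane and delaying the earlier lanes until the markers line up across all four:

```python
# Toy illustration of XAUI-style lane alignment (deskew). The real XGXS uses
# ||A|| alignment columns and the state machines defined in IEEE 802.3; this
# sketch only shows the underlying idea, with a made-up marker symbol.
ALIGN = "A"  # hypothetical marker standing in for the 8B/10B /A/ code group

def deskew(lanes: list[list[str]]) -> list[list[str]]:
    """Delay each lane so the first alignment marker lands in the same column."""
    positions = [lane.index(ALIGN) for lane in lanes]  # marker position per lane
    latest = max(positions)                            # align everyone to the latest lane
    return [["idle"] * (latest - pos) + lane for lane, pos in zip(lanes, positions)]

# Four lanes arriving with up to two symbols of relative skew.
skewed = [
    ["A", "d0", "d4"],
    ["x", "A", "d1"],
    ["x", "x", "A"],
    ["A", "d3", "d7"],
]
for lane in deskew(skewed):
    print(lane)
```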

Another alternative for 10-Gbit/s chip-to-chip links is to use low-voltage differential signaling (LVDS) I/Os consisting of multiple data lines. SPI-4 interfaces are examples of such links. They feature 16 LVDS transmit data lines, one transmit control line, 16 receive data lines, one receive control line, and separate transmit and receive LVDS clocks which are forwarded with the data in each direction. In a typical application, each data line operates at a minimum of 622 Mbits/s, with 800 Mbits/s and above being common. The clocks run at half the baud rate; for example, 311 MHz for data-line operation at 622 Mbits/s.
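
The aggregate numbers implied by those per-line figures are simple arithmetic, shown below for the two line rates mentioned; they are illustrative and not taken from the OIF specification text.

```python
# SPI-4 aggregate bandwidth and forwarded-clock rate, from the per-line figures above.
data_lines = 16

for line_rate_mbps in (622, 800):
    aggregate_gbps = data_lines * line_rate_mbps / 1000  # ~9.95 or 12.8 Gbit/s raw
    clock_mhz = line_rate_mbps / 2                       # clock runs at half the baud rate
    print(f"{line_rate_mbps} Mbit/s per line -> {aggregate_gbps:.2f} Gbit/s aggregate, "
          f"{clock_mhz:.0f} MHz forwarded clock")
```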

There are two primary operating modes for the electrical interface in SPI-4 links. In static mode, which is for lower data rates, such as 800 Mbits/s and below, the skew between data lines and between clock and data must be carefully managed so that the clock may latch in the data on the receive side. Dynamic mode requires a more sophisticated receiver that determines the optimum timing for sampling each data line. In this mode there are no restrictions on clock-to-data skew and on-chip de-skew logic can resolve multiple bit times of skew between data lines.

Dynamic mode uses an initialization sequence to allow the receiver sampling circuitry to determine the optimal timing for latching each data line. With no restriction on clock-to-data skew, relaxed requirements for data-to-data skew and higher speed capability, dynamic mode has proved to be a popular interface. For both modes, the interface generally uses on-chip 8-to-1 serdes for each data line.

Rich Hovey (rhovey@lsil.com) is manager of the CoreWare product management group at LSI Logic Corp. (Milpitas, Calif.).
