
x16 slots: up to 128 GB/second per direction


This morning the PCI Special Interest Group (PCI-SIG) is announcing the long-awaited final (1.0) specification for PCI Express 6.0. The next generation of the ubiquitous bus once again doubles the data transfer rate of a PCIe lane, bringing it up to 8 GB/second in each direction – and far, far more for multi-lane configurations. With the final version of the specification now sorted and approved, the group expects the first commercial hardware to hit the market in 12 to 18 months, which in practice means it should start showing up in servers in 2023.

First announced in the summer of 2019, PCI Express 6.0 is, as the name suggests, the immediate successor to the current-generation PCIe 5.0 specification. With the goal of continuing to double PCIe bandwidth roughly every three years, PCI-SIG began work on PCIe 6.0 almost as soon as the 5.0 specification was completed, looking at ways to once again double the bandwidth of PCIe. The product of those development efforts is the new PCIe 6.0 specification, and while the group missed its original late-2021 release target by just a few weeks, today it is announcing that the specification has been finalized and released to the group's members.

As always, the creation of an even faster version of PCIe has been driven by the insatiable bandwidth needs of the industry. The amount of data moved by graphics cards, accelerators, network cards, SSDs, and other PCIe devices only continues to grow, and with it the speed of the bus needed to feed those devices. As with past versions of the standard, the immediate demand for the faster specification comes from server operators, who already regularly use copious amounts of high-speed hardware. But in due time the technology should filter down to consumer devices (i.e. PCs) as well.

By doubling the speed of a PCIe link, PCIe 6.0 delivers an across-the-board doubling of throughput. x1 links go from 4 GB/second/direction to 8 GB/second/direction, and that scales all the way up to 128 GB/second/direction for a full x16 link. For devices that already saturate a link of a given width, the extra bandwidth represents a significant increase in their bus limit; meanwhile, for devices that do not yet saturate a link, PCIe 6.0 offers the chance to drop down to a narrower link width, maintaining the same bandwidth while reducing hardware costs.

PCI Express Bandwidth
(Full Duplex: GB/second/direction)

Slot Width | PCIe 1.0 (2003) | PCIe 2.0 (2007) | PCIe 3.0 (2010) | PCIe 4.0 (2017) | PCIe 5.0 (2019) | PCIe 6.0 (2022)
x1         | 0.25 GB/sec     | 0.5 GB/sec      | ~1 GB/sec       | ~2 GB/sec       | ~4 GB/sec       | 8 GB/sec
x2         | 0.5 GB/sec      | 1 GB/sec        | ~2 GB/sec       | ~4 GB/sec       | ~8 GB/sec       | 16 GB/sec
x4         | 1 GB/sec        | 2 GB/sec        | ~4 GB/sec       | ~8 GB/sec       | ~16 GB/sec      | 32 GB/sec
x8         | 2 GB/sec        | 4 GB/sec        | ~8 GB/sec       | ~16 GB/sec      | ~32 GB/sec      | 64 GB/sec
x16        | 4 GB/sec        | 8 GB/sec        | ~16 GB/sec      | ~32 GB/sec      | ~64 GB/sec      | 128 GB/sec
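
The figures in the table fall straight out of each generation's per-lane transfer rate and encoding efficiency. As a quick sanity check, here is a short Python sketch (our own illustration, not anything from the spec) that reproduces the per-direction numbers above:

```python
# Per-lane transfer rate (GT/s) and encoding efficiency for each PCIe generation.
GENERATIONS = {
    "PCIe 1.0": (2.5, 8 / 10),     # 8b/10b encoding
    "PCIe 2.0": (5.0, 8 / 10),
    "PCIe 3.0": (8.0, 128 / 130),  # 128b/130b encoding
    "PCIe 4.0": (16.0, 128 / 130),
    "PCIe 5.0": (32.0, 128 / 130),
    "PCIe 6.0": (64.0, 1.0),       # 1b/1b FLIT encoding (FEC/CRC bytes not modeled)
}

def bandwidth_gb_per_s(gen: str, lanes: int) -> float:
    """Per-direction bandwidth in GB/s: GT/s x efficiency / 8 bits, x lane count."""
    rate_gt, eff = GENERATIONS[gen]
    return rate_gt * eff / 8 * lanes

for lanes in (1, 2, 4, 8, 16):
    row = ", ".join(f"{gen}: {bandwidth_gb_per_s(gen, lanes):.2f}" for gen in GENERATIONS)
    print(f"x{lanes}: {row}")
```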

PCI Express was first launched in 2003, and today's release of the 6.0 specification essentially marks the third major revision of the technology. Whereas PCIe 4.0 and 5.0 were "merely" extensions of earlier signaling methods – specifically, continuing to use PCIe 3.0's 128b/130b encoding with NRZ signaling – PCIe 6.0 undertakes a major overhaul, probably the largest in the standard's history.

To pull off another bandwidth doubling, PCI-SIG has changed the signaling technology entirely, moving from the non-return-to-zero (NRZ) signaling used since the beginning to pulse-amplitude modulation 4 (PAM4).

As we wrote back when the development of PCIe 6.0 was first announced:

At a very high level, what PAM4 does versus NRZ is to take a page from the MLC NAND playbook and double the number of electrical states a single cell (or in this case, a transmission) can hold. Rather than the traditional 0/1 high/low signaling of NRZ, PAM4 uses four signal levels, so that a signal can encode four possible two-bit patterns: 00/01/10/11. This allows PAM4 to carry twice as much data as NRZ without having to double the transmission bandwidth, which for PCIe 6.0 would have meant a frequency of around 30 GHz(!).
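
To make the two-bits-per-symbol idea concrete, here is a minimal Python sketch of the mapping. The numeric levels and their Gray-coded ordering are illustrative assumptions on our part, not the electrical levels the PHY spec defines:

```python
# Map two-bit symbols to four nominal PAM4 levels (illustrative values; real
# transmitters use analog amplitudes defined by the PHY specification).
PAM4_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}  # Gray-coded order
LEVEL_TO_BITS = {level: bits for bits, level in PAM4_LEVELS.items()}

def pam4_encode(bits):
    """Pack a bit sequence (even length) into one PAM4 symbol per two bits."""
    assert len(bits) % 2 == 0
    return [PAM4_LEVELS[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

def pam4_decode(symbols):
    """Recover two bits per symbol -- half as many symbols as NRZ for the same data."""
    return [bit for s in symbols for bit in LEVEL_TO_BITS[s]]

data = [1, 0, 0, 1, 1, 1, 0, 0]
symbols = pam4_encode(data)          # 4 symbols instead of 8 NRZ symbols
assert pam4_decode(symbols) == data
print(symbols)                       # [3, -1, 1, -3]
```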

PAM4 itself is not a new technology, but until now it has been the domain of ultra-high-end networking standards such as 200G Ethernet, where the amount of space available for additional physical channels is even more limited. As a result, the industry already has several years of experience working with the signaling standard, and with its bandwidth needs continuing to grow, PCI-SIG has decided to bring it inside the chassis by basing the next generation of PCIe on it.

The trade-off for using PAM4 is, of course, cost. Even with its greater bandwidth per Hz, PAM4 currently costs more to implement at just about every level, from the PHY to the physical layout, which is why it has not taken the world by storm and why NRZ remains in use elsewhere. The sheer scale of PCIe deployments will certainly help here – economies of scale still count for a lot – but it will be interesting to see where things stand a few years from now, once PCIe 6.0 is in the middle of its ramp-up.

Meanwhile, unlike the MLC NAND of our earlier analogy, the additional signal states make a PAM4 signal inherently more fragile than an NRZ signal. And this means that, for the first time in PCIe's history, the standard is adding forward error correction (FEC). True to its name, forward error correction is a means of correcting errors in a transmission by sending a constant stream of error-correction data along with it, and it is already commonly used in situations where data integrity is critical and there is no time for retransmission (such as DisplayPort 1.4 w/DSC). While FEC has not been necessary for PCIe until now, PAM4's fragility changes that. The inclusion of FEC should not make a noticeable difference to end users, but for PCI-SIG it was one more design requirement to contend with. In particular, the group needed to ensure that its FEC implementation was low-latency while still being robust enough, as PCIe users will not tolerate a meaningful increase in PCIe latency.

It is worth noting that FEC is also paired with cyclic redundancy checking (CRC) as a final layer of defense against errors. Packets that still fail the CRC after FEC – and are therefore still corrupt – will trigger a full retransmission of the packet.
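
Conceptually, the receive path layers the two checks: FEC attempts an in-place correction, and the CRC then acts as the final arbiter of whether a packet is clean or must be retransmitted. The toy Python sketch below shows that flow; zlib's CRC-32 and a pass-through fec_correct function stand in for the spec's actual codes:

```python
import zlib

def crc(payload: bytes) -> bytes:
    # Stand-in check value; the real spec defines its own CRC, not CRC-32.
    return zlib.crc32(payload).to_bytes(4, "big")

def receive(payload: bytes, check: bytes, fec_correct):
    """Layered error handling: FEC corrects first, CRC is the final arbiter.
    Returns the payload, or None to signal that a retransmission is needed."""
    corrected = fec_correct(payload)   # FEC fixes what it can -- no retransmit yet
    if crc(corrected) == check:
        return corrected               # clean (or successfully corrected) packet
    return None                        # residual corruption -> full retransmission

good = b"example payload!"
bad = bytes([good[0] ^ 0x01]) + good[1:]   # flip one bit in transit
identity_fec = lambda p: p                  # an FEC that corrects nothing
assert receive(good, crc(good), identity_fec) == good
assert receive(bad, crc(good), identity_fec) is None  # CRC catches the error
```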

The upshot of switching to PAM4 is that, by increasing the amount of data transmitted without increasing the frequency, the signal-loss requirements do not go up. PCIe 6.0 will have the same 36 dB loss budget as PCIe 5.0, meaning that while trace lengths are not officially defined by the standard, a PCIe 6.0 link should be able to reach just as far as a PCIe 5.0 link. Which, coming from PCIe 5.0, is no doubt a relief to vendors and engineers alike.

In addition to PAM4 and FEC, the final major technological addition to PCIe 6.0 is its Flow Control Unit (FLIT) encoding method. Not to be confused with PAM4, which operates at the physical layer, FLIT encoding is employed at the logical layer to break data up into fixed-size packets. Moving the logical layer to fixed-size packets is what allows PCIe 6.0 to implement FEC and other error-correction methods, as these techniques require fixed-size packets to work with. FLIT encoding is not a new technology in itself; like PAM4, it is borrowed from the world of high-speed networking, where it is already in use. And, according to PCI-SIG, it is one of the most important pieces of the specification, as it is key to enabling PCIe to (continue to) operate at low latencies with FEC in place, as well as to keeping overhead to a minimum. Altogether, PCI-SIG considers PCIe 6.0's encoding to be a 1b/1b method, as there is no overhead in the data encoding itself (there is, however, overhead in the form of the additional FEC/CRC data carried in each packet).
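
As an illustration of what fixed-size framing buys, the sketch below chunks a transaction-layer byte stream into flits. The 256-byte layout (236 bytes of TLP data plus fixed DLP, CRC, and FEC fields) follows the commonly described PCIe 6.0 flit format, but compute_crc and compute_fec here are placeholders rather than the spec's real codes:

```python
FLIT_SIZE = 256   # fixed flit size in bytes; fixed sizing is what enables FEC
TLP_BYTES = 236   # transaction-layer data carried per flit
DLP_BYTES = 6     # data-link-layer payload field
CRC_BYTES = 8
FEC_BYTES = 6     # 236 + 6 + 8 + 6 = 256

def compute_crc(body: bytes) -> bytes:
    return bytes(CRC_BYTES)   # placeholder; the spec defines the real CRC

def compute_fec(body: bytes) -> bytes:
    return bytes(FEC_BYTES)   # placeholder; the spec defines the real FEC code

def build_flits(tlp_stream: bytes) -> list:
    """Chunk a TLP byte stream into fixed-size flits, zero-padding the tail."""
    flits = []
    for off in range(0, len(tlp_stream), TLP_BYTES):
        data = tlp_stream[off:off + TLP_BYTES].ljust(TLP_BYTES, b"\x00")
        body = data + bytes(DLP_BYTES)                # TLP data + DLP field
        flit = body + compute_crc(body) + compute_fec(body)
        assert len(flit) == FLIT_SIZE                 # every flit is 256 bytes
        flits.append(flit)
    return flits

print(len(build_flits(bytes(1000))))  # 1000 bytes of TLP data -> 5 flits
```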

As this is more of an enabling feature than a user-facing one, FLIT encoding should be fairly invisible to users. However, it is notable that PCI-SIG considered it important/useful enough that FLIT encoding is carried back to lower link speeds as well: once FLIT mode is enabled on a link, the link remains in FLIT mode at all times, even if the link speed is later reduced. So, for example, if a PCIe 6.0 graphics card drops from 64 GT/s (PCIe 6.0 speeds) to 2.5 GT/s (PCIe 1.x speeds) to save power at idle, the link itself still operates in FLIT mode rather than falling back to a full PCIe 1.x-style connection. This both simplifies the design of the specification (no need to renegotiate anything other than the link speed) and allows all link speeds to benefit from FLIT's low latency and low overhead.
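
One way to picture this "sticky" behavior is a link object whose FLIT flag survives speed changes; the Python sketch below is our own hypothetical model, not anything defined by the spec:

```python
from dataclasses import dataclass

@dataclass
class Link:
    speed_gts: float
    flit_mode: bool = False

    def train(self, negotiated_gts: float):
        """Initial link training: FLIT mode is switched on once, at negotiation."""
        self.speed_gts = negotiated_gts
        if negotiated_gts >= 64.0:       # PCIe 6.0 data rate
            self.flit_mode = True

    def downshift(self, new_gts: float):
        """Power-saving speed change: the data rate drops, but FLIT mode is sticky."""
        self.speed_gts = new_gts         # e.g. 2.5 GT/s (PCIe 1.x speeds) at idle
        # self.flit_mode is deliberately left unchanged

link = Link(speed_gts=0.0)
link.train(64.0)
link.downshift(2.5)
assert link.flit_mode  # still framing data in flits, even at PCIe 1.x speeds
```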

As always, PCIe 6.0 is backwards compatible with earlier specifications; older devices will work in newer hosts, and newer devices will work in older hosts. Likewise, the current connector shapes remain supported, including the ubiquitous PCIe card edge connector. So while support for the specification will need to be built into newer generations of devices, it should be a relatively straightforward transition, just as with previous generations of the technology.

Unfortunately, PCI-SIG has not been able to give us much guidance on what this means for implementations, particularly in consumer systems – the group only sets the standard, and it is up to hardware manufacturers to implement it. Because the switch to PAM4 means the amount of signal loss for a given trace length has not gone up, placing PCIe 6.0 slots should conceptually be about as flexible as placing PCIe 5.0 slots. That said, we will have to wait and see what AMD and Intel come up with over the next few years. Being able to do something, and being able to do it within a consumer hardware budget, are not always the same thing.

Wrapping things up, with the PCIe 6.0 specification finally complete, PCI-SIG tells us that, based on past adoption timelines, we should start seeing PCIe 6.0-compliant hardware hit the market in 12 to 18 months. In practice, that means we should see the first server gear next year, with consumer gear following perhaps another year or two after that.


