Theoretically Achievable Cycle Time

The performance of the systems has been the subject of intense debate, which has focused on the theoretical cycle times achievable by Industrial Ethernet systems. The briefest possible cycle time in theory is calculated as follows:

Source: frame makeup as defined in IEEE 802.3 (the interframe gap of 0.96 µs must be added on top of the 5.1 µs cited above).

Hence, if a master sends out a frame addressed to itself that does not pass through any other nodes, that frame will be available to the master again after 122 microseconds have elapsed (in the case of a single, maximum-length Ethernet frame).
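These figures follow directly from the IEEE 802.3 frame parameters at 100 Mbit/s. A minimal sketch, assuming Fast Ethernet (the helper name is illustrative):

```python
# Wire times at 100 Mbit/s (Fast Ethernet), per the IEEE 802.3 frame makeup.
BIT_US = 0.01  # one bit time at 100 Mbit/s, in microseconds

def wire_time_us(frame_bytes, preamble=False):
    """Transmission time of one frame on the wire, in microseconds.
    preamble=True adds the 8-byte preamble + start-of-frame delimiter."""
    return (frame_bytes + (8 if preamble else 0)) * 8 * BIT_US

IFG_US = 96 * BIT_US  # mandatory interframe gap: 0.96 us

print(wire_time_us(64))                   # minimum frame (64 bytes): ~5.12 us
print(wire_time_us(1518, preamble=True))  # maximum frame incl. preamble: ~122 us
```

With the interframe gap added, roughly 123 µs elapse before the next maximum-length frame may start.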
In theory, parts of a frame could be processed as soon as they are received. However, the CRC bytes that confirm the validity of the received data arrive last, at the end of a frame. This scenario also does not factor in delays caused by PHYs, cables, and Ethernet ports, times for internal data transfer in the master, etc. Moreover, once a signal leaves the master, the time it takes to travel along network lines (5 ns/m) and the processing time inside a slave have to be taken into account as well.
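These additional delays can be estimated with a simple back-of-the-envelope model. The per-slave forwarding delay below is an assumed placeholder, not a measured value:

```python
PROP_NS_PER_M = 5  # signal propagation along network lines, ~5 ns/m

def line_delay_us(cable_lengths_m, per_slave_us):
    """One-way delay along a daisy chain: cable propagation time plus
    the assumed processing time inside each slave on the way."""
    prop_us = sum(cable_lengths_m) * PROP_NS_PER_M / 1000.0
    return prop_us + len(cable_lengths_m) * per_slave_us

# Example: 10 hops of 5 m each, assuming 1 us of processing per slave
print(line_delay_us([5.0] * 10, per_slave_us=1.0))  # 10.25 us
```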
Prospective extensions of a system and possible future requirements need to be carefully considered when selecting either a centralized or a decentralized architecture. One advantage of decentralized processing of the various control loops is that nodes can be added without any noticeable effect on the basic cycle time, i.e. no fundamental changes to the overall concept are required. Moreover, additional functionality such as condition monitoring or integrated safety technology has less impact on the control concept than in centralized architectures, which depend significantly on a low volume of data.
To select a solution that will also remain viable in the future, preference should be given, wherever possible, to decentralized handling of control loops for cycle times below 500 microseconds, especially in drive applications.

Communication Architecture of the Systems

Direct Cross-Traffic
Direct cross-traffic provides crucial benefits particularly in case of very demanding real-time requirements: for fast drive controllers, axes can be synchronized easily and with extreme precision, since all position values can be distributed directly without having to go through a master. That results in lower network load and also ensures that data (e.g. actual angle positions of axes) is available to all relevant nodes within the current cycle. If data needs to pass through a master first, it is not only delayed by one cycle, but overall data traffic on the network is increased as well.

With POWERLINK and SERCOS III, direct cross-traffic is a feature even for modules that only have slave functionality, while EtherNet/IP requires a module with scanner functionality.

Heavy Data Traffic
In applications involving a large volume of process data, the time required for passing through the nodes greatly impacts the overall cycle time. Data prioritization, on the other hand, enables lower cycle times. Systems that support prioritization mechanisms allow for reading high-priority data once every cycle and polling for data with a lower priority only every n-th cycle.
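Such a prioritization scheme can be sketched as a simple cycle plan. Everything below is illustrative; the names and the interleaving rule are assumptions, not taken from any of the protocol specifications:

```python
def build_poll_plan(high, low, n, cycles):
    """Per cycle, list the data items to read: high-priority items every
    cycle, each low-priority item only every n-th cycle (multiplexed)."""
    plan = []
    for cycle in range(cycles):
        items = list(high)  # high-priority data: read once every cycle
        # low-priority item i is polled only in cycles where cycle % n == i % n
        items += [item for i, item in enumerate(low) if cycle % n == i % n]
        plan.append(items)
    return plan

plan = build_poll_plan(high=["axis_pos"], low=["temp", "diag", "cond"],
                       n=3, cycles=3)
# cycle 0 reads axis_pos + temp, cycle 1 axis_pos + diag, cycle 2 axis_pos + cond
```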

For POWERLINK, EtherNet/IP, and PROFINET, variable cycle times have been firmly established in the protocols' specifications. SERCOS III has only recently added this feature. For EtherCAT, solutions for this requirement can be implemented as part of a specific application.

Network Load for Safety Communication
Safety over Ethernet is based on a cyclic exchange of protected data between safety nodes (emergency stop switches, drives with Safety controllers). The safeguard procedures in this process involve data duplication and wrapping data in safe “containers”. This increases data rates on the network. Solutions using the summation frame method will see the frame count go up, whereas the single frame method will increase the volume of data in each of the frames that are due to be sent anyway. All in all, the theoretically superior performance of the summation frame method is neutralized.
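The trade-off can be made concrete with a rough load model. All figures here are assumptions chosen for illustration, not values from any safety protocol specification:

```python
PER_FRAME_OVERHEAD = 38  # bytes per extra frame: preamble, header, CRC, interframe gap

def summation_extra_load(safety_payload_bytes):
    """Summation frame method: the safety data travels in an additional
    frame per cycle, paying the full per-frame overhead on top."""
    return PER_FRAME_OVERHEAD + safety_payload_bytes

def single_frame_extra_load(safety_nodes, container_overhead, payload_per_node):
    """Single frame method: each node's safe 'container' (payload plus
    protective wrapping) is added to a frame that is sent anyway."""
    return safety_nodes * (container_overhead + payload_per_node)

# With, say, 5 safety nodes at 8 payload + 6 wrapper bytes each, the two
# methods add a comparable number of bytes per cycle to the network.
```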

Actual Cycle Time
In solutions using the summation frame method, data must pass twice through each controller. If a signal has to go through many nodes, total transfer time rises considerably along the way. Raw performance data cited by the organizations supporting such solutions has to be adjusted to account for this effect. Another aspect to consider is that performance depends on implementation specifics, e.g. task classes, in the actual control systems used for an application.
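The double pass-through effect can be quantified with a simple model; the per-node delay and frame time below are assumed example values:

```python
def summation_transit_us(nodes, per_node_us, frame_us):
    """Total transfer time for a summation frame that traverses the
    chain and returns: wire time plus two passes through every node."""
    return frame_us + 2 * nodes * per_node_us

# Example: 50 nodes at an assumed 1 us forwarding delay each add 100 us
# on top of the frame's own wire time.
print(summation_transit_us(50, per_node_us=1.0, frame_us=122.08))
```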

It is crucial for control quality on a network to ensure minimal jitter (clock deviation) and to determine signal delays very precisely. To this end, network nodes must be synchronized as precisely as possible. Competing Ethernet variants employ different mechanisms to achieve that goal. While EtherCAT uses the principle of distributed clocks, implemented by a proprietary algorithm within the ESC (EtherCAT Slave Controller), synchronization is accomplished via a simple sync signal (SoC, Start of Cycle) in POWERLINK networks.

EtherCAT, POWERLINK, and SERCOS III give users a system with almost no jitter (< 100 ns) at all times. On EtherNet/IP networks, jitter can be considerably reduced with special IEEE 1588 extensions in all components. Reduced jitter can also be achieved in PROFINET IRT applications.

Performance Contest
In practice, comparing system performance proves to be a difficult endeavor due to the specific characteristics of the various systems: EtherNet/IP and PROFINET RT are excluded from the start because these systems are only suitable for soft real-time requirements. PROFINET IRT poses problems due to the indispensable switches, which lead to a different application architecture that makes a direct comparison of measurements complicated. The values below were determined based on published calculation schemes.

Test scenarios were
1. a small machine comprising a master and 33 I/O modules (64 analog and 136 digital channels);
2. an I/O system with a master and twelve Ethernet slaves with 33 modules each (in total, 2000 digital and 500 analog channels were taken into account in this application);
3. a Motion Control network with 24 axes and one I/O station with 110 digital and 30 analog I/Os.

In practice, POWERLINK is faster than EtherCAT in most applications. EtherCAT is optimized for applications with only a very low network traffic volume. In systems with a heavier data load, cycle times rise disproportionately in EtherCAT environments. Where decentralized architectures (e.g. for decentralized Motion Control) are implemented, EtherCAT suffers greatly from the lack of direct cross-traffic (in both directions), which sharply reduces the theoretically achievable performance. Direct I/O integration with EtherCAT also results in lower sampling rates (I/O system), since the time the signal takes to pass through the I/O directly affects the achievable cycle time. For POWERLINK and SERCOS III, there are no such effects. The publication by Prytz (2008)* was used as a reference for the calculations concerning EtherCAT. Delays for signals passing through the EtherCAT ASIC were verified again by measurements. For POWERLINK, applications with actual products were set up for practical measurements, reconfirming the cited figures beyond doubt. No tests and calculations were conducted for SERCOS III. However, SERCOS III can be expected to provide a performance level similar to POWERLINK, making it faster than EtherCAT in many applications.

*G. Prytz, "A Performance Analysis of EtherCAT and PROFINET IRT," ETFA Conference 2008. Referenced on the EtherCAT Technology Group's website, www. Last accessed: 14 September 2011.