Ethernet is a family of frame-based computer networking technologies for local area networks (LANs). The name comes from the physical concept of the ether. It defines a number of wiring and signaling standards for the physical layer, a means of network access at the Media Access Control (MAC)/data link layer, and a common addressing format.
Ethernet is standardized as IEEE 802.3. The combination of the twisted pair versions of Ethernet for connecting end systems to the network, along with the fiber optic versions for site backbones, is the most widespread wired LAN technology. It has been in use from around 1980[1] to the present, largely replacing competing LAN standards such as token ring, FDDI, and ARCNET. In recent years, Wi-Fi, the wireless LAN standardized by IEEE 802.11, has become prevalent in home and small office networks and augments Ethernet in larger installations.
Ethernet was originally developed at Xerox PARC in 1973–1975. Robert Metcalfe and David Boggs wrote and presented their "Draft Ethernet Overview" before March 1974. In March 1974, R.Z. Bachrach wrote a memo to Metcalfe and Boggs and their management, stating that "technically or conceptually there is nothing new in your proposal" and that "analysis would show that your system would be a failure." This analysis was flawed in that it ignored the "channel capture effect", though this was not understood until 1994. In 1975, Xerox filed a patent application listing Metcalfe and Boggs, plus Chuck Thacker and Butler Lampson, as inventors (U.S. Patent 4,063,220 : Multipoint data communication system with collision detection). In 1976, after the system was deployed at PARC, Metcalfe and Boggs published a seminal paper.
The experimental Ethernet described in that paper ran at 3 Mbit/s, and had 8-bit destination and source address fields, so Ethernet addresses were not the global addresses they are today. By software convention, the 16 bits after the destination and source address fields were a packet type field, but, as the paper says, "different protocols use disjoint sets of packet types", so those were packet types within a given protocol, rather than the packet type in current Ethernet which specifies the protocol being used.
Metcalfe left Xerox in 1979 to promote the use of personal computers and local area networks (LANs), forming 3Com. He convinced DEC, Intel, and Xerox to work together to promote Ethernet as a standard, the so-called "DIX" standard, for "Digital/Intel/Xerox"; it standardized 10 Mbit/s Ethernet, with 48-bit destination and source addresses and a global 16-bit type field. The standard was first published on September 30, 1980. It competed with two largely proprietary systems, token ring and ARCNET, but those soon found themselves buried under a tidal wave of Ethernet products. In the process, 3Com became a major company.
Twisted-pair Ethernet systems have been developed since the mid-1980s, beginning with StarLAN but becoming widely known with 10BASE-T. These systems replaced the coaxial cable on which early Ethernets were deployed with a system of hubs linked with unshielded twisted pair (UTP); the CSMA/CD scheme was ultimately replaced by switched, full-duplex operation offering higher performance.
Ethernet was originally based on the idea of computers communicating over a shared coaxial cable acting as a broadcast transmission medium. The methods used show some similarities to radio systems, although there are fundamental differences, such as the fact that it is much easier to detect collisions in a cable broadcast system than a radio broadcast. The common cable providing the communication channel was likened to the ether and it was from this reference that the name "Ethernet" was derived.
From this early and comparatively simple concept, Ethernet evolved into the complex networking technology that today underlies most LANs. The coaxial cable was replaced with point-to-point links connected by Ethernet hubs and/or switches to reduce installation costs, increase reliability, and enable point-to-point management and troubleshooting. StarLAN was the first step in the evolution of Ethernet from a coaxial cable bus to a hub-managed, twisted-pair network. The advent of twisted-pair wiring dramatically lowered installation costs relative to competing technologies, including the older Ethernet technologies.
Above the physical layer, Ethernet stations communicate by sending each other data packets, blocks of data that are individually sent and delivered. As with other IEEE 802 LANs, each Ethernet station is given a single 48-bit MAC address, which is used to specify both the destination and the source of each data packet. Network interface cards (NICs) or chips normally do not accept packets addressed to other Ethernet stations. Adapters generally come programmed with a globally unique address, but this can be overridden, either to avoid an address change when an adapter is replaced, or to use locally administered addresses.
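As a concrete illustration of the addressing just described, the following Python sketch parses a 48-bit MAC address, tests the individual/group (multicast) bit and the locally administered bit, and mimics the basic destination filter of a non-promiscuous adapter. The addresses, function names, and the blanket acceptance of all multicast frames are simplifying assumptions made only for this example.

```python
# Sketch: interpreting a 48-bit Ethernet MAC address and deciding whether a
# (non-promiscuous) station would accept a frame addressed to it.
# The addresses below are illustrative, not taken from any real adapter.

def parse_mac(text: str) -> bytes:
    """Convert 'aa:bb:cc:dd:ee:ff' into 6 raw bytes."""
    return bytes(int(part, 16) for part in text.split(":"))

def is_multicast(mac: bytes) -> bool:
    # Individual/group bit: least significant bit of the first octet.
    return bool(mac[0] & 0x01)

def is_locally_administered(mac: bytes) -> bool:
    # Universal/local bit: second least significant bit of the first octet.
    return bool(mac[0] & 0x02)

BROADCAST = bytes([0xFF] * 6)

def station_accepts(frame_dest: bytes, own_mac: bytes, promiscuous: bool = False) -> bool:
    """Mimic the destination filter of a network interface.

    Real adapters typically accept only the multicast groups they have been
    programmed for; accepting all multicast here is a simplification."""
    if promiscuous:
        return True
    return frame_dest == own_mac or frame_dest == BROADCAST or is_multicast(frame_dest)

if __name__ == "__main__":
    own = parse_mac("02:00:5e:10:00:01")        # locally administered example address
    print(is_locally_administered(own))         # True
    print(station_accepts(BROADCAST, own))      # True: broadcast is always accepted
    print(station_accepts(parse_mac("00:11:22:33:44:55"), own))  # False
```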
Despite the significant changes in Ethernet from a thick coaxial cable bus running at 10 Mbit/s to point-to-point links running at 1 Gbit/s and beyond, all generations of Ethernet (excluding early experimental versions) share the same frame formats (and hence the same interface for higher layers), and can be readily interconnected.
Due to the ubiquity of Ethernet, the ever-decreasing cost of the hardware needed to support it, and the reduced panel space needed by twisted pair Ethernet, most manufacturers now build the functionality of an Ethernet card directly into PC motherboards, obviating the need for installation of a separate network card.
CSMA/CD shared medium Ethernet
Ethernet originally used a shared coaxial cable (the shared medium) winding around a building or campus to every attached machine. A scheme known as carrier sense multiple access with collision detection (CSMA/CD) governed the way the computers shared the channel. This scheme was simpler than the competing token ring or token bus technologies. When a computer wanted to send some information, it used the following algorithm:
Main procedure
1. Frame ready for transmission.
2. Is medium idle? If not, wait until it becomes idle, then wait the interframe gap period (9.6 µs in 10 Mbit/s Ethernet).
3. Start transmitting.
4. Did a collision occur? If so, go to collision detected procedure.
5. Reset retransmission counters and end frame transmission.
Collision detected procedure
1. Continue transmission (sending a jam signal) until the minimum packet time is reached, to ensure that all receivers detect the collision.
2. Increment retransmission counter.
3. Was the maximum number of transmission attempts reached? If so, abort transmission.
4. Calculate and wait random backoff period based on number of collisions.
5. Re-enter main procedure at stage 1.
This can be likened to what happens at a dinner party, where all the guests talk to each other through a common medium (the air). Before speaking, each guest politely waits for the current speaker to finish. If two guests start speaking at the same time, both stop and wait for short, random periods of time (in Ethernet, this time is generally measured in microseconds). The hope is that by each choosing a random period of time, both guests will not choose the same time to try to speak again, thus avoiding another collision. Exponentially increasing back-off times (determined using the truncated binary exponential backoff algorithm) are used when there is more than one failed attempt to transmit.
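The procedure above can also be sketched in code. The following Python outline, which assumes a hypothetical `medium` object standing in for the shared cable, models carrier sense, collision detection, jamming, and truncated binary exponential backoff with the usual 10 Mbit/s timing constants; it is a simplified model for illustration, not an implementation of the standard.

```python
import random

SLOT_TIME_US = 51.2       # slot time for 10 Mbit/s Ethernet (512 bit times)
INTERFRAME_GAP_US = 9.6   # interframe gap for 10 Mbit/s Ethernet
MAX_ATTEMPTS = 16         # attempts before the frame is dropped
BACKOFF_LIMIT = 10        # exponent cap for truncated binary exponential backoff

def transmit(frame, medium):
    """CSMA/CD main procedure, following the steps listed above.

    `medium` is a hypothetical object with is_idle(), send(), collision_detected(),
    jam() and wait_us() methods standing in for the shared cable."""
    attempts = 0
    while True:
        # 1-2. Carrier sense: wait for the medium to go idle, then the interframe gap.
        while not medium.is_idle():
            pass
        medium.wait_us(INTERFRAME_GAP_US)

        # 3. Start transmitting.
        medium.send(frame)

        # 4. Collision detection.
        if not medium.collision_detected():
            return True          # 5. Success: reset counters, transmission complete.

        # Collision detected procedure.
        medium.jam()             # keep transmitting so every station sees the collision
        attempts += 1
        if attempts >= MAX_ATTEMPTS:
            return False         # too many collisions: abort

        # Truncated binary exponential backoff: wait a random number of slot
        # times in [0, 2^k - 1], where k is capped at BACKOFF_LIMIT.
        k = min(attempts, BACKOFF_LIMIT)
        medium.wait_us(random.randint(0, 2 ** k - 1) * SLOT_TIME_US)
```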
Computers were connected to an Attachment Unit Interface (AUI) transceiver, which was in turn connected to the cable (later, with thin Ethernet, the transceiver was integrated into the network adapter). While a simple passive wire was highly reliable for small Ethernets, it was not reliable for large extended networks, where damage to the wire in a single place, or a single bad connector, could make the whole Ethernet segment unusable. Multipoint systems are also prone to strange failure modes when an electrical discontinuity reflects the signal in such a manner that some nodes work properly while others work slowly because of excessive retries, or do not work at all; these failures could be much more painful to diagnose than a complete failure of the segment. Debugging them often involved several people crawling around wiggling connectors while others watched the displays of computers running a ping command and shouted out reports as performance changed.
Since all communications happen on the same wire, any information sent by one computer is received by all, even if that information is intended for just one destination. The network interface card interrupts the CPU only when applicable packets are received: the card ignores information not addressed to it unless it is put into "promiscuous mode". This "one speaks, all listen" property is a security weakness of shared-medium Ethernet, since a node on an Ethernet network can eavesdrop on all traffic on the wire if it so chooses. Use of a single cable also means that the bandwidth is shared, so that network traffic can slow to a crawl when, for example, the network and nodes restart after a power failure.
For signal degradation and timing reasons, coaxial Ethernet segments had a restricted size which depended on the medium used. For example, 10BASE5 coax cables had a maximum length of 500 meters (1,640 feet). Also, as was the case with most other high-speed buses, Ethernet segments had to be terminated with a resistor at each end. For coaxial-cable-based Ethernet, each end of the cable had a 50-ohm resistor attached. Typically this resistor was built into a male BNC or N connector and attached to the last device on the bus, or, if vampire taps were in use, to the end of the cable just past the last device. If termination was not done, or if there was a break in the cable, the AC signal on the bus was reflected, rather than dissipated, when it reached the end. This reflected signal was indistinguishable from a collision, and so no communication could take place.
A greater length could be obtained by an Ethernet repeater, which took the signal from one Ethernet cable and repeated it onto another cable. If a collision was detected, the repeater transmitted a jam signal onto all ports to ensure collision detection. Repeaters could be used to connect segments such that there were up to five Ethernet segments between any two hosts, three of which could have attached devices. Repeaters could detect an improperly terminated link from the continuous collisions and stop forwarding data from it. Hence they alleviated the problem of cable breakages: when an Ethernet coax segment broke, all devices on that segment were unable to communicate, but repeaters allowed the other segments to continue working. Depending on which segment was broken and the layout of the network, however, the resulting partition could leave other segments unable to reach important servers and thus effectively useless.
People recognized the advantages of cabling in a star topology, primarily that only faults at the star point will result in a badly partitioned network, and network vendors started creating repeaters having multiple ports, thus reducing the number of repeaters required at the star point. Multiport Ethernet repeaters became known as "Ethernet hubs". Network vendors such as DEC and SynOptics sold hubs that connected many 10BASE2 thin coaxial segments. There were also "multi-port transceivers" or "fan-outs". These could be connected to each other and/or a coax backbone. The best-known early example was DEC's DELNI. These devices allowed multiple hosts with AUI connections to share a single transceiver. They also allowed creation of a small standalone Ethernet segment without using a coaxial cable.
Ethernet on unshielded twisted-pair cables (UTP), beginning with StarLAN and continuing with 10BASE-T, was designed for point-to-point links only and all termination was built into the device. This changed hubs from a specialist device used at the center of large networks to a device that every twisted pair-based network with more than two machines had to use. The tree structure that resulted from this made Ethernet networks more reliable by preventing faults with (but not deliberate misbehavior of) one peer or its associated cable from affecting other devices on the network, although a failure of a hub or an inter-hub link could still affect lots of users. Also, since twisted pair Ethernet is point-to-point and terminated inside the hardware, the total empty panel space required around a port is much reduced, making it easier to design hubs with lots of ports and to integrate Ethernet onto computer motherboards.
Despite the physical star topology, hubbed Ethernet networks still use half-duplex and CSMA/CD, with only minimal activity by the hub, primarily the Collision Enforcement signal, in dealing with packet collisions. Every packet is sent to every port on the hub, so bandwidth and security problems are not addressed. The total throughput of the hub is limited to that of a single link, and all links must operate at the same speed.
Collisions reduce throughput by their very nature. In the worst case, when many hosts with long cables attempt to transmit many short frames, excessive collisions can reduce throughput dramatically. However, a Xerox report in 1980 summarized the results of having 20 fast nodes attempting to transmit packets of various sizes as quickly as possible on the same Ethernet segment.[4] The results showed that, even for the smallest Ethernet frames (64 bytes), 90% throughput on the LAN was the norm. This is in comparison with token-passing LANs (token ring, token bus), all of which suffer throughput degradation as each new node comes into the LAN, due to token waits.
This report was wildly controversial, as modeling showed that collision-based networks became unstable under loads as low as 40% of nominal capacity. Many early researchers failed to understand the subtleties of the CSMA/CD protocol and how important it was to get the details right, and were really modeling somewhat different networks (usually not as good as real Ethernet).
While repeaters could isolate some aspects of Ethernet segments, such as cable breakages, they still forwarded all traffic to all Ethernet devices. This created practical limits on how many machines could communicate on an Ethernet network. Also, as the entire network was one collision domain and all hosts had to be able to detect collisions anywhere on the network, the number of repeaters between the farthest nodes was limited. Finally, segments joined by repeaters all had to operate at the same speed, making phased-in upgrades impossible.
To alleviate these problems, bridging was created to communicate at the data link layer while isolating the physical layer. With bridging, only well-formed packets are forwarded from one Ethernet segment to another; collisions and packet errors are isolated. Bridges learn where devices are, by watching MAC addresses, and do not forward packets across segments when they know the destination address is not located in that direction.
Prior to discovery of network devices on the different segments, Ethernet bridges and switches work somewhat like Ethernet hubs, passing all traffic between segments. However, as the switch discovers the addresses associated with each port, it forwards network traffic only to the necessary segments, improving overall performance. Broadcast traffic is still forwarded to all network segments. Bridges also overcame the limits on total segments between two hosts and allowed the mixing of speeds, both of which became very important with the introduction of Fast Ethernet.
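A minimal sketch of this learning-and-forwarding behavior, with a frame representation and port numbering chosen only for this example, might look as follows in Python:

```python
class LearningBridge:
    """Toy model of the bridge/switch behaviour described above: learn source
    addresses per port, forward known unicast only to the learned port, and
    flood unknown unicast and broadcast to every other port."""

    BROADCAST = "ff:ff:ff:ff:ff:ff"

    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.mac_table: dict[str, int] = {}   # MAC address -> port number

    def receive(self, in_port: int, src: str, dst: str) -> list[int]:
        """Return the list of ports the frame is forwarded to."""
        # Learn (or refresh) which port the source address lives on.
        self.mac_table[src] = in_port

        if dst != self.BROADCAST and dst in self.mac_table:
            out = self.mac_table[dst]
            return [] if out == in_port else [out]   # never send back out the same port

        # Unknown destination or broadcast: flood to all other ports, like a hub.
        return [p for p in range(self.num_ports) if p != in_port]

if __name__ == "__main__":
    bridge = LearningBridge(num_ports=4)
    print(bridge.receive(0, "aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb"))  # unknown: flood [1, 2, 3]
    print(bridge.receive(1, "bb:bb:bb:bb:bb:bb", "aa:aa:aa:aa:aa:aa"))  # learned: [0]
```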
Early bridges examined each packet one by one using software on a CPU, and some of them were significantly slower than hubs (multi-port repeaters) at forwarding traffic, especially when handling many ports at the same time. In 1989 the networking company Kalpana introduced their EtherSwitch, the first Ethernet switch. An Ethernet switch does bridging in hardware, allowing it to forward packets at full wire speed. It is important to remember that the term switch was invented by device manufacturers and does not appear in the 802.3 standard. Functionally, the two terms are interchangeable.
Since packets are typically only delivered to the port they are intended for, traffic on a switched Ethernet is slightly less public than on shared-medium Ethernet. Despite this, switched Ethernet should still be regarded as an insecure network technology, because it is easy to subvert switched Ethernet systems by means such as ARP spoofing and MAC flooding. The bandwidth advantages, the slightly better isolation of devices from each other, the ability to easily mix different speeds of devices and the elimination of the chaining limits inherent in non-switched Ethernet have made switched Ethernet the dominant network technology.
When a twisted pair or fiber link segment is used and neither end is connected to a hub, full-duplex Ethernet becomes possible over that segment. In full duplex mode both devices can transmit and receive to/from each other at the same time, and there is no collision domain. This doubles the aggregate bandwidth of the link and is sometimes advertised as double the link speed (e.g. 200 Mbit/s) to account for this. However, this is misleading as performance will only double if traffic patterns are symmetrical (which in reality they rarely are). The elimination of the collision domain also means that all the link's bandwidth can be used and that segment length is not limited by the need for correct collision detection (this is most significant with some of the fiber variants of Ethernet).
In the early days of Fast Ethernet, Ethernet switches were relatively expensive devices. However, hubs suffered from the problem that if any 10BASE-T devices were connected, the whole system had to run at 10 Mbit/s. Therefore, a compromise between a hub and a switch appeared, known as a dual-speed hub. These devices consist of an internal two-port switch dividing the 10BASE-T (10 Mbit/s) and 100BASE-T (100 Mbit/s) segments. A dual-speed hub typically has more than two physical ports; when a network device becomes active on any of them, the hub attaches it to either the 10BASE-T segment or the 100BASE-T segment, as appropriate. This avoided the need for an all-or-nothing migration from 10BASE-T to 100BASE-T networks. These devices are considered hubs because traffic between devices connected at the same speed is not switched.
Simple switched Ethernet networks, while an improvement over hub based Ethernet, suffer from a number of issues:
* They suffer from single points of failure. If any link fails, some devices will be unable to communicate with others, and if the failed link is in a central location, many users can be cut off from the resources they require.
* It is possible to trick switches or hosts into sending data to a machine even if it is not intended for that machine, as indicated above.
* Large amounts of broadcast traffic, whether malicious, accidental, or simply a side effect of network size, can flood slower links and/or systems.
o It is possible for any host to flood the network with broadcast traffic, creating a denial-of-service attack against any hosts that run at the same or a lower speed than the attacking device.
o As the network grows, normal broadcast traffic takes up an ever greater amount of bandwidth.
o If switches are not multicast aware, multicast traffic will end up treated like broadcast traffic due to being directed at a MAC with no associated port.
o If switches discover more MAC addresses than they can store (either through network size or through an attack), some addresses must inevitably be dropped, and traffic to those addresses is then treated the same as traffic to unknown addresses, that is, essentially as broadcast traffic (this issue is known as fail-open; see the sketch after this list).
* They suffer from bandwidth choke points where a lot of traffic is forced down a single link.
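To illustrate the fail-open effect mentioned in the list above, the following sketch models a bridge's address table with a deliberately tiny, assumed capacity; once it fills, new addresses are never learned, so traffic to them must be flooded.

```python
class BoundedMacTable:
    """Sketch of the fail-open behaviour: once the table is full, new source
    addresses are simply not learned, so frames to them are flooded like
    broadcasts. The capacity of 4 entries is only for illustration."""

    def __init__(self, capacity: int = 4):
        self.capacity = capacity
        self.table: dict[str, int] = {}

    def learn(self, mac: str, port: int) -> None:
        if mac in self.table or len(self.table) < self.capacity:
            self.table[mac] = port      # refresh, or add while space remains

    def lookup(self, mac: str):
        return self.table.get(mac)      # None means "unknown: flood to all ports"

if __name__ == "__main__":
    t = BoundedMacTable()
    for i in range(6):                  # e.g. an attacker sourcing many fake addresses
        t.learn(f"02:00:00:00:00:{i:02x}", port=1)
    print(len(t.table))                   # 4: later addresses were never learned
    print(t.lookup("02:00:00:00:00:05"))  # None -> that traffic will be flooded
```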
Some switches offer a variety of tools to combat these issues including:
* Spanning-tree protocol to maintain the active links of the network as a tree while allowing physical loops for redundancy.
* Various port protection features, as it is far more likely an attacker will be on an end system port than on a switch-switch link.
* VLANs to keep different classes of users separate while using the same physical infrastructure.
* Fast routing at higher levels to route between those VLANs.
* Link aggregation to add bandwidth to overloaded links and to provide some measure of redundancy, although the links won't protect against switch failure because they connect the same pair of switches.
Many different modes of operation (10BASE-T half duplex, 10BASE-T full duplex, 100BASE-TX half duplex, …) exist for Ethernet over twisted-pair cable using 8P8C modular connectors (not to be confused with FCC's RJ45), and most devices are capable of several of them. In 1995, a standard was released that allows two connected network interfaces to autonegotiate the best possible shared mode of operation. This works well when every device is set to autonegotiate. The autonegotiation standard contains a mechanism for detecting the speed, but not the duplex setting, of an Ethernet peer that does not use autonegotiation.
When two interfaces are connected and set to different duplex modes, the effect of the duplex mismatch is a network that works, but much more slowly than at its nominal speed. The primary rule for avoiding this is not to set one end of a connection to a forced full-duplex setting and the other end to autonegotiation.
Interoperability problems led network administrators to set the mode of operation of interfaces manually: a device that failed to autonegotiate had to be forced into one setting or the other. This often led to duplex mismatches. In particular, when one interface is set to autonegotiation and the other is forced to full duplex, autonegotiation fails, half duplex is assumed, and the two ends run in different modes. The full-duplex interface then transmits while it is receiving; the half-duplex interface, which is not prepared to receive while it is transmitting, signals a collision and halts its own transmission for an amount of time determined by the random backoff algorithm. When both interfaces try to transmit again they interfere again, and the backoff strategy may produce longer and longer waits before the next attempt; eventually a transmission succeeds, but this only causes the collisions to resume.
Because of the wait times, the effect of a duplex mismatch is a network that is not completely 'broken' but is incredibly slow.
The first Ethernet networks, 10BASE5, used thick yellow cable with vampire taps as a shared medium (using CSMA/CD). Later, 10BASE2 Ethernet used thinner coaxial cable (with BNC connectors) as the shared CSMA/CD medium. The later StarLAN 1BASE5 and 10BASE-T used twisted pair connected to Ethernet hubs with 8P8C modular connectors (not to be confused with FCC's RJ45).
Currently Ethernet has many varieties that vary both in speed and physical medium used. Perhaps the most common forms used are 10BASE-T, 100BASE-TX, and 1000BASE-T. All three use twisted-pair cables and 8P8C modular connectors (often called RJ45). They run at 10 Mbit/s, 100 Mbit/s, and 1 Gbit/s, respectively. However, each version has become steadily more selective about the cable it runs on, and some installers have avoided 1000BASE-T for everything except short connections to servers.
Fiber optic variants of Ethernet are commonly used in structured cabling applications. These variants have also seen substantial penetration in enterprise data center applications, but are rarely connected to end-user systems for cost and convenience reasons. Their advantages lie in performance, electrical isolation, and distance, up to tens of kilometers with some versions. Fiber versions of a new higher speed almost invariably come out before copper. 10 gigabit Ethernet is becoming more popular in both enterprise and carrier networks, with development starting on 40 Gbit/s and 100 Gbit/s Ethernet. Metcalfe now believes commercial applications using terabit Ethernet may occur by 2015, though he says existing Ethernet standards may have to be overthrown to reach terabit Ethernet.
A data packet on the wire is called a frame. A frame viewed on the actual physical wire would show Preamble and Start Frame Delimiter, in addition to the other data. These are required by all physical hardware. They are not displayed by packet sniffing software because these bits are removed by the Ethernet adapter before being passed on to the host (in contrast, it is often the device driver which removes the CRC32 (FCS) from the packets seen by the user).
The complete Ethernet frame, as transmitted, consists of the preamble, the start frame delimiter, the destination and source MAC addresses, the type/length field, the payload, and the frame check sequence (CRC32). In the IEEE 802.3 standard the bit patterns of the preamble and start frame delimiter are written as bit strings, with the first bit transmitted on the left (not as byte values, which in Ethernet are transmitted least significant bit first).
After a frame has been sent, transmitters are required to pause for the interframe gap (96 bit times) before transmitting the next frame: 9,600 ns at 10 Mbit/s, 960 ns at 100 Mbit/s, and 96 ns at 1000 Mbit/s.
10/100M transceiver chips (MII PHY) work with 4 bits (a nibble) at a time. Therefore, the preamble appears as 7 instances of 0101 + 0101, and the start frame delimiter as 0101 + 1101; 8-bit values are sent low nibble first, then high nibble. 1000M transceiver chips (GMII) work with 8 bits at a time, and the 10 Gbit/s (XGMII) PHY works with 32 bits at a time. Some implementations use larger jumbo frames.
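Putting the pieces of this section together, the sketch below assembles an Ethernet II frame as it would appear on the wire: preamble, start frame delimiter, addresses, type field, payload padded to the minimum frame size, and the CRC32 frame check sequence. It uses Python's zlib.crc32; the addresses and EtherType are illustrative, and in real systems the adapter hardware normally adds the preamble, SFD, and FCS itself.

```python
import struct
import zlib

PREAMBLE = bytes([0x55] * 7)   # seven octets of 10101010 (LSB transmitted first)
SFD      = bytes([0xD5])       # start frame delimiter, 10101011 on the wire

def build_frame(dst: bytes, src: bytes, ethertype: int, payload: bytes) -> bytes:
    """Assemble an Ethernet II frame as it appears on the wire.

    This only illustrates the layout described above; the hardware normally
    handles the preamble, SFD and FCS."""
    if len(payload) < 46:                       # pad to the 64-byte minimum frame size
        payload = payload + bytes(46 - len(payload))
    header = dst + src + struct.pack("!H", ethertype)
    # CRC32 over header + payload, appended in the byte order seen in captures.
    fcs = struct.pack("<I", zlib.crc32(header + payload))
    return PREAMBLE + SFD + header + payload + fcs

if __name__ == "__main__":
    dst = bytes.fromhex("ffffffffffff")               # broadcast
    src = bytes.fromhex("020000000001")               # illustrative locally administered address
    frame = build_frame(dst, src, 0x0800, b"hello")   # 0x0800 = IPv4
    print(len(frame))
```

Running the sketch prints 72: the 64-byte minimum frame (header, padded payload, and FCS) plus the 8 bytes of preamble and start frame delimiter that precede it on the wire.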