Friday, March 28, 2008

12 SAFETY FEATURES THAT YOUR CAR MUST HAVE

#1-Crumple Zones-A mainstay in today's automobiles, this is the coordinated design of numerous body panels and brackets that absorb the energy of a crash. Parts like the hood, bumper, and fenders are engineered to crumple like an accordion, taking the brunt of the impact in an accident.

#2-Wraparound Headlights-As the name implies, this is a one-piece headlight design that integrates the low beam, high beam, and turn signal. The headlights wrap around from the front or back of the car to the sides. Not only are the halogen headlights brighter and wider thanks to reflective cuts in the chamber, but drivers in our blind spots can easily see our lane-changing intentions, which results in fewer accidents.

#3-Breakaway Motor Mounts-These mounts attach the engine to the frame of the car. They're not noticeable, but their life-saving impact is huge. In a front-impact collision, they're specifically designed to break the engine away from the frame so that, with the forward motion, the engine slides underneath the car at a 45-degree angle, making it far less likely that the engine ends up in your lap when the crash comes to a halt.

#4-Steel-Belted Radials-Our tires are obviously important safety features; they are what keep the car on the road. These tires have steel belts built right in. How do they help? Motorists get the peace of mind that their tires will hold up in even the most extreme conditions, and the belts also give the tires a longer life span, which means less maintenance in the long run.

#5-Ventilated Disc Brakes-Just as important as the tires, the disc brakes are what stop the car. In short, the brakes are made up of rotors, pads, and calipers. The rotors are engineered with internal vanes that vent out heat, which helps guard against brake fade and makes frequent brake repairs less likely.

#6-Side-Impact Door Beams-Like the crumple zones, these aid in absorbing energy in a side-impact collision. They are steel intrusion beams built inside the doors for extra reinforcement. Every car and truck has them.

#7-Laminated Windshield-This one matters; it is the very thing that keeps bugs out of our teeth and rain out of our hair. The windshield is made up of two pieces of glass with a laminate sheet in between, a glass sandwich that holds together well when sharp or heavy objects smash into it, so there is no shattering and no large pieces of glass flying about.

#8-Tempered Safety Glass-The other glass that gives us 360 degrees of protection is also designed with safety in mind. The side and rear glass is heat-tempered so that when it breaks, it shatters into a multitude of small cubes that won't cut or injure the occupants.

#9-Child Safety Door Locks-As the name indicates, these are small locks in the inside door jambs of the two rear doors (4-door sedans and SUVs only). Engage them and the little ones in the back seat can't unlock the doors or pull the inside door handles while we are driving.

#10-5 mph Bumpers-I would classify this as a safety item for the car itself. In the event a driver lightly hits a light pole, grocery cart, or the like at 5 mph or under, it is unlikely there will be any major structural damage. These days the government-mandated standard is only 2.5 mph, but most automakers still build bumpers to the 5 mph standard.

#11-Center High-Mounted Brake Light-It's simply a third brake light mounted higher than the two main brake lights, and most autos have one. Its main purpose is to make drivers behind the motorist aware of braking; normally cars six to ten vehicles back can see it clearly.

#12-Safety Cage Construction-Think of a built-in roll cage; it's the main skeletal structure that provides the most protection, shielding occupants from every direction in an accident. Turn a car or truck upside down onto its roof and the cage will support 1.5 times the vehicle's own weight. There's nothing more important.

Tuesday, March 25, 2008

INTERNET PING SERVICE

Ping is a computer network tool used to test whether a particular host is reachable across an IP network; it can also be used to self-test the network interface card of the computer. It works by sending ICMP “echo request” packets to the target host and listening for ICMP “echo reply” responses. Ping estimates the round-trip time, generally in milliseconds, records any packet loss, and prints a statistical summary when finished.
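
For a concrete picture of that request/reply cycle, the short sketch below simply drives the system's ping utility from Python and reports whether any echo replies came back. It assumes a Unix-like ping that accepts a -c packet-count flag (Windows uses -n instead), and the host name is just an example.

    # Minimal sketch: invoke the system ping utility and show its summary.
    # Assumes a Unix-like ping that accepts "-c <count>"; on Windows the
    # flag is "-n" instead. The host name is an example for illustration.
    import subprocess

    def ping_host(host: str, count: int = 4) -> bool:
        """Send `count` echo requests and return True if the host replied."""
        result = subprocess.run(
            ["ping", "-c", str(count), host],
            capture_output=True,
            text=True,
        )
        print(result.stdout)           # per-packet lines plus the RTT/loss summary
        return result.returncode == 0  # 0 means at least one echo reply came back

    if __name__ == "__main__":
        reachable = ping_host("example.com")
        print("reachable" if reachable else "no reply (host down or ICMP filtered)")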

The word ping is also frequently used as a verb or noun: it can refer to the round-trip time itself or to the act of running the ping program to measure it. Colloquially, it can also mean a 'pinging' sound, such as that made by an elastic band or a similar noise.

Mike Muuss wrote the program in December 1983 as a tool to troubleshoot odd behavior on an IP network. He named it after the pulses of sound made by sonar, since its operation is analogous to active sonar in submarines, in which an operator issues a pulse of energy (a network packet) at the target, which then bounces off the target and is received by the operator. Later, David L. Mills provided a backronym, "Packet InterNet Grouper (Groper)" (sometimes also given as "Packet Inter-Network Groper").

The usefulness of ping in diagnosing Internet connectivity issues was impaired from late 2003, when a number of Internet service providers began filtering out ICMP Type 8 (echo request) messages at their network boundaries.

This was partly due to the increasing use of ping for target reconnaissance, for example by Internet worms such as Welchia that flood the Internet with ping requests in order to locate new hosts to infect. Not only did the availability of ping responses leak information to an attacker, it added to the overall load on networks, causing problems for routers across the Internet.

Although RFC 1122 prescribes that any host must accept an echo request and issue an echo reply in return, this standard is frequently not followed on the public Internet. Notably, Windows XP with Service Pack 2 enables a firewall by default and, in its default configuration, will not respond to echo requests from the public Internet.

Proponents of not honoring echo requests say that this practice increases network security. However, attackers can still send network packets to a machine, regardless of whether it responds to a ping. Those who insist that the standard be followed say that not honoring ping interferes with network diagnostics.

Friday, March 21, 2008

SEO MARKETING EXPLAINED

Search engine marketing, or SEM, is a form of Internet marketing that seeks to promote websites by increasing their visibility in search engine result pages (SERPs). According to the Search Engine Marketing Professionals Organization, SEM methods include: search engine optimization (or SEO), paid placement, and paid inclusion. Other sources, including the New York Times, define SEM as the practice of buying paid search listings.

In 2006, North American advertisers spent US$9.4 billion on search engine marketing, a 62% increase over the prior year and a 750% increase over 2002. The largest SEM vendors are Google AdWords, Yahoo! Search Marketing, and Microsoft adCenter. As of 2006, SEM was growing much faster than traditional advertising.

As the number of sites on the Web increased in the mid-to-late 1990s, search engines started appearing to help people find information quickly. Search engines developed business models to finance their services, such as the pay-per-click programs offered by Open Text in 1996 and then by Goto.com in 1998. Goto.com changed its name to Overture in 2001 and was purchased by Yahoo! in 2003; it now offers paid search opportunities for advertisers through Yahoo! Search Marketing. Google also began to offer advertisements on search results pages in 2000 through the Google AdWords program. By 2007, pay-per-click programs had proved to be the primary money-makers for search engines.

Search engine optimization consultants expanded their offerings to help businesses learn about and use the advertising opportunities offered by search engines, and new agencies focusing primarily upon marketing and advertising through search engines emerged. The term "search engine marketing" was proposed by Danny Sullivan in 2001 to cover the spectrum of activities involved in performing SEO, managing paid listings at the search engines, submitting sites to directories, and developing online marketing strategies for businesses, organizations, and individuals. By 2007, search engine marketing was stronger than ever, with SEM budgets up 750% between 2002 and 2006.

Paid search advertising hasn't been without controversy, and issues around how search engines present advertising on their search result pages have been the target of a series of studies and reports by Consumer Reports WebWatch, from Consumers Union. The FTC also issued a letter in 2002 about the importance of disclosing paid advertising on search engines, in response to a complaint from Commercial Alert, a consumer advocacy group with ties to Ralph Nader.

* SEMPO, the Search Engine Marketing Professional Organization, is a non-profit professional association for search engine marketers.

Search engines with SEM programs

* Google - global
* Yahoo! - global
* Microsoft Live - global
* Ask.com - global
* Baidu - China
* Yandex - Russia
* Rambler - Russia
* Timway - Hong Kong



Monday, March 10, 2008

TOP MICROCHIP MANUFACTURER LIST

The manufacturers on this list are known for producing quality microchips. The companies are listed as follows:

Manufacturers

A list of notable manufacturers; some operating, some defunct:

* Agere Systems (formerly part of Lucent, which was formerly part of AT&T)
* Agilent Technologies (formerly part of Hewlett-Packard, spun-off in 1999)
* Alcatel
* Altera
* AMD (Advanced Micro Devices; founded by ex-Fairchild employees)
* Analog Devices
* ATI Technologies (Array Technologies Incorporated; acquired parts of Tseng Labs in 1997; in 2006, became a wholly-owned subsidiary of AMD)
* Atmel (co-founded by ex-Intel employee)
* Broadcom
* Commodore Semiconductor Group (formerly MOS Technology)
* Cypress Semiconductor
* Fairchild Semiconductor (founded by ex-Shockley Semiconductor employees: the "Traitorous Eight")
* Freescale Semiconductor (formerly part of Motorola)
* Fujitsu
* Genesis Microchip
* GMT Microelectronics (formerly Commodore Semiconductor Group)
* Hitachi, Ltd.
* Horizon Semiconductors
* IBM (International Business Machines)
* Infineon Technologies (formerly part of Siemens)
* Integrated Device Technology
* Intel (founded by ex-Fairchild employees)
* Intersil (formerly Harris Semiconductor)
* Lattice Semiconductor
* Linear Technology
* LSI Logic (founded by ex-Fairchild employees)
* Maxim Integrated Products
* Marvell Technology Group
* Microchip Technology (manufacturer of the PIC microcontrollers)
* MicroSystems International
* MOS Technology (founded by ex-Motorola employees)
* Mostek (founded by ex-Texas Instruments employees)
* National Semiconductor (aka "NatSemi"; founded by ex-Fairchild employees)
* Nordic Semiconductor (formerly known as Nordic VLSI)
* Nvidia (acquired IP of competitor 3dfx in 2000; 3dfx was co-founded by ex-Intel employee)
* NXP Semiconductors (formerly part of Philips)
* ON Semiconductor (formerly part of Motorola)
* Parallax Inc. (manufacturer of the BASIC Stamp and Propeller microcontrollers)
* PMC-Sierra (from the former Pacific Microelectronics Centre and Sierra Semiconductor, the latter co-founded by ex-NatSemi employee)
* Renesas Technology (joint venture of Hitachi and Mitsubishi Electric)
* Rohm
* Samsung Electronics (Semiconductor division)
* STMicroelectronics (formerly SGS Thomson)
* Texas Instruments
* Toshiba
* TSMC (Taiwan Semiconductor Manufacturing Company; semiconductor foundry)
* VIA Technologies (founded by ex-Intel employee) (part of Formosa Plastics Group)
* Volterra Semiconductor
* Xilinx (founded by ex-ZiLOG employee)
* ZiLOG (founded by ex-Intel employees) (part of Exxon 1980–89; now owned by TPG)

MICROCHIP - THE START OF THE NEW AGE IN COMPUTERS


In electronics, an integrated circuit (also known as IC, microcircuit, microchip, silicon chip, or chip) is a miniaturized electronic circuit (consisting mainly of semiconductor devices, as well as passive components) that has been manufactured in the surface of a thin substrate of semiconductor material.

A hybrid integrated circuit is a miniaturized electronic circuit constructed of individual semiconductor devices, as well as passive components, bonded to a substrate or circuit board.

Integrated circuits were made possible by experimental discoveries which showed that semiconductor devices could perform the functions of vacuum tubes, and by mid-20th-century technology advancements in semiconductor device fabrication. The integration of large numbers of tiny transistors into a small chip was an enormous improvement over the manual assembly of circuits using discrete electronic components. The integrated circuit's mass production capability, reliability, and building-block approach to circuit design ensured the rapid adoption of standardized ICs in place of designs using discrete transistors.

There are two main advantages of ICs over discrete circuits: cost and performance. Cost is low because the chips, with all their components, are printed as a unit by photolithography and not constructed one transistor at a time. Performance is high since the components switch quickly and consume little power, because the components are small and close together. As of 2006, chip areas range from a few square mm to around 350 mm², with up to 1 million transistors per mm².

Among the most advanced integrated circuits are the microprocessors or "cores", which control everything from computers to cellular phones to digital microwave ovens. Digital memory chips and ASICs are examples of other families of integrated circuits that are important to the modern information society. While the cost of designing and developing a complex integrated circuit is quite high, when spread across typically millions of production units the individual IC cost is minimized. The performance of ICs is high because the small size allows short traces, which in turn allows low-power logic (such as CMOS) to be used at fast switching speeds.

ICs have consistently migrated to smaller feature sizes over the years, allowing more circuitry to be packed on each chip. This increased capacity per unit area can be used to decrease cost and/or increase functionality (see Moore's law, which, in its modern interpretation, states that the number of transistors in an integrated circuit doubles every two years). In general, as the feature size shrinks, almost everything improves: the cost per unit and the switching power consumption go down, and the speed goes up. However, ICs with nanometer-scale devices are not without their problems, principal among which is leakage current, although these problems are not insurmountable and will likely be solved or at least ameliorated by the introduction of high-k dielectrics. Since these speed and power consumption gains are apparent to the end user, there is fierce competition among the manufacturers to use finer geometries. This process, and the expected progress over the next few years, is well described by the International Technology Roadmap for Semiconductors.
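
As a rough worked example of the doubling rule just stated, the sketch below projects a transistor count forward in time; the starting figure of 100 million transistors is an arbitrary illustration, not a measurement.

    # Rough illustration of Moore's law as stated above: the transistor count
    # doubles every two years. The starting figure is a made-up example.
    def transistors_after(start_count: int, years: float, doubling_period: float = 2.0) -> float:
        return start_count * 2 ** (years / doubling_period)

    # e.g. a hypothetical 100-million-transistor chip:
    for years in (2, 4, 10):
        print(years, "years later:", f"{transistors_after(100_000_000, years):,.0f}", "transistors")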

Only a half century after their development was initiated, integrated circuits have become ubiquitous. Computers, cellular phones, and other digital appliances are now inextricable parts of the structure of modern societies. That is, modern computing, communications, manufacturing and transport systems, including the Internet, all depend on the existence of integrated circuits. Indeed, many scholars believe that the digital revolution brought about by integrated circuits was one of the most significant occurrences in the history of humankind.

Integrated circuits can be classified into analog, digital and mixed signal (both analog and digital on the same chip).

Digital integrated circuits can contain anything from a few thousand to millions of logic gates, flip-flops, multiplexers, and other circuits in a few square millimeters. The small size of these circuits allows high speed, low power dissipation, and reduced manufacturing cost compared with board-level integration. These digital ICs, typically microprocessors, DSPs, and microcontrollers, work using binary mathematics to process "one" and "zero" signals.

Analog ICs, such as sensors, power management circuits, and operational amplifiers, work by processing continuous signals. They perform functions like amplification, active filtering, demodulation, mixing, etc. Analog ICs ease the burden on circuit designers by having expertly designed analog circuits available instead of designing a difficult analog circuit from scratch.

ICs can also combine analog and digital circuits on a single chip to create functions such as A/D converters and D/A converters. Such circuits offer smaller size and lower cost, but must carefully account for signal interference.

The semiconductors of the periodic table of the chemical elements were identified as the most likely materials for a solid-state replacement for the vacuum tube by researchers like William Shockley at Bell Laboratories starting in the 1930s. Starting with copper oxide, proceeding to germanium, then silicon, the materials were systematically studied in the 1940s and 1950s. Today, silicon monocrystals are the main substrate used for integrated circuits (ICs), although some III-V compounds of the periodic table such as gallium arsenide are used for specialised applications like LEDs, lasers, solar cells, and the highest-speed integrated circuits. It took decades to perfect methods of creating crystals without defects in the crystalline structure of the semiconducting material.

Semiconductor ICs are fabricated in a layer process which includes these key process steps:

* Imaging
* Deposition
* Etching

The main process steps are supplemented by doping, cleaning and planarisation steps.

Mono-crystal silicon wafers (or for special applications, silicon on sapphire or gallium arsenide wafers) are used as the substrate. Photolithography is used to mark different areas of the substrate to be doped or to have polysilicon, insulators or metal (typically aluminium) tracks deposited on them.

* Integrated circuits are composed of many overlapping layers, each defined by photolithography, and normally shown in different colors. Some layers mark where various dopants are diffused into the substrate (called diffusion layers), some define where additional ions are implanted (implant layers), some define the conductors (polysilicon or metal layers), and some define the connections between the conducting layers (via or contact layers). All components are constructed from a specific combination of these layers.

* In a self-aligned CMOS process, a transistor is formed wherever the gate layer (polysilicon or metal) crosses a diffusion layer.

* Resistive structures, meandering stripes of varying lengths, form the loads on the circuit. The ratio of the length of the resistive structure to its width, combined with its sheet resistivity, determines the resistance (a short worked sketch follows this list).

* Capacitive structures, in form very much like the parallel conducting plates of a traditional electrical capacitor, are formed according to the area of the "plates", with insulating material between the plates. Owing to limitations in size, only very small capacitances can be created on an IC.

* More rarely, inductive structures can be built as tiny on-chip coils, or simulated by gyrators.
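
As promised in the resistor item above, here is a small sketch of the usual relationship, resistance = sheet resistance x (length / width); the 50 ohms-per-square figure and the stripe dimensions are illustrative values only, not taken from any particular process.

    # Resistance of an on-chip resistive stripe: R = R_sheet * (length / width).
    # The sheet resistance (ohms per square) and dimensions are illustrative values.
    def stripe_resistance(sheet_resistance_ohms_per_sq: float,
                          length_um: float, width_um: float) -> float:
        return sheet_resistance_ohms_per_sq * (length_um / width_um)

    # A meandering polysilicon stripe 200 um long and 2 um wide at 50 ohms/square:
    print(stripe_resistance(50.0, 200.0, 2.0), "ohms")  # -> 5000.0 ohms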

Since a CMOS device only draws current on the transition between logic states, CMOS devices consume much less current than bipolar devices.

A random access memory is the most regular type of integrated circuit; the highest density devices are thus memories; but even a microprocessor will have memory on the chip. Although the structures are intricate – with widths which have been shrinking for decades – the layers remain much thinner than the device widths. The layers of material are fabricated much like a photographic process, although light waves in the visible spectrum cannot be used to "expose" a layer of material, as they would be too large for the features. Thus photons of higher frequencies (typically ultraviolet) are used to create the patterns for each layer. Because each feature is so small, electron microscopes are essential tools for a process engineer who might be debugging a fabrication process.

Each device is tested before packaging using automated test equipment (ATE), in a process known as wafer testing, or wafer probing. The wafer is then cut into rectangular blocks, each of which is called a die. Each good die (plural dice, dies, or die) is then connected into a package using aluminum (or gold) wires which are welded to pads, usually found around the edge of the die. After packaging, the devices go through final testing on the same or similar ATE used during wafer probing. Test cost can account for over 25% of the cost of fabrication on lower cost products, but can be negligible on low yielding, larger, and/or higher cost devices.

As of 2005, a fabrication facility (commonly known as a semiconductor fab) costs over a billion US Dollars to construct[1], because much of the operation is automated. The most advanced processes employ the following techniques:

* The wafers are up to 300 mm in diameter (wider than a common dinner plate).
* Use of a 65 nanometer or smaller chip manufacturing process. Intel, IBM, NEC, and AMD ship CPU chips on 65 nm or finer processes, and IBM and AMD are developing a 45 nm process using immersion lithography.
* Copper interconnects where copper wiring replaces aluminium for interconnects.
* Low-K dielectric insulators.
* Silicon on insulator (SOI)
* Strained silicon in a process used by IBM known as Strained silicon directly on insulator (SSDOI)

The earliest integrated circuits were packaged in ceramic flat packs, which continued to be used by the military for their reliability and small size for many years. Commercial circuit packaging quickly moved to the dual in-line package (DIP), first in ceramic and later in plastic. In the 1980s pin counts of VLSI circuits exceeded the practical limit for DIP packaging, leading to pin grid array (PGA) and leadless chip carrier (LCC) packages. Surface mount packaging appeared in the early 1980s and became popular in the late 1980s, using finer lead pitch with leads formed as either gull-wing or J-lead, as exemplified by the Small-Outline Integrated Circuit (SOIC), a carrier which occupies an area about 30-50% less than an equivalent DIP and has a typical thickness that is 70% less. This package has "gull wing" leads protruding from the two long sides and a lead spacing of 0.050 inches.

In the late 1990s, PQFP and TSOP packages became the most common for high-pin-count devices, though PGA packages are still often used for high-end microprocessors. Intel and AMD are currently transitioning from PGA packages on high-end microprocessors to land grid array (LGA) packages.

Ball grid array (BGA) packages have existed since the 1970s. Flip-chip Ball Grid Array packages, which allow for much higher pin count than other package types, were developed in the 1990s. In an FCBGA package the die is mounted upside-down (flipped) and connects to the package balls via a package substrate that is similar to a printed-circuit board rather than by wires. FCBGA packages allow an array of input-output signals (called Area-I/O) to be distributed over the entire die rather than being confined to the die periphery.

Traces out of the die, through the package, and into the printed circuit board have very different electrical properties, compared to on-chip signals. They require special design techniques and need much more electric power than signals confined to the chip itself.

When multiple dies are put in one package, it is called SiP, for System In Package. When multiple dies are combined on a small substrate, often ceramic, it's called an MCM, or Multi-Chip Module. The boundary between a big MCM and a small printed circuit board is sometimes fuzzy.

In April 1949, the German engineer Werner Jacobi (Siemens AG) filed the earliest patent for an integrated-circuit-like semiconductor amplifying device, showing five transistors on a common substrate arranged in a three-stage amplifier arrangement. Jacobi disclosed small and cheap hearing aids as typical industrial applications of his patent. No commercial use of his patent has been reported.

The integrated circuit was later also conceived by a radar scientist, Geoffrey W.A. Dummer (1909-2002), working for the Royal Radar Establishment of the British Ministry of Defence, and published in Washington, D.C. on May 7, 1952. Dummer unsuccessfully attempted to build such a circuit in 1956.

A precursor idea to the IC was to create small ceramic squares (wafers), each one containing a single miniaturized component. Components could then be integrated and wired into a bidimensional or tridimensional compact grid. This idea, which looked very promising in 1957, was proposed to the US Army by Jack Kilby, and led to the short-lived Micromodule Program (similar to 1951's Project Tinkertoy). However, as the project was gaining momentum, Kilby came up with a new, revolutionary design: the IC.

The first integrated circuits were manufactured independently by two scientists: Jack Kilby of Texas Instruments filed a patent for a "Solid Circuit" made of germanium on February 6, 1959, and received several US patents for it. Robert Noyce of Fairchild Semiconductor was awarded a patent for a more complex "unitary circuit" made of silicon on April 25, 1961.

Noyce credited Kurt Lehovec of Sprague Electric for the principle of p-n junction isolation caused by the action of a biased p-n junction (the diode) as a key concept behind the IC.

In the 1980s, programmable integrated circuits were developed. These devices contain circuits whose logical function and connectivity can be programmed by the user, rather than being fixed by the integrated circuit manufacturer. This allows a single chip to be programmed to implement different LSI-type functions such as logic gates, adders, and registers. Current devices named FPGAs (Field Programmable Gate Arrays) can implement tens of thousands of LSI circuits in parallel and operate at up to 550 MHz.

The techniques perfected by the integrated circuits industry over the last three decades have been used to create microscopic machines, known as MEMS. These devices are used in a variety of commercial and military applications. Example commercial applications include DLP projectors, inkjet printers, and accelerometers used to deploy automobile airbags.

In the past, radios could not be fabricated in the same low-cost processes as microprocessors. But since 1998, a large number of radio chips have been developed using CMOS processes. Examples include Intel's DECT cordless phone, or Atheros's 802.11 card.

Future developments seem to follow the multi-core paradigm, already used by Intel's and AMD's dual-core processors. Intel recently unveiled a prototype chip, not intended for commercial sale, that bears a staggering 80 cores, each capable of handling its own task independently of the others. This is a response to the heat-versus-speed limit that is about to be reached using existing transistor technology. The design presents a new challenge to chip programming; X10 is a new open-source programming language designed to assist with this task.

DEEP BLUE - THE CHESS COMPUTER WIZARD


Deep Blue was a chess-playing computer developed by IBM. On 11 May 1997, the machine won a six-game match by two wins to one with three draws against world champion Garry Kasparov. Kasparov accused IBM of cheating and demanded a rematch, but IBM declined and retired Deep Blue.

Kasparov had won an earlier match against a previous version of Deep Blue in 1996.

The computer system dubbed "Deep Blue" was the first machine to win a chess game against a reigning world champion (Garry Kasparov) under regular time controls. This first win occurred on February 10, 1996, in game one of the 1996 match, which has become a famous game in its own right. However, Kasparov won three and drew two of the following games, beating Deep Blue by a score of 4–2. The match concluded on February 17, 1996.

Deep Blue was then heavily upgraded (unofficially nicknamed "Deeper Blue") and played Kasparov again in May 1997, winning the six-game rematch 3½–2½ with a victory in the sixth and final game on May 11. Deep Blue thus became the first computer system to defeat a reigning world champion in a match under standard chess tournament time controls.

The project was started as "ChipTest" at Carnegie Mellon University by Feng-hsiung Hsu; the computer system produced was named Deep Thought after the fictional computer of the same name from The Hitchhiker's Guide to the Galaxy. Hsu joined IBM (Research division) in 1989 and worked with Murray Campbell on parallel computing problems. Deep Blue was developed out of this. The name is a play on Deep Thought and Big Blue, IBM's nickname.

The system derived its playing strength mainly from brute-force computing power. It was a massively parallel, 30-node RS/6000 SP-based computer system enhanced with 480 special-purpose VLSI chess chips. Its chess-playing program was written in C and ran under the AIX operating system. It was capable of evaluating 200 million positions per second, twice as fast as the 1996 version. In June 1997, Deep Blue was the 259th most powerful supercomputer, capable of 11.38 gigaflops, although this figure did not take into account Deep Blue's special-purpose chess hardware.

The Deep Blue chess computer that defeated Kasparov in 1997 would typically search to a depth of between six and twelve plies, and up to a maximum of forty plies in some situations. An increase in search depth of one ply corresponds on average to an increase in playing strength of approximately 80 Elo points; Levy and Newborn estimate that one additional ply increases playing strength by 50 to 70 points.
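
To see why each additional ply is so costly, the back-of-the-envelope sketch below grows a full search tree at a typical chess branching factor of about 35 and divides by the 200 million positions per second quoted earlier. Real programs prune far more aggressively, so this only illustrates the scaling, not Deep Blue's actual search.

    # Rough illustration of search-tree growth: a full tree has roughly
    # branching_factor ** depth nodes. The branching factor of ~35 is a
    # common estimate for chess; 200 million positions/second is the figure
    # quoted above. Real programs prune most of this tree.
    BRANCHING_FACTOR = 35
    POSITIONS_PER_SECOND = 200_000_000

    for depth in (6, 8, 10, 12):
        nodes = BRANCHING_FACTOR ** depth
        print(f"depth {depth:2d}: ~{nodes:.1e} nodes, "
              f"~{nodes / POSITIONS_PER_SECOND:.1e} s without pruning")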

Deep Blue's evaluation function was initially written in a generalized form, with many to-be-determined parameters (e.g., how important a safe king position is compared to a space advantage in the center). The optimal values for these parameters were then determined by the system itself, by analyzing thousands of master games. The evaluation function had been split into 8,000 parts, many of them designed for special positions. The opening book contained over 4,000 positions and 700,000 grandmaster games, and the endgame database contained many six-piece endgames as well as positions with five or fewer pieces. Before the second match, the chess knowledge of the program was fine-tuned by grandmaster Joel Benjamin, and the opening library was provided by grandmasters Miguel Illescas, John Fedorowicz, and Nick de Firmian. When Kasparov requested that he be allowed to study other games that Deep Blue had played so as to better understand his opponent, IBM refused. However, Kasparov did study many popular PC computer games to become familiar with computer play in general.
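
The tunable evaluation function described above can be pictured as a weighted sum of positional features, where the weights are the "to-be-determined parameters". The toy sketch below illustrates the idea; the feature names, weights, and values are invented for illustration and are not Deep Blue's actual terms.

    # Toy sketch of a parameterized evaluation function of the kind described
    # above: a weighted sum of positional features, where the weights are the
    # tunable parameters fitted against master games. All names and numbers
    # here are invented for illustration.
    from typing import Dict

    def evaluate(features: Dict[str, float], weights: Dict[str, float]) -> float:
        """Score a position as a weighted sum of its features (positive = good for White)."""
        return sum(weights[name] * value for name, value in features.items())

    weights = {"material": 1.0, "king_safety": 0.4, "center_space": 0.2, "mobility": 0.1}
    position = {"material": 2.0, "king_safety": -1.0, "center_space": 3.0, "mobility": 5.0}
    print(evaluate(position, weights))  # 2.0 - 0.4 + 0.6 + 0.5 = 2.7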

After the loss, Kasparov said that he sometimes saw deep intelligence and creativity in the machine's moves, suggesting that during the second game, human chess players, in violation of the rules, intervened. IBM denied that it cheated, saying the only human intervention occurred between games. The rules provided for the developers to modify the program between games, an opportunity they said they used to shore up weaknesses in the computer's play revealed during the course of the match. This allowed the computer to avoid a trap in the final game that it had fallen for twice before. Kasparov requested printouts of the machine's log files but IBM refused, although the company later published the logs on the Internet at http://www.research.ibm.com/deepblue/watch/html/c.shtml. Kasparov demanded a rematch, but IBM declined and retired Deep Blue.

In 2003 a documentary film was made that explored these claims. Titled Game Over: Kasparov and the Machine, the film implied that Deep Blue's heavily promoted victory was a plot by IBM to boost its stock value.

One of the two racks that made up Deep Blue is on display at the National Museum of American History in their exhibit about the Information Age; the other rack appears at the Computer History Museum in their "Mastering The Game: A History of Computer Chess" exhibit.

Feng-hsiung Hsu later claimed in his book Behind Deep Blue that he had the rights to use the Deep Blue design to build a bigger machine independently of IBM and take up Kasparov's rematch offer, but Kasparov refused a rematch. Kasparov's side responded that Hsu's offer was empty and more of a demand than an offer: Hsu had no sponsors, no money, no hardware, and no technical team, just some patents and a demand that Kasparov commit to putting his formal world title on the line before further negotiations could even begin (with no guarantees as to fair playing conditions or proper qualification matches).

Deep Blue, with its capability of evaluating 200 million positions per second, was the strongest computer that ever faced a world chess champion. Today, in computer chess research and matches of world class players against computers, the focus of play has often shifted to software chess programs, rather than using dedicated chess hardware. Modern chess programs like Rybka, Deep Fritz or Deep Junior are more efficient than the programs during Deep Blue's era. In a recent match, Deep Fritz vs. Vladimir Kramnik in November 2006, the program ran on a personal computer containing two Intel Core 2 Duo CPUs, capable of evaluating only 8 million positions per second, but searching to an average depth of 17 to 18 plies in the middlegame.

* Deep Blue was seen on the Futurama episode "Anthology of Interest I" voiced by Tress MacNeille.
* Servotron has a song entitled "Deep Blue, Congratulations" on their album Entertainment Program for Humans (Second Variety).
* On the April 14, 2005 episode of The Daily Show with Jon Stewart, Stewart invited a fictional version of Deep Blue to comment on the recent extradition of former chess champion Bobby Fischer. Deep Blue didn't offer any analysis of any kind, and repeatedly suggested they play chess.
* Deep Blue is a 1997 album and song by Peter Mulvey. The title song was inspired by the 1997 Kasparov match.
* Deep Blue made an appearance on The Tonight Show with Jay Leno, and was parodied on Late Show with David Letterman with Top Ten Ways Deep Blue is Celebrating its Victory.
* Referenced in Pure Pwnage. Said to have been beaten by Teh_Masterer, who had used only a row of pawns and a single bishop.
* In a Nike commercial, former San Antonio Spurs center David Robinson played Deep Blue in one-on-one basketball.
* On the TV show Monkey Dust (Series 3, Episode 6), in the skit "They All Come Home", Lieutenant Al Jablonski walks into the hangar and asks a soldier, Hershburg, who he is playing chess with on the computer. Hershburg responds, "a little tin can called Deep Blue".

Sunday, March 2, 2008

FIBER OPTICS - THE KEY TO CONNECTION SUCCESS


An optical fiber (or fibre) is a glass or plastic fiber designed to guide light along its length. Fiber optics is the overlap of applied science and engineering concerned with the design and application of optical fibers. Optical fibers are widely used in fiber-optic communication, which permits transmission over longer distances and at higher data rates than other forms of communications. Fibers are used instead of metal wires because signals travel along them with less loss, and they are immune to electromagnetic interference. Optical fibers are also used to form sensors, and in a variety of other applications.

Light is kept in the "core" of the optical fiber by total internal reflection. This causes the fiber to act as a waveguide. Fibers which support many propagation paths or transverse modes are called multimode fibers (MMF). Fibers which support only a single mode are called singlemode fibers (SMF). Multimode fibers generally have a large-diameter core, and are used for short-distance communication links or for applications where high power must be transmitted. Singlemode fibers are used for most communication links longer than 200 meters.

Joining lengths of optical fiber is more complex than joining electrical wire or cable. The ends of the fibers must be carefully cleaved, and then spliced together either mechanically or by fusing them together with an electric arc. Special connectors are used to make removable connections.

The light-guiding principle behind optical fibers was first demonstrated by Daniel Colladon and Jacques Babinet in the 1840s, with Irish inventor John Tyndall offering public displays using water-fountains ten years later. Practical applications, such as close internal illumination during dentistry, appeared early in the twentieth century. Image transmission through tubes was demonstrated independently by the radio experimenter Clarence Hansell and the television pioneer John Logie Baird in the 1920s. The principle was first used for internal medical examinations by Heinrich Lamm in the following decade. In 1952, physicist Narinder Singh Kapany conducted experiments that led to the invention of optical fiber, based on Tyndall's earlier studies; modern optical fibers, where the glass fiber is coated with a transparent cladding to offer a more suitable refractive index, appeared later in the decade. Development then focused on fiber bundles for image transmission. The first fiber optic semi-flexible gastroscope was patented by Basil Hirschowitz, C. Wilbur Peters, and Lawrence E. Curtiss, researchers at the University of Michigan, in 1956. In the process of developing the gastroscope, Curtiss produced the first glass-clad fibers; previous optical fibers had relied on air or impractical oils and waxes as the low-index cladding material. A variety of other image transmission applications soon followed. The advent of ultrapure silicon for semiconductor devices made low-loss silica fiber practical.

In 1965, Charles K. Kao and George A. Hockham of the British company Standard Telephones and Cables were the first to suggest that attenuation of contemporary fibers was caused by impurities, which could be removed, rather than by fundamental physical effects such as scattering. They speculated that optical fiber could be a practical medium for communication if the attenuation could be reduced below 20 dB per kilometer. This attenuation level was first achieved in 1970 by researchers Robert D. Maurer, Donald Keck, Peter C. Schultz, and Frank Zimar working for American glass maker Corning Glass Works, now Corning Inc. They demonstrated a fiber with 17 dB/km of optical attenuation by doping silica glass with titanium. A few years later they produced a fiber with only 4 dB/km using germanium oxide as the core dopant. Such low attenuations ushered in optical fiber telecommunications and enabled the Internet. Nowadays, attenuations in optical cables are far less than those in electrical copper cables, leading to long-haul fiber connections with repeater distances of 500–800 km.

The erbium-doped fiber amplifier, which reduced the cost of long-distance fiber systems by reducing or in many cases even eliminating the need for optical-electrical-optical repeaters, was co-developed by teams led by David Payne of the University of Southampton and Emmanuel Desurvire at Bell Laboratories in 1986. The more robust optical fiber commonly used today uses glass for both core and sheath and is therefore less prone to aging; it was invented in 1973 by Gerhard Bernsee of Schott Glass in Germany.

In 1991, the emerging field of photonic crystals led to the development of photonic crystal fiber, which guides light by means of diffraction from a periodic structure, rather than total internal reflection. The first photonic crystal fibers became commercially available in 1996. Photonic crystal fibers can be designed to carry higher power than conventional fiber, and their wavelength dependent properties can be manipulated to improve their performance in certain applications.

Optical fiber can be used as a medium for telecommunication and networking because it is flexible and can be bundled as cables. It is especially advantageous for long-distance communications, because light propagates through the fiber with little attenuation compared to electrical cables. This allows long distances to be spanned with few repeaters. Additionally, the light signals propagating in the fiber can be modulated at rates as high as 40 Gb/s, and each fiber can carry many independent channels, each by a different wavelength of light (wavelength-division multiplexing). Over short distances, such as networking within a building, fiber saves space in cable ducts because a single fiber can carry much more data than a single electrical cable. Fiber is also immune to electrical interference, which prevents cross-talk between signals in different cables and pickup of environmental noise. Also, wiretapping is more difficult compared to electrical connections, and there are concentric dual core fibers that are said to be tap-proof. Because they are non-electrical, fiber cables can bridge very high electrical potential differences and can be used in environments where explosive fumes are present, without danger of ignition.

Although fibers can be made out of transparent plastic, glass, or a combination of the two, the fibers used in long-distance telecommunications applications are always glass, because of the lower optical attenuation. Both multi-mode and single-mode fibers are used in communications, with multi-mode fiber used mostly for short distances (up to 500 m), and single-mode fiber used for longer distance links. Because of the tighter tolerances required to couple light into and between single-mode fibers (core diameter about 10 micrometers), single-mode transmitters, receivers, amplifiers and other components are generally more expensive than multi-mode components.

Optical fibers can be used as sensors to measure strain, temperature, pressure and other parameters. The small size and the fact that no electrical power is needed at the remote location gives the fiber optic sensor an advantage over a conventional electrical sensor in certain applications.

Optical fibers are used as hydrophones for seismic or SONAR applications. Hydrophone systems with more than 100 sensors per fiber cable have been developed. Hydrophone sensor systems are used by the oil industry as well as a few countries' navies. Both bottom mounted hydrophone arrays and towed streamer systems are in use. The German company Sennheiser developed a microphone working with a laser and optical fibers.

Optical fiber sensors for temperature and pressure have been developed for downhole measurement in oil wells. The fiber optic sensor is well suited for this environment as it is functioning at temperatures too high for semiconductor sensors (Distributed Temperature Sensing).

Another sensing use of optical fiber is the fiber-optic gyroscope, which is used in the Boeing 767 and in some car models (for navigation purposes), as well as in hydrogen microsensors.

Fiber-optic sensors have been developed to measure co-located temperature and strain simultaneously with very high accuracy. This is particularly useful when acquiring information from small complex structures.

Fibers are widely used in illumination applications. They are used as light guides in medical and other applications where bright light needs to be shone on a target without a clear line-of-sight path. In some buildings, optical fibers are used to route sunlight from the roof to other parts of the building. Optical fiber illumination is also used for decorative applications, including signs, art, and artificial Christmas trees. Swarovski boutiques use optical fibers to illuminate their crystal showcases from many different angles while only employing one light source. Optical fiber is an intrinsic part of the light-transmitting concrete building product, LiTraCon.

Optical fiber is also used in imaging optics. A coherent bundle of fibers is used, sometimes along with lenses, for a long, thin imaging device called an endoscope, which is used to view objects through a small hole. Medical endoscopes are used for minimally invasive exploratory or surgical procedures (endoscopy). Industrial endoscopes are used for inspecting anything hard to reach, such as jet engine interiors.

An optical fiber doped with certain rare-earth elements such as erbium can be used as the gain medium of a laser or optical amplifier. Rare-earth doped optical fibers can be used to provide signal amplification by splicing a short section of doped fiber into a regular (undoped) optical fiber line. The doped fiber is optically pumped with a second laser wavelength that is coupled into the line in addition to the signal wave. Both wavelengths of light are transmitted through the doped fiber, which transfers energy from the second pump wavelength to the signal wave. The process that causes the amplification is stimulated emission.

Optical fibers doped with a wavelength shifter are used to collect scintillation light in physics experiments.

Optical fiber can be used to supply a low level of power (around one watt) to electronics situated in a difficult electrical environment. Examples of this are electronics in high-powered antenna elements and measurement devices used in high voltage transmission equipment.

An optical fiber is a cylindrical dielectric waveguide that transmits light along its axis, by the process of total internal reflection. The fiber consists of a core surrounded by a cladding layer. To confine the optical signal in the core, the refractive index of the core must be greater than that of the cladding. The boundary between the core and cladding may either be abrupt, in step-index fiber, or gradual, in graded-index fiber.

Fiber with large (greater than 10 μm) core diameter may be analyzed by geometric optics. Such fiber is called multimode fiber, from the electromagnetic analysis. In a step-index multimode fiber, rays of light are guided along the fiber core by total internal reflection. Rays that meet the core-cladding boundary at a high angle (measured relative to a line normal to the boundary), greater than the critical angle for this boundary, are completely reflected. The critical angle (minimum angle for total internal reflection) is determined by the difference in index of refraction between the core and cladding materials. Rays that meet the boundary at a low angle are refracted from the core into the cladding, and do not convey light and hence information along the fiber. The critical angle determines the acceptance angle of the fiber, often reported as a numerical aperture. A high numerical aperture allows light to propagate down the fiber in rays both close to the axis and at various angles, allowing efficient coupling of light into the fiber. However, this high numerical aperture increases the amount of dispersion as rays at different angles have different path lengths and therefore take different times to traverse the fiber. A low numerical aperture may therefore be desirable.
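
The critical angle and numerical aperture mentioned above follow directly from the two refractive indices. The small sketch below computes both for typical silica-fiber index values, which are assumed here purely for illustration.

    # Critical angle and numerical aperture of a step-index fiber, computed
    # from the core and cladding refractive indices. The index values are
    # typical silica-fiber numbers, assumed here for illustration.
    import math

    def critical_angle_deg(n_core: float, n_clad: float) -> float:
        return math.degrees(math.asin(n_clad / n_core))

    def numerical_aperture(n_core: float, n_clad: float) -> float:
        return math.sqrt(n_core**2 - n_clad**2)

    n_core, n_clad = 1.480, 1.460
    print(f"critical angle: {critical_angle_deg(n_core, n_clad):.1f} degrees")
    print(f"numerical aperture: {numerical_aperture(n_core, n_clad):.3f}")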

In graded-index fiber, the index of refraction in the core decreases continuously between the axis and the cladding. This causes light rays to bend smoothly as they approach the cladding, rather than reflecting abruptly from the core-cladding boundary. The resulting curved paths reduce multi-path dispersion because high angle rays pass more through the lower-index periphery of the core, rather than the high-index center. The index profile is chosen to minimize the difference in axial propagation speeds of the various rays in the fiber. This ideal index profile is very close to a parabolic relationship between the index and the distance from the axis.

Fiber with a core diameter less than about ten times the wavelength of the propagating light cannot be modeled using geometric optics. Instead, it must be analyzed as an electromagnetic structure, by solution of Maxwell's equations as reduced to the electromagnetic wave equation. The electromagnetic analysis may also be required to understand behaviors such as speckle that occur when coherent light propagates in multi-mode fiber. As an optical waveguide, the fiber supports one or more confined transverse modes by which light can propagate along the fiber. Fiber supporting only one mode is called single-mode or mono-mode fiber. The behavior of larger-core multimode fiber can also be modeled using the wave equation, which shows that such fiber supports more than one mode of propagation (hence the name). The results of such modeling of multi-mode fiber approximately agree with the predictions of geometric optics, if the fiber core is large enough to support more than a few modes.

The waveguide analysis shows that the light energy in the fiber is not completely confined in the core. Instead, especially in single-mode fibers, a significant fraction of the energy in the bound mode travels in the cladding as an evanescent wave.

The most common type of single-mode fiber has a core diameter of 8 to 10 μm and is designed for use in the near infrared. The mode structure depends on the wavelength of the light used, so that this fiber actually supports a small number of additional modes at visible wavelengths. Multi-mode fiber, by comparison, is manufactured with core diameters as small as 50 micrometres and as large as hundreds of micrometres.
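
One common way to quantify how the mode structure depends on wavelength is the normalized frequency, or V number, of a step-index fiber: V = (2 * pi * a / wavelength) * NA, with single-mode operation when V is below about 2.405. The sketch below evaluates this for a roughly 9 um core at an infrared and a visible wavelength; the index values are illustrative assumptions, not data from any particular fiber.

    # Normalized frequency (V number) of a step-index fiber:
    #   V = (2 * pi * a / wavelength) * NA, single-mode when V < 2.405.
    # Core radius, wavelengths, and indices below are illustrative values.
    import math

    def v_number(core_radius_um: float, wavelength_um: float,
                 n_core: float, n_clad: float) -> float:
        na = math.sqrt(n_core**2 - n_clad**2)
        return 2 * math.pi * core_radius_um / wavelength_um * na

    # A ~9 um core (4.5 um radius) at 1550 nm (infrared) vs. 650 nm (visible red):
    for wavelength in (1.55, 0.65):
        v = v_number(4.5, wavelength, 1.4500, 1.4447)
        print(f"{wavelength} um: V = {v:.2f} ({'single-mode' if v < 2.405 else 'multi-mode'})")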

Some special-purpose optical fiber is constructed with a non-cylindrical core and/or cladding layer, usually with an elliptical or rectangular cross-section. These include polarization-maintaining fiber and fiber designed to suppress whispering gallery mode propagation.

Photonic crystal fiber is made with a regular pattern of index variation (often in the form of cylindrical holes that run along the length of the fiber). Such fiber uses diffraction effects instead of or in addition to total internal reflection, to confine light to the fiber's core. The properties of the fiber can be tailored to a wide variety of applications.

Glass optical fibers are almost always made from silica, but some other materials, such as fluorozirconate, fluoroaluminate, and chalcogenide glasses, are used for longer-wavelength infrared applications. Like other glasses, these glasses have a refractive index of about 1.5. Typically the difference between core and cladding is less than one percent.

Plastic optical fibers (POF) are commonly step-index multimode fibers with a core diameter of 0.5 mm or larger. POF typically have higher attenuation coefficients than glass fibers, 1 dB/m or higher, and this high attenuation limits the range of POF-based systems.
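
The practical effect of those attenuation figures can be seen with a simple decibel budget: reach is roughly the available power budget divided by the loss per unit length. The sketch below uses a 20 dB budget and typical published loss figures (about 1 dB/m for POF versus roughly 0.2 dB/km for low-loss silica), all assumed here for illustration only.

    # Simple decibel link budget: how far can a signal travel before it drops
    # below the receiver's sensitivity?  distance = power_budget / attenuation.
    # Attenuation figures are typical published values, used for illustration:
    # plastic optical fiber ~1 dB/m (1000 dB/km), low-loss silica ~0.2 dB/km.
    def max_reach_km(power_budget_db: float, attenuation_db_per_km: float) -> float:
        return power_budget_db / attenuation_db_per_km

    BUDGET_DB = 20.0  # e.g. 0 dBm transmitter, -20 dBm receiver sensitivity (assumed)
    print(f"POF   : {max_reach_km(BUDGET_DB, 1000.0) * 1000:.0f} m")   # ~20 m
    print(f"silica: {max_reach_km(BUDGET_DB, 0.2):.0f} km")            # ~100 km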

Standard optical fibers are made by first constructing a large-diameter preform, with a carefully controlled refractive index profile, and then pulling the preform to form the long, thin optical fiber. The preform is commonly made by three chemical vapor deposition methods: inside vapor deposition, outside vapor deposition, and vapor axial deposition.

With inside vapor deposition, a hollow glass tube approximately 40 cm in length known as a "preform" is placed horizontally and rotated slowly on a lathe, and gases such as silicon tetrachloride (SiCl4) or germanium tetrachloride (GeCl4) are injected with oxygen in the end of the tube. The gases are then heated by means of an external hydrogen burner, bringing the temperature of the gas up to 1900 kelvins, where the tetrachlorides react with oxygen to produce silica or germania (germanium oxide) particles. When the reaction conditions are chosen to allow this reaction to occur in the gas phase throughout the tube volume, in contrast to earlier techniques where the reaction occurred only on the glass surface, this technique is called modified chemical vapor deposition.

The oxide particles then agglomerate to form large particle chains, which subsequently deposit on the walls of the tube as soot. The deposition is due to the large difference in temperature between the gas core and the wall causing the gas to push the particles outwards (this is known as thermophoresis). The torch is then traversed up and down the length of the tube to deposit the material evenly. After the torch has reached the end of the tube, it is then brought back to the beginning of the tube and the deposited particles are then melted to form a solid layer. This process is repeated until a sufficient amount of material has been deposited. For each layer the composition can be modified by varying the gas composition, resulting in precise control of the finished fiber's optical properties.

In outside vapor deposition or vapor axial deposition, the glass is formed by flame hydrolysis, a reaction in which silicon tetrachloride and germanium tetrachloride are oxidized by reaction with water (H2O) in an oxyhydrogen flame. In outside vapor deposition the glass is deposited onto a solid rod, which is removed before further processing. In vapor axial deposition, a short seed rod is used, and a porous preform, whose length is not limited by the size of the source rod, is built up on its end. The porous preform is consolidated into a transparent, solid preform by heating to about 1800 kelvins.

The preform, however constructed, is then placed in a device known as a drawing tower, where the preform tip is heated and the optic fiber is pulled out as a string. By measuring the resultant fiber width, the tension on the fiber can be controlled to maintain the fiber thickness.

In practical fibers, the cladding is usually coated with a tough resin buffer layer, which may be further surrounded by a jacket layer, usually plastic. These layers add strength to the fiber but do not contribute to its optical wave guide properties. Rigid fiber assemblies sometimes put light-absorbing ("dark") glass between the fibers, to prevent light that leaks out of one fiber from entering another. This reduces cross-talk between the fibers, or reduces flare in fiber bundle imaging applications.

Modern cables come in a wide variety of sheathings and armor, designed for applications such as direct burial in trenches, dual use as power lines, installation in conduit, lashing to aerial telephone poles, submarine installation, or insertion in paved streets. In recent years the cost of small fiber-count pole-mounted cables has greatly decreased due to the high Japanese and South Korean demand for fiber to the home (FTTH) installations.

Traditional fiber's loss increases greatly if the fiber is bent with a radius smaller than around 30 mm. "Bendable fibers", targeted towards easier installation in home environments, have been standardised as ITU-T G.657. This type of fiber can be bent with a radius as low as 7.5 mm without adverse impact. Even more bendable fibers have been developed. Bendable fiber may also be resistant to fiber hacking, in which the signal in a fiber is surreptitiously monitored by bending the fiber and detecting the leakage.

Optical fibers are connected to terminal equipment by optical fiber connectors. These connectors are usually of a standard type such as FC, SC, ST, LC, or MTRJ.

Optical fibers may be connected to each other by connectors or by splicing, that is, joining two fibers together to form a continuous optical waveguide. The generally accepted splicing method is arc fusion splicing, which melts the fiber ends together with an electric arc. For quicker fastening jobs, a "mechanical splice" is used.

Fusion splicing is done with a specialized instrument that typically operates as follows: The two cable ends are fastened inside a splice enclosure that will protect the splices, and the fiber ends are stripped of their protective polymer coating (as well as the more sturdy outer jacket, if present). The ends are cleaved (cut) with a precision cleaver to make them perpendicular, and are placed into special holders in the splicer. The splice is usually inspected via a magnified viewing screen to check the cleaves before and after the splice. The splicer uses small motors to align the end faces together, and emits a small spark between electrodes at the gap to burn off dust and moisture. Then the splicer generates a larger spark that raises the temperature above the melting point of the glass, fusing the ends together permanently. The location and energy of the spark is carefully controlled so that the molten core and cladding don't mix, and this minimizes optical loss. A splice loss estimate is measured by the splicer, by directing light through the cladding on one side and measuring the light leaking from the cladding on the other side. A splice loss under 0.1 dB is typical. The complexity of this process makes fiber splicing much more difficult than splicing copper wire.

Mechanical fiber splices are designed to be quicker and easier to install, but there is still the need for stripping, careful cleaning and precision cleaving. The fiber ends are aligned and held together by a precision-made sleeve, often using a clear index-matching gel that enhances the transmission of light across the joint. Such joints typically have higher optical loss and are less robust than fusion splices, especially if the gel is used. All splicing techniques involve the use of an enclosure into which the splice is placed for protection afterward.
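
To see how those splice losses add up in practice, here is a rough link-budget sketch in Python. The 0.1 dB per fusion splice comes from the description above; the per-kilometer attenuation and per-connector loss figures are illustrative assumptions, not values from this article.

```python
# Rough optical link-budget sketch. The 0.1 dB fusion-splice figure comes
# from the text above; the attenuation and connector-loss values are
# illustrative assumptions.

def link_loss_db(length_km, fusion_splices, connectors,
                 atten_db_per_km=0.35, splice_db=0.1, connector_db=0.5):
    """Total loss of a fiber span in decibels."""
    return (length_km * atten_db_per_km
            + fusion_splices * splice_db
            + connectors * connector_db)

def received_power_dbm(tx_power_dbm, loss_db):
    """Received power is simply transmit power minus the span loss."""
    return tx_power_dbm - loss_db

loss = link_loss_db(length_km=40, fusion_splices=8, connectors=2)
print(loss)                           # 40*0.35 + 8*0.1 + 2*0.5 = 15.8 dB
print(received_power_dbm(0.0, loss))  # a 0 dBm transmitter delivers -15.8 dBm
```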

Fibers are terminated in connectors so that the fiber end is held at the end face precisely and securely. A fiber-optic connector is basically a rigid cylindrical barrel surrounded by a sleeve that holds the barrel in its mating socket. It can be push and click, turn and latch, or threaded. A typical connector is installed by preparing the fiber end and inserting it into the rear of the connector body. Quick-set adhesive is usually used so the fiber is held securely, and a strain relief is secured to the rear. Once the adhesive has set, the fiber's end is polished to a mirror finish. Various polish profiles are used, depending on the type of fiber and the application. For singlemode fiber, the fiber ends are typically polished with a slight curvature, such that when the connectors are mated the fibers touch only at their cores. This is known as a "physical contact" (PC) polish. The curved surface may be polished at an angle, to make an "angled physical contact" (APC) connection. Such connections have higher loss than PC connections, but greatly reduced back reflection, because light that reflects from the angled surface leaks out of the fiber core; the resulting loss in signal strength is known as gap loss. APC fiber ends have low back reflection even when disconnected.

It often becomes necessary to align an optical fiber with another optical fiber or an optical device such as a light-emitting diode, a laser diode, or an optoelectronic device such as a modulator. This can involve either carefully aligning the fiber and placing it in contact with the device to which it is to couple, or can use a lens to allow coupling over an air gap. In some cases the end of the fiber is polished into a curved form that is designed to allow it to act as a lens.

In a laboratory environment, the fiber end is usually aligned to the device or other fiber with a fiber launch system that uses a microscope objective lens to focus the light down to a fine point. A precision translation stage (micro-positioning table) is used to move the lens, fiber, or device to allow the coupling efficiency to be optimized.

At high optical intensities, above 2 megawatts per square centimetre, when a fiber is subjected to a shock or is otherwise suddenly damaged, a fiber fuse can occur. The reflection from the damage vaporizes the fiber immediately before the break, and this new defect remains reflective so that the damage propagates back toward the transmitter at 1–3 meters per second. The open fiber control system, which ensures laser eye safety in the event of a broken fiber, can also effectively halt propagation of the fiber fuse. In situations, such as undersea cables, where high power levels might be used without the need for open fiber control, a "fiber fuse" protection device at the transmitter can break the circuit to prevent any damage.
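
For a sense of scale, the sketch below estimates the intensity inside a single-mode core and compares it with the roughly 2 MW/cm² fuse threshold mentioned above; the 9 µm mode-field diameter is an assumption chosen only for illustration.

```python
import math

# Estimate the optical intensity in a fiber core and compare it with the
# ~2 MW/cm^2 fiber-fuse threshold mentioned above.
# The 9-micrometre mode-field diameter is an illustrative assumption.

FUSE_THRESHOLD_W_PER_CM2 = 2e6

def core_intensity_w_per_cm2(power_w: float, mode_field_diameter_um: float) -> float:
    radius_cm = (mode_field_diameter_um / 2) * 1e-4   # micrometres -> centimetres
    area_cm2 = math.pi * radius_cm ** 2
    return power_w / area_cm2

intensity = core_intensity_w_per_cm2(power_w=2.0, mode_field_diameter_um=9.0)
print(f"{intensity:.2e} W/cm^2")              # ~3.1e6 W/cm^2
print(intensity > FUSE_THRESHOLD_W_PER_CM2)   # True: 2 W is already above threshold
```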

ETHERNET DEVICE

Ethernet is a family of frame-based computer networking technologies for local area networks (LANs). The name comes from the physical concept of the ether. It defines a number of wiring and signaling standards for the physical layer, a means of network access at the Media Access Control (MAC)/data link layer, and a common addressing format.

Ethernet is standardized as IEEE 802.3. The combination of the twisted-pair versions of Ethernet for connecting end systems to the network, along with the fiber optic versions for site backbones, is the most widespread wired LAN technology. It has been in use from around 1980[1] to the present, largely replacing competing LAN standards such as token ring, FDDI, and ARCNET. In recent years, Wi-Fi, the wireless LAN standardized by IEEE 802.11, has become prevalent in home and small office networks and is augmenting Ethernet in larger installations.

Ethernet was originally developed at Xerox PARC in 1973–1975. Robert Metcalfe and David Boggs wrote and presented their "Draft Ethernet Overview" before March 1974. In March 1974, R.Z. Bachrach wrote a memo to Metcalfe and Boggs and their management, stating that "technically or conceptually there is nothing new in your proposal" and that "analysis would show that your system would be a failure." This analysis was flawed in that it ignored the "channel capture effect", though this was not understood until 1994. In 1975, Xerox filed a patent application listing Metcalfe and Boggs, plus Chuck Thacker and Butler Lampson, as inventors (U.S. Patent 4,063,220 : Multipoint data communication system with collision detection). In 1976, after the system was deployed at PARC, Metcalfe and Boggs published a seminal paper.

The experimental Ethernet described in that paper ran at 3 Mbit/s, and had 8-bit destination and source address fields, so Ethernet addresses were not the global addresses they are today. By software convention, the 16 bits after the destination and source address fields were a packet type field, but, as the paper says, "different protocols use disjoint sets of packet types", so those were packet types within a given protocol, rather than the packet type in current Ethernet which specifies the protocol being used.

Metcalfe left Xerox in 1979 to promote the use of personal computers and local area networks (LANs), forming 3Com. He convinced DEC, Intel, and Xerox to work together to promote Ethernet as a standard, the so-called "DIX" standard, for "Digital/Intel/Xerox"; it standardized the 10 megabits/second Ethernet, with 48-bit destination and source addresses and a global 16-bit type field. The standard was first published on September 30, 1980. It competed with two largely proprietary systems, token ring and ARCNET, but those soon found themselves buried under a tidal wave of Ethernet products. In the process, 3Com became a major company.

Twisted-pair Ethernet systems have been developed since the mid-80s, beginning with StarLAN, but becoming widely known with 10BASE-T. These systems replaced the coaxial cable on which early Ethernets were deployed with a system of hubs linked with unshielded twisted pair (UTP), ultimately replacing the CSMA/CD scheme in favor of a switched full duplex system offering higher performance.

Ethernet was originally based on the idea of computers communicating over a shared coaxial cable acting as a broadcast transmission medium. The methods used show some similarities to radio systems, although there are fundamental differences, such as the fact that it is much easier to detect collisions in a cable broadcast system than a radio broadcast. The common cable providing the communication channel was likened to the ether and it was from this reference that the name "Ethernet" was derived.

From this early and comparatively simple concept, Ethernet evolved into the complex networking technology that today underlies most LANs. The coaxial cable was replaced with point-to-point links connected by Ethernet hubs and/or switches to reduce installation costs, increase reliability, and enable point-to-point management and troubleshooting. StarLAN was the first step in the evolution of Ethernet from a coaxial cable bus to a hub-managed, twisted-pair network. The advent of twisted-pair wiring dramatically lowered installation costs relative to competing technologies, including the older Ethernet technologies.

Above the physical layer, Ethernet stations communicate by sending each other data packets, blocks of data that are individually sent and delivered. As with other IEEE 802 LANs, each Ethernet station is given a single 48-bit MAC address, which is used both to specify the destination and the source of each data packet. Network interface cards (NICs) or chips normally do not accept packets addressed to other Ethernet stations. Adapters generally come programmed with a globally unique address, but this can be overridden, either to avoid an address change when an adapter is replaced, or to use locally administered addresses.
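
Because the 48-bit address carries a couple of flag bits as well as the vendor prefix, a short sketch of how those bits are commonly interpreted may help; the example address below is made up.

```python
# Decode a 48-bit Ethernet MAC address. In the first octet, the least
# significant bit marks a group (multicast) address and the next bit marks
# a locally administered address. The example address is made up.

def parse_mac(mac: str) -> dict:
    octets = bytes(int(part, 16) for part in mac.split(":"))
    if len(octets) != 6:
        raise ValueError("a MAC address has exactly six octets")
    first = octets[0]
    return {
        "multicast": bool(first & 0x01),             # individual/group bit
        "locally_administered": bool(first & 0x02),  # universal/local bit
        "oui": octets[:3].hex(":"),                  # vendor prefix for global addresses
    }

print(parse_mac("02:1a:2b:3c:4d:5e"))
# {'multicast': False, 'locally_administered': True, 'oui': '02:1a:2b'}
```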

Despite the significant changes in Ethernet from a thick coaxial cable bus running at 10 Mbit/s to point-to-point links running at 1 Gbit/s and beyond, all generations of Ethernet (excluding early experimental versions) share the same frame formats (and hence the same interface for higher layers), and can be readily interconnected.

Due to the ubiquity of Ethernet, the ever-decreasing cost of the hardware needed to support it, and the reduced panel space needed by twisted pair Ethernet, most manufacturers now build the functionality of an Ethernet card directly into PC motherboards, obviating the need for installation of a separate network card.

CSMA/CD shared medium Ethernet

Ethernet originally used a shared coaxial cable (the shared medium) winding around a building or campus to every attached machine. A scheme known as carrier sense multiple access with collision detection (CSMA/CD) governed the way the computers shared the channel. This scheme was simpler than the competing token ring or token bus technologies. When a computer wanted to send some information, it used the following algorithm:

Main procedure

1. Frame ready for transmission.
2. Is the medium idle? If not, wait until it becomes idle, then wait the interframe gap period (9.6 µs in 10 Mbit/s Ethernet).
3. Start transmitting.
4. Did a collision occur? If so, go to collision detected procedure.
5. Reset retransmission counters and end frame transmission.

Collision detected procedure

1. Continue transmission (sending a jam signal) until the minimum packet time is reached, to ensure that all receivers detect the collision.
2. Increment retransmission counter.
3. Was the maximum number of transmission attempts reached? If so, abort transmission.
4. Calculate and wait random backoff period based on number of collisions.
5. Re-enter main procedure at stage 1.

This can be likened to what happens at a dinner party, where all the guests talk to each other through a common medium (the air). Before speaking, each guest politely waits for the current speaker to finish. If two guests start speaking at the same time, both stop and wait for short, random periods of time (in Ethernet, this time is generally measured in microseconds). The hope is that by each choosing a random period of time, both guests will not choose the same time to try to speak again, thus avoiding another collision. Exponentially increasing back-off times (determined using the truncated binary exponential backoff algorithm) are used when there is more than one failed attempt to transmit.
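
The backoff step in the collision procedure above can be sketched in a few lines of Python. The parameters used here, a 512-bit slot time (51.2 µs at 10 Mbit/s), truncation after 10 collisions, and a limit of 16 attempts, are the classic CSMA/CD values rather than anything stated in this article.

```python
import random

# Truncated binary exponential backoff, as used by CSMA/CD after a
# collision. Parameters follow the classic 10 Mbit/s values: a 512-bit
# slot time (51.2 microseconds), truncation after 10 collisions, and at
# most 16 transmission attempts.

SLOT_TIME_US = 51.2
MAX_ATTEMPTS = 16
TRUNCATION = 10

def backoff_delay_us(collision_count: int) -> float:
    """Pick a random wait before retransmitting after the Nth collision."""
    if collision_count >= MAX_ATTEMPTS:
        raise RuntimeError("too many collisions: abort transmission")
    exponent = min(collision_count, TRUNCATION)
    slots = random.randint(0, 2 ** exponent - 1)
    return slots * SLOT_TIME_US

# After the 3rd collision, the station waits between 0 and 7 slot times.
print(backoff_delay_us(3))
```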

Computers were connected to an Attachment Unit Interface (AUI) transceiver, which was in turn connected to the cable (later, with thin Ethernet, the transceiver was integrated into the network adapter). While a simple passive wire was highly reliable for small Ethernets, it was not reliable for large extended networks, where damage to the wire in a single place, or a single bad connector, could make the whole Ethernet segment unusable. Multipoint systems are also prone to very strange failure modes when an electrical discontinuity reflects the signal in such a manner that some nodes work properly while others work slowly (because of excessive retries) or not at all; such faults could be much more painful to diagnose than a complete failure of the segment. Debugging such failures often involved several people crawling around wiggling connectors while others watched the displays of computers running a ping command and shouted out reports as performance changed.

Since all communications happen on the same wire, any information sent by one computer is received by all, even if that information is intended for just one destination. The network interface card interrupts the CPU only when applicable packets are received: the card ignores information not addressed to it unless it is put into "promiscuous mode". This "one speaks, all listen" property is a security weakness of shared-medium Ethernet, since a node on an Ethernet network can eavesdrop on all traffic on the wire if it so chooses. Use of a single cable also means that the bandwidth is shared, so that network traffic can slow to a crawl when, for example, the network and nodes restart after a power failure.
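
The "one speaks, all listen" behavior boils down to a simple acceptance test in the adapter, sketched below; multicast filtering is omitted for brevity and the example addresses are made up.

```python
# Sketch of the acceptance decision a NIC makes on shared-medium Ethernet:
# every frame is seen, but only frames addressed to this station (or to the
# broadcast address, or any frame at all in promiscuous mode) are passed up.

BROADCAST = "ff:ff:ff:ff:ff:ff"

def accept_frame(dest_mac: str, my_mac: str, promiscuous: bool = False) -> bool:
    if promiscuous:
        return True                      # eavesdrop on everything on the wire
    return dest_mac.lower() in (my_mac.lower(), BROADCAST)

print(accept_frame("00:11:22:33:44:55", my_mac="00:11:22:33:44:55"))  # True
print(accept_frame("66:77:88:99:aa:bb", my_mac="00:11:22:33:44:55"))  # False
print(accept_frame("66:77:88:99:aa:bb", my_mac="00:11:22:33:44:55",
                   promiscuous=True))                                 # True
```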

For signal degradation and timing reasons, coaxial Ethernet segments had a restricted size which depended on the medium used. For example, 10BASE5 coax cables had a maximum length of 500 meters (1,640 feet). Also, as was the case with most other high-speed buses, Ethernet segments had to be terminated with a resistor at each end. For coaxial-cable-based Ethernet, each end of the cable had a 50-ohm resistor attached. Typically this resistor was built into a male BNC or N connector and attached to the last device on the bus, or, if vampire taps were in use, to the end of the cable just past the last device. If termination was not done, or if there was a break in the cable, the AC signal on the bus was reflected, rather than dissipated, when it reached the end. This reflected signal was indistinguishable from a collision, and so no communication could take place.

A greater length could be obtained by an Ethernet repeater, which took the signal from one Ethernet cable and repeated it onto another cable. If a collision was detected, the repeater transmitted a jam signal onto all ports to ensure collision detection. Repeaters could be used to connect segments such that there were up to five Ethernet segments between any two hosts, three of which could have attached devices. Repeaters could detect an improperly terminated link from the continuous collisions and stop forwarding data from it. Hence they alleviated the problem of cable breakages: when an Ethernet coax segment broke, all devices on that segment were unable to communicate, but repeaters allowed the other segments to continue working. Depending on which segment was broken and the layout of the network, the resulting partitioning could still leave some segments unable to reach important servers and thus effectively useless.

People recognized the advantages of cabling in a star topology, primarily that only faults at the star point will result in a badly partitioned network, and network vendors started creating repeaters having multiple ports, thus reducing the number of repeaters required at the star point. Multiport Ethernet repeaters became known as "Ethernet hubs". Network vendors such as DEC and SynOptics sold hubs that connected many 10BASE2 thin coaxial segments. There were also "multi-port transceivers" or "fan-outs". These could be connected to each other and/or a coax backbone. The best-known early example was DEC's DELNI. These devices allowed multiple hosts with AUI connections to share a single transceiver. They also allowed creation of a small standalone Ethernet segment without using a coaxial cable.

Ethernet on unshielded twisted-pair cables (UTP), beginning with StarLAN and continuing with 10BASE-T, was designed for point-to-point links only and all termination was built into the device. This changed hubs from a specialist device used at the center of large networks to a device that every twisted pair-based network with more than two machines had to use. The tree structure that resulted from this made Ethernet networks more reliable by preventing faults with (but not deliberate misbehavior of) one peer or its associated cable from affecting other devices on the network, although a failure of a hub or an inter-hub link could still affect lots of users. Also, since twisted pair Ethernet is point-to-point and terminated inside the hardware, the total empty panel space required around a port is much reduced, making it easier to design hubs with lots of ports and to integrate Ethernet onto computer motherboards.

Despite the physical star topology, hubbed Ethernet networks still use half-duplex and CSMA/CD, with only minimal activity by the hub, primarily the Collision Enforcement signal, in dealing with packet collisions. Every packet is sent to every port on the hub, so bandwidth and security problems aren't addressed. The total throughput of the hub is limited to that of a single link and all links must operate at the same speed.

Collisions reduce throughput by their very nature. In the worst case, when there are lots of hosts with long cables that attempt to transmit many short frames, excessive collisions can reduce throughput dramatically. However, a Xerox report in 1980 summarized the results of having 20 fast nodes attempting to transmit packets of various sizes as quickly as possible on the same Ethernet segment.[4] The results showed that, even for the smallest Ethernet frames (64B), 90% throughput on the LAN was the norm. This is in comparison with token passing LANs (token ring, token bus), all of which suffer throughput degradation as each new node comes into the LAN, due to token waits.

This report was wildly controversial, as modeling showed that collision-based networks became unstable under loads as low as 40% of nominal capacity. Many early researchers failed to understand the subtleties of the CSMA/CD protocol and how important it was to get the details right, and were really modeling somewhat different networks (usually not as good as real Ethernet).

While repeaters could isolate some aspects of Ethernet segments, such as cable breakages, they still forwarded all traffic to all Ethernet devices. This created practical limits on how many machines could communicate on an Ethernet network. Also, because the entire network was one collision domain and all hosts had to be able to detect collisions anywhere on the network, the number of repeaters between the farthest nodes was limited. Finally, segments joined by repeaters all had to operate at the same speed, making phased-in upgrades impossible.

To alleviate these problems, bridging was created to communicate at the data link layer while isolating the physical layer. With bridging, only well-formed packets are forwarded from one Ethernet segment to another; collisions and packet errors are isolated. Bridges learn where devices are, by watching MAC addresses, and do not forward packets across segments when they know the destination address is not located in that direction.

Prior to discovery of network devices on the different segments, Ethernet bridges and switches work somewhat like Ethernet hubs, passing all traffic between segments. However, as the switch discovers the addresses associated with each port, it forwards network traffic only to the necessary segments, improving overall performance. Broadcast traffic is still forwarded to all network segments. Bridges also overcame the limits on total segments between two hosts and allowed the mixing of speeds, both of which became very important with the introduction of Fast Ethernet.
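
That learn-then-forward behavior can be captured in a small table keyed by MAC address, as in the sketch below; address aging and spanning tree are deliberately left out.

```python
# Minimal sketch of the learning/forwarding logic of an Ethernet bridge or
# switch: learn the source address on the arrival port, then forward to the
# known port or flood. Aging and spanning tree are omitted.

class LearningBridge:
    def __init__(self, ports):
        self.ports = set(ports)
        self.table = {}                      # MAC address -> port

    def handle_frame(self, src_mac, dst_mac, in_port):
        self.table[src_mac] = in_port        # learn where the sender lives
        if dst_mac in self.table:
            out = self.table[dst_mac]
            return [out] if out != in_port else []   # forward, or drop if local
        return sorted(self.ports - {in_port})        # unknown destination: flood

bridge = LearningBridge(ports=[1, 2, 3])
print(bridge.handle_frame("aa", "bb", in_port=1))  # flood: [2, 3]
print(bridge.handle_frame("bb", "aa", in_port=2))  # now known: [1]
print(bridge.handle_frame("aa", "bb", in_port=1))  # learned earlier: [2]
```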

Early bridges examined each packet one by one using software on a CPU, and some of them were significantly slower than hubs (multi-port repeaters) at forwarding traffic, especially when handling many ports at the same time. In 1989 the networking company Kalpana introduced their EtherSwitch, the first Ethernet switch. An Ethernet switch does bridging in hardware, allowing it to forward packets at full wire speed. It is important to remember that the term switch was invented by device manufacturers and does not appear in the 802.3 standard. Functionally, the two terms are interchangeable.

Since packets are typically only delivered to the port they are intended for, traffic on a switched Ethernet is slightly less public than on shared-medium Ethernet. Despite this, switched Ethernet should still be regarded as an insecure network technology, because it is easy to subvert switched Ethernet systems by means such as ARP spoofing and MAC flooding. The bandwidth advantages, the slightly better isolation of devices from each other, the ability to easily mix different speeds of devices and the elimination of the chaining limits inherent in non-switched Ethernet have made switched Ethernet the dominant network technology.

When a twisted pair or fiber link segment is used and neither end is connected to a hub, full-duplex Ethernet becomes possible over that segment. In full duplex mode both devices can transmit and receive to/from each other at the same time, and there is no collision domain. This doubles the aggregate bandwidth of the link and is sometimes advertised as double the link speed (e.g. 200 Mbit/s) to account for this. However, this is misleading as performance will only double if traffic patterns are symmetrical (which in reality they rarely are). The elimination of the collision domain also means that all the link's bandwidth can be used and that segment length is not limited by the need for correct collision detection (this is most significant with some of the fiber variants of Ethernet).

In the early days of Fast Ethernet, Ethernet switches were relatively expensive devices. Hubs, however, suffered from the problem that if any 10BASE-T devices were connected, the whole system had to run at 10 Mbit/s. Therefore a compromise between a hub and a switch appeared, known as a dual-speed hub. These devices consisted of an internal two-port switch dividing the 10BASE-T (10 Mbit/s) and 100BASE-T (100 Mbit/s) segments. A device would typically have more than two physical ports. When a network device becomes active on any of the physical ports, the hub attaches it to either the 10BASE-T segment or the 100BASE-T segment, as appropriate. This avoided the need for an all-or-nothing migration from 10BASE-T to 100BASE-T networks. These devices are considered hubs rather than switches because traffic between devices connected at the same speed is not switched.

Simple switched Ethernet networks, while an improvement over hub based Ethernet, suffer from a number of issues:

* They suffer from single points of failure. If any link fails, some devices will be unable to communicate with other devices, and if the failed link is in a central location, many users can be cut off from the resources they require.
* It is possible to trick switches or hosts into sending data to a machine even if it is not intended for it, as indicated above.
* Large amounts of broadcast traffic, whether malicious, accidental, or simply a side effect of network size, can flood slower links and/or systems.
o It is possible for any host to flood the network with broadcast traffic, forming a denial of service attack against any hosts that run at the same speed as, or a lower speed than, the attacking device.
o As the network grows, normal broadcast traffic takes up an ever greater amount of bandwidth.
o If switches are not multicast aware, multicast traffic will end up treated like broadcast traffic, because it is directed at a MAC address with no associated port.
o If switches discover more MAC addresses than they can store (either through network size or through an attack), some addresses must inevitably be dropped, and traffic to those addresses will be treated the same way as traffic to unknown addresses - that is, essentially the same as broadcast traffic (this issue is known as failopen).
* They suffer from bandwidth choke points where a lot of traffic is forced down a single link.

Some switches offer a variety of tools to combat these issues including:

* Spanning-tree protocol to maintain the active links of the network as a tree while allowing physical loops for redundancy.
* Various port protection features, as it is far more likely an attacker will be on an end system port than on a switch-switch link.
* VLANs to keep different classes of users separate while using the same physical infrastructure.
* Fast routing at higher levels to route between those VLANs.
* Link aggregation to add bandwidth to overloaded links and to provide some measure of redundancy, although the links won't protect against switch failure because they connect the same pair of switches.

The autonegotiation standard provides no way to detect the duplex setting if the other device is not also set to autonegotiate. When two interfaces are connected and set to different duplex modes, the effect of the duplex mismatch is a network that works, but much more slowly than at its nominal speed. The primary rule for avoiding this is not to set one end of a connection to a forced full-duplex setting while the other end is set to autonegotiation.

Many different modes of operation (10BASE-T half duplex, 10BASE-T full duplex, 100BASE-TX half duplex, …) exist for Ethernet over twisted-pair cable using 8P8C modular connectors (not to be confused with FCC's RJ45), and most devices are capable of several modes of operation. In 1995, a standard was released that allows two network interfaces connected to each other to autonegotiate the best possible shared mode of operation. This works well when every device is set to autonegotiate. The autonegotiation standard contained a mechanism for detecting the speed, but not the duplex setting, of Ethernet peers that did not use autonegotiation.

Interoperability problems led network administrators to manually set the mode of operation of interfaces on network devices: some device would fail to autonegotiate and therefore had to be forced into one setting or another. This often led to duplex mismatches. In particular, when two interfaces are connected with one set to autonegotiation and one set to full duplex mode, the autonegotiation process fails and the autonegotiating end assumes half duplex. The interface in full duplex mode then transmits while it is receiving, and the interface in half duplex mode interprets this as a collision, gives up on transmitting its packet, and halts transmission for an amount of time determined by its random backoff algorithm. When both interfaces try to transmit again, they interfere again, and the backoff strategy may result in longer and longer waits before the next attempt; eventually a transmission succeeds, but the collisions soon resume.

Because of the wait times, the effect of a duplex mismatch is a network that is not completely 'broken' but is incredibly slow.

The first Ethernet networks, 10BASE5, used thick yellow cable with vampire taps as a shared medium (using CSMA/CD). Later, 10BASE2 Ethernet used thinner coaxial cable (with BNC connectors) as the shared CSMA/CD medium. The later StarLAN 1BASE5 and 10BASE-T used twisted pair connected to Ethernet hubs with 8P8C modular connectors (not to be confused with FCC's RJ45).

Currently Ethernet has many varieties that vary both in speed and in the physical medium used. Perhaps the most common forms used are 10BASE-T, 100BASE-TX, and 1000BASE-T. All three utilize twisted-pair cables and 8P8C modular connectors (often called RJ45). They run at 10 Mbit/s, 100 Mbit/s, and 1 Gbit/s, respectively. However, each version has become steadily more selective about the cable it runs on, and some installers have avoided 1000BASE-T for everything except short connections to servers.

Fiber optic variants of Ethernet are commonly used in structured cabling applications. These variants have also seen substantial penetration in enterprise datacenter applications, but are rarely seen connected to end user systems for cost and convenience reasons. Their advantages lie in performance, electrical isolation, and distance, up to tens of kilometers with some versions. Fiber versions of each new higher speed almost invariably come out before the copper versions. 10 gigabit Ethernet is becoming more popular in both enterprise and carrier networks, with development starting on 40 Gbit/s and 100 Gbit/s Ethernet. Metcalfe now believes commercial applications using terabit Ethernet may occur by 2015, though he says existing Ethernet standards may have to be overthrown to reach terabit Ethernet.

A data packet on the wire is called a frame. A frame viewed on the actual physical wire would show Preamble and Start Frame Delimiter, in addition to the other data. These are required by all physical hardware. They are not displayed by packet sniffing software because these bits are removed by the Ethernet adapter before being passed on to the host (in contrast, it is often the device driver which removes the CRC32 (FCS) from the packets seen by the user).

The complete Ethernet frame, as transmitted, consists of the preamble (7 octets), the start frame delimiter (1 octet), the destination and source MAC addresses (6 octets each), the EtherType/length field (2 octets), the payload (46 to 1500 octets), and the frame check sequence (4 octets). The bit patterns in the preamble and start frame delimiter are conventionally written as bit strings, with the first bit transmitted on the left (not as byte values, which in Ethernet are transmitted least significant bit first). This notation matches the one used in the IEEE 802.3 standard.
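
As a rough illustration of that layout, the sketch below assembles a minimum-size frame in Python and appends a CRC-32 frame check sequence. zlib's CRC-32 matches the 802.3 polynomial, and the little-endian append is the usual software convention, though real adapters normally compute the FCS in hardware; the addresses and payload are made up.

```python
import struct, zlib

# Build a minimal Ethernet II frame (without the preamble/SFD, which the
# hardware adds). The CRC-32 used by zlib matches the 802.3 FCS polynomial;
# appending it little-endian is the usual software convention.

def build_frame(dst: str, src: str, ethertype: int, payload: bytes) -> bytes:
    header = (bytes.fromhex(dst.replace(":", ""))
              + bytes.fromhex(src.replace(":", ""))
              + struct.pack("!H", ethertype))
    body = payload.ljust(46, b"\x00")        # pad payload to the 46-byte minimum
    fcs = zlib.crc32(header + body) & 0xFFFFFFFF
    return header + body + struct.pack("<I", fcs)

frame = build_frame("ff:ff:ff:ff:ff:ff", "02:1a:2b:3c:4d:5e",
                    0x0800, b"hello")
print(len(frame))   # 6 + 6 + 2 + 46 + 4 = 64 bytes, the minimum frame size
```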

After a frame has been sent, transmitters are required to pause for a specified time (the interframe gap) before transmitting the next frame: 9,600 ns at 10 Mbit/s, 960 ns at 100 Mbit/s, and 96 ns at 1000 Mbit/s.
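
Those gaps are simply 96 bit times at each speed, which also fixes the classic maximum frame rates for minimum-size frames; the short calculation below, with the 8-byte preamble and 12-byte gap counted in, is included only as a sanity check.

```python
# The interframe gaps quoted above are simply 96 bit times at each speed.
# Combined with the 8-byte preamble/SFD and the 64-byte minimum frame, that
# also gives the classic maximum frame rates for small packets.

def interframe_gap_ns(bit_rate: float) -> float:
    return 96 * 1e9 / bit_rate

def max_frames_per_second(bit_rate: float, frame_bytes: int = 64) -> float:
    bits_on_wire = (8 + frame_bytes + 12) * 8   # preamble+SFD, frame, gap
    return bit_rate / bits_on_wire

for rate in (10e6, 100e6, 1000e6):
    print(interframe_gap_ns(rate), int(max_frames_per_second(rate)))
# 9600.0 14880   |   960.0 148809   |   96.0 1488095
```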

10/100M transceiver chips (MII PHY) work with 4 bits (a nibble) at a time. Therefore the preamble appears as 7 instances of 0101 + 0101, and the start frame delimiter as 0101 + 1101. Eight-bit values are sent low nibble first, then high nibble. 1000M transceiver chips (GMII) work with 8 bits at a time, and the 10 Gbit/s (XGMII) PHY works with 32 bits at a time. Some implementations use larger jumbo frames.
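
The sketch below shows how the commonly quoted preamble and SFD byte values (0x55 and 0xD5) produce exactly those bit strings and nibble patterns once the least-significant-bit-first wire order and the low-nibble-first MII order are applied.

```python
# The preamble and SFD octets are 0x55 and 0xD5 when read as ordinary byte
# values; sending them least-significant bit first yields the 10101010 /
# 10101011 patterns, and splitting them into nibbles gives 0101/0101 and
# 0101/1101 as described above.

def wire_bits(byte: int) -> str:
    """Bits in the order they appear on the wire (LSB first)."""
    return format(byte, "08b")[::-1]

def mii_nibbles(byte: int) -> tuple:
    """Low nibble then high nibble, the order a 10/100M MII PHY sends them."""
    return format(byte & 0x0F, "04b"), format(byte >> 4, "04b")

print(wire_bits(0x55), mii_nibbles(0x55))   # 10101010 ('0101', '0101')
print(wire_bits(0xD5), mii_nibbles(0xD5))   # 10101011 ('0101', '1101')
```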

LOCAL AREA NETWORKING (LAN)

A local area network (LAN) is a computer network covering a small geographic area, such as a home, an office, or a group of buildings (e.g. a school). The defining characteristics of LANs, in contrast to wide area networks (WANs), include their much higher data transfer rates, smaller geographic range, and lack of a need for leased telecommunication lines.

Ethernet over unshielded twisted-pair cabling and Wi-Fi are the two most common technologies currently in use, but ARCNET, Token Ring, and many others have been used in the past.

The first LAN was put into service in 1964 at the Livermore Laboratory to support atomic weapons research. LANs spread to the public sector in the late 1970s and were used to create high-speed links between several large central computers at one site. Of the many competing systems created at this time, Ethernet and ARCNET were the most popular.

The development and proliferation of CP/M and then DOS-based personal computers meant that a single site began to have dozens or even hundreds of computers. The initial attraction of networking these was generally to share disk space and laser printers, which were both very expensive at the time. There was much enthusiasm for the concept and for several years, from about 1983 onward, computer industry pundits would regularly declare the coming year to be “the year of the LAN”.

In reality, the concept was marred by the proliferation of incompatible physical layer and network protocol implementations, and confusion over how best to share resources. Typically, each vendor would have its own type of network card, cabling, protocol, and network operating system. A solution appeared with the advent of Novell NetWare, which provided even-handed support for the 40 or so competing card/cable types and a much more sophisticated operating system than most of its competitors. NetWare dominated the personal computer LAN business from early after its introduction in 1983 until the mid-1990s, when Microsoft introduced Windows NT Advanced Server and Windows for Workgroups.

Of the competitors to NetWare, only Banyan Vines had comparable technical strengths, but Banyan never gained a secure base. Microsoft and 3Com worked together to create a simple network operating system which formed the base of 3Com's 3+Share, Microsoft's LAN Manager and IBM's LAN Server. None of these were particularly successful.

In this same timeframe, Unix computer workstations from vendors such as Sun Microsystems, Hewlett-Packard, Silicon Graphics, Intergraph, NeXT and Apollo were using TCP/IP based networking. Although this market segment is now much reduced, the technologies developed in this area continue to be influential on the Internet and in both Linux and Apple Mac OS X networking—and the TCP/IP protocol has now almost completely replaced IPX, AppleTalk, NBF and other protocols used by the early PC LANs.

Initially, LANs were limited to a range of 185 meters (about 600 feet) and could not include more than 30 computers. Today, a LAN might connect as many as 1,024 computers at distances of up to 900 meters (about 3,000 feet).

Although switched Ethernet is now the most common data link layer protocol, and IP the most common network layer protocol, many different options have been used, and some continue to be popular in niche areas. Smaller LANs generally consist of one or more switches linked to each other, often with one connected to a router, cable modem, or DSL modem for Internet access.

Larger LANs are characterized by their use of redundant links, with switches using the spanning tree protocol to prevent loops; their ability to manage differing traffic types via quality of service (QoS); and their ability to segregate traffic via VLANs.

LANs may have connections with other LANs via leased lines, leased services, or by 'tunneling' across the Internet using VPN technologies.

MAINFRAMES VS. SUPERCOMPUTERS

The distinction between supercomputers and mainframes is not a hard and fast one, but supercomputers generally focus on problems which are limited by calculation speed while mainframes focus on problems which are limited by input/output and reliability ("throughput computing") and on solving multiple business problems concurrently (mixed workload). The differences and similarities include:

* Both types of systems offer parallel processing. Supercomputers typically expose it to the programmer in complex manners, while mainframes typically use it to run multiple tasks. One result of this difference is that adding processors to a mainframe often speeds up the entire workload transparently.

* Supercomputers are optimized for complicated computations that take place largely in memory, while mainframes are optimized for comparatively simple computations involving huge amounts of external data. For example, weather forecasting is suited to supercomputers, and insurance business or payroll processing applications are more suited to mainframes.

* Supercomputers are often purpose-built for one or a very few specific institutional tasks (e.g. simulation and modeling). Mainframes typically handle a wider variety of tasks (e.g. data processing, warehousing). Consequently, most supercomputers can be one-off designs, whereas mainframes typically form part of a manufacturer's standard model lineup.

* Mainframes tend to have numerous ancillary service processors assisting their main central processors (for cryptographic support, I/O handling, monitoring, memory handling, etc.) so that the actual "processor count" is much higher than would otherwise be obvious. Supercomputer design tends not to include as many service processors since they don't appreciably add to raw number-crunching power.

There has been some blurring of the term "mainframe," with some PC and server vendors referring to their systems as "mainframes" or "mainframe-like." This is not widely accepted and the market generally recognizes that mainframes are genuinely and demonstrably different.

* Historically 85% of all mainframe programs were written in the COBOL programming language. The remainder included a mix of PL/I (about 5%), Assembly language (about 7%), and miscellaneous other languages. eWeek estimates that millions of lines of net new COBOL code are still added each year, and there are nearly 1 million COBOL programmers worldwide, with growing numbers in emerging markets. Even so, COBOL is decreasing as a percentage of the total mainframe lines of code in production because Java, C, and C++ are all growing faster.

* Mainframe COBOL has recently acquired numerous Web-oriented features, such as XML parsing, with PL/I following close behind in adopting modern language features.

* 90% of IBM's mainframes have CICS transaction processing software installed.[2] Other software staples include the IMS and DB2 databases, and WebSphere MQ and WebSphere Application Server middleware.

* As of 2004, IBM claimed over 200 new (21st century) mainframe customers — customers that had never previously owned a mainframe. Many are running Linux, some exclusively. There are new z/OS customers as well.

* In May 2006, IBM claimed that over 1,700 mainframe customers were running Linux. Nomura Securities of Japan spoke at LinuxWorld in 2006 and is one of the largest publicly known examples, with over 200 IFLs in operation that replaced rooms full of distributed servers.

* Most mainframes run continuously at over 70% busy. A 90% figure is typical, and modern mainframes tolerate sustained periods of 100% CPU utilization, queuing work according to business priorities without disrupting ongoing execution.

* Mainframes have a historical reputation for being "expensive," but the modern reality is much different. As of late 2006, it is possible to buy and configure a complete IBM mainframe system (with software, storage, and support), under standard commercial use terms, for about $50,000 (U.S.). The price of z/OS starts at about $1,500 (U.S.) per year, including 24x7 telephone and Web support.

* Typically, a mainframe is repaired without being shut down. Memory, storage, and processor modules can be added or hot-swapped without shutting the system down. It is not unusual for a mainframe to be continuously switched on for six months at a stretch.

The CPU speed of mainframes has historically been measured in millions of instructions per second (MIPS). MIPS have been used as an easy comparative rating of the speed and capacity of mainframes. The smallest System z9 IBM mainframes today run at about 26 MIPS and the largest about 17,801 MIPS. IBM's Parallel Sysplex technology can join up to 32 of these systems, making them behave like a single, logical computing facility of as much as about 569,632 MIPS.
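
As a quick sanity check on that arithmetic, 32 machines at the largest quoted rating do indeed aggregate to about 569,632 MIPS:

```python
# Quick check of the Parallel Sysplex figure quoted above:
# 32 systems at the largest quoted rating aggregate to ~569,632 MIPS.

LARGEST_Z9_MIPS = 17_801
SYSPLEX_MEMBERS = 32

print(LARGEST_Z9_MIPS * SYSPLEX_MEMBERS)   # 569632
```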

The MIPS measurement has long been known to be misleading and has often been parodied as "Meaningless Indicator of Processor Speed." The complex CPU architectures of modern mainframes have reduced the relevance of MIPS ratings to the actual number of instructions executed. Likewise, the modern "balanced performance" system designs focus both on CPU power and on I/O capacity, and virtualization capabilities make comparative measurements even more difficult. See benchmark (computing) for a brief discussion of the difficulties in benchmarking such systems. IBM has long published a set of LSPR (Large System Performance Reference) ratio tables for mainframes that take into account different types of workloads and are a more representative measurement. However, these comparisons are not available for non-IBM systems. It takes a fair amount of work (and maybe guesswork) for users to determine what type of workload they have and then apply only the LSPR values most relevant to them.

To give some idea of real world experience, it is typical for a single mainframe CPU to execute the equivalent of 50, 100, or even more distributed processors' worth of business activity, depending on the workloads. Merely counting processors to compare server platforms is extremely perilous.