Computers and their peripherals (such as networks and disc drives) are getting faster every day. Current DEC Alpha and Pentium processors run well over 100 MHz, letting them execute on the order of a billion instructions per second (BIPS), comparable to the speed of a supercomputer five years ago.
Thanks to this increasing speed, the programmes that run on computers now range from interactive graphics and speech recognition to video conferencing and real-time animation. All of these new applications rely on networks to carry more data.
Network bandwidth is expanding alongside CPU speeds. We now have 100 Mbps Ethernet, where 10 Mbps was considered fast in the 1980s. Thanks largely to research in fibre-optic signalling, bandwidth is approaching one billion bits per second (1 Gbps).
The three major fields of data communications, computing, and telecommunications are in the midst of a transformation. Computing is evolving at a breakneck pace, with processor speeds doubling every year, and the latest RAID (Redundant Arrays of Inexpensive Disks) technology has made file systems with gigabit bandwidth possible.
Data communications, which permits data flow between computing systems, must keep up with the rapid advancement of computing technology. Previously, data communications offered services such as e-mail. Virtual reality, video conferencing, and video on demand services are now available.
The telecommunications industry has been carrying voice traffic for almost a century. The situation is evolving with each passing year as telephone networks carry more data: data traffic on the telephone network is growing at about 20% per year, whereas voice traffic grows at only 3% per year.
Data traffic will soon overtake voice traffic. All of this has piqued the interest of the telecommunications industry in transporting data across their networks.
As a result, the three groups are coming together to pursue a similar goal of carrying more data at faster rates. As a result, there have been some cooperative activities.
One of the most significant of these endeavours is the establishment of gigabit testbeds in the United States.
Standardization of ATM (Asynchronous Transfer Mode), a set of communication protocols that supports integrated voice, video, and data networks, is another cooperative activity. The National Coordination Office for HPCC (High Performance Computing and Communications), the Corporation for National Research Initiatives, and the IEEE Communications Society Technical Committee on Gigabit Networking are some of the institutions working on gigabit networking research.
When gigabit networking was first on the horizon, many researchers believed that existing networking knowledge would not apply to networks so much faster than those already deployed.
After years of research, however, it has turned out that many existing methods and approaches (such as protocol stacking) still work in gigabit networks.
There are numerous operational gigabit testbeds (see Appendix A for a full list). Gigabit networks will be a reality in five to ten years. It’s unknown whether a single gigabit technology with a defined standard protocol will emerge.
However, it appears that there will be numerous competing gigabit networking technologies (as there are many LAN technologies) and protocols, with one of them finally becoming the most popular.
Gigabit Concepts and Technologies
The evolution of fibre optics is inextricably related to the development of high-speed networks. The introduction of fibre optic signalling technology capable of transmitting several gigabits per second over long distances with low error rates via optical fibre demonstrated that gigabit networks were practical and served as a motivator for researchers.
Other media, such as radio and microwaves, have also been investigated and found capable of supplying gigabit bandwidth. AT&T built LuckyNet, a testbed with a 2.4 Gbps (OC-48) link.
The Advanced Communications Technology Satellite (ACTS) is a NASA-launched experimental satellite that supports OC-12 links in a single configuration.
Another key trend in gigabit networking is the growing use of cell networking, often known as cell switching or cell-relay. We will give a quick overview of these concepts and technology in the following (sub) sections.
Fiber Optics
Fiber Optic Fundamentals
When light travels from one medium to another, some of it is reflected and the rest is refracted. An interesting property of light is that when the angle of incidence exceeds a critical angle, all of the light is reflected (total internal reflection). Fibre optics uses this property to carry signals.
A thin strand of glass called the core is surrounded by a thicker outer layer called the cladding. Light launched into the core at the proper angle propagates along it; any light that strikes the core-cladding boundary is reflected back into the core.
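The critical angle above follows directly from Snell’s law. The short sketch below computes it; the refractive indices used are typical illustrative values for silica fibre, not figures from the text.

```python
import math

# Total internal reflection: light stays inside the core when its angle
# of incidence (measured from the normal) exceeds the critical angle,
# where sin(theta_c) = n_cladding / n_core (Snell's law).
# 1.48 / 1.46 are illustrative silica-fibre indices, not from the text.
def critical_angle_deg(n_core: float, n_cladding: float) -> float:
    return math.degrees(math.asin(n_cladding / n_core))

print(f"critical angle is about {critical_angle_deg(1.48, 1.46):.1f} degrees")
```

Because core and cladding indices differ only slightly, the critical angle is large, so only light travelling nearly parallel to the fibre axis is trapped.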
The bits are carried by light pulses. One essential point is that the bits do not travel faster (the propagation delay is comparable to that of copper wire); rather, the bandwidth is higher because the bits can be packed far more densely (about 1000 times more than on copper).
In theory the fibre offers a bandwidth of 25 THz, concentrated around the wavelengths of 0.85, 1.3, and 1.5 microns.
Fibre optics has its own signalling problems, just as copper wire does. The three main forms of dispersion are modal, chromatic, and material dispersion. Repeaters and amplifiers that boost the signal are used to counteract these effects. Single-mode (monomode) fibre is employed to avoid modal dispersion, which persists in multimode fibre.
The terms “transmitter” and “receiver” refer to the devices attached to a fibre that send and receive signals. These come in two varieties: fixed and tunable. Fixed devices operate at a specific wavelength, whereas tunable devices can dynamically set the lightwave frequency at which they transmit or receive. Fixed devices are simple; tunable ones are more complex.
The Mach-Zehnder interferometer is an example of a tunable device. The incoming light is split over two paths, one slightly longer than the other, so the two beams recombine out of phase; this phase difference is used to tune the device to a specific wavelength.
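The wavelength dependence of that phase difference can be sketched numerically. The model below uses the textbook transmission formula for a Mach-Zehnder arm-length difference; the refractive index and path difference are illustrative assumptions, not values from the text.

```python
import math

# Mach-Zehnder sketch: light split over two arms differing in length by
# delta_l recombines with a phase difference, so transmission depends on
# wavelength:  T(lambda) = cos^2(pi * n * delta_l / lambda).
# n = 1.45 and the path differences below are illustrative assumptions.
def transmission(wavelength_um: float, delta_l_um: float, n: float = 1.45) -> float:
    phase = math.pi * n * delta_l_um / wavelength_um
    return math.cos(phase) ** 2

# Constructive interference (T = 1) when n * delta_l is a whole number of
# wavelengths; destructive (T = 0) at half-integer multiples.
peak = transmission(1.45, 1.0)   # n * delta_l == 1 wavelength
null = transmission(2.9, 1.0)    # n * delta_l == 0.5 wavelength
```

Adjusting delta_l (for example, thermally or electro-optically) shifts which wavelength sits at the transmission peak, which is the tuning mechanism.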
SONET and SDH
The Synchronous Optical Network (SONET) is a telephony standard (known in Europe as the Synchronous Digital Hierarchy, SDH). SONET was created to facilitate multiplexing on lines carrying hundreds of megabits per second or more.
Its principal purpose was to provide a single set of high-speed multiplexing standards. The standard specifies a hierarchy of rates known as Synchronous Transport Signal (STS) levels or Optical Carrier (OC) levels. The data rates range from 51.84 Mbps (STS-1/OC-1) to 2.4 Gbps (STS-48/OC-48).
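The rate hierarchy is a simple multiple of the base rate, which a few lines of code make concrete:

```python
# SONET/SDH rates scale linearly: every STS-n / OC-n level runs at
# n times the base STS-1 rate of 51.84 Mbps.
STS1_MBPS = 51.84

def oc_rate_mbps(n: int) -> float:
    return n * STS1_MBPS

for n in (1, 3, 12, 48):
    print(f"OC-{n:>2}: {oc_rate_mbps(n):8.2f} Mbps")
```

OC-48 works out to 2488.32 Mbps, the "2.4 Gbps" figure quoted above.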
WDM (Wavelength Division Multiplexing) Networks
WDM networks, unlike SONET, make use of the unique features of optical fibres. The bandwidth of the fibre is divided into numerous channels in WDM networks, and hosts communicate with each other on a specific channel.
WDM networks fall into two categories: single-hop and multihop. In single-hop WDM networks, as the name implies, the hosts are connected directly to each other through a star coupler. Single-hop networks can be further classified by whether their transmitters and receivers are fixed or tunable.
LAMBDANET, a Bell Communications Research project, and RAINBOW, an IBM project, are two examples of single-hop WDM networks. Multihop WDM networks can be built in a variety of ways.
The basic goal is to create a highly connected graph with the fewest possible hops between any two nodes. TeraNet, developed as part of Columbia University’s ACRON project, is an example of a multihop WDM network.
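The hop-count goal can be illustrated with a toy topology. The sketch below models a purely hypothetical 4-node multihop network (not TeraNet's actual topology) as a directed graph, where each node's fixed transmit wavelength reaches only two receivers, and uses breadth-first search to count relays between hosts.

```python
from collections import deque

# Hypothetical 4-node multihop WDM topology (illustrative only): each
# node transmits on a fixed wavelength that only two other nodes can
# receive, so traffic between non-neighbours must be relayed.
LINKS = {0: [1, 2], 1: [2, 3], 2: [3, 0], 3: [0, 1]}

def hop_count(src: int, dst: int) -> int:
    """Fewest hops from src to dst, by breadth-first search."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in LINKS[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return -1  # unreachable
```

In this graph every host reaches every other in at most two hops; topology design for multihop WDM networks aims to keep that worst case small as the node count grows.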
Gigabit Networking Trends and Issues
Gigabit networking is currently the subject of a great deal of study. Annual Workshops on Gigabit Networking are held by the IEEE Communications Society Technical Committee on Gigabit Networking. The sections that follow provide a quick overview of current gigabit networking trends and concerns.
Gigabit Network Challenges
The major challenges in networking research are to take advantage of newly developed techniques for building high-speed networks, and to figure out how to evolve them to meet new application needs, keep up with other computing technologies, and encourage the adoption of gigabit technologies by the general public.
To meet these objectives, the Gigabit Networking Workshop identified key issues in the following areas:
Evaluation of performance
Higher speeds and novel traffic mixes are forcing a much-needed re-examination of networking performance models and algorithms.
Changing the technology
The networking community is being forced to create innovative approaches for decreasing the cost of per packet processing in switches and routers as a result of faster speeds and new types of traffic.
Finding a means for these strategies to scale to switch designs with more connections per switch and higher bandwidths per connection is a constant issue.
Control and administration of the network
Congestion control and data transfer routing have become significantly more challenging due to a mix of new forms of traffic, higher bandwidths, and long relative delays in gigabit networks.
There is a need for research into how to best balance network and end-system congestion control, as well as strategies for swiftly finding valid paths for new data transfers.
Internetworking
Gigabit networking technologies must communicate with one another as well as with existing networking technologies. As a result, internetworking will continue to be as vital as it is now.
While the core concepts of the IP architecture apply to gigabit networks, our internetworking technology must evolve to take advantage of gigabit networks’ increased capabilities.
Connecting computers and applications to the network
While it is now possible to provide data at gigabit rates to a computer’s interface, we continue to have significant challenges transferring data fast and at gigabit rates through the interface and computer’s operating system to the application.
If applications are to take advantage of gigabit networks to their full potential, a lot of work will be required, most likely in collaboration with the operating system community.
PCs with Gigabit interfaces
Gigabit networking isn’t just for supercomputers and high-end workstations anymore. PCs will soon require gigabit capabilities, so we must stimulate the development of low-cost, low-heat, and low-power connections.
End-to-end protocols
There may be better ways to design end-to-end protocols that suit the requirements of applications. For example, the ability of applications to synthesize new protocols from functional components should be investigated.
Technologies for shared media access
Traditional thinking holds that gigabit networks’ high bandwidths and relatively long delays limit our options for local media access techniques, but new research suggests that there may be a wide range of media access techniques that work for gigabit networks, and that these options should be investigated.
Striping and parallel channels
Striping, or sending data in parallel over many channels, is typically more cost-effective than sending data at a higher bandwidth over a single channel. While striping is a well-known concept, it is still poorly understood.
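The basic mechanics of striping can be sketched in a few lines: deal packets round-robin over the channels, tag them with sequence numbers, and restore the original order at the receiver. This is a minimal illustration of the concept, not any particular striping protocol.

```python
# Striping sketch (illustrative, not a real protocol): packets are dealt
# round-robin across n parallel channels, each tagged with a sequence
# number so the receiver can restore the original order.
def stripe(packets, n_channels):
    channels = [[] for _ in range(n_channels)]
    for seq, pkt in enumerate(packets):
        channels[seq % n_channels].append((seq, pkt))
    return channels

def reassemble(channels):
    tagged = [item for ch in channels for item in ch]
    return [pkt for _, pkt in sorted(tagged)]

channels = stripe(list("abcdef"), 3)
assert reassemble(channels) == list("abcdef")
```

The sequence numbers are what make striping subtle in practice: channels may deliver at different speeds, so the receiver must buffer and reorder, which is part of why the technique remains poorly understood at gigabit rates.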
Protocol design and verification
Our inability to design protocols of even modest sophistication and prove that they work correctly is a huge source of frustration in networking. New ideas that integrate design and formal verification are being explored, and new work in this area should be encouraged, because existing verification technology is nearly 15 years behind the rest of the field.
Applications for Gigabit Networks
The majority of today’s data network applications are unaffected by delays or capacity fluctuations. It makes no difference if your files take a bit longer to transfer over the internet.
However, applications in the telecommunications (telephone) industry are time-sensitive. Humans normally pause between sentences, and if the pause is too long, the other speaker starts talking.
If the pause is long owing to network delays, both speakers may speak at the same time, causing confusion. As a result, telephone delays should be limited.
If gigabit networks are used, other software such as X Windows, remote login, and so on will run faster.
Any application that requires a fast response time or a large amount of bandwidth qualifies as a gigabit application.
The recent introduction of gigabit networks has spawned a slew of new applications. One of them is IVOD (Interactive Video on Demand): consumers order whatever programmes they wish to see, and the programmes are delivered to them from a central server.
This application will benefit from gigabit networking since video apps utilise a lot of bandwidth and different viewers may wish to watch different programmes at the same time.
Though compression technologies such as MPEG can be used, full frames must still be delivered from time to time. Gigabit networking technology can help with this.
Computationally intensive problems can be broken down into smaller chunks and distributed across computers connected by high-bandwidth networks.
Researchers at UCLA, for example, are experimenting with simulations of atmosphere-ocean interactions: one supercomputer (CM-2) simulates the ocean while another (CM-3) simulates the atmosphere, and the interactions between them are studied.
Typically, 5 to 10 megabits of data are exchanged per cycle; on a 10 Mbps Ethernet this takes about a second, while on a gigabit network it takes only about 10 ms.
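The timing can be checked with simple arithmetic, assuming the full link rate is actually achieved (real links deliver less after protocol overhead):

```python
# Transfer-time arithmetic for the atmosphere-ocean example, assuming
# the full nominal link rate (protocol overhead ignored).
def transfer_time_s(data_megabits: float, rate_mbps: float) -> float:
    return data_megabits / rate_mbps

ethernet_s = transfer_time_s(10, 10)    # 10 megabits over 10 Mbps Ethernet
gigabit_s = transfer_time_s(10, 1000)   # 10 megabits over a 1 Gbps link
print(ethernet_s, gigabit_s)
```

A gigabit link is 100 times faster than 10 Mbps Ethernet, so the per-cycle exchange drops from about one second to about ten milliseconds.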
Another type of application is one that involves real-time human contact. Video conferencing is a good example. Humans have the ability to absorb vast amounts of visual information and are extremely sensitive to the quality of that information.
Virtual reality applications are another type of programme that gives the user the sensation of being somewhere else.
NASA has conducted some fascinating experiments, devising a system that lets geologists interact with the Martian terrain: they investigate the surface by (virtually) touching it, viewing the 3D image from various angles, and so forth.
All of these things necessitate a lot of bandwidth, which is where gigabit networking comes in.
Conclusion
Gigabit networks are now achievable thanks to advances in fibre optics, computing systems, and networking technology. Gigabit networks, which have a capacity of more than 1 Gbps, can handle growing network traffic as well as a wide range of complex computer applications. Other components of networking, such as routing, switching, and protocols, should also be considered in order to build truly gigabit networks.
Although routers are no longer regarded as a major bottleneck and are in places being replaced by more cost-effective switches, they remain an important component in the development of future high-speed networks.
Routers are more important than ever, since they can provide network security and firewalls, and some 80 percent of today’s network traffic crosses subnet borders. As a result, numerous approaches to improving routing and router performance have recently been proposed.
Routing Switch, High Performance Routing, Gigabit Routers, and I/O Switching are some of the approaches presented in this study.
It is crucial to note, however, that the technologies mentioned here may become obsolete when new approaches and technologies emerge.
Finally, today’s high-speed LAN technologies include at least four gigabit technologies: ATM, Fibre Channel, Gigabit Ethernet, and Serial HIPPI. With newly emerging technologies, this list may soon change. Serial HIPPI appears to be the only technology that currently provides gigabit performance with 100 percent dependability.
This does not rule out the possibility that other technologies, such as ATM and Gigabit Ethernet, will play a key role in the deployment of gigabit networking.