What are the 4 types of wireless networks?

One of the most transformative technology trends of the past decade is the availability and growing expectation of ubiquitous connectivity. Whether it is for checking email, carrying a voice conversation, web browsing, or myriad other use cases, we now expect to be able to access these online services regardless of location, time, or circumstance: on the run, while standing in line, at the office, on a subway, while in flight, and everywhere in between. Today, we are still often forced to be proactive about finding connectivity (e.g., looking for a nearby WiFi hotspot), but without a doubt, the future is one of ubiquitous connectivity, where access to the Internet is omnipresent.

Wireless networks are at the epicenter of this trend. At its broadest, a wireless network refers to any network not connected by cables, which is what enables the desired convenience and mobility for the user. Not surprisingly, given the myriad different use cases and applications, we should also expect to see dozens of different wireless technologies to meet the needs, each with its own performance characteristics and each optimized for a specific task and context. Today, we already have over a dozen widespread wireless technologies in use: WiFi, Bluetooth, ZigBee, NFC, WiMAX, LTE, HSPA, EV-DO, earlier 3G standards, satellite services, and more.

A network is a group of devices connected to one another. In the case of wireless networks, radio communication is usually the medium of choice. However, even within the radio-powered subset, there are dozens of different technologies, designed for use at different scales, with different topologies, and for dramatically different use cases. One way to illustrate this difference is to partition the use cases based on their "geographic range":

| Type | Range | Applications | Standards |
| --- | --- | --- | --- |
| Personal area network (PAN) | Within reach of a person | Cable replacement for peripherals | Bluetooth, ZigBee, NFC |
| Local area network (LAN) | Within a building or campus | Wireless extension of wired network | IEEE 802.11 (WiFi) |
| Metropolitan area network (MAN) | Within a city | Wireless inter-network connectivity | IEEE 802.16 (WiMAX) |
| Wide area network (WAN) | Worldwide | Wireless network access | Cellular (UMTS, LTE, etc.) |

Table 5-1. Types of wireless networks

The point of the classification is not to partition each technology into a separate bin, but to highlight the high-level differences within each use case. Some devices have access to a continuous power source; others must optimize their battery life at all costs. Some require Gbit/s+ data rates; others are built to transfer tens or hundreds of bytes of data (e.g., NFC). Some applications require always-on connectivity, while others are delay and latency tolerant. These and a large number of other criteria are what determine the original characteristics of each type of network. However, once in place, each standard continues to evolve: better battery capacities, faster processors, improved modulation algorithms, and other advancements continue to extend the use cases and performance of each wireless standard.

The preceding classification is neither complete nor entirely accurate. Many technologies and standards start within a specific use case, such as Bluetooth for PAN applications and cable replacement, and with time acquire more capabilities, reach, and throughput. In fact, the latest revisions of Bluetooth now provide seamless interoperability with 802.11 (WiFi) for high-bandwidth use cases. Similarly, technologies such as WiMAX have their origins as fixed-wireless solutions, but with time acquired additional mobility capabilities, making them a viable alternative to other WAN and cellular technologies.

Your next application may be delivered over a mobile network, but it may also rely on NFC for payments, Bluetooth for P2P communication via WebRTC, and WiFi for HD streaming. It is not a question of picking, or betting on, just one wireless standard!

As such, given the diversity, it is not wise to make sweeping generalizations about the performance of wireless networks. However, the good news is that most wireless technologies operate on common principles, have common trade-offs, and are subject to common performance criteria and constraints. Once we uncover and understand these fundamental principles of wireless performance, most of the other pieces will begin to fall into place automatically.

Further, while the mechanics of data delivery via radio communication are fundamentally different from the tethered world, the outcome as experienced by the user is, or should be, the same: same performance, same results. In the long run, all applications are and will be delivered over wireless networks; it may just be the case that some will be accessed more frequently over wireless than others. There is no such thing as a wired application, and there is zero demand for such a distinction.

All applications should perform well regardless of underlying connectivity. As a user, you should not care about the underlying technology in use, but as developers we must think ahead and architect our applications to anticipate the differences between the different types of networks. And the good news is that every optimization we apply for wireless networks will translate to a better experience in all other contexts. Let's dive in.

§Performance Fundamentals of Wireless Networks

Each and every type of wireless technology has its own set of constraints and limitations. However, regardless of the specific wireless technology in use, all communication methods have a maximum channel capacity, which is determined by the same underlying principles. In fact, Claude E. Shannon gave us an exact mathematical model to determine channel capacity, regardless of the technology in use.

Channel capacity is the maximum information rate:

C = BW × log₂(1 + S/N)

  • C is the channel capacity and is measured in bits per second.

  • BW is the available bandwidth, and is measured in hertz.

  • S is signal and N is noise, and they are measured in watts.

Although somewhat simplified, the previous formula captures all the essential insights we need to understand the performance of most wireless networks. Regardless of the name, acronym, or the revision number of the specification, the two fundamental constraints on achievable data rates are the amount of available bandwidth and the signal power between the receiver and the sender.
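
To make the formula concrete, here is a minimal Python sketch. It is not tied to any particular wireless standard, and the bandwidth and SNR values are purely illustrative assumptions:

```python
import math

def channel_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon capacity: C = BW * log2(1 + S/N), with S/N as a linear power ratio."""
    return bandwidth_hz * math.log2(1 + snr_linear)

def db_to_linear(snr_db: float) -> float:
    """Convert an SNR given in decibels to a linear power ratio."""
    return 10 ** (snr_db / 10)

# Illustrative values only: a 20 MHz and a 40 MHz channel, both at 25 dB SNR.
for bw_mhz in (20, 40):
    capacity = channel_capacity_bps(bw_mhz * 1e6, db_to_linear(25))
    print(f"{bw_mhz} MHz @ 25 dB SNR -> {capacity / 1e6:.0f} Mbps")
```

Note how, with the SNR held fixed, the capacity scales linearly with the available bandwidth; this is exactly the effect discussed in the next section.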

§Bandwidth

Unlike the tethered world, where a dedicated wire can be run between each network peer, radio communication by its very nature uses a shared medium: radio waves, or if you prefer, electromagnetic radiation. Both the sender and receiver must agree up-front on the specific frequency range over which the communication will occur; a well-defined range allows seamless interoperability between devices. For example, the 802.11b and 802.11g standards both use the 2.4–2.5 GHz band across all WiFi devices.

Who determines the frequency range and its allocation? In short, local government (Figure 5-1). In the United States, this process is governed by the Federal Communications Commission (FCC). In fact, due to different government regulations, some wireless technologies may work in one part of the world, but not in others. Different countries may, and often do, assign different spectrum ranges to the same wireless technology.

Politics aside, besides having a common band for interoperability, the most important performance factor is the size of the assigned frequency range. As Shannon's model shows, the overall channel bit rate is directly proportional to the assigned range. Hence, all else being equal, doubling the available frequency range doubles the data rate: going from 20 to 40 MHz of bandwidth can double the channel data rate, which is exactly how 802.11n improves on the performance of earlier WiFi standards!
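
Plugging illustrative numbers into Shannon's formula makes this concrete: assuming a fixed 25 dB SNR (a linear ratio of about 316), a 20 MHz channel yields a capacity of roughly 20 × 10⁶ × log₂(1 + 316) ≈ 166 Mbps, while a 40 MHz channel at the same SNR yields roughly 332 Mbps. Double the bandwidth, double the capacity.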

Finally, it is also worth noting that not all frequency ranges offer the same performance. Low-frequency signals travel farther and cover large areas (macrocells), but at the cost of requiring larger antennas and having more clients competing for access. On the other hand, high-frequency signals can transfer more data but won’t travel as far, resulting in smaller coverage areas (microcells) and a requirement for more infrastructure.

Certain frequency ranges are more valuable than others for some applications. Broadcast-only applications (e.g., broadcast radio) are well suited for low-frequency ranges. On the other hand, two-way communication benefits from use of smaller cells, which provide higher bandwidth and less competition.

Figure 5-1. FCC radio spectrum allocation for the 2,300–3,000 MHz band

§A Brief History of Worldwide Spectrum Allocation and Regulation

If you spend any time in the world of wireless communication, you will inevitably stumble into numerous debates on the state and merits of current spectrum allocation and regulation processes. But what is the history?

In the early days of radio, anyone could use any frequency range for whatever purpose she desired. All of that changed when the Radio Act of 1912 was signed into law within the United States and mandated licensed use of the radio spectrum. The original bill was in part motivated by the investigation into the sinking of the Titanic. Some speculate that the disaster could have been averted, or more lives could have been saved, if proper frequencies were monitored by all nearby vessels. Regardless, this new law set a precedent for international and federal legislation of wireless communication. Other countries followed.

A few decades later, the Communications Act of 1934 created the Federal Communications Commission (FCC), and the FCC has been responsible for managing spectrum allocation within the U.S. ever since, effectively "zoning" it by subdividing it into ever-smaller parcels designed for exclusive use.

A good example of the different allocations is the set of "industrial, scientific, and medical" (ISM) radio bands, which were first established at the International Telecommunications Conference in 1947 and, as the name implies, were reserved internationally. Both the 2.4–2.5 GHz (100 MHz) and 5.725–5.875 GHz (150 MHz) bands, which power much of our modern wireless communication (e.g., WiFi), are part of the ISM bands. Further, both of these ISM bands are also considered "unlicensed spectrum," which allows anyone to operate a wireless network, for commercial or private use, in these bands as long as the hardware used respects the specified technical requirements (e.g., transmit power).

Finally, due to the rising demand in wireless communication, many governments have begun to hold "spectrum auctions," where a license is sold to transmit signals over the specific bands. While examples abound, the 700 MHz FCC auction, which took place in 2008, is a good illustration: the 698–806 MHz range within the U.S. was auctioned off for a total of $19.592 billion to over a dozen different bidders (the range was subdivided into blocks). Yes, that is billion with a "b."

Bandwidth is a scarce and expensive commodity. Whether the current allocation process is fair is a subject on which much ink has been spilled and many books have been published. Looking forward, there is one thing we can be sure of: it will continue to be a highly contested area of discussion.

§Signal Power

Besides bandwidth, the second fundamental limiting factor in all wireless communication is the signal power between the sender and receiver, expressed relative to the noise power as the signal-to-noise ratio (S/N, or SNR). In essence, it is a measure that compares the level of the desired signal to the level of background noise and interference. The larger the amount of background noise, the stronger the signal has to be to carry the information.

By its very nature, all radio communication is done over a shared medium, which means that other devices may generate unwanted interference. For example, a microwave oven operating in the 2.4 GHz band may overlap with the frequency range used by WiFi, creating cross-standard interference. However, other WiFi devices, such as your neighbors' WiFi access point, and even your coworker's laptop accessing the same WiFi network, also create interference for your transmissions.

In the ideal case, you would be the one and only user within a certain frequency range, with no other background noise or interference. Unfortunately, that’s unlikely. First, bandwidth is scarce, and second, there are simply too many wireless devices to make that work. Instead, to achieve the desired data rate where interference is present, we can either increase the transmit power, thereby increasing the strength of the signal, or decrease the distance between the transmitter and the receiver—or both, of course.

Path loss, or path attenuation, is the reduction in signal power with respect to distance traveled—the exact reduction rate depends on the environment. A full discussion on this is outside the scope of this book, but if you are curious, consult your favorite search engine.
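
As a rough illustration of how quickly signal power falls off with distance, the sketch below uses the free-space path loss model. This is an idealized assumption; real environments, with walls, people, and interference, attenuate the signal considerably faster:

```python
import math

def free_space_path_loss_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44.
    An idealized model; real-world losses are typically higher."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Illustrative distances for a 2.4 GHz (WiFi) signal.
for distance_m in (10, 50, 200):
    loss = free_space_path_loss_db(distance_m / 1000, 2400)
    print(f"{distance_m:>4} m -> {loss:.0f} dB of path loss")
```

In free space, every doubling of distance costs about 6 dB of signal power, which is part of the reason smaller cells can sustain higher data rates.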

To illustrate the relationship between signal, noise, transmit power, and distance, imagine you are in a small room and talking to someone 20 feet away. If nobody else is present, you can hold a conversation at normal volume. However, now add a few dozen people into the same room, such as at a crowded party, each carrying on their own conversations. All of a sudden, it would be impossible for you to hear your peer! Of course, you could start speaking louder, but doing so would raise the amount of "noise" for everyone around you. In turn, they would start speaking louder as well and further escalate the amount of noise and interference. Before you know it, everyone in the room can only communicate from a few feet away from each other (Figure 5-2). If you have ever lost your voice at a rowdy party, or had to lean in to hear a conversation, then you have firsthand experience with SNR.

Figure 5-2. Cell-breathing and near-far effects in day-to-day situations

In fact, this scenario illustrates two important effects:

Near-far problem

A condition in which a receiver captures a strong signal and thereby makes it impossible for the receiver to detect a weaker signal, effectively "crowding out" the weaker signal.

Cell-breathing

A condition in which the coverage area, or the distance of the signal, expands and shrinks based on the cumulative noise and interference levels.

One or more loud speakers beside you can block out weaker signals from farther away: the near-far problem. Similarly, the larger the number of other conversations around you, the higher the interference and the smaller the range from which you can discern a useful signal: cell-breathing. Not surprisingly, these same limitations are present in all forms of radio communication as well, regardless of protocol or underlying technology.

§Modulation

Available bandwidth and SNR are the two primary, physical factors that dictate the capacity of every wireless channel. However, the algorithm by which the signal is encoded can also have a significant effect.

In a nutshell, our digital alphabet (1s and 0s) needs to be translated into an analog signal (a radio wave). Modulation is the process of digital-to-analog conversion, and different "modulation alphabets" can be used to encode the digital signal with different efficiency. The combination of the alphabet and the symbol rate is what then determines the final throughput of the channel. As a hands-on example:

  • Receiver and sender can process 1,000 pulses or symbols per second (1,000 baud).

  • Each transmitted symbol represents a different bit-sequence, determined by the chosen alphabet (e.g., 2-bit alphabet: 00, 01, 10, 11).

  • The bit rate of the channel is 1,000 baud × 2 bits per symbol, or 2,000 bits per second.
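
The arithmetic above generalizes to any modulation alphabet, since the number of bits per symbol is simply log₂ of the alphabet size. Here is a minimal sketch; the alphabet names are common examples, and the 1,000 baud symbol rate is taken from the bullets above:

```python
import math

symbol_rate_baud = 1_000  # symbols per second, as in the example above

# Common modulation alphabets and their sizes (illustrative, not exhaustive).
alphabets = {
    "BPSK": 2,     # 1 bit per symbol
    "QPSK": 4,     # 2 bits per symbol
    "16-QAM": 16,  # 4 bits per symbol
    "64-QAM": 64,  # 6 bits per symbol
}

for name, size in alphabets.items():
    bits_per_symbol = int(math.log2(size))
    bit_rate = symbol_rate_baud * bits_per_symbol
    print(f"{name:>6}: {bits_per_symbol} bits/symbol -> {bit_rate:,} bits per second")
```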

The choice of the modulation algorithm depends on the available technology, the computing power of both the receiver and sender, and the SNR. A higher-order modulation alphabet comes at the cost of reduced robustness to noise and interference; there is no free lunch!

Don’t worry, we are not planning to dive headfirst into the world of signal processing. Rather, it is simply important to understand that the choice of the modulation algorithm does affect the capacity of the wireless channel, but it is also subject to SNR, available processing power, and all other common trade-offs.

§How Wireless Networks Work

WiFi is one of the most popular wireless networking technologies in use today, with hundreds of millions of devices deployed. In this section, we will explore the basics of wireless networking, including the different types of wireless networks, how they work, and the standards that govern them.

Essentially, a wireless network allows devices to remain linked to the network without any cables attached, providing greater convenience and mobility for the user.

Wireless networks operate using radio frequency (RF) technology: when an RF current is supplied to an antenna, it generates an electromagnetic field that propagates through space, allowing devices to communicate with each other wirelessly. The radio spectrum is a limited resource that must be shared by everyone. Throughout most of the twentieth century, governments and international organizations have regulated most of the radio spectrum. This regulation controls the utilization of the radio spectrum in order to prevent interference among different users. A company that wants to use a frequency range in a given region must apply for a license from the regulator. Most regulators charge a fee for the utilization of the radio spectrum, and some governments have encouraged competition among companies bidding for the same frequency in order to increase the license fees.

For an introduction to wireless networking, watch this CertBros video (2023) [12:15].

§Wireless Network Topologies

The two basic modes (also referred to as topologies) in which wireless networks operate are referred to as infrastructure and ad-hoc networks.

Source: http://www.e-cartouche.ch/content_reg/cartouche/LBStech/en/html/LBStechU2_wlantopo.html (CC-BY)

Infrastructure mode requires a physical structure to support the network. This essentially means there must be a dedicated device handling the network functions, creating an infrastructure around which the network is built. In infrastructure-based wireless networks, communication takes place only between the wireless nodes (i.e., endpoints in the network such as your computer, your phone, etc.) and the access points (e.g., a router). There can be more than one access point on the same network handling different wireless nodes. A typical example of an infrastructure network is a cellular phone network, which requires a fixed infrastructure (i.e., network towers) to function.

You may use an infrastructure network if you can easily add more access points to boost the range, if you want to set up a more permanent network, and/or if you will need to bridge to other types of networks (e.g., you can connect to a wired network if required).

The one major drawback of infrastructure networks is that they are costly and time-consuming to set up in the first place. So, if you need your device to operate in remote areas where the infrastructure is weak or nonexistent, you cannot rely on infrastructure networks.

Ad-hoc wireless networks, on the other hand, do not require a set infrastructure to work. In ad-hoc networks, each node can communicate with other nodes, so no access point that provides access control is required. Whereas the routing in infrastructure networks is taken care of by the access point, in ad-hoc networks, the nodes in the network take care of routing to find the best possible path between the source and destination nodes to transfer data.

All the individual nodes in an ad-hoc network maintain a routing table, which contains information about the other nodes. As the nature of an ad-hoc network is dynamic, this results in ever-changing routing tables. An ad-hoc network is also asymmetric by nature, meaning the upload and download paths between two nodes in the network may be different.
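
To make the idea concrete, here is a toy sketch of such a per-node routing table. It does not implement any particular ad-hoc routing protocol (real ones, such as AODV or OLSR, also track sequence numbers and route timeouts), and the node names and hop-count metric are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class Route:
    next_hop: str   # neighboring node to forward packets through
    hop_count: int  # simple cost metric: fewer hops is better

# One node's routing table, keyed by destination node ID (hypothetical names).
routing_table: dict[str, Route] = {}

def update_route(destination: str, next_hop: str, hop_count: int) -> None:
    """Keep only the best-known (fewest-hop) route to each destination."""
    current = routing_table.get(destination)
    if current is None or hop_count < current.hop_count:
        routing_table[destination] = Route(next_hop, hop_count)

# As neighbors advertise routes, the table keeps improving (and keeps changing).
update_route("laptop-B", next_hop="phone-A", hop_count=2)
update_route("laptop-B", next_hop="laptop-C", hop_count=1)
print(routing_table["laptop-B"])  # Route(next_hop='laptop-C', hop_count=1)
```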

A typical example of an ad-hoc network is connecting two or more laptops (or other supported devices) to each other directly without any central access point, either wirelessly or using a cable.

You may consider an ad-hoc network when you want to quickly set up a peer-to-peer (P2P) network between two devices, when creating a quick temporary network, and/or if there is no network infrastructure set up in the area (ad-hoc is the only network mode that can be used in areas like this). As the routing is handled by each node in the network, this uses more resources; as the number of devices connected in an ad-hoc network increases, the network interference increases, which may lead to slower networks.

§Ranges of Wireless Networks

Wireless networks can be divided into four major types, based on their ranges.

  • WPANs (Wireless Personal Area Networks) are short-range wireless networks that connect devices within a few meters, such as Bluetooth headphones, keyboards, mice, and smartwatches. WPANs use low-power radio waves and have a data rate of up to 25 Mbps. WPANs are suitable for personal use and small-scale applications, such as wireless printing, file sharing, and health monitoring.
  • WLANs (Wireless Local Area Networks) are medium-range wireless networks that connect devices within a few hundred meters, such as Wi-Fi routers, laptops, smartphones, and tablets. WLANs use radio waves in the 2.4 GHz, 3.6 GHz, 4.9 GHz, 5 GHz, and 5.9 GHz bands and have a data rate of up to 10 Gbps. WLANs are suitable for home and office use and provide internet access, network security, and multimedia streaming.
  • WMANs (Wireless Metropolitan Area Networks) are long-range wireless networks that connect devices within a few kilometers, such as WiMAX base stations, antennas, and modems. WMANs use radio waves in the 2.3 GHz, 2.5 GHz, 3.5 GHz, and 5.8 GHz bands and have a data rate of up to 1 Gbps. WMANs are suitable for urban and rural use and provide broadband access, voice over IP, and video conferencing.
  • WWANs (Wireless Wide Area Networks) are very long-range wireless networks that connect devices across the globe, such as cellular towers, satellites, and mobile phones. WWANs use radio waves in the 700 MHz, 800 MHz, 900 MHz, 1.8 GHz, 1.9 GHz, 2.1 GHz, 2.6 GHz, and 3.5 GHz bands and have a data rate of up to 100 Mbps. WWANs are suitable for mobile and remote use and provide voice, text, email, web browsing, and GPS services.

§Wireless Networking Standards

Several wireless networking standards are included in the 802.11 family of standards. These standards are developed by the Institute of Electrical and Electronics Engineers (IEEE) and define the specifications for a range of wireless LAN (Wi-Fi) technologies, some of which are designed for short-range communication, while others are optimized for longer ranges. The range of an 802.11 technology depends on factors such as frequency band, modulation schemes, transmit power, and the environment in which it is deployed. The table below provides a summary of the main 802.11 standards.

| Standard | Frequency | Typical throughput | Maximum data rate | Range (m) indoor/outdoor | Year of development |
| --- | --- | --- | --- | --- | --- |
| 802.11a | 5 GHz | 25 Mbps | 54 Mbps | 35/120 | 1999 |
| 802.11b | 2.4 GHz | 6.5 Mbps | 11 Mbps | 38/140 | 1999 |
| 802.11g | 2.4 GHz | 20 Mbps | 54 Mbps | 38/140 | 2003 |
| 802.11n | 2.4/5 GHz | 100 Mbps | 600 Mbps | 70/250 | 2009 |
| 802.11ac | 5 GHz | 210 Mbps | 6.9 Gbps | 35/150 | 2014 |
| 802.11ad | 60 GHz | 800 Mbps | 7 Gbps | 10/100 | 2012 |
| 802.11ah | 0.9 GHz | 150 Kbps | 18 Mbps | 1000/1000 | 2017 |
| 802.11ax | 2.4/5 GHz | 600 Mbps | 10 Gbps | 50/200 | 2021 |
| 802.11be | 2.4/5 GHz | 2.4 Gbps | 40 Gbps | 50/200 | 2024 (estimated) |

All IEEE 802.11 standard amendments are constructed such that devices operating according to their specifications remain backward compatible with earlier versions, so any modern IEEE 802.11-based device can communicate with older products. The 802.11 working group defined the basic service set (BSS) as a group of devices that communicate with each other; in this text, we continue to use the term "network" when referring to a set of devices that communicate.

While most frequency ranges of the radio spectrum are reserved for specific applications and require a special license, there are a few exceptions. These exceptions are known as the Industrial, Scientific, and Medical (ISM) radio bands. These bands can be used for industrial, scientific, and medical applications without requiring a license from the regulator. For example, some radio-controlled models use the 27 MHz ISM band, and some cordless telephones operate in the 915 MHz ISM band. In 1985, the 2.400–2.500 GHz band was added to the list of ISM bands. This frequency range corresponds to the frequencies emitted by microwave ovens; given the large number of microwave ovens in use, sharing this band with licensed applications would likely have caused interference, which made it a natural candidate for unlicensed use. Despite the risk of interference with microwave ovens, the opening of the 2.400–2.500 GHz band allowed the networking industry to develop several wireless networking techniques that let computers exchange data without cables.

When developing its family of standards, the IEEE 802.11 working group took an approach similar to that of the IEEE 802.3 working group, which developed various types of physical layers for Ethernet networks. All 802.11 networks use the CSMA/CA (carrier sense multiple access with collision avoidance) Medium Access Control technique, assume the same architecture, and use the same frame format.
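
As a rough illustration of the collision-avoidance part of CSMA/CA, the sketch below shows the binary exponential backoff that 802.11 stations use after sensing a busy medium or losing a frame. It is heavily simplified: real stations also wait fixed interframe spaces, and the contention window limits vary by PHY, so the default values here are only indicative:

```python
import random

def backoff_slots(retry: int, cw_min: int = 15, cw_max: int = 1023) -> int:
    """Pick a random backoff (in slot times) from a contention window that
    doubles after each failed attempt, capped at cw_max (simplified 802.11)."""
    contention_window = min((cw_min + 1) * (2 ** retry) - 1, cw_max)
    return random.randint(0, contention_window)

# The longer a station keeps colliding, the longer it tends to wait.
for retry in range(6):
    print(f"retry {retry}: backoff of {backoff_slots(retry)} slots")
```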

For more information on the current status of the project, see The Evolution of Wi-Fi Technology and Standards (IEEE Standards Association, 16 May 2023).
