Now that the second post has given you a solid foundation in how data is packaged to be sent across the network, let's explore the various network devices your API request may encounter on its journey. We'll use the simplified flow shown below.

Request Flow

As discussed in the last post, the first connection is between your laptop and the access point. Office buildings usually contain many such devices to ensure coverage as you move between your desk and conference rooms. Some have external antennas, as shown below, while others keep the antennas inside the enclosure. Access points are usually mounted on the ceiling for maximum coverage. At home, your router typically handles this functionality rather than it being a separate device.

Wireless Router

The communication between your laptop and the access point uses one of the Wi-Fi protocols, depending on the versions each device supports. It may also involve a security protocol such as WPA2 or WPA3 to ensure data is encrypted and the device is properly authenticated. The access point contains a wireless radio to communicate with your laptop and other devices, and it usually has an Ethernet port to communicate with devices on the wired network.
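
To make that "pick what both sides support" idea concrete, here is a minimal Python sketch. This is not real driver or firmware code (Wi-Fi negotiation happens during association, well below application code), and the capability lists are made up for illustration.

```python
# Illustrative only: choose the newest standard and strongest security protocol
# that both the laptop and the access point support.
WIFI_STANDARDS = ["802.11n", "802.11ac", "802.11ax"]   # Wi-Fi 4, 5, 6 (oldest first)
SECURITY = ["WPA", "WPA2", "WPA3"]                     # weakest first

def best_common(preference_order, device_a, device_b):
    """Return the most capable option supported by both devices, or None."""
    common = set(device_a) & set(device_b)
    for option in reversed(preference_order):          # try newest/strongest first
        if option in common:
            return option
    return None

laptop = {"standards": ["802.11n", "802.11ac", "802.11ax"], "security": ["WPA2", "WPA3"]}
ap = {"standards": ["802.11n", "802.11ac"], "security": ["WPA2", "WPA3"]}

print(best_common(WIFI_STANDARDS, laptop["standards"], ap["standards"]))  # 802.11ac
print(best_common(SECURITY, laptop["security"], ap["security"]))          # WPA3
```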

At the next stop on our packet's journey, we encounter a switch. Switches can be thought of as the wired equivalent of wireless access points: they extend the wired network to the various offices, desks, and other locations within a corporate site. Switches have many interfaces to enable this, as shown in the post's cover image or the one below.

Network Switch

This is how offices and data centers handle the scale of thousands of devices. Wired connections are also more performant than wireless, since they are not prone to signal loss from physical obstructions.
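
To give a feel for the forwarding decision a switch makes on each frame, here is a toy "learning switch" in Python. This is a heavy simplification of logic that real switches implement in hardware, and the MAC addresses and port count are made up for the example.

```python
# A toy "learning switch": it remembers which port each source MAC address was
# seen on, and floods frames whose destination it has not learned yet.
class LearningSwitch:
    def __init__(self, num_ports):
        self.ports = range(1, num_ports + 1)
        self.mac_table = {}                               # MAC address -> port number

    def receive(self, frame, in_port):
        self.mac_table[frame["src_mac"]] = in_port        # learn where the sender lives
        out_port = self.mac_table.get(frame["dst_mac"])
        if out_port is not None:
            return [out_port]                             # known destination: one port
        return [p for p in self.ports if p != in_port]    # unknown: flood the rest

switch = LearningSwitch(num_ports=8)
frame = {"src_mac": "aa:aa:aa:aa:aa:01", "dst_mac": "aa:aa:aa:aa:aa:02"}
print(switch.receive(frame, in_port=3))   # destination not learned yet -> flood
```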

As scale increases, routers manage the paths that communication between devices takes. Routers usually contain a number of physical interfaces to handle the traffic they receive, though typically not as many as a switch. As discussed in the last post, a router uses the packet's headers/metadata to compare the destination IP address against its route table and decide which interface to send the packet out of next. In this way the API request is forwarded toward the destination network where the server is hosted.
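
Here is a small sketch of that route-table lookup using Python's standard ipaddress module. The networks and interface names are hypothetical, and real routers use specialized hardware and data structures, but the rule is the same: the most specific matching route wins.

```python
# Simplified route lookup: pick the most specific (longest-prefix) route that
# matches the destination IP, then send the packet out that interface.
import ipaddress

# Hypothetical route table: destination network -> outgoing interface
ROUTE_TABLE = {
    ipaddress.ip_network("0.0.0.0/0"):     "eth0",   # default route
    ipaddress.ip_network("10.0.0.0/8"):    "eth1",   # corporate network
    ipaddress.ip_network("10.20.30.0/24"): "eth2",   # the API server's subnet
}

def next_interface(destination_ip):
    """Return the interface for the most specific route matching destination_ip."""
    dest = ipaddress.ip_address(destination_ip)
    matches = [(net, iface) for net, iface in ROUTE_TABLE.items() if dest in net]
    best_net, best_iface = max(matches, key=lambda item: item[0].prefixlen)
    return best_iface

print(next_interface("10.20.30.40"))    # eth2 -- most specific match wins
print(next_interface("93.184.216.34"))  # eth0 -- only the default route matches
```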

After flowing through another switch in our flow above, the request encounters a load balancer. Load balancers usually operate at either the network level (TCP or UDP) or the application level (e.g., HTTP/HTTPS). While a network load balancer may use the TCP headers to determine which server behind it should receive the packet, an application load balancer might use the URL path contained within the HTTP payload. In addition to this packet inspection, load balancers may evaluate the health of the servers behind them to keep the service available if any one server experiences an outage. With a healthy target selected, the load balancer sends the packet on its way to the API server.
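
The sketch below contrasts the two styles in Python. The backend addresses, ports, and URL paths are hypothetical, and real load balancers are far more sophisticated, but it shows the difference between deciding from connection details alone (network level) and inspecting the HTTP request (application level), while only considering servers that passed a health check.

```python
import hashlib

# Hypothetical backend pool: address -> result of the latest health check
BACKENDS = {"10.0.1.10": True, "10.0.1.11": True, "10.0.1.12": False}

def healthy():
    return sorted(addr for addr, ok in BACKENDS.items() if ok)

# Network-level (L4) decision: only connection details (IPs/ports) are used,
# e.g. hashed so packets of one flow keep landing on the same server.
def pick_l4(src_ip, src_port, dst_port):
    targets = healthy()
    key = f"{src_ip}:{src_port}:{dst_port}".encode()
    return targets[int(hashlib.sha256(key).hexdigest(), 16) % len(targets)]

# Application-level (L7) decision: the HTTP payload is inspected,
# e.g. routing different URL paths to different servers.
def pick_l7(path):
    pool = {"/api/": "10.0.1.10", "/static/": "10.0.1.11"}
    for prefix, addr in pool.items():
        if path.startswith(prefix) and BACKENDS.get(addr):
            return addr
    return healthy()[0]                  # fall back to any healthy server

print(pick_l4("203.0.113.7", 54321, 443))  # one of the healthy backends
print(pick_l7("/api/orders"))              # 10.0.1.10
```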

Finally, having reached the API server, the request makes its way up the TCP/IP stack to the HTTP handler, which processes the request, generates the response, re-encapsulates the data, and sends it back to the laptop over the path we just took.
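
As a rough picture of that final hop, here is a minimal API server built on Python's standard library http.server module. The endpoint path and port are made up for the example; the point is that the operating system's TCP/IP stack hands the request to a handler like this, which builds the response that travels back along the same path.

```python
# A minimal API server: the OS delivers the TCP payload to this HTTP handler,
# which parses the request, builds a response, and writes it back to the socket.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class ApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/hello":
            body = json.dumps({"message": "hello from the API server"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)   # the TCP/IP stack re-encapsulates this on the way out
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ApiHandler).serve_forever()
```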

In the next blog post we will continue to expand our understanding of how networks handle requests at a larger scale. We'll consider multi-site deployments where companies might have multiple data centers or branch offices. We'll also consider remote users needing access, customers accessing services from the internet, and the role network security plays in protecting company data.