Picture this. It is a sunny Sunday afternoon (H/T to The Kinks). You are on the couch, fighting off a nap because you are too tired to get up and go to the bedroom. (Taking a nap should not take work, but fate is cruel.) The couch is comfortable, but not comfortable enough to let you sneak a nap right there. You are watching an episode of <insert whatever’s popular – it seems to change every week> and the screen suddenly freezes.
You might want to curse your Internet Service Provider and wish a perpetual pile of wet towels on their bed. You even break the inertia that you have methodically constructed after your mid-day meal and try that one thing that everyone begrudgingly, yet inevitably, does…
…to no avail.
If you have experienced this over the last year, then welcome to the world of fail-safe mechanisms that were built into the design of the Internet – well, to save the internet.
Why were fail-safe mechanisms necessary in the first place? I wrote a few weeks ago about the Elders of the Internet 😎 and the history of how the internet evolved. One critical problem that the pioneers of the TCP/IP architecture encountered was scale – when large volumes of data were relayed through the early versions of the internet, the network collapsed. This was fixed by deploying congestion control algorithms, which improve the performance of data transfers over networks. Congestion control is still used in 90% of the computers hooked to the internet today.
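For the curious, the core idea behind these algorithms can be captured in a few lines. Here is a minimal sketch of the additive-increase/multiplicative-decrease (AIMD) rule at the heart of TCP congestion control – the names and numbers below are illustrative, not the actual TCP implementation:

```python
def aimd_step(cwnd: float, packet_lost: bool) -> float:
    """Return the next congestion window (roughly, how much data a sender
    keeps in flight), given whether the last round saw a packet loss."""
    if packet_lost:
        # Loss is treated as a congestion signal: back off sharply.
        return max(1.0, cwnd / 2)
    # Otherwise, probe for more bandwidth, but only gently.
    return cwnd + 1.0

# Simulate a few rounds: the window climbs slowly, then halves on a loss.
cwnd = 1.0
for lost in [False, False, False, True, False]:
    cwnd = aimd_step(cwnd, lost)
print(cwnd)
```

The asymmetry is the whole trick: every sender grows its share slowly and retreats quickly, so when the network clogs, everyone backs off together instead of collapsing it.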
How it came about is an interesting read on its own. The short version – a professor of Computer Science at Berkeley in the 1980s, Van Jacobson, persisted because he wanted to upload class materials to the University computers without breaking the network.
“…his invention was a response to unusable internet in the mid-1980s, when networks mostly used by universities kept breaking when too many people were online at once. Congestion control algorithms are now widely used. And web video companies have designed software on a similar premise to automatically downgrade internet video quality if internet networks are clogged.”
Why is this relevant right now? Because (much like everything else), pandemic. The pandemic changed the way people used the internet so dramatically that these congestion control algorithms became critical for continued access globally. Internet Service Providers and other networks factor traffic increases into their infrastructure design. They are typically prepared for a 30% increase in traffic annually. After the lockdown began, traffic increased by 20% a week (eek!).
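To see just how far outside the plan that is, a quick back-of-the-envelope calculation: 20% weekly growth compounds, so in only four weeks traffic roughly doubles – against infrastructure provisioned for 1.3x over an entire year.

```python
weekly_growth = 1.20          # +20% per week, compounding
annual_plan = 1.30            # ISPs plan for +30% per year

four_weeks = weekly_growth ** 4
print(round(four_weeks, 2))   # traffic after one month of lockdown growth
print(four_weeks > annual_plan)
```

One month of pandemic growth blew past a full year's worth of planned headroom.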
During the early days of the pandemic, speeds fell across the world, with some countries seeing drops of 40-50%. A clarion call was issued. Governments sanctioned additional spectrum for network operators. Streaming platforms were asked to reduce the file sizes of their content. A lot of them obliged and switched their high-definition content to standard-definition, with some committing to a month while assuring everyone that they would keep monitoring the situation. Large software updates were timed for delivery during off-peak hours. There was an invisible, large-scale mobilisation to keep all of us connected.
A study one year into the pandemic, assessing internet traffic and speeds, gave a thumbs-up to how the internet had operated. The credit went to a combination of the original design, which finds efficient routes; the flexibility that cloud computing offers; and automated provisioning of additional capacity to handle unexpected traffic spikes.
It is terrifying to think of a pandemic response without the internet – right from how it could have affected those seeking emotional support to livelihoods to collaborations in vaccine research, among countless others.
The internet did not melt down – and that is only because the pioneers built flexibility into its original architecture.
That is frickin’ phenomenal.