According to The Telegraph, internet networks in the U.K. could become so overtaxed that providers might have to begin rationing service, in particular to prioritize health and educational resources over things like gaming and video, which take up a lot of bandwidth. While I’m no gamer and I’m not in the U.K., the idea of rationing did get me a bit rattled. After all, my Cheers binge is what’s getting me through quarantine right now, and if I don’t have enough internet, I can’t get Hulu, and if I can’t get Hulu, what am I supposed to do, buy DVDs like some kind of animal?
I know we’re all making sacrifices here, but that’s the kind of dystopian scenario I just can’t accept.
Even scarier, though, is the idea of what might happen if we don’t ration. Can the internet just up and break? And if the internet does break, who the hell is going to fix it?
“In theory, yes, the internet can break,” says Justine Sherry, an assistant professor of computer science at Carnegie Mellon University. She goes on to explain that this already happened once back in 1986 thanks to internet overload. This was in the days before the public was on the internet, so all that was being sent were emails and file transfers between universities, government agencies and the occasional hacker. Back then, Sherry explains, “there were too many senders sending too much data too fast.” Because of this, routers began dropping messages, which is what they do when more data arrives than they can forward.
Once those messages were dropped, people re-sent them, which compounded the problem, as the internet was already overloaded. “The internet was in this terrible cycle, with overloaded routers dropping more and more messages and people continuing to resend them,” Sherry continues. “Ultimately it became impossible for users to send any data, which we now refer to as congestion collapse. At the time, the internet was still there, but it was very unlikely that anyone could connect to anything.”
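If you want to watch that death spiral in miniature, here’s a toy simulation of it (mine, not Sherry’s, with every number invented for illustration): senders whose retransmit timers fire while their messages are still stuck in a router’s queue end up injecting duplicate copies, and the link burns its capacity hauling duplicates instead of new data.

```python
# Toy model of 1986-style congestion collapse: when queueing delay grows past
# the senders' retransmit timeout, duplicate copies of messages flood the
# router and useful deliveries ("goodput") fall below the link's capacity.
# All constants are made up purely for illustration.
from collections import deque

LINK_CAPACITY = 10   # messages the router can forward per tick
BUFFER_SIZE = 50     # messages the router can hold in its queue
TIMEOUT = 4          # ticks a sender waits before resending a message
ACK_DELAY = 2        # ticks it takes an acknowledgment to travel back

def simulate(offered_load, ticks=300):
    queue = deque()        # copies of messages waiting at the router
    pending = {}           # msg_id -> ticks since it was last (re)sent
    acks_in_flight = []    # (arrival_tick, msg_id)
    delivered = set()      # messages that actually got through
    wasted = 0             # duplicate copies that burned link capacity
    next_id = 0

    for t in range(ticks):
        # Acks that have arrived stop their message from being resent.
        for _, msg in [a for a in acks_in_flight if a[0] <= t]:
            pending.pop(msg, None)
        acks_in_flight = [a for a in acks_in_flight if a[0] > t]

        # Senders: retransmit anything that timed out, then offer new traffic.
        sends = [m for m, age in pending.items() if age >= TIMEOUT]
        for m in sends:
            pending[m] = 0
        for _ in range(offered_load):
            sends.append(next_id)
            pending[next_id] = 0
            next_id += 1

        # Router: buffer what fits, silently drop the rest.
        for m in sends:
            if len(queue) < BUFFER_SIZE:
                queue.append(m)

        # Forward up to the link's capacity; duplicates waste that capacity.
        for _ in range(min(LINK_CAPACITY, len(queue))):
            m = queue.popleft()
            if m in delivered:
                wasted += 1
            else:
                delivered.add(m)
            acks_in_flight.append((t + ACK_DELAY, m))

        for m in pending:
            pending[m] += 1

    return len(delivered) / ticks, wasted

for load in (5, 15, 60):
    goodput, wasted = simulate(load)
    print(f"offered {load:2d} msgs/tick -> goodput {goodput:4.1f} msgs/tick, "
          f"{wasted} duplicate copies forwarded")
```

At light load, everything sails through; push the offered load past what the link can carry, and a growing share of what gets forwarded is useless duplicates, which is the “terrible cycle” Sherry is describing.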
So who fixed it back then?
Sherry says much of the credit goes to a man named Van Jacobson, a researcher at Lawrence Berkeley Lab who created the first congestion control algorithm. “There were two sets of algorithms invented since 1986 that were designed to prevent congestion collapse, the first of which were ‘congestion control algorithms,’” Sherry explains. These algorithms make it so that when the system starts to get overloaded, every sender automatically slows down a little, rather than the routers drowning in dropped and re-sent messages.
Sherry compares this to highway on-ramps with stoplights that only let one car merge at a time. There’s still congestion, but everyone gets slowed down just a little bit, so the highway doesn’t overload and total gridlock is prevented. “Every computer in the world has a little stoplight, which tells the computer to let you on, or wait a minute before it lets you on if traffic is high. This is automatic,” Sherry says.
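In real networks, that stoplight is a congestion control algorithm running in your computer’s networking stack. The classic rule, which TCP still uses in various forms, is “additive increase, multiplicative decrease”: creep forward a little every time your data gets through, and back off hard the moment something gets dropped. Here’s a bare-bones sketch of the idea, purely for illustration rather than any real TCP implementation:

```python
# Minimal sketch of additive-increase / multiplicative-decrease (AIMD),
# the "stoplight" logic behind classic TCP-style congestion control.

class CongestionWindow:
    """How many packets a sender allows itself to have on the wire at once."""

    def __init__(self):
        self.window = 1.0  # start cautiously with one packet in flight

    def on_ack(self):
        # Data got through: ease onto the highway a little faster
        # (roughly +1 packet per round trip).
        self.window += 1.0 / self.window

    def on_loss(self):
        # A drop means the road is jammed: cut the window in half.
        self.window = max(1.0, self.window / 2)

sender = CongestionWindow()
for event in ["ack", "ack", "ack", "ack", "loss", "ack", "ack"]:
    sender.on_ack() if event == "ack" else sender.on_loss()
    print(f"{event:4s} -> window {sender.window:.2f} packets")
```

The result is the behavior Sherry describes: when routers start dropping packets, every sender automatically throttles itself, and the highway never quite seizes up.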
The other solution to congestion overload came out in the late 1990s and early 2000s, when video began being sent online more frequently. Video accounts for a huge share of internet traffic — about 60 percent — so Sherry says engineers introduced “adaptive bitrate algorithms,” which lower the quality of the video being streamed depending upon how much traffic there is. Sherry explains, “If I’m watching Netflix at 3 a.m., I’m almost definitely going to get 4K video, but if I’m watching it during a high traffic time after everyone just got home from work, I’m going to be getting standard definition instead. Using Netflix’s numbers, they can support about 50 users at standard definition using the same bandwidth as one user using 4K.”
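Under the hood, an adaptive bitrate player is just measuring how quickly its recent chunks of video downloaded and then picking the best quality that comfortably fits. Here’s a rough sketch of that decision; the bitrate ladder and thresholds are invented for illustration, not Netflix’s actual numbers:

```python
# Simplified adaptive bitrate (ABR) choice: pick the highest-quality rendition
# that fits under the bandwidth we're currently measuring, with some headroom.
# The bitrates below are hypothetical examples.

BITRATE_LADDER_MBPS = {   # renditions of the same video, best first
    "4K": 15.0,
    "1080p": 5.0,
    "720p": 3.0,
    "SD": 1.0,
}

def pick_quality(measured_throughput_mbps: float, headroom: float = 0.8) -> str:
    """Return the best rendition the current connection can sustain."""
    for name, rate in BITRATE_LADDER_MBPS.items():
        if rate <= measured_throughput_mbps * headroom:
            return name
    return "SD"  # worst case: lowest quality beats stalling entirely

print(pick_quality(40.0))  # 3 a.m., empty pipes              -> 4K
print(pick_quality(2.0))   # everyone just got home from work -> SD
```

The player re-runs this decision every few seconds, which is why quality quietly drifts up and down on busy evenings without you doing anything.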
Every major video service does this, including YouTube, Hulu and anyone else you can think of. Sherry adds that this also happens automatically, which is why she says it was “funny” when these big streaming companies promised to lower their bitrate recently, as people are using more internet under quarantine. “These algorithms already do this automatically, so it was all a bit silly,” Sherry tells me.
So, is there a way for these safeguards to fail?
Sherry says in theory, yes, but it’s highly unlikely. “Right now, we’re seeing a 2x [or double] increase in load, which, compared to Black Friday or Super Bowl Sunday, isn’t really a big deal. If we saw a 1,000x increase in load, that might be too much for the system to handle.”
What would it take to reach that?
Sherry believes that’s so inconceivable that it’s hard to even hypothesize: “If everyone had a hundred computers on at the same time, then it might be a problem.” At that point, what might happen is that video would be so degraded that it would be unwatchable, and data would move so slowly that it would never arrive, but we’re nowhere near that amount of usage.
But if you’re at home right now and you’re experiencing slowdowns or getting kicked off the internet, does that mean the internet is at capacity? Sherry says no. Instead, she explains that there are places where the internet “bottlenecks,” and service can slow down en route to the larger web. To go back to the highway analogy, imagine the internet as a vast series of highways, while your own service is simply a local road. Your internet service provider supplies internet to you via a cable that feeds your neighborhood, so if you’re experiencing a slowdown, it likely means your home network has too many devices on it, or traffic has bottlenecked at the local level, before ever reaching the big superhighways of the internet.
As for how to fix that, you may need to upgrade your service, and if that doesn’t work, your cable company may need to upgrade your neighborhood’s capacity.
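Before you do either, it’s worth running the quick back-of-the-envelope math on the “too many devices” case: add up roughly what everything in the house is trying to pull and compare it to the speed you’re paying for. (The plan speed and per-device numbers below are invented examples.)

```python
# Rough household bandwidth check: is the local road jammed, or the internet?
PLAN_MBPS = 50  # hypothetical advertised download speed

devices_mbps = {  # rough per-device demand, invented for illustration
    "4K stream in the living room": 15,
    "HD stream in the bedroom": 5,
    "work video call": 4,
    "game download": 30,
}

demand = sum(devices_mbps.values())
print(f"Household demand: {demand} Mbps vs. plan: {PLAN_MBPS} Mbps")
if demand > PLAN_MBPS:
    print("The bottleneck is your local road, not the internet's superhighways.")
```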
Not only is the internet guarded from digital disruptions by those algorithms I mentioned, but it also defends itself against physical interruptions. Going back to the highway analogy, there are hubs that those highways pass through, called “peering points” — imagine them as the bridges and tunnels of the system. If one of those hubs suddenly disappeared or was blown up or something like that, Sherry says the information wouldn’t be lost, it would just be rerouted, like the Waze app does on your phone. The data would simply go around that hub and reach its destination via a detour — it might take a little longer, but the internet knows to automatically reroute information if a hub stops functioning.
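You can see the detour idea in miniature with a toy map of hubs: knock one out, and the path-finding just routes around it. (The network below is invented, and real internet routing protocols like BGP are far more involved, but the principle is the same.)

```python
# Toy rerouting demo: find a path between two points, then find a new one
# when a hub ("peering point") goes down. The topology is entirely made up.
from collections import deque

def shortest_path(links, start, goal, down=frozenset()):
    """Breadth-first search for a path, skipping any hubs marked as down."""
    paths = deque([[start]])
    seen = {start}
    while paths:
        path = paths.popleft()
        if path[-1] == goal:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen and nxt not in down:
                seen.add(nxt)
                paths.append(path + [nxt])
    return None  # no route at all

links = {
    "you":   ["hub_A", "hub_B"],
    "hub_A": ["hub_C"],
    "hub_B": ["hub_D"],
    "hub_C": ["server"],
    "hub_D": ["hub_E"],
    "hub_E": ["server"],
}

print(shortest_path(links, "you", "server"))                  # short way via hub_C
print(shortest_path(links, "you", "server", down={"hub_C"}))  # detour when hub_C is gone
```

Your data takes the longer route through the remaining hubs, arrives a beat later, and you never notice a thing.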
If all of the hubs went offline at once, like, say, in a worldwide blackout, Sherry says that although major networks like Sprint and Google would go dark, the internet itself would still exist. You wouldn’t be able to access it from your phone, as cell towers require power, but Sherry says that in some remote areas, cell towers run on solar power, and the people in those areas could still access the internet, though vast sections of it would be missing. There is also no central place where the internet is located. Originally, says Sherry, there were just networks, and those networks decided to connect; the internet formed from there and grew, without a central base.
But though the internet can’t really break, there are ways that certain important parts of it can. Justin Cappos, a professor of computer science and engineering at the NYU Tandon School of Engineering, explains that there are companies called “content delivery networks” that act as a means of delivery for your data. Returning to the highway analogy (I know, I know), imagine your router as your mailman: if you send an email — or letter — to your friend across the country, your mailman isn’t delivering it all the way to your friend. Instead, it will be handed off to several routers along the way.
These bigger, higher-capacity delivery systems are the content delivery networks (CDNs) Cappos mentions: networks of servers that big internet companies often hire to get their content to users more quickly. But CDNs can be overloaded too, resulting in congestion collapse for that CDN, which can cause outages for the big companies utilizing it.
An example of this actually happened back in 2016, when a group of hackers attacked the CDN carrying the blog of cybersecurity reporter Brian Krebs, which also carried Twitter and other big sites. The hackers overwhelmed the CDN with traffic from a botnet of hacked webcams and other internet-connected devices, resulting in a congestion collapse, which meant people couldn’t access Twitter and the other sites carried by that CDN. To get out of the mess, it took the CDN working with the IT departments of the big companies it serviced to locate the source of the attack and disable it close to where it began. While CDNs face constant attacks, this one was so massive that it took a highly coordinated effort to fix. Even so, the Twitter outage only lasted a couple of hours, and since then, many big companies have spread their content across several CDNs, so, once again, if anything becomes overwhelmed, the traffic is simply rerouted.
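In practice, that multi-CDN fallback can be as simple as keeping a list of mirrors and moving down it when one doesn’t answer. Here’s a simplified sketch with hypothetical URLs; real sites typically steer traffic between CDNs at the DNS level rather than in the client, but the fallback logic is the same idea:

```python
# Simplified multi-CDN failover: try each content delivery network in turn.
# The URLs and timeout are hypothetical placeholders.
import urllib.error
import urllib.request

CDN_MIRRORS = [
    "https://cdn-one.example.com/video/episode1.mp4",
    "https://cdn-two.example.com/video/episode1.mp4",
    "https://cdn-three.example.com/video/episode1.mp4",
]

def fetch_with_failover(urls, timeout=5):
    """Return the content from the first CDN that responds."""
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError) as err:
            print(f"{url} unavailable ({err}); trying the next CDN...")
    raise RuntimeError("every CDN is down -- now *that* would break something")

# data = fetch_with_failover(CDN_MIRRORS)
```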
To put it quite simply, the internet as it stands is pretty much impervious to totally breaking, which, Sherry explains, is part of its design. “The original formation of the internet was done by the Defense Department, and they were designing it to withstand a nuclear attack, so it had to be able to withstand large outages.”
In other words, even in the event of nuclear armageddon, I’m still going to be able to get my Cheers reruns — assuming Hulu survives — so people working from home during coronavirus certainly aren’t going to “break” the internet. At most, service might slow down a bit at the local level, but it’s not like the gang at Cheers was moving all that fast anyway.