We’ve all clicked on a link, expecting to be taken to the right page, only to get a big fat “404 Error” or “Page not found” shoved in our faces. It isn’t just annoying; it’s one of the most persistent problems facing the Internet today. For some it means missing out on a cute picture of kittens or a trending video on BuzzFeed; for others it means being unable to research legislation or Supreme Court decisions. A 2014 study at Harvard Law School found that roughly 70% of the URLs cited in the Harvard Law Review, and more than 50% of the URLs cited in United States Supreme Court opinions, no longer link to the originally cited content. All of these instances are broken links, and they contribute to the general link rot problem plaguing the Internet. Yes, it’s sad to miss out on those cute videos, but when the same issue impedes a country’s law-making and legal record, action needs to be taken to rectify the problem.
There are many contributing factors to broken links and link rot. Content and webpages can be permanently removed or temporarily moved, and URLs can be altered, sometimes by as little as a single character. The advent of content management systems like WordPress and Drupal only further complicated the issue, as complete websites underwent restructuring and reprogramming. New websites meant new content and, in many cases, new URLs. Much of the old content fell by the wayside, and the old URLs went with it. The end user can still find those old URLs through web searches or other resources, but can no longer reach the content behind them, which is an infuriating experience when it happens to you.
University scholars and large corporations alike are attempting to address the issue of link rot and broken links, and have come up with several promising initiatives, such as Perma.cc, a service backed by a group of law libraries that has set out to rid the Internet of link rot. Proposed solutions include permanent links to content and archived access to content as it existed in the past, not just as it exists today. These would help mitigate the problem, but the content professionals and web developers who actually manage a website’s content have been slow to adopt them, because ultimately it means more time they must spend creating and organizing content.
Until a permanent solution is created and implemented, the responsibility falls to each individual webmaster and content curator to ensure that their sites are free of broken links. Luckily, one of the most difficult steps in this process, identifying broken links and missing content on a website, can be done very easily with automated software. You can schedule daily or weekly scans and have the results emailed to you. From there, it is just a matter of fixing the issues the scans uncover. Once this is integrated into your regular routine, it becomes far less time-consuming, and it is one of the most responsible things we can do as webmasters, content curators, or web developers. Just as you would not pollute the environment with waste or debris, help rid the Internet of the broken links that contribute to link rot.
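The automated check described above can be sketched in a few lines of standard-library Python. This is a minimal illustration, not the method of any particular product: it assumes you already have a plain list of URLs to verify (a real scanner would also crawl your pages to collect links), and it treats any HTTP 4xx/5xx response, or a failed connection, as a broken link.

```python
# Minimal broken-link check sketch: given a list of URLs, report the ones
# that fail. Assumes a flat URL list; crawling and scheduling are out of scope.
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def is_broken(status):
    """Classify an HTTP status code: 4xx and 5xx responses count as broken."""
    return status >= 400

def check_links(urls, timeout=10):
    """Return a list of (url, reason) pairs for links that appear broken."""
    broken = []
    for url in urls:
        try:
            # HEAD avoids downloading the whole page body.
            req = Request(url, method="HEAD",
                          headers={"User-Agent": "link-check-sketch"})
            with urlopen(req, timeout=timeout) as resp:
                if is_broken(resp.status):
                    broken.append((url, f"HTTP {resp.status}"))
        except HTTPError as e:
            # The server answered, but with an error code (e.g. 404).
            broken.append((url, f"HTTP {e.code}"))
        except URLError as e:
            # No usable answer at all: DNS failure, refused connection, etc.
            broken.append((url, str(e.reason)))
    return broken
```

Run on a schedule (for example, a weekly cron job that emails the returned list), this turns link maintenance into a routine chore rather than a manual audit.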
By: Andrew Gunn
Learn more or sign up today at: LinkTiger.com