The world relies on connections at many different levels, and many industries cannot afford any downtime in their business operations. This is where network redundancy comes in. Network redundancy allows the internet connection, servers, and network to remain up and running even during a failure.

What Is Network Redundancy?

As mentioned earlier, network redundancy can be applied to internet connections, servers, and networks. It is the practice of adding extra network devices and alternate communication paths to safeguard the network, and it can also decrease the chance of serious data loss. Network redundancy can be thought of as a backup system that helps prevent more serious problems from developing.

There are two main forms of network redundancy: fault tolerance and high availability. Both can be beneficial across many different industries.

The first, a fault-tolerant redundant system, offers full hardware redundancy. This form of redundancy mirrors applications to at least two identical systems that run simultaneously. If a problem occurs in the primary system, the duplicate takes over, allowing applications and systems to continue without interruption. Fault-tolerant redundancy offers the strongest protection, which is valuable for industries that cannot tolerate any downtime, such as health care.

The second type of network redundancy is high availability. This is a software-based approach that uses several servers that monitor one another. The servers run protocols that automatically fail over to a backup that hasn't been compromised; if something goes wrong, a backup server takes control. There is some expected downtime while the backup server starts up, which makes this type of redundancy unsuitable for certain systems.
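To make the backup-takes-over idea concrete, here is a minimal Python sketch. This is a toy illustration of automatic failover, not real fault-tolerant infrastructure, and all of the names in it are invented: a request goes to the primary system, and if the primary fails, an identical standby handles it without the caller noticing.

```python
# Toy failover sketch: a primary and an identical standby handler.
# If the primary fails mid-request, the standby's answer is used,
# so the caller sees no interruption. (Invented names for illustration.)
def serve(request, primary, standby):
    try:
        return primary(request)
    except Exception:
        return standby(request)  # the duplicate system takes over

def primary(req):
    raise RuntimeError("hardware fault")  # simulate a failure

def standby(req):
    return f"handled {req}"

print(serve("order-42", primary, standby))  # handled order-42
```

Real fault-tolerant systems run both copies simultaneously so the switch is invisible, while high-availability clusters accept a brief pause while the standby takes over; the sketch above only shows the common principle.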
How to Achieve Network Redundancy?

Partnering with a trusted colocation data center can help your business achieve network redundancy through the provider's all-encompassing backup systems. Data centers also have uninterruptible power supply (UPS) systems that can shift to a backup generator in an emergency without losing any data. But companies can also take steps toward network redundancy themselves by building a plan that examines the current infrastructure. Companies need to make sure the current system is optimized to protect against failures, and a few practices help with that.

One of the first steps is to back up important data to a separate offsite location, and to back it up regularly. If you are using multiple data centers, a strategy for where best to replicate and store data for quick, convenient access will pay off. If a disruption were to occur within the main system, having a data center nearby can keep the interruption to a minimum.

A trusted colocation provider performs regular backup system checks to make sure everything is maintained and ready to go if it's ever needed. Testing all of the hardware and its connections confirms everything is working properly. If a data center manager finds any problems, or potential problems, in the system, an action-item list is made so they can be fixed immediately. Network redundancy is all about preventive care, and regularly testing the backup system is another aspect of this.

Network and system failures can happen for many different reasons, and being prepared for them all will benefit your infrastructure greatly. Cyberattacks are a constant possibility where vital business data is concerned; they come in many forms, including malware, ransomware, hacking, phishing, DDoS attacks, and more. Having a response plan and network redundancy in place will be important for recovering after an attack.
Achieving network redundancy requires a good deal of effort, which may deter some organizations from putting in the extra work. But the benefits of network redundancy will build trust with users and make the overall infrastructure more reliable than ever.

The first benefit a redundant network offers is uptime, or rather, the lack of downtime. A network that is always up and ready to go is crucial for many (if not all) industries. Downtime costs companies an average of $5,600 every minute, roughly $300,000 every hour, and in some cases as much as $540,000 per hour. And that's before considering critical operations such as hospitals. Network redundancy can prevent downtime that many companies simply cannot afford.

Network redundancy also affects the effectiveness of your cybersecurity. We discussed the monetary cost of downtime, but downtime can also leave your data vulnerable: there will be security concerns while your staff is trying to locate and isolate the problem. Redundancy takes away some of the risk from cyberattacks.

Latency is another issue many companies deal with. More businesses have gone online, resulting in slow speeds and system failures from high usage. Network redundancy gives you multiple paths, which helps avoid slowdowns from high traffic.

At the end of the day, a business must continue regardless of circumstances. Network redundancy keeps your business moving through outside threats, power outages, natural disasters, system failures, and more. Staying open no matter what issues arise is a vital part of your overall success.

Conclusion

Network redundancy is one of the best ways to safeguard your network.
Whether your company is looking at deploying a fault-tolerant or a high-availability redundancy system, having one in place will help defend and preserve your current operations and connectivity. Companies without a redundant network can run into many costly problems down the road; just an hour of downtime can cost upwards of $500,000. Protect your company with network redundancy. If you're looking for a trusted data center provider, connect with us today.
Network redundancy is the process of providing multiple paths for traffic, so that data can keep flowing even in the event of a failure. Put simply: more redundancy equals more reliability. It also helps with distributed site management. The idea is that if one device fails, another can automatically take over. By adding a little bit of complexity, we reduce the probability that a failure will take the network down.

But complexity is also an enemy of reliability. The more complex something is, the harder it is to understand, the greater the chance of human error, and the greater the chance of a software bug causing a new failure mode. So, when designing a network, it's important to balance redundancy against complexity.

What are the different types of network redundancy?

There are two main forms that network redundancy can take. The first is fault tolerance, which uses full hardware redundancy: at least one complete duplicate of the system hardware runs side-by-side with the primary system. Should one system fail, the other takes over immediately, with no loss of service. The second type of network redundancy is high availability. In this structure, rather than duplicating all of the physical hardware, a cluster of servers runs together. The servers monitor each other and have failover capabilities, so if there is a problem on one server, a backup can take over. If you're curious about the benefits and drawbacks of each, consider this: fault-tolerant systems deliver next to zero downtime but are expensive to implement, while high-availability infrastructure is less expensive but may come with a risk of minor impacts to service during outages.

Designing for redundancy

There are useful network redundancy protocols at many different OSI layers. The first thing to think about is what happens at each layer if you lose any individual link or piece of equipment.
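The probability argument above can be made concrete with a little arithmetic. Under the simplifying assumption that two devices fail independently and either one alone can carry the traffic (a model that ignores common-mode failures), the combined availability is one minus the chance that both are down at once:

```python
# Simplified model: for n independent devices, each with availability a,
# where any single device is sufficient, combined availability is
# 1 - (probability that all n have failed at the same time).
def parallel_availability(a, n=2):
    return 1 - (1 - a) ** n

single = 0.999  # "three nines": roughly 8.8 hours of downtime per year
pair = parallel_availability(single)
print(round(pair, 6))  # 0.999999 -> "six nines", about 32 seconds per year
```

This is exactly why a little added complexity buys reliability, and also why the gain evaporates if the two devices share a failure mode, such as a common power feed or the same buggy software release.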
If you’re new to this, I suggest creating detailed Layer 1, Layer 2 and Layer 3 network diagrams showing every box and every link. Put your pencil or your mouse on each line or box in succession and ask these questions for each element:
There are a lot of different redundancy protocols around, and not all of them are equally robust. You'll need to choose appropriate protocols for your equipment and network, but here are the ones I generally use.

At Layer 1 and Layer 2, I like to use Link Aggregation Control Protocol (LACP) for link redundancy. This includes multi-chassis LACP variations such as Cisco's Virtual Port Channel (vPC) technology, available on Nexus switches. Note, however, that most multi-chassis link aggregation protocols have serious limitations. HP's Distributed Trunking, for example, is best used for providing redundant connectivity to servers, and can behave strangely when interconnecting pairs of switches. The other important Layer 2 protocol to use is Spanning Tree Protocol (STP). I prefer the modern fast-converging STP variants, MSTP and RSTP. (I've written about spanning tree protocol before.)

At Layer 3, your redundancy mechanisms need to keep the routing functions available when a device fails. The choice of protocol here depends on many factors. If the devices on the network segment are mostly end devices, such as servers or workstations, then I prefer a protocol that lets the default gateway function jump to a backup device if the primary fails. The best choices for this are Cisco's proprietary Hot Standby Router Protocol (HSRP) or the open-standard Virtual Router Redundancy Protocol (VRRP). If the segment is used primarily to interconnect network devices, then it might make more sense to use a dynamic routing protocol such as OSPF, EIGRP, or BGP. I don't advise using the older RIP protocol because it has serious limitations in both convergence time and network size. I also strongly advise against mixing both types of protocols on the same segment, such as deploying HSRP alongside OSPF; doing so can lead to network instability, particularly with multicast traffic.

For physical box redundancy, the exact technology will dictate the best choice.
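The first-hop redundancy idea behind HSRP and VRRP can be sketched in a few lines of Python. This is a toy model of the election logic only, not an implementation of either protocol: routers share a virtual gateway address, and at any moment the highest-priority live router answers for it.

```python
# Toy model of first-hop redundancy election: the live router with the
# highest priority "owns" the virtual gateway IP at any given moment.
def elect_gateway(routers):
    """routers: list of dicts with 'name', 'priority', and 'alive' keys."""
    live = [r for r in routers if r["alive"]]
    return max(live, key=lambda r: r["priority"])["name"] if live else None

routers = [
    {"name": "core-a", "priority": 110, "alive": True},
    {"name": "core-b", "priority": 100, "alive": True},
]
print(elect_gateway(routers))  # core-a answers for the virtual gateway
routers[0]["alive"] = False    # core-a fails...
print(elect_gateway(routers))  # ...and core-b takes over the same address
```

The point for end devices is that the gateway address they are configured with never changes; only the physical box answering for it does.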
For firewalls, which must maintain massive tables of state information for every connection, there are no viable open standards; here you really need to use the vendor's proprietary hardware redundancy mechanisms. Stackable switches, similarly, are usually very simple to deploy, often requiring almost no special configuration to achieve box-level switch redundancy. The one thing to bear in mind is that you need to be careful about how you distribute connections among the stack members.

For switch (and router) redundancy, it makes sense to combine a Layer 1/2 protocol with a Layer 3 protocol from the ones discussed above. Be careful, though: make sure the same device is the "master" at all layers. For example, at any given moment your Layer 3 default gateway should be the same physical device as the spanning tree root bridge.

In all cases, make sure you thoroughly understand the implementation guidelines for each technology you'll be using, and follow them carefully. If you don't understand a technology, trying it out in a production network can be career-limiting. And for goodness' sake, keep network backups in place for emergencies!

DDoS attacks and network redundancy

Distributed Denial of Service (DDoS) attacks are cyberattacks no network admin wants to deal with. The goal of such an attack is to render a network or service inoperable. Fortunately, network redundancy can help mitigate the impact of DDoS attacks. By using multiple ISPs, data centers can reroute network services in the event of an attempted DDoS attack, which is why it's crucial to have redundant networks with flexible internet access. Businesses can't operate if the network is down; continuous internet connections and functioning technology are essential these days. If your network lacks redundancy, especially a redundant internet connection, the failure of a single device could result in hours of downtime for the entire network.
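The "same master at all layers" rule mentioned above lends itself to a simple automated sanity check. The sketch below assumes a hypothetical inventory dictionary (the field names and device names are invented for illustration) and flags any VLAN where the HSRP active gateway and the spanning tree root bridge are different physical boxes:

```python
# Flag VLANs where the Layer 2 and Layer 3 "masters" disagree, i.e. where
# the HSRP active gateway is not the same physical switch as the STP root.
def misaligned_vlans(design):
    """design: {vlan_id: {'hsrp_active': name, 'stp_root': name}}"""
    return sorted(v for v, d in design.items()
                  if d["hsrp_active"] != d["stp_root"])

design = {
    10: {"hsrp_active": "core-a", "stp_root": "core-a"},  # aligned
    20: {"hsrp_active": "core-b", "stp_root": "core-a"},  # mismatch!
}
print(misaligned_vlans(design))  # [20]
```

A mismatch like VLAN 20 above means traffic takes an extra hop across the inter-switch link on every packet, which is exactly the kind of quiet inefficiency this rule is meant to catch.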
Network redundancy and infrastructure

The terms 'fail' and 'failure' have come up a lot so far, because when planning network redundancy it's important to think about all of the ways a network can fail. Beyond software issues, think about the physical and environmental factors that can affect a device's performance. Heat, water, and power loss, the classic enemies of device uptime, are all things to account for when planning network redundancy. Ensure that you're using redundant electrical supplies, including UPSs and possibly even backup generators; have redundant cooling systems in place; and deploy redundant environmental sensors to warn you if the physical environment is becoming less than ideal for network devices. After all, logical network redundancy requires the backup devices and backup data paths to be physically operational!

Tips for achieving minimal complexity in network redundancy

Here's a list of things to keep in mind for implementing network redundancy while minimizing complexity. Consider these some network design best practices.

Identical systems with identical connections

I like to provide redundancy by implementing exact duplicate systems at key spots in the network. For example, a core switch will be two identical switches. When I say identical, I mean they should be the same model, running the same software, and they should have the same connections, as far as possible. The easiest way to do this with switches is to use stackable switches. Then there's really nothing to do: connect the stacking cable and you have redundancy out of the box.

Simple redundancy protocols

There are a lot of ways to implement network redundancy, and the most reliable ones involve the simplest configuration on the fewest devices. For example, if I need a highly available firewall, I'll implement a pair of devices, and I'll always use the vendor's failover mechanisms. Then I don't need to worry about making the firewall take part in any routing protocols.
Unless there’s a compelling reason for the firewall to run a routing protocol, it only introduces unnecessary complexity. Always use the simplest configuration that meets the requirements! Keep everything parallelOne thing that often trips people up is how to connect successive layers of redundant devices. The trick is to keep it all parallel. Create an A path and a B path with a cross-over connection at each layer. The idea is that any one device can fail completely without disrupting the end-to-end path. For example, suppose I have a pair of access switches, a pair of core switches, and a pair of firewalls. I’d connect access switch A to core switch A, which also supports firewall A. Similarly, access switch B connects to core switch B, which connects to firewall B. I’d also connect the two access switches to one another and the two core switches to one another. In this example, you may be tempted to further connect access switch A to core switch B and access switch B to core switch A. It’s certainly a common configuration, but as soon as you do this, you need to know what you’re doing in terms of link aggregation and spanning tree. That could add considerable complexity if you’re new to network design. Never do more than you need toAs the previous example suggests, it’s easy to go further in implementing redundancy than is absolutely required. In many cases the extra redundancy is warranted and could provide additional functionality. But carefully consider every piece of equipment, every link, and every protocol. For each one, ask whether it’s providing enough additional functionality to warrant the additional complexity. Cookie cuttersFinally, it’s extremely useful to follow a standard model when implementing your networks. If you have multiple data centers, make them as nearly identical as possible in terms of topology. Similarly, make your access switches as nearly identical as possible. 
Use common VLAN assignments everywhere, and a common IP addressing scheme that works everywhere. Make the default gateway on every segment follow a common rule, such as always the first or always the last IP address. If you use redundancy protocols like HSRP, use them everywhere, and configure them the same way everywhere. All this similarity helps limit the possibility of human error. Maybe the new engineer has never looked at this particular device before, but if it's exactly the same as every other device performing a similar function, then it's much less likely that he or she will miss some obscure bit of protocol magic that was implemented on this device and only this device.

Maximum availability with minimum complexity

The goal is maximum availability with minimum complexity, so it's vitally important to keep the configuration simple. Don't implement multiple redundancy mechanisms that are trying to accomplish the same logical function, or troubleshooting the network will become very difficult. When it comes to routing protocols in particular, think about whether you can get away with a static route pointing to an HSRP default gateway. Routing protocols have to distribute a lot of information among a lot of devices, and that always takes time; HSRP and VRRP are both faster and simpler, so use them if you can. If you have stacked switches, think about what happens to upstream and downstream connections if one stack member fails; where possible, distribute these links among the various stack members.

Above all, remember that building a real-world network is not a test where you have to demonstrate your deep understanding of network redundancy. Points won't be deducted for using static routes and trivial default configurations. Keep it simple.