
Preventing Distributed Denial of Service Attacks: Seven Best Practices
News | 5 Jun 2014
2014 is shaping up to be the year of the distributed denial of service (DDoS) attack. In a DDoS attack, malicious code infects large numbers of computers, which are then directed to flood a targeted website with traffic, making it inaccessible to regular users. If the attack is strong enough to affect network equipment at the perimeter of the target (e.g., firewalls), the entire network of the service under attack may stop responding. Although a DDoS attack isn't considered very sophisticated, it can be incredibly difficult to defend against.
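As context for why the "not very sophisticated" label is misleading, here is a minimal sketch of the kind of per-source rate limiting an edge service might apply as a first line of defense. This is illustrative only; the window and threshold values are hypothetical, not recommendations.

```python
import time

# Hypothetical values for illustration; real limits depend on the service.
WINDOW_SECONDS = 10
MAX_REQUESTS_PER_WINDOW = 100

_counters = {}  # source_ip -> (window_start, request_count)

def allow_request(source_ip: str) -> bool:
    """Return True if the request is under the per-source rate limit."""
    now = time.monotonic()
    window_start, count = _counters.get(source_ip, (now, 0))
    if now - window_start >= WINDOW_SECONDS:
        window_start, count = now, 0  # start a fresh counting window
    if count >= MAX_REQUESTS_PER_WINDOW:
        return False  # likely flood traffic; drop or challenge the request
    _counters[source_ip] = (window_start, count + 1)
    return True
```

Note that a large botnet defeats exactly this kind of per-source limit: with enough infected machines, each source stays under the threshold while the aggregate traffic still overwhelms the target. That is why the dedicated anti-DDoS measures discussed below matter.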
Many DDoS attacks succeed because organizations do not understand how to protect against them and have not made protection a priority. Security managers are generally well versed in choosing the right technologies to counter threats such as intrusions, worms, and web application exploits. But there is a common misconception in the security community that these same technologies can be relied upon for DDoS protection as well.
Perhaps the biggest misconception tied to DDoS attacks is that installing and running a single piece of protective software on a well-known Internet platform or host is enough to keep an organization safe. Recent attacks on major websites have disproved this in spades and rocked the IT community. In this slideshow, Zensar Technologies outlines the steps an organization can take to better protect itself as DDoS attacks continue to gain traction. These steps combine anti-DDoS technology with anti-DDoS emergency response services.

Track your changes
All organizations should keep a detailed record of the purpose behind their network's design and organization. This document must be updated in real time, so that when it is revisited, the reasoning behind the configuration of the network infrastructure can be easily determined. It also helps teams maintain awareness of older, trusted network configurations. Because infrastructure is rarely redesigned and configurations may remain in place for years, it is not rare for a network configuration to outlast the personnel who built it. To keep all team members in the loop, provide an overview of configurations during orientation or as part of annual auditing; a structured change record (sketched below) is one way to capture this history.

Sometimes keeping it simple can be stupid
Technology is often praised for being simple, but that virtue does not necessarily carry over to the security industry. When a simply designed network is penetrated by a malicious user, the entire network can be taken down with little effort. A more complex design can achieve greater stability through redundancy and fault tolerance (see the failover sketch below). That said, an auditor should understand the reasoning behind each configuration, and if something appears unusual, the administrator should investigate why it was designed that way. A complex system can offer more stability, but it must remain logical and fully understood by internal teams.

Test in two ways
When testing the usability of a site, it may be tempting to examine the solution from within the network and call it a day. But when serving customers over the Internet, it is important to also test the solution the way those customers reach it. This real-world approach lets teams measure the true customer experience, deliver higher-quality service, and improve satisfaction; a simple timing comparison is sketched below.
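To make the "Track your changes" advice concrete, the sketch below shows one possible structure for a change record; the field names and the use of Python dataclasses are our own illustration, not a prescribed format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConfigChangeRecord:
    """One entry in the network's design-and-change log."""
    component: str   # e.g. "edge-firewall-01" (hypothetical device name)
    change: str      # what was changed
    rationale: str   # why the configuration is designed this way
    author: str      # who made the change
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example entry, written at the moment the change is made:
entry = ConfigChangeRecord(
    component="edge-firewall-01",
    change="Tightened inbound rate limits on port 443",
    rationale="Mitigation added after repeated traffic spikes",
    author="jdoe",
)
```

Because each entry records the rationale, a configuration can outlast its author without the reasoning being lost.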
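For the redundancy and fault tolerance described under "Sometimes keeping it simple can be stupid", here is a minimal failover sketch: a client tries redundant deployments of the same service in order and falls back when one is unreachable. The endpoint URLs are placeholders.

```python
import urllib.request

# Placeholder endpoints for two redundant deployments of the same service.
UPSTREAMS = [
    "https://primary.example.com/health",
    "https://secondary.example.com/health",
]

def fetch_with_failover(urls, timeout=3):
    """Try each redundant upstream in turn; return the first healthy response."""
    last_error = None
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    return resp.read()
        except OSError as exc:  # URLError and timeouts are OSError subclasses
            last_error = exc    # fall through to the next redundant endpoint
    raise RuntimeError(f"All upstreams failed; last error: {last_error}")
```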
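And for "Test in two ways", the comparison can be as simple as timing the same request from both vantage points. The URLs below are placeholders for the same application reached internally and over the Internet.

```python
import time
import urllib.request

def timed_get(url, timeout=10):
    """Return (HTTP status, elapsed seconds) for a single GET request."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()
    return resp.status, time.monotonic() - start

# Placeholder URLs: the same application reached two different ways.
for label, url in [
    ("internal", "http://app.internal.example:8080/"),
    ("external", "https://www.example.com/"),
]:
    status, elapsed = timed_get(url)
    print(f"{label}: HTTP {status} in {elapsed:.2f}s")
```

A large gap between the two numbers is itself a finding: customers experience the external path, not the internal one.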
Give standardized marching orders
A network outage can be difficult to diagnose because network administrators tend to look only within their own segments. Issues often spread further than where they were first diagnosed, and a problem solved in one segment may still leave vulnerabilities in others. When building out standard operating procedures (SOPs) and emergency operating procedures (EOPs), administrators must recognize the implications their changes have on others within the network, and must have clear guidelines for who to contact and coordinate with in an emergency.
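One lightweight way to encode the "who to contact" half of an EOP is a machine-readable escalation map. The segment names and addresses below are hypothetical, purely to illustrate the idea.

```python
# Hypothetical escalation map: network segment -> ordered contact list.
ESCALATION = {
    "dmz":        ["noc@example.com", "security-oncall@example.com"],
    "core":       ["netops-oncall@example.com", "noc@example.com"],
    "datacenter": ["dc-oncall@example.com", "netops-oncall@example.com"],
}

def contacts_for(segment: str) -> list[str]:
    """Return who to notify, in order, when a segment has an incident."""
    try:
        return ESCALATION[segment]
    except KeyError:
        # Unknown segment: escalate to everyone rather than to no one.
        return sorted({c for group in ESCALATION.values() for c in group})
```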
Complacency can be dangerous
Internal processes can pose a much greater risk to the network than hackers. An organization that has never experienced a serious outage may assume its processes are solid, but that is not a sound mindset. Network administrators must constantly update and improve their systems to defend against new and more aggressive threats. Proactive monitoring, testing, and awareness are necessary to ensure network availability; a minimal monitoring loop is sketched below.

Always be prepared
Unfortunately, an actual security emergency is often the catalyst that shifts attitudes about security and prompts an organization to change its processes. To get ahead of security issues, imagine a worst-case scenario and then determine whether the organization's defenses would stand up against it.
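As one form of the proactive monitoring recommended above, the sketch below polls a health endpoint and flags consecutive failures. The URL, interval, and threshold are hypothetical placeholders.

```python
import time
import urllib.request

# Hypothetical values; tune them to the service being monitored.
HEALTH_URL = "https://www.example.com/health"
INTERVAL_SECONDS = 30
FAILURES_BEFORE_ALERT = 3

def is_healthy(url, timeout=5):
    """Return True if the health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # connection errors and timeouts
        return False

def monitor():
    consecutive_failures = 0
    while True:
        if is_healthy(HEALTH_URL):
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            if consecutive_failures >= FAILURES_BEFORE_ALERT:
                # In practice this would page the on-call engineer.
                print(f"ALERT: {consecutive_failures} consecutive failed checks")
        time.sleep(INTERVAL_SECONDS)
```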
Do not wait for hackers; get a regular checkup
Organizations cannot fix problems they are not aware of. To identify issues, work with a specialist rather than taking the "free clinic" approach of trying to assess problems on your own.
Specialists can perform assessments in three different scenarios: Black Hat, Grey Hat and White Hat. In a Black Hat checkup, the specialist has no prior knowledge of the customer's environment and attempts to penetrate the system like a malicious attacker; anything they know comes from the research and resources a real attacker would have at their fingertips. In a Grey Hat checkup, the "attacker" has limited knowledge of the company and its operation, such as the website IP address or whether an IDS/IPS is deployed. In a White Hat checkup, the hacker is given complete knowledge of the company, including internal and external IP schemes, IDS/IPS deployments, firewalls deployed and network diagrams. Finally, it is important to educate internal teams and to retest frequently. By following these steps, organizations can identify risks, test for vulnerabilities and patch proactively.
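To illustrate the kind of external probing a Black Hat-style checkup might begin with, here is a minimal TCP connect scan. The target host is a placeholder, and a scan like this should only ever be run against systems you are explicitly authorized to test.

```python
import socket

TARGET = "www.example.com"  # placeholder; scan only with written authorization
COMMON_PORTS = [21, 22, 25, 53, 80, 110, 143, 443, 3306, 8080]

def scan(host, ports, timeout=1.0):
    """Report which TCP ports accept a connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds.
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

print(scan(TARGET, COMMON_PORTS))
```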