4 Misconceptions About DDoS Mitigation

Blog Article Published: 11/02/2021

This blog was originally published by MazeBolt here.

Written by Yotam Alon, MazeBolt.

After several years in cybersecurity and specifically in the DDoS mitigation space, I often come across certain common and widespread misconceptions. Here are my top four:

Misconception #1: "DDoS attacks are big"

Based on my interactions with CISOs and backed by recent reports on the subject, one of the top misconceptions is that DDoS attacks are big. Networking is a complex topic – there are many ways to make a service unavailable, and most of them do not rely on massive traffic rates. One reason for this misconception may be that the biggest attacks are the ones that attract widespread media attention.

There is just a minimum rate an attack needs to reach, and MazeBolt's data shows that this rate typically falls between 0.5 and 10 Gbps. Research confirms that large attacks of 100 Gbps and above have fallen by 64 percent since 2019, while attacks of 5 Gbps or less have seen a startling 158 percent increase.

Enterprises struggle to distinguish low-rate attacks from legitimate traffic while also keeping their false-negative rate low. Like large attacks, small attacks can bring services down rapidly and create an equivalent impact on the business, which is why companies need to be prepared and review their web security arrangements.
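To illustrate the detection problem, here is a minimal, hypothetical sketch of a naive per-source rate threshold. The threshold value, IP addresses, and function names are illustrative only and are not MazeBolt's detection logic: a distributed low-rate attack stays under the threshold on every source, while a single busy legitimate client can trip it.

```python
# Hypothetical sketch: a naive requests-per-second threshold detector.
# All names and numbers are illustrative, not any vendor's real logic.

RATE_THRESHOLD_RPS = 1000  # assumed per-source threshold; real values vary by service

def flag_suspicious_sources(requests_per_second: dict[str, int]) -> set[str]:
    """Return source IPs whose request rate exceeds a fixed threshold."""
    return {src for src, rps in requests_per_second.items() if rps > RATE_THRESHOLD_RPS}

# A low-rate attack spread across many sources stays under the threshold
# on every individual IP, so nothing is flagged ...
low_rate_attack = {f"198.51.100.{i}": 50 for i in range(1, 201)}  # 200 bots * 50 rps
print(flag_suspicious_sources(low_rate_attack))   # set() -> the attack goes unnoticed

# ... while one legitimate traffic spike from a busy proxy gets flagged instead.
legit_spike = {"203.0.113.7": 1500}
print(flag_suspicious_sources(legit_spike))       # {'203.0.113.7'} -> false positive
```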

Misconception #2: "DDoS attacks may be common but we have never experienced one!"

This thought is by far the most harmful of all DDoS misconceptions. DDoS attacks are getting smarter and sneakier, and companies aren't sufficiently prepared to dodge them. In 2020, successful DDoS attacks grew by 200%. From Netflix to Twitter, Wikipedia to international banks, gaming and gambling sites, DDoS attacks have spared no industry segment. Although the statistics may suggest that attacks target "certain types" of enterprises, in reality 9 out of 10 businesses report having experienced an attack, with an average downtime of 30 minutes. Gartner estimates that a single minute of downtime costs most businesses $5,600, or more than $300,000 per hour. The aftermath of a DDoS attack brings monetary loss, operational challenges, and loss of customer trust. Ironically, enterprises that believe they are safe from web attacks are the ones that suffer the most debilitating damage, because they are unprepared.
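For a rough sense of what those figures mean in practice, the numbers quoted above work out as follows (a back-of-the-envelope calculation only):

```python
# Back-of-the-envelope downtime cost, using the figures quoted above:
# Gartner's ~$5,600 per minute and the 30-minute average outage.
COST_PER_MINUTE = 5_600          # USD, Gartner estimate
AVERAGE_DOWNTIME_MINUTES = 30    # average DDoS-induced outage cited above

print(COST_PER_MINUTE * 60)                        # 336000 -> "more than $300,000 per hour"
print(COST_PER_MINUTE * AVERAGE_DOWNTIME_MINUTES)  # 168000 -> cost of an average outage
```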

Misconception #3: "I don't care about <Region>, so Geo-blocking is effective for me."

Geo-blocking uses IP geolocation software to block malicious traffic: enterprises reject traffic originating from locations with a history of launching DDoS attacks. The problem with this arrangement is that genuine traffic from those locations is also blocked. Geo-blocking essentially restricts traffic and thereby stunts business growth. Despite this, enterprises very often see it as a quick fix to prevent DDoS attacks. However, hackers are smart, and they find ways to bypass geolocation blocking by spoofing their IP addresses.

In most cases, geo-blocking is not accurate. It provides an approximate, "best guess" estimate of where traffic originates, and IP ranges sold between countries and regions add further inaccuracy. Geo-blocking ultimately fails if the attacker spoofs the source IP or uses a reflection attack launched from an allowed location.
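As a rough illustration of why this fails, here is a minimal sketch of geo-blocking logic. The country codes, address prefixes, and lookup function are placeholders, not a real geolocation API: the decision hinges entirely on the claimed source address, which an attacker can spoof or launder through reflection.

```python
# Minimal sketch of geo-blocking at the request edge. The country lookup is a
# toy stand-in for a commercial IP geolocation database; the country codes and
# address prefixes are placeholders, not real routing data.
BLOCKED_COUNTRIES = {"AA", "BB"}  # placeholder codes an enterprise might deny

# Toy "best guess" table keyed by address prefix, mimicking a geolocation DB.
GEO_TABLE = {"192.0.2.": "AA", "198.51.100.": "BB", "203.0.113.": "CC"}

def lookup_country(ip: str) -> str:
    """Return a best-guess country code for an IP (often wrong in practice)."""
    for prefix, country in GEO_TABLE.items():
        if ip.startswith(prefix):
            return country
    return "ZZ"  # unknown

def is_allowed(source_ip: str) -> bool:
    # The decision trusts the *claimed* source address. A spoofed source IP,
    # or a reflection attack bounced off hosts in an allowed region, passes
    # this check, while genuine users in a blocked region are rejected.
    return lookup_country(source_ip) not in BLOCKED_COUNTRIES

print(is_allowed("192.0.2.10"))    # False -> legitimate users here are blocked too
print(is_allowed("203.0.113.50"))  # True  -> a spoofed/reflected source sails through
```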

Misconception #4: "My DDoS mitigation solution offers full protection!"

Even with the most sophisticated DDoS mitigation and testing solutions deployed, most companies have a staggering 48% DDoS vulnerability level. This gap stems from DDoS mitigation solutions and infrequent Red Team DDoS testing being reactive, instead of continuously evaluating and closing vulnerabilities.

Currently, mitigation solutions are unable to continuously re-configure and fine-tune their DDoS mitigation policies. This leaves ongoing visibility limited and forces teams to troubleshoot issues at the worst possible time: when a successful DDoS attack brings down systems. These solutions are reactive, responding to an attack rather than closing DDoS vulnerabilities before one happens.

Identifying vulnerabilities through DDoS Red Team testing is not workable in the long run. The testing simulates a small variety of DDoS attack vectors in a controlled manner to validate the human response (Red Team) and procedural handling of a successful DDoS attack. Red Team testing is a static test performed on dynamic systems, usually carried out twice a year. It does not diagnose a company's vulnerability level to DDoS attacks, and any information gained from it is valid only for that point in time. Further, Red Team testing disrupts IT systems and requires a planned maintenance window.

Learn more about how to stop DDoS attacks in our CloudBytes webinar.


About the Author

Yotam is Head of R&D at MazeBolt and is in charge of all R&D activities, infrastructure, and security. With five years in the security industry, Yotam brings fresh perspectives and insights into current technologies and development flows. He holds a BSc in mathematics and philosophy and enjoys hitting the archery range in his spare time.
