
Abuse in the Cloud

Published 02/12/2021


Written By: Allan Stojanovic and Spencer Cureton from Salesforce, Inc.

Join the new Cloud Abuse Circle Community to participate in the discussion around this topic.

When we talk about “abuse”, we use the term as shorthand for the much more encompassing “Abuse, Misuse, Malice and Crime” (with credit to Trey Ford). Within this definition we find three subcategories of activity: Monetization, Weaponization, and Misinformation campaigns. Although not perfect, this at least starts to give us some language to describe what we see.

Abuse actors are goal-oriented.

Abuse actors are often goal-oriented. They have an idea of what they want to accomplish and pursue that goal in a serial fashion, tackling each barrier as it arises and trying to figure out a way over, around, or through it.

“No different than other programmers.”

The barriers they encounter are rarely security controls, because in most cases the hosting platform is not the target. In reality, they take advantage of blind spots, flaws in business logic, and areas of ambiguity where they can argue the rules on a technicality.

“Technically, I’m not spamming, I’m telling people the truth about the moon landings!”

All that being said, if a security control is put into place after they have been accomplishing their goal for some length of time, they will test that control and attempt to circumvent it.

It is also worth mentioning that if no control is put into place, many types of abuse end up as a commodity, sometimes commercialized, so that anyone can accomplish the same goal without the technical skills to do so. You don’t need to look any further than any open marketplace to see examples of this.

Abuse actors often have multiple goals.

To add complexity, some abusers are what we term “blended threats”. They have multiple goals and sometimes look to achieve them at the same time. That means that they can be difficult to categorize, and stopping one campaign does not mean that they have left the cloud platform.

“Maybe I’ll cryptomine while waiting for my next phish victim.”

For those of us in the cloud services game, most of this is as old as the first scam on the Internet. If you are offering a messaging system, someone is using it for spamming and phishing. If you are offering “free compute”, someone is trying to cryptomine on it. If you have a social network, someone is trying to spread disinformation to its users.

When these threats are localized, entirely self-contained within a single platform, there is at least a chance of catching and stopping the abuse, as long as the organization has agreed on a list of what it will and will not tolerate. But that very agreement has been leading at least some abusers to spread their efforts across multiple platform providers.

“If my front end is on this service, and my backend is on that one, can they still detect me?”

As a cloud provider, we are seeing more and more of these multi-strata toolkits. In one particular case, we saw a phishing kit that used an SMS service to send its messages. The link in each message sent victims to a second service that hosted a static landing page asking for credentials. When the submit button was pressed, the form details were sent to yet another service hosting the form back-end, and that program forwarded the phished credentials to multiple (free) email addresses and, for good measure, also stored them in a cloud database provider’s free-tier offering.

“They are only using our services as designed.”

This technique is not all that new, except that with the proliferation of cheap or free online services, it is no longer necessary to use “hacked” machines to do this. Now an abuser can set it up easily and for free, and it has become easier to automate to boot.

And it isn't even limited to phishing attacks. We have also seen multi-strata coin miners that detect which platform they are running on in order to take best advantage of the available features. By itself, platform detection is not bad, but coupled with obfuscation techniques and the goal of stealing resources to run the miners, it does point to abusive intent.
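To make that platform-detection step concrete, here is a minimal, hypothetical sketch of how a workload might fingerprint its host by probing each provider's publicly documented instance-metadata endpoint. The endpoints and headers below are the standard, documented ones; the function name and structure are illustrative assumptions and are not taken from any kit we observed. Legitimate agents and installers use the same trick, which is exactly why detection has to rest on intent and context rather than on this behavior alone.

```python
# A minimal sketch of cloud-platform fingerprinting via instance-metadata endpoints.
# The probe URLs and required headers are the publicly documented ones for each
# provider; everything a miner would *do* with the answer is deliberately omitted.
import urllib.request

PROBES = {
    "aws":   ("http://169.254.169.254/latest/meta-data/", {}),
    "gcp":   ("http://metadata.google.internal/computeMetadata/v1/",
              {"Metadata-Flavor": "Google"}),
    "azure": ("http://169.254.169.254/metadata/instance?api-version=2021-02-01",
              {"Metadata": "true"}),
}

def detect_platform(timeout: float = 1.0) -> str:
    """Return the first platform whose metadata endpoint answers, else 'unknown'."""
    for name, (url, headers) in PROBES.items():
        req = urllib.request.Request(url, headers=headers)
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                if resp.status == 200:
                    return name
        except OSError:
            continue  # endpoint unreachable or refused; not this platform
    return "unknown"

if __name__ == "__main__":
    print(detect_platform())
```

The same answer can usually be derived from environment variables or DMI strings as well; the point is simply that a single payload can adapt itself to whichever free tier it happens to land on.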

Getting these types of kits reported and shut down can be quite the chore for so many different reasons, many of which you can probably imagine. But it is possible.

The biggest issue is what we can only think of as “corporate jurisdiction”. Cloud providers necessarily consider all users of their platform to be customers, and those customers are therefore usually covered by various forms of contractual protection, addressing things like privacy, confidentiality, and usually some basic security. And this is good.

But this is being used as air cover for all sorts of activities, including criminal ones.

“So what if they catch me, they'll just delete the kit, and I’ll deploy it again. Besides it will take them a couple of weeks to figure it out, and by then I’ll have what I want.”

We want to start a discussion about abuse that takes advantage of these frameworks to keep operating and to avoid detection and identification. We are looking to form an Abuse ISAC in partnership with the Cloud Security Alliance, to talk about some of these issues, and more, and see if there is anything we should collectively be doing differently. Will you join us?

If you are a member of the Cloud Security Alliance, you are already invited. Be sure to join CSA’s Circle community to stay engaged on this topic, as well as other current discussions. Join the group today on CSA's Cloud Abuse Circle page.

Here is a question to get you started.

How do you “fire” a customer that is criminal, and when you do, what contractual obligations remain?

Let us know what you think in our Circle community for Cloud Abuse.
