Avoiding Storms In The Cloud – The Critical Need for Independent Verification
Blog Article Published: 08/16/2012
By Chris Wysopal, Co-founder and CTO of Veracode
Last year, Forrester predicted that the cloud computing market would top $240 billion by 2020. Market Research Media came up with an even more aggressive forecast of $270 billion by 2020. None of this is particularly surprising: cloud technology is clearly here to stay, particularly if cloud providers are able to maintain secure environments for their customers. As companies adapt to the shifting cloud paradigm to address cost, scalability, and ease-of-delivery issues, there continues to be a growing concern about the safety of data in the cloud, and whether cloud security can ever be as robust as enterprise security.
The dangers associated with storing information in the cloud are regularly highlighted in well-publicized breaches and security flaws experienced by some of the world’s most well-known brands. Cloud businesses such as Amazon, Yahoo, LinkedIn, eHarmony and Dropbox have all been attacked in just the last few months, but the problem is not exclusive to consumer-facing businesses. B2B organizations that offer cloud-based solutions, like my company Veracode, are facing their own set of security requirements from business customers that need to ensure their data is protected.
The answer to why cloud security has become such a fast growing concern for enterprise organizations today can be found in a perfect storm of current trends.
First is that the reporting of security breaches has skyrocketed, in part because hacktivists love the publicity but also because crime typically occurs where there is value, and in our digital economy the value resides in various forms of intellectual property.
Second is that today’s cloud computing environments often distribute corporate intellectual property across many different infrastructures while promising authorized users ready access to that information, which means the value can be found in many places.
Third is that enterprise organizations rarely use just one cloud-based service. If one were to count the number of Salesforce.com customers that have integrated the service with other cloud-based marketing automation solutions, or cloud-based accounting solutions, it would be a very high number. With all of this corporate information and intellectual property now residing in so many interconnected places in the cloud, hackers actively looking for weaknesses can abuse those connections and wreak havoc for cloud customer and provider alike.
What most enterprise organizations are looking for from prospective cloud-based solution providers is transparency in the provider’s security mechanisms and IT processes. Companies want to know what security mechanisms are being used to keep their information confidential and secure, particularly while it is in transit to and from the provider’s datacenter, but also while it is in use in the datacenter, while it is at rest in a disaster recovery site, and ultimately, how the information is finally deleted. Customers are also concerned about the security mechanisms used to authenticate company users that will be accessing and updating the information. Sure, the goal of most cloud-delivered services is to provide fast, easy, ready access to corporate information – but only to the appropriate people.
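One of the mechanisms mentioned above, authenticating company users, is worth a concrete look. The sketch below is a minimal, hypothetical illustration (not any particular provider's implementation) of how a service can verify user credentials without ever storing plaintext passwords, using only Python's standard library:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a slow, salted hash so stored credentials resist offline cracking."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, expected):
    """Recompute the hash and compare in constant time to avoid timing leaks."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)

# The provider stores only (salt, digest); the plaintext never hits disk.
salt, stored = hash_password("correct horse battery staple")
```

The design choice here is the point: even if a provider's user database is exposed in a breach, salted, iterated hashes are far harder to reverse than plaintext or unsalted hashes, which is exactly the kind of detail a transparent provider should be able to describe to its customers.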
In terms of process transparency, companies need (and want) to know that a provider’s IT procedures do not allow for corporate information to be exposed to members of the provider’s workforce, even during routine maintenance or updates to infrastructure or service software. They also want to know whether the service infrastructure and software is continually being hardened against attack, and that the incident response procedures are well known and appropriately followed. Many breaches have been tied to vulnerabilities, such as SQL injection, in the custom software developed by the service provider. Customers are beginning to seek evidence that this software was developed and tested for security.
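SQL injection, cited above as a common root cause of breaches, comes down to whether user input is treated as SQL text or as data. This sketch (using Python's built-in sqlite3 and an illustrative users table, not any real provider's schema) contrasts an injectable query with a parameterized one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # UNSAFE: attacker-controlled input is concatenated into the SQL text,
    # so input like "' OR '1'='1" returns every row instead of none.
    return conn.execute(
        "SELECT * FROM users WHERE name = '" + name + "'").fetchall()

def find_user_safe(name):
    # SAFE: the ? placeholder makes the driver treat input strictly as data.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```

A customer asking for evidence of secure development is, in effect, asking whether the provider's codebase consistently follows the second pattern and whether testing would catch the first.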
This brings us to the impact cloud security concerns are having on solution providers. While customers are certainly asking more questions about their providers’ security, they are also increasingly expecting independent proof of the answers. This is a good thing.
One example that we recently encountered at Veracode was during an RFP process, which asked that we answer the checklist questions published in Gartner’s September 2011 research note titled “Critical Security Questions to Ask a Cloud Service Provider.” The checklist is designed to arm customers with the necessary security questions to ask of their cloud-based solution providers as part of their due diligence. We provided those answers, but the customer went further, asking for our SysTrust report and proof that our hosting provider was certified as an SSAE 16 facility. SysTrust certification requires Ernst & Young audits every January and February that review process documentation, include personnel interviews, and examine activity logs to determine whether effective platform controls existed to protect information during the previous year. The hosting provider also goes through a similar process with its auditors, providing an added layer of third-party security validation.
Ultimately the burden of security should fall on both the cloud solution provider and the customer. As Greg Rusu, general manager of PEER 1 Hosting’s public cloud division Zunicore stated in a recent InfoSecurity article, “the burden of security lies with both the cloud provider and the customer. No matter how secure the cloud provider makes the infrastructure…what we see in practice is that security is a partnership.”
After all, at the end of the day it’s the customers’ duty to protect their intellectual property and corporate information. Taking assurances from cloud solution vendors, even in writing, only provides a certain level of assurance, which is why calling for third party validation is so critical. This level of third party inspection is no different than the advice we give our own customers about securing their applications – trust is good but independent validation is much better.
Chris Wysopal, co-founder and chief technology officer of Veracode, is responsible for the security analysis capabilities of Veracode technology. He is recognized as an expert in the information security field, and his opinions on Internet security are highly sought after. Wysopal has given keynotes at computer security events and has testified on Capitol Hill on the subjects of government computer security and how vulnerabilities are discovered in software. He has also delivered keynote addresses at West Point, to the Defense Information Systems Agency (DISA) and before the International Financial Futures and Options Exchange in London. Wysopal’s groundbreaking work in 2002 while at the company @stake was instrumental in developing industry guidelines for responsibly disclosing software security vulnerabilities. He is a founder of the Organization for Internet Safety, which established industry standards for the responsible disclosure of Internet security vulnerabilities.