Looking at the Evidence for Trusting Cloud Computing

Ian Farquhar, Advisory Technology Consultant for RSA, the Security Division of EMC

Are we trusting a third party with our data? Yes, we are, and we have been for years. In the past, many companies used bureau computing, where they sent out workloads on magnetic or paper tape and got the results (usually a print-out) back a few days later. Sometimes this was Software-as-a-Service, sometimes this was Platform-as-a-Service, although we didn't use those acronyms then. It was just service bureau computing.

We also must ask what a "third party" is in this situation, especially when most organizations use service providers and system integrators. For example, many organizations I deal with use MPLS via a major service provider to link their offices, yet few encrypt their data as it transits that provider's network.[1] We also allow system integrator staff access into our networks, and even have contractors in our corporate identity management systems, regularly with system privileges.

What of the "insider risk"? Although many companies perform background checks on prospective employees (ISO 27001 section 4.3.2 recommends them), it's a point-in-time analysis. RSA's former CTO, Bret Hartman, once noted that an employee's trustworthiness is never a fixed quantity, but can change over time.[2] Just because someone is trustworthy now doesn't make them trustworthy forever.

Let's also look at the challenge of software reliability and trustworthiness. As with the POS terminal, what is my rational basis for assuming that I can trust a piece of software provided by a third party? Firstly, it might have serious security or reliability bugs which compromise my data or production capabilities. But it might also have deliberate backdoors or time-bomb features. There are certainly many stories from the intelligence community of trojanized application and system software being infiltrated into target organizations. This is known as "red threading" in the intelligence community.

Finally, what makes people assume that in-house hardware is secure? I firmly contend that this belief is an irrational control bias. The average server contains around 5-10[3] microprocessors, only one of which is the main CPU.[4] Processors can be found in the RAID controller, the "lights out" management controller, the Ethernet controller and even each disk drive. All run firmware, and some even boot embedded operating systems. Some have DMA access into main memory. Most of the firmware for these units is either included in the device itself, or is quietly uploaded by the driver during the boot process. Outside of the intelligence community, I've never heard of an organization worrying about an Ethernet controller firmware change pushed down during a driver upgrade, or more generally, about whether they should trust a specific piece of hardware at all. But perhaps they should. In the hardware security research we do at RSA, some of our brainstorming sessions have turned to ways we would attack a target, which is a useful way to create countermeasures for those attacks. As a security community, we need to ask ourselves how we can prevent firmware manipulation from becoming a future attack vector. There are security researchers looking into how hardware can be trojanized[5], and how to detect that trojanization.[6]

The above list of risks is not exhaustive, but it is an overview of some of the risks we take on for bare-metal and private cloud deployments.
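To make the firmware risk concrete, here is a minimal sketch (not an RSA tool; the file location and baseline filename are assumptions for illustration) of one thing an organization could do today: record known-good hashes of the firmware blobs a Linux host stages under /lib/firmware, and flag any change on later runs.

```python
#!/usr/bin/env python3
"""Sketch: baseline-and-compare check for host-staged firmware blobs.

This only covers firmware the OS uploads from disk (e.g. /lib/firmware);
firmware persisted inside devices themselves is invisible to this check.
"""
import hashlib
import json
import sys
from pathlib import Path

FIRMWARE_DIR = Path("/lib/firmware")        # where Linux drivers load blobs from
BASELINE = Path("firmware-baseline.json")   # hypothetical known-good record


def hash_blobs(root: Path) -> dict[str, str]:
    """Return {relative_path: sha256_hex} for every firmware file under root."""
    digests = {}
    for f in sorted(root.rglob("*")):
        if f.is_file():
            digests[str(f.relative_to(root))] = hashlib.sha256(
                f.read_bytes()
            ).hexdigest()
    return digests


def main() -> int:
    current = hash_blobs(FIRMWARE_DIR)
    if not BASELINE.exists():
        # First run: record the baseline and exit cleanly.
        BASELINE.write_text(json.dumps(current, indent=2))
        print(f"Recorded baseline for {len(current)} firmware files.")
        return 0
    baseline = json.loads(BASELINE.read_text())
    changed = [p for p, h in current.items() if baseline.get(p) not in (None, h)]
    added = [p for p in current if p not in baseline]
    removed = [p for p in baseline if p not in current]
    for label, items in (("CHANGED", changed), ("ADDED", added), ("REMOVED", removed)):
        for p in items:
            print(f"{label}: {p}")
    return 1 if (changed or added or removed) else 0


if __name__ == "__main__":
    sys.exit(main())
```

Note the asymmetry this sketch exposes: firmware that a driver uploads from disk can at least be baselined on the host, but firmware persisted inside the NIC, RAID controller or disk drive itself cannot be seen by the operating system without vendor-specific attestation support.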
The point in all of this is to note that even for in-house workload deployments, we accept a significant amount of risk: from insiders, from contracted third parties, from both software bugs and malicious trojanization, and even from the fact that the hardware is significantly more programmable than most people realize. All of these challenges apply to the providers of public cloud services too, and need to be managed there as well. But right now, I perceive that few organizations are doing so, in either camp.

Let's also remember that public cloud services provide security controls which benefit from the economies of scale such deployments offer. Security Operations Center (SOC) and Critical Incident Response Center (CIRC) capabilities are supported by those economies of scale. For Software-as-a-Service deployments, the in-depth knowledge of the application provider also facilitates additional functionality such as enhanced domain-specific analytics, which supports continuous authentication and risk/fraud evaluation (a toy sketch of this kind of scoring appears below). A cloud service provider may have incident response capabilities ready to go, whereas a typical organization would need to contract them in. All of these are economy-of-scale benefits which don't apply to in-house deployments.

So, are all cloud skeptics suffering from a control bias? Are all cloud evangelists hopeless marketing droids driven by "the latest buzzwords"? In reality, we should be neither cloud skeptics nor cloud evangelists. We need to be security professionals. Security professionals manage risk. They evaluate it, they constantly reevaluate it, and they are agile enough to change their approach if the evidence warrants it.
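As promised above, here is a toy sketch of the general shape of risk-based session scoring. Every signal name, weight, and threshold below is invented for illustration; this is not RSA's adaptive authentication logic, just an outline of how domain-specific signals can feed a continuous authentication decision.

```python
"""Toy sketch of continuous, risk-based authentication scoring.

All signals, weights, and thresholds are invented for illustration.
"""
from dataclasses import dataclass


@dataclass
class SessionSignals:
    new_device: bool          # device fingerprint not previously seen for this user
    geo_velocity_kmh: float   # implied travel speed since the last login
    failed_logins_24h: int    # recent failed attempts on this account
    odd_hour: bool            # activity outside the user's usual hours


def risk_score(s: SessionSignals) -> float:
    """Combine signals into a 0..1 risk score (weights are illustrative)."""
    score = 0.0
    score += 0.35 if s.new_device else 0.0
    score += 0.30 if s.geo_velocity_kmh > 900 else 0.0  # faster than a jetliner
    score += min(s.failed_logins_24h, 5) * 0.05
    score += 0.10 if s.odd_hour else 0.0
    return min(score, 1.0)


def decide(score: float) -> str:
    """Map the score to an action tier; thresholds are illustrative."""
    if score < 0.3:
        return "allow"
    if score < 0.6:
        return "step-up (e.g. one-time passcode)"
    return "deny and alert"


if __name__ == "__main__":
    session = SessionSignals(new_device=True, geo_velocity_kmh=1200.0,
                             failed_logins_24h=3, odd_hour=True)
    s = risk_score(session)
    print(f"risk={s:.2f} -> {decide(s)}")  # prints: risk=0.90 -> deny and alert
```

The tiered decision is the design point worth noting: rather than a binary allow/deny, a middle "step-up" band lets the provider challenge only suspicious sessions, which is exactly the kind of tuning that benefits from a provider's economy-of-scale view of fraud across many customers.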
Ian Farquhar is an Advisory Technology Consultant for RSA, the Security Division of EMC. In this role, he advises organizations throughout Australia and New Zealand in areas including information security, cryptography, compliance, privacy and data protection. Ian also contributes to R&D at RSA in the area of hardware security. Ian has over 20 years of experience working in the IT security industry.

[1] When I was doing security consulting at a previous company, a large organization had a policy that all network traffic outside their facilities must be encrypted. However, their ISP had convinced the organization not to encrypt on MPLS, because it made debugging issues difficult for them. So they encrypted data passing between floors through ducts, but not when it travelled over shared telco MPLS infrastructure!

[2] This is something well understood by government security clearance programs, which not only seek previous evidence that the employee has demonstrated integrity, but also seek to understand whether there are leverageable stressors which could be used to break that integrity. Examples could include financial challenges, family members overseas in hostile locations, and many others. Few enterprises can afford to do that, and many would be legally prevented from that level of investigation.

[3] For desktop and server systems it's even more: radio basebands, keyboard and mouse controllers, DSPs in the sound controller, the GPU core itself, multimedia accelerator cores in the GPU, and so on. For a real attack against a keyboard microcontroller: http://www.zdnet.com/blog/security/hacker-demos-persistent-mac-keyboard-attack/3851

[4] Don't forget that many peripherals also include BIOS code, and that x86 processors from both Intel and AMD upgrade their microcode during the boot process.
