In my previous blog, I introduced the idea that the concepts around security incident response need to evolve based on the threat landscape facing organizations. The first step toward this next generation of security operations is improving visibility into what is going on in the technical infrastructure. I used the analogy of giving telescopes to the lookouts on the castle walls so they can see an impending attack sooner.

First, our lookouts need to be looking in the right direction, taking in the activities in and around our castle. Real-time monitoring is necessary to capture events and organize the data so that the security operations function can make sense of the activity. Security Information and Event Management (SIEM) platforms and log collection and correlation systems are examples of this infrastructure, which also includes file integrity monitoring systems, system event logging systems, application logging systems and any other technology, role or process that actively monitors systems.

Second, the lookouts need to not only see, but understand, what is going on around them. The second element is therefore Forensics and Analysis: reviewing security information from the real-time monitoring processes and performing analysis, based on expert input, to identify patterns of active threats in the infrastructure. This also includes the evidence collection, preservation and analysis processes that support Incident Management and Investigations.

Most organizations have these capabilities. The depth and breadth of their ability to capture and inspect events and network traffic vary, but this infrastructure has been part of security strategies for a while. Two key inputs are needed to really move the needle when it comes to improving these capabilities within Security Operations. "Real time" event analysis opens up many challenges: too much data moving too quickly toward an overwhelmed team of people.
The technologies for these monitoring processes are getting better. A dimension that can greatly advance the process is feeding the criticality and data profile of devices into the mix. Understanding how devices connect to business processes, and ultimately what data flows through those devices, provides "business context" and is the next evolution of "tuning" for real-time monitoring. The second factor in improving monitoring processes is security intelligence and "indicators of compromise": known malicious code, URLs, hosts and other data that help security operations identify possible attacks or actual breaches. This information, coupled with the business context, greatly improves security operations' ability to prioritize.

I won't keep the analogy running too much longer and exhaust my readers, but I think it is an apropos way to look at this. The first iteration of real-time monitoring placed lookouts on the ramparts, focused on watching everything going on OUTSIDE the castle. Next, we told the lookouts to watch both outside and inside the castle. Now we need to give the lookouts better methods to view what is going on, and methods to identify areas of surveillance (key vulnerable areas, indicators of malicious activity, etc.) that need extra attention.

To see what RSA is doing in these areas, check out the upcoming Security Analytics event sponsored by RSA: https://presentations.inxpo.com/shows/rsa_sa/registration/rsasar.html
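To make those two inputs concrete, here is a minimal sketch in Python of how an indicator feed and device criticality might be combined to rank events for triage. The indicator addresses, asset names and scoring weights are entirely hypothetical illustrations, not how any particular SIEM product works:

```python
# Hypothetical indicator-of-compromise feed and asset inventory.
# In practice these would come from threat intelligence and a CMDB.
KNOWN_BAD_HOSTS = {"198.51.100.7", "203.0.113.42"}
ASSET_CRITICALITY = {"db-payments-01": 3, "web-public-02": 2, "dev-sandbox-09": 1}

def score_event(event):
    """Rank an event by combining an IOC hit with the asset's business value."""
    ioc_hit = 2 if event["remote_ip"] in KNOWN_BAD_HOSTS else 0
    return ioc_hit + ASSET_CRITICALITY.get(event["host"], 0)

events = [
    {"host": "dev-sandbox-09", "remote_ip": "192.0.2.10"},
    {"host": "db-payments-01", "remote_ip": "198.51.100.7"},
]

# The payments database talking to a known-bad host floats to the top.
for e in sorted(events, key=score_event, reverse=True):
    print(e["host"], score_event(e))
```

The point of the sketch is the shape of the idea: business context and threat intelligence are just two extra lookups applied to every event, which is what turns a flood of raw alerts into a prioritized queue.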
Next Generation Security Operations: Telescopes for the Lookouts
A New Chapter for EMC's New CTO
John Roese, Senior Vice President and Chief Technology Officer

I would like to introduce myself… my name is John Roese, and I am the new Global CTO of EMC Corporation. While I have been with the company since October 2012, I am now getting time to participate in the robust...
Converged compute and storage solutions
Lately I have been looking more and more into converged compute and storage solutions, or "datacenter in a box" solutions as some like to call them. I am a big believer in this concept, as some of you may have noticed. For those who have never heard of these solutions, examples would be Nutanix or SimpliVity. I have written about both Nutanix and SimpliVity in the past, and for a quick primer on those respective solutions I suggest reading those articles. In short, these solutions run a hypervisor with a software-based storage layer that creates a shared storage platform from local disks. In other words, no SAN/NAS required: a full datacenter experience in just a couple of U's.

One thing that stood out to me over the last 6 months is that Nutanix, for instance, is often tied to VDI/View solutions. In a way I can understand why, as it has been part of their core message and go-to-market strategy for a long time. In my opinion, though, there is no limit to where these solutions can grow and go. Managing storage, or rather your full virtualization infrastructure, should be as simple as creating or editing a virtual machine. That was one of the core principles mentioned during the vCloud Distributed Storage talk at VMworld (vCloud Distributed Storage, by the way, is a VMware software-defined storage initiative). Hopefully people are starting to realize that these so-called Software Defined Storage solutions will fit in most, if not all, scenarios out there today.

I've been having several discussions with people about these solutions and wanted to give some examples of how they could fit into your strategy. Just a week ago I was having a discussion with a customer around disaster recovery. They wanted to add a secondary site and replicate their virtual machines to that site. The cost associated with a second storage array was holding them back.
After an introduction to converged storage and compute solutions, they realized they could step into the world of disaster recovery slowly. These solutions allowed them to protect their Tier-1 applications first and expand their DR-protected estate when required. By using a converged storage and compute solution, they can avoid the high upfront cost and scale out when needed (or when they are ready).

One of the service providers I talk to on a regular basis is planning to create a new cloud service. Their current environment is reaching its limits, and predicting how this new environment will grow over the coming 12 months is difficult due to the agile and dynamic nature of the service they are developing. The great thing about a converged storage and compute solution is that they can scale out whenever needed, without a lot of hassle. Typically the only requirement is the availability of 10Gbps ports in your network. For the provider, though, the biggest benefit is probably that services are defined by software: they can up-level or expand their offerings when they please or when there is demand.

These are just two simple examples of how a converged infrastructure solution could fit into your software-defined datacenter strategy. The mentioned vendors, Nutanix and SimpliVity, are also just two examples out of various companies offering these. I know of multiple start-ups working on similar products, and of course there are the likes of Pivot3 who already offer turnkey converged solutions. As stated earlier, I am personally a big believer in these architectures, and if you are looking to renew your datacenter or are on the verge of a green-field deployment, I highly recommend researching these solutions. Go Software Defined – Go Converged!

"Converged compute and storage solutions" originally appeared on Yellow-Bricks.com. Follow me on twitter - @DuncanYB.
A new EMC FCoE “Case Studies” TechBook revision is available!
A new version of the “Fibre Channel over Ethernet (FCoE) and Data Center Bridging (DCB) Case Studies” TechBook is available. This version introduces a new Juniper QFX3500 case study as well as a few updates (bug fixes) to the existing case studies. The complete list of configurations described in the TechBook is:
We're currently working on two additional updates that we hope to release in late February. I'll provide more detail once we release them, but for now I'll just say that we'll be updating the Nexus 7k case study, and the UCS case study will be expanded to include an EMC storage direct-connect topology. Thanks for reading!
Free VDI training videos from TrainSignal
First and foremost, let me point out that TrainSignal (@TrainSignal) is a sponsor of this vTexan.com blog site. BUT I'd like to point out that I loved them before they were a sponsor. Thank God for social media (Twitter and Facebook in this case), or I would have totally missed these little gems of FREE videos! I first saw this come across David Davis's twitter account (if you aren't following him, you are crazy – @DavidMDavis) and then I saw it post to my Facebook timeline. So, note to others: get on twitter and follow some people!

Anyway, for those of you that follow my blog, you know I have a soft spot for End User Computing. I spend some of my time working with customers that are way down the path of deploying virtual desktops, and I also work with others that are struggling to get their hands around this ever-changing world. Here is an AWESOME collection for those of you in the beginning stages of getting a better understanding of VDI. David Davis of TrainSignal does an awesome job of going through some of the top solutions.

Intro to Desktop Virtualization <— Main YouTube page for the videos.

Lesson 1 – Virtual Desktop Infrastructure Overview
Lesson 2 – What is Desktop Virtualization and VDI
Lesson 3 – Desktop Virtualization vs Terminal Services
Lesson 4 – Microsoft's Desktop Virtualization Solution
Lesson 5 – Citrix's Desktop Virtualization Solutions
Lesson 6 – VMware's Desktop Virtualization – I have to disagree with David on his description of VMware View being geared towards large, or very large, enterprises. I have lots of customers running 50s and 100s of desktops. Not sure what his definition of a (very) large enterprise is.
Lesson 7 – Next Steps in Desktop Virtualization
Lesson 8 – Installing Citrix VDI-in-a-Box

By the way, if you liked the videos, make sure you check out TrainSignal's website. They have some great computer-based training modules around VDI.
Here are some examples:

VMware (View and vSphere) Training –> http://www.trainsignal.com/VMware-Training.aspx
Citrix Training –> http://www.trainsignal.com/Citrix-Training.aspx

So, if you are looking for some FREE training, make sure you check out the YouTube training! If you have any questions, feel free to leave a comment.
PS – Hey @TrainSignal – how about a course on hints/tricks for managing WordPress.org!
The Big Data Storymap
I wanted to share some recent work that we have been doing inside EMC Global Services to create a "Big Data Storymap" that would help clients understand the big data journey in a pictorial format. The goal of a storymap is to provide a graphical visualization that uses metaphors and themes to educate our clients about the key components of a successful big data strategy[1]. And like any good map, there are important "landmarks" that I want to make sure you visit.

Landmark #1: Explosive Market Dynamics

Market dynamics are changing due to big data. Data, like water, is powerful. Massive volumes of structured and unstructured data, a wide variety of internal and external data, and high-velocity data can either power organizational change and business innovation, or swamp the unprepared. Organizations that don't adapt to big data risk:
On the other hand, organizations that aggressively integrate big data thinking and capabilities will be able to:
Landmark #2: Business and IT Challenges

Big data enables business transformation, moving from a "rearview mirror" view of the business that uses a subset of the data in batch to monitor business performance, to a predictive enterprise that leverages all available data in real time to optimize business performance. However, organizations face significant challenges in leveraging big data to transform their businesses, including:
Traditional business intelligence and data warehouses struggle to manage and analyze new data sources. Their architectures are:
Landmark #3: Big Data Business Transformation

Where are an organization's aspirations with respect to leveraging big data analytics to power value creation processes? Some organizations struggle to understand the business potential of big data; they are unclear about the different stages of business maturity. Our Big Data Maturity model benchmarks an organization's big data business aspirations and provides a way to identify the level of sophistication desired for data monetization opportunities:
Landmark #4: Big Data Journey

The big data journey requires collaboration between business and IT stakeholders along a path that identifies the right business opportunities and the necessary big data architectures. The journey needs to 1) focus on powering the organization's key business initiatives while 2) ensuring that the big data business opportunities can be implemented by IT. The big data journey follows this path:
Landmark #5: Operationalize Big Data

Successful organizations define a process to continuously uncover and publish new insights about the business. Organizations need a well-defined process to tease out analytic insights and integrate them back into the operational systems. The process should clearly define roles and responsibilities between business users, the BI/DW team and data scientists to operationalize big data:
Landmark #6: Value Creation City

Big data holds the potential to transform, or rewire, your value creation processes to create competitive differentiation. Organizations need a big data strategy that links their aspirations to the organization's key business initiatives. Envisioning workshops and analytic labs identify where and how big data can power the organization's value creation processes. There is almost no part of the organization that can't improve its value creation capabilities with big data, including:
The Big Data Journey Storymap

The big data storymap provides an engaging visual to help organizations understand some of the key components of a successful big data strategy. I hope that you will enjoy the storymap as much as I enjoyed the opportunity to work with Mark Lawson and Glenn Steinhandler to pull it together!
[1] Check out Mark’s blog “Visual Thinking The IT Transformation Storymap” for the IT Transformation storymap.
What Matters Most to You? Get Executive Insights and Share Yours with the New, Channel Matters Video Blog
By Fred Kohout, EMC Vice President Global Channel Marketing, @nkohouts
I'm excited to introduce a new video blog series, Channel Matters, featuring EMC executives who are keen to share their insights on channel news that really matters to you, our partners.
In this first Channel Matters blog, Leonard Iventosch, Vice President of Channel Sales in the Americas, discusses highlights from 2012 and shares his vision for 2013. Check it out, and be sure to share your thoughts on what matters most to you by posting a comment.
Going to Cisco Live in London this week? Don’t miss out on all the cool stuff!
EMC is a Platinum Sponsor for this week's event, and as such we have a HUGE booth right by the entrance to the exhibition hall, with 8 different areas fully stocked with experts wanting to talk to you:

Workstation 1 & 2: Cloud transforms IT Infrastructure

To make it easy for you to keep track of all the cool stuff that we do at Cisco Live, EMC has a community site up at https://community.emc.com/community/events/cisco_live; please have a look there for content, discussions and the latest news. Also, we'll have presentations on how you can Transform IT+Business+Yourself running throughout the day, with drawings for t-shirts and Apple TVs (awesome device IMHO).

Btw, ever wondered what a Vblock or a VSPEX actually looks like? Not only do we have real hardware on the floor, we also have an interactive screen where you can easily move, remove, add, change and zoom in on all the hardware that makes up those converged infrastructures. Come touch it yourself!

We also have a bunch of great speaking slots; make sure you don't miss out on the ones you're really interested in:

EMC CONFERENCE SESSION: Date: Jan. 30 – 16:30-17:30
EMC CASE STUDY: Date: Jan. 31 – 11:30-12:30
LIVE WEBCAST: Date: Jan. 30 – 15:00-16:00
At the Cisco stand: Date: Jan. 31 – 10:45-11:00
At the VCE stand: Dates: Jan. 29 at 11:30 and Jan. 30 at 10:30
At the LSI stand: Title: EMC, LSI and Cisco combine to deliver a best-of-breed solution for server flash caching
Jan. 29: Josh Mello – VSPEX (12:00-12:30)

Hope to see you on the show floor!
The “Switch Target” Part I – Why Me?
By Peter M. Tran, Senior Director, RSA Advanced Cyber Defense Practice

Conventional computer network defense (CND) concepts over the past 10+ years introduced practices such as adversary "beach head, pivot point, lateral traversal, command/control" analysis for passive cyber defense. If I don't see it on my network, then I must not be a target and/or my business is of no interest to advanced threat actors, right? The correct answer lies in asking yourself, as a business, "why me?"

I like to use basic cops-and-robbers analysis when looking at the changing landscape of advanced threats and how to help enterprises develop advanced approaches to cyber defense. What's the best way in and out of your primary target? Is it a direct path, or do multi-dimensional vectors exist? Let's walk through one scenario using a simple bank heist theme.

First, pick a good location (high value target): a bank on the edge of town with easy access to a highway. You've done your homework and know your target's monitoring and defense systems, mean time to detect, alerting, and when, and from what direction, the cops (incident responders) will be coming. You go in heavily armed (malware, diversionary DDoS, sacrificial attack vectors), get as much cash as you can and get back to your vehicle. Upon leaving on the road that leads to the freeway, you drop tire spikes (malware drop zones) to create a cushion of time for your getaway. As you get on the freeway, you drop more tire spikes, find an exit not too far from where you did the robbery and switch to a different vehicle (the Switch Target).

Let's stop here. Now ask yourself: in a cyber context, can my business network be used as a switch target, a pathway out for the attackers? If so, can it also be used as a Switch Target on the pathway in to a primary target of interest?
In Part II, I'll address Cyber Switch Targeting in more detail, along with the use of advanced analytics to enumerate what an attack infrastructure may look like.

Peter Tran leads RSA's worldwide Advanced Cyber Defense Practice and directs overall professional services for Global Incident Response/Discovery (IR/D), breach readiness/management, remediation, cyber intelligence/exploitation analysis, Advanced Security Operations Center (ASOC) design/implementation and proactive computer network defense.
My Favorite Posts
I've been taking a break over the last month or so from my typically frenetic blogging pace. For me, it's a good time for reflection, contemplation and recalibration -- all positive. Part of looking forward involves looking back over your...
VMware KB Digest – New Articles Published for Week Ending 1/26/13
The Big Data Storymap
In "The Power Of Visual Thinking?" I shared a great visual tool that had been created by EMC Global Services -- a somewhat humorous storymap showing how IT organizations transformed from silos to service providers. That PDF has proven to...
Perspective Is Everything, Or Is It?
Did you know the movie E.T. was nearly entirely filmed from the eye-level of the children? I didn't, or at least I don't recall knowing this; it's been years since I watched the movie in its entirety. But what I do remember is being drawn into the story in a way no other film had done before, and I do remember my parents leaving the movie feeling similarly. I just never considered why until this past week.

As it turns out, it has a lot to do with perspective. By using the filming technique he did (which also meant that adults were seen primarily from the waist down throughout the movie), Spielberg was able to create a very different experience for moviegoers. For adults, it ultimately meant seeing the story not just through the eyes of a child but as a child. Bingo!

Makes me wonder if one of the reasons we have such difficulty keeping New Year's resolutions is because we often don't have the right perspective. Would E.T. have been the same film (enjoying the same level of success) had it been told from the eye-level of an adult? Probably not. So, in addition to balancing good and bad, focusing on the process not just the end goal, replacing bad habits with better ones and so on, maybe we also need to make sure we have the right perspectives? Do we really have the perspective of someone who has successfully completed a 15K or dropped 20 pounds? Does your backup team really see backup – the good, the bad and the ugly – from the viewpoint of application or business owners? Again, probably not.

However, if you're like many folks, you may just be stuck. I'm not sure how much help I can be on the running or weight reduction front, but I do know we can help with the backup perspective. In this short video, fellow TBW blogger and EMC BRS CTO Stephen Manley explains how backup teams can free themselves — and their businesses — from the grind that's become daily backup and gain that all-important broader business perspective.
It's actually Part III of Stephen's Accelerating Transformation series, but not to worry. I'll circle back next post and explain why donning a new perspective doesn't have to mean losing control. Be sure to check out the video and drop us a note if you've got a question.
Links from 2013-01-25 through 2013-01-28
Links from 2013-01-25 through 2013-01-28:
Introducing Bill Jacobs
Hello to my high-tech contacts, Twitter followers, and colleagues in the Big Data world. For 18 months, I've avoided the blogosphere. But alas, one of the New Year's resolutions I intend to keep will change that: I'll be blogging here about Greenplum products, and our customers' uses of Big Data analytics in their businesses and agencies.

Who am I? For those of you that don't know my background, I've pursued Big Data and analytics marketing at three companies now, with a decidedly "nerdy marketer" bent. I've been with Greenplum for a year and a half, and continue to have fun. I'll skip the reasons I chose Greenplum, as those will become obvious in upcoming posts.

Early in my career, I helped launch new UNIX operating systems at HP and Apple. In middleware, I helped evolve a product line that we successfully sold to Microsoft, and helped to update Sybase's middleware offerings, creating a net new BPM / EAI platform and a few other bits and pieces. In search of new markets for tech, I've pursued a couple of startup opportunities in RFID and smart cards — quite fun spaces to work in, but ahead of the revenue curve in my experience.

Working with Big Data analytics products has been my pursuit since 2005, first with Sybase IQ and then with IBM Netezza. Thanks to my colleagues in both companies for allowing an "integration guy" — an analytics and BI novice — to learn a few things at their feet. Through these roles, I've experienced a number of cycles of technology adoption – changes and shifts that just may be repeating themselves as Big Data, data science, cloud deployments, etc., dominate our industry. With Greenplum, I contribute to product direction, deliver new products to market, train our sales force and analyze our worthy competition.
Most of all, I get to interact with large clients who are experiencing the challenges of Big Data analytics on a day-to-day basis: the technical aspects, determining the business uses of analytics, and regrettably, sometimes grappling with the politics of Big Data in large organizations. I'll write more about this in future blog posts.

My blog here will provide ongoing commentary about Greenplum's products and how Greenplum users are applying them. If you're interested in Big Data, analytics, or how Big Data seems to be repeating past themes in the never-ending cycle of technology adoption, perhaps you'll find something interesting here.

Bill Jacobs
How Starbucks is Revolutionizing Mobile (micro) Payments
For those of you that have not been living under a rock for the last couple of years, you may have patronized a Starbucks and seen a customer scan their phone at the checkout and then somehow magically get coffee without ever paying. What is this wizardry? I mean, you've seen people pay with pretty-looking cards that have a Starbucks logo, but that's clearly a gift card, not some magical electronic thingie.

This magical pay-by-phone is not only a convenience for regular customers; it's Starbucks's way of pushing mobile payments forward while device manufacturers do their best Benny Hill impression trying to implement mobile payments. Enter the Starbucks app (and Passbook integration). Sure, techies love it because you can PAY WITH YOUR FLIPPING PHONE (MAGIC)! People with smartphones love it because they don't have to carry around their Starbucks card anymore (lighten the wallet!). But what about Starbucks? Do they love it? Is this just some pilot that is draining millions of dollars from Starbucks and will ultimately be scrapped?

Before I continue: I am not involved with Starbucks, so everything that follows is a complete outsider's view, but everyone should be paying attention to what they are doing over there at this recognizable Seattle coffee chain. Starbucks LOVES this idea. Let's take a look at how the process works:
Let's take a look at those last two items. Now, as far as my limited accounting experience can tell me, there is no public reporting of the current liability balances associated with their gift cards; it's in there somewhere. The takeaways here are:
Here's the illustration. If I only used my credit card to buy coffee, they would charge $4-5/visit and the payment system would take its cut (usually both a transaction fee plus a percentage of the transaction). Now, the payment system is insanely complex. One of the best assets retailers have is their ability to negotiate a good deal with their processors (and exploit every last penny from it). Some perks include discounts for the number of transactions, or tiered discounts depending on transaction size.

Let's use some assumptions. Say that Starbucks pays $0.15/transaction plus 1.49% for each one that goes through. If I visit Starbucks 5 times per week (M-Th, plus Sunday AM) and spend $5 each time, Starbucks earns $1,300/year from my patronage and pays $58.37 in fees if I use my credit card. Now, say that I install the app on my phone and set it to recharge $50 every time my balance gets low. I have now reduced their transaction volume with me to 10% of the original (26 recharges vs 260 transactions). This changes the fees to $23.27, a 60% reduction in my cost burden to the company, with the added benefit that they get to use my cash for anything they want while I work it off over a period of time.

Now let's talk about how this works on a macro scale. According to the report I linked to above, Starbucks had around 148 million transactions redeemed on gift cards. Let's run those numbers through my assumptions. Instead of $22 million in per-transaction fees, they would have spent around $2 million (a savings of $20 million can't be overlooked no matter who you are) in that quarter alone. WOW! Think about how that would work on a global scale! If I were Starbucks, I would offer incentives to raise that recharge amount so that I have even fewer transactions and more cash to play with. If you look at it from a PCI DSS perspective, it's not unreasonable that games like this could even lower their merchant level from Level 1 to Level 2.
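The fee arithmetic above can be checked with a few lines of Python. The $0.15-per-transaction plus 1.49% rate is the same assumption used in the text, not Starbucks's actual processor contract:

```python
def card_fees(transactions, total_spend, per_txn=0.15, pct=0.0149):
    """Processor cost: a flat fee per transaction plus a percentage of volume."""
    return transactions * per_txn + total_spend * pct

visits_per_year = 5 * 52              # 260 visits at $5 each
annual_spend = visits_per_year * 5.0  # $1,300/year of coffee

# Paying by credit card on every visit: 260 fee-bearing transactions.
direct = card_fees(visits_per_year, annual_spend)

# Paying from a stored-value card reloaded $50 at a time: 26 transactions.
recharges = annual_spend / 50
prepaid = card_fees(recharges, annual_spend)

print(round(direct, 2))               # 58.37
print(round(prepaid, 2))              # 23.27
print(round(1 - prepaid / direct, 2)) # 0.6  (about a 60% fee reduction)
```

The percentage component is the same either way; the savings come entirely from collapsing 260 flat per-transaction fees into 26.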
Does this work in every business? No. If the average ticket size isn’t somewhere in the $5-$10 range, things might break down. But think of the number of businesses that operate in a B2C market that could benefit from this! If this describes your business, you would be a fool to not look very closely at what those coffee roasters from Seattle are doing and figure out how to do something similar in your own business.
Cloud vs. Evil
By Howard Rubin, Product Marketing Manager, Backup and Recovery Systems

My blog last month, entitled Cloud Control to Major Tom, talked about the top five reasons enterprises don't leverage cloud technology, and focused on one specific reason: loss of control and visibility. This week I'd like to focus on another bullet on that top-5 list: the belief that cloud computing needs to mature more.

In a publicly available report by Enterprise Strategy Group, 29% of the 256 respondents in their study noted this as their reason for not adopting a cloud strategy. So exactly what does "mature" mean in this use case? Are these IT departments waiting for some other IT division or data center location to be the guinea pig? Perhaps "mature" means they're waiting for the next generation of software and hardware technology that improves upon the imperfections of the current version. Or maybe they're just waiting for the cloud providers and market analysts to report double- and triple-digit growth numbers. Why make trillions when we could make… billions? But I digress…

The reality is that enterprises are leveraging cloud technology today to help alleviate their IT pain points. And those pain points are convincing them to spend to the tune of $110.8 billion on cloud services in 2012, according to a recent Gartner report. (Dr. Evil might be on to something.) At a high level, let's take a look at another (top-5) list of reasons why enterprises are looking to leverage cloud service providers for some existing IT processes. The list includes:
So what constitutes market maturity for you? Why wait for trillions when you can solve your pain points today, when the industry is already over $100 billion? Check out EMC's Velocity Service Provider trusted partners, who can help you adopt a cloud strategy. You'll only need one VSPP partner to take that first step – not a billion.
Must-have Competencies for the Cloud in 2013
Following on from my last blog, 'Re-enforcing our doors in 2013': solving all of the issues raised by disruptive innovations isn't going to be possible in a year, but we must take strides toward making some of the changes. The four members of the disruptive family are Cloud Computing, Social Media, Big Data and Mobile. Let's take Cloud Computing this week and examine some competencies organizations must start to build.

Cloud vendor management has been on our list for a long time, but how effective are we at doing it? Ultimately, organizations are responsible for the information that's held by the cloud service provider (CSP). Information security teams must now switch their focus from controls implemented internally to controls implemented by third parties, asking themselves 'how can we ensure that cloud service providers are meeting our trust levels?' Are they attuned to our particular threats?

The conventional controls assurance model is not sustainable in the cloud. Client organizations cannot visit every cloud service provider to examine their security controls. Today, CSPs provide assurance by using questionnaires. This is a wholly inefficient process, as all organizations ask the same questions and it turns out to be a box-ticking exercise. There are also no standards for these, apart from guidelines issued by the Cloud Security Alliance. A better approach would be third-party assessment or certification, like the AICPA's SOC 2 Report on Controls or the imminent ISO 27017 standard for security in cloud computing. In the meantime, organizations must find a happy medium to effectively measure controls and detect failures. An effective GRC implementation provides some of the basic building blocks, but while these mature, companies will have to find their own way to measure assurance. Automated and transparent controls, together with continuous monitoring, will be an important part of the solution.
Look out for my next blog: Must-have Competencies for Social Media in 2013.
SQL Injection and Distributed Security
By Sandra Carielli, Senior Product Manager, Access and Data Protection

It amazes me that SQL injection attacks are still prevalent, much less that they remain one of the most popular forms of attack. You don't read as much these days about organizations being compromised due to hardcoded passwords, bad cryptography, or buffer overflows; organizations have mostly managed to control those issues via overlying technologies and good coding practices. But SQLi attacks continue to rise. The fact that they are still so popular (and so effective) surprises me because … well, we've been talking about SQL injection for so long that I thought SQLi protections would be institutionalized by now. Ten years ago, I was teaching application security classes to rooms of engineers and explaining the dangers of SQL injection. "Data validation, data validation, data validation" was one of our mantras. But we're still having trouble doing that 100% of the time.

A SQLi attack occurs when an attacker is able to embed SQL commands into a field on a web form; perhaps instead of entering a username at a login page, they enter a carefully constructed SQL statement. If the application doesn't perform proper data validation and instead just sends this string along to the application database, the attacker is able to execute that command. What sort of commands does an attacker try to execute? A common one is to dump the database, getting them access to most of the data stored in it: usernames, e-mail addresses … and (probably hashed and salted) passwords. Many people have speculated about the high-profile password thefts over the last couple of years and suggested that SQLi may have played a role (in many cases we don't know for certain, but there are some cases where the attacker has shared their method). What if after executing a SQL injection attack and dumping the database, an attacker discovered that there wasn't very much of value there?
Or that they needed to find a way to compromise a completely different server (one that does not execute SQL commands from an application) in order to reconstruct the information they had stolen?

The idea behind distributed security is to increase the cost to an attacker by splitting information among multiple locations and forcing the attacker to compromise each location in order to reconstruct that information. If the data in the SQL database is encrypted and the encryption key is sitting somewhere else, that's a form of distributed security. So is RSA's recently released Distributed Credential Protection, in which authentication credentials are split between multiple servers. It's like taking a secret formula written down on a piece of paper, ripping it into multiple pieces, and storing each piece in a different safe. Even if the guard in charge of the first safe forgets to lock it one night, an attacker doesn't steal anything useful. If SQLi attacks are still happening after all these years, perhaps distributed security can reduce the attractiveness of that attack vector.

Sandy Carielli leads product management for Distributed Credential Protection and the BSAFE portfolio at RSA, The Security Division of EMC. Ms. Carielli has over ten years of experience in the security industry, including engineering (at BBN Technologies), consulting (at @stake) and product management. Ms. Carielli holds a Sc.B. in Mathematics from Brown University and an MBA from MIT.
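Both ideas in the post above can be sketched in a few lines of Python: a query built by string concatenation that an injected `OR` clause subverts, a parameterized query that neutralizes the same input, and a toy secret-split that illustrates the "pieces in different safes" idea. To be clear, the XOR split below is only an illustration of split storage in general; Distributed Credential Protection itself uses more sophisticated cryptography, and all table names and data here are made up for the example.

```python
import sqlite3
import secrets

# --- SQL injection: string concatenation vs. a parameterized query ---
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '5f4dcc3b...')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: the input is concatenated into the SQL string, so the OR
# clause executes as code and the query matches every row in the table.
unsafe = "SELECT * FROM users WHERE username = '%s'" % attacker_input
assert len(conn.execute(unsafe).fetchall()) == 1   # attacker got the row

# Safe: a bound parameter is treated as a literal value, never as SQL,
# and no user is literally named "nobody' OR '1'='1".
safe = conn.execute(
    "SELECT * FROM users WHERE username = ?", (attacker_input,)
).fetchall()
assert len(safe) == 0                              # injection neutralized

# --- Distributed security: split a secret across locations with XOR ---
def split_secret(data: bytes, shares: int) -> list[bytes]:
    """Return `shares` pieces; ALL pieces are needed to reconstruct."""
    parts = [secrets.token_bytes(len(data)) for _ in range(shares - 1)]
    last = data
    for p in parts:                 # fold each random pad into the last share
        last = bytes(a ^ b for a, b in zip(last, p))
    return parts + [last]

def reconstruct(parts: list[bytes]) -> bytes:
    out = parts[0]
    for p in parts[1:]:
        out = bytes(a ^ b for a, b in zip(out, p))
    return out

pieces = split_secret(b"secret formula", 3)        # three separate "safes"
assert reconstruct(pieces) == b"secret formula"    # all three recover it
```

Any subset of fewer than all the shares is statistically indistinguishable from random bytes, so cracking one "safe" yields nothing useful on its own, which is exactly the cost increase the post describes.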
2012: A Good Year In Storage For EMC
There’s simply no arguing with the numbers – EMC continues to do very well indeed in the broader storage market and continues to gain share in almost all sub-segments. While we’re not ones to rest on our laurels, I thought...