Mark Bloom

Posted by Mark Bloom

Blog

Disrupting the Economics of Machine Data Analytics

The power of modern applications is their ability to leverage the coming together of mobile, social, information and cloud to drive new and disruptive experiences: to enable companies to be more agile, to accelerate the pace at which they roll out new code, and to adopt DevSecOps methodologies where traditional siloed walls between the teams are disappearing. But these modern applications are highly complex, with new development and testing processes, new architectures, new tools (e.g., containers, microservices and configuration management tools), SLA requirements, security-in-the-cloud concerns, and an explosion of data sources coming from these new architectures as well as IoT.

In this journey to the cloud with our 1,500+ customers, we have learned a few things about their challenges. All of this complexity and volume of data is creating unprecedented challenges to enabling ubiquitous user access to all this machine data to drive continuous intelligence across operational and security use cases. In this new world of modern applications and cloud infrastructures, they recognize that not all data is created equal. Data differs in importance, life expectancy, the access performance needed, and the types of analytics that need to be run against it. Think IT operations data (high value, short life span, frequent and high-performance access needs) vs. regulatory compliance data (long-term storage, periodic searches, especially at audit times, where slower performance may be acceptable). Data ingest in certain verticals, such as retail and travel, fluctuates widely, and provisioning at maximum capacity loads – with idle capacity the majority of the year – is unacceptable in this day and age.

So if we step back for a moment and look at the industry as a whole, what is hindering a company's ability to unleash their full data potential? The root of the problem comes from two primary areas:

1. The more data we have, the higher the cost.
2. The pricing models of current solutions are based on volume of data ingested and not optimized for the varying use cases we are seeing – it is a "one size fits all" kind of approach.

Unfortunately, organizations are often forced to make a trade-off because of the high cost of current pricing models, something we refer to as the data tax – the cost of moving data into your data analytics solution. They have to decide: "What data do I send to my data analytics service?" as well as "Which users do I enable with access?" As organizations are building out new digital initiatives, or migrating workloads to the cloud, making these kinds of tradeoffs will not lead to ultimate success.

What is needed is a model that will deliver continuous intelligence across operational and security use cases. One that leverages ALL kinds of data, without compromise. We believe there is a better option – one which leverages our cloud-native machine data analytics platform, shifting from a volume-based approach – fixed, rigid, static – to a value-based pricing model – flexible and dynamic – aligned with the dynamic nature of the modern apps that our customers are building. One that moves us to a place where the democratization of machine data is realized!

Introducing Sumo Logic Cloud Flex

As this launch was being conceived, there were four primary goals we set out to accomplish:

- Alignment: Alignment between how we price our service and the value customers receive from it.
- Flexibility: Maximum flexibility in the data usage and consumption controls that best align to the various use cases.
- Universal Access: Universal access to machine data analytics for all users, not just a select few.
- Full Transparency: Real-time dashboards on how our service is being used, the kinds of searches people are running, and the performance of the system.

And there were four problem areas we were trying to address:

- Data Segmentation: Different use cases require different retention durations.
- Data Discrimination: Not all data sets require the same performance and analytics capabilities; it is not economical to store and analyze low-value data sets, or to store data sets for long periods of time, especially as it relates to regulatory compliance mandates.
- Data Ubiquity: It is not economical for all users to access machine data analytics.
- Data Dynamics: Support seasonal business cycles and align revenue with opex.

So with this Cloud Flex launch, Sumo Logic introduces the following product capabilities to address these four pain points:

- Variable Data Retention
- Analytics Profile
- Unlimited Users
- Seasonal Pricing

If increasing usage flexibility in your data analytics platform is of interest, please reach out to us. If you would like more information on Cloud Flex and democratizing machine data analytics, please read our press release.

June 6, 2017

Blog

The Importance of Logs

Blog

What does it take to implement & maintain a DevSecOps approach in the Cloud?

Operational and Security Tips, Tricks and Best Practices

In Gartner's Top 10 Strategic Technology Trends for 2016: Adaptive Security Architecture, they argued that "Security must be more tightly integrated into the DevOps process to deliver a DevSecOps process that builds in security from the earliest stages of application design." We ultimately need to move to this model if we are going to be successful and continue to reduce the dwell time of cyber criminals who are intent on compromising our applications and data. But how do we get from the old model to the new one? Easier said than done.

To answer this question, I sat down with our CISO and IANS Faculty Member George Gerchow about what it means to implement and maintain a DevSecOps approach in the cloud, and what operational and security best practices organizations should follow to ensure success in their move to the cloud. Below is a transcript of the conversation.

DevSecOps seems like a buzzword that everyone is using these days. What does DevSecOps really mean?

George: It is really about baking security in from day 1. When you're starting to put new workloads in the cloud or have these greenfield opportunities identified, start changing your habits and your behavior to incorporate security from the very beginning. In the past we used to have a hard shell, soft center type approach to security, and in the cloud there is no hard shell, and we don't run as many internal applications anymore. Now we're releasing these things out into the wild, into a hostile environment, so you gotta be secure from day 1. Your developers and engineers – you have to have people who think security first when they're developing code. That is the most important takeaway.

What does it really mean when you say baking security in… or the term shifting left, which I am starting to hear out there?

George: It is about moving security earlier into the conversation, earlier into the software development lifecycle. You need to get developers to do security training. I'm talking about code review, short sprints, understanding what libraries are safe to use, and setting up feature flags that will check code in one piece at a time. The notion of a full release is a thing of the past – individual components are released continually. There also needs to be a QA mindset of testing the code and microservices to break them, and then fix accordingly through your agile DevSecOps methodologies.

Sumo Logic is a cloud-native service running in AWS for over 7 years now – why did you decide to build your service in the cloud? Can you describe a bit about that journey, what was it like, what obstacles did you face, how did you overcome them? And lastly, what did you learn along the way?

George: Our company founders came from HP ArcSight and knew full well the pain of managing the execution environment – the hardware and software provisioning, the large teams needed, the protracted time to roll out new services. The cloud enabled us to be agile, flexible, highly elastic, and to do all of this securely at scale – at a level that was just not possible if we chose an on-prem model. The simplicity and automation capabilities of AWS were hugely attractive. You start setting up load balancers, leveraging tools like Chef to manage machine patching – it gets easier – and then you can start automating things from the very beginning. So I think it's that idea of starting very simple, leveraging the native services that cloud service providers give you, and then looking for the gaps. The challenge initially was that this was a whole new world, and then the bigger challenge became getting people to buy off on the fact that the cloud is more secure. People just weren't there yet.

What does Sumo Logic's footprint look like in AWS?

George: 100PB+ of data analyzed daily, 10K EC2 instances on any given day, 10M keys under management! We have over 1,300 customers and our service is growing by leaps and bounds. At this stage, it is all about logos – you wanna bring people in, and you can't afford to have bad customer service because this is a subscription-based model. When you think about the scale that we have, it's also the scale at which we have to protect our data. The challenge of quadrupling that number every year is extremely difficult, so you have to take a long-term view when it comes to scalability of security. With 10,000+ instances it's a very elastic type of environment, and auditors really struggle with this. One of the things that I'm the most proud of… if you look at hundreds of petabytes processed and analyzed daily, that's insane… that's the value of being in the cloud.

10 million keys under management… that's huge… really?

George: It's a very unique way that we do encryption. It makes our customers very comfortable with the dual control model… that they have some ownership over the keys and then we have the capability to vault the keys for them. We do rotate the keys every 24 hours. The customers end up with 730 unique key rings on their keys at the end of the year. It's a very slick, manageable program. We do put it into a vault and that vault is encrypted with a key encryption key (KEK).

So what tools and technologies are you using in AWS?

George: Elastic load balancers are at the heart of what we do… we set up those load balancers to make sure that nothing that's threatening gets through… so that's our first layer of defense, and then we use security groups and firewalls to route traffic to the right places and make sure only the right users can access it. We use file integrity monitoring – we happen to use host sec for that across every host we manage – and that gives us extreme visibility. We also leverage IDS and Snort and those signatures across those boxes to detect any kind of signature-based attack. Everything we do in the cloud is agentless or on the host. When you're baking security in, you have it on ALL of your systems, spun up automatically via scripts. We also have a great partnership with CrowdStrike, where threat intelligence is baked into our platform to identify malicious indicators of compromise and match them automatically to our customers' log data – very powerful.

So how are you leveraging Sumo to secure your own service? Can you share some of the tips, tricks and best practices you have gleaned over the years?

George: Leveraging apps like CloudTrail, we are now able to see when an event takes place, who the person behind the event is, and start looking for the impact of the event. I'm constantly looking for authorization-type events (looking at Sumo dashboards). When it comes to compliance, I have to gather evidence of who is in the security groups. Sumo is definitely at the center of everything that we do. We have some applications built for PCI and some other things as well, like VPC Flow Logs, and it gives us extreme visibility. We have dashboards that we have built internally to manage the logs and data sources. It is extremely valuable once you start correlating patterns of behavior and unique forms of attack patterns across the environment. You need to be able to identify how the change you just made impacts the network traffic and latency in the environment, and pull in things like AWS Inspector… How did that change you made have an impact on my compliance and security posture? You want to have the visibility, but then measure the level of impact when someone does make a change, and even more proactively I want to have the visibility when something new is added to the environment or when something is deleted from the environment. Natively in AWS, it is hard to track these things.
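To make George's CloudTrail point concrete, here is a minimal search sketch for the kind of authorization-type events he describes watching for. The source category (aws/cloudtrail) is an assumption for illustration; the JSON field names are standard CloudTrail fields:

_sourceCategory=aws/cloudtrail "ConsoleLogin"
| json field=_raw "userIdentity.userName" as user nodrop
| json field=_raw "responseElements.ConsoleLogin" as login_result nodrop
| where login_result = "Failure"
| timeslice 1h
| count by _timeslice, user
| sort by _timeslice asc

A query along these lines surfaces failed console logins by user over time, which is one example of the evidence gathering and dashboarding George mentions.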
How does the Sumo Logic technology stack you talked about earlier help you with compliance?

George: Being able to do evidence gathering and prove that you're protecting data is difficult. We're protecting cardholder data, healthcare data and a host of other PII from the customers we serve across dozens of industries. We pursue our own security attestations like PCI, CSA STAR, ISO 27001, SOC 2 Type 2, and more. We do not live vicariously through the security attestations of AWS, like too many organizations do. Also, encryption across the board. All of these controls and attestations give people a level of confidence that we are doing the right things to protect their data and that there is actual evidence gathering going on. Specifically with respect to PCI, we leverage the Sumo Logic PCI apps for evidence gathering – non-stop – across CloudTrail, Windows and Linux servers. We built out those apps for internal use, but released them to the public at RSA.

There are a lot of threat actors out there, from cyber criminals, corporate spies, hacktivists and nation states. How do you see the threat landscape changing with respect to the cloud? Is the risk greater given the massive scale of the attack surface? If someone hacked into an account, could they cause more damage by pointing their attack at Amazon, from within the service, possibly affecting millions of customers?

George: It all starts with password hygiene. People sacrifice security for convenience. It's a great time for us to start leveraging single sign-on and multi-factor authentication and all these different things that need to be involved, but at a minimum end users should use heavily encrypted passwords… they should not bring their personal application passwords into the business world… If you start using basic password hygiene from day 1, you're gonna follow the best habits in the business world. The people who should be the most responsible are not… I look at admins and developers in this way… all of a sudden you have a developer put their full-blown credentials into a Slack channel.

So when you look out toward the future – the DevSecOps movement, the phenomenal growth of cloud providers like AWS and Azure, machine learning and artificial intelligence, the rise of security as code – what are your thoughts, where do you see things going, and how should companies respond?

George: First off, for the organizations that aren't moving out to the cloud, at one point or another, you're gonna find yourself irrelevant or out of business. Secondly, you're going to find that the cloud is very secure. You can do a lot using cloud-based security if you bake security in from day one and work with your developers… if you work with your team… you can be very secure. The future will hold a lot of cloud-based attacks. User behavior analytics… I can no longer go through this world of security with hard-coded rules and certain things that I'm constantly looking for, with all these false positives. I have to be able to leverage machine learning algorithms to consume and crunch through that data. The world is getting more cloudy, more workloads are moving into the cloud, teams will be coming together… security will be getting baked further into the process.

How would you summarize everything?

George: "You're developing things – you wanna make sure you have the right hygiene and security built into it, and you have visibility into that, and that allows you to scale as things get more complex. Where things actually become more complex is when you start adding more humans into it and you have less trust, but if you have that scalability and visibility from day one and a simplistic approach, it's going to do a lot of good for you. Visibility allows you to make quick decisions and it allows you to automate the right things, and ultimately you need to have visibility because it allows you to have the evidence that you need to be compliant, to help people feel comfortable that you're protecting your data in the right way."

George Gerchow can be reached at https://www.linkedin.com/in/georgegerchow or @georgegerchow

Blog

The Great Big Wall and Security Analytics

Not long ago I was visiting the CISO of a large agriculture biotechnology company in the Midwest – we'll call him Ron – and he said to me, "Mark, these cyber terrorists are everywhere, trying to hack into our systems from Russia and China, trying to steal our intellectual property. We have the biggest and the brightest people and the most advanced systems working on it, but they are still getting through. We are really challenged in our ability to identify and resolve these cyber threats in a timely manner. Can you help us?"

The business issues that CISOs and their security teams face are significant. Customers are now making different decisions based on the trust they have in the companies they do business with. So implementing the right levels of controls, and increasing team efficiency to rapidly identify and resolve security incidents, becomes of paramount importance.

But despite this big wall that Ron has built, and the SIEM technology they are currently using, threats are still permeating the infrastructure, trying to compromise their applications and data. With over 35 security technologies in play, trying to get holistic visibility was a challenge, and with a small team, managing their SIEM was onerous. Additionally, the hardware and refresh cycles over the years, as their business has grown, have been challenged by flat budget allocations. "Do more with less" was frequently what they heard back from the CIO.

Like any company that wants to be relevant in this modern age, they are moving workloads to the cloud, adopting DevOps methodologies to increase the speed of application delivery, and creating new and disruptive experiences for their customers to maintain their competitive edge. But as workloads were moved to the cloud – they chose AWS – the way things were done in the past was no longer going to work. The approach to security needed to change. And it was questionable whether the SIEM solution they were using was even going to run in the cloud and support native AWS services, at scale.

SIEMs are technologies that were architected over 15 years ago, and they were really designed to solve a different kind of problem – traditional on-prem, perimeter-based, Mode 1 type security applications, going after known security threats. But as organizations are starting to move to the cloud, accelerating the pace at which they roll out new code, and adopting DevOps methodologies, they need something different. Something that aligns to the Mode 2 digital initiatives of modern applications. Something that is cloud native, provides elasticity on demand, and delivers rapid time to value – not constrained by fixed rule sets going after known threats but instead leveraging machine learning algorithms to uncover anomalies, deviations and unknown threats in the environment. And lastly, something that integrates threat intelligence out of the box to increase the velocity and accuracy of threat detection – so you can get a handle on the threats coming at your environment, trying to compromise your applications and data.

Is that great big wall working for you? Likely not. To learn more about Sumo Logic's Security Analytics capabilities, please check out our press release, blog or landing page. Mark Bloom can be reached at https://www.linkedin.com/in/markbloom or @bloom_mark

Blog

OneLogin Integrates with Sumo Logic for Enhanced Visibility and Threat Detection

OneLogin and Sumo Logic are thrilled to announce our new partnership and technology integration (app coming May 2017) between the two companies. We're alike in many ways: we're both cloud-first, our customers include both cloud natives and cloud migrators, and we are laser-focused on helping customers implement the best security with the least amount of effort. Today's integration is a big step forward in making effortless security a reality.

What does this integration do?

OneLogin's identity and access management solution allows for the easy enforcement of login policies across all their laptops, both Macs and Windows, SaaS applications, and SAML-enabled desktop applications. This new partnership takes things a step further by making it possible to stream application authentication and access events into Sumo Logic. This includes over 200 application-related events, including:

- Who's logged into which laptops — including stolen laptops
- Who's accessed which applications — e.g., a salesperson accessing a finance app
- Who's unsuccessfully logged in — indicating a potential attack in progress
- Who's recently changed their password — another potential indicator of an attack
- Which users have lost their multi-factor authentication device — indicating a potential security weakness
- Which users have been suspended — to confirm that a compromised account is inactive
- User provision and de-provision activity — to track that users are removed from systems after leaving the company
- And finally, which applications are the most popular and which might be underutilized, indicating potential areas of budget waste

These capabilities are critical for SecOps teams that need to centralize and correlate machine data across all applications. This, in turn, facilitates early detection of targeted attacks and data breaches, extends audit trails to device and application access, and provides a wider range of user activity monitoring. Because OneLogin has over 4,000 applications in its app catalog, and automatically discovers new applications and adds them to the catalog, we can help you extend visibility across a wide range of unsanctioned Shadow IT apps.

The integration uses streaming, not polling. This means that events flow from OneLogin into Sumo as soon as they are generated, not after a polling interval. This lets you respond more quickly to attacks in progress.

How does the integration work?

Since both OneLogin and Sumo Logic are cloud-based, integrating the two is a simple one-screen setup. Once integration is complete, you can use Sumo Logic to query OneLogin events, as well as view the following charts:

- Visitors heatmap by metro area. Suppose you don't have any known users in Alaska — that anomaly is quite clear here, and you can investigate further.
- Logins by country. Suppose you don't have any known users in China; 80 potentially malicious logins are evident here.
- Failed logins over time. If this number spikes, it could indicate a hacking attempt.
- Top users by events. If one user has many events, it could indicate a compromised account that should be deactivated in OneLogin.
- Events by app. If an app is utilized more than expected, it could indicate anomalous activity, such as large amounts of data downloads by an employee preparing to leave the company.

All this visibility helps customers better understand how security threats could have started within their company.
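To illustrate how you might query these events once they are flowing, here is a minimal sketch of a search behind a panel like "Failed logins over time." The source category (onelogin), the JSON field names, and the event type value are assumptions for illustration, not taken from the OneLogin documentation:

_sourceCategory=onelogin
| json field=_raw "event_type_id" as event_type nodrop
| json field=_raw "user_name" as user nodrop
| where event_type = "6"
| timeslice 1h
| count by _timeslice
| sort by _timeslice asc

The same pattern (extract a field, filter, timeslice, aggregate) underpins the other panels described above.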
This visibility is especially helpful when it comes to phishing attacks, which, according to a recent report by Gartner, are "the most common targeted method of cyberattacks, and even typical, consumer-level phishing attacks can have a significant impact on security."

Summing up: Better Threat Detection and Response

Sumo Logic's vice president of business development, Randy Streu, sums it up well: "Combining OneLogin's critical access and user behavior data with Sumo Logic's advanced real-time security analytics solution provides unparalleled visibility and control for both Sumo Logic and OneLogin customers." This deep and wide visibility into laptop and application access helps SecOps teams uncover weak points within their security infrastructures so that they know exactly how to best secure data across users, applications, and devices.

Get started for free

Even better, OneLogin and Sumo Logic are each offering free versions of their respective products to each other's customers to help you get started. The OneLogin for Sumo Logic Plan includes free single sign-on and directory integration, providing customers with secure access to Sumo Logic through SAML SSO and multi-factor authentication while eliminating the need for passwords. Deep visibility. Incredibly simple integration. Free editions. We're very pleased to offer all this to our customers. Click here to learn more.

*The Sumo Logic App for OneLogin, for out-of-the-box visualizations and dashboarding, will be available in May 2017.*

This blog was written by John Offenhartz, who is the Lead Product Owner of all of OneLogin's integration and development programs. John's previous experience covers over twenty years in cloud-based development and product management with such companies as Microsoft, Netscape, Oracle and SAP. John can be reached at https://www.linkedin.com/in/johnoffenhartz

February 17, 2017

Blog

Sumo Logic Delivers Industry's First Multi-Tenant SaaS Security Analytics Solution with Integrated Threat Intelligence

Integrated Threat Intelligence Providing Visibility into Events that Matter to You!

You've already invested a great deal in your security infrastructure to prevent, detect, and respond to cybersecurity attacks. Yet you may feel as if you're still constantly putting out fires and are still uncertain about your current cybersecurity posture. You're looking for ways to be more proactive, more effective, and more strategic about your defenses, without having to "rip and replace" all your existing defense infrastructure. You need the right cybersecurity intelligence, delivered at the right time, in the right way, to help you stop breaches.

That is exactly what Sumo Logic's integrated threat intelligence app delivers. Powered by CrowdStrike, Sumo's threat intelligence offering addresses a number of requests we were hearing from customers:

- Help me increase the velocity & accuracy of threat detection.
- Enable me to correlate Sumo Logic log data with threat intelligence data to identify and visualize malicious IP addresses, domain names, email addresses, URLs and MD5 hashes.
- Alert me when there is some penetration or event that maps to a known indicator of compromise (IOC), and tell me where else these IOCs exist in my infrastructure.
- And above all, make this simple, low friction, and integrated into your platform.

And listen we did. Threat intelligence is offered as part of Sumo's Enterprise and Professional Editions, at no extra cost to the customer.

Threat Intel Dashboard

- Supercharge your Threat Defenses: Consume threat intelligence directly into your enterprise systems in real time to increase the velocity & accuracy of threat detection.
- Be Informed, Not Overwhelmed: Real-time visualizations of IOCs in your environment, with searchable queries via an intuitive web interface.
- Achieve Proactive Security: Know which adversaries may be targeting your assets and organization, thanks to strategic, operational and technical reporting and alerts.

We chose to partner with CrowdStrike because they are a leader in cloud-delivered next-generation endpoint protection and adversary analysis. CrowdStrike's Falcon Intelligence offers security professionals an in-depth and historical understanding of adversaries, their campaigns, and their motivations. CrowdStrike Falcon Intelligence reports provide real-time adversary analysis for effective defense and cybersecurity operations.

To learn more about Sumo Logic's Integrated Threat Intelligence Solution, please go to http://www.sumologic.com/application/integrated-threat-intelligence.
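As an illustration of the correlation described above, here is a minimal search sketch that matches CloudTrail source IPs against the integrated threat feed. The source category is an assumption, and the lookup path and returned field names are assumptions to be checked against the threat intelligence documentation:

_sourceCategory=aws/cloudtrail
| json field=_raw "sourceIPAddress" as src_ip nodrop
| lookup type, actor, raw, threatlevel from sumo://threat/cs on threat=src_ip
| where !isNull(type)
| count by src_ip, actor, threatlevel
| sort by _count

Rows that survive the lookup represent log entries whose source IP matches a known IOC, which is exactly the "tell me where else these IOCs exist" request above.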

AWS

February 6, 2017

Blog

Using Sumo Logic and Trend Micro Deep Security SNS for Event Management

As a principal architect at Trend Micro, focused on AWS, I get all the 'challenging' customer projects. Recently a neat use case has popped up with multiple customers and I found it interesting enough to share (hopefully you readers will agree). The original question came as a result of queries about Deep Security's SIEM output via syslog and how best to integrate it with Sumo Logic. Sumo has a ton of great guidance for getting a local collector installed and syslog piped through, but I was really hoping for something: a little less heavy at install time; a little more encrypted leaving the Deep Security Manager (DSM); and a LOT more centralized.

I'd skimmed an article recently about Sumo's hosted HTTP collector, which made me wonder – could I leverage Deep Security's SNS event forwarding along with Sumo's hosted collector configuration to get events from Deep Security -> SNS -> Sumo? With Deep Security SNS events sending well-formatted JSON, could I get natural language query in Sumo Logic search without defining fields or parsing text? This would be a pretty short post if the answers were no… so let's see how it's done.

Step 1: Create an AWS IAM account. This account will be allowed to submit to the SNS topic (but have no other rights or role assigned in AWS). NOTE: Grab the access and secret keys during creation, as you'll need to provide them to Deep Security (DSM) later. You'll also need the ARN of the user to give to the SNS topic. (I'm going to guess everyone who got past the first paragraph without falling into an acronym coma has seen the IAM console, so I'll omit the usual screenshots.)

Step 2: Create the Sumo Logic hosted HTTP collector. Go to Manage -> Collection, then "Add Collector". Choose a Hosted Collector and pick some descriptive labels. NOTE: Make note of the Category for later. Pick some useful labels again, and make note of the Source Category for the collector (or DataSource if you choose to override the collector value). We'll need that in a little while. Tip: When configuring the DataSource, most defaults are fine except for one. Enable Multiline Processing in the default configuration will split each key:value from the SNS subscription into its own message. We'll want to keep those together for parsing later, so have the DataSource use a boundary expression to detect message beginning and end, using this string (without the quotes) for the expression: (\{)(\}) Then grab the URL provided by the Sumo console for this collector, which we'll plug into the SNS subscription shortly.

Step 3: Create the SNS topic. Give it a name and grab the topic ARN. Personally I like to put some sanity around who can submit to the topic. Hit "Other Topic Actions" then "Edit topic policy", and enter the ARN we captured for the new user above as the only AWS user allowed to publish messages to the topic.

Step 4: Create the subscription for the HTTP collector. Select type HTTPS for the protocol, and enter the endpoint shown by the Sumo console.

Step 5: Go to the search page in the Sumo console, check for events from our new _sourceCategory, and click the URL in the "SubscribeURL" field to confirm the subscription.

Step 6: Configure the Deep Security Manager to send events to the topic. Now that we've got Sumo configured to accept messages from our SNS topic, the last step is to configure the Deep Security Manager to send events to the topic. Log in to your Deep Security console and head to Administration -> System Settings -> Event Forwarding. Check the box for "Publish Events to Amazon Simple Notification Service", enter the access and secret key for the user we created with permission to submit to the topic, then paste in the topic ARN and save.

You'll find quickly that we have a whole ton of data from SNS in each message that we really don't need associated with our Deep Security events. So let's put together a base query that will get us the Deep Security event fields directly accessible from our search box:

_sourceCategory=Deep_Security_Events
| parse "*" as jsonobject
| json field=jsonobject "Message" as DSM_Log
| json auto field=DSM_Log

Much better. Thanks to Sumo Logic's auto JSON parsing, we'll now have access to directly filter any field included in a Deep Security event. Let your event management begin! Ping us if you have any feedback or questions on this blog… and let us know what kind of dashboards your ops & secops teams are using this for!

A big thanks to Saif Chaudhry, Principal Architect at Trend Micro, who wrote this blog.
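As a postscript, here is one example of where to take that base query: a sketch that groups events by type and severity. The EventType and Severity field names are assumptions for illustration rather than field names confirmed in this post, so adjust them to what json auto actually extracts from your events:

_sourceCategory=Deep_Security_Events
| parse "*" as jsonobject
| json field=jsonobject "Message" as DSM_Log
| json field=DSM_Log "EventType", "Severity" as event_type, severity nodrop
| count by event_type, severity
| sort by _count

From there it is a short step to the kind of ops and secops dashboards mentioned above.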

February 6, 2017

Blog

CISO Manifesto: 10 Rules for Vendors

This CISO blog post was contributed by Gary Hayslip, Deputy Director, Chief Information Security Officer (CISO) for the City of San Diego, Calif., and co-author of the book CISO Desk Reference Guide: A Practical Guide for CISOs.

As businesses today focus on the new opportunities cybersecurity programs provide them, CISOs like myself have to learn job roles they were not responsible for five years ago. These challenging roles and their required skill sets, I believe, demonstrate that the position of CISO is maturing. This role not only requires a strong technology background, good management skills, and the ability to mentor and lead teams; it now requires soft skills such as business acumen, risk management, innovative thinking, creating human networks, and building cross-organizational relationships. To be effective in this role, I believe the CISO must be able to define their "Vision" of cybersecurity to their organization. They must be able to explain the business value of that "Vision" and secure leadership support to execute and engage the business in implementing this "Vision."

So how does this relate to the subject of my manifesto? I am glad you asked. The reason I provided some background is that for us CISOs, a large portion of our time is spent working with third-party vendors to fix issues. We need these vendors to help us build our security programs, to implement innovative solutions for new services, or to just help us manage risk across sprawling network infrastructures. The truth of the matter is, organizations are looking to their CISO to help solve the hard technology and risk problems they face; this requires CISOs to look at technologies, workflows, new processes, and collaborative projects with peers to reduce risk and protect their enterprise assets. Of course, this isn't easy to say the least. One of the hardest issues I believe CISOs face is that time and again, when they speak with their technology provider, the vendor truly doesn't understand how the CISO does their job. The vendor doesn't understand how the CISO views technology or really what the CISO is looking for in a solution.

To provide some insight, I decided I would list ten rules that I hope technology providers will take to heart and just possibly make it better for all of us in the cybersecurity community. Now with these rules in mind, let's get started. I will first start with several issues that really turn me off when I speak with a technology provider. I will end with some recommendations to help vendors understand what CISOs are thinking when they look at their technology. So here we go, let's have some fun.

Top Ten Rules for Technology Providers

1. "Don't pitch your competition" – I hate it when a vendor knows I have looked at some of their competitors, and then they spend their time telling me how bad the competition is and how much better they are. Honestly I don't care; I contacted you to see how your technology works and if it fits the issue I am trying to resolve. If you spend all of your time talking down about another vendor, that tells me you are more concerned about your competitor than my requirements. Maybe I called the wrong company for a demonstration.

2. "Don't tell me you solve 100% of ANY problem" – For vendors that like to make grand statements, don't tell me that you do 100% of anything. The old adage applies: "100% of everything is 0% of anything." In today's threat environment, the only thing I believe is 100% is that eventually I will have a breach. The rest is all B.S., so don't waste my time saying you do 100% coverage, or 100% remediation, or 100% capturing of malware traffic. I don't know of a single CISO that believes anyone does 100% of anything, so don't waste your time trying to sell that to me.

Blog

Evident.io: Visualize, Analyze and Report on Security Data From AWS

Evident.io and Sumo Logic team up to provide seamless, integrated visibility into compliance monitoring and risk attribution.

Analyzing and visualizing all your security data in one place can be a tricky undertaking. For any SOC, DevSecOps or DevOps team in heterogeneous environments, the number of tools in place to gain visibility into and monitor compliance can be daunting. The good news is that Evident.io and Sumo Logic have teamed up to bring you a simple-to-implement, yet effective integration that allows you to perform additional analytics and visualization of your Evident Security Platform data in the Sumo Logic analytics platform.

Evident.io ESP is an agentless, cloud-native platform focused on comprehensive, continuous security assessment of the control plane for AWS cloud infrastructure services. ESP can monitor all AWS services available through the API, ensuring their configurations are in line with AWS best practices for security as well as your organization's specific compliance requirements. Sumo Logic is a leading SaaS-native machine data analytics service for log management and time series metrics. Sumo Logic allows you to aggregate, perform statistical analytics, report on trends, visualize and alert on all your operational, performance and security related event log data in one place, from just about any data source.

Why integrate with Sumo Logic?

Both of these platforms are architected for the cloud from the ground up and have a solid DevOps pedigree. This integration allows you to aggregate all the data generated by your AWS cloud infrastructure in the same place as your application-level security and performance event data, which allows you to perform attribution on a number of levels. The Evident.io alert data is rich with configuration state data about your security posture with regard to AWS best practices for security and the CIS Benchmarks for AWS. As customers adopt CI/CD concepts, being able to quickly visualize, alert and remediate, in near real time, any vulnerabilities introduced by misconfiguration is critical. Evident.io and Sumo Logic combined can help you do this better and faster. And, best yet, it is super easy to get started with Evident.io and Sumo Logic in a matter of minutes.

The Sumo Logic App for Evident.io ESP

The Sumo Logic App for Evident.io ESP enables a user to easily and quickly report on some key metrics from their AWS cloud infrastructure, such as:

- Trend analysis of alerts over time (track improving or deteriorating posture over time)
- Time to resolve alerts (for SLAs, by tracking the start and end of an alert in one report)
- Summary of unresolved alerts/risks
- Number of risks found by security signatures over time

Below are some screenshots from the Sumo Logic App for Evident.io ESP.

Figure 1 is an overview of the types and severity of risks, alert status, and how long before a risk is resolved and marked as ended on the Evident.io side. This can be an important metric when managing to SLAs. (Fig. 1)

Figure 2 provides a detailed view of the risks identified by Evident.io ESP within the configured time range for each of the dashboard panels. The panels present views into which Evident.io ESP signatures triggered the risks, and a breakdown of: risks identified by AWS region; risks by AWS account; number of total identified risks; and number of newly identified risks. (Fig. 2)

The chart in Fig. 3 is an interesting one that shows identified risks clearly trending down over 14 days. This indicates that the teams are remediating identified issues in the Evident.io ESP alerts, and you can clearly see an improvement in the security posture of this very large AWS environment that has 1000s of instances. Note: there are almost no high severity risks in this environment. (Fig. 3)
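To give a flavor of the searches behind panels like these, here is a minimal sketch that trends alert counts by risk level over time. The source category and the JSON field names (risk_level, status) are assumptions for illustration, not the app's actual field names:

_sourceCategory=evident_esp
| json field=_raw "risk_level" as risk_level nodrop
| json field=_raw "status" as status nodrop
| where status = "fail"
| timeslice 1d
| count by _timeslice, risk_level
| sort by _timeslice asc

Charted as a stacked time series, a query along these lines yields the kind of 14-day trend shown in Fig. 3.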
Is my data secure?

These two platforms do an awesome job of securing your data both in flight and at rest, with both using TLS 1.2 encryption for in-flight data and customer-specific 256-bit AES encryption keys for at-rest data. You can be confident that this data is securely transported from the Evident Security Platform to Sumo Logic and stored in a secure fashion.

How can I gain access?

This integration relies on the use of AWS SNS (Simple Notification Service) and a Sumo Logic native HTTPS collector. If you are both an Evident.io and Sumo Logic customer, you can enable and start to benefit from the integration using the directions here: http://help.sumologic.com/Special:Search?qid=&fpid=230&fpth=&path=&search=evident.io or http://docs.evident.io/#sumo. Note you will need to have access to both Evident.io and Sumo Logic instances.

Security and compliance monitoring are no longer a bottleneck in your agile environment. You can start visualizing the data from the Evident Security Platform (ESP) in Sumo Logic in a matter of minutes.

This blog post was written by Hermann Hesse, Senior Solutions Architect at Evident.io. He can be reached at https://www.linkedin.com/in/hermann-hesse-a040281

AWS

November 30, 2016

Blog

AWS – The Biggest Supercomputer in the World

AWS is one of the greatest disruptive forces in the entire enterprise technology market. Who would have thought, when it launched in 2006, that it would kick off perhaps the most transformative shift in the history of the $300B data center industry. Over 25,000 people (or 0.0003% of the world's population) are descending on Vegas this week to learn more about AWS, the biggest supercomputer in the world. As we get ready to eat, drink, network and learn, I wanted to provide some responses to inquiries I often get from prospects, reporters and folks who I meet at various conferences around the country.

What advice would you pass on to anyone deciding to use AWS for public cloud storage?

Understand the IaaS provider's shared security model. In Amazon's case, AWS is responsible for the infrastructure. The customer is responsible for the security of everything that runs on that infrastructure: the applications, the workloads and the data. Make sure any additional services you use on top of that have pursued their own security certifications and attestations to protect data at rest and in motion. This will allay fears and give people comfort in sending data through a SaaS-based service. We find that organizations are making different decisions based on the trust level they have with their partners, and we at Sumo Logic take this very seriously, investing millions to achieve and maintain these competitive differentiators on an ongoing basis. Too many people try to live vicariously through the certifications AWS has and pass this on as adequate. Also, understand the benefits you are hoping to achieve before you start (e.g., better pricing / reduced cost; easier budget approvals (CAPEX vs. OPEX); increased business agility; increased flexibility and choice of programming models, OS, DB and architectures that make sense for the business; increased security; increased workload scalability / elasticity, etc.).

How can we maximize AWS's value?

Crawl, walk, run – it is a learning curve that will take time to master. Adopt increasing levels of services as your teams get up to speed and understand how to leverage APIs and automate everything through code. Compute as code is now a reality. Understand the pain points you are trying to address – this will dictate your approach (e.g., pricing / cost / budget; internal politics; control of data locality; sovereignty; security; compliance, etc.). Turn on logging within AWS. More specifically, activate Amazon CloudWatch to log all your systems, applications and services, and activate AWS CloudTrail to log all API actions. This will provide visibility into all user actions on AWS (a sketch of a search against that CloudTrail data appears below). The lack of visibility into cloud operations and controls stands as the largest security issue we see.

What cautions might there be in terms of how one might end up paying more than one should, or not really getting full value out of this type of storage?

Understand that not all data is created equal… in terms of importance, frequency of access, life expectancy of the data, retention requirements, and search performance. Compare operational data (high importance, high frequency of access, short life expectancy, high search performance requirements) to audit data (medium importance, lower frequency of access, longer life expectancy / data retention requirements, low performance requirements). Align your storage needs to the value and urgency of the data that you are logging (S3, S3 Infrequent Access, Glacier, EBS, etc.). Look for solutions and tools that are cloud native, so you can avoid unnecessary data exfiltration costs.
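As referenced above, once CloudTrail is on, a search along the following lines gives you a who-did-what view of user actions. The source category is an assumption for illustration; the JSON field names are standard CloudTrail fields:

_sourceCategory=aws/cloudtrail
| json field=_raw "eventName" as event_name nodrop
| json field=_raw "eventSource" as service nodrop
| json field=_raw "userIdentity.userName" as user nodrop
| count by user, service, event_name
| sort by _count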
10 years ago, no one was virtualizing mission-critical workloads because of security and compliance concerns… but we ended up there anyway. It is exactly the same with the cloud. And in this new world, speed and time to market are everything. Organizations are looking to be more flexible, more agile, and to capitalize on business opportunities, and how you approach security is different. And to support the rapid pace of delivery of these digital initiatives – weekly, even daily – these companies are leveraging modern, advanced IT infrastructures like AWS and Sumo Logic. In this new world, we at Sumo Logic have a tremendous opportunity to help operations and security professionals get the visibility they need as those workloads are moved out to the cloud. We help them become cloud enablers, helping drive the business forward rather than being naysayers. Visibility is everything! Come stop by our booth – #604 – and say hi!

Blog

Advanced Security Analytics for AWS

Every company – if it is going to remain relevant – is going through some form of digital transformation today, and software is at the heart of this transformation. According to a report by the Center for Digital Business Transformation, digital disruption will displace approximately 40% of incumbent companies within the next 5 years. Don't believe it? According to Forrester Research, between 1973 and 1983, 35% of the top 20 F1000 companies were new. Jump forward 20 years, and this number increases to 70%. According to predictions from IDC's recent FutureScape for Digital Transformation, two-thirds of Global 2000 companies will have digital transformation at the center of their corporate strategy by next year, and by 2020, 50% of the Global 2000 will see the majority of their business depend on their ability to create digitally enhanced products, services, and experiences.

So what does this all mean? Keeping pace with the evolving digital marketplace requires not only increased innovation, but also updated systems, tools, and teams. Accenture and Forrester Research reported in their Digital Transformation in the Age of the Customer study that only 26% of organizations considered themselves fully operationally ready to execute against their digital strategies. In order to deliver on the promise of digital transformation, organizations must also modernize their infrastructure to support the increased speed, scale, and change that comes with it.

We see three characteristics that define these modern applications and digital initiatives:

- They follow a DevOps or DevSecOps culture, where the traditionally siloed walls between the Dev, Ops and Security teams are becoming blurred, or go away completely. This enables speed, flexibility and agility.
- They are generally running on modern infrastructure platforms like AWS (see AWS Modern Apps Report), leveraging APIs and compute as code (see AWS – The Largest Supercomputer in the World).
- The way you approach security needs to change. You need deep visibility and native integrations across the AWS services that are used, you need to understand your risks and security vulnerabilities, you need to connect the dots between the services used, and you need to understand what the users are doing, where they are coming from, what they are changing, what the relationships of those changes are, and how this impacts network flows and security risks.

And it is important to be able to match information contained in your AWS log data – e.g., IP addresses, ports, user IDs – from services like CloudTrail and VPC Flow Logs, with known indicators of compromise (IOCs) that are out there in the wild from premium threat intelligence providers like CrowdStrike. Pulling global threat intelligence into Sumo Logic's Next Gen Cloud Security Analytics for AWS accomplishes the following (see the sketch below for an example):

- Increases the velocity & accuracy of threat detection
- Adds additional context to log data and helps to identify and visualize malicious IP addresses, domain names, ports, email addresses, URLs, and more
- Improves security and operational posture through accelerated time to identify and resolve security threats (IOCs)

Come stop by our booth – #604 – for a demo and say hi!
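For illustration, here is a hedged sketch of the kind of matching described above, applied to VPC Flow Logs. It assumes the default space-delimited flow log format, a source category of aws/vpcflow, and a threat feed lookup path; check all three against the current documentation before using it:

_sourceCategory=aws/vpcflow
| parse "* * * * * * * * * * * * * *" as version, account_id, interface_id, src_ip, dest_ip, src_port, dest_port, protocol, packets, bytes, start_time, end_time, action, log_status
| lookup type, actor, raw, threatlevel from sumo://threat/cs on threat=dest_ip
| where !isNull(type)
| count by dest_ip, actor, threatlevel
| sort by _count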

AWS

November 29, 2016

Blog

Starting Fresh in AWS

Many folks we speak to ask the question: "How do I get started in AWS?" The answer used to be simple. There was a single service for compute, storage, and a few other services in early trials. Fast forward 10+ years and AWS now offers over 50 services. Taking your first steps can be daunting. What follows is my recommended approach if you're starting fresh in the AWS Cloud and don't have a lot of legacy applications and deployments weighing you down. If you do have legacy constraints, check out the companion post to this one.

Do Less To Do More

Everything in AWS operates under a Shared Responsibility Model. The model simply states that for each of the areas required for day-to-day operations (physical, infrastructure, virtualization, operating system, application, and data), someone is responsible. That someone is either you (the user) or AWS (the service provider). (Figure: light grey areas are the responsibility of AWS; black areas are the user's.) The workload shifts towards the service provider as you move away from infrastructure services (like Amazon EC2) towards abstract services (like AWS Lambda). As a user, you want AWS to do more of the work. This directs your service choice as you start to build in the AWS Cloud. You want to pick more and more of the services that fall under the SaaS or abstract – which is a more accurate term when compared to SaaS – category.

Computation

If you need to run your own code as part of your application, you should be making your choice based on doing less work. This means starting with AWS Lambda, a service that runs your functions directly without worrying about the underlying frameworks or operating system. If Lambda doesn't meet your needs, try using a Docker container running on the Amazon EC2 Container Service (ECS). The advantage of this service is that it configures the underlying EC2 instance (the OS, Docker host, scheduling, etc.) and lets you simply worry about the application container. If ECS can't meet your needs, see if you're a fit for AWS Elastic Beanstalk. This is a service that takes care of provisioning, capacity management, and application health for you (a/k/a you do less work). All of this runs on top of Amazon EC2. So do Lambda and ECS, for that matter. If all else fails, it's time to deploy your own instances directly in EC2. The reason you should try to avoid this as much as possible is the simple fact that you're responsible for the management of the operating system, any applications you install, and – as always – your data. This means you need to keep on top of patching your systems, hardening them, and configuring them to suit your needs. The best approach here is to automate as much of this operational work as possible (see our theme of "do less" repeating?). AWS offers a number of services and features to help in this area as well (start with EC2 AMIs, AWS CodeDeploy, AWS CodePipeline, and AWS OpsWorks).

Data Storage

When it comes to storing your data, the same principle applies: do less. Try to store your data in services like Amazon DynamoDB, because the entire underlying infrastructure is abstracted away for you. You get to focus purely on your data. If you just need to store simple file objects, Amazon S3 is the place to be. In concert with Amazon Glacier (long-term storage), you get the simplest version of storage possible. Just add an object (key) to a bucket and you're all set. Under the covers, AWS manages all of the moving parts in order to get you 11 9's of durability. This means that only about 0.000000001% of objects stored in the service are expected to be lost or corrupted. That's a level of quality that you simply cannot get on your own. If you need more control or custom configurations, other services like the Amazon Elastic File System or EBS volumes in EC2 are available. Each of these technologies comes with more operational overhead. That's the price you pay for customization.

Too Many Services

Due to the sheer number of services that AWS provides, it's hard to get a handle on where to start. Now that you know your guiding principle, it might be worth looking at the AWS Application Architecture Center. This section of the AWS site contains a number of simple reference architectures that provide solutions to common problems. Designs for web application hosting, batch processing, media sharing, and others are all available. These designs give you an idea of how these design patterns are applied in AWS and the services you'll need to become familiar with. It's a simple way to find out which services you should start learning first. Pick a design that meets your needs and start learning the services that the design is composed of.

Keep Learning

AWS does a great job of providing a lot of information to help get you up to speed. Their "Getting Started with AWS" page has a few sample projects that you can try under the free tier. Once you start to get your footing, the whitepaper library is a great way to dive deeper on certain topics. In addition, all of the talks from previous Summits (one to two day free events) and AWS re:Invent (the major user conference) are available for viewing on the AWS YouTube channel. There are days and days of content for you to watch. Try to start with the most recent material, as a lot of the functionality has changed over the years. But basic, 101-type talks are usually still accurate.

Dive In

There is so much to learn about AWS that it can be paralyzing. The best advice I can give is to simply dive in. Find a simple problem that you need to solve, do some research, and try it out. There is no better way to learn than doing. Which leads me to my last point: the community around AWS is fantastic. AWS hosts a set of very active forums where you can post a question and usually get an answer very quickly. On top of that, the usual social outlets (Twitter, blogs, etc.) are a great way to engage with others in the community and to find answers to your pressing questions. While this post has provided a glimpse of where to start, be sure to read the official "Getting Started" resources provided by AWS. There's also a great community of training providers (+ the official AWS training) to help get you up and running. Good luck and happy building!

This blog post was contributed by Mark Nunnikhoven, Vice President, Cloud Research at Trend Micro. Mark can be reached at https://ca.linkedin.com/in/marknca.

AWS

November 21, 2016

Blog

Getting Started Under Legacy Constraints in AWS

Getting started in AWS used to be simple. There was a single service for compute, storage, and a few others services in early trials. Fast forward 10+ years and AWS now offers over 50 services. Taking your first steps can be daunting. What follows is my recommended approach if you already have a moderate or large set of existing applications and deployments that you have to deal with and want to migrate to the AWS Cloud. If you’re starting fresh in the AWS Cloud, check out the companion post to this one. Do Less To Do More Everything in AWS operates under a Shared Responsibility Model. The model simple states that for each of the areas required day-to-day operations (physical, infrastructure, virtualization, operation system, application, and data), someone is responsible. That someone is either you (the user) or AWS (the service provider). Light grey options are the responsibility of AWS, Black are the user’s The workload shifts towards the service provider as you move away from infrastructure services (like Amazon EC2) towards abstract services (like AWS Lambda). As a user, you want AWS to do more of the work. This should direct your service choice as you start to build in the AWS Cloud. Ideally, you want to pick more and more of the services that fall under the SaaS or abstract — which is a more accurate term when compared to SaaS — category. But given your existing constraints, that probably isn’t possible. So you need to start where you can see some immediate value, keeping in mind that future project should aim to be “cloud native”. Start Up The Forklift The simplest way to get started in AWS under legacy constraints is to forklift an existing application from your data centre into the AWS Cloud. For most applications, this means you’re going to configure a VPC, deploy a few EC2 instances, an RDS instance (ideally as a Multi-AZ deployment). To make sure you can expand this deployment, leverage a tool like AWS OpsWorks to automate the deployment of the application on to your EC2 instances. This will make it a lot easier to repeat your deployments and to manage your Amazon Machine Images (AMIs). Migrating your data is extremely simple now as well. You’re going to want to use the AWS Database Migration Service to move the data and the database configuration into RDS. Second Stage Now that your application is up and running in the AWS Cloud, it’s time to start taking advantage of some of the key features of AWS. Start exploring the Amazon CloudWatch service to monitor the health of your application. You can set alarms to warn of network bandwidth constraints, CPU usage, and when the storage space on your instances starts to get a little cramped. With monitoring in place, you can now adjust the application’s configuration to support auto scaling and to sit behind a load balancer (either the classic ELB or the new ALB). This is going to provide some much needed resiliency to your application. It’s automated so you’re going to start to realize some of the benefits of AWS and reduce the operational burden on your teams at the same time. These few simple steps have started your team down a sustainable path of building in AWS. Even though these features and services are just the tip of the iceberg, they’ve allowed you to accomplish some very real goals. Namely having a production application working well in the AWS Cloud! On top of that, auto scaling and CloudWatch are great tools to help show teams the value you get by leveraging AWS services. 
Keep Going

With a win under your belt, it's a lot easier to convince teams to build natively in AWS. Applications that are built from the ground up to take advantage of abstract services in AWS (like Amazon Redshift, Amazon SQS, Amazon SNS, AWS Lambda, and others) will let you do more for your users with less effort on your part. Teams with existing constraints usually have a lot of preconceived notions of how to build and deliver IT services. To truly get the most out of AWS, you have to adopt a new approach to building services. Use small wins and a lot of patience to help convince hesitant team members that this is the best way to move forward.

Too Many Services

Due to the sheer number of services that AWS provides, it's hard to get a handle on where to start. Now that you know your guiding principle, it might be worth looking at the AWS Application Architecture Center. This section of the AWS site contains a number of simple reference architectures that provide solutions to common problems. Designs for web application hosting, batch processing, media sharing, and others are all available. These designs give you an idea of how these design patterns are applied in AWS and the services you'll need to become familiar with. It's a simple way to find out which services you should start learning first. Pick a design that meets your needs and start learning the services that the design is composed of.

Keep Learning

AWS does a great job of providing a lot of information to help get you up to speed. Their "Getting Started with AWS" page has a few sample projects that you can try under the free tier. Once you start to get your footing, the whitepaper library is a great way to dive deeper on certain topics. In addition, all of the talks from previous Summits (one- to two-day free events) and AWS re:Invent (the major user conference) are available for viewing on the AWS YouTube channel. There are days and days of content for you to watch. Try to start with the most recent material, as a lot of the functionality has changed over the years, but basic, 101-type talks are usually still accurate.

Dive In

There is so much to learn about AWS that it can be paralyzing. The best advice I can give is to simply dive in. Find a simple problem that you need to solve, do some research, and try it out. There is no better way to learn than by doing. Which leads me to my last point: the community around AWS is fantastic. AWS hosts a set of very active forums where you can post a question and usually get an answer very quickly. On top of that, the usual social outlets (Twitter, blogs, etc.) are a great way to engage with others in the community and to find answers to your pressing questions. While this post has provided a glimpse of where to start, be sure to read the official "Getting Started" resources provided by AWS. There's also a great community of training providers (plus the official AWS training) to help get you up and running. Good luck and happy building! This blog post was contributed by Mark Nunnikhoven, Vice President, Cloud Research at Trend Micro. Mark can be reached at https://ca.linkedin.com/in/marknca.

AWS

November 21, 2016


Blog

Data Analytics and Microsoft Azure

Today plenty of businesses still have real concerns about migrating applications to the cloud. Fears about network security, availability, and potential downtime swirl through the heads of chief decision makers, sometimes paralyzing organizations into standing pat on existing tech, even though it's aging by the minute. Enter Microsoft Azure, the industry leader's solution for going to a partially or totally cloud-based architecture. Below is a detailed look at what Azure is, the power of partnering with Microsoft for a cloud or hybrid cloud solution, and the best way to get full and actionable visibility into your aggregated logs and infrastructure metrics so your organization can react quickly to opportunities.

What is Microsoft Azure?

Microsoft has leveraged its constantly expanding worldwide network of data centers to create Azure, a cloud platform for building, deploying, and managing services and applications, anywhere. Azure lets you add cloud capabilities to your existing network through its platform as a service (PaaS) model, or entrust Microsoft with all of your computing and network needs with Infrastructure as a Service (IaaS). Either option provides secure, reliable access to your cloud-hosted data, built on Microsoft's proven architecture. Azure provides an ever-expanding array of products and services designed to meet all your needs through one convenient, easy-to-manage platform. Below are just some of the capabilities Microsoft offers through Azure, along with tips for determining if the Microsoft cloud is the right choice for your organization.

What can Microsoft Azure do?

Microsoft maintains a growing directory of Azure services, with more being added all the time. All the elements necessary to build a virtual network and deliver services or applications to a global audience are available, including:

Virtual machines. Create Microsoft or Linux virtual machines (VMs) in just minutes from a wide selection of marketplace templates or from your own custom machine images. These cloud-based VMs will host your apps and services as if they resided in your own data center.

SQL databases. Azure offers managed SQL relational databases, from one to an unlimited number, as a service. This saves you overhead and expenses on hardware, software, and the need for in-house expertise.

Azure Active Directory Domain Services. Built on the same proven technology as Windows Active Directory, this service for Azure lets you remotely manage group policy, authentication, and everything else. This makes moving an existing security structure partially or totally to the cloud as easy as a few clicks.

Application services. With Azure it's easier than ever to create and globally deploy applications that are compatible with all popular web and portable platforms. Reliable, scalable cloud access lets you respond quickly to your business's ebb and flow, saving time and money. With the introduction of Azure WebApps to the Azure Marketplace, it's easier than ever to manage production, testing and deployment of web applications that scale as quickly as your business. Prebuilt APIs for popular cloud services like Office 365, Salesforce and more greatly accelerate development.

Visual Studio Team Services. An add-on service available under Azure, Visual Studio Team Services offers a complete application lifecycle management (ALM) solution in the Microsoft cloud. Developers can share and track code changes, perform load testing, and deliver applications to production while collaborating in Azure from all over the world.
Visual Studio Team Services simplifies development and delivery for large companies as well as new ones building a service portfolio.

Storage. Count on Microsoft's global infrastructure to provide safe, highly accessible data storage. With massive scalability and an intelligent pricing structure that lets you store infrequently accessed data at a huge savings, building a safe and cost-effective storage plan is simple in Microsoft Azure.

Microsoft continues to expand its offerings in the Azure environment, making it easy to make a la carte choices of the best applications and services for your needs.

Why are people trusting their workloads to Microsoft Azure?

It's been said that the on-premise data center has no future. Like mainframes and dial-up modems before them, self-hosted data centers are becoming obsolete, replaced by increasingly available and affordable cloud solutions. Several important players have emerged in the cloud service sphere, including Amazon Web Services (AWS), perennial computing giant IBM, and Apple's ubiquitous iCloud, which holds the picture memories and song preferences of hundreds of millions of smartphone users, among other data. With so many options, why are companies like 3M, BMW, and GE moving workloads to Microsoft Azure? Just some of the reasons:

Flexibility. With Microsoft Azure you can spin up new services and geometrically scale your data storage capabilities on the fly. Compare this to a static data center, which would require new hardware and OS purchasing, provisioning, and deployment before additional power could be brought to bear against your IT challenges. This modern flexibility makes Azure a tempting solution for organizations of any size.

Cost. Azure solutions don't just make it faster and easier to add and scale infrastructure, they make it cheaper. Physical servers and infrastructure devices like routers, load balancers and more quickly add up to thousands or even hundreds of thousands of dollars. Then there's the IT expertise required to run this equipment, which amounts to major payroll overhead. By leveraging Microsoft's massive infrastructure and expertise, Azure can trim your annual IT budget by head-turning percentages.

Applications. With a la carte service offerings like Visual Studio Team Services, Visual Studio Application Insights, and Azure's scalable, on-demand storage for both frequently accessed and 'cold' data, Microsoft makes developing and testing mission-critical apps a snap. Move an application from test to production mode on the fly across a globally distributed network. Microsoft also offers substantial licensing discounts for customers migrating their existing apps to Azure, which represents even more opportunity for savings.

Disaster recovery. Sometimes the unthinkable becomes the very immediate reality. Another advantage of Microsoft Azure lies in its high-speed and geographically decentralized infrastructure, which creates limitless options for disaster recovery plans. Ensure that your critical applications and data can run from redundant sites during recovery periods that last minutes or hours instead of days. Lost time is lost business, and with Azure you can guarantee continuous service delivery even when disaster strikes.

The combination of Microsoft's vast infrastructure, constant application and services development, and powerful presence in the global IT marketplace has made Microsoft Azure solutions the choice of two-thirds of the world's Fortune 500 companies.
But the infinite scalability of Azure can make it just as right for your small personal business.

Logging capabilities within Microsoft Azure

The secret gold mine of any infrastructure and service solution is ongoing operational and security visibility, and ultimately this comes down to extracting critical log and infrastructure metrics from the application and underlying stack. The lack of this visibility is like flying a plane blind: no one does it. Azure comes with integrated health monitoring and alert capabilities so you can know in an instant if performance issues or outages are impacting your business. Set smart alert levels for events from:

Azure diagnostic infrastructure logs. Get current insights into how your cloud network is performing and take action to resolve slowdowns, bottlenecks, or service failures.

Windows IIS logs. View activity on your virtual web servers and respond to traffic patterns or log-in anomalies with the data Azure gathers on IIS 7.

Crash dumps. Even virtual machines can 'blue screen', and other virtual equipment crashes can seriously disrupt your operations. With Microsoft Azure you can record crash dump data and troubleshoot to avoid repeat problems.

Custom error logs. Set Azure alerts to inform you about defined error events. This is especially helpful when hosting private applications that generate internal intelligence about operations, so you can add these errors to the health checklist Azure maintains about your network.

Microsoft Azure gives you the basic tools you need for error logging and monitoring, diagnostics, and troubleshooting to ensure continuous service delivery in your Azure cloud environment.

Gain Full Visibility into Azure with Unified Logs and Metrics

Even with Azure's native logging and analytics tools, the vast amount of data flowing to keep your network and applications operating can be overwhelming. The volume, variety and velocity of cloud data should not be underestimated. With the help of Sumo Logic, a trusted Microsoft partner, management of that data is simple. The Sumo Logic platform unifies logs and metrics from the structured, semi-structured, and unstructured data across your entire Microsoft environment. Machine learning algorithms process vast amounts of log and metrics data, looking for anomalies and deviations from normal patterns of activity and alerting you when appropriate. With Log Reduce, Log Compare and Outlier Detection, you can extract continuous intelligence from your application stack and proactively respond to operational and security issues. The Sumo Logic apps for Microsoft Azure Audit, Microsoft Azure Web Apps, Microsoft Windows Server Active Directory, Microsoft Internet Information Services (IIS), and the popular Windows Performance app make ingesting machine data in real time and rendering it into clear, interactive visualizations simple, giving you a complete picture of your applications and data. Before long the on-premise data center, along with its expensive hardware and hordes of local technicians on the payroll, may be lost to technology's graveyard. But smart, researched investment into cloud capabilities like those provided in Microsoft Azure will make facing tomorrow's bold technology challenges and possibilities relatively painless.
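As a footnote to the logging discussion above, here is a sketch of the kind of Sumo Logic query you could run once IIS logs are flowing in, counting server-side errors by status code. The source category Azure/IIS/Prod and the assumption that the status code sits fourth from the end of each line (the default W3C field order of sc-status, sc-substatus, sc-win32-status, time-taken) are illustrative, not taken from the Azure or Sumo Logic documentation.

_sourceCategory=Azure/IIS/Prod
| parse regex "(?<sc_status>\d{3}) \d+ \d+ \d+$" nodrop
| where sc_status matches "5*"
| count by sc_status
| sort by _count

A panel built on a search like this sits naturally next to the health alerts described above: the Azure alert tells you something is wrong, and the log query shows which status codes (and, with a small extension, which URLs) are responsible.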

Azure

September 19, 2016

Blog

Improving your Security Posture with Trend Micro Deep Security Integration

Enterprises are running their workloads across complex, hybrid infrastructures, and need solutions that provide full-stack, 360-degree visibility to support rapid identification and resolution of security threats. Trend Micro Deep Security offers seamless integration with Sumo Logic's data analytics service to enable rich analysis, visualizations and reporting of critical security and system data. This enables an actionable, single view across all elements in an environment.

I. SOLUTION COMPONENTS FOR INTEGRATION

DEEP SECURITY MANAGER (DSM)
This is the management component of the system and is responsible for sending rules and security settings to the Deep Security Agents. The DSM is controlled using the web-based management console. Using the console, the administrator can define security policies, manage deployed agents, query the status of various managed instances, and so on. The integration with Sumo Logic is done using this interface; no additional component or software is required.

DEEP SECURITY AGENT (DSA)
This component provides all protection functionality. The nature of the protection depends on the rules and security settings that each DSA receives from the Deep Security Manager. Additionally, the DSA sends a regular heartbeat to the DSM, and pushes event logs and other data points about the instance being protected to the DSM.

SUMO LOGIC INSTALLED COLLECTORS AND SOURCES
Sumo Logic Installed Collectors receive data from one or more Sources. Collectors collect raw log data, compress it, encrypt it, and send it to Sumo Logic in real time via HTTPS. The Deep Security solution components forward security events to Installed Collectors with a Syslog Source.

SUMO LOGIC DATA ANALYTICS SERVICE AND WEB UI
The Sumo Logic Web UI is browser-based and provides visibility and analysis of log data and security events sent by the Deep Security platform to the Sumo Logic service. It also provides administration tools for checking system status, managing your deployment, controlling user access and managing Collectors.

SUMO LOGIC APP FOR TREND MICRO DEEP SECURITY
The Sumo Logic App for Trend Micro Deep Security delivers out-of-the-box dashboards, saved searches, and field extraction for each security module in the Deep Security solution, including Anti-malware, Web Reputation, Intrusion Prevention, Host-based Firewall and File Integrity Monitoring.

II. HOW THE DEEP SECURITY INTEGRATED SOLUTION WORKS

Overview
Trend Micro Deep Security software and Deep Security as a Service integrate with Sumo Logic through an Installed Collector and Syslog Source. The Syslog Source operates like a syslog server, listening on the designated port to receive syslog messages from the Trend Micro Deep Security solution. The Installed Collector can be deployed in your environment on a local machine, on a dedicated server, or in the cloud. The Deep Security platform sends system and security event logs to this server, which forwards them securely to the Sumo Logic data analytics service. Figure 1 provides a high-level overview of the integration process.

III. INSTALL DATA COLLECTOR

Install Options
The first thing to consider when you set up the integration is how to collect data from your Deep Security deployment and forward it to Sumo Logic. There are three basic methods available: local host data collection, centralized syslog data collection, and a hosted collector. The Deep Security integration uses an installed, centralized Collector with a Syslog Source.
In this method, an Installed Collector with Syslog Sources collects all relevant data in a centralized location before forwarding it on to Sumo Logic's cloud-based service.

Installed Collector with Syslog Sources
The installation process involves deploying a Sumo Logic Collector in your environment and then adding a Syslog Source to it. A Sumo Logic Installed Collector can be installed on any standard server and used to collect local files, remote files, or to aggregate logs from network services via syslog. You can choose to install a small number of Collectors to minimize maintenance, or you can install many Collectors on many machines to leverage existing configuration management and automation tools like Puppet or Chef. At a minimum you will need one Installed Collector set up for Deep Security. The number of Syslog Sources you need depends on the types of event logs that you are sending to Sumo Logic; you will need one Syslog Source for each type of event. There are two types of events in Deep Security: "System Events" and "Security Events". In the example shown below, we have configured a Sumo Logic Installed Collector with two Syslog Sources using the UDP protocol. In this example setup, the first Syslog Source is listening on UDP port 514 for System Event Log forwarding, and the second is listening on UDP port 1514 for Security module event log forwarding. (A quick way to sanity-check these listeners is sketched just before the comparison table below.)

IV. INTEGRATE WITH SUMO LOGIC

System Event Log Forwarding
The integration of Trend Micro Deep Security for system event forwarding to Sumo Logic is done via the system settings (Administration > System Settings > SIEM) as shown below:

Security Event Log Forwarding
The integration of Trend Micro Deep Security for security event forwarding to Sumo Logic is done via Policy configuration and requires a Syslog Source with the UDP protocol and connection information to be added to the policy. Deep Security allows Policy inheritance, where child policies inherit their settings from their parent Policies. This way you can create a policy tree that begins with a top/base parent policy configured with settings and rules that will apply to all computers. When you have a single Collector installed in your environment to collect logs from Deep Security, it is recommended to set the integration details at the top (root/base) policy as shown below: Additionally, you can configure individual Collectors for each security protection module, or have all Deep Security modules send logs to one Collector, depending on your requirements.

Integration Options for Security Event Logs
There are two integration options available to configure the Deep Security solution to forward security events to Sumo Logic: Relay via Deep Security Manager and Direct Forward.

Relay via Deep Security Manager
This option sends the syslog messages from the Deep Security Manager after events are collected on heartbeats, as shown below:

Direct Forward from Deep Security Agents
This option sends the security events/messages in real time directly from the Agents, as shown below:

Comparison Between the Two Integration Options
When deciding which of these two integration options to use for sending security events to Sumo Logic Installed Collectors, consider your Deep Security deployment (as a Service, AWS or Azure Marketplace AMI/VM, or software), your network topology and design, your available bandwidth, and your Deep Security policy design.
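Before configuring Deep Security to forward anything, it can be worth confirming that the two Syslog Sources from the example setup are actually listening. The following minimal Python sketch sends one test message to each UDP port used above; the Collector hostname is a placeholder, and because UDP is fire-and-forget you confirm arrival by searching for the messages in Sumo Logic.

import socket

COLLECTOR_HOST = "collector.example.internal"  # placeholder for your Installed Collector
TEST_MESSAGES = {
    514: "<134>Deep Security integration test: system event source check",
    1514: "<134>Deep Security integration test: security event source check",
}

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for port, message in TEST_MESSAGES.items():
    # Send a single syslog-style line to each Syslog Source configured above.
    sock.sendto(message.encode("utf-8"), (COLLECTOR_HOST, port))
sock.close()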
Returning to the choice between Relay and Direct Forward, the table below compares the two options to make the decision easier:

V. ANALYZE EVENT LOGS

Once the install and integration steps are done, you are almost set to analyze Deep Security event data in Sumo Logic. Log into the Sumo Logic console, jump down to the Preview tab section, and select "Install" under Trend Micro – Deep Security. Once you define the _sourceCategory, you are set to run searches, identify anomalies and correlate events across your protected workloads. You can also leverage powerful out-of-the-box dashboards to unify, enrich and visualize security-related information across your entire physical, virtual and cloud infrastructure.

Sumo Logic Dashboards
The Sumo Logic dashboards are a powerful visualization tool to help accelerate the time to identify anomalies and indicators of compromise (IOC). The saved searches powering these dashboards can also be leveraged for forensic investigations and to reduce the time it takes for root cause analysis and remediation. The uses for dashboards are nearly endless. Perhaps your IT security group wants to keep an eye on who is installing virtual machines: you can edit, create and save the queries you run as a panel in a dashboard, and watch for spikes over time in a line graph. Multiple graphical options and formats are supported. Dashboards bring additional assurance, knowing that unusual activity will be displayed in real time in an easy-to-digest graphical format. The data that matters the most to you is even easier to track.

How to Learn More on Security
For additional learning on Trend Micro Deep Security, please visit their site. To watch a video from Infor's CISO, Jim Hoover, on how to securely scale teams, manage AWS workloads and address budget challenges, please watch here. *A special thanks to Saif Chaudhry, Principal Architect at Trend Micro, and Dwayne Hoover, Sr. Sales Engineering Manager at Sumo Logic, for making this integration and App a reality!

August 10, 2016

Blog

Visualize and Analyze Your Auth0 Users with Sumo Logic - A Tutorial

Gain a better understanding of your users by visualizing and analyzing your Auth0 event logs with the Sumo Logic extension. Auth0 is a cloud-based, extensible identity provider for applications. The Sumo Logic extension for Auth0 makes it easy to analyze and visualize your Auth0 event logs and provides insight into security and operational issues. In this tutorial, we are going to install the Sumo Logic extension and explain how the dashboards we've created can help you quickly get a snapshot of how users are interacting with your application. To get started, you will need an Auth0 account and a Sumo Logic account. Both services offer generous free tiers to get you started. Sign up for Auth0 here, and for Sumo Logic you can create an account here. You can follow the step-by-step tutorial below or watch our video tutorial to learn how and why combining Auth0 and Sumo Logic will be beneficial to your app.

Watch the Auth0 and Sumo Logic integration video

Benefits of Sumo Logic for Auth0

Before going through the process of setting up the extension, you may be asking yourself: why would I even want to do this? What are the benefits? Using Auth0 as your identity provider allows you to capture a lot of data when users attempt to authenticate with your application. A lot of this data is stored in log files and easily forgotten about. Having this data visualized allows you to stay on top of what is happening in your applications. Sumo Logic makes it easy to see the latest failed logins, find and alert on error messages, create charts to visualize trends, or even do complex statistical analysis on your data. Here are some of the log types that can be collected:

Logins, both successes and failures
Token exchanges, both successes and failures
Login failure reasons
Connection errors
User signup events
Password changes
Rate limiting events

Configuring Sumo Logic to Receive Auth0 Logs

To install the Sumo Logic extension, log in to your Sumo Logic account and open up the Setup Wizard from the Manage top-level menu. On the next screen, you will want to select the Setup Streaming Data option. For the data type, we will select Your Custom App. Finally, select HTTP Source as the method for collecting the data logs. The last section will have you name the source category as well as select a time zone in the event one is not provided. With the configuration complete, the next screen will display the HTTP endpoint to be used for transmitting our logs. Copy the HTTP Source URL and click the Continue button to complete the setup wizard. Next, we'll install the Sumo Logic extension from our Auth0 management dashboard.

Installing the Sumo Logic Extension within Auth0

Installing the Sumo Logic extension is a fairly straightforward process. We only need the HTTP Source URL which we got when we ran through the Setup Wizard. Let's look at the process for installing the Sumo Logic extension. Log into your Auth0 management dashboard and navigate to the Extensions tab. Scroll to find the extension titled Auth0 Logs to Sumo Logic and select it. A modal dialog will open with a variety of configuration options. We can leave all the default options enabled; we'll just need to update the SUMOLOGIC URL with the HTTP Source URL we copied earlier. Paste it here and hit save. By default, this job will run every five minutes. After five minutes have gone by, let's check our extension and make sure that it ran properly. To do this, we can simply click into our Auth0 Logs to Sumo Logic extension and we will see the Cron job listed.
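If you would rather not wait for the extension's first run to find out whether the HTTP Source URL was copied correctly, you can post a test event to it directly. This is a minimal Python sketch; the URL is a placeholder for the one from the Setup Wizard, and the event body is a made-up example rather than a real Auth0 log record.

import json
import requests

# Placeholder: paste the HTTP Source URL copied from the Sumo Logic Setup Wizard.
SUMO_HTTP_SOURCE_URL = "https://collectors.sumologic.com/receiver/v1/http/XXXXXXXX"

test_event = {
    "date": "2016-08-09T12:00:00.000Z",
    "type": "test",
    "description": "Manual test event posted before enabling the Auth0 extension",
}

# Sumo Logic HTTP Sources accept the raw request body as a log line.
response = requests.post(SUMO_HTTP_SOURCE_URL, data=json.dumps(test_event))
response.raise_for_status()
print("Posted test event, HTTP status:", response.status_code)

Once the test event shows up under the source category you named in the wizard, you can be confident the extension's own uploads will land in the same place.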
On that extension page, we can see when the job is scheduled to run again, the result of the last run, and other information. We can additionally click on the job name to see an in-depth history. Now that we have our Sumo Logic extension successfully installed and sending data, let's go ahead and set up our dashboards in Sumo Logic so we can start making sense of the data.

Installing the Auth0 Dashboards in Sumo Logic

To install the Auth0 dashboards in Sumo Logic, head over to your Sumo Logic dashboard. From here, select Library from the top-level menu. Next, select the last tab, titled Preview, and you will see the Auth0 application at the very top. Note that at the present time the Auth0 app is in a Preview state; in the future it may be located in the Apps section. With the Auth0 app selected, click the Install button to configure and set up the app. Here, all you will need to select is the source category, which will be the name you gave to the HTTP Source when we configured it earlier. You don't have to remember the name, as you will select the source from a dropdown list. We can leave all the other settings at their default values and just click the Install button to finish installing the app. To make sure the app is successfully installed, click on Library from your top-level menu and select the tab titled Personal. You should see a new folder titled Auth0, and if you select it, you'll see the two dashboards and all the predefined queries you can run. In the next section, we'll take a look at the two dashboards Auth0 has created for us.

Learning the Auth0 Dashboards

We have created two different dashboards to better help you visualize and analyze the log data. The Overview dashboard allows you to visualize general login data, while the Connections and Clients dashboard focuses primarily on showing you how and from where your users are logging in. Let's take a deeper look at each of the dashboards.

1. Overview Dashboard

The Overview dashboard provides a visual summary of login activity for your application. This dashboard is useful to quickly get a pulse on popular users, login success and fail rates, MFA usage, and the like.

Login Event by Location. Performs a geo lookup operation and displays user logins based on IP address on a map of the world for the last 24 hours.
Logins per Hour. Displays a line chart on a timeline showing the number of failed and successful logins per hour, over the last seven days.
Top 10 Users by Successful Login. Shows a table chart with the top ten users with the most successful logins, including user name and count, for the last 24 hours.
Top 10 Users by Failed Login. Provides a table chart with the top ten users with the most failed logins, including user name and count, for the last 24 hours.
Top 10 Source IPs by Failed Login. Displays a table chart with a list of ten source IP addresses causing the most failed logins, including IP and count, for the last 24 hours.
Top 10 User Agents. Displays the top ten most popular user agents in a pie chart from all connections for the last seven days.
Top 10 Operating Systems. Shows the top ten most popular operating systems based on user agent in a pie chart for the last seven days.
Guardian MFA Activity. Displays a line chart on a timeline showing the number of each Guardian MFA event per hour for the last seven days.

2. Connections and Clients Dashboard

The Connections and Clients dashboard visualizes the logs that deal with how users are logging into your applications.
This dashboard contains information such as countries, clients, and the number of times users log in to specific clients.

Logins by Client and Country. Displays a stacked bar chart showing the number of successful logins for the last 24 hours, grouped by both client and country name. This visualizes the relative popularity of each client overall, as well as in a given country.
Logins by Client per Day. Shows a stacked bar chart on a timeline showing the number of successful logins for the last seven days, grouped by client per day. This shows the popularity of each client over the past week, and the relative popularity among clients.
Connection Types per Hour. Provides a line chart on a timeline of the connection types used for the past seven days.
Client Version Usage. Displays a line chart on a timeline of the Auth0 library version being used by all clients for the past seven days. This is useful to detect outdated clients, as well as to track upgrades.
Top 10 Clients. Shows a table chart that lists the ten most popular clients, including client name and count, for the past 24 hours.
Top 10 Recent Errors. Provides a table chart with a list of the ten most frequent errors, including details on client name, connection, description and count, for the last 24 hours. This is useful for discovering and troubleshooting operational issues.

How to Learn More

For additional learning on Auth0, please visit their site. For a video on how to configure the Sumo Logic App for Auth0, please watch here.

August 9, 2016

Blog

DevSecOps in the AWS Cloud

Security teams need to change their approach in order to be successful in the AWS Cloud. DevSecOps in the AWS Cloud is key. Sure, the controls you're using are similar, but their application is very different in a cloud environment. The same goes for how teams interact as they embrace cloud technologies and techniques. The concept of DevOps is quickly becoming DevSecOps, which is leading to strong security practices built directly into the fabric of cloud workloads. When embraced, this shift can result in a lot of positive change.

Teams Level Up

With security built into the fabric of a deployment, the integration of technologies will have a direct impact on your teams. Siloed teams are ineffective. The transition to the cloud (or to a cloud mindset) is a great opportunity to break those silos down. There's a hidden benefit that comes with the shift in team structure as well. Working hand-in-hand with other teams instead of playing a "gatekeeper" role means that your security team is now spending more time helping the next business initiative instead of racing to put out fires all the time. Security is always better when it's not "bolted on", and embracing this approach typically means that the overall noise of false positives and lack of context is greatly reduced. The result is a security team that's no longer combing through log files 24/7 and doing other security drudge work. The shift to a DevSecOps culture lets your teams focus on the tasks they are better at.

Resiliency

The changes continue to pay off as your security team can now start to focus more on information security's ignored little brother, "availability". Information security has three primary goals: confidentiality, integrity, and availability. The easy way to relate these goals is that security works to ensure that only the people you want (confidentiality) get the correct data (integrity) when they need it (availability). And while we spend a lot of time worrying and talking about confidentiality and integrity, we often ignore availability, typically letting other teams address this requirement. Now, with the functionality available in the AWS Cloud, we can actually use aspects of availability to increase our security. Leveraging features like Amazon SNS, AWS Lambda, and Auto Scaling, we can build automated response scenarios. This "continuous response" is one of the first steps to creating self-healing workloads. When you start to automate the security layer in an environment where everything is accessible via an API, some very exciting possibilities open up.

This cloud security blog was written by Mark Nunnikhoven, Vice-President of Cloud Research at Trend Micro. Mark can be reached on LinkedIn at https://ca.linkedin.com/in/marknca or on Twitter @marknca.

Learn More

For additional learning on AWS, please visit these video resources:
1. AWS re:Invent 2015 | (DVO207) Defending Your Workloads Against the Next Zero-Day Attack https://www.youtube.com/watch?v=-HW_F1-fjUU A discussion on how you can increase the security and availability of your deployment in the AWS Cloud.
2. AWS re:Invent 2015 | (DVO206) How to Securely Scale Teams, Workloads, and Budgets https://www.youtube.com/watch?v=Xa5nYcCh5MU A discussion of lessons from a CISO, featuring Jim Hoover, CISO of Infor, along with Matt Yanchyshyn from AWS and Adam Boyle from Trend Micro.
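To make "continuous response" a little more concrete, below is a minimal sketch of what one automated step could look like: an AWS Lambda function, written in Python, subscribed to an SNS topic, that isolates a suspect EC2 instance by swapping its security groups. The event shape, the quarantine security group ID, and the choice of quarantining as the response are illustrative assumptions rather than a pattern prescribed by this post.

import json
import boto3

ec2 = boto3.client("ec2")

# Placeholder: a security group with no inbound rules, used to isolate instances.
QUARANTINE_SG_ID = "sg-0123456789abcdef0"

def handler(event, context):
    # Assumes the SNS message is JSON naming the instance to isolate,
    # e.g. {"instanceId": "i-0abc..."} published by an upstream detection rule.
    for record in event["Records"]:
        message = json.loads(record["Sns"]["Message"])
        instance_id = message["instanceId"]

        # Replacing the instance's security groups cuts off normal traffic while
        # leaving the instance running for later forensic analysis.
        ec2.modify_instance_attribute(InstanceId=instance_id, Groups=[QUARANTINE_SG_ID])
        print("Quarantined", instance_id)

The same pattern extends naturally: the function could also snapshot the instance's volumes, tag it for follow-up, or notify the team over another SNS topic.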

AWS

August 3, 2016

Blog

CIS AWS Foundations Benchmark Monitoring with Sumo Logic

The Center for Internet Security (CIS) released version one of the CIS AWS Foundations Benchmark in February this year. It's a fantastic first draft, and represents the minimum security controls that should be implemented in AWS.

4 Sections of the CIS AWS Foundations Benchmark:
Identity and Access Management
Logging
Monitoring
Networking

This post focuses on Monitoring. IMO, it should actually be called Monitoring and Alerting. CIS implemented the Monitoring controls based on CloudWatch Logs (CWL) integration with CloudTrail and CWL Alarms via the Simple Notification Service (SNS). This is fantastic if you already use these services liberally or cannot get funding for third-party solutions, but they aren't needed if you already use appropriate third-party solutions. And of course, although I really dig AWS, there's something to be said for avoiding cloud lock-in, too. While we do use the required services, and have the prerequisites configured already, we are shipping AWS logs to Sumo Logic (Sumo). Thus, I thought, "Can't we just use Sumo to satisfy the Monitoring requirements?" The answer is yes and no. There are sixteen (16) Monitoring controls in total. Fourteen (14) of them can be monitored using Sumo's CloudTrail integration. Let's have a look at the controls:

3.1 Ensure a log metric filter and alarm exist for unauthorized API calls
3.2 Ensure a log metric filter and alarm exist for Management Console sign-in without MFA
3.3 Ensure a log metric filter and alarm exist for usage of "root" account
3.4 Ensure a log metric filter and alarm exist for IAM policy changes
3.5 Ensure a log metric filter and alarm exist for CloudTrail configuration changes
3.6 Ensure a log metric filter and alarm exist for AWS Management Console authorization failures
3.7 Ensure a log metric filter and alarm exist for disabling or scheduled deletion of customer created CMKs
3.8 Ensure a log metric filter and alarm exist for S3 bucket policy changes
3.9 Ensure a log metric filter and alarm exist for AWS Config configuration changes
3.10 Ensure a log metric filter and alarm exist for security group changes
3.11 Ensure a log metric filter and alarm exist for changes to Network Access Control Lists (NACL)
3.12 Ensure a log metric filter and alarm exist for changes to network gateways
3.13 Ensure a log metric filter and alarm exist for route table changes
3.14 Ensure a log metric filter and alarm exist for VPC changes
3.15 Ensure security contact information is registered
3.16 Ensure appropriate subscribers to each SNS topic

Security contact information (3.15) has to be audited via the management console, and SNS subscribers (3.16) are not applicable for our configuration. Once we have the monitoring configured in Sumo Logic, we'll use its Slack and PagerDuty integrations for alerting. Thus, the Monitoring section of the benchmark is really monitoring and alerting. We will cover alerting as Phase Two of our Benchmark project. But first, monitoring 3.1-3.14.

CIS AWS: the finished product, CIS AWS Benchmark dashboards in Sumo Logic.

Although I'm a Sumo Logic novice, this was very simple to accomplish, albeit by standing on the shoulders of giants. The majority of the searches that power the dashboards are derivatives of those used in Sumo's out-of-the-box dashboards (dashboards are not available in Sumo Free). Next are the searches you'll need to configure.
3.1 Detect unauthorized API calls _sourceCategory=[YOUR SOURCE CATEGORY] | parse "\"errorCode\":\"*\"" as error | where error="AccessDenied" or error="UnauthorizedOperation" | count by error 3.2 Detect console login without MFA _sourceCategory=[YOUR SOURCE CATEGORY] | parse "\"sourceIPAddress\":\"*\"" as src_ip nodrop | parse "\"eventName\":\"*\"" as eventName nodrop | parse "\"userName\":\"*\"" as userName nodrop | parse "\"responseElements\":{\"ConsoleLogin\":\"*\"}" as loginResult nodrop | parse "\"MFAUsed\":\"*\"" as mfaUsed nodrop | where eventName="ConsoleLogin" | where mfaUsed<>"Yes" | count by username, src_ip 3.3 Detect Root Account Usage _sourceCategory=[YOUR SOURCE CATEGORY] | parse "\"userIdentity\":{\"type\":\"*\"}" as authData nodrop | parse "\"type\":\"*\"" as loginType nodrop | where loginType="Root" | count by loginType 3.4 Detect IAM Policy Changes _sourceCategory=[YOUR SOURCE CATEGORY] | parse "\"eventName\":\"*\"" as event nodrop | where event matches "Put*Policy" or event matches "Delete*Policy*" or event matches "Attach*Policy" or event matches "Detach*Policy" or event matches "CreatePolicy*" | count by event 3.5 Detect CloudTrail config changes _sourceCategory=[YOUR SOURCE CATEGORY] | parse "\"eventName\":\"*\"" as event nodrop | where event matches "*Trail" or event matches "StartLogging" or event matches "StopLogging" | count by event 3.6 Detect AWS Mgmt Console authorization failures _sourceCategory=[YOUR SOURCE CATEGORY] | parse "\"responseElements\":{\"ConsoleLogin\":\"*\"}" as loginResult nodrop | where eventName="ConsoleLogin" | where errorMessage="Failed authentication" | count by errorMessage 3.7 Detect disabling or scheduled deletion of CMK _sourceCategory=[YOUR SOURCE CATEGORY] | parse "\"eventName\":\"*\"" as event nodrop | where event matches "DisableKey" or event matches "ScheduleKeyDeletion" | count by event 3.8 Detect S3 bucket policy changes _sourceCategory=[YOUR SOURCE CATEGORY] | parse "\"eventName\":\"*\"" as event nodrop | where event matches "*BucketAcl" or event matches "*BucketPolicy" or event matches "*BucketCors" or event matches "*BucketLifecycle" | count by event 3.9 Detect AWS Config config changes _sourceCategory=[YOUR SOURCE CATEGORY] | parse "\"eventName\":\"*\"" as event nodrop | where event matches "StopConfigurationRecorder" or event matches "DeleteDeliveryChannel" or event matches "PutDeliveryChannel" or event matches "PutConfigurationRecorder" | count by event 3.10 Detect Security Group changes _sourceCategory=[YOUR SOURCE CATEGORY] | parse "\"eventName\":\"*\"" as event nodrop | where event matches "CreateSecurityGroup" or event matches "DeleteSecurityGroup" or event matches "RevokeSecurityGroupEgress" or event matches "RevokeSecurityGroupIngress" | count by event 3.11 Detect Network ACL changes _sourceCategory=[YOUR SOURCE CATEGORY] | parse "\"eventName\":\"*\"" as event nodrop | where event matches "CreateNetworkAcl" or event matches "CreateNetworkAclEntry" or event matches "DeleteNetworkAcl" or event matches "DeleteNetworkAclEntry" or event matches "ReplaceNetworkAclEntry" or event matches "ReplaceNetworkAclAssociation" | count by event 3.12 Detect Network Gateway changes _sourceCategory=[YOUR SOURCE CATEGORY] | parse "\"eventName\":\"*\"" as event nodrop | where event matches "CreateCustomerGateway" or event matches "DeleteCustomerGateway" or event matches "AttachInternetGateway" or event matches "CreateInternetGateway" or event matches "DeleteInternetGateway" or event matches "DetachInternetGateway" | count by event 3.13 
Detect Route Table changes _sourceCategory=[YOUR SOURCE CATEGORY] | parse "\"eventName\":\"*\"" as event nodrop | where event matches "CreateRoute" or event matches "CreateRouteTable" or event matches "ReplaceRoute" or event matches "ReplaceRouteTableAssociation" or event matches "DeleteRouteTable" or event matches "DeleteRoute" or event matches "DisassociateRouteTable" | count by event 3.14 Detect VPC changes _sourceCategory=[YOUR SOURCE CATEGORY] | parse "\"eventName\":\"*\"" as event nodrop | where event matches "CreateVpc" or event matches "DeleteVpc" or event matches "ModifyVpcAttribute" or event matches "*VpcPeeringConnection" or event matches "*tachClassicLink" or event matches "*ableVpcClassic" | count by event As mentioned previously, I’m a Sumo Logic novice—there is no doubt these searches can be improved. The searches looking for more than a few events, like S3 bucket policy changes, can take a longer time to run depending on the date/time range chosen. The initial 7-day search we ran took over an hour to provide results, but we haven’t done any tuning or partitioning yet so YMMV. This CIS AWS Foundations Benchmark Monitoring blog was written by expert Joey Peloquin who can be reached on Twitter @jdpeloquin.


Blog

SIEM: Crash and Burn or Evolution? You Decide.

Oftentimes when I am presenting at conferences around the country, people will ask me, "Is SIEM dead?" Such a great question! Has the technology reached its end of life? Has SIEM really crashed and burned? I think the answer to that question is no. SIEM is not dead; it has just evolved.

The evolution of SIEM

SIEMs unfortunately have struggled to keep pace with the security needs of modern enterprises, especially as the volume, variety and velocity of data have grown. As well, SIEMs have struggled to keep pace with the sophistication of modern-day threats. Malware 15 years ago was static and predictable, but today's threats are stealthy and polymorphic. Furthermore, the reality is that few enterprises have the resources to dedicate to the upkeep of SIEM, and the use of SIEM technology to address threat management has become less effective and has waned. Gartner analyst Oliver Rochford famously wrote, "Implementing SIEMs continues to be fraught with difficulties, with failed and stalled deployments common." (1)

In Greek mythology, a phoenix (Greek: φοῖνιξ phoinix; Latin: phoenix, phœnix, fenix) is a long-lived bird that is cyclically regenerated or reborn. Associated with the sun, a phoenix obtains new life by arising from the ashes of its predecessor.

Phoenix rising from the SIEM ashes

The SIEM ashes are omnipresent, and security analytics is emerging as the primary system for detection and response.

Deconstructing SIEM

Although we use the term SIEM to describe this market, SIEM is really made up of two distinct areas. Security Information Management (SIM) deals with the storage, analysis and reporting of log data; it ingests data from host systems, applications, network and security devices. Security Event Management (SEM), on the other hand, processes event data from security devices, network devices, systems and applications in real time; it deals with the monitoring, correlation and notification of security events that are generated across the IT infrastructure and application stack. Folks generally do not distinguish between these two areas anymore and just use "SIEM" to describe the market category. However, it's important to take note of what you are trying to accomplish and which problems you are trying to solve with these solutions.

Why Do We Care About SIEM?

One could easily dismiss these solutions outright, but the security market is huge: $21.4B in 2014 according to our friends at Gartner, and the SIEM piece alone reached $1.6B last year. According to 451 Research, the security market has around 1,500-1,800 vendors broken down into a number of main categories across IAM, EPP, SIEM, SMG, SWG, DLP, Encryption, Cloud Security, etc., and within each of these main categories there are numerous subcategories. And despite the billions of dollars invested, current security and SIEM solutions are struggling to keep the bad guys out. Whether cyber criminals, corporate spies, or others, these bad actors are getting through. The Executive Chairman and former CEO of Cisco Systems famously said, "There are two types of companies, those who have been hacked and those who have no clue." Consider for a moment that the median number of days before a breach is detected exceeds 6 ½ months, and that the percentage of victims notified by external third parties is almost 70% (3). People indeed have no clue! Something different is clearly needed. This is the first in a series of blogs on SIEM and Security Analytics.
Stay tuned next week for our second blog titled “SIEM and Security Analytics: Head to Head.” Additional Resources Find out how Sumo Logic helps deliver advanced security analytics without the pain of SIEM Sign up for a free trial of Sumo Logic. It’s quick and easy. Within just a few clicks you can configure streaming data, and start gaining security insights into your data in seconds. Mark Bloom runs Product Marketing for Compliance & Security at Sumo Logic. You can reach him on LinkedIn or on Twitter @bloom_mark Sources (1) Gartner: Overcoming Common Causes for SIEM Deployment Failures by Oliver Rochford 21Aug2014 (2) Forrester: Evolution of SIEM graph, taken from Security Analytics is the Cornerstone of Modern Detection and Response, December 2015 (3) Mandiant mTrends Reports

Blog

Three reasons to deploy security analytics software in the enterprise

This security analytics blog was written by expert and author Dan Sullivan (@dsapptech), who outlines three use case scenarios for security analytics tools and explains how they can benefit the enterprise.

If there were any doubts about the sophistication of today's cyber threats, the 2014 attacks on Sony Corporation put them to rest. On November 22, 2014, attackers hacked the Sony network and left some employees with compromised computers displaying skulls on their screens, along with threats to expose information stolen from the company. Sony, by all accounts, was the subject of an advanced persistent threat attack using exploits that would have compromised the majority of security access controls. The scope of the attack forced employees to work with pen, paper and fax machines, while others dealt with the repercussions of the release of embarrassing emails. The coverage around the Sony breach may rightly leave many organizations wondering if their networks are sufficiently protected and, of particular interest here, whether security analytics software and tools could help them avoid the fate of Sony. The short answer is yes. Just about any business or organization with a substantial number of devices (including desktops, mobile devices, servers and routers) can benefit from security analytics software. So before deploying a security analytics tool, it helps to understand how such a product will fit within an organization's other security controls and the gaps it will help fill in typical IT security use cases.

Compliance

Compliance is becoming a key driver of security requirements for more businesses. In addition to government and industry regulations, businesses are implementing their own security policies and procedures. To ensure these regulations, policies and procedures are implemented as intended, it is imperative to verify compliance. This is not a trivial endeavor. Consider for a moment how many different security controls may be needed to implement a network security policy that is compliant with various regulations and security standards. For instance, anti-malware systems might scan network traffic while endpoint anti-malware operates on individual devices. Then there are firewalls, which are deployed with various configurations depending on the type of traffic allowed on the sub-network or server hosting the firewall. Identity management systems, Active Directory and LDAP servers, meanwhile, log significant events, such as login failures and changes in authorizations. In addition to these core security controls, an enterprise may have to collect application-specific information from other logs. For example, if a salesperson downloads an unusually large volume of data from the customer relationship management (CRM) system, the organization would want to know. It is important to collect as much useful data as possible to supply the security analytics tool with the raw data it needs to detect events and alert administrators. When companies have a small number of servers and a relatively simple network infrastructure, it may be possible to manually review logs. However, as the number of servers and the complexity of the network grow, it becomes more important to automate log processing. System administrators routinely write shell scripts to process files and filter data.
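To make the point about one-off log scripts concrete, here is the sort of throwaway script the article has in mind, written as a minimal Python sketch that tallies failed SSH logins per source IP from an auth log. The log path and message format are assumptions for a typical Debian/Ubuntu host, and generalizing it to every other log and alerting need is exactly the maintenance burden discussed next.

import re
from collections import Counter

# Assumption: a Debian/Ubuntu-style auth log; other distros use /var/log/secure.
LOG_PATH = "/var/log/auth.log"

# Example line: "Failed password for invalid user admin from 203.0.113.7 port 22 ssh2"
FAILED_LOGIN = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

failures = Counter()
with open(LOG_PATH) as log:
    for line in log:
        match = FAILED_LOGIN.search(line)
        if match:
            failures[match.group(1)] += 1

# Print the ten noisiest source IPs; a production version would also need log
# rotation handling, alert thresholds, and a different parser for every log type.
for ip, count in failures.most_common(10):
    print(ip, count)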
In theory, system administrators should be able to write scripts like this in awk, Perl, Ruby or some other scripting language to collect logs, extract data and generate summaries and alerts. But how much time should system administrators invest in these tasks? If they write a basic script that works for a specific log, it may not easily generalize to other uses. If they want a more generalized script, it will likely take longer to write and thoroughly test. This presents significant opportunity costs for system administrators who could better spend their time on issues more closely linked to business operations. This is not to imply that the functionality provided by these scripts is not important; it is very important, especially when it comes to the kind of data required for compliance. The question is how to most efficiently and reliably collect log data, integrate multiple data sets and derive information that can help admins make decisions about how to proceed in the face of potentially adverse events. Security analytics tools are designed to collect a wide variety of data types, but there is much more to security analytics than copying log files. Data from different applications and servers has to be integrated so organizations can view a unified timeline of events across devices, for example. In addition, these solutions include reporting tools that are designed to help admins focus on the most important data without being overwhelmed with less useful detail. So, in a nutshell, the economic incentive of security analytics vendors is to provide solutions that generalize and relieve customers of the burden of initial development and continued maintenance.

Security event detection and remediation

The term "connecting the dots" is often used in security and intelligence discussions as a metaphor for linking related, but not obviously connected, pieces of information. Security expert Bruce Schneier wrote a succinct post on why this is a poor metaphor: in real life the "dots" and their relation to each other are apparent only in hindsight; security analytics tools do not have mystical powers that allow them to discern forthcoming attacks or to "connect the dots" auto-magically. A better metaphor is "finding needles in a haystack," where needles are significant security events and haystacks are logs, network packets and other data about the state of a network. Security analytics tools, at a minimum, should be able to alert organizations to significant events. These are defined by rules, such as a trigger that alerts the organization to failed login attempts to administrator accounts, or when an FTP job is run on the database server outside of normal export schedules. Single, isolated events often do not tell the whole story. Attacks can entail multiple steps, from sending phishing lures to downloading malware and probing the network. Data on these events could show up in multiple logs over an extended period of time. Consequently, finding correlated events can be very challenging, but it is something security analytics software can help with. It is important to emphasize that security analytics researchers have not perfected methods for detecting correlated events, however. Organizations will almost certainly get false positives and miss some true positives. These tools can help reduce the time and effort required to collect, filter and analyze event data, though. Given the speed at which attacks can occur, any tool that reduces detection and remediation time should be welcomed.
Forensics

In some ways, computer forensics (the discipline of collecting evidence in the aftermath of a crime or other event) is the art of exploiting hindsight. Even in cases where attacks are successful and data is stolen or systems compromised, an enterprise may be able to learn how to block future attacks through forensics. For example, forensic analysis may reveal vulnerabilities in an organization's network or desktop security controls they did not know existed. Security analytics tools are useful for forensic analysis because they collect data from multiple sources and can provide a history of events from before an attack through the post-attack period. For example, an enterprise may be able to determine how an attacker initially penetrated its systems. Was it a drive-by download from a compromised website? Did an executive fall for a spear phishing lure and open a malicious email attachment? Did the attacker use an injection attack against one of its Web applications? If an organization is the victim of a cybercrime, security analytics tools can help mitigate the risk of falling victim to the same type of exploit in the future.

The need for incident response planning

In addition to the use cases outlined above, it is important to emphasize the need for incident response planning. Security analytics may help enterprises identify a breach, but it cannot tell them how to respond; that is the role of an incident response plan. Any organization contemplating a security analytics application should consider how it would use the information the platform provides. Its security practice should include an incident response plan, which is a description of how to assess the scope of a breach and what to do in response to an attack. A response plan typically includes information on how to:

Make a preliminary assessment of the breach;
Communicate details of the breach to appropriate executives, application owners, data owners, etc.;
Isolate compromised devices to limit damage;
Collect forensic data for evidence and post-response analysis;
Perform recovery operations, such as restoring applications and data from backups; and
Document the incident.

Security analytics tools help detect breaches and collect data, but it is important to have a response plan in place prior to detecting incidents. Enterprises do not want to make up their response plan as they are responding to an incident. There is too much potential for error, miscommunication and loss of evidence to risk an ad hoc response to a security breach.

Deploying security analytics software

For organizations that decide to proceed with a security analytics deployment, there are several recommended steps to follow, including:

identifying operations that will benefit from security analytics (e.g. compliance activities);
understanding the specific tasks within these operations, such as Web filtering and traffic inspection;
determining how the security analytics tool will be deployed given their network architectures; and
identifying systems that will provide raw data to the security analytics tool.

These topics will be discussed in further detail in the next article in this series.

Other resources: What is Security Analytics http://searchsecurity.techtarget.com/essentialguide/Security-analytics-The-key-to-reliable-security-data-effective-action https://www.sumologic.com/blog...

Blog

How Companies Can Minimize Their Cloud Security Risk

This cloud security blog was written by Robert Plant, Vice-Chairman, Department of Business Technology at the University of Miami (@drrobertplant).

As enterprises move their applications and data to the cloud, executives are increasingly faced with balancing the benefits of productivity gains against significant concerns around compliance and security. A principal area of concern relates to unsanctioned use of cloud services and applications by employees. Data from Rajiv Gupta, CEO of Skyhigh Networks, indicates that the average company now uses 1,154 distinct cloud services, and the number is growing at over 20% per year. Many organizations are simply unaware of unsanctioned cloud usage, while others acknowledge that the use of such "shadow IT" (technology deployed without oversight by the core enterprise technology group) is inevitable, a side effect of today's decentralized business structures and the need for agile solutions to be deployed quickly.

Most concerning for chief security officers is that this growth is led by employees seeking productivity gains through unsanctioned cloud-based solutions with a wide range of security levels. Skyhigh Networks currently estimates that 15.8% of files on the cloud contain sensitive data, that 28.1% of users have uploaded sensitive data, and that 9.2% of that data is then shared. Employees may, for example, upload a file while overseas to a local cloud file storage provider without checking the terms and conditions of that vendor, who may in fact claim ownership rights to any content. Additionally, the cloud storage provider may not encrypt the data either during transmission or while it is stored in their cloud, further increasing the risk.

Other situations include employees who take a piece of code from a cloud-based open source site and incorporate it into their own program without fully checking the validity of the adopted code. Or someone may adopt a design feature from a site that has the potential to infringe another firm's intellectual property. Or employees may simply discuss technical problems on a cloud-based site for like-minded individuals. While this may seem a great way to increase productivity and find a solution quickly, valuable intellectual property could be lost, or insights on new products could inadvertently be revealed to rivals stalking these sites.

So what can be done? A cloud "lockdown" is practically infeasible. Technical solutions such as blocking certain sites or requiring authentication, certificates and platform-specific vendors will only go so far, as employees have access to personal machines and devices that cannot be monitored and secured. Instead, employers should implement a strategy under which employees can bring new tool and resource ideas from the cloud to the enterprise, which can yield great benefits. But this has to be done within an adoption framework where the tool, product or service is properly vetted from technical and legal perspectives. For example, is the cloud service being used robust? Does it employ sufficient redundancy so that high-value data placed there is always available? From a legal perspective, it is necessary to examine the cloud service to ensure it is within the compliance parameters required by regulators for the industry.

Risk can be mitigated in a number of ways, including deploying monitoring tools that scan cloud access, software downloads and storage. These tools can identify individuals, IP addresses and abnormal trends.
They can also rank risk by site and usage against profiles for cloud vendors. Technical monitoring alone is, however, not sufficient; it needs to be combined with education, evaluation, compliance audits, transparency, accountability and openness of discussion, all positive steps that chief security officers can take to manage cloud adoption and risk.
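As a rough illustration of the kind of monitoring described above, the sketch below scans web proxy log entries for cloud destinations that are not on a sanctioned-services list and tallies who is using them. The allowlist and log fields are hypothetical; real tools (CASBs, proxy analytics and the like) do far more.

```python
from collections import Counter

# Hypothetical allowlist of sanctioned cloud services; a real deployment
# would pull this from a CASB or an internal service catalogue.
SANCTIONED = {"office365.com", "salesforce.com", "box.com"}

def unsanctioned_usage(proxy_events):
    """Count unsanctioned cloud destinations per user from parsed proxy logs.

    Each event is assumed to be a dict with 'user' and 'domain' keys.
    """
    usage = Counter()
    for event in proxy_events:
        domain = event["domain"].lower()
        if not any(domain == s or domain.endswith("." + s) for s in SANCTIONED):
            usage[(event["user"], domain)] += 1
    return usage

if __name__ == "__main__":
    sample = [
        {"user": "alice", "domain": "box.com"},
        {"user": "bob", "domain": "random-file-share.example"},
        {"user": "bob", "domain": "random-file-share.example"},
    ]
    for (user, domain), count in unsanctioned_usage(sample).most_common():
        print(f"{user} -> {domain}: {count} requests (unsanctioned)")
```

A report like this is a starting point for the education and audit steps above, not a substitute for them.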

Blog

Are Users the Achilles' Heel of Security?

Presaging the death of an industry, or a path to user activity monitoring (UAM) enlightenment?

John Chambers, former CEO of Cisco, once said that there are two types of companies: those that have been hacked, and those that don't yet know they have been hacked. Consider for a moment the following statistics:

There were 783 major breaches in 2014 (1)
This represents a 30% increase from 2013 (2)
Median number of days before detection: 205 (3)
Average number of systems accessed: 40
Valid credentials used: 100%
Percentage of victims notified by external entities: 69%

Large enterprises are finally coming to the conclusion that security vendors and their solutions are failing them. Despite the unbelievable growth in enterprise security spend, organizations are not any safer. And security attestations like PCI and HIPAA, while helping with compliance, do not equate to a stronger security posture. Don't believe it? Take a look at the recent announcement from Netflix indicating that it is dumping its anti-virus solution. And because Netflix is a well-known innovator in the tech space, and the first major web firm to openly dump its anti-virus software, others are likely to follow.

Even the federal government is jumping into this security cesspool. In a recent U.S. appellate court decision, the Federal Trade Commission (FTC) was granted authority to regulate corporate cybersecurity. The rationale: the market has failed, and it was necessary for the government to intervene through public policy (i.e. regulation or legislation). Research has indicated that security solutions are rarely successful in detecting newer, more advanced forms of malware, and scans of corporate environments reveal that most enterprises are already infected.

"Enterprises are recognizing that adding more layers to their security infrastructure is not necessarily increasing their security posture," said George Gerchow, Product Management Director, Security and Compliance at Sumo Logic. "Instead of just bolting on more and more layers, companies are looking for better ways to tackle the problem."

While security has gotten better over the years, so too have the bad actors, whether cybercriminals, hacktivists or nation states. Malware-as-a-service has made this way too easy and pervasive. You know the bad guys are going to find ways to penetrate any barrier you put up, regardless of whether you are running physical, virtual or cloud (PVC) infrastructures. So is it all hopeless, or is there a path to enlightenment by looking at this problem through a different lens?

According to a new report from CloudLock, cybercriminals continue to focus their efforts on what is widely considered to be the weakest link in the security chain: the user. According to CloudLock CEO Gil Zimmerman, "Cyber attacks today target your users—not your infrastructure. As technology leaders wake up to this new reality, security programs are being reengineered to focus where true risk lies: with the user. The best defense is to know what typical user behavior looks like – and more importantly, what it doesn't."

User Risks

And the ROI of this approach is huge, because the report, which analyzed user behavior across 10M users, 1B files and 91K cloud applications, found that 75% of the security risk could be attributed to just 1% of the users. And almost 60% of app installs are performed by highly privileged users.
Given these facts, and that cybercriminals always leverage these highly coveted, privileged user accounts during a data breach, understanding user behavior is critical to improving one's security posture. "As more and more organizations deploy modern-day productivity tools like Microsoft Office 365, Google Apps and Salesforce.com, not understanding what users are doing injects unnecessary and oftentimes unacceptable business risk," said Mark Bloom, Product Marketing Director, Security & Compliance at Sumo Logic.

By leveraging activity-monitoring APIs across these applications, it becomes possible to monitor a number of activities that help reduce overall risk. These include:

Visibility into user actions and behaviors
Understanding who is logging into the service, and from where
Investigating changes made by administrators
Failed/valid login attempts
Identifying anomalous activity that might suggest compromised credentials or malicious insider activity
Tokens: information about third-party websites and applications that have been granted access to your systems

This new, emerging field of User Activity Monitoring (UAM), applied to cloud productivity and collaboration applications, can really help to eliminate guesswork, using big data and machine learning algorithms to assess the risk of user activity in near-real time. UAM (sometimes used interchangeably with user behavior analytics, or UBA) employs modeling to establish what normal behavior looks like and can automatically identify anomalies, patterns and deviations that might require additional scrutiny. This helps security and compliance teams automatically identify areas of user risk, respond quickly and take action.

Sumo Logic applications for Office 365, Salesforce, Google Apps and Box bring a new level of visibility and transparency to activities within these SaaS-based services. And once the data is ingested into Sumo Logic, customers can combine their activity logs with logs from other cloud solutions and on-prem infrastructure to create a single monitoring solution for operations, security and compliance across the entire enterprise. Enable cloud productivity without compromise!

Sources:
Identity Theft Resource Center (ITRC) Report
http://www.informationisbeautiful.net/visualizations/worlds-biggest-data-breaches-hacks
Mandiant M-Trends Report (2012-2015)
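To illustrate the baselining idea behind UAM/UBA, here is a minimal sketch: it learns each user's average daily login count and flags days that deviate sharply from that baseline. Real UAM products use far richer features and models; the field names and the three-sigma rule here are purely illustrative assumptions.

```python
import statistics

def build_baseline(daily_counts):
    """daily_counts: {user: [logins_day1, logins_day2, ...]} from historical activity logs."""
    baseline = {}
    for user, counts in daily_counts.items():
        mean = statistics.mean(counts)
        stdev = statistics.pstdev(counts) or 1.0  # avoid zero stdev for flat histories
        baseline[user] = (mean, stdev)
    return baseline

def flag_anomalies(baseline, today_counts, sigma=3.0):
    """Return users whose activity today deviates more than `sigma` standard deviations."""
    anomalies = []
    for user, count in today_counts.items():
        mean, stdev = baseline.get(user, (0.0, 1.0))
        if abs(count - mean) > sigma * stdev:
            anomalies.append((user, count, mean))
    return anomalies

if __name__ == "__main__":
    history = {"alice": [4, 5, 6, 5, 4], "bob": [2, 3, 2, 3, 2]}
    today = {"alice": 5, "bob": 40}  # bob's spike should be flagged
    for user, count, mean in flag_anomalies(build_baseline(history), today):
        print(f"Anomalous activity for {user}: {count} logins today vs. baseline {mean:.1f}")
```

The same pattern extends to other signals surfaced by the activity APIs above, such as failed logins, admin changes or token grants.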

Blog

Introducing the Sumo Logic App for AWS Config

Introducing the Sumo Logic App for AWS Config: Real-Time Cloud Visibility

The best part about an AWS infrastructure is its dynamic and flexible nature: the ability to add, delete and modify resources at any time, allowing you to rapidly meet the needs of the business. Operating and monitoring that dynamic AWS environment on a daily basis, however, is a different story. We all appreciate that dynamic nature, but it presents many operating challenges:

Organizations need an easy way to track changes for auditing and compliance, security investigations and tracking system failures
Operations teams that support the AWS environment need to know what was changed, when it was changed, who made the change and what resources were impacted by the change

Without detailed visibility, operations teams are flying blind. They don't have the information they need to manage their AWS infrastructure and be held accountable. To help you operate, manage and monitor your AWS environment and maximize your investments, we are pleased to announce the availability of the Sumo Logic App for AWS Config. The new app enables operations and security teams to monitor an AWS infrastructure and track what is being modified and its relationship with other objects.

Dashboard View: Sumo Logic App for AWS Config

The Sumo Logic App for AWS Config enables organizations to:

Monitor resources
Generate audit and compliance reports
View resource relationships
Troubleshoot configurations
Discover resource modification trends

The Sumo Logic App for AWS Config is available today from the App Library. If you haven't tried Sumo Logic yet, sign up for our free trial and see how you can get immediate operational visibility into your AWS infrastructure. It's free and you can get up and running in just a few minutes. To learn more about Sumo Logic's continuous intelligence for AWS, please go to www.sumologic.com/aws. I'd also love to hear about how you are using the app or supporting your AWS environment, so please feel free to send feedback directly to mbloom@sumologic.com.

Mark
Product Marketing, Compliance & Security
Sumo Logic
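For readers who want to poke at the underlying data themselves, here is a small sketch (assuming the boto3 SDK and an account with AWS Config recording enabled) that pulls recent configuration history for a single resource and prints when each change was captured. The resource type, resource ID and region are placeholders; the Sumo Logic app itself works from the Config data delivered to it rather than from a script like this.

```python
import boto3
from datetime import datetime, timedelta, timezone

# Assumes AWS credentials are configured and AWS Config is recording in this region.
config = boto3.client("config", region_name="us-east-1")

def recent_changes(resource_type, resource_id, days=7):
    """Return configuration items recorded for a resource over the last `days` days."""
    response = config.get_resource_config_history(
        resourceType=resource_type,          # e.g. "AWS::EC2::SecurityGroup"
        resourceId=resource_id,              # placeholder ID
        earlierTime=datetime.now(timezone.utc) - timedelta(days=days),
        chronologicalOrder="Forward",
    )
    return response.get("configurationItems", [])

if __name__ == "__main__":
    for item in recent_changes("AWS::EC2::SecurityGroup", "sg-0123456789abcdef0"):
        print(item["configurationItemCaptureTime"],
              item["configurationItemStatus"],
              item.get("relatedEvents", []))  # CloudTrail event IDs tied to the change
```

The related CloudTrail events are what let you answer the "who made the change" question alongside the "what" and "when."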

Blog

IT teams: How to remain an organizational asset in this 'digital or die' era

Insights from a former enterprise CIO who is now chief operations officer for a startup

This blog was contributed by our customer friends at Spark Ventures. It was written by Peter Yates (@peteyatesnz), who is the head of operations and platform delivery.

When IT does not break, it's all good, but when things go wrong it's all hands to the pump. IT should not just be about keeping the lights on anymore; while that may have been a strategy in the past, it certainly will not be good enough in this digital era. Organizations may require IT to guide them through periods of significant change or to lead digital strategies and innovation. IT must therefore be an enabler and leader of change, and must ensure it can respond to the current and future needs of the organization by being flexible and agile. So how can IT achieve this? If IT cannot get to grips with this approach, then we may see a proliferation of shadow IT, or IT being bypassed by the organization in favor of advice from outside influences that can help the organization consistently respond to and meet its business objectives. For IT to be an organizational enabler and leader of change, IT should:

1. Define a clear strategy
What does IT stand for and how will it support organizational goals? How will it use the cloud and automation? How can IT support the organization's need to be more digital, in a world where the need to be digital or die is so prevalent? As Forbes (Cloud is the Foundation for Digital Transformation, 2014) has recognized, "Since 2000, 52 per cent of companies in the Fortune 500 have either gone bankrupt, been acquired or ceased to exist". The reason for this, in my view, is that organizations have failed to keep up with the constant rate of change.

2. Be focused
What does IT see as its core business? Where will it add the most value to the organization? By having this clearly defined within IT strategy, as part of supporting a wider organizational strategy, the answers to these questions will help clarify the most suitable technology solutions, in essence creating some guiding principles for making architectural or technology decisions. For example, an internal IT team within an innovation venture (as part of a leading telecommunications company) may decide that managing an email service is not a core service, because there are cloud solutions such as Office 365 or Gmail that can be consumed without the operational overhead of a traditional email service.

Read more: Challenges arise as big data becomes the 'new normal'

3. Get the foundations right
Ensuring the IT basics are done correctly (e.g. monitoring, network and application stability/availability) is the building block for creating credibility and stability. Without this in place, an organization may, for example, have the best apps on the market, but apps that are constantly unavailable and unusable by the organization and its customers. Poor foundations mean that supporting organizational growth will be hard for an IT team to achieve. Getting the foundations right needs to be done in conjunction with setting a clear strategy and being focused on what is core to IT and ultimately the organization.

4. Deliver
If you can't deliver on projects, service levels or advice in general, then you risk losing the trust of the organization and, more than likely, IT and the CIO/CDO will be overlooked for their advice to the executive team.
If you cannot consistently and quickly deliver to the needs of the organization, then you may see a proliferation of shadow IT within the company, again a possible sign that IT is not being agile or responsive enough to meet the needs of the organization. Above all, get the basics right so IT can build on solid technology decisions and solutions that support the organization and its strategies (growth or otherwise). If IT can't deliver in its current guise, it must look at ways to enable this, such as creating a separate innovation team that is not constrained by legacy, as has been shown by Spark New Zealand (Spark Ventures), New Zealand Post, Fletcher Building or Air New Zealand.

Read more: CIO Upfront: Is there such a thing as bad innovation?

5. Stay current and relevant
It is vital that IT stays up to date with industry and technology trends (cloud, IoT, digital, SaaS) and can demonstrate, or at least has a view on, how these can be utilized by the organization both now and in the future. Being relevant and current reduces an organization's need to look elsewhere for advice and technology solutions. Staying current could also mean a review of how IT is structured, a CIO versus a CDO, or a less siloed approach to team structure.

Digital is not one particular "thing"; it is also a change in mindset and a move away from the traditional, toward an organisation's combined use of social media and analytics to drive decisions, particularly around its customers. Being "digital" is also about an organisation's use of the cloud (SaaS, AWS or Azure) and having a mobile presence for its products, services and support. The strategies and subsequent use of social media, analytics, mobility and cloud by any organisation must coexist. For example, it is not useful to have a mobility strategy without the customer analytics behind it, or without the ability for an organisation's customers to tweet or comment on the organisation's application using any device.

If IT focuses on the above five key areas, it can remain relevant as well as being an enabler that helps an organisation achieve its goals and strategies with IT (rather than having to go around it). In this digital era it is not only consumers that consume; organisations are looking at options within this "consumption economy" as a way of focusing on core business and consuming the rest. Some great examples of this are Salesforce (CRM), Zuora (Billing), Remedyforce (Digital Enterprise Management), Box (document storage), Sumo Logic (Data Analytics) and Office 365 and Gmail (Collaboration). Not going digital is really not an option for many organisations, especially if they still want to be loved by their customers and want to remain agile so they can respond to, or even lead, market changes.

Read more: CIO to COO: Lessons from the cloud

A quote from former GE CEO Jack Welch sums up nicely why IT needs to support and/or lead an organisation's change programmes: "If the rate of change on the outside exceeds the rate of change on the inside, the end is near."

Related: The State of the CIO 2015: The digital mindshift

Peter Yates (@peteyatesnz) is head of operations and platform delivery at Spark Ventures (formerly Telecom Digital Ventures). His previous roles included technology services group manager/CIO at Foster Moore and IS infrastructure manager at Auckland Council.

Azure

November 10, 2015

Blog

Sumo Logic Takes Center Stage at PCI Europe Community Meeting

Back on August 19, 2015, we announced that Sumo Logic had joined the Payment Card Industry (PCI) Security Standards Council (SSC) as a participating organization, and is also an active member in the "Daily Log Monitoring" Special Interest Group (SIG). The purpose of the SIG, and the primary reason we joined, is to provide helpful guidance and techniques to organizations on improving daily log monitoring and forensic breach investigations to meet PCI Data Security Standard (DSS) Requirement 10. Organizations face many challenges in dealing with PCI DSS Requirement 10, including (but not limited to) large volumes of log data, distinguishing between what is a security event and what is normal, correlating log data from disparate systems, and meeting the stated frequency of manual log reviews.

It was a great honor when the chair of this SIG, Jake Marcinko, Standards Manager at the PCI SSC, asked us to co-present with him on stage at the PCI European Community Meeting in Nice, France. Over 500 people came from all over Europe: banks, merchants, card brands, Qualified Security Assessors (QSAs), penetration testers, Certified Information Systems Auditors (CISAs) and vendors, for a packed three days of education, networking, discussions and, of course, good food!

To provide some context and background, and part of the "raison d'etre" for this SIG: when looking anecdotally at past data breaches, evidence has often been found in merchant logs. However, the details were extremely difficult to find due to the high volume of logged events. And although log collection and daily reviews are required by the PCI DSS, logs collected from merchants can be huge, with some organizations seeing over 50,000 events per second at the peak of the day. This makes it time consuming and often difficult, if not humanly impossible, to accurately review and monitor those logs to meet the intent of PCI DSS. This is akin to finding the needle in the haystack, where the needle is the security event and the haystack is the corresponding logs and data packets.

According to Mandiant's annual M-Trends Report, the median number of days before a breach is detected is 205. Why is this the case? Because existing security technologies are struggling to keep up with modern-day threats. The fixed rule sets we see across SIEM solutions are great if you know what you are looking for, but what happens when we do not know what to look for, or when we do not even know the right questions to ask?

So what does this all mean? Is there hope, or are we destined to continue along with the dismal status quo? Luckily, new cloud-native, advanced security solutions are emerging that leverage data science to look holistically across our hybrid infrastructure and give us visibility across the entire stack, leveraging machine learning to reduce millions of data streams into human-digestible patterns and security events, and to establish what is normal by baselining and automatically identifying and alerting on anomalies and deviations. It is these continuous insights and this visibility across hybrid workloads that become real opportunities to improve one's security posture and approach compliance with confidence and clarity.

Timelines and Deliverables

The Information Supplement containing the Daily Log Monitoring SIG guidance is expected to be released in Q1 2016.

Blog

Public vs. Private Cloud

Blog

Best Practices for Securely Leveraging the Cloud

Over 20,000 people from all over the world descended on Las Vegas this week for Amazon's completely sold-out AWS re:Invent 2015 show. They came for many reasons: education, networking, great food, music and entertainment. But most importantly, they came because of AWS's leadership and relevancy in this world of software-centric businesses driving continuous innovation and rapid delivery cycles, leveraging modern public cloud infrastructures like AWS.

On the second day of the event, I had the opportunity to sit through an afternoon session titled "If You Build It, They Will Come: Best Practices for Securely Leveraging the Cloud." Security expert and industry thought leader Joan Pepin, who has over 17 years of experience in policy management, security metrics and incident response (as well as being the inventor of SecureWorks' Anomaly Detection Engine), gave the presentation.

There is no doubt that cloud computing is reshaping not only the technology landscape, but also the very way companies think about and execute their innovative processes and practices to enable faster, differentiated and more personalized customer experiences. And a path to operating in the cloud securely and confidently requires a new set of rules and a different way of thinking. This was at the heart of Joan's session: helping security practitioners adapt to this paradigm shift and creating a pathway to securely leveraging the cloud with confidence and clarity.

Securing Your Future

"We are in the middle of a mass extinction. The world we are used to living, working and operating in is going to disappear over the next ten years. It's already well underway. We are seeing the mass extinction of the traditional datacenter, of colocation and of being our own infrastructure providers," said Pepin. I expect a new mantra will be echoing through corporate boardrooms around the globe in the not-too-distant future: "Friends don't let friends build datacenters."

Joan suggests that the future, and how one secures it, is going to be very different from the past and from what most people are doing in the present. She knows this first hand, because she is living it every day, running Sumo Logic's state-of-the-art advanced analytics platform that ingests over 50TB of data and analyzes over 25PB, daily!

Joan passionately states: "The future is upon us. The cloud is the wave of the future: the economics, the scalability, the power of the architecture, security built in from inception. It's inevitable. If we are not prepared to adapt our thinking to this new paradigm, we will be made irrelevant."

There are boxes inside boxes inside boxes, and security people had very little to do with the design of those boxes. Throwing a few firewalls and IDS/IPS devices into the box was how things used to be done. That is not the way to build security into a massively scalable system with ephemeral instances, nor is it a way to make security fractal, so that as you expand your footprint, security goes along with you. In this new paradigm, security has a greater opportunity to be much more involved in the delivery of the service and the design of the architecture, and to take security to a completely different level, so that it is embedded in every layer of the infrastructure and every layer of the application. "Do I really need to see all the blinking lights of the boxes to be secure?
Too many decisions are being made emotionally, not rationally."

Operationally, security organizations need to change their thinking and processes from traditional datacenter-centric models (aka "Flat Earth" thinking) to new, more statistical models. AWS presents this giant amorphous blob of power, with APIs, elasticity, configurability and infrastructure as code. Security is now embedded into all of that automation and goodness. As you expand, as you grow, as you change, the security model stays the same and weaves itself throughout your cloud infrastructure.

"This was my 'world is round' moment," said Pepin. "I have seen the light and will never go back. My CISO friends in more traditional companies are envious of what we have been able to achieve here at Sumo Logic: the ability to ingest, index, encrypt, store and turn the data back around for searching in 30 seconds. This is generations ahead of the market. It is how the cloud of tomorrow works today!"

Joan provided a number of practical and insightful best practices that security professionals should follow when thinking about cloud security:

Less is More: Simplicity of design, APIs, interfaces and data flow all help lead to a secure and scalable system.
Automate: Think of your infrastructure as code; it's a game changer. Test, do rapid prototyping and implement fully automated, API-driven deployment methods (a small sketch of this appears below). Automate a complete stack.
Do the Right Thing: Design in code reuse and centralize configuration information to keep the attack surface to a minimum. Sanitize and encrypt data. Don't trust client-side verification; enforce everything at every layer.
Defense in Depth: Everything. All the time.
Achieve Scale by Running a POD Model.
Use a Best-of-Breed Security Stack: IDS, FIM, log management, host firewall.

To watch Joan's video, please select this link: AWS re:Invent 2015 | (SEC202) Best Practices for Securely Leveraging the Cloud

For more information on Sumo Logic's cloud-native AWS solutions, please visit AWS Integrations for Rapid Time-to-Value.
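To give one concrete flavor of the "automate a complete stack" advice above, here is a minimal sketch, assuming the boto3 SDK and a pre-written CloudFormation template, of deploying a stack through the API rather than by hand. The stack name and template path are placeholders; treat it as an illustration of API-driven deployment, not a prescription from the talk.

```python
import boto3

# Assumes AWS credentials are configured; "stack.yaml" is a placeholder
# CloudFormation template describing the application stack to deploy.
cloudformation = boto3.client("cloudformation", region_name="us-east-1")

def deploy_stack(stack_name, template_path):
    """Create a CloudFormation stack from a local template and wait for completion."""
    with open(template_path) as f:
        template_body = f.read()
    cloudformation.create_stack(
        StackName=stack_name,
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_IAM"],  # needed if the template creates IAM resources
    )
    # Block until the stack finishes creating, so a pipeline can verify the result.
    waiter = cloudformation.get_waiter("stack_create_complete")
    waiter.wait(StackName=stack_name)
    print(f"Stack {stack_name} created")

if __name__ == "__main__":
    deploy_stack("example-app-stack", "stack.yaml")
```

Because the whole stack is described in a template and created through one call, the security controls baked into that template travel with every deployment, which is the "fractal" property described above.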

AWS

October 8, 2015

Blog

Security Analytics in the AWS Cloud – Limiting the Blast Radius

This blog focuses on security event management within the AWS cloud and presents some options for implementing a security analytics solution.

Security Analytics in the AWS Cloud

The basic role of security analytics remains the same in the cloud, but there are a few significant differences. Arguably the biggest is that the effective blast radius of an incident can be far greater in the cloud. "I built my datacenter in 5 minutes" is a great marketing slogan and bumper sticker that AWS has. However, if someone compromises an IAM role with admin privileges, or worse, your root account, they can completely destroy that datacenter in well under 2 minutes. Having an effective strategy to identify, isolate and contain a security incident is paramount in the cloud.

Amazon Web Services prides itself on its security and compliance stature and often states that security is Job Zero. Nonetheless, customers need to be mindful that this is a shared responsibility model. Whilst AWS agrees to provide physically secure hosting facilities, data storage and destruction processes, and a vast array of tools and services to protect your applications and infrastructure, it is still ultimately the customer's responsibility to protect and manage the services they run inside AWS. To name just a few of these responsibilities:

Managing your own firewall rules, including ACLs and Security Groups
Encrypting your data both in transit and at rest (including managing keys)
Configuring and managing IPS and WAF devices
Virus/malware detection
IAM events (logins, roles, etc.)

The list goes on, and with the speed at which AWS releases new products and features, it is important to be able to keep on top of it all. We know AWS and a number of their technology and managed services partners are well aware of this and provide some really useful tools to help manage this problem. We will focus on the following AWS services and then discuss how we can incorporate them into a strategic security analytics solution.

IAM (Identity & Access Management) is an absolute must for anyone who is serious about securing their environment. It provides authenticated and auditable access to all of your resources. Through the use of users, groups, roles and policies you can create fine-grained permissions and rules. It also allows you to federate credentials with an external user repository.

CloudTrail can log all events from IAM and is one of the most important services from a SIEM perspective. CloudTrail is a web service that records all kinds of API calls made within IAM and most other AWS services. It is essential from an auditing perspective and in the event you need to manage a security incident. See this link for a full list of supported services, which also links back to the relevant API reference guide. Additionally, this link provides detailed information about logging IAM events to CloudTrail.

VPC Flow Logs is a fairly recent addition to the AWS inventory but has long been a feature request from the security community. Whilst Security Groups and ACLs have long provided customers with the ability to control access to their VPC, they weren't previously able to see the logs generated. A key part of a SIEM solution is the ability to process "firewall" logs. It's all well and good knowing that an authorized user can access a particular service, but it can also be very useful to know what requests are getting rejected and who is trying to access protected resources. In this respect, Flow Logs now gives customers a much clearer view of the traffic within their VPC.
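As a small illustration of processing these "firewall"-style records, the sketch below parses VPC Flow Log lines (assuming the default version-2, space-separated field order) and summarizes rejected connections by source address and destination port. It is a toy reader; a real pipeline would consume these records via CloudWatch Logs or a log analytics service.

```python
from collections import Counter

# Default VPC Flow Log (version 2) field order, space separated.
FIELDS = ["version", "account_id", "interface_id", "srcaddr", "dstaddr",
          "srcport", "dstport", "protocol", "packets", "bytes",
          "start", "end", "action", "log_status"]

def parse_record(line):
    """Turn one flow log line into a dict; returns None for malformed lines."""
    parts = line.split()
    if len(parts) != len(FIELDS):
        return None
    return dict(zip(FIELDS, parts))

def summarize_rejects(lines):
    """Count REJECTed flows by (source address, destination port)."""
    rejects = Counter()
    for line in lines:
        record = parse_record(line)
        if record and record["action"] == "REJECT":
            rejects[(record["srcaddr"], record["dstport"])] += 1
    return rejects

if __name__ == "__main__":
    sample = [
        "2 123456789012 eni-abc123 203.0.113.12 10.0.0.5 44321 22 6 10 840 1418530010 1418530070 REJECT OK",
        "2 123456789012 eni-abc123 10.0.0.8 10.0.0.5 443 49152 6 20 4200 1418530010 1418530070 ACCEPT OK",
    ]
    for (src, port), count in summarize_rejects(sample).most_common():
        print(f"{count} rejected flows from {src} to port {port}")
```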
Rather conveniently, Flow Logs data is processed by CloudWatch Logs.

CloudWatch Logs is an extension of the CloudWatch monitoring facility and provides the ability to parse system, service and application logs in near real time. There is a filtering syntax that can be used to trigger SNS alerts when certain conditions are met. In the case of applications running on EC2 instances, this requires a log agent to be installed and configured. See this link for how to configure CloudTrail to send events to CloudWatch Logs. This service does somewhat impinge upon the functionality of existing log management products such as Sumo Logic and Splunk; however, as we'll explain in a separate post, there is still a good argument for keeping your third-party tools.

Config is a service that allows you to track and compare infrastructure changes in your environment over time and restore them if necessary. It provides a full inventory of your AWS resources and the facility to snapshot it into CloudFormation templates in S3. It also integrates with CloudTrail, which in turn integrates with CloudWatch Logs, to provide a very useful SIEM function. For example, if a new Security Group gets created with an open access rule from the internet, an alert can be raised. There is quite a bit of functional overlap with CloudTrail itself, but Config can also be very useful from a change management and troubleshooting perspective.

Here are a couple of real world examples that make use of these services.

Example 1

This scenario has a rogue administrator adding an unauthorized user to the admin role inside the IAM section of the AWS Console. If we have configured CloudTrail, then this event will automatically get logged to an S3 bucket. The logs will be in JSON format, and this particular entry would look something like this. An IAM role can be assigned to CloudWatch Logs to allow it to ingest the CloudTrail events, and a filter can be applied to raise an alarm for this condition. You can use SNS to initiate a number of possible actions from this. Some other possible events (there are many) that we may want to consider monitoring are:

AuthorizeSecurityGroupIngress - someone adding a rule to a security group
AssociateRouteTable - someone making a routing change
StopLogging - someone stopping CloudTrail from recording events
Unauthorized* - any event that returns a permission error
"type":"Root" - any activity at all performed under the root account

Example 2

This is a very high level overview of how VPC Flow Logs, and essentially all the services we've outlined in this post, can be integrated with a third-party log management tool. In my opinion, whilst CloudWatch Logs does provide some very useful and low cost monitoring capabilities, there are quite a few dedicated tools provided by AWS technology partners that offer a number of advantages in terms of configurability, functionality and usability. Sumo Logic appears to be one of the first vendors to integrate with VPC Flow Logs and is very easy to get up and running with.

As always, thank you for taking the time to read this post. I'd also like to thank David Kaplan, Security and Compliance Principal at AWS Australia, for his valuable input to this piece.
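To make Example 1 a little more concrete, here is a rough sketch of scanning delivered CloudTrail records for the kinds of events listed above (watched API calls, permission errors and root activity). The file layout and record fields follow the public CloudTrail JSON structure, but the watch list and matching rules are illustrative assumptions; in practice you would express these as CloudWatch Logs metric filters or as queries in your log analytics tool.

```python
import gzip
import json

# Event names called out in this post as worth monitoring.
WATCHED_EVENTS = {"AddUserToGroup", "AuthorizeSecurityGroupIngress",
                  "AssociateRouteTable", "StopLogging"}

def suspicious_records(cloudtrail_file):
    """Yield CloudTrail records matching the watch list, permission errors or root activity."""
    with gzip.open(cloudtrail_file, "rt") as f:
        records = json.load(f).get("Records", [])
    for record in records:
        name = record.get("eventName", "")
        error = record.get("errorCode", "")
        identity = record.get("userIdentity", {})
        if (name in WATCHED_EVENTS
                or error.startswith("Unauthorized") or error == "AccessDenied"
                or identity.get("type") == "Root"):
            yield {
                "time": record.get("eventTime"),
                "event": name,
                "user": identity.get("arn"),
                "error": error or None,
            }

if __name__ == "__main__":
    # CloudTrail delivers gzipped JSON objects to S3; this path is a placeholder.
    for hit in suspicious_records("123456789012_CloudTrail_us-east-1_example.json.gz"):
        print(hit)
```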
This blog was contributed by our partner friends at Cloudten. It was written by Richard Tomkinson, Principal Infrastructure Architect. Cloudten Industries © is an Australian cloud practice and a recognized consulting partner of AWS. They specialize in the design, delivery and support of secure cloud-based solutions.

For more information on Sumo Logic's cloud-native AWS solutions, please visit AWS Integrations for Rapid Time-to-Value.

AWS

September 23, 2015

Blog

Has SIEM Lost its Magic?

Blog

Why Twitter Chose Sumo Logic to Address PCI Compliance

Blog

The Digital Universe and PCI Compliance – A Customer Story

According to IDC, the digital universe is growing at 40% a year, and will continue to grow well into the next decade. It is estimated that by 2020, the digital universe will contain nearly as many digital bits as there are stars in the universe. To put this into perspective, the data we create and copy annually will reach 44 zettabytes, or 44 trillion gigabytes. In 2014 alone, the digital universe equaled 1.7 megabytes a minute for every person on earth. That is a lot of data!

As a new employee at Sumo Logic, I've had the opportunity to come in contact with a lot of people in my first few weeks: employees, customers and partners. One interaction with a global, multi-billion dollar travel powerhouse really stood out for me, as they are a great example of an organization grappling with massive growth in an ever-expanding digital universe.

The Business

The travel company provides a world-class product-bookings engine and delivers fully customized shopping experiences that build brand loyalty and drive incremental revenue. The company is also responsible for safeguarding the personal data and payment information of millions of customers. "Customer security and being compliant with PCI DSS is essential to our business" was echoed many times.

The Challenge

As a result of phenomenal growth in their business, the volume of ecommerce transactions and logs produced was skyrocketing, more than doubling from the previous year. The company was processing over 5 billion web requests per month, generating on average close to 50GB of daily log data across 250 production AWS EC2 instances. It became clear that an effective solution was required to enable the company to handle this volume of data more effectively. Existing manual processes using Syslog and other monitoring tools were not manageable, searchable or scalable, and it was very difficult to extract actionable intelligence. Additionally, this effort was extremely time intensive and diverted limited resources from more important areas of the business: driving innovation and competitive differentiation.

PCI Compliance: The ability to track and monitor all access to network resources and cardholder data (PCI DSS Requirement 10) was of particular importance. This is not surprising, as logging mechanisms and the ability to track user activities are critical in minimizing the impact of a data compromise. The presence of, and access to, log data across the AWS infrastructure is critical to provide the necessary tracking, alerting and analysis when something goes wrong.

The Solution

While multiple solutions were considered, including Splunk, Loggly and the ELK stack, the company selected Sumo Logic for its strong time to value, feature set and low management overhead. Additionally, the security attestations, including PCI DSS 3.0 Service Provider Level 1, as well as data encryption controls for data at rest and in motion, were levels above what other companies provided. Being able to focus on extracting value from the service, and not worry about the execution environment (which is handled by Sumo Logic), was extremely valuable.

The Results

The most important immediate benefits for the client included being able to reduce the time, cost and complexity of their PCI audit.
They were also able to leverage the platform for IT Ops and Development use cases, reducing mean time to investigate (MTTI) and mean time to resolve (MTTR) by over 75%.

As I was wrapping up our conversation, I asked if they had any "aha moments" in leveraging the Sumo Logic platform and dealing with this exponential growth in their digital universe. Their response: "I've been really impressed with how fast the team has been able to identify and resolve problems. Sumo Logic's solution has helped us change the playing field in ways that were just not possible before."

To learn more about Sumo Logic's compliance & security solutions for AWS, please visit: http://www.sumologic.com/aws-trial

To try Sumo Logic for free, please visit: http://www.sumologic.com/pricing